28 episodes

The Retort AI Podcast
Thomas Krendl Gilbert and Nathan Lambert

    • Technology
    • 4.8 • 9 Ratings

Distilling the major events and challenges in the world of artificial intelligence and machine learning, from Thomas Krendl Gilbert and Nathan Lambert.

    ChatGPT talks: diamond of the season or quite the scandal?

    Tom and Nate discuss two major OpenAI happenings from the last week. The popular one is the chat assistant and what it reveals about OpenAI's worldview. We pair this with a discussion of OpenAI's new Model Spec, which details their RLHF goals: https://cdn.openai.com/spec/model-spec-2024-05-08.html
    This is a monumental week for AI. The product transition is complete; we can't just be researchers anymore.
    00:00 Guess the Donkey Kong Character
    00:50 OpenAI's New AI Girlfriend
    07:08 OpenAI's Business Model and Responsible AI
    08:45 GPT-2 Chatbot Thing and OpenAI's Weirdness
    12:48 OpenAI and the Mystery Box
    19:10 The Blurring Boundaries of Intimacy and Technology
    22:05 Rousseau's Discourse on Inequality and the Impact of Technology
    26:16 OpenAI's Model Spec and Its Objectives
    30:10 The Unintelligibility of "Benefiting Humanity"
    37:01 The Chain of Command and the Paradox of AI Love
    45:46 The Form and Content of OpenAI's Model Spec
    48:51 The Future of AI and Societal Disruptions

    • 51 min
    Three pillars of AI power

    Tom and Nate discuss the shifting power landscape in AI. They try to discern what is special about Silicon Valley's grasp on the ecosystem and what other types of power (e.g. those in New York and Washington DC) will do to mobilize their influence. 
    Here's the one Tweet we referenced on the FAccT community: https://twitter.com/KLdivergence/status/1653843497932267520
    00:00: Introduction and Cryptozoologists
    02:00: DC and the National AI Research Resource (NAIR)
    05:34: The Three Legs of the AI World: Silicon Valley, New York, and DC
    11:00: The AI Safety vs. Ethics Debate
    13:42: The Rise of the Third Entity: The Government's Role in AI
    19:42: New York's Influence and the Power of Narrative
    29:36: Silicon Valley's Insularity and the Need for Regulation
    36:50: The Amazon Antitrust Paradox and the Shifting Landscape
    48:20: The Energy Conundrum and the Need for Policy Solutions
    56:34: Conclusion: Finding Common Ground and Building a Better Future for AI

    • 56 min
    Llama 3: Can't Compete with a Capuchin

    Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? We cover what it means to be successful at the highest level this week. 
    Links:
    - Dwarkesh interview with Zuck: https://www.dwarkeshpatel.com/p/mark-zuckerberg
    - Capuchin monkey: https://en.wikipedia.org/wiki/Capuchin_monkey
    00:00 Introductions & advice from a wolf
    00:45 Llama 3
    07:15 Resources and investment required for large language models
    14:10 What it means to be a leader in the rapidly evolving AI landscape
    22:07 How much of AI progress is driven by stories vs resources
    29:41 Critiquing the concept of Artificial General Intelligence (AGI)
    38:10 Misappropriation of the term AGI by tech leaders
    42:09 The future of open models and AI development

    • 46 min
    Into the AI Trough of Disillusionment

    Tom and Nate catch up after a few weeks off the pod. We discuss what it means for the pace and size of open model releases to keep growing. In some ways, this disillusionment is a great way to zoom out to the big picture. These models are coming. These models are getting cheaper. We need to think about risks and infrastructure more than open vs. closed.
    00:00 Introduction
    01:16 Recent developments in open model releases
    04:21 Tom's experience viewing the total solar eclipse
    09:38 The Three-Body Problem book and Netflix
    14:06 The Gartner Hype Cycle
    22:51 Infrastructure constraints on scaling AI
    28:47 Metaphors and narratives around AI risk
    34:43 Rethinking AI risk as public health problems
    37:37 The "one-way door" nature of releasing open model weights
    44:04 The relationship between the AI ecosystem and the models
    48:24 Wrapping up the discussion in the "trough of disillusionment"
    We've got some links for you again:
    - Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
    - MSFT supercomputer: https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
    - Safety is about systems: https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
    - Earth Day history: https://www.earthday.org/history/
    - For our loyal listeners: http://tudorsbiscuitworld.com/

    • 51 min
    AI's Eras Tour: Performance, Trust, and Legitimacy

    Tom and Nate catch up on the ridiculousness of Nvidia GTC, the lack of trust in AI, and some important taxonomies and politics around governing AI. Safety institutes, reward model benchmarks, Nathan's bad joke delivery, and all the normal good stuff in this episode! Yes, we're also sick of the Taylor Swift jokes, but they get the clicks.
    The Taylor moment: https://twitter.com/DrJimFan/status/1769817948930072930
    00:00 Intros and discussion on NVIDIA's influence in AI and the Bay Area
    09:08 Mustafa Suleyman's new role and discussion on AI safety
    11:31 The shift from performance to trust in AI evaluation
    17:31 The role of government agencies in AI policy and regulation
    24:07 The role of accreditation in establishing legitimacy and trust
    32:11 Grok's open source release and its impact on the AI community
    39:34 Responsibility and accountability in AI and social media platforms

    • 46 min
    Claude 3: Is Nathan too bought into the hype?

    Tom and Nate sit down to discuss Claude 3 and some updates on what it means to be open. Not surprisingly, we get into debating some different views. We cover Dune 2's impact on AI and have a brief giveaway at the end. Cheers!
    More at retortai.com. Contact us at mail at domain.
    Some topics:
    - The pace of progress in AI and whether it feels meaningful or like "progress fatigue" to different groups
    - The role of hype and "vibes" in driving interest and investment in new AI models 
    - Whether the value being created by large language models is actually just being concentrated in a few big tech companies
    - The debate around whether open source AI is feasible given the massive compute requirements
    - The limitations of "open letters" and events with Chatham House rules as forms of politics and accountability around AI
    - The analogy between the AI arms race and historical arms races like the dreadnought naval arms race
    - The role of narratives, pop culture, and "priesthoods" in shaping public understanding of AI

    Chapters & transcript partially created with https://github.com/FanaHOVA/smol-podcaster.
    00:00 Introduction and the spirit of open source
    04:32 Historical parallels of technology arms races
    10:26 The practical use of language models and their impact on society
    22:21 The role and potential of open source in AI development
    28:05 The challenges of achieving coordination and scale in open AI development
    34:18 Pop culture's influence on the AI conversation, specifically through "Dune"

    • 43 min

Customer Reviews

4.8 out of 5
9 Ratings

(-&(-: ,

Subtle Fun, Engaging, and Inform

Tom and Nathan have a good connection going on that brings creative cultural moment to describe the mania of ai. It’s dry humor and I love it! They sound so level, take ai seriously, but no mess. Thank you for this!

Vikram Sreekanti ,

Sizzling insights on the state of AI

Top-notch takes on the state of AI and insights into what’s actually going on with the LLM craze
