251 episodes


LessWrong Curated Podcast
LessWrong

    • Technology
    • 5.0 • 7 Ratings

Audio version of the posts shared in the LessWrong Curated newsletter.

    [HUMAN VOICE] "How could I have thought that faster?" by mesaoptimizer

    [HUMAN VOICE] "How could I have thought that faster?" by mesaoptimizer

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    This is a linkpost for https://twitter.com/ESYudkowsky/status/144546114693741363

    I stumbled upon a Twitter thread where Eliezer describes what seems to be his cognitive algorithm that is equivalent to Tune Your Cognitive Strategies, and have decided to archive / repost it here.

    Source:
    https://www.lesswrong.com/posts/rYq6joCrZ8m62m7ej/how-could-i-have-thought-that-faster

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    • 3 min
    [HUMAN VOICE] "My PhD thesis: Algorithmic Bayesian Epistemology" by Eric Neyman

    [HUMAN VOICE] "My PhD thesis: Algorithmic Bayesian Epistemology" by Eric Neyman

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    In January, I defended my PhD thesis, which I called Algorithmic Bayesian Epistemology. From the preface:
    For me as for most students, college was a time of exploration. I took many classes, read many academic and non-academic works, and tried my hand at a few research projects. Early in graduate school, I noticed a strong commonality among the questions that I had found particularly fascinating: most of them involved reasoning about knowledge, information, or uncertainty under constraints. I decided that this cluster of problems would be my primary academic focus. I settled on calling the cluster algorithmic Bayesian epistemology: all of the questions I was thinking about involved applying the "algorithmic lens" of theoretical computer science to problems of Bayesian epistemology.
    Source:
    https://www.lesswrong.com/posts/6dd4b4cAWQLDJEuHw/my-phd-thesis-algorithmic-bayesian-epistemology

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    • 13 min
    [HUMAN VOICE] "On green" by Joe Carlsmith

    [HUMAN VOICE] "On green" by Joe Carlsmith

    Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app.

    (Warning: spoilers for Yudkowsky's "The Sword of Good.")

    Examining a philosophical vibe that I think contrasts in interesting ways with "deep atheism."

    Text version here: https://joecarlsmith.com/2024/03/21/on-green

    This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief summaries of the essays that have been released thus far: https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi

    (Though: note that I haven't put the summary post on the podcast yet.)

    Source:
    https://www.lesswrong.com/posts/gvNnE6Th594kfdB3z/on-green

    Narrated by Joe Carlsmith, audio provided with permission.

    • 1 hr 15 min
    [HUMAN VOICE] "Toward a Broader Conception of Adverse Selection" by Ricki Heicklen

    [HUMAN VOICE] "Toward a Broader Conception of Adverse Selection" by Ricki Heicklen

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    This is a linkpost for https://bayesshammai.substack.com/p/conditional-on-getting-to-trade-your
    “I refuse to join any club that would have me as a member” -Marx[1]
    Adverse Selection is the phenomenon in which information asymmetries in non-cooperative environments make trading dangerous. It has traditionally been understood to describe financial markets in which buyers and sellers systematically differ, such as a market for used cars in which sellers have the information advantage, where resulting feedback loops can lead to market collapses. 
    In this post, I make the case that adverse selection effects appear in many everyday contexts beyond specialized markets or strictly financial exchanges. I argue that modeling many of our decisions as taking place in competitive environments analogous to financial markets will help us notice instances of adverse selection that we otherwise wouldn’t.
    The strong version of my central thesis is that conditional on getting to trade[2], your trade wasn’t all that great. Any time you make a trade, you should be asking yourself “what do others know that I don’t?”
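
    As a toy illustration of that thesis (a sketch of my own, not from the post, with made-up numbers): in a lemons-style used-car market where each seller knows their car's quality and only accepts an offer above its true value, the cars that actually trade are systematically worse than the market average.

```python
import random

# Toy "market for lemons": car quality is uniform on [0, 1].
# The seller knows the quality; the buyer only knows the distribution,
# so the buyer bids a bit above the unconditional mean of 0.5.
# The seller accepts only when the offer beats the car's true value.

random.seed(0)
N = 100_000
offer = 0.55

traded = []
for _ in range(N):
    quality = random.random()   # true value, known only to the seller
    if quality < offer:         # seller trades only when it benefits them
        traded.append(quality)

print("average quality of all cars:            0.5 (by construction)")
print(f"average quality of cars actually sold:  {sum(traded) / len(traded):.3f}")
# Conditional on the trade happening, the buyer overpays: the cars that
# sell are worth about 0.275 on average against a bid of 0.55.
```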

    Source:
    https://www.lesswrong.com/posts/vyAZyYh3qsqcJwwPn/toward-a-broader-conception-of-adverse-selection

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    • 21 min
    LLMs for Alignment Research: a safety priority?

    A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research.

    This scenario is disturbingly close to the situation we already find ourselves in. Asking the best LLMs for help with programming vs technical alignment research feels very different (at least to me). LLMs might generate junk code, but you can keep pointing out the problems with the code, and the code will eventually work. This can be faster than doing it myself, in cases where I don't know a language or library well; the LLMs are moderately familiar with everything.
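
    As a minimal sketch of that iterate-on-errors workflow (my own illustration, not from the post; `ask_llm` is a hypothetical stand-in for whichever chat API you use, and pytest is just one way to close the loop):

```python
import subprocess
import sys


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM chat API you use."""
    raise NotImplementedError("wire this up to your provider of choice")


def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and return (passed, combined output)."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest"],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr


def iterate_on_code(task: str, path: str, max_rounds: int = 5) -> bool:
    """Keep feeding test failures back to the model until the code passes."""
    code = ask_llm(f"Write {path} that does the following:\n{task}")
    for _ in range(max_rounds):
        with open(path, "w") as f:
            f.write(code)
        passed, output = run_tests()
        if passed:
            return True
        # "Keep pointing out the problems with the code":
        code = ask_llm(
            f"This code failed with the errors below. Fix it.\n\n"
            f"Code:\n{code}\n\nErrors:\n{output}"
        )
    return False
```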

    When I try to talk to LLMs about technical AI safety work, however, I just get garbage.

    I think a useful safety precaution for frontier AI models would be to make them more useful for [...]

    The original text contained 8 footnotes which were omitted from this narration.

    ---

    First published:
    April 4th, 2024

    Source:
    https://www.lesswrong.com/posts/nQwbDPgYvAbqAmAud/llms-for-alignment-research-a-safety-priority

    ---

    Narrated by TYPE III AUDIO.

    • 20 min
    [HUMAN VOICE] "Scale Was All We Needed, At First" by Gabriel Mukobi

    [HUMAN VOICE] "Scale Was All We Needed, At First" by Gabriel Mukobi

    Support ongoing human narrations of LessWrong's curated posts:
    www.patreon.com/LWCurated

    Source:
    https://www.lesswrong.com/posts/xLDwCemt5qvchzgHd/scale-was-all-we-needed-at-first

    Narrated for LessWrong by Perrin Walker.

    Share feedback on this narration.

    • 15 min

Customer Reviews

5.0 out of 5
7 Ratings

Top Podcasts In Technology

The Neuron: AI Explained
The Neuron
Lex Fridman Podcast
Lex Fridman
No Priors: Artificial Intelligence | Technology | Startups
Conviction | Pod People
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC
Acquired
Ben Gilbert and David Rosenthal
TED Radio Hour
NPR

You Might Also Like

Dwarkesh Podcast
Dwarkesh Patel
Clearer Thinking with Spencer Greenberg
Spencer Greenberg
Machine Learning Street Talk (MLST)
Machine Learning Street Talk (MLST)
Astral Codex Ten Podcast
Jeremiah
Conversations with Tyler
Mercatus Center at George Mason University
Razib Khan's Unsupervised Learning
Razib Khan