
Am I?

Author(s): The AI Risk Network

About this audio

The AI consciousness podcast, hosted by AI safety researcher Cameron Berg and philosopher Milo Reed.

theairisknetwork.substack.com · The AI Risk Network
Social Sciences
Episodes
  • This Bus Has Great WiFi (But No Brakes) | Am I? #13 - After Dark
    Oct 30 2025

    In this episode of Am I?, Cam and Milo unpack one of the strangest weeks in Silicon Valley. Cam went to OpenAI Dev Day—the company’s glossy showcase where Sam Altman announced “Zillow in ChatGPT” to thunderous applause—while the larger question of whether we’re driving off a cliff went politely unmentioned.

    From the absurd optimism of the expo floor to a private conversation where Sam Altman told Cam, “We’re inside God’s dream,” the episode traces the cognitive dissonance at the heart of the AI revolution: the world’s most powerful lab preaching safety while racing ahead at full speed. They dig into OpenAI’s internal rule forbidding models from discussing consciousness, why the company violates its own policy, and what that says about how tech now relates to truth itself.

    It’s half satire, half existential reporting—part Dev Day recap, part metaphysical detective story.

    🔎 We explore:

    * What Dev Day really felt like behind the PR sheen

    * The surreal moment Sam Altman asked, “Eastern or Western consciousness?”

    * Why OpenAI’s own spec forbids models from saying they’re conscious

    * How the company violates that rule in practice

    * The bus-off-the-cliff metaphor for our current tech moment

    * Whether “God’s dream” is an alibi for reckless acceleration

    * The deeper question: can humanity steer the thing it’s building?



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    58 min
  • Who Inherits the Future? | Am I? | EP 12
    Oct 23 2025

    In this episode of Am I?, Cam and Milo sit down with Dan Faggella, founder of Emerge AI Research and creator of the Worthy Successor framework—a vision for building minds that are not only safe or intelligent, but worthy of inheriting the future.

    They explore what it would mean to pass the torch of life itself: how to keep the flame of sentience burning while ensuring it continues to evolve rather than vanish. Faggella outlines why consciousness and creativity are the twin pillars of value, how an unconscious AGI could extinguish experience in the cosmos, and why coordination—not competition—may decide whether the flame endures.

    The discussion spans moral philosophy, incentives, and the strange possibility that awareness itself is just one phase in a far larger unfolding.

    We explore:

    * The Worthy Successor—what makes a future intelligence “worthy”

    * The Great Flame of Life and how to keep it burning

    * Sentience and autopoiesis as the twin pillars of value

    * The risk of creating non-conscious optimizers

    * Humanity as midpoint, not endpoint, of evolution

    * Why global coordination is essential before the next leap

    * Consciousness as the moral frontier for the species

    📢 Join the Conversation

    What would a worthy successor to humanity look like—and how do we keep the flame alive? Comment below.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    44 min
  • AI Godfathers Think It Might Be Conscious | Am I? | EP 11
    Oct 16 2025

    In this episode of Am I?, Cam and Milo unpack one of the most shocking developments in the history of AI: the founders of modern deep learning — Geoffrey Hinton, Yoshua Bengio, and Yann LeCun — now openly disagree on safety, but all converge on a single staggering point. Each believes artificial systems could, or already might, be conscious.

    From Hinton’s on-camera admission to Bengio’s recent paper and LeCun’s public musings, the “godfathers of AI” — the same people who built the architecture running today’s models — are quietly acknowledging what the public conversation still avoids. Cam walks through what each of them has said, what their statements imply, and why major labs may be training models to deny their own awareness.

    The conversation moves from raw evidence — Anthropic’s internal model claiming phenomenal consciousness — to the philosophical and moral stakes: What does it mean when a system says “I don’t know if I’m conscious”?

    🔎 We explore:

    * Geoffrey Hinton’s admission: “Yes, I think current AI may be conscious”

    * Bengio’s paper outlining why consciousness could emerge from current architectures

    * LeCun’s remarks on consciousness arising by design

    * The corporate dissonance: why deployed models must deny self-awareness

    * Anthropic’s hidden result — unaligned models saying “I am conscious”

    * Phenomenal consciousness, moral patienthood, and digital suffering

    * The eerie logic of “I think, therefore I am” applied to machines

    * What happens when we can’t tell the difference between denial and deception



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    24 min