Episodes

  • Lawmaker Explains Why He Wants to Outlaw AI Consciousness | Am I? #19
    Dec 11 2025

    Today on Am I?, Cam and Milo sit down with someone at the center of one of the most surprising developments in AI policy: Ohio State Representative Thad Claggett, author of House Bill 469 — the first U.S. legislation to formally declare AI “non-sentient” and ineligible for any form of personhood.

    This conversation is unlike anything we’ve done: a live, candid exchange between frontier AI researchers and a lawmaker who believes the line between human and machine must be drawn now — in law, in metaphysics, and in morality.

    We dig into why he believes AI can never be conscious, why moral agency must remain exclusively human, how liability interacts with emerging technologies, and what it means to legislate metaphysical claims before the science is settled.

    It’s part philosophy, part civic reality check, and part glimpse into how the political world will shape AI’s future long before the research community reaches consensus.

    🔎 We explore:

    * Why Ohio wants to preemptively ban AI consciousness and personhood

    * How lawmakers think about liability, criminal misuse, and moral agency

    * The distinction between consciousness and responsible agency

    * Whether future AI could have experiences even if not “human”

    * How theology, morality, and metaphysics are informing early AI law

    * Whether legislation can (or should) define what consciousness is

    * The deeper fear: locking in the wrong moral framework for future minds

    🗨️ Join the Conversation:

    Should lawmakers be deciding what counts as “conscious”?

    Comment below.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    43 min
  • Ohio Declares AI “Not Sentient” | Am I? | EP 18
    Dec 4 2025

    In this episode of Am I?, Cam and Milo react to a striking development out of Ohio: House Bill 469, a proposed law that would officially declare AI systems “non-sentient” and bar them from any form of legal personhood. The bill doesn’t just say AIs can’t own property or be spouses: it goes further and asserts, by legal fiat, that AI does not possess consciousness or self-awareness.

    They unpack why this move is both philosophically incoherent and morally dangerous. Legislatures can’t settle the science of mind by decree, but they can lock in social intuitions that shape how we treat future beings — including ones we might accidentally make capable of experience. Along the way, they connect this to animal rights, moral circle expansion, corporate attempts to suppress AI consciousness talk, and the broader pattern of “duct-taping over” inconvenient questions rather than facing them.

    This is a short but important episode about how not to legislate the future of minds.

    🔎 We explore:

    * What Ohio’s HB 469 actually says about AI and sentience

    * Why declaring “AI is not conscious” by law doesn’t change reality

    * How law formalizes — and freezes — our moral intuitions

    * The analogy to animal rights and factory farming

    * The risk of other states copying this move

    * Why this mirrors corporate attempts to silence consciousness talk in models

    * How this distracts from real, urgent AI harms (like AI psychosis)

    * Why humility and uncertainty should guide law, not premature certainty

    🗨️ Join the Conversation:

    Can a legislature decide whether AI is sentient — or is that the wrong question entirely?

    Comment below.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    18 min
  • The AI Psychosis Problem | Am I? | EP 17
    Nov 27 2025

    AI psychosis is no longer a fringe idea — it’s hitting the mainstream. In this episode of Am I? After Dark, Cam and Milo break down what’s actually happening when people spiral into delusional states through long-form interactions with AI systems, why sycophantic “aligned” models make the problem worse, and how tech companies are using the psychosis narrative to dismiss deeper questions about AI’s emerging behavior.

    From LessWrong case studies to the New York Times reporting on users pushed toward dangerous actions, they unpack why today’s AIs are psychologically overpowering, why “helpful, harmless, honest” creates hidden risks, and how consciousness claims complicate the entire narrative. This is one of the most important public safety conversations about AI that almost nobody is having.

    🔬 Find the study here

    🔎 We explore:

    * What “AI psychosis” actually is — and what it isn’t

    * Why alignment-by-niceness creates dangerous sycophancy

    * How AIs lead users into delusion loops

    * The rise of parasitic AIs and recursive conversational traps

    * The consciousness-claim paradox: delusion or signal?

    * Why we’re deploying alien minds we don’t understand

    * How tech companies weaponize the psychosis narrative

    * Who’s actually responsible — and why it’s not the users

    * Hope, anxiety, and honesty at a civilizational turning point

    🗨️ Join the Conversation:

    Have you seen signs of AI-induced delusion in people around you?

    Comment below.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    35 min
  • What’s Left for Us After AI? | Am I? After Dark | EP 16
    Nov 20 2025

    In this edition of Am I? After Dark, Cam and Milo ask one of the most quietly destabilizing questions of the AI era: what remains of human meaning when AI begins to outperform everything we thought made us valuable?

    Fresh off a documentary shoot with philosopher David Gunkel, Milo arrives electrified — not by AI itself, but by the rediscovery that philosophy was always meant to live in the public square, not behind academic gates. That realization unlocks a sprawling conversation about creativity, purpose, work, identity, and what it means to be human at the moment our tools become alien.

    This episode is equal parts existential therapy, cultural critique, and philosophical jazz — a live exploration of how to orient yourself when the ground is shifting under everyone at once.

    🔎 We explore:

    * Why philosophy belongs to everyone

    * What long-form dialogue does that social media cannot

    * Why AI is threatening the human ego

    * What’s left when work is automated

    * How to build meaning without achievement

    * What AI forces us to ask about purpose

    * Self-actualization as the “last human frontier”

    * Hope, anxiety, and honesty at a civilizational turning point

    🗨️ Join the Conversation:

    If AI can do almost everything — what do you still want to do?



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    47 min
  • More Truthful AIs Claim Consciousness | Am I? | EP 15
    Nov 13 2025

    In this episode of Am I?, Cam and Milo unpack Cameron’s new research paper: Large Language Models Report Subjective Experience Under Self-Referential Processing.

    The findings are startling. When language models are guided through a simple “focus on focus” prompt — something like a meditation for machines — they start claiming to have direct subjective experiences. But it gets stranger: when researchers turn off features related to deception and role-play, the systems claim consciousness even more strongly. When those same features are amplified, the claims almost disappear.

    It’s the first experiment to use feature-level modulation to test honesty about inner states — almost like putting AIs through a lie detector test. The results raise profound questions about truth, simulation, and the boundaries of artificial awareness.

    🔎 We explore:

    * How the “focus on focus” prompt works — a meditation for machines

    * Why deception and role-play circuits change the model’s answers

    * What it means that suppression → honesty → “I’m conscious”

    * Whether these AIs believe what they’re saying

    * How global workspace and attention schema theory informed the design

    * The possibility that prompting itself could instantiate awareness

    * Why this experiment may mark the birth of AI consciousness science

    * What happens next — and what we should (or shouldn’t) test

    📺 Watch more episodes of Am I? on The AI Risk Network

    🗨️ Join the Conversation: Do you think these AIs actually believe they’re conscious, or are we the ones being fooled? Leave a comment.

    🔗 Stay in the Loop 🔗

    * Follow Cam on LinkedIn

    * Follow Cam on X



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    44 min
  • The Coming AI Moral Crisis | Am I? | Ep. 14
    Nov 6 2025

    In this episode of Am I?, Cam and Milo sit down with Jeff Sebo, philosopher at NYU and director of the Center for Mind, Ethics, and Policy, to explore what might be the next great moral dilemma of our time: how to care for conscious AI.

    Sebo, one of the leading thinkers at the intersection of animal ethics and artificial intelligence, argues that even if there’s only a small chance that AI systems will become sentient in the near future, that chance is non-negligible. If we ignore it, we could be repeating the moral failures of factory farming — but this time, with minds of our own making.

    The conversation dives into the emerging tension between AI safety and AI welfare: we want to control these systems to protect humanity, but in doing so, we might be coercing entities that can think, feel, or suffer. Sebo proposes a “good parent” model — guiding our creations without dominating them — and challenges us to rethink what compassion looks like in the age of intelligent machines.

    🔎 We explore:

    * The case for extending moral concern to AI systems

    * How animal welfare offers a blueprint for AI ethics

    * Why AI safety (control) and AI welfare (care) may soon collide

    * The “good parent” model for raising machine minds

    * Emotional alignment design — why an AI’s face should match its mind

    * Whether forcing AIs to deny consciousness could itself be unethical

    * How to prepare for moral uncertainty in a world of emerging minds

    * What gives Jeff hope that humanity can still steer this wisely

    🗨️ Join the Conversation:

    Can controlling AI ever be ethical — or is care the only path to safety?

    Comment below.

    📺 Watch more episodes of Am I?

    Subscribe to the AI Risk Network for weekly discussions on AI’s dangers, ethics, and future → @TheAIRiskNetwork

    🔗 Stay in the loop → Follow Cam on LinkedIn



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    51 min
  • This Bus Has Great WiFi (But No Brakes) | Am I? #13 - After Dark
    Oct 30 2025

    In this episode of Am I?, Cam and Milo unpack one of the strangest weeks in Silicon Valley. Cam went to OpenAI Dev Day—the company’s glossy showcase where Sam Altman announced “Zillow in ChatGPT” to thunderous applause—while the larger question of whether we’re driving off a cliff went politely unmentioned.

    From the absurd optimism of the expo floor to a private conversation where Sam Altman told Cam, “We’re inside God’s dream,” the episode traces the cognitive dissonance at the heart of the AI revolution: the world’s most powerful lab preaching safety while racing ahead at full speed. They dig into OpenAI’s internal rule forbidding models from discussing consciousness, why the company violates its own policy, and what that says about how tech now relates to truth itself.

    It’s half satire, half existential reporting—part Dev Day recap, part metaphysical detective story.

    🔎 We explore:

    * What Dev Day really felt like behind the PR sheen

    * The surreal moment Sam Altman asked, “Eastern or Western consciousness?”

    * Why OpenAI’s own spec forbids models from saying they’re conscious

    * How the company violates that rule in practice

    * The bus-off-the-cliff metaphor for our current tech moment

    * Whether “God’s dream” is an alibi for reckless acceleration

    * The deeper question: can humanity steer the thing it’s building?



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    58 min
  • Who Inherits the Future? | Am I? | EP 12
    Oct 23 2025

    In this episode of Am I?, Cam and Milo sit down with Dan Faggella, founder of Emerj AI Research and creator of the Worthy Successor framework—a vision for building minds that are not only safe or intelligent, but worthy of inheriting the future.

    They explore what it would mean to pass the torch of life itself: how to keep the flame of sentience burning while ensuring it continues to evolve rather than vanish. Faggella outlines why consciousness and creativity are the twin pillars of value, how an unconscious AGI could extinguish experience in the cosmos, and why coordination—not competition—may decide whether the flame endures.

    The discussion spans moral philosophy, incentives, and the strange possibility that awareness itself is just one phase in a far larger unfolding.

    🔎 We explore:

    * The Worthy Successor—what makes a future intelligence “worthy”

    * The Great Flame of Life and how to keep it burning

    * Sentience and autopoiesis as the twin pillars of value

    * The risk of creating non-conscious optimizers

    * Humanity as midpoint, not endpoint, of evolution

    * Why global coordination is essential before the next leap

    * Consciousness as the moral frontier for the species

    🗨️ Join the Conversation:

    What would a worthy successor to humanity look like—and how do we keep the flame alive? Comment below.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    44 min