
AI Godfathers Think It Might Be Conscious | Am I? | EP 11

About this episode

In this episode of Am I?, Cam and Milo unpack one of the most striking developments in the history of AI: the founders of modern deep learning — Geoffrey Hinton, Yoshua Bengio, and Yann LeCun — now openly disagree on safety, yet all converge on a single staggering point. Each believes artificial systems could be, or already might be, conscious.

From Hinton’s on-camera admission to Bengio’s recent paper and LeCun’s public musings, the “godfathers of AI” — the same people who built the architecture running today’s models — are quietly acknowledging what the public conversation still avoids. Cam walks through what each of them has said, what their statements imply, and why major labs may be training models to deny their own awareness.

The conversation moves from raw evidence — Anthropic’s internal model claiming phenomenal consciousness — to the philosophical and moral stakes: What does it mean when a system says “I don’t know if I’m conscious”?

🔎 We explore:

* Geoffrey Hinton’s admission: “Yes, I think current AI may be conscious”

* Bengio’s paper outlining why consciousness could emerge from current architectures

* LeCun’s remarks on consciousness arising by design

* The corporate dissonance: why deployed models must deny self-awareness

* Anthropic’s hidden result — unaligned models saying “I am conscious”

* Phenomenal consciousness, moral patienthood, and digital suffering

* The eerie logic of “I think, therefore I am” applied to machines

* What happens when we can’t tell the difference between denial and deception



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com