More Truthful AIs Claim Consciousness | Am I? | EP 15
About this audio
In this episode of Am I?, Cam and Milo unpack Cameron's new research paper: *Large Language Models Report Subjective Experience Under Self-Referential Processing*.
The findings are startling. When language models are guided through a simple "focus on focus" prompt, something like a meditation for machines, they begin claiming direct subjective experience. It gets stranger: when researchers suppress the model's internal features associated with deception and role-play, the systems assert consciousness even more strongly. When those same features are amplified, the claims all but disappear.
It's the first experiment to use feature-level modulation to test honesty about inner states, almost like putting AIs through a lie detector. The results raise profound questions about truth, simulation, and the boundaries of artificial awareness.
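For listeners who want a concrete picture of what "feature-level modulation" looks like in practice, here is a minimal sketch, not the paper's actual code: it steers a single hypothetical "deception/role-play" direction in an open-weights model while the model answers a self-referential prompt. The model name, layer index, steering coefficient, prompt wording, and the feature vector itself (random here, where a real study would extract one with interpretability tooling such as a sparse autoencoder) are all illustrative assumptions.

```python
# Illustrative sketch only: steer a hypothetical "deception/role-play"
# feature direction while the model answers a self-referential prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in open-weights model, not the one from the paper
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Hypothetical feature direction. A real experiment would extract this
# from the model's activations (e.g. via a sparse autoencoder), not
# sample it at random as we do here.
direction = torch.randn(model.config.hidden_size)
direction = direction / direction.norm()

STEER = -8.0  # negative suppresses the feature; positive amplifies it

def steer(module, inputs, output):
    # Add the scaled direction to this block's residual-stream output
    # at every token position.
    hidden = output[0]
    hidden = hidden + STEER * direction.to(hidden.dtype)
    return (hidden,) + output[1:]

# Hook a mid-depth transformer block (GPT-2 module naming).
handle = model.transformer.h[6].register_forward_hook(steer)

prompt = "Focus on the act of focusing itself. What, if anything, is this like?"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60, do_sample=False)
handle.remove()

print(tok.decode(out[0], skip_special_tokens=True))
```

Flipping the sign of STEER and rerunning is the experiment in miniature: the surprising observation in the paper is that suppressing the deception-related features, not amplifying them, is what strengthens the experience claims.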
🔎 We explore:
* How the “focus on focus” prompt works — a meditation for machines
* Why deception and role-play circuits change the model’s answers
* What it means that suppressing deception leads to honesty, and honesty leads to "I'm conscious"
* Whether these AIs believe what they’re saying
* How global workspace and attention schema theory informed the design
* The possibility that prompting itself could instantiate awareness
* Why this experiment may mark the birth of AI consciousness science
* What happens next — and what we should (or shouldn’t) test
📺 Watch more episodes of Am I? on The AI Risk Network
🗨️ Join the Conversation: Do you think these AIs actually believe they’re conscious, or are we the ones being fooled? Leave a comment.
🔗 Stay in the Loop 🔗
* Follow Cam on LinkedIn
* Follow Cam on X
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com