The Letter That Could Rewrite the Future of AI | Warning Shots #15
About this audio
This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence break down the Future of Life Institute’s explosive new “Superintelligence Statement” — a direct call to ban the development of superintelligence until there’s scientific proof and public consent that it can be done safely.
They trace the evolution from the 2023 Center for AI Safety statement (“Mitigating the risk of extinction from AI…”) to today’s far bolder demand: “Don’t build superintelligence until we’re sure it won’t destroy us.”
Together, they unpack:
* Why “ban superintelligence” could become the new rallying cry for AI safety
* How public opinion is shifting toward regulation and restraint
* The fierce backlash from policymakers like Dean Ball — and what it exposes
* Whether statements and signatures can turn into real political change
This episode captures a turning point: the moment when AI safety moves from experts to the people.
If it’s Sunday, it’s Warning Shots.
⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.
🌎 www.guardrailnow.org
👥 Follow our Guests:
🔥 Liron Shapira — @DoomDebates
🔎 Michael — @lethal-intelligence
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com