
The Letter That Could Rewrite the Future of AI | Warning Shots #15

About this episode

This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence break down the Future of Life Institute’s explosive new “Superintelligence Statement” — a direct call to ban the development of superintelligence until there’s scientific proof and public consent that it can be done safely.

They trace the evolution from the 2023 Center for AI Safety statement (“Mitigating the risk of extinction from AI…”) to today’s far bolder demand: “Don’t build superintelligence until we’re sure it won’t destroy us.”

Together, they unpack:

* Why “ban superintelligence” could become the new rallying cry for AI safety

* How public opinion is shifting toward regulation and restraint

* The fierce backlash from policymakers like Dean Ball — and what it exposes

* Whether statements and signatures can turn into real political change

This episode captures a turning point: the moment when AI safety moves from experts to the people.

If it’s Sunday, it’s Warning Shots.

⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

🌎 www.guardrailnow.org

👥 Follow our Guests:

🔥 Liron Shapira — @DoomDebates

🔎 Michael — @lethal-intelligence



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com