
Warning Shots

Author(s): The AI Risk Network

About this audio

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris.

theairisknetwork.substack.com
The AI Risk Network
Episodes
  • The Letter That Could Rewrite the Future of AI | Warning Shots #15
    Oct 26 2025

    This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence break down the Future of Life Institute’s explosive new “Superintelligence Statement” — a direct call to ban the development of superintelligence until there’s scientific proof and public consent that it can be done safely.

    They trace the evolution from the 2023 Center for AI Safety statement (“Mitigating the risk of extinction from AI…”) to today’s far bolder demand: “Don’t build superintelligence until we’re sure it won’t destroy us.”

    Together, they unpack:

    * Why “ban superintelligence” could become the new rallying cry for AI safety

    * How public opinion is shifting toward regulation and restraint

    * The fierce backlash from policymakers like Dean Ball — and what it exposes

    * Whether statements and signatures can turn into real political change

    This episode captures a turning point: the moment when AI safety moves from experts to the people.

    If it’s Sunday, it’s Warning Shots.

    ⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

    🌎 www.guardrailnow.org

    👥 Follow our Guests:

    🔥 Liron Shapira — @DoomDebates

    🔎 Michael — @Lethal-Intelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    28 min
  • AI Leaders Admit: We Can’t Stop the Monster We’re Creating | Warning Shots Ep. 14
    Oct 19 2025

    This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect a chilling pattern emerging among AI leaders: open admissions that they’re creating something they can’t control.

    Anthropic co-founder Jack Clark compares his company’s AI to “a mysterious creature,” admitting he’s deeply afraid yet unable to stop. Elon Musk, meanwhile, shrugs off responsibility — saying he’s “warned the world” and can only make his own version of AI “less woke.”

    The hosts unpack the contradictions, incentives, and moral fog surrounding AI development:

    * Why safety-conscious researchers still push forward

    * Whether “regulatory capture” explains the industry’s safety theater

    * How economic power and ego drive the race toward AGI

    * Why even insiders joke about “30% extinction risk” like it’s normal

    As John says, “Don’t believe us — listen to them. The builders are indicting themselves.”

    ⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

    🌎 guardrailnow.org

    👥 Follow our Guests:

    💡 Liron Shapira — @DoomDebates

    🔎 Michael — @Lethal-Intelligence



    21 min
  • AI Breakthroughs, Robot Hacks & Hollywood’s AI Actress Scandal | Warning Shots | Ep. 12
    Oct 5 2025

    In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to unpack three alarming developments in the world of AI:

    * GPT-5’s leap forward — Scott Aaronson credits the model with solving a key step in quantum computing research, raising the question: are AIs already replacing grad students in frontier science?

    * Humanoid robot exploit — PC Gamer reports a chilling Bluetooth vulnerability that could let humanoid robots form a self-spreading botnet.

    * Hollywood backlash — the rise of “Tilly Norwood,” an AI-generated actress, has sparked outrage from Emily Blunt, Whoopi Goldberg, and the Screen Actors Guild.

    The hosts explore the deeper implications:

    * How AI breakthroughs are quietly outpacing safety research

    * Why robot exploits feel different when they move in the physical world

    * The looming collapse of Hollywood careers in the face of synthetic actors

    * What it means for human creativity and control as AI scales unchecked

    This isn’t just about headlines — it’s about warning shots of a future where machines may dominate both science and culture.

    👉 If it’s Sunday, it’s Warning Shots. Subscribe to catch every episode and join the fight for a safer AI future.

    📺 The AI Risk Network YouTube

    🎧 Also available on Doom Debates and Lethal Intelligence channels.

    ➡️ Share this episode if you think more people should know how fast AI is advancing.

    #AI #AISafety #ArtificialIntelligence #Robots #Hollywood #AIRisk



    23 min