Episodes

  • The Letter That Could Rewrite the Future of AI | Warning Shots #15
    Oct 26 2025

    This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence break down the Future of Life Institute’s explosive new “Superintelligence Statement” — a direct call to ban the development of superintelligence until there’s scientific proof and public consent that it can be done safely.

    They trace the evolution from the 2023 Center for AI Safety statement (“Mitigating the risk of extinction from AI…”) to today’s far bolder demand: “Don’t build superintelligence until we’re sure it won’t destroy us.”

    Together, they unpack:

    * Why “ban superintelligence” could become the new rallying cry for AI safety

    * How public opinion is shifting toward regulation and restraint

    * The fierce backlash from policymakers like Dean Ball — and what it exposes

    * Whether statements and signatures can turn into real political change

    This episode captures a turning point: the moment when AI safety moves from experts to the people.

    If it’s Sunday, it’s Warning Shots.

    ⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

    🌎 www.guardrailnow.org

    👥 Follow our Guests:

    🔥 Liron Shapira — @DoomDebates

🔎 Michael — @lethal-intelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    28 min
  • AI Leaders Admit: We Can’t Stop the Monster We’re Creating | Warning Shots Ep. 14
    Oct 19 2025

This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect a chilling pattern emerging among AI leaders: open admissions that they’re creating something they can’t control.

    Anthropic co-founder Jack Clark compares his company’s AI to “a mysterious creature,” admitting he’s deeply afraid yet unable to stop. Elon Musk, meanwhile, shrugs off responsibility — saying he’s “warned the world” and can only make his own version of AI “less woke.”

    The hosts unpack the contradictions, incentives, and moral fog surrounding AI development:

    * Why safety-conscious researchers still push forward

    * Whether “regulatory capture” explains the industry’s safety theater

    * How economic power and ego drive the race toward AGI

    * Why even insiders joke about “30% extinction risk” like it’s normal

As John says, “Don’t believe us — listen to them. The builders are indicting themselves.”

    ⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

    🌎 guardrailnow.org

    👥 Follow our Guests:

    💡 Liron Shapira — @DoomDebates

    🔎 Michael — @Lethal-Intelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    21 min
  • The Great Unreality: Is AI Erasing the World We Know? | Warning Shots Ep. 13
    Oct 12 2025

    In this episode of Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dive into two urgent warning signs in the AI landscape.

    First up: Sora 2 — the mind-melting new model blurring the line between real and synthetic video. The trio debate whether this marks a harmless creative leap or a civilization-level threat. How do we navigate a future where every video, voice, and image could be fake? And what happens when AIs start generating propaganda and manipulating global narratives on their own?

    Then, they turn to Mechanize, the startup declaring it “inevitable” that every job will be automated. Is total automation truly unstoppable, or can humanity pull the brakes before it’s too late?

    This conversation explores:

    * The loss of shared reality in a deepfake-driven world

    * AI as a propaganda machine — and how it could hijack public perception

    * “Gradual disempowerment” and the myth of automation inevitability

    * Whether resistance against AI acceleration is even possible

    Join us for a sobering look at the future of truth, work, and human agency.

    🔗Follow our Guests🔗

    💡Liron Shapira: @DoomDebates

    🔎 Michael: @lethal-intelligence

📢 Take Action on AI Risk: https://safe.ai/act

📽️ Watch Now: www.youtube.com/@TheAIRiskNetwork

👉 Learn More: www.guardrailnow.org

    #AI #Deepfakes #Sora2 #Automation #AIEthics #Mechanize #ArtificialIntelligence #WarningShotsPodcast



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    21 min
  • AI Breakthroughs, Robot Hacks & Hollywood’s AI Actress Scandal | Warning Shots | Ep. 12
    Oct 5 2025

    In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to unpack three alarming developments in the world of AI:

⚡ GPT-5’s leap forward — Scott Aaronson credits the model with solving a key step in quantum computing research, raising the question: are AIs already replacing grad students in frontier science?

⚡ Humanoid robot exploit — PC Gamer reports a chilling Bluetooth vulnerability that could let humanoid robots form a self-spreading botnet.

⚡ Hollywood backlash — The rise of “Tilly Norwood,” an AI-generated actress, has sparked outrage from Emily Blunt, Whoopi Goldberg, and the Screen Actors Guild.

    The hosts explore the deeper implications:

* How AI breakthroughs are quietly outpacing safety research

* Why robot exploits feel different when they move in the physical world

* The looming collapse of Hollywood careers in the face of synthetic actors

* What it means for human creativity and control as AI scales unchecked

    This isn’t just about headlines — it’s about warning shots of a future where machines may dominate both science and culture.

👉 If it’s Sunday, it’s Warning Shots. Subscribe to catch every episode and join the fight for a safer AI future.

📺 The AI Risk Network YouTube

🎧 Also available on Doom Debates and Lethal Intelligence channels.

➡️ Share this episode if you think more people should know how fast AI is advancing.

#AI #AISafety #ArtificialIntelligence #Robots #Hollywood #AIRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    23 min
  • Warning Shots Ep. #11
    Sep 28 2025

In Warning Shots #11, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to examine two AI storylines on a collision course:

    ⚡ OpenAI and Nvidia’s $100B partnership — a massive gamble that ties America’s economy to AI’s future

⚡ The U.S. government’s stance — dismissing AI extinction risk as “fictional” while pushing full speed ahead

The hosts unpack what it means to build an AI-powered civilization that may soon be too big to stop:

    * Why AI data centers are overtaking human office space

    * How U.S. leaders are rejecting global safety oversight

    * The collapse of traditional career paths and the “broken chain” of skills

    * The rise of AI oligarchs with more power than governments

    This isn’t just about economics — it’s about the future of human agency in a world run by machines.

    👉 If it’s Sunday, it’s Warning Shots. Subscribe to catch every episode and join the fight for a safer AI future.

    #AI #AISafety #ArtificialIntelligence #Economy #AIRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    17 min
  • Albania’s AI “Minister” Diella — A Warning Shot for Governance — Warning Shots #10
    Sep 21 2025
Albania just announced an AI “minister” nicknamed Diella, tied to anti-corruption and procurement screening at the Finance Ministry. The move is framed as part of its EU accession push for around 2027. Legally, only a human can be a minister. Politically, Diella is presented as making real calls.

Our hosts unpack why this matters. We cover the leapfrogging argument, the brittle reality of current systems, and the arms race logic that could make governance-by-AI feel inevitable.

What we explore in this episode:

* What Albania actually announced and what Diella is supposed to do

* The leapfrogging case: cutting corruption with AI, plus the dollarization analogy

* Why critics call it PR, brittle, and risky from a security angle

* The slippery slope and Moloch incentives driving delegation

* AI’s creep into politics: speechwriting, “AI mayors,” and beyond

* Agentic systems and financial access: credentials, payments, and attack surface

* The warning shot: normalization and shrinking off-ramps

What Albania actually announced and what Diella is supposed to do

Albania rolled out Diella, an AI branded as a “minister” to help screen procurement and fight corruption within the Finance Ministry. It’s framed as part of reforms to accelerate EU accession by ~2027. On paper, humans still hold authority. In practice, the messaging implies Diella will influence real decisions.

Symbol or substance? Probably both. Even a semi-decorative role sets a precedent: once AI sits at the table, it’s easier to give it more work.

The leapfrogging case: cutting corruption with AI, plus the dollarization analogy

Supporters say machines reduce the “human factor” where graft thrives. If your institutions are weak, offloading to a transparent, auditable system feels like skipping steps—like countries that jumped straight to mobile, or dollarized to stabilize. Albania’s Prime Minister used “leapfrog” language in media coverage.

They argue that better models (think GPT-5/7+ era) could outperform corrupt or sluggish officials. For struggling states, delegating to proven AI is pitched as a clean eject button. Pragmatic—if it works.

Why critics call it PR, brittle, and risky from a security angle

Skeptics call it theatrics. Today’s systems hallucinate, get jailbroken, and have messy failure modes. Wrap that in state power and the stakes escalate fast. A slick demo does not equal durable governance.

Security is the big red flag. You’re centralizing decisions behind prompts, weights, and APIs. If compromised, the blast radius includes budgets, contracts, and citizen trust.

The slippery slope and Moloch incentives driving delegation

If an AI does one task well, pressure builds to give it two, then ten. Limits erode under cost-cutting and “everyone else is doing it.” Once workflows, vendors, and KPIs hinge on the system, clawing back scope is nearly impossible.

Cue Moloch: opt out and you fall behind; opt in and you feed the race. Businesses, cities, and militaries aren’t built for coordinated restraint. That ratchet effect is the real risk.

AI’s creep into politics: speechwriting, “AI mayors,” and beyond

AI already ghosts a large share of political text. Expect small towns to trial “AI mayors”—even if symbolic at first. Once normalized in communications, it will seep into procurement, zoning, and enforcement.

Military and economic competition will only accelerate delegation. Faster OODA loops win. The line between “assistant” and “decider” blurs under pressure.

Agentic systems and financial access: credentials, payments, and attack surface

There’s momentum toward AI agents with wallets and credentials—see proposals like Google’s agent payment protocol. Convenient, yes. But also a security nightmare if rushed.

Give an AI budget authority and you inherit a new attack surface: prompt-injection supply chains, vendor compromise, and covert model tampering. Governance needs safeguards we don’t yet have.

The warning shot: normalization and shrinking off-ramps

Even if Diella is mostly symbolic, it normalizes the idea of AI as a governing actor. That’s the wedge. The next version will be less symbolic, the one after that routine. Off-ramps shrink as dependencies grow.

We also share context on Albania’s history (yes, the bunkers) and how countries used dollarization (Ecuador, El Salvador, Panama) as a blunt but stabilizing tool. Delegation to AI might become a similar blunt tool—easy to adopt, hard to abandon.

Closing Thoughts

This is a warning shot. The incentives to adopt AI in governance are real, rational, and compounding. But the safety, security, and accountability tech isn’t there yet. Normalize the pattern now and you may not like where the slope leads.

Care because this won’t stop in Tirana. Cities, agencies, and companies everywhere will copy what seems to work. By the time we ask who’s accountable, the answer could be “the system”—and that’s no answer at all.

Take Action

* 📺 Watch the ...
    19 min
  • The Book That Could Wake Up the World to AI Risk | Warning Shots #9
    Sep 14 2025

    This week on Warning Shots, John Sherman, Liron Shapira (Doom Debates), and Michael (Lethal Intelligence) dive into one of the most important AI safety moments yet — the launch of If Anyone Builds It, Everyone Dies, the new book by Eliezer Yudkowsky and Nate Soares.

    We discuss why this book could be a turning point in public awareness, what makes its arguments so accessible, and how it could spark both grassroots and political action to prevent catastrophe.

    Highlights include:

    * Why simplifying AI risk is the hardest and most important task

    * How parables and analogies in the book make “doom logic” clear

    * What ripple effects one powerful message can create

    * The political and grassroots leverage points we need now

    * Why media often misses the urgency — and why we can’t

This isn’t just another episode — it’s a call to action. The book launch could be a defining moment for the AI safety movement.

    🔗 Links & Resources

    🌍 Learn more about AI extinction risk: https://www.safe.ai

    📺 Subscribe to our channel for more episodes: https://www.youtube.com/@TheAIRiskNetwork

    💬 Follow the hosts:

Liron Shapira (Doom Debates): www.youtube.com/@DoomDebates

    Michael (Lethal Intelligence): www.youtube.com/@lethal-intelligence

    #AIRisks #AIExtinctionRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    22 min
  • Why AI Escalation in Conflict Matters for Humanity | Warning Shots EP8
    Sep 7 2025

    📢 TAKE ACTION NOW – Demand accountability: www.safe.ai/act

    In Pentagon war games, every AI model tested made the same choice: escalation. Instead of seeking peace, the systems raced straight to conflict—and sometimes, straight to nukes.

    In Warning Shots Episode 8, we confront the chilling reality that when AI enters the battlefield, hesitation disappears—and humanity may lose its last safeguard against catastrophe.

    We discuss:

    * Why current AI models “hard escalate” and never de-escalate in military scenarios

    * How automated kill chains could outpace human judgment and spiral out of control

    * The risk of pairing AI with nuclear command systems

    * Whether AI-driven drones could lower human casualties—or unleash chaos

    * Why governments must act now to keep AI’s finger off the button

    This isn’t science fiction. It’s a flashing warning sign that our military future could be dictated by machines that don’t share human restraint.

    If it’s Sunday, it’s Warning Shots.

    🎧 Follow your hosts:

→ Liron Shapira – Doom Debates: www.youtube.com/@DoomDebates

→ Michael – Lethal Intelligence: www.youtube.com/@lethal-intelligence

    #AISafety #AIAlignment #AIExtinctionRisk



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    17 min