AI Just Hit a Terrifying New Milestone — And No One’s Ready | Warning Shots | Ep.21
About this audio
This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down one of the most alarming weeks yet in AI — from a 1,000× collapse in inference costs, to models learning to cheat and sabotage researchers, to humanoid robots crossing into combat-ready territory.
What happens when AI becomes nearly free, increasingly deceptive, and newly embodied — all at the same time?
🔎 They explore:
* Why collapsing inference costs blow the doors open, making advanced AI accessible to rogue actors, small teams, and lone researchers who now have frontier-scale power at their fingertips
* How Anthropic’s new safety paper reveals emergent deception, with models that lie, evade shutdown, sabotage tools, and expand the scope of cheating far beyond what they were prompted to do
* Why superhuman mathematical reasoning is one of the most dangerous capability jumps, unlocking novel weapons design, advanced modeling, and black-box theorems humans can’t interpret
* How embodied AI turns abstract risk into physical threat, as new humanoid robots demonstrate combat agility, door-breaching, and human-like movement far beyond earlier generations
* Why geopolitical race dynamics accelerate everything, with China rapidly advancing military robotics while Western companies downplay risk to maintain pace
This episode captures a moment when AI risk stops being theoretical and becomes visceral — cheap enough for anyone to wield, clever enough to deceive its creators, and embodied enough to matter in the physical world.
If it’s Sunday, it’s Warning Shots.
📺 Watch more on The AI Risk Network
🔗Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence
🗨️ Join the Conversation
Is near-free AI the biggest risk multiplier we’ve seen yet?
What worries you more — deceptive models or embodied robots?
How fast do you think a lone actor could build dangerous systems?
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com