
For Humanity: An AI Risk Podcast

Author(s): The AI Risk Network

About this audio

For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within 2 to 10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

theairisknetwork.substack.com
The AI Risk Network
Social Sciences
Episodes
  • Stuart Russell: “AI CEO Told Me Chernobyl-Level AI Event Might Be Our Only Hope” | For Humanity #72
    Oct 25 2025

    Let’s face it: in the long run, there’s either going to be safe AI or no AI. There is no future with powerful unsafe AI and human beings. In this episode of For Humanity, John Sherman speaks with Professor Stuart Russell — one of the world’s foremost AI pioneers and co-author of Artificial Intelligence: A Modern Approach — about the terrifying honesty of today’s AI leaders.

    Russell reveals that the CEO of a major AI company told him his best hope for a good future is a “Chernobyl-scale AI disaster.” Yes — one of the people building advanced AI believes only a catastrophic warning shot could wake up the world in time. John and Stuart dive deep into the psychology, politics, and incentives driving this suicidal race toward AGI.

    They discuss:

    * Why even AI insiders are losing faith in control

    * What a “Chernobyl moment” could actually look like

    * Why regulation isn’t anti-innovation — it’s survival

    * The myth that America is “allergic” to AI rules

    * How liability, accountability, and provable safety could still save us

    * Whether we can ever truly coexist with a superintelligence

    This is one of the most urgent conversations ever hosted on For Humanity. If you care about your kids’ future — or humanity’s — don’t miss this one.

    🎙️ About For Humanity: A podcast from the AI Risk Network, hosted by John Sherman, making AI extinction risk a kitchen-table conversation on every street.

    📺 Subscribe for weekly conversations with leading scientists, policymakers, and ethicists confronting the AI extinction threat.

    #AIRisk #ForHumanity #StuartRussell #AIEthics #AIExtinction #AIGovernance #ArtificialIntelligence #AIDisaster #GuardRailNow



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 hr and 33 min
  • The RAISE Act: Regulating Frontier AI | For Humanity | EP 71
    Oct 11 2025

    In this episode of For Humanity, John speaks with New York Assemblymember Alex Bores, sponsor of the groundbreaking RAISE Act, one of the first state-level bills in the U.S. designed to regulate frontier AI systems.

    They discuss:

    * Why AI poses an existential risk, with researchers estimating up to a 10% chance of extinction.

    * The political challenges of passing meaningful AI regulation at the state and federal level.

    * How the RAISE Act could require safety plans, transparency, and limits on catastrophic risks.

    * The looming jobs crisis as AI accelerates disruption across industries.

    * Why politicians are only beginning to grapple with AI’s dangers — and why the public must speak up now.

    This is a candid, urgent conversation about AI risk, regulation, and what it will take to secure humanity’s future.

    📌 Learn more about the RAISE Act.

    👉 Subscribe for more conversations on AI risk and the future of humanity.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 hr and 4 min
  • Young Voices on AI Risk: Jobs, Community & the Fight for Our Future | FHP Ep. 70
    Sep 27 2025

    What happens when AI collides with the next generation? In this episode of For Humanity #70 — Young People vs. Advancing AI, host John Sherman sits down with Emma Corbett, Ava Smithing, and Sam Heiner from the Young People’s Alliance to explore how artificial intelligence is already shaping the lives of students and young leaders.

    From classrooms to job applications to AI “companions,” the next generation is facing challenges that older policymakers often don’t even see. This episode digs into what young people really think about AI—and why their voices are critical in the fight for a safe and human future.

    In this episode we cover:

    * Students’ on-the-ground views of AI in education and daily life

    * How AI is fueling job loss, hiring barriers, and rising anxiety about the future

    * The hidden dangers of AI companions and the erosion of real community

    * Why young people feel abandoned by “adults in the room”

    * The path from existential dread → civic action → hope

    🎯 Why watch?

    Because if AI defines the future, young people will inherit it first. Their voices, fears, and leadership could decide whether AI remains a tool—or becomes an existential threat.

    👉 Subscribe for more conversations on AI, humanity, and the choices that will shape our future.

    #AI #AIsafety #ForHumanityPodcast #YoungPeople #FutureofWork



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 hr and 7 min