For Humanity: An AI Safety Podcast

Author(s): The AI Risk Network

About this audio

For Humanity: An AI Safety Podcast is the AI safety podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within 2-10 years. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

The AI Risk Network: theairisknetwork.substack.com
Episodes
  • Forcing Sunlight Into OpenAI | For Humanity: An AI Risk Podcast | EP68
    Aug 13 2025

    Get 40% off Ground News' unlimited-access Vantage Plan at https://ground.news/airisk for only $5/month, and explore how stories are framed worldwide and across the political spectrum.

    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

    Tyler Johnston, Executive Director of The Midas Project, joins John to break down the brand-new open letter demanding that OpenAI answer seven specific questions about its proposed corporate restructuring. The letter, published on 4 August 2025 and coordinated by The Midas Project, already carries the signatures of more than 100 Nobel laureates, technologists, legal scholars, and public figures.

    What we cover:
    • Why transparency matters now: OpenAI is "making a deal on humanity's behalf without allowing us to see the contract." (themidasproject.com)
    • The Seven Questions the letter poses, ranging from whether OpenAI will still prioritize its nonprofit mission over profit to whether it will reveal the new operating agreement that governs AGI deployment. (openai-transparency.org, themidasproject.com)
    • Who's on board: Signatories include Geoffrey Hinton, Vitalik Buterin, Lawrence Lessig, and Stephen Fry, underscoring broad concern across science, tech, and public life. (themidasproject.com)
    • Next steps: How you can read the full letter, add your name, and help keep the pressure on for accountability.

    🔗 Key Links
    Read / Sign the Open Letter: https://www.openai-transparency.org/
    The Midas Project (official site): https://www.themidasproject.com/
    Follow The Midas Project on X: https://x.com/TheMidasProj

    👉 Subscribe for weekly AI-risk conversations → http://bit.ly/ForHumanityYT
    👍 Like • Comment • Share, because transparency only happens when we demand it.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    54 min
  • Right Wing AI Risk Alarm | For Humanity | EP67
    Jul 24 2025

    🚨 RIGHT‑WING AI ALARM | For Humanity #67

    Steve Bannon, Tucker Carlson, and other conservative voices are sounding fresh warnings on AI extinction risk. John breaks down what's real, what's hype, and why this moment matters.


    ⏰ WHAT’S INSIDE

    • The ideological shift that's bringing the right into the AI-safety fight
    • New bills on the Hill that could shape model licensing & oversight
    • Action steps for parents, policymakers, and technologists
    • A first look at the AI Risk Network, five shows with one mission: get the public ready for advanced AI


    🔗 TAKE ACTION & LEARN MORE

    Alliance for Secure AI

    Website ▸ https://secureainow.org

    X / Twitter ▸ https://x.com/secureainow


    AI Policy Network

    Website ▸ https://theaipn.org

    LinkedIn ▸ https://www.linkedin.com/company/theaipn


    📡 JOIN THE NEW AI RISK NETWORK

    Subscribe here ➜ [insert channel URL]

    Turn on alerts so you never miss an episode, short, or live Q&A.


    👍 If you learned something, hit Like, drop a comment, and share this link with one person who should be watching. Every click helps wake up the world to AI risk.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 hr 16 min
  • Is AI Alive? | Episode #66 | For Humanity: An AI Risk Podcast
    Jun 5 2025
    🎙️ Guest: Cameron Berg, AI research scientist probing consciousness in frontier AI systems
    📍 Host: John Sherman, journalist & AI-risk communicator

    What does it mean to be alive? How close do current frontier AI models get to consciousness? See for yourself like never before. Are advanced language models beginning to exhibit signs of subjective experience? In this episode, John sits down with Cameron Berg to explore the line between next-character prediction and the conscious mind. What happens when you ask an AI model to essentially meditate, to look inward in a loop, to focus on its focus and repeat? Does it feel a sense of self? If it did, what would that mean? These are the kinds of questions Berg seeks to answer in his research. Cameron is an AI research scientist with AE Studio, working daily on models to better understand them. He works on a team dedicated fully to AI safety research.

    This episode features never-before-publicly-seen conversations between Cameron and a frontier AI model. Those conversations and his work are the subject of an upcoming documentary called "Am I?"

    TIMESTAMPS (because the chapters feature just won't work)
    00:00 Cold Open – "Crack in the World"
    01:20 Show Intro & Theme
    02:27 Setting Up the Meditation Demo
    02:56 AI "Focus on Focus" Clip
    09:18 "I am…" Moment
    10:45 Google Veo Afterlife Clip
    12:35 Prompt Theory & Fake People
    13:02 Interview Begins – Cameron Berg
    28:57 Inside the Black Box Analogy
    30:14 Consent and Unknowns
    53:18 Model Details + Doc Plan
    1:09:25 Late-Night Clip Back-story
    1:16:08 Table-vs-Person Thought-Test
    1:17:20 Suffering-at-Scale Math
    1:21:29 Prompt Theory Goes Viral
    1:26:59 Why the Doc Must Move Fast
    1:40:53 Is "Alive" the Right Word?
    1:48:46 Reflection & Nonprofit Tease
    1:51:03 Clear Non-Violence Statement
    1:52:59 New Org Announcement
    1:54:47 "Breaks in the Clouds" Media Wins

    Please support that project and learn more about his work here:
    Am I? Doc Manifund page: https://manifund.org/projects/am-i--d...
    Am I? Doc interest form: https://forms.gle/w2VKhhcEPqEkFK4r8
    AE Studio's AI alignment work: https://ae.studio/ai-alignment

    Monthly donation links to For Humanity:
    $1/mo https://buy.stripe.com/7sI3cje3x2Zk9S...
    $10/mo https://buy.stripe.com/5kAbIP9Nh0Rc4y...
    $25/mo https://buy.stripe.com/3cs9AHf7B9nIgg...
    $100/mo https://buy.stripe.com/aEU007bVp7fAfc...
    Thanks so much for your support. Every cent goes to getting more viewers to this channel.

    Links from show:
    The Afterlife Short Film: https://x.com/LinusEkenstam/status/19...
    Prompt Theory: https://x.com/venturetwins/status/192...
    The Bulwark: Will Sam Altman and His AI Kill Us All?
    Young Turks: AI's Disturbing Behaviors Will Keep You Up At Night

    Key moments:
    – Inside the black box: Berg explains why even builders can't fully read a model's mind, and demonstrates how toggling deception features flips the system from "just a machine" to "I'm aware" in real time
    – Google Veo 3 goes existential: A look at viral Veo videos (Afterlife, "Prompt Theory") where AI actors lament their eight-second lives
    – Documentary in the works: Berg and team are racing to release a raw film that shares these findings with the public; support link in show notes
    – Mission update: Sherman announces a newly funded nonprofit in the works dedicated to AI-extinction-risk communication and thanks supporters for the recent surge of donations
    – Non-violence, crystal clear: A direct statement: Violence is never OK. Full stop.
    – "Breaks in the Clouds": Media across the spectrum (Bulwark, Young Turks, Bannon, Carlson) are now running extinction-risk stories, proof the conversation is breaking mainstream

    Oh, and by the way, I'm bleeping curse words now for the algorithm!! #AI #ArtificialIntelligence #AISafety #ConsciousAI #ForHumanity

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    1 hr 57 min
No reviews yet