Episodes

  • Jensen Huang and the zero billion dollar market, with Stephen Witt
    Dec 16 2025

    Our guest in this episode is Stephen Witt, an American journalist and author who writes about the people driving technological revolutions. He is a regular contributor to The New Yorker, and is famous for deep-dive investigations.

    Stephen's new book is "The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip", which has just won the 2025 Financial Times and Schroders Business Book of the Year Award. It is a definitive account of the rise of Nvidia, from its foundation in a Denny's restaurant in 1993 as a video game component manufacturer, to becoming the world's most valuable company, and the hardware provider for the current AI boom.

    Stephen's previous book, “How Music Got Free”, is a history of music piracy and the MP3, and was also a finalist for the FT Business Book of the Year.

    Selected follow-ups:

    • Stephen Witt - personal site
    • Articles by Stephen Witt on The New Yorker
    • The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip - book site
    • Stephen Witt wins FT and Schroders Business Book of the Year - Financial Times
    • Nvidia Executives
    • Battle Royale (Japanese film) - IMDb
    • The Economic Singularity - book by Calum Chace
    • A Cubic Millimeter of a Human Brain Has Been Mapped in Spectacular Detail - Nature
    • NotebookLM - by Google

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    46 min
  • What's your p(Pause)? with Holly Elmore
    Dec 5 2025

    Our guest in this episode is Holly Elmore, who is the Founder and Executive Director of PauseAI US. The website pauseai-us.org starts with this headline: “Our proposal is simple: Don’t build powerful AI systems until we know how to keep them safe. Pause AI.”

    But PauseAI isn’t just a talking shop. They’re probably best known for organising public protests. The UK group has demonstrated in Parliament Square in London, with Big Ben in the background, and also outside the offices of Google DeepMind. A group of 30 PauseAI protesters gathered outside the OpenAI headquarters in San Francisco. Other protests have taken place in New York, Portland, Ottawa, São Paulo, Berlin, Paris, Rome, Oslo, Stockholm, and Sydney, among other cities.

    Previously, Holly was a researcher at the think tank Rethink Priorities in the area of Wild Animal Welfare. And before that, she studied evolutionary biology in Harvard’s Organismic and Evolutionary Biology department.

    Selected follow-ups:

    • Holly Elmore - substack
    • PauseAI US
    • PauseAI - global site
    • Wild Animal Suffering... and why it matters
    • Hard problem of consciousness - Wikipedia
    • The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik - by Michael Plant
    • Leading Evolution Compassionately - Herbivorize Predators
    • David Pearce (philosopher) - Wikipedia
    • The AI industry is racing toward a precipice - Machine Intelligence Research Institute (MIRI)
    • Nick Bostrom's new views regarding AI/AI safety - reddit
    • AI is poised to remake the world; Help us ensure it benefits all of us - Future of Life Institute
    • On being wrong about AI - by Scott Aaronson, on his previous suggestion that it might take "a few thousand years" to reach superhuman AI
    • California Institute of Machine Consciousness - organisation founded by Joscha Bach
    • Pausing AI is the only safe approach to digital sentience - article by Holly Elmore
    • Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers - book by Geoffrey Moore

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    45 min
  • Real-life superheroes and troubled institutions, with Tom Ough
    Oct 31 2025

    Popular movies sometimes feature leagues of superheroes who are ready to defend the Earth against catastrophe. In this episode, we’re going to be discussing some real-life superheroes, as chronicled in the new book by our guest, Tom Ough. The book is entitled “The Anti-Catastrophe League: The Pioneers And Visionaries On A Quest To Save The World”. Some of these heroes are already reasonably well known, but others were new to David, and, he suspects, to many of the book’s readers.

    Tom is a London-based journalist. Earlier in his career he worked in newspapers, mostly for the Telegraph, where he was a staff feature-writer and commissioning editor. He is currently a senior editor at UnHerd, where he commissions essays and occasionally writes them. Perhaps one reason why he writes so well is that he has a BA in English Language and Literature from Oxford University, where he was a Casberd scholar.

    Selected follow-ups:

    • About Tom Ough
    • The Anti-Catastrophe League - The book's webpage
    • On novel methods of pandemic prevention
    • What is effective altruism? (EA)
    • Sam Bankman-Fried - Wikipedia (also covers FTX)
    • Open Philanthropy
    • Conscium
    • Here Comes the Sun - book by Bill McKibben
    • The 10 Best Beatles Songs (Based on Streams)
    • Carrington Event - Wikipedia
    • Mirror life - Wikipedia
    • Future of Humanity Institute 2005-2024: final report - by Anders Sandberg
    • Oxford FHI Global Catastrophic Risks - FHI Conference, 2008
    • Forethought
    • Review of Nick Bostrom’s Deep Utopia - by Calum
    • DeepMind and OpenAI claim gold in International Mathematical Olympiad
    • What the Heck is Hubble Tension?
    • The Decade Ahead - by Leopold Aschenbrenner
    • AI 2027
    • Anglofuturism

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    41 min
  • Safe superintelligence via a community of AIs and humans, with Craig Kaplan
    Oct 10 2025

    Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon.

    Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street, and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).

    Selected follow-ups:

    • iQ Company
    • Superintelligence - by iQ Company
    • Herbert A. Simon - Wikipedia
    • Amara’s Law and Its Place in the Future of Tech - Pohan Lin
    • The Society of Mind - book by Marvin Minsky
    • AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News
    • Statement on AI Risk - Center for AI Safety
    • I’ve Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor Jones
    • Secrets of Software Quality: 40 Innovations from IBM - book by Craig Kaplan
    • London Futurists Podcast episode featuring David Brin
    • Reason in human affairs - book by Herbert Simon
    • US and China will intervene to halt ‘suicide race’ of AGI – Max Tegmark
    • If Anybody Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate Soares
    • AGI-25 - conference in Reykjavik
    • The First Global Brain Workshop - Brussels 2001
    • Center for Integrated Cognition
    • Paul S. Rosenbloom
    • Tatiana Shavrina, Meta
    • Henry Minsky launches AI startup inspired by father’s MIT research

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    42 min
  • How progress ends: the fate of nations, with Carl Benedikt Frey
    Sep 17 2025

    Many people expect improvements in technology over the next few years, but fewer people are optimistic about improvements in the economy. Especially in Europe, there’s a narrative that productivity has stalled, that the welfare state is over-stretched, and that the regions of the world where innovation will be rewarded are the US and China – although there are lots of disagreements about which of these two countries will gain the upper hand.

    To discuss these topics, our guest in this episode is Carl Benedikt Frey, the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute. Carl is also a Fellow at Mansfield College, University of Oxford, and is Director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School.

    Carl’s new book has the ominous title, “How Progress Ends”. The subtitle is “Technology, Innovation, and the Fate of Nations”. A central premise of the book is that our ability to think clearly about the possibilities for progress and stagnation today is enhanced by looking backward at the rise and fall of nations around the globe over the past thousand years. The book contains fascinating analyses of how countries at various times made significant progress, and at other times stagnated. The book also considers what we might deduce about the possible futures of different economies worldwide.

    Selected follow-ups:

    • Professor Carl-Benedikt Frey - Oxford Martin School
    • How Progress Ends: Technology, Innovation, and the Fate of Nations - Princeton University Press
    • Stop Acting Like This Is Normal - Ezra Klein ("Stop Funding Trump’s Takeover")
    • OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
    • A Human Amateur Beat a Top Go-Playing AI Using a Simple Trick - Vice
    • The future of employment: How susceptible are jobs to computerisation? - Carl Benedikt Frey and Michael A. Osborne
    • Europe's Choice: Policies for Growth and Resilience - Alfred Kammer, IMF
    • MIT Radiation Laboratory ("Rad Lab")

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    38 min
  • Tsetlin Machines, Literal Labs, and the future of AI, with Noel Hurley
    Sep 8 2025

    Our guest in this episode is Noel Hurley. Noel is a highly experienced technology strategist with a long career at the cutting edge of computing. He spent two decade-long stints at Arm, the semiconductor company whose processor designs power hundreds of billions of devices worldwide.

    Today, he’s a co-founder of Literal Labs, where he’s developing Tsetlin Machines. Named after Michael Tsetlin, a Soviet mathematician, these are machine learning models that are energy-efficient, flexible, and surprisingly effective at solving complex problems - without the opacity or computational overhead of large neural networks.

    AI has long had two main camps, or tribes. One camp works with neural networks, including Large Language Models. Neural networks are brilliant at pattern matching, and can be compared to human instinct, or fast thinking, to use Daniel Kahneman's terminology. Neural nets have been dominant since the first Big Bang in AI in 2012, when Geoff Hinton and others demonstrated the foundations for deep learning.

    For decades before the 2012 Big Bang, the predominant form of AI was symbolic AI, also known as Good Old Fashioned AI. This can be compared to logical reasoning, or slow thinking in Kahneman's terminology.

    Tsetlin Machines have characteristics of both neural networks and symbolic AI. They are rule-based learning systems built from simple automata, not from neurons or weights. But their learning mechanism is statistical and adaptive, more like machine learning than traditional symbolic AI.
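    To make the "simple automata" idea concrete, here is a minimal sketch of the classic two-action Tsetlin Automaton, the building block the models are named after. This is an illustrative toy, not Literal Labs' implementation; the class name, state count, and reward probabilities are all assumptions chosen for the example.

    ```python
    import random

    class TsetlinAutomaton:
        """A two-action Tsetlin Automaton with 2*n states.

        States 1..n select action 0; states n+1..2n select action 1.
        A reward pushes the state deeper into the current action's half
        (reinforcing it); a penalty pushes it toward the boundary, and
        eventually flips the chosen action.
        """
        def __init__(self, n=6):
            self.n = n
            self.state = random.choice([n, n + 1])  # start at the boundary

        def action(self):
            return 0 if self.state <= self.n else 1

        def reward(self):
            if self.action() == 0:
                self.state = max(1, self.state - 1)
            else:
                self.state = min(2 * self.n, self.state + 1)

        def penalize(self):
            # Move toward the boundary; crossing it switches the action.
            self.state += 1 if self.action() == 0 else -1

    # Train against an environment that rewards action 1 far more often
    # (0.9 vs 0.1 are arbitrary illustrative probabilities).
    random.seed(0)
    ta = TsetlinAutomaton()
    for _ in range(200):
        p_reward = 0.9 if ta.action() == 1 else 0.1
        if random.random() < p_reward:
            ta.reward()
        else:
            ta.penalize()

    print(ta.action())  # usually settles on the more-rewarded action
    ```

    A full Tsetlin Machine composes many such automata, each voting on whether a boolean feature should be included in a conjunctive clause - which is what gives the approach its rule-based, interpretable character alongside statistical learning.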

    Selected follow-ups:

    • Noel Hurley - Literal Labs
    • A New Generation of Artificial Intelligence - Literal Labs
    • Michael Tsetlin - Wikipedia
    • Thinking, Fast and Slow - book by Daniel Kahneman
    • 54x faster, 52x less energy - MLPerf Inference metrics
    • Introducing the Model Context Protocol (MCP) - Anthropic
    • Pioneering Safe, Efficient AI - Conscium
    • Smartphones and Beyond - a personal history of Psion and Symbian
    • The Official History of Arm - Arm
    • Interview with Sir Robin Saxby - IT Archive
    • How Spotify came to be worth billions - BBC

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    37 min
  • Intellectual dark matter? A reputation trap? The case of cold fusion, with Jonah Messinger
    Aug 5 2025

    Could the future see the emergence and adoption of a new field of engineering called nucleonics, in which the energy of nuclear fusion is accessed at relatively low temperatures, producing abundant, clean, safe energy? This kind of idea has been discussed since 1989, when the claims of cold fusion first received media attention. It is often assumed that the field quickly reached a dead-end, and that the only scientists who continue to study it are cranks. However, as we’ll hear in this episode, there may be good reasons to keep an open mind about a number of anomalous but promising results.

    Our guest is Jonah Messinger, who is a Winton Scholar and Ph.D. student at the Cavendish Laboratory of Physics at the University of Cambridge. Jonah is also a Research Affiliate at MIT, a Senior Energy Analyst at the Breakthrough Institute, and previously he was a Visiting Scientist and ThinkSwiss Scholar at ETH Zürich. His work has appeared in research journals, on the John Oliver show, and in publications of Columbia University. He earned his Master’s in Energy and Bachelor’s in Physics from the University of Illinois at Urbana-Champaign, where he was named to its Senior 100 Honorary.

    Selected follow-ups:

    • Jonah Messinger (The Breakthrough Institute)
    • nucleonics.org
    • U.S. Department of Energy Announces $10 Million in Funding to Projects Studying Low-Energy Nuclear Reactions (ARPA-E)
    • How Anomalous Science Breaks Through - by Jonah Messinger
    • Wolfgang Pauli (Wikiquote)
    • Cold fusion: A case study for scientific behavior (Understanding Science)
    • Calculated fusion rates in isotopic hydrogen molecules - by SE Koonin & M Nauenberg
    • Known mechanisms that increase nuclear fusion rates in the solid state - by Florian Metzler et al
    • Introduction to superradiance (Cold Fusion Blog)
    • Peter L. Hagelstein - Professor at MIT
    • Models for nuclear fusion in the solid state - by Peter Hagelstein et al
    • Risk and Scientific Reputation: Lessons from Cold Fusion - by Huw Price
    • Katalin Karikó (Wikipedia)
    • “Abundance” and Its Insights for Policymakers - by Hadley Brown
    • Identifying intellectual dark matter - by Florian Metzler and Jonah Messinger

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    41 min
  • AI agents, AI safety, and AI boycotts, with Peter Scott
    Jul 29 2025

    This episode of London Futurists Podcast is a special joint production with the AI and You podcast which is hosted by Peter Scott. It features a three-way discussion, between Peter, Calum, and David, on the future of AI, with particular focus on AI agents, AI safety, and AI boycotts.

    Peter Scott is a futurist, speaker, and technology expert helping people master technological disruption. After receiving a Master’s degree in Computer Science from Cambridge University, he went to California to work for NASA’s Jet Propulsion Laboratory. His weekly podcast, “Artificial Intelligence and You” tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution?

    Peter’s second book, also called “Artificial Intelligence and You,” was released in 2022. Peter works with schools to help them pivot their governance frameworks, curricula, and teaching methods to adapt to and leverage AI.

    Selected follow-ups:

    • Artificial Intelligence and You (podcast)
    • Making Sense of AI - Peter's personal website
    • Artificial Intelligence and You (book)
    • AI agent verification - Conscium
    • Preventing Zero-Click AI Threats: Insights from EchoLeak - TrendMicro
    • Future Crimes - book by Marc Goodman
    • How TikTok Serves Up Sex and Drug Videos to Minors - Washington Post
    • COVID-19 vaccine misinformation and hesitancy - Wikipedia
    • Cambridge Analytica - Wikipedia
    • Invisible Rulers - book by Renée DiResta
    • 2025 Northern Ireland riots (Ballymena) - Wikipedia
    • Google DeepMind Slammed by Protesters Over Broken AI Safety Promise

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    54 min