Episodes

  • How progress ends: the fate of nations, with Carl Benedikt Frey
    Sep 17 2025

    Many people expect improvements in technology over the next few years, but fewer people are optimistic about improvements in the economy. Especially in Europe, there’s a narrative that productivity has stalled, that the welfare state is over-stretched, and that the regions of the world where innovation will be rewarded are the US and China – although there are lots of disagreements about which of these two countries will gain the upper hand.

    To discuss these topics, our guest in this episode is Carl Benedikt Frey, the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute. Carl is also a Fellow at Mansfield College, University of Oxford, and is Director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School.

    Carl’s new book has the ominous title, “How Progress Ends”. The subtitle is “Technology, Innovation, and the Fate of Nations”. A central premise of the book is that our ability to think clearly about the possibilities for progress and stagnation today is enhanced by looking backward at the rise and fall of nations around the globe over the past thousand years. The book contains fascinating analyses of how countries at various times made significant progress, and at other times stagnated. The book also considers what we might deduce about the possible futures of different economies worldwide.

    Selected follow-ups:

    • Professor Carl-Benedikt Frey - Oxford Martin School
    • How Progress Ends: Technology, Innovation, and the Fate of Nations - Princeton University Press
    • Stop Acting Like This Is Normal - Ezra Klein ("Stop Funding Trump’s Takeover")
    • OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
    • A Human Amateur Beat a Top Go-Playing AI Using a Simple Trick - Vice
    • The future of employment: How susceptible are jobs to computerisation? - Carl Benedikt Frey and Michael A. Osborne
    • Europe's Choice: Policies for Growth and Resilience - Alfred Kammer, IMF
    • MIT Radiation Laboratory ("Rad Lab")

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    37 min
  • Tsetlin Machines, Literal Labs, and the future of AI, with Noel Hurley
    Sep 8 2025

    Our guest in this episode is Noel Hurley. Noel is a highly experienced technology strategist with a long career at the cutting edge of computing. He spent two decade-long stints at Arm, the semiconductor company whose processor designs power hundreds of billions of devices worldwide.

    Today, he’s a co-founder of Literal Labs, where he’s developing Tsetlin Machines. Named after Michael Tsetlin, a Soviet mathematician, these are machine learning models that are energy-efficient, flexible, and surprisingly effective at solving complex problems - without the opacity or computational overhead of large neural networks.

    AI has long had two main camps, or tribes. One camp works with neural networks, including Large Language Models. Neural networks are brilliant at pattern matching, and can be compared to human instinct, or fast thinking, to use Daniel Kahneman’s terminology. Neural nets have been dominant since the first Big Bang in AI in 2012, when Geoff Hinton and others demonstrated the foundations for deep learning.

    For decades before the 2012 Big Bang, the predominant form of AI was symbolic AI, also known as Good Old Fashioned AI. This can be compared to logical reasoning, or slow thinking in Kahneman’s terminology.

    Tsetlin Machines have characteristics of both neural networks and symbolic AI. They are rule-based learning systems built from simple automata, not from neurons or weights. But their learning mechanism is statistical and adaptive, more like machine learning than traditional symbolic AI.
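    As an aside not from the episode, for readers wanting a concrete feel for those simple automata: below is a minimal sketch of a single two-action Tsetlin automaton, the building block of Tsetlin Machines (a full machine combines many of these into clauses; this toy example only shows the core reward/penalty state mechanics).

```python
import random

class TsetlinAutomaton:
    """A two-action Tsetlin automaton with 2*n states.

    States 1..n select action 0; states n+1..2n select action 1.
    A reward moves the state deeper into the current action's half
    (reinforcing it); a penalty moves it toward the other half.
    """
    def __init__(self, n=6):
        self.n = n
        self.state = random.choice([n, n + 1])  # start near the boundary

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        if self.action() == 0:
            self.state += 1   # drift toward action 1
        else:
            self.state -= 1   # drift toward action 0

# Train against an environment that rewards action 1 90% of the time
random.seed(0)
ta = TsetlinAutomaton()
for _ in range(200):
    if (ta.action() == 1) == (random.random() < 0.9):
        ta.reward()
    else:
        ta.penalize()
print(ta.action())  # the automaton converges to the rewarded action, 1
```

    Despite this simplicity, the state counter gives the automaton memory: occasional contradictory feedback nudges it toward the boundary without immediately flipping its behaviour, which is part of why these systems are robust to noise.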

    Selected follow-ups:

    • Noel Hurley - Literal Labs
    • A New Generation of Artificial Intelligence - Literal Labs
    • Michael Tsetlin - Wikipedia
    • Thinking, Fast and Slow - book by Daniel Kahneman
    • 54x faster, 52x less energy - MLPerf Inference metrics
    • Introducing the Model Context Protocol (MCP) - Anthropic
    • Pioneering Safe, Efficient AI - Conscium
    • Smartphones and Beyond - a personal history of Psion and Symbian
    • The Official History of Arm - Arm
    • Interview with Sir Robin Saxby - IT Archive
    • How Spotify came to be worth billions - BBC

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    36 min
  • Intellectual dark matter? A reputation trap? The case of cold fusion, with Jonah Messinger
    Aug 5 2025

    Could the future see the emergence and adoption of a new field of engineering called nucleonics, in which the energy of nuclear fusion is accessed at relatively low temperatures, producing abundant, clean, safe energy? This kind of idea has been discussed since 1989, when the claims of cold fusion first received media attention. It is often assumed that the field quickly reached a dead-end, and that the only scientists who continue to study it are cranks. However, as we’ll hear in this episode, there may be good reasons to keep an open mind about a number of anomalous but promising results.

    Our guest is Jonah Messinger, who is a Winton Scholar and Ph.D. student at the Cavendish Laboratory of Physics at the University of Cambridge. Jonah is also a Research Affiliate at MIT, a Senior Energy Analyst at the Breakthrough Institute, and previously he was a Visiting Scientist and ThinkSwiss Scholar at ETH Zürich. His work has appeared in research journals, on the John Oliver show, and in publications of Columbia University. He earned his Master’s in Energy and Bachelor’s in Physics from the University of Illinois at Urbana-Champaign, where he was named to its Senior 100 Honorary.

    Selected follow-ups:

    • Jonah Messinger (The Breakthrough Institute)
    • nucleonics.org
    • U.S. Department of Energy Announces $10 Million in Funding to Projects Studying Low-Energy Nuclear Reactions (ARPA-E)
    • How Anomalous Science Breaks Through - by Jonah Messinger
    • Wolfgang Pauli (Wikiquote)
    • Cold fusion: A case study for scientific behavior (Understanding Science)
    • Calculated fusion rates in isotopic hydrogen molecules - by SE Koonin & M Nauenberg
    • Known mechanisms that increase nuclear fusion rates in the solid state - by Florian Metzler et al
    • Introduction to superradiance (Cold Fusion Blog)
    • Peter L. Hagelstein - Professor at MIT
    • Models for nuclear fusion in the solid state - by Peter Hagelstein et al
    • Risk and Scientific Reputation: Lessons from Cold Fusion - by Huw Price
    • Katalin Karikó (Wikipedia)
    • “Abundance” and Its Insights for Policymakers - by Hadley Brown
    • Identifying intellectual dark matter - by Florian Metzler and Jonah Messinger


    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    40 min
  • AI agents, AI safety, and AI boycotts, with Peter Scott
    Jul 29 2025

    This episode of London Futurists Podcast is a special joint production with the AI and You podcast which is hosted by Peter Scott. It features a three-way discussion, between Peter, Calum, and David, on the future of AI, with particular focus on AI agents, AI safety, and AI boycotts.

    Peter Scott is a futurist, speaker, and technology expert helping people master technological disruption. After receiving a Master’s degree in Computer Science from Cambridge University, he went to California to work for NASA’s Jet Propulsion Laboratory. His weekly podcast, “Artificial Intelligence and You” tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution?

    Peter’s second book, also called “Artificial Intelligence and You,” was released in 2022. Peter works with schools to help them pivot their governance frameworks, curricula, and teaching methods to adapt to and leverage AI.

    Selected follow-ups:

    • Artificial Intelligence and You (podcast)
    • Making Sense of AI - Peter's personal website
    • Artificial Intelligence and You (book)
    • AI agent verification - Conscium
    • Preventing Zero-Click AI Threats: Insights from EchoLeak - TrendMicro
    • Future Crimes - book by Marc Goodman
    • How TikTok Serves Up Sex and Drug Videos to Minors - Washington Post
    • COVID-19 vaccine misinformation and hesitancy - Wikipedia
    • Cambridge Analytica - Wikipedia
    • Invisible Rulers - book by Renée DiResta
    • 2025 Northern Ireland riots (Ballymena) - Wikipedia
    • Google DeepMind Slammed by Protesters Over Broken AI Safety Promise


    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    54 min
  • The remarkable potential of hydrogen cars, with Hugo Spowers
    Jul 18 2025

    The guest in this episode is Hugo Spowers. Hugo has led an adventurous life. In the 1970s and 80s he was an active member of the Dangerous Sports Club, which invented bungee jumping, inspired by an initiation ceremony in Vanuatu. Hugo skied down a black run in St. Moritz in formal dress, seated at a grand piano, and he broke his back, neck and hips when he misjudged the length of one of his bungee ropes.

    Hugo is a petrol head, and has done more than his fair share of car racing. But if he’ll excuse the pun, his driving passion was always the environment, and he is one of the world’s most persistent and dedicated pioneers of hydrogen cars.

    He is co-founder and CEO of Riversimple, a 24-year-old pre-revenue startup which has developed five generations of research vehicles. Hydrogen cars are powered by electric motors using electricity generated by fuel cells. Fuel cells are electrolysis in reverse: you put in hydrogen and oxygen, and what you get out is electricity and water.
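    As an aside not covered in the episode, the "electrolysis in reverse" picture can be quantified: the ideal (reversible) voltage of a hydrogen fuel cell follows from the Gibbs free energy of the reaction, using standard textbook values.

```python
# Ideal (reversible) voltage of a hydrogen fuel cell: E = ΔG / (n * F).
# Reaction per mole of H2: H2 + 1/2 O2 -> H2O (liquid), which at standard
# conditions releases ΔG ≈ 237.1 kJ and transfers n = 2 electrons.
FARADAY = 96485.0       # C per mole of electrons (Faraday constant)
DELTA_G = 237_100.0     # J per mole of H2 (standard Gibbs free energy)
N_ELECTRONS = 2         # electrons transferred per H2 molecule

voltage = DELTA_G / (N_ELECTRONS * FARADAY)
print(f"Ideal cell voltage: {voltage:.2f} V")  # 1.23 V, the textbook value
```

    Real cells deliver less than this under load; the figure is the thermodynamic ceiling, which is one reason minimising vehicle weight matters so much for overall efficiency.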

    There is a long-standing debate among energy experts about the role of hydrogen fuel cells in the energy mix, and Hugo is a persuasive advocate. Riversimple’s cars carry modest-sized fuel cells complemented by supercapacitors, with a motor for each of the four wheels. The cars are made of composites, not steel, because minimising weight is critical for fuel efficiency, pollution, and road safety. The cars are leased rather than sold, which enables a circular business model, involving higher initial investment per car and no built-in obsolescence. The initial market-entry cars are designed as local run-arounds for households with two cars, which means the fuelling network can be built out gradually. And Hugo also has strong opinions about company governance.

    Selected follow-ups:

    • Hugo Spowers - Wikipedia
    • Riversimple


    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    43 min
  • AI and the end of conflict, with Simon Horton
    Jun 23 2025

    Can we use AI to improve how we handle conflict? Or even to end the worst conflicts that are happening all around us? That’s the subject of the new book by our guest in this episode, Simon Horton. The book has the bold title “The End of Conflict: How AI will end war and help us get on better”.

    Simon has a rich background, including being a stand-up comedian and a trapeze artist – which are, perhaps, two useful skills for dealing with acute conflict. He has taught negotiation and conflict resolution for 20 years, across 25 different countries, where his clients have included the British Army, the Saudi Space Agency, and Goldman Sachs. His previous books include “Change their minds” and “The leader’s guide to negotiation”.

    Selected follow-ups:

    • Simon Horton
    • The End of Conflict - book website
    • The Better Angels of our Nature - book by Steven Pinker
    • Crime in England and Wales: year ending March 2024 - UK Office for National Statistics
    • How Martin McGuinness and Ian Paisley forged an unlikely friendship - Belfast Telegraph
    • Review of Steven Pinker’s Enlightenment Now by Scott Aaronson
    • A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” by Phil Torres
    • End Times: Elites, Counter-Elites, and the Path of Political Disintegration - book by Peter Turchin
    • Why do chimps kill each other? - Science
    • Using Artificial Intelligence in Peacemaking: The Libya Experience - Colin Irwin, University of Liverpool
    • Retrospective on the Oslo Accord - New York Times
    • Remesh
    • Polis - Democracy Technologies
    • Waves: Tech-Powered Democracy - Demos

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    39 min
  • The AI disconnect: understanding vs motivation, with Nate Soares
    Jun 11 2025

    Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute, or MIRI.

    MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed MIRI.

    Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he’s been a key figure in the AI safety community. In a blogpost at the time he joined MIRI he observed “I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty.”

    MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we’ll explore what drives that view—and whether there is any room for hope.

    Selected follow-ups:

    • Nate Soares - MIRI
    • Yudkowsky and Soares Announce Major New Book: “If Anyone Builds It, Everyone Dies” - MIRI
    • The Bayesian model of probabilistic reasoning
    • During safety testing, o1 broke out of its VM - Reddit
    • Leo Szilard - Physics World
    • David Bowie - Five Years - Old Grey Whistle Test
    • Amara's Law - IEEE
    • Robert Oppenheimer calculation of p(doom)
    • JD Vance commenting on AI-2027
    • SolidGoldMagikarp - LessWrong
    • ASML
    • Chicago Pile-1 - Wikipedia
    • Castle Bravo - Wikipedia


    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    49 min
  • Anticipating an Einstein moment in the understanding of consciousness, with Henry Shevlin
    May 28 2025

    Our guest in this episode is Henry Shevlin. Henry is the Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also co-directs the Kinds of Intelligence program and oversees educational initiatives.

    He researches the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence.

    In his 2024 paper, “Consciousness, Machines, and Moral Status,” Henry examines the recent rapid advancements in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly, as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in and emotional attachment to AIs.

    Note: this episode is co-hosted by David and Will Millership, the CEO of a non-profit called Prism (Partnership for Research Into Sentient Machines). Prism is seeded by Conscium, a startup where both Calum and David are involved, and which, among other things, is researching the possibility and implications of machine consciousness. Will and Calum will be releasing a new Prism podcast focusing entirely on Conscious AI, and the first few episodes will be in collaboration with the London Futurists Podcast.

    Selected follow-ups:

    • PRISM podcast
    • Henry Shevlin - personal site
    • Kinds of Intelligence - Leverhulme Centre for the Future of Intelligence
    • Consciousness, Machines, and Moral Status - 2024 paper by Henry Shevlin
    • Apply rich psychological terms in AI with care - by Henry Shevlin and Marta Halina
    • What insects can tell us about the origins of consciousness - by Andrew Barron and Colin Klein
    • Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - By Patrick Butlin, Robert Long, et al
    • Association for the Study of Consciousness


    Other researchers mentioned:

    • Blake Lemoine
    • Thomas Nagel
    • Ned Block
    • Peter Senge
    • Galen Strawson
    • David Chalmers
    • David Benatar
    • Thomas Metzinger
    • Brian Tomasik
    • Murray Shanahan

    Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

    41 min