
Brain Inspired

Author(s): Paul Middlebrooks

About this audio

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
© 2019 Brain-Inspired Science
Episodes
  • BI 229 Tomaso Poggio: Principles of Intelligence and Learning
    Jan 14 2026

    Support the show to get full episodes, full archive, and join the Discord community.

    The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

    Read more about our partnership.

    Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

    To explore more neuroscience news and perspectives, visit thetransmitter.org.

    Tomaso Poggio is the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, an investigator at the McGovern Institute for Brain Research, a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and director of both the Center for Biological and Computational Learning at MIT and the Center for Brains, Minds, and Machines.

    Tomaso believes we are in between building and understanding useful AI. That is, we are in between engineering and theory. He likens this stage to the period after Volta invented the battery and before Maxwell developed the equations of electromagnetism. Tomaso has worked for decades on the theory and principles behind intelligence and learning in brains and machines. I first learned of him via his work with David Marr, in which they developed "Marr's levels" of analysis that frame explanation in terms of computation/function, algorithms, and implementation. Since then, Tomaso has added "learning" as a crucial fourth level. I will refer you to his autobiography to learn more about the many influential people and projects he has worked with and on, the theorems he and others have proved to discover principles of intelligence, and his broader thoughts and reflections.

    Right now, he is focused on the principles of compositional sparsity and genericity to explain how deep learning networks can (computationally) efficiently learn useful representations to solve tasks.
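
    Here is a minimal sketch of that idea (my own toy example, not Poggio's formal construction): a "compositionally sparse" function of 8 variables in which every constituent function depends on only 2 of them, so a deep network mirroring the composition graph can approximate it without paying for the full input dimension.

      # Toy compositionally sparse target: a binary tree of two-input constituents.
      # The function h and its constants are illustrative assumptions, not from the episode.
      import numpy as np

      def h(a, b):
          # A generic low-dimensional constituent: depends on only 2 inputs.
          return np.tanh(a + 2.0 * b)

      def f_compositional(x):
          # Compose seven two-input constituents over 8 inputs: 8 -> 4 -> 2 -> 1.
          l1 = [h(x[i], x[i + 1]) for i in range(0, 8, 2)]
          l2 = [h(l1[0], l1[1]), h(l1[2], l1[3])]
          return h(l2[0], l2[1])

      print(f_compositional(np.random.rand(8)))

      # The flavor of the argument: each constituent lives in 2 dimensions, so a
      # network whose layers mirror this composition graph pays an approximation
      # cost that scales with 2 per node, not with the full input dimension 8.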

    • Lab website.
    • Tomaso's Autobiography
    • Related papers
      • Position: A Theory of Deep Learning Must Include Compositional Sparsity
      • The Levels of Understanding framework, revised
    • Blog post:
      • Poggio lab blog.
      • The Missing Foundations of Intelligence

    0:00 - Intro
    9:04 - Learning as the fourth level of Marr's levels
    12:34 - Engineering then theory (Volta to Maxwell)
    19:23 - Does AI need theory?
    26:29 - Learning as the door to intelligence
    38:30 - Learning in the brain vs backpropagation
    40:45 - Compositional sparsity
    49:57 - Math vs computer science
    56:50 - Generalizability
    1:04:41 - Sparse compositionality in brains?
    1:07:33 - Theory vs experiment
    1:09:46 - Who needs deep learning theory?
    1:19:51 - Does theory really help? Patreon
    1:28:54 - Outlook

    1 hr and 41 min
  • BI 228 Alex Maier: Laws of Consciousness
    Dec 31 2025

    Support the show to get full episodes, full archive, and join the Discord community.

    The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

    Read more about our partnership.

    Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

    To explore more neuroscience news and perspectives, visit thetransmitter.org.

    Alex is an associate professor of psychology at Vanderbilt University, where he heads the Maier Lab. His work in neuroscience spans vision, visual perception, and cognition, including the neurophysiology of cortical columns and related topics. Today, he is here to discuss where his focus has shifted over the past few years: the neuroscience of consciousness. I should say shifted back, since that was his original love, which you'll hear about.

    I've known Alex since my own time at Vanderbilt, where I was a postdoc and he was a new faculty member, and I remember being impressed with him then. I was at a talk he gave - a job talk or an early talk - where it was immediately obvious how passionate and articulate he is about what he does, and he even showed off some of his telescope photography - good pictures of the moon, as I recall. Anyway, we always had fun interactions, even if sometimes it was a quick hello as he ran up stairs and down hallways to get wherever he was going, always in a hurry.

    Today we discuss why Alex sees integrated information theory (IIT) as the most viable current prospect for explaining consciousness. That is mainly because IIT has developed a formalized mathematical account that hopes to do for consciousness what other mathematics has done for physics, that is, give us what we know as laws of nature. So basically our discussion revolves around everything related to that: philosophy of science, distinguishing mathematics from "the mathematical", some of the tools he is finding valuable, like category theory, and some of his work measuring the level of consciousness that IIT ascribes to a whole soccer team, not just to the individual players that comprise it.
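
    For a concrete, if crude, handle on the "formalized" part, here is a minimal sketch of the whole-minus-sum flavor of integration for a toy two-unit system. This is my own illustration, not the full IIT formalism Alex discusses, which uses a much richer cause-effect analysis.

      # Crude "whole-minus-sum" integration for a toy two-unit system in which
      # each unit copies the other's previous state. This is a stand-in for the
      # intuition behind integrated information, not IIT's phi proper.
      import numpy as np
      from itertools import product

      def mutual_information(joint):
          # I(X;Y) in bits for a joint probability table P[x, y].
          px = joint.sum(axis=1, keepdims=True)
          py = joint.sum(axis=0, keepdims=True)
          nz = joint > 0
          return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

      states = list(product([0, 1], repeat=2))    # joint states (A, B)

      def step(a, b):
          # Deterministic coupled update: A copies B and B copies A.
          return (b, a)

      # Joint distribution over (state_t, state_t+1), uniform over current states.
      whole = np.zeros((4, 4))
      for i, s in enumerate(states):
          whole[i, states.index(step(*s))] = 0.25

      # Each unit's next state versus its own previous state, partner ignored.
      partA, partB = np.zeros((2, 2)), np.zeros((2, 2))
      for (a, b) in states:
          a2, b2 = step(a, b)
          partA[a, a2] += 0.25
          partB[b, b2] += 0.25

      integration = mutual_information(whole) - (
          mutual_information(partA) + mutual_information(partB))
      print(integration)  # 2.0 bits: the whole predicts its future; the parts alone cannot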

    • Maier Lab
    • Astonishing Hypothesis (Alex's YouTube channel)
    • Twitter:
    • Sensation and Perception textbook (in-the-making)
    • Related papers
      • Linking the Structure of Neuronal Mechanisms to the Structure of Qualia
      • Information integration and the latent consciousness of human groups
      • Neural mechanisms of predictive processing: a collaborative community experiment through the OpenScope program
    • Various things Alex mentioned:
      • “An Antiphilosophy of Mathematics,” a Peter J. Freyd YouTube video about "the mathematical".
      • David Kaiser's playlist on modern physics.
    • Here's a link to t...
    1 hr and 58 min
  • BI 227 Decoding Memories: Aspirational Neuroscience 2025
    Dec 17 2025

    Support the show to get full episodes, full archive, and join the Discord community.

    The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.

    Read more about our partnership.

    Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

    To explore more neuroscience news and perspectives, visit thetransmitter.org.

    Can you look at all the synaptic connections of a brain, and tell me one nontrivial memory from the organism that has that brain? If so, you shall win the $100,000 prize from the Aspirational Neuroscience group.

    I was recently invited for the second time to chair a panel of experts to discuss that question and the issues surrounding it - how to decode a nontrivial memory from a static map of synaptic connectivity.

    Before I play that recording, let me set the stage a bit more.

    Aspirational Neuroscience is a community of neuroscientists run by Kenneth Hayworth, with the goal, from their website, to "balance aspirational thinking with respect to the long-term implications of a successful neuroscience with practical realism about our current state of ignorance and knowledge." One of those aspirations is decoding things - memories, learned behaviors, and so on - from static connectomes. They hold satellite events at the SfN conference and invite experts in connectomics from academia and industry to share their thoughts and progress that might advance that goal.

    In this panel discussion, we touch on multiple relevant topics. One question is what experimental design, or designs, would answer whether we are decoding memory - what the benchmark is in various model organisms and for various theoretical frameworks. We discuss some of the obstacles in the way, both technological and conceptual: the fact that proofreading connectome connections - manually verifying and editing them - is a giant bottleneck, and the very definition of memory - what counts as a memory, let alone a "nontrivial" memory, and so on. The panelists take lots of questions from the audience as well.
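
    To make the target of the prize concrete, here is a minimal sketch of the simplest case in which a "memory" really is recoverable from static connectivity alone; it is my own toy illustration, not anything proposed on the panel. A Hopfield-style network stores one binary pattern in its weight matrix, and the leading eigenvector of that matrix recovers the pattern, up to sign, with no activity recordings at all.

      # Store one pattern in a Hopfield-style weight matrix, then "decode" it from
      # the static connectivity alone. Sizes and the random seed are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      pattern = rng.choice([-1.0, 1.0], size=n)     # the stored "memory"

      # Hebbian storage: the "connectome" is the scaled outer product of the pattern.
      W = np.outer(pattern, pattern) / n
      np.fill_diagonal(W, 0.0)

      # Decoding: the leading eigenvector of the static weight matrix is the pattern.
      eigvals, eigvecs = np.linalg.eigh(W)
      decoded = np.sign(eigvecs[:, np.argmax(eigvals)])

      overlap = abs(decoded @ pattern) / n          # 1.0 means perfect recovery
      print(f"overlap with stored pattern: {overlap:.3f}")

    Real connectomes are of course nothing like this clean, single-pattern toy, which is exactly why the panel's questions about benchmarks and definitions are hard.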

    I apologize that the audio is not crystal clear in this recording. I did my best to clean it up, and I take full blame for not setting up my audio recorder to capture the best sound. So, if you are a listener, I'd encourage you to check out the video version, which also has subtitles throughout for when the language isn't clear.

    Anyway, this is a fun and smart group of people, and I look forward to another one next year, I hope.

    The last time I did this was episode 180, BI 180, which I link to in the show notes. Before that I had on Ken Hayworth, who, as I mentioned, runs Aspirational Neuroscience, and Randal Koene, who is on the panel this time. They were on to talk about the future possibility of uploading minds to computers based on connectomes. That was episode 103.

    • Aspirational Neuroscience
    • Panel
      • Michał Januszewski
        • @michalwj.bsky.social
        • Research scientist (connectomics) with Google Research, automated neural trac...
    1 hr and 15 min
No reviews yet