
Pete Florence: Generalist, Scaling Laws, Train One Improve All | Turn the Lens Ep46

About this audio

What if training a robot to do ONE thing automatically made it better at EVERYTHING? Pete Florence, Co-founder & CEO of Generalist and former Google DeepMind Senior Research Scientist, joins Jeff Frick at Humanoids Summit 2025 to reveal a breakthrough that fundamentally changes how we think about robot intelligence.

The big discovery? Robotics has finally found its scaling laws, just like large language models. At 7 billion parameters, models cross an "intelligence threshold" where more data predictably equals more intelligence. No more hitting walls. No more plateaus. Just continuous improvement. But the real magic is cross-task generalization: when you train on one skill, the robot gets better at all skills. It's not just learning faster; it's learning universally. (A rough numerical sketch of what a data scaling law means follows the event details below.)

Pete explains why Generalist is betting on generalist robots (yes, the double meaning is intentional) when specialists have dominated for decades, how smaller models experience "ossification" and literally stop learning, and why reaching a "data-rich regime" of 270,000+ hours of real-world interaction data changed everything. He also introduces fascinating concepts like "physical hallucinations" (when robots confidently do the wrong thing) and explains why teaching robots epistemic humility, the ability to say "I don't know," might be more critical than any task-specific training.

From his award-winning work on Dense Object Nets at MIT to pioneering RT-2 and PaLM-E at Google DeepMind, Pete has been at the cutting edge of embodied AI. Now, with GEN-0, he's proving that foundation models can work in the physical world, with all the scaling properties that made LLMs so powerful.

Key Topics:
- The 7B parameter intelligence threshold breakthrough
- Why training one task improves all tasks (cross-skill learning)
- GEN-0: the first embodied foundation model with proven scaling laws
- Generalist vs. specialist: why Pete is betting against conventional wisdom
- Ossification: when models give up and stop learning
- Physical hallucinations in robotics
- 270,000+ hours of real-world data and why it matters
- The data-rich regime that enables scaling
- Teaching robots to know their limits
- Comparing robotics timelines to autonomous vehicles

Guest Bio:
Pete Florence is Co-founder & CEO of Generalist, an embodied AI company building foundation models for physical robots. Previously a Senior Research Scientist at Google DeepMind, Pete led groundbreaking research on RT-2 (vision-language-action models) and PaLM-E (embodied multimodal language models). He earned his PhD in Computer Science from MIT under Russ Tedrake, winning multiple Best Paper awards, including CoRL 2018 Best Paper for Dense Object Nets and the IEEE RA-L Best Paper Award 2020. His work has been cited over 20,000 times and featured in the New York Times, WIRED, and CNN.

About the Event:
Recorded at Humanoids Summit 2025 (December 11-12) at the Computer History Museum in Mountain View, California. The Summit brought together 2,000+ attendees from 400+ companies and 40 countries, featuring leaders from Google DeepMind, Boston Dynamics, Physical Intelligence, and dozens of humanoid robotics startups.
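For a concrete feel for what "finding scaling laws" buys you, here is a minimal Python sketch, assuming a simple power-law relationship (loss ≈ a * D^(-alpha)) fitted over invented data points. The hours, loss values, and fitted constants are illustrative assumptions, not figures from GEN-0 or this interview.

```python
import numpy as np

# A minimal sketch of the power-law data scaling idea discussed in the episode:
# eval loss falling as a power of training-data volume, loss ~ a * D**(-alpha).
# All numbers here are invented for illustration; they are NOT GEN-0 results.
hours = np.array([10_000, 30_000, 90_000, 270_000], dtype=float)  # training data, in hours
loss = np.array([0.82, 0.61, 0.45, 0.33])                         # hypothetical eval loss

# A power law is a straight line in log-log space, so fit it with linear least squares.
slope, intercept = np.polyfit(np.log(hours), np.log(loss), 1)
alpha, a = -slope, float(np.exp(intercept))
print(f"fitted: loss ~ {a:.2f} * hours^(-{alpha:.3f})")

# The practical payoff of a scaling law is predictable extrapolation:
# estimated loss if the dataset doubled from 270k to 540k hours.
print(f"predicted loss at 540k hours: {a * 540_000 ** (-alpha):.3f}")
```

The point is the shape, not the numbers: once loss tracks a straight line in log-log space, each doubling of data buys a predictable improvement, which is the "no plateaus" behavior described above.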
Links:
- Pete Florence: https://www.peteflorence.com
- Generalist AI: https://generalistai.com
- GEN-0 Blog: https://generalistai.com/blog/nov-04-2025-GEN-0
- RT-2 Research: https://robotics-transformer2.github.io
- Humanoids Summit: https://humanoidssummit.com

Host: Jeff Frick, Turn the Lens / Work 20XX
Episode: 46
Series: Humanoids Summit 2025 Interviews

Listen to our full series from Humanoids Summit, including interviews with Carolina Parada (Google DeepMind), Jeff Burnstein (A3), and other robotics leaders.