Conditional Intelligence: Inside the Mixture of Experts architecture
About this audio
What if not every part of an AI model needed to think at once? In this episode, we unpack Mixture of Experts (MoE), the architecture behind efficient large language models like Mixtral. From conditional computation and sparse activation to routing, load balancing, and the fight against router collapse, we explore how MoE breaks the old link between model size and per-token compute. As scaling hits physical and economic limits, could selective intelligence be the next leap toward general intelligence?
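The mechanisms mentioned above (a router scoring experts, sparse top-k activation, and a load-balancing term to avoid router collapse) can be summarized in a short sketch. The code below is a minimal, illustrative PyTorch-style implementation under assumed names and dimensions; it is not Mixtral's actual code, and the auxiliary loss follows the common Switch-Transformer-style formulation as an assumption.

```python
# Minimal sketch of sparse top-k Mixture-of-Experts routing (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router ("gate"): scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):
        # x: (batch, seq, d_model) -> flatten to a list of tokens
        tokens = x.reshape(-1, x.size(-1))
        logits = self.router(tokens)                       # (n_tokens, num_experts)
        probs = F.softmax(logits, dim=-1)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)
        # Renormalize the selected experts' weights so they sum to 1 per token.
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)

        out = torch.zeros_like(tokens)
        # Conditional computation: each expert only processes the tokens routed to it.
        for e, expert in enumerate(self.experts):
            mask = (topk_idx == e)                         # (n_tokens, top_k)
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            weight = topk_probs[token_ids, slot].unsqueeze(-1)
            out[token_ids] += weight * expert(tokens[token_ids])

        # Auxiliary load-balancing loss: penalizes the router for sending most
        # tokens to a few experts, the failure mode often called router collapse.
        expert_frac = F.one_hot(topk_idx[:, 0], probs.size(-1)).float().mean(0)
        router_frac = probs.mean(0)
        aux_loss = probs.size(-1) * torch.sum(expert_frac * router_frac)
        return out.reshape_as(x), aux_loss
```

The key property is that only `top_k` of the `num_experts` feed-forward blocks run for any given token, so parameter count grows with the number of experts while per-token compute stays roughly constant.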
Sources
- What is mixture of experts? (IBM)
- Applying Mixture of Experts in LLM Architectures (Nvidia)
- A 2025 Guide to Mixture-of-Experts for Lean LLMs
- A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications