The Rise of Small AI Models and Why They Matter More Than You Think

About this episode

Dr. Shelby Heinecke, Senior AI Researcher at Salesforce, joins Ravi Belani to explain why the future of AI will not belong only to giant models with hundreds of billions of parameters.

Shelby makes the case for small language models: compact systems with only a few billion parameters that can run faster, cost less, protect privacy, and still perform at a very high level when they are trained well on focused tasks.

In this episode, they dig into:

  • Why small models are a different tool, not a weaker version of large models

  • How fine-tuned small models can beat much larger models on specific agentic tasks

  • Where small models shine most: privacy, speed, cost to serve, and on-device use cases

  • How Salesforce built “Tiny Giant,” a 1B-parameter model that outperforms much larger models on selected tasks

  • What really matters in training: data quality, workflows, and trajectory-style datasets

  • How synthetic data, noise, and guardrails help make models more robust in the real world

  • Why founders should look closely at on-device AI and domain-specific small models

Shelby also shares practical advice for founders who want to build in the small model space, and closes with a simple takeaway: do not underestimate small models.

If you care about AI agents, privacy, edge computing, or future startup opportunities, this conversation will give you a lot to think about.
