
#316 Robbie Goldfarb: Why the Future of AI Depends on Better Judgment


About this episode

AI is getting smarter, but what it needs next is better judgment.

In this episode of the Eye on AI Podcast, we speak with Robbie Goldfarb, former Meta product leader and co-founder of Forum AI, about why treating AI as a truth engine is one of the most dangerous assumptions in modern artificial intelligence.

Robbie brings first-hand experience from Meta's trust and safety and AI teams, where he worked on misinformation, elections, youth safety, and AI governance. He explains why large language models shouldn't be treated as arbiters of truth, why subjective domains like politics, health, and mental health pose serious risks, and why more data alone does not solve the alignment problem.

The conversation breaks down how AI systems are evaluated today, how engagement incentives create sycophantic and biased models, and why trust is becoming the biggest barrier to real AI adoption. Robbie also shares how Forum AI is building expert-driven AI evaluation systems that scale human judgment instead of crowd labels, and why transparency about who trains AI matters more than ever.

This episode explores AI safety, AI trust, model evaluation, expert judgment, mental health risks, misinformation, and the future of responsible AI deployment.

If you are building, deploying, regulating, or relying on AI systems, this conversation will fundamentally change how you think about intelligence, truth, and responsibility.


Want to know more about Forum AI?
Website: https://www.byforum.com/
X: https://x.com/TheForumAI
LinkedIn: https://www.linkedin.com/company/byforum/

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI


(00:00) Why Treating AI as a "Truth Engine" Is Dangerous
(02:47) What Forum AI Does and Why Expert Judgment Matters
(06:32) How Expert Thinking Is Extracted and Structured
(09:40) Bias, Training Data, and the Myth of Objectivity in AI
(14:04) Evaluating AI Through Consequences, Not Just Accuracy
(18:48) Who Decides "Ground Truth" in Subjective Domains
(24:27) How AI Models Are Actually Evaluated in Practice
(28:24) Why Quality of Experts Beats Scale in AI Evaluation
(36:33) Trust as the Biggest Bottleneck to AI Adoption
(45:01) What "Good Judgment" Means for AI Systems
(49:58) The Risks of Engagement-Driven AI Incentives
(54:51) Transparency, Accountability, and the Future of AI
