Eye On A.I.

Author(s): Craig S. Smith

About this audio

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.
Episodes
  • #316 Robbie Goldfarb: Why the Future of AI Depends on Better Judgment
    Jan 23 2026

    AI is getting smarter, but now it needs better judgment.

    In this episode of the Eye on AI Podcast, we speak with Robbie Goldfarb, former Meta product leader and co-founder of Forum AI, about why treating AI as a truth engine is one of the most dangerous assumptions in modern artificial intelligence.

    Robbie brings first-hand experience from Meta's trust and safety and AI teams, where he worked on misinformation, elections, youth safety, and AI governance. He explains why large language models shouldn't be treated as arbiters of truth, why subjective domains like politics, health, and mental health pose serious risks, and why more data does not solve the alignment problem.

    The conversation breaks down how AI systems are evaluated today, how engagement incentives create sycophantic and biased models, and why trust is becoming the biggest barrier to real AI adoption. Robbie also shares how Forum AI is building expert-driven AI evaluation systems that scale human judgment instead of crowd labels, and why transparency about who trains AI matters more than ever.

    This episode explores AI safety, AI trust, model evaluation, expert judgment, mental health risks, misinformation, and the future of responsible AI deployment.

    If you are building, deploying, regulating, or relying on AI systems, this conversation will fundamentally change how you think about intelligence, truth, and responsibility.


    Want to know more about Forum AI?
    Website: https://www.byforum.com/
    X: https://x.com/TheForumAI
    LinkedIn: https://www.linkedin.com/company/byforum/

    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) Why Treating AI as a "Truth Engine" Is Dangerous
    (02:47) What Forum AI Does and Why Expert Judgment Matters
    (06:32) How Expert Thinking Is Extracted and Structured
    (09:40) Bias, Training Data, and the Myth of Objectivity in AI
    (14:04) Evaluating AI Through Consequences, Not Just Accuracy
    (18:48) Who Decides "Ground Truth" in Subjective Domains
    (24:27) How AI Models Are Actually Evaluated in Practice
    (28:24) Why Quality of Experts Beats Scale in AI Evaluation
    (36:33) Trust as the Biggest Bottleneck to AI Adoption
    (45:01) What "Good Judgment" Means for AI Systems
    (49:58) The Risks of Engagement-Driven AI Incentives
    (54:51) Transparency, Accountability, and the Future of AI

    1 hr and 4 min
  • #315 Jarrod Johnson: How Agentic AI Is Impacting Modern Customer Service
    Jan 21 2026

    In this episode of Eye on AI, Craig Smith sits down with Jarrod Johnson, Chief Customer Officer at TaskUs, to unpack how agentic AI is taking customer service beyond conversations to real action.

    They explore what agentic AI actually is, why chatbots were only the first step, and how enterprises are deploying AI systems that resolve issues, execute tasks, and work alongside human teams at scale.

    The conversation covers real-world use cases, the economics of AI-driven support, why many enterprise AI pilots fail, and how human roles evolve when AI takes on routine work.

    A grounded look at where customer experience, enterprise AI, and the future of support are heading.



    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Jarrod Johnson and the Evolution of TaskUs
    (03:58) Why AI Became Core to Customer Service
    (06:07) Humans, AI, and the New Support Model
    (07:16) What Agentic AI Actually Is
    (11:38) TaskUs as an AI Systems Integrator
    (14:59) How Agentic AI Resolves Customer Issues
    (19:52) Workforce Impact and the Human Role
    (23:26) Why Most Enterprise AI Pilots Fail
    (30:32) Real Client Case Study: Healthcare Impact
    (36:34) Why Customer Service Still Feels Broken
    (38:49) The End of IVR Menus and Legacy Systems
    (42:25) AI Safety, Compliance, and Governance
    (49:38) Training Humans for AI and RLHF Work
    (54:34) The Future of Agentic AI in Enterprise

    58 min
  • #313 Nick Pandher: How Inference-First Infrastructure Is Powering the Next Wave of AI
    Jan 17 2026

    Inference is now the biggest challenge in enterprise AI.

    In this episode of Eye on AI, Craig Smith speaks with Nick Pandher, VP of Product at Cirrascale, about why AI is shifting from model training to inference at scale. As AI moves into production, enterprises are prioritizing performance, latency, reliability, and cost efficiency over raw compute.

    The conversation covers the rise of inference-first infrastructure, the limits of hyperscalers, the emergence of neoclouds, and how agentic AI is driving always-on inference workloads. Nick also explains how inference-optimized hardware and serverless AI platforms are shaping the future of enterprise AI deployment.

    If you are deploying AI in production, this episode explains why inference is the real frontier.


    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Preview
    (00:50) Introduction to Cirrascale and AI inference
    (03:04) What makes Cirrascale a neocloud
    (04:42) Why AI shifted from training to inference
    (06:58) Private inference and enterprise security needs
    (08:13) Hyperscalers vs neoclouds for AI workloads
    (10:22) Performance metrics that matter in inference
    (13:29) Hardware choices and inference accelerators
    (20:04) Real enterprise AI use cases and automation
    (23:59) Hybrid AI, regulated industries, and compliance
    (26:43) Proof of value before AI pilots
    (31:18) White-glove AI infrastructure vs self-serve cloud
    (33:32) Qualcomm partnership and inference-first AI
    (41:52) Edge-to-cloud inference and agentic workflows
    (49:20) Why AI pilots fail and how enterprises succeed



    56 min
No reviews yet