
The Macro AI Podcast

Author(s): The AI Guides - Gary Sloper & Scott Bryan

About this audio

Welcome to "The Macro AI Podcast," where we are your guides through the transformative world of artificial intelligence.

In each episode, we'll explore how AI is reshaping the business landscape, from startups to Fortune 500 companies. Whether you're a seasoned executive, an entrepreneur, or just curious about how AI can supercharge your business, you'll discover actionable insights, hear from industry pioneers and service providers, and learn practical strategies to stay ahead of the curve.

© 2026 The Macro AI Podcast
Politics & Economics
Episodes
  • Model Context Protocol (MCP) Explained: The Economics of Scaling Enterprise AI Without Exploding Costs
    Jan 30 2026

    In this episode of The Macro AI Podcast, Gary Sloper and Scott Bryan revisit the Model Context Protocol (MCP)—a topic that continues to generate strong listener interest and real-world enterprise questions.

    As organizations move beyond AI pilots and demos, many are discovering that AI isn’t failing because of the models—it’s failing because of integration, governance, and cost. This episode explores why enterprise AI so often hits scaling walls and how MCP is emerging as a critical piece of infrastructure to remove them.

    The conversation breaks down MCP at a practical, executive level—explaining how it standardizes the way AI systems discover, understand, and safely interact with enterprise tools and data. Gary and Scott walk through why traditional API-based integrations struggle in AI-driven environments, how MCP changes the N-by-M integration problem, and why this matters for CIOs, CFOs, and CEOs planning long-term AI strategies.
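    The N-by-M problem mentioned above can be sketched with a back-of-envelope calculation (the numbers below are our own illustration, not figures from the episode): with point-to-point integrations, every AI application needs its own connector to every tool, while a shared protocol like MCP lets each side implement a single adapter.

```python
# Illustrative sketch of the N-by-M integration problem
# (hypothetical counts, not figures from the episode).

def point_to_point(num_apps: int, num_tools: int) -> int:
    """Connectors needed when each AI app integrates each tool directly."""
    return num_apps * num_tools

def shared_protocol(num_apps: int, num_tools: int) -> int:
    """Adapters needed when apps and tools each speak one common protocol."""
    return num_apps + num_tools

apps, tools = 10, 50
print(point_to_point(apps, tools))   # 500 bespoke connectors to build and maintain
print(shared_protocol(apps, tools))  # 60 protocol adapters
```

    The gap widens as either side grows, which is why the integration burden, not model quality, so often becomes the scaling wall.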

    A major focus of the episode is AI economics, including a deep dive into token costs—one of the most misunderstood and underestimated drivers of enterprise AI spend. Using clear, real-world examples, the discussion shows how MCP can dramatically reduce token usage, improve performance, and turn unpredictable inference costs into a controllable operating expense.
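    To make the token-cost point concrete, here is a hypothetical back-of-envelope calculation (all prices, request volumes, and token counts below are our assumptions, not numbers from the episode): trimming the context sent with each request compounds into large monthly savings at enterprise scale.

```python
# Hypothetical token-cost math: same workload, before and after
# reducing redundant context per call. All figures are illustrative.

def monthly_cost(requests_per_day: int, tokens_per_request: int,
                 price_per_million_tokens: float, days: int = 30) -> float:
    """Monthly spend for a workload billed per million tokens."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1_000_000 * price_per_million_tokens

before = monthly_cost(100_000, 8_000, 3.00)  # bloated, catch-all prompts
after = monthly_cost(100_000, 2_000, 3.00)   # leaner, targeted context
print(f"${before:,.0f} vs ${after:,.0f} per month")
```

    At these assumed rates, a 4x reduction in context per request cuts the monthly bill from $72,000 to $18,000, which is the sense in which leaner context assembly turns inference from an unpredictable cost into a controllable operating expense.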

    The episode also covers:

    • Why MCP fundamentally changes the economics of scaling enterprise AI
    • How token efficiency directly impacts ROI, latency, and adoption
    • The infrastructure and total cost of ownership tradeoffs leaders need to understand
    • Governance risks, including the rise of “shadow MCP,” and why centralized oversight matters
    • How MCP complements—not replaces—RAG in modern enterprise AI architectures

    Bottom line: MCP is not a feature or a framework—it’s becoming core infrastructure for serious enterprise AI. If you’re responsible for AI strategy, governance, or budgets, this episode explains why MCP belongs on your radar now.

    Send a Text to the AI Guides on the show!


    About your AI Guides

    Gary Sloper

    https://www.linkedin.com/in/gsloper/


    Scott Bryan

    https://www.linkedin.com/in/scottjbryan/

    Macro AI Website:

    https://www.macroaipodcast.com/

    Macro AI LinkedIn Page:

    https://www.linkedin.com/company/macro-ai-podcast/


    Gary's Free AI Readiness Assessment:

    https://macronetservices.com/events/the-comprehensive-guide-to-ai-readiness


    Scott's Content & Blog

    https://www.macronomics.ai/blog





    18 min
  • AWS Trainium vs Nvidia: How AWS Is Redesigning the Economics of AI for Business Leaders
    Jan 26 2026

    In this episode of The Macro AI Podcast, Gary Sloper and Scott Bryan break down why Amazon’s Trainium chip is not just a hardware announcement, but a signal that the economics of AI are fundamentally changing.

    They explore how Amazon Web Services is using custom silicon like Trainium to shift enterprises from renting AI to building and owning it—and why that strategy only works when customers go deeper into the AWS ecosystem. This isn’t about winning benchmark battles; it’s about creating economic gravity around where AI gets built.

    The conversation also tackles the question every executive is asking: How does this compare to Nvidia? While Nvidia continues to dominate AI innovation and experimentation, AWS is focused on industrial-scale economics—making large, repeatable training workloads cheaper, more predictable, and easier to operationalize inside its cloud.

    Gary and Scott then connect the dots to real enterprise strategy, including:

    • Why AI infrastructure decisions are becoming long-term financial commitments
    • How custom chips influence cloud pricing power and cost curves
    • The rise of multi-cloud strategies that separate AI innovation from AI economics, including the role of Oracle Cloud Infrastructure as a cost-efficient execution layer
    • Why FinOps is becoming essential as AI training, retraining, and inference costs compound over time

    The key takeaway for business leaders: AI advantage won’t come from simply adopting the latest models. It will come from who controls the economics of building, scaling, and evolving AI over the next decade.

    13 min
  • ChatGPT Health: Why It Is a Turning Point for Healthcare—and Every Regulated Industry
    Jan 21 2026

    In this episode of The Macro AI Podcast, Gary Sloper and Scott Bryan unpack one of the most consequential—but quietly introduced—AI launches to date: ChatGPT Health.

    Rather than focusing on hype, the conversation starts with fundamentals. What does ChatGPT Health actually do? What systems can it connect to? How does it stay current with your health information? And how is it architected to operate safely inside one of the most regulated domains in the world?

    From there, Gary and Scott explore how OpenAI has deliberately framed ChatGPT Health as a grounded, trust-first intelligence layer, designed to interpret and explain verified health data—rather than replace clinicians or generate unbounded medical advice. They discuss the technical architecture behind the platform, including interoperability, real-time contextual data assembly, and the “health sandbox” model that keeps personal data isolated and protected.

    The conversation then zooms out to examine the macro implications: the end of “Dr. Google,” the shifting role of patients and clinicians, the redistribution of cognitive labor in healthcare, and the emerging governance questions around data sovereignty and AI-mediated decision-making.

    Finally, the episode connects these lessons to a broader business audience—explaining why ChatGPT Health isn’t just a healthcare story, but a blueprint for how AI will move into the interpretation layer of complex, high-stakes industries everywhere.

    15 min
No reviews yet