The Memriq AI Inference Brief – Leadership Edition

Author: Keith Bourne

About this audio

The Memriq AI Inference Brief – Leadership Edition is a weekly panel-style talk show that helps tech leaders, founders, and business decision-makers make sense of AI. Each episode breaks down real-world use cases for generative AI, RAG, and intelligent agents—without the jargon. Hosted by a rotating panel of AI practitioners, we cover strategy, roadmapping, risk, and ROI so you can lead AI initiatives confidently from the boardroom to the product roadmap. And when we say "AI" practitioners, we mean they are AI... AI practitioners.

Copyright 2025 Memriq AI
Episodes
  • Agent Engineering Explained: Reality, Risks & Rewards for Leaders
    Dec 13 2025

    Agent engineering is rapidly emerging as a transformative AI discipline, promising autonomous systems that do more than just talk—they act. But with high failure rates and market hype, how should leaders navigate this new terrain? In this episode, we unpack what agent engineering really means, its business impact, and how to separate strategic opportunity from hype.

    In this episode, we explore:

    - Why agent engineering is booming despite current 70% failure rates

    - What agent engineering entails and how it differs from traditional AI roles

    - Key tools and frameworks enabling reliable AI agents

    - Real-world business outcomes and risks to watch for

    - How to align hiring and investment decisions with your company’s AI strategy

    Key tools & technologies mentioned:

    - LangChain

    - LangGraph

    - LangSmith

    - DeepEval

    - AutoGen
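
    For leaders who want a concrete feel for these frameworks, here is a minimal, illustrative sketch of an agent loop built with LangGraph. It is not code from the episode: the plan and act nodes are stand-ins for real LLM and tool calls, and exact imports may shift between LangGraph versions.

```python
# Minimal illustrative agent graph, a sketch rather than production code.
# Assumes `pip install langgraph`; details may differ between versions.
from typing import TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    task: str
    plan: str
    result: str


def plan_step(state: AgentState) -> dict:
    # A real agent would call an LLM (e.g. via LangChain) to draft this plan.
    return {"plan": f"1) research '{state['task']}' 2) summarize findings"}


def act_step(state: AgentState) -> dict:
    # A real agent would invoke tools (search, APIs, databases) per the plan.
    return {"result": f"executed: {state['plan']}"}


graph = StateGraph(AgentState)
graph.add_node("plan", plan_step)
graph.add_node("act", act_step)
graph.set_entry_point("plan")
graph.add_edge("plan", "act")
graph.add_edge("act", END)

app = graph.compile()
print(app.invoke({"task": "competitor pricing scan", "plan": "", "result": ""}))
```

    Observability and evaluation tools such as LangSmith and DeepEval typically wrap a loop like this with tracing and automated scoring, which is how teams chip away at the failure rates discussed in the episode.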

    Timestamps:

    0:00 Intro & Topic Overview

    2:30 The Agent Engineering Market Paradox

    5:00 What is Agent Engineering?

    7:30 Why Agent Engineering is Exploding Now

    10:00 Agent Engineering vs. ML & Software Engineering

    13:00 How Agent Engineering Works Under the Hood

    16:00 Business Impact & Case Studies

    18:30 Risks and Reality Checks

    20:00 Final Takeaways & Closing

    Resources:

    - "Unlocking Data with Generative AI and RAG" by Keith Bourne - Search for 'Keith Bourne' on Amazon and grab the 2nd edition

    - Visit Memriq.ai for more AI leadership insights and resources

    20 min
  • The NLU Layer Impact: Transitioning from Web Apps to AI Chatbots Deep Dive
    Dec 13 2025

    Discover how the Natural Language Understanding (NLU) layer transforms traditional web apps into intelligent AI chatbots that understand open-ended user input. This episode unpacks the architectural shifts, business implications, and governance challenges leaders face when adopting AI-driven conversational platforms.

    In this episode:

    - Understand the strategic role of the NLU layer as the new ‘brain’ interpreting user intent and orchestrating backend systems dynamically.

    - Explore the shift from deterministic workflows to probabilistic AI chatbots and how hybrid architectures balance flexibility with control.

    - Learn about key AI tools like Large Language Models, Microsoft Azure AI Foundry, OpenAI function-calling, and AI agent frameworks.

    - Discuss governance strategies including confidence thresholds, policy wrappers, and human-in-the-loop controls to maintain trust and compliance (see the sketch after this list).

    - Hear real-world use cases across industries showcasing improved user engagement and ROI from AI chatbot adoption.

    - Review practical leadership advice for monitoring, iterating, and future-proofing AI chatbot architectures.
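
    As a concrete illustration of the governance point above, here is a small hedged sketch of a confidence-threshold gate with a human-in-the-loop fallback. The threshold value and routing functions are assumptions chosen for illustration, not recommendations from the episode.

```python
# Illustrative governance gate: only act automatically on high-confidence intents.
# The threshold and handlers are assumptions, not a real product API.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # tune per use case and risk tolerance


@dataclass
class IntentPrediction:
    intent: str        # e.g. "cancel_subscription", as classified by the NLU layer
    confidence: float  # NLU confidence score between 0.0 and 1.0


def route(prediction: IntentPrediction) -> str:
    """Execute automatically only when the NLU layer is confident enough."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-execute: {prediction.intent}"
    # Low confidence: escalate to a human reviewer instead of acting.
    return f"human-review queue: {prediction.intent} ({prediction.confidence:.2f})"


print(route(IntentPrediction("check_order_status", 0.93)))  # auto-execute
print(route(IntentPrediction("issue_refund", 0.55)))        # human review
```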

    Key tools and technologies mentioned:

    - Large Language Models (LLMs)

    - Microsoft Azure AI Foundry

    - OpenAI Function-Calling

    - AI Agent Frameworks like deepset

    - Semantic Cache and Episodic Memory

    - Governance tools: Confidence thresholds, human-in-the-loop
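
    To show what the function-calling pattern looks like in practice, here is a hedged sketch using the OpenAI Python SDK: the model turns an open-ended message into a structured call that deterministic backend code can validate and execute. The model name and the get_order_status function are illustrative assumptions, and the SDK surface may change between releases.

```python
# Illustrative NLU-layer sketch: the model proposes a structured backend call.
# Assumes `pip install openai` and OPENAI_API_KEY; model and function are assumptions.
import json

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical backend function
        "description": "Look up the shipping status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; use whatever model your platform provides
    messages=[{"role": "user", "content": "Where is my order 8721?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model may also answer in plain text
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    # The deterministic side of the hybrid architecture takes over here:
    # validate the arguments, apply policy wrappers, then call the real backend.
    print(call.function.name, args)
```

    The hybrid architecture discussed in the episode comes from keeping that boundary explicit: the model proposes the call, and deterministic code decides whether and how to run it.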

    Timestamps:

    00:00 - Introduction and episode overview

    02:30 - Why the NLU layer matters for leadership

    05:15 - The big architectural shift: deterministic to AI-driven

    08:00 - Comparing traditional web apps vs AI chatbots

    11:00 - Under the hood: how NLU, function-calling, and orchestration work

    14:00 - Business impact and ROI of AI chatbots

    16:30 - Risks, governance, and human oversight

    18:30 - Real-world applications and industry examples

    20:00 - Final takeaways and leadership advice

    Resources:

    - "Unlocking Data with Generative AI and RAG" by Keith Bourne - Search for 'Keith Bourne' on Amazon and grab the 2nd edition

    - Visit Memriq at https://Memriq.ai for more AI insights and resources

    10 min
  • Advanced RAG & Memory Integration (Chapter 19)
    Dec 12 2025

    Unlock how AI is evolving beyond static models into adaptive experts with integrated memories. Over the previous three episodes, we secretly built up what amounts to a four-part series on agentic memory, and this episode is the final piece that pulls it ALL together.

    In this episode, we unpack Chapter 19 of Keith Bourne's 'Unlocking Data with Generative AI and RAG,' exploring how advanced Retrieval-Augmented Generation (RAG) leverages episodic, semantic, and procedural memory types to create continuously learning AI agents that drive business value.

    This also concludes our book series, highlighting ALL of the chapters of the 2nd edition of "Unlocking Data with Generative AI and RAG" by Keith Bourne. If you want to dive even deeper into these topics and even try out extensive code labs, search for 'Keith Bourne' on Amazon and grab the 2nd edition today!

    In this episode:

    - What advanced RAG with complete memory integration means for AI strategy

    - The role of LangMem and the CoALA Agent Framework in adaptive learning

    - Comparing learning algorithms: prompt_memory, gradient, and metaprompt

    - Real-world applications across finance, healthcare, education, and customer service

    - Key risks and challenges in deploying continuously learning AI

    - Practical leadership advice for scaling and monitoring adaptive AI systems

    Key tools & technologies mentioned:

    - LangMem memory management system

    - CoALA Agent Framework

    - Learning algorithms: prompt_memory, gradient, metaprompt
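
    To make the memory split tangible, here is a simple illustrative sketch of episodic, semantic, and procedural stores in the spirit of the CoALA framework. It is not LangMem's actual API; every class and function name below is an assumption for illustration.

```python
# Illustrative sketch of the three memory types discussed in the chapter.
# Not LangMem's API; names and structures are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class AgentMemory:
    episodic: list[str] = field(default_factory=list)       # specific past interactions
    semantic: dict[str, str] = field(default_factory=dict)  # distilled facts and preferences
    procedural: str = "Answer concisely and cite sources."  # evolving instructions


def record_episode(memory: AgentMemory, interaction: str) -> None:
    memory.episodic.append(interaction)


def learn_fact(memory: AgentMemory, key: str, value: str) -> None:
    memory.semantic[key] = value


def update_procedure(memory: AgentMemory, feedback: str) -> None:
    # A prompt_memory-style algorithm would rewrite the instructions from
    # accumulated feedback; this sketch simply appends it.
    memory.procedural += f" Feedback applied: {feedback}"


memory = AgentMemory()
record_episode(memory, "User asked for a quarterly revenue summary.")
learn_fact(memory, "preferred_format", "bullet points")
update_procedure(memory, "Prefer bullet points for summaries.")
print(memory.procedural)
```

    Roughly, the learning algorithms compared in the episode (prompt_memory, gradient, metaprompt) differ in how a step like update_procedure rewrites the agent's instructions from feedback; the overall memory structure stays the same.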


    Timestamps:

    0:00 – Introduction and episode overview

    2:15 – The promise of advanced RAG with memory integration

    5:30 – Why continuous learning matters now

    8:00 – Core architecture: Episodic, Semantic, Procedural memories

    11:00 – Learning algorithms head-to-head

    14:00 – Under the hood: How memories and feedback loops work

    16:30 – Real-world use cases and business impact

    18:30 – Risks, challenges, and leadership considerations

    20:00 – Closing thoughts and next steps


    Resources:

    - "Unlocking Data with Generative AI and RAG" by Keith Bourne - Search for 'Keith Bourne' on Amazon and grab the 2nd edition

    - Visit Memriq.ai for AI insights, guides, and tools


    Thanks for tuning in to Memriq Inference Digest - Leadership Edition.

    18 min
No reviews yet