
Embedded AI Podcast

Author(s): Embedded AI Podcast

About this podcast

A podcast about using AI in embedded systems -- either as part of your product, or during development.
Episodes
  • E03 Agentic Workflow Shootout: How We Actually Code With AI (Autumn 2025 Edition)
    Nov 13 2025

    In this episode, Ryan and Luca dive into their real-world AI coding workflows, sharing the tricks, tools, and hard-learned lessons from their daily development work. They compare their approaches to using AI agents like Claude Code and discuss everything from prompt management to context hygiene. Luca reveals his meticulous TDD approach with multiple AI instances running in parallel, while Ryan shares his more streamlined VS Code-based workflow.

    The conversation covers practical topics like managing AI forgetfulness, avoiding the pitfalls of over-mocking in tests, and the importance of being strict with AI-generated code. They also explore the addictive, game-like nature of AI-assisted coding and why it feels like playing Civilization - always "just one more turn" until the sun comes up. This is an honest look at what actually works (and what doesn't) when coding with AI assistants.
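
    The TDD discipline described here is easy to sketch: write (or carefully review) a small batch of failing tests first, then let the agent produce only enough code to make them pass. Below is a minimal, hypothetical pytest sketch of "start with just a few tests"; the module and function names (checksum, crc8_maxim) are illustrative, not from the episode.

    ```python
    # Three small tests written *before* asking the agent to implement
    # crc8_maxim(). The module under test does not exist yet; making these
    # pass is the agent's job. They assert on real behavior (known CRC
    # vectors), not on mocked interfaces -- the over-mocking trap discussed
    # in the episode.
    import pytest

    from checksum import crc8_maxim  # hypothetical module under test


    def test_empty_payload_gives_zero():
        assert crc8_maxim(b"") == 0x00


    def test_known_check_vector():
        # "123456789" is the standard CRC check string; 0xA1 is the
        # CRC-8/MAXIM result.
        assert crc8_maxim(b"123456789") == 0xA1


    def test_rejects_non_bytes_input():
        with pytest.raises(TypeError):
            crc8_maxim("not bytes")
    ```

    Keeping the batch this small mirrors the "just 3 tests at a time" advice later in these notes: the agent gets a tight target, and you review a page of diff instead of a week's worth.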

    Key Topics:

    • [02:30] Tool preferences: Command line vs VS Code for AI coding
    • [05:45] Prompt management strategies and the ginormous prompt problem
    • [08:15] Context management and AI forgetfulness over time
    • [12:00] Luca's three-option planning approach to avoid first-thought bias
    • [15:30] Test-driven development with AI: Why it's essential and how to do it right
    • [20:45] The mocking problem: When AI tests interfaces instead of functionality
    • [25:00] Running multiple AI instances in parallel for complex projects
    • [30:15] The Civilization effect: Why AI coding becomes addictively engaging
    • [35:00] Code hygiene and post-generation cleanup strategies

    Notable Quotes:

    "I've learned the hard way that you must not do that. I was like, oh, this is really nice. I wrote like 10,000 lines of code this week. You know I'm fantastically productive and then I paid for it by going over those same 10,000 lines for the next three weeks and cleaning up the mess that it had made." — Luca Ingianni

    "I must use TDD if I use AI coding. Otherwise it's so easy to get off the rails." — Luca Ingianni

    "I don't have to code with the shift key ever again." — Ryan Torvik

    "Coding with AI assist just feels exactly the same way for me [as Civilization]. It just sort of sucks you in." — Luca Ingianni

    "Make sure that your AI coding agent doesn't tie your shoelaces together. Because it will." — Ryan Torvik

    Resources Mentioned:

    • ADA - Open source command line AI coding tool mentioned by Luca
    • Claude Code - AI coding assistant used by both hosts, available as command line tool and VS Code extension
    • Continue.dev - AI coding assistant mentioned by Ryan as one he tried early on
    • RueCode - AI coding tool with task management features that Ryan used before switching to Claude Code

    Connect With Us:

    • Try implementing TDD with your AI coding workflow - start small with just 3 tests at a time
    • Create your own prompt management system - whether it's a prompts.md file or slash commands
    • Share your own AI coding workflows and tricks with us - we'd love to hear what's working for you
    41 min
  • E02 RAG for Embedded Systems Development: When Retrieval Augmented Generation Makes Sense (and When It Doesn't)
    Nov 7 2025

    Ryan and Luca explore Retrieval Augmented Generation (RAG) and its practical applications in embedded development. After Ryan's recent discussions at the Embedded Systems Summit, we dive into what RAG actually is: a system that chunks documents, stores the chunks in a vector database, and lets an AI query specific information so its answers stay grounded in the retrieved text rather than hallucinated. While it sounds perfect for handling massive datasheets and documentation, the reality is more complex.
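
    To make that pipeline concrete, here is a deliberately minimal sketch of the chunk-embed-store-retrieve structure. It is not from the episode: the chunk size, the toy character-histogram "embedding", and the brute-force search are all placeholders for a real embedding model and vector database.

    ```python
    # Minimal RAG skeleton: chunk -> embed -> store -> retrieve.
    # Every concrete choice below (chunk size, embedding, scoring) is
    # illustrative; in practice these are where the engineering effort goes.
    import math
    from dataclasses import dataclass


    def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
        """Naive fixed-size chunking. As discussed in the episode, this step
        is much harder to get right for datasheets full of tables and
        register maps."""
        step = size - overlap
        return [text[i:i + size] for i in range(0, max(len(text), 1), step)]


    def embed(text: str) -> list[float]:
        """Stand-in embedding (normalized character histogram). A real
        system would call an embedding model or API here."""
        vec = [0.0] * 64
        for ch in text.lower():
            vec[ord(ch) % 64] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]


    @dataclass
    class VectorStore:
        chunks: list[str]
        vectors: list[list[float]]

        @classmethod
        def from_document(cls, text: str) -> "VectorStore":
            pieces = chunk(text)
            return cls(pieces, [embed(p) for p in pieces])

        def query(self, question: str, k: int = 3) -> list[str]:
            q = embed(question)
            scores = [sum(a * b for a, b in zip(q, v)) for v in self.vectors]
            ranked = sorted(zip(scores, self.chunks), key=lambda s: -s[0])
            return [c for _, c in ranked[:k]]


    if __name__ == "__main__":
        # Hypothetical usage; in reality the input would be text extracted
        # from a datasheet PDF, which is its own hard problem.
        doc = "The SPI interface supports clock rates up to 10 MHz."
        store = VectorStore.from_document(doc)
        for hit in store.query("maximum SPI clock frequency", k=1):
            print(hit)
    ```

    The retrieved chunks are then pasted into the LLM prompt as context; the structure stays the same no matter which embedding model or vector database replaces the toy pieces above.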

    We discuss the critical challenge of chunking - breaking documents into the right-sized pieces for effective retrieval. Too big and searches become useless; too small and you lose context. Luca shares his hands-on experience trying to make RAG work with datasheets, revealing the gap between theory and practice. With modern LLMs offering larger context windows and better document parsing capabilities, we question whether RAG has missed its window of usefulness for most development tasks. The conversation covers when RAG still makes sense (legal contexts, parts catalogs, private LLMs) and explores alternatives like having LLMs use grep and other Unix tools to search documents directly.
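
    The grep alternative mentioned above skips embeddings entirely: expose a plain keyword-search tool over the extracted documents and let the model query it iteratively. A hedged sketch of just the search function follows; how it gets registered as a tool depends on the agent framework, and the paths are illustrative.

    ```python
    # The "just let the LLM grep" alternative to RAG: no chunking, no vector
    # store, only keyword search over extracted document text.
    import subprocess


    def search_docs(pattern: str, doc_dir: str = "docs/", context_lines: int = 2) -> str:
        """Case-insensitive recursive grep with a little surrounding context,
        suitable for returning to the model as a tool result."""
        result = subprocess.run(
            ["grep", "-r", "-i", "-n", f"-C{context_lines}", pattern, doc_dir],
            capture_output=True,
            text=True,
        )
        # grep exits with 1 when nothing matches; treat that as an empty hit list.
        return result.stdout if result.returncode in (0, 1) else result.stderr


    if __name__ == "__main__":
        # The model would typically refine its query over several calls,
        # e.g. "SPI" then "SCK frequency", much like a human skimming a PDF.
        print(search_docs("spi clock"))
    ```

    Compared with the vector-store pipeline, there is no chunking decision to get wrong; the trade-off is that retrieval now depends on reasonable keyword choices rather than semantic similarity.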

    Key Topics:

    • [02:15] What RAG is: Retrieval Augmented Generation explained
    • [04:30] RAG for embedded documentation and datasheets
    • [07:45] The chunking problem: breaking documents into searchable pieces
    • [12:20] Vector databases and similarity search mechanics
    • [16:40] Luca's real-world experience: challenges with datasheet RAG
    • [20:15] Modern alternatives: LLMs using grep and Unix tools
    • [25:30] When RAG still makes sense: legal contexts and parts catalogs
    • [30:45] RAG vs. larger context windows in modern LLMs
    • [35:20] Private LLMs and when RAG becomes relevant again

    Notable Quotes:

    "Data sheets are inaccurate. You still have to engineer this. You cannot just go and let it go." — Ryan

    "It's so difficult to get the chunking right. If you make it too big, that's not useful. If you make it too small, then again, it becomes difficult to search for because you're losing too much context." — Luca

    "These days, LLMs are good enough at just ad hoc-ing this. You can do away with all of the complexity of vector stores and chunking." — Luca

    "We have the hardware. We can actually prove it one way or another. If it doesn't work on hardware, then it's not right." — Ryan

    "RAG is quite tempting and quite interesting, but it's deceptively simple unless you have good reason to believe that you can get it working." — Luca

    Resources Mentioned:

    • Google NotebookLM - Tool mentioned for ingesting PDFs and creating AI-generated podcasts from documents
    • TreeSitter - Syntax tree analysis tool used as alternative to RAG for code analysis
    • Embedded Systems Summit - Jacob Beningo's conference where RAG and AI topics were discussed

    Connect With Us:

    • Try experimenting with modern LLMs and their built-in document parsing capabilities before investing time in RAG implementation
    • Share your experiences with RAG in embedded development - we'd love to hear what worked and what didn't
    • Consider the trade-offs between public LLMs and private models when deciding if RAG is worth the complexity for your use case
    41 min