
AI-enhanced Embedded Development (May 2025 Edition)


About this audio

In this episode, Jeff interviews Luca about his intensive experience presenting at five conferences in two and a half days, including the Embedded Online Conference and a German conference where he delivered a keynote on AI-enhanced software development. Luca shares practical insights from running an LLM-only hackathon where participants were prohibited from manually writing any code that entered version control, forcing them to rely entirely on AI tools.

The conversation explores technical challenges in AI-assisted embedded development, particularly the importance of context management when working with LLMs. Luca reveals that effective AI-assisted coding requires treating prompts like code itself: version controlling them, refining them iteratively, and building project-specific prompt libraries. He discusses the economics of LLM-based development (approximately one cent per line of code), the dramatic tightening of feedback loops from days to minutes, and how this fundamentally changes agile workflows for embedded teams.

The episode concludes with a discussion of the evolving role of embedded developers: from code writers to AI supervisors and eventually to product owners with deep technical skills. Luca and Jeff address concerns about maintaining core software engineering competencies while embracing these powerful new tools, emphasizing that understanding the craft remains essential even as the tools evolve.

Key Topics

[02:15] LLM-only hackathon constraints: No human-written code in version control
[04:30] Context management as the critical skill for effective LLM-assisted development
[08:45] Explicit context control: Files, directories, API documentation, and web content integration
[11:20] LLM hallucinations: When AI invents file contents and generates diffs against phantom code
[13:00] Economics of AI-assisted coding: Approximately $0.01 per line of code
[15:30] Tightening feedback loops: From day-long iterations to minutes in agile embedded workflows
[17:45] Rapid technical debt accumulation: How LLMs can create problems faster than humans notice
[19:30] The essential role of comprehensive testing in AI-assisted development workflows
[22:00] Challenges with TDD and LLMs: Getting AI to take small steps and wait for feedback
[26:15] Treating prompts like code: Version control, libraries, and project-specific prompt management
[29:40] External context management: Coding style guides, plan files, and todo.txt workflows
[32:00] LLM attention patterns: Beginning and end of context receive more focus than middle content
[34:30] The evolving developer role: From coder to prompt engineer to AI supervisor to technical product owner
[38:00] Code wireframing: Rapid prototyping for embedded systems using AI-generated implementations
[40:15] Maintaining software engineering skills in the age of AI: The importance of manual practice
[43:00] Software engineering vs. software carpentry: Architecture and goals over syntax and implementation

Notable Quotes

"One of the hardest things to get an LLM to do is nothing. Sometimes I just want to brainstorm with it and say, let's look at the code base, let's figure out how we're going to tackle this next piece of functionality. And then it says, 'Yeah, I think we should do it like this. You know what? I'm going to do it right now.' And it's so terrible. Stop. You didn't even wait for me to weigh in." — Luca Ingianni

"LLMs making everything faster also means they can create technical debt at a spectacular rate. And it gets a little worse because if you're not paying close attention and if you're not disciplined, then it kind of passes you by at first. It generates code and the code kind of looks fine. And you say, yeah, let's keep going. And then you notice that actually it's quite terrible." — Luca Ingianni

"I would not trust myself to review an LLM's code and be able to spot all of the little subtleties that it gets wrong. But if I at least have tests that express my goals and maybe also my worries in terms of robustness, then I can feel a lot safer to iterate very quickly within those guardrails." — Luca Ingianni

"Roughly speaking, the way I was using the tool, I was spending about a cent per line. Which is about two orders of magnitude below what a human programmer roughly costs. It really is a fraction. So that's nice because it makes certain things approachable. It changes certain build versus buy decisions." — Luca Ingianni

"You can tighten your feedback loops to an absurd degree. Maybe before, if you had a really tight feedback loop between a product owner and a developer, it was maybe a day long. And now it can be minutes or quarters of an hour. It is so much faster. And that's not just a quantitative step. It's also a qualitative step." — Luca Ingianni

"Some of my best performing prompts came from a place of desperation where one of my prompts is literally 'wait wait wait you didn't do what we agreed you would do you did not read the files carefully.' And I'd like to use...
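The "treating prompts like code" idea discussed in the episode can be sketched as a project-specific prompt library kept under version control. All file names and prompt wording below are illustrative assumptions, not taken from Luca's actual setup:

```shell
# Minimal sketch: store reusable prompts alongside the code they help
# generate, and version them like any other source file.
set -e
mkdir -p demo-project/prompts
cd demo-project
git init -q
git config user.email "dev@example.com"   # local identity so the commit works anywhere
git config user.name  "Dev"

# A reusable planning prompt, refined iteratively over time.
cat > prompts/plan-first.md <<'EOF'
Before writing any code, read the files I list and propose a short plan.
Wait for my approval before generating a diff.
EOF

git add prompts/plan-first.md
git commit -qm "prompts: add plan-first v1"
git log --oneline -- prompts/
```

Because the prompts live in the repository, they can be reviewed, diffed, and improved across iterations just like the code itself.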