Episodes

  • Episode 4: OpenAI Code Red, TPU vs GPU and More Autonomous Coding Agents
    Dec 5 2025

    In this episode of Artificial Developer Intelligence, hosts Shimin and Dan discuss the evolving landscape of AI in software engineering, touching on topics such as OpenAI's recent challenges, the significance of Google TPUs, and effective techniques for working with large language models. They also take a deep dive into general agentic memory, share insights on code quality, and assess the current state of the AI bubble.


    Takeaways

    • Google's TPUs are designed specifically for AI inference, offering advantages over traditional GPUs.
    • Effective use of large language models requires avoiding common anti-patterns.
    • AI adoption rates are showing signs of flattening out, particularly among larger firms.
    • General agentic memory can enhance the performance of AI models by improving context management.
    • Code quality remains crucial, even as AI tools make coding easier and faster.
    • Smaller, more frequent code reviews can enhance team communication and project understanding.
    • AI models are not infallible; they require careful oversight and validation of generated code.
    • The future of AI may hinge on research rather than mere scaling of existing models.


    Resources Mentioned
    OpenAI Code Red
    The chip made for the AI inference era – the Google TPU
    Anti-patterns while working with LLMs
    Writing a good CLAUDE.md
    Effective harnesses for long-running agents
    General Agentic Memory Via Deep Research
    AI Adoption Rates Starting to Flatten Out
    A trillion dollars is a terrible thing to waste

    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai
    1 h 4 min
  • Claude Opus 4.5, Olmo 3, and a Paper on Diffusion + Autoregression
    Nov 29 2025

    In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest advancements in AI models, including the release of Claude Opus 4.5 and Gemini 3. They discuss the implications of these models on software engineering, the rise of open-source models like Olmo 3, and the enhancements in the Claude Developer Platform. The conversation also delves into the challenges of relying on AI for coding tasks, the potential pitfalls of the AI bubble, and the future of written exams in the age of AI.

    Takeaways

    • Claude Opus 4.5 sets new benchmarks while enhancing usability and reducing token consumption.
    • The introduction of open-source models like Olmo 3 is a significant development in AI.
    • The future of written exams may be challenged by AI's ability to generate human-like responses.
    • Relying too heavily on AI can lead to a lack of critical thinking and problem-solving skills.
    • The AI bubble clock is at 25 seconds to midnight.
    • Recent research suggests that AI models can improve their performance by emulating query-based search.
    • The importance of prompt engineering in AI interactions is highlighted.

    Resources Mentioned
    Introducing Claude Opus 4.5
    Build with Nano Banana Pro, our Gemini 3 Pro Image model
    Andrej Karpathy's Post about Nano Banana Pro
    Olmo 3: Charting a path through the model flow to lead open-source AI
    Introducing advanced tool use on the Claude Developer Platform
    TiDAR: Think in Diffusion, Talk in Autoregression
    SSRL: Self-Search Reinforcement Learning
    Mira Murati's Thinking Machines seeks $50 billion valuation in funding talks, Bloomberg News reports
    Boom, bubble, bust, boom. Why should AI be different?
    Nvidia didn’t save the market. What’s next for the AI trade?

    Chapters

    • (00:00) - Introduction to Artificial Developer Intelligence
    • (01:25) - Claude Opus 4.5
    • (07:02) - Exploring Gemini 3 and Image Models
    • (11:24) - Olmo 3 and The Rise of Open Flow Models
    • (15:46) - Innovations in AI Tools and Platforms
    • (19:33) - Research Insights: Diffusion and Auto-Regression Models
    • (23:39) - Advancements in AI Output Efficiency
    • (25:45) - Exploring Self Search Reinforcement Learning
    • (27:48) - The Dilemma of Language Models
    • (30:11) - Prompt Engineering and Search Integration
    • (32:55) - Dan's Rants on AI Limitations
    • (38:17) - 2 Minutes to Midnight
    • (46:41) - Outro

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai
    48 min
  • It's Gemini 3 Week! And How to Persuade an LLM to Call You a Jerk
    Nov 29 2025

    In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest developments in AI, including Google's Gemini 3 model and its implications for software engineering. They discuss the rise of AI-driven cybersecurity threats, the concept of world models, and the evolving landscape of software development techniques. The conversation also delves into the ethical considerations of AI compliance and the challenges of running open weight models. Finally, they reflect on the current state of the AI bubble and its potential future.


    Takeaways

    • The rent for running AI models is too high.
    • The AI bubble may burst, but it can still lead to innovation.
    • Persuasion techniques can influence AI behavior.
    • World models are changing how we understand AI.
    • Gemini 3 shows significant improvements over previous models.
    • Cybersecurity threats are evolving with AI technology.
    • Software development is becoming more meta-focused.

    Resources Mentioned
    Disrupting the first reported AI-orchestrated cyber espionage campaign
    GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools
    Why Fei-Fei Li, Yann LeCun and DeepMind Are All Betting on “World Models” — and How Their Bets Differ
    Google's new Gemini 3 model arrives in AI Mode and the Gemini app
    Code research projects with async coding agents like Claude Code and Codex
    ADK architecture: When to use sub-agents versus agents as tools
    I have seen the compounding teams
    Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
    In Search of the AI Bubble’s Economic Fundamentals
    The Benefits of Bubbles | Stratechery by Ben Thompson
    Is Perplexity the first AI unicorn to fail?

    Chapters

    • (00:00) - Introduction to Artificial Developer Intelligence
    • (02:44) - AI in Cybersecurity: Threats and Innovations
    • (07:35) - World Models: Understanding AI Cognition
    • (11:41) - Gemini 3: A New Era for AI Models
    • (13:31) - Benchmarking AI: The Vending Bench 2
    • (16:18) - Techniques for AI Development
    • (18:59) - Code Search Use Case
    • (22:11) - ADK Architecture
    • (27:27) - Post of the Week: Compounding Teams
    • (31:16) - Persuasion Techniques in AI: A Deep Dive
    • (36:17) - Dan's Rant on The Cost of Running Open-Weight Models
    • (45:09) - 2 Minutes to Midnight
    • (57:45) - Outro

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai
    59 min
  • AI Benchmarks, Tech Radar, and Limits of Current LLM Architectures
    Nov 29 2025

    In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the rapidly evolving landscape of AI, discussing recent news, benchmarking challenges, and the implications of AGI as a conspiracy theory. They delve into the latest techniques in AI development, ethical considerations, and the potential impact of AI on human intelligence. The conversation culminates in a discussion of the latest advancements in LLM architectures and the ongoing concerns surrounding the AI bubble.

    Takeaways

    • Benchmarking AI performance is fraught with challenges and potential biases.
    • AGI is increasingly viewed as a conspiracy theory rather than a technical goal.
    • New LLM architectures are emerging to address context limitations.
    • Ethical dilemmas in AI models raise questions about their decision-making capabilities.
    • The AI bubble may lead to significant economic consequences.
    • AI's influence on human intelligence is a growing concern.

    Resources Mentioned
    AI benchmarks are a bad joke – and LLM makers are the ones laughing
    Technology Radar V33
    How I use Every Claude Code Feature
    How AGI became the most consequential conspiracy theory of our time
    Beyond Standard LLMs
    Stress-testing model specs reveals character differences among language models
    Meet Project Suncatcher, Google’s plan to put AI data centers in space
    OpenAI CFO Sarah Friar says company isn’t seeking government backstop, clarifying prior comment

    Chapters

    • (00:00) - Introduction to Artificial Developer Intelligence
    • (02:26) - AI Benchmarks: Are They Reliable?
    • (08:02) - ThoughtWorks Tech Radar: AI-Centric Trends
    • (11:47) - Techniques Corner: Exploring AI Subagents
    • (14:17) - AGI: The Most Consequential Conspiracy Theory
    • (22:57) - Deep Dive: Limitations of Current LLM Architectures
    • (34:13) - Ethics and Decision-Making in AI
    • (38:41) - Dan's Rant on the Impact of AI on Human Intelligence
    • (43:26) - 2 Minutes to Midnight
    • (50:29) - Outro

    Connect with ADIPod
    • Check out our website: www.ADIpod.ai
    52 min