Episodes

  • AI Meets MLOps: Making Sense of the Mess
    Nov 6 2025

    In this episode of AI x DevOps, Rohit sits down with Görkem Ercan, CTO at Jozu, a company building a DevOps platform for AI agents and models. Görkem, a veteran with over two decades of software experience (including contributions to the Eclipse Foundation), explains why MLOps is fundamentally different from traditional, deterministic DevOps—leading to extreme pipeline fragmentation.

    Here are some of our favourite takeaways:

    • Standardization is Key: Why OCI is the recognized standard for packaging AI/ML artifacts, and how the Model Packs project (with ByteDance, Red Hat, and Docker) is defining the artifact structure.

    • Open Source Headaches: The critical challenge maintainers face when receiving large amounts of untested, verbose, AI-generated code.

    • LLM Economics: Discover why running small, fine-tuned LLMs in-house can be cheaper and deliver more predictable, consistent results than relying on large general-purpose providers.

    • KitOps Solution: How KitOps creates an abstraction that lets data scientists focus on training while leveraging existing DevOps platforms for deployment (see the packaging sketch below).
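
    As a rough, hedged illustration of that abstraction (not code from the episode): the sketch below drives the KitOps kit CLI from Python to pack a model directory into an OCI artifact and push it to a registry, so the model can flow through the same registries and pipelines as container images. The directory layout, registry URL, and tag are placeholders, and flags may vary across KitOps versions.

    ```python
    import subprocess

    # Placeholder paths and references -- substitute your own.
    MODEL_DIR = "./my-model"  # contains a Kitfile describing model, data, and code
    MODELKIT_REF = "registry.example.com/team/my-model:v1"

    def run(cmd: list[str]) -> None:
        """Run a CLI command, raising if it exits non-zero."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Pack the Kitfile-described directory into an OCI-compatible ModelKit,
    # then push it to the registry like any other OCI artifact.
    run(["kit", "pack", MODEL_DIR, "-t", MODELKIT_REF])
    run(["kit", "push", MODELKIT_REF])
    ```

    Because the result is a standard OCI artifact, existing registries, scanners, and CD pipelines can treat the model like any other image, which is the abstraction the episode describes.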

    Tune in now to understand the standardization movement reshaping the future of AI development!

    1 hr 11 min
  • AI DevOps in Practice: A Solutions Architect's View
    Sep 8 2025

    Join host Rohit (Facets Cloud) in conversation with Sanjeev Ganjihal, Senior Specialist Solutions Architect - Containers at AWS and an early Kubernetes expert. They discuss the rapid evolution of AI and DevOps, Kubernetes as the new operating system, generative AI in engineering, and the shifting landscape of roles like DevOps, SRE, and AIOps. Sanjeev shares practical advice on using AI assistants, agentic tools, and self-hosted models, and on the balancing act between automation, productivity, and upskilling in today’s cloud-native world.

    1 hr 6 min
  • AI Security Reality Check
    Jul 15 2025

    This episode features a discussion with Nathan Hamiel, Director of Research at Kudelski Security, a 25-year veteran of the cybersecurity space who now focuses specifically on AI security.

    The conversation centers on navigating the generative AI revolution with a grounded and security-first perspective, particularly for product developers and the security community. Key topics explored include:

    • The balance between AI adoption and skepticism: Nathan discusses how his security outlook shapes his professional adoption of AI tools, emphasizing the need to understand a tool's capabilities and weigh its benefits against its trade-offs before putting it into production.
    • AI productivity and its challenges: The speakers touch on Google's DORA reports, noting that while AI improves personal coding productivity, its impact on valuable work or features can be negligible or even negative, highlighting the difference between feeling productive and being productive.
    • Positive and negative impacts of AI in cybersecurity: They discuss AI's potential to enhance security tools for code scanning and auto-remediation, such as augmenting traditional fuzzing with large language models. However, they also raise concerns about the resurgence of conventional vulnerabilities in AI-generated code.
    • Emerging AI-native risks: The podcast delves into new threats like "slop squatting," also called "hallucinated dependencies," where attackers publish malicious packages under the plausible but non-existent dependency names that LLMs invent (a deterministic check against this is sketched after this list). Prompt injection is highlighted as "the vulnerability of generative AI," exploiting the model's inability to differentiate system instructions from user input.
    • Addressing AI security vulnerabilities: Nathan advocates for architectural changes and reducing the attack surface as the best defense against prompt injection, outlining his "RRT" (refrain, restrict, trap) approach. The need for human oversight and deterministic checks in AI development workflows is also stressed.
    • The urgency of security in AI product development: Both speakers express concern over the rush to market AI products without adequately addressing security issues, leading to unacknowledged vulnerabilities.
    • The nature of AI mistakes: A notable insight on how AI mistakes differ from human errors: human mistakes follow predictable patterns (fatigue, for example), while AI mistakes can be random and occur at every level of complexity, making them harder to predict and mitigate. The speakers also discuss the risk that the "hallucinated data of today" becomes the "facts of tomorrow" as AI-generated output taints the web.
    • Future of AI advancements: The conversation concludes by suggesting that AI improvements might be plateauing rather than growing exponentially, and that new fundamental innovations are needed to push AI forward beyond current capabilities.
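
    The "slop squatting" risk above lends itself to exactly the kind of deterministic check Nathan recommends layering around AI output. A minimal sketch (an illustration, not a tool from the episode): before installing anything an LLM suggests, verify each package name actually exists on PyPI, since hallucinated names return a 404 from the registry's JSON API.

    ```python
    import urllib.error
    import urllib.request

    def package_exists_on_pypi(name: str) -> bool:
        """Return True if the package is published on PyPI; a 404 means it is not."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise  # other HTTP errors should surface, not pass silently

    # Dependencies suggested by an LLM; the second is a plausible hallucination.
    suggested = ["requests", "requests-oauth-helper2"]
    for pkg in suggested:
        verdict = "ok" if package_exists_on_pypi(pkg) else "MISSING: possible hallucination"
        print(f"{pkg}: {verdict}")
    ```

    Note that existence alone proves little: an attacker may already have squatted a commonly hallucinated name, so this check belongs alongside version pinning and human review of any new dependency.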

    Ultimately, the podcast serves as a grounding discussion for product engineers on how to build and integrate AI solutions in a secure and responsible manner, emphasizing that AI tools should be used to solve tasks effectively rather than chasing a path to superintelligence.

    1 hr
  • MCP Without the Hype: Founders Take
    Jun 4 2025

    In this episode, Facets.cloud co-founders Rohit and Anshul dive deep into the Model Context Protocol (MCP), explaining how AI integrations evolved from basic chat assistants into standardized tool connectors for AI-driven DevOps. You’ll learn best practices for designing MCP servers, naming conventions that reduce hallucinations, dry-run workflows for safe automation, and when and why to adopt MCP within your organization.
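
    To make that design advice concrete, here is a minimal sketch of an MCP server using the MCP Python SDK's FastMCP helper; the server and tool names are illustrative, not from the episode. It applies two practices discussed here: an unambiguous verb_object tool name with a docstring the model can read, and a dry-run default so an agent must explicitly opt in before changing anything.

    ```python
    from mcp.server.fastmcp import FastMCP

    # Illustrative server name; in practice, name it after the system it fronts.
    mcp = FastMCP("devops-tools")

    @mcp.tool()
    def restart_service(service_name: str, dry_run: bool = True) -> str:
        """Restart a named service. With dry_run=True (the default), only
        report what would happen; pass dry_run=False to actually act."""
        if dry_run:
            return f"[dry-run] Would restart '{service_name}'."
        # A real implementation would call out to your orchestrator here.
        return f"Restarted '{service_name}'."

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default
    ```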

    51 min
  • From Click-Ops to Chat-Ops: AI's Double-Edged Promise
    May 9 2025

    In the very first episode of the AI x DevOps Podcast, we dive into how AI is actually changing infrastructure, not hypothetically, but line by line.

    Rohit Raveendran is joined by Vincent De Smet, DevOps engineer at Handshakes.ai, and together they explore what happens when LLMs start writing Terraform, the difference between deterministic and vibe-coded infra, and why CDK might offer a more AI-friendly future than raw HCL.
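
    A rough sketch of why typed CDK code can be friendlier to an LLM than raw HCL: constructs are ordinary classes, so a hallucinated property name fails immediately at synth time instead of surfacing at plan or apply time. The example below uses AWS CDK for Python with illustrative stack and bucket names.

    ```python
    from aws_cdk import App, RemovalPolicy, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class StorageStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # Typed construct: a misspelled argument raises at synth time,
            # giving the AI assistant (and its reviewer) fast feedback.
            s3.Bucket(
                self,
                "ArtifactBucket",
                versioned=True,
                removal_policy=RemovalPolicy.RETAIN,
            )

    app = App()
    StorageStack(app, "StorageStack")  # illustrative name
    app.synth()
    ```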

    They talk about the trade-offs of trust, the future of platform engineering in an AI-powered world, and how inner-sourced guardrails could become the foundation for safe, scalable self-service. And yes, they touch on the scary parts too, like what happens when your AI agent starts doing more than you asked.

    If you're wondering what it actually looks like to bring AI into DevOps without losing control, this one’s for you.

    Wondering how AI-ready your DevOps is? Take a 2-minute survey here to find out.

    57 min