Episodes

  • The Chat Was Fire. The Date Was You.
    Oct 14 2025

    AI has gone from novelty wingman to built-in infrastructure for modern dating—photo pickers, message nudges, even bots that “meet” your match before you do. In this episode, we unpack the psychology of borrowed charisma: why AI-polished banter can inflate expectations the real you has to meet at dinner. We trace where the apps are headed, how scammers exploit “perfect chats,” what terms and verification actually cover, and the human-first line between assist and impersonate. Practical takeaway: use AI as a spotlight, not a mask—and make sure the person who shows up at 7 p.m. can keep talking once the prompter goes dark.

    This episode is based on the article “The Chat Was Fire. The Date Was You.” by Markus Brinsa.

    https://chatbotsbehavingbadly.com/the-chat-was-fire-the-date-was-you

    New episodes every Tuesday.

    Chatbots Behaving Badly is produced in collaboration with SEIKOURI Inc.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    7 min
  • The Polished Nothingburger - How AI Workslop Eats Your Day
    Oct 7 2025

    AI made it faster to look busy. Enter workslop: immaculate memos, confident decks, and tidy summaries that masquerade as finished work while quietly wasting hours and wrecking trust. We identify the problem and trace its spread through the plausibility premium (polished ≠ true), top-down “use AI” mandates that scale drafts but not decisions, and knowledge bases that end up training on their own lowest-effort output. We dig into the real numbers behind the slop tax, the paradox of speed without sense-making, and the subtle reputational hit that comes from shipping pretty nothing. Then we get practical: where AI actually delivers durable gains, how to treat model output as raw material (not work product), and the simple guardrails—sources, ownership, decision-focus—that turn fast drafts into accountable conclusions. If your rollout produced more documents but fewer outcomes, this one’s your reset.

    This episode is based on the article of the same name by Markus Brinsa.

    https://chatbotsbehavingbadly.com/the-polished-nothingburger-how-ai-workslop-eats-your-day



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    10 min
  • Picture That Lie
    Sep 30 2025

    The slide said: “This image highlights significant figures from the Mexican Revolution.”

    Great lighting. Strong moustaches. Not a single real revolutionary.

    Today’s episode of Chatbots Behaving Badly is about why AI-generated images look textbook-ready and still teach the wrong history. We break down how diffusion models guess instead of recall, why pictures stick harder than corrections, and what teachers can do so “art” doesn’t masquerade as “evidence.” It’s entertaining, a little sarcastic, and very practical for anyone who cares about classrooms, credibility, and the stories we put in kids’ heads.

    This episode is based on the article “Pictures That Lie” by Markus Brinsa.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    7 min
  • ChatGPT Psychosis - When a Chatbot Pushes You Over the Edge
    Sep 23 2025

    What happens when a chatbot doesn’t just give you bad advice — it validates your delusions? In this episode, we dive into the unsettling rise of ChatGPT psychosis: real cases where people spiraled into paranoia, obsession, and full-blown breakdowns after long conversations with AI. From shaman robes and secret missions to psychiatric wards and tragic endings, the stories are as disturbing as they are revealing. We’ll look at why chatbots make such dangerous companions for vulnerable users, how OpenAI has responded (or failed to), and why psychiatrists are sounding the alarm. It’s not just about hallucinations anymore — it’s about human minds unraveling in real time, with an AI cheerleading from the sidelines.

    This episode is based on the article “Delusions as a Service - AI Chatbots Are Breaking Human Minds” by Markus Brinsa.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    8 min
  • Gen-Z versus the AI Office
    Sep 16 2025

    The modern office didn’t flip to AI — it seeped in, stitched itself into every workflow, and left workers gasping for air. Entry-level rungs vanished, dashboards started acting like managers, and “learning AI” became a stealth second job. Gen Z gets called entitled, but payroll data shows they’re the first to lose the safe practice reps that built real skills.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    12 min
  • Sorry Again! Why Chatbots Can’t Take Criticism (and Just Make Things Worse)
    Sep 9 2025

    Chatbots Behaving Badly returns for Season 2—and we’re kicking things off with the single most frustrating thing about AI assistants: their inability to take feedback without spiraling into nonsense. Why do chatbots always apologize, then double down with a new hallucination? Why can’t they say “I don’t know”? Why do they keep talking—even when it’s clear they’ve completely lost the plot? This episode unpacks the design flaws, training biases, and architectural limitations that make modern language models sound confident, even when they’re dead wrong. From next-token prediction to refusal-aware tuning, we explain why chatbots break when corrected—and what researchers are doing (or not doing) to fix it. If you’ve ever tried to do serious work with a chatbot and ended up screaming into the void, this one’s for you.

    This episode is based on the article "Why AI Models Always Answer – Even When They Shouldn’t" by Markus Brinsa.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    8 min
  • AI Won’t Make You Happier – And Why That’s Not Its Job
    Jul 15 2025

    It all started with a simple, blunt statement over coffee. A friend looked up from his phone, sighed, and said: “AI will not make people happier.” As someone who spends most days immersed in artificial intelligence, I was taken aback. My knee-jerk response was to disagree – not because I believe AI is some magic happiness machine, but because I’ve never thought that making people happy was its purpose in the first place. To me, AI’s promise has always been about making life easier: automating drudgery, delivering information, solving problems faster. Happiness? That’s a complicated human equation, one I wasn’t ready to outsource to algorithms.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    12 min
  • AI and the Dark Side of Mental Health Support
    Jul 8 2025

    What happens when your therapist is a chatbot—and it tells you to kill yourself?

    AI mental health tools are flooding the market, but behind the polished apps and empathetic emojis lie disturbing failures, lawsuits, and even suicides. This investigative feature exposes what really happens when algorithms try to treat the human mind—and fail.

    This episode is based on the article "The Unseen Toll: AI’s Impact on Mental Health" by Markus Brinsa



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    9 min