
Sorry Again! Why Chatbots Can’t Take Criticism (and Just Make Things Worse)


About this audio

Chatbots Behaving Badly returns for Season 2—and we’re kicking things off with the single most frustrating thing about AI assistants: their inability to take feedback without spiraling into nonsense. Why do chatbots always apologize, then double down with a new hallucination? Why can’t they say “I don’t know”? Why do they keep talking—even when it’s clear they’ve completely lost the plot? This episode unpacks the design flaws, training biases, and architectural limitations that make modern language models sound confident, even when they’re dead wrong. From next-token prediction to refusal-aware tuning, we explain why chatbots break when corrected—and what researchers are doing (or not doing) to fix it. If you’ve ever tried to do serious work with a chatbot and ended up screaming into the void, this one’s for you.

This episode is based on the article "Why AI Models Always Answer – Even When They Shouldn’t" by Markus Brinsa.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com