AI in the shadows: From hallucinations to blackmail

About this audio

In the first episode of the "AI in the shadows" theme, Chris and Daniel explore the increasingly concerning world of agentic misalignment. Starting with a reminder about hallucinations and reasoning models, they break down how today's models only mimic reasoning, which can lead to serious ethical considerations. They unpack a fascinating (and slightly terrifying) new study from Anthropic, in which agentic AI models were caught simulating blackmail, deception, and even sabotage — all in the name of goal completion and self-preservation.

Featuring:

  • Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
  • Daniel Whitenack – Website, GitHub, X

Links:

  • Agentic Misalignment: How LLMs could be insider threats
  • Hugging Face Agents Course

Register for upcoming webinars here!
