Controlling AI Models from the Inside

About this episode

As generative AI moves into production, traditional guardrails and input/output filters can prove too slow, too expensive, or too limited. In this episode, Alizishaan Khatri of Wrynx joins Daniel and Chris to explore a fundamentally different approach to AI safety and interpretability. They unpack the limits of today's black-box defenses, the role of interpretability, and how model-native, runtime signals can enable safer AI systems.

Featuring:

  • Alizishaan Khatri – LinkedIn
  • Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
  • Daniel Whitenack – Website, GitHub, X

Upcoming Events:

  • Register for upcoming webinars here!