
AI in practice: Guardrails and security for LLMs

About this episode

In this episode, we talk about practical guardrails for LLMs with data scientist Nicholas Brathwaite. We focus on how to stop PII leaks, secure data retrieval, and evaluate safety within real-world constraints. We weigh managed solutions like AWS Bedrock against open-source approaches and discuss when to skip LLMs altogether.

• Why guardrails matter for PII, secrets, and access control
• Where to place controls across prompt, training, and output
• Prompt injection, jailbreaks, and adversarial handling
• RAG design with vector DB separation and permissions
• Evaluation methods, risk scoring, and cost trade-offs
• AWS Bedrock guardrails vs open-source customization
• Domain-adapted safety models and policy matching
• When deterministic systems beat LLM complexity

This episode is part of our "AI in Practice" series, where we invite guests to talk about the reality of their work in AI, from hands-on development to scientific research. Be sure to check out other episodes under this heading in our listings.

Related research:

  • Building trustworthy AI: Guardrail technologies and strategies (N. Brathwaite)
  • Nic's GitHub


What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! Your feedback continues to inspire future episodes.