#443: Generative AI in MedTech: Quality, Risks, and the Autonomy Scale with Ashkon Rasooli
About this episode
In this episode, host Etienne Nichols sits down with Ashkon Rasooli, founder of Ingenious Solutions and a specialist in Software as a Medical Device (SaMD). The conversation previews their upcoming session at MD&M West, focusing on the critical intersection of generative AI (GenAI) and quality assurance. While many AI applications exist in MedTech, GenAI presents unique challenges because it creates new data—text, code, or images—rather than simply classifying existing information.
Ashkon breaks down the specific failure modes unique to generative models, most notably "hallucinations." He explains how these outputs can appear legitimate while being factually incorrect, and explores the cascading levels of risk this poses. The discussion moves from simple credibility issues to severe safety concerns when AI-generated data is used in critical clinical decision-making without proper guardrails.
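To make that failure mode concrete, here is a toy grounding check in Python. It flags numeric claims in generated text that never appear in the source record, one narrow signal of a plausible-but-unsupported output. The function and the example strings are illustrative assumptions, not anything described in the episode.

```python
# Toy grounding check for one hallucination pattern: numeric claims in
# generated text that do not appear in the source record. The rule is
# deliberately naive; real hallucination checks are far more involved.
import re


def _numbers(text: str) -> set[str]:
    """Extract every integer or decimal literal from the text."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))


def ungrounded_numbers(source: str, generated: str) -> set[str]:
    """Return numbers asserted in `generated` that never occur in `source`."""
    return _numbers(generated) - _numbers(source)


source = "Blood pressure recorded at 128/82; heart rate 71 bpm."
generated = "Patient's blood pressure was 128/82 with a heart rate of 91 bpm."
print(ungrounded_numbers(source, generated))  # {'91'}: plausible but unsupported
```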
The episode concludes with a forward-looking perspective on how validation is shifting. Ashkon argues that because GenAI behavior is statistical rather than deterministic, traditional pre-market validation is no longer sufficient. Instead, a robust quality framework must include continuous post-market surveillance and real-time independent monitoring to ensure device safety and effectiveness over time.
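The shift Ashkon describes can be sketched as a small monitoring loop: production outputs are screened, the running flag rate is tracked, and drift past an acceptance threshold escalates for review. Everything here (the class, the thresholds, the simulated traffic) is a hypothetical illustration of continuous post-market surveillance, not a framework from the episode.

```python
# Minimal sketch of post-market drift monitoring: track the rate of
# flagged outputs in production and alert once it exceeds a predefined
# acceptance threshold. All names and numbers are illustrative.
from dataclasses import dataclass
import random


@dataclass
class PostMarketMonitor:
    flag_rate_threshold: float = 0.05  # acceptable share of flagged outputs
    min_sample: int = 100              # withhold judgment on small samples
    seen: int = 0
    flagged: int = 0

    def record(self, was_flagged: bool) -> None:
        self.seen += 1
        if was_flagged:
            self.flagged += 1

    def drifting(self) -> bool:
        if self.seen < self.min_sample:
            return False
        return self.flagged / self.seen > self.flag_rate_threshold


if __name__ == "__main__":
    monitor = PostMarketMonitor()
    # Stand-in for real traffic: an upstream checker flags each output
    # with 8% probability, above the 5% acceptance threshold.
    random.seed(0)
    for _ in range(1_000):
        monitor.record(random.random() < 0.08)
        if monitor.drifting():
            print(f"Drift alert after {monitor.seen} outputs "
                  f"(flag rate {monitor.flagged / monitor.seen:.1%})")
            break
```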
Key Timestamps
- 01:45 - Introduction to MD&M West and the "AI Guy for SaMD," Ashkon Rasooli.
- 04:12 - Defining Generative AI: How it differs from traditional machine learning and image recognition.
- 06:30 - Hallucinations: Exploring failure modes where AI creates plausible but false data.
- 08:50 - The Autonomy Scale: Applying the 34971 standard (guidance on applying ISO 14971 to machine learning) to determine the level of human supervision required.
- 12:15 - Regulatory Gaps: Why no generative AI medical devices have been cleared by the FDA yet.
- 15:40 - Safety by Design: Using "independent verification agents" to monitor AI outputs in real-time.
- 19:00 - The Shift to Post-Market Validation: Why 90% validation at launch requires 10% continuous monitoring.
- 22:15 - Comparing AI to Laboratory Developed Tests (LDTs) and the role of the expert user.
Quotes"Hallucinations are just a very familiar form of failure modes... where the product creates sample data that doesn't actually align with reality." - Ashkon Rasooli"Your validation plan isn't just going to be a number of activities you do that gate release to market; it is actually going to be those plus a number of activities you do after market release." - Ashkon RasooliTakeaways
- Right-Size Autonomy: Match the AI’s level of independence to the risk of the application. High-risk diagnostic tools should have lower autonomy (Level 1-2), while administrative tools can operate more freely.
- Implement Redundancy: Use a "two is one" approach by employing an independent AI verification agent to check the primary model’s output against safety guidelines before it reaches the user.
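A minimal sketch of that redundancy pattern, assuming a hypothetical primary_model and a deliberately naive phrase-list verifier standing in for an independent verification model: the verifier gates every draft, and anything that fails the screen is withheld for human review rather than released (fail closed).

```python
# Sketch of an independent verification agent gating a primary model's
# output before it reaches the user. Both "models" are hypothetical
# stand-ins; in practice the verifier would be a separately developed model.
def primary_model(prompt: str) -> str:
    """Stand-in for the primary generative model."""
    return f"Draft response to: {prompt}"


BLOCKED_PHRASES = ("definitive diagnosis", "discontinue medication")


def verification_agent(output: str) -> bool:
    """Return True only if the draft passes the safety screen.
    A real verifier would be an independent model, not a phrase list."""
    return not any(phrase in output.lower() for phrase in BLOCKED_PHRASES)


def answer(prompt: str) -> str:
    draft = primary_model(prompt)
    if verification_agent(draft):
        return draft
    # Fail closed: withhold the output and route to a human reviewer.
    return "Output withheld pending human review."


print(answer("Summarize the patient's imaging history."))
# A draft containing e.g. "definitive diagnosis" would be withheld instead.
```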