
Inside the Research: Interpretability-Aware Pruning for Efficient Medical Image Analysis
About this episode
In this episode, we explore the intersection of model compression and interpretability in medical AI with the authors of the newly published research paper, Interpretability-Aware Pruning for Efficient Medical Image Analysis. Join us as Vinay Kumar Sankarapu, Pratinav Seth and Nikita Malik from AryaXAI discuss how their framework enables deep learning models to be pruned using attribution-based methods—retaining critical decision-making features while drastically reducing model complexity.
We cover:
- Why traditional pruning fails to account for interpretability
- How techniques like DL-Backtrace (DLB), Layer-wise Relevance Propagation (LRP), and Integrated Gradients (IG) inform neuron importance (see the sketch after this list)
- Results from applying this method to VGG19, ResNet50, and ViT-B/16 across datasets such as MURA, KVASIR, CPN, and Fetal Planes
- Practical implications for healthcare AI deployment, edge inference, and clinical trustworthiness
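To give a concrete feel for attribution-guided pruning, here is a minimal PyTorch sketch. It is not the authors' DLB/LRP/IG pipeline from the paper: it uses a simple gradient-times-activation score as a stand-in attribution, a randomly initialized VGG19 and a random calibration batch as placeholders, and zeroes out the lowest-scoring channels of one convolutional layer.

```python
# Minimal sketch of attribution-guided channel pruning.
# Assumptions: gradient x activation stands in for the attribution methods
# discussed in the episode (DLB/LRP/IG); the model, layer choice, and
# calibration data are placeholders, not the paper's setup.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.vgg19(weights=None)   # architecture only; load your own weights
model.eval()

layer = model.features[2]            # an early conv layer, chosen for illustration
activations = {}

def save_activation(_, __, output):
    output.retain_grad()             # keep gradients on the non-leaf activation
    activations["out"] = output

hook = layer.register_forward_hook(save_activation)

# Calibration batch (random here; in practice, held-out medical images).
x = torch.randn(8, 3, 224, 224)
logits = model(x)
logits.max(dim=1).values.sum().backward()    # attribute the top-class scores

act = activations["out"]                      # shape (N, C, H, W)
# Per-channel importance: mean |activation * gradient| over batch and space.
importance = (act * act.grad).abs().mean(dim=(0, 2, 3))

# Prune (zero out) the 30% least important output channels of this layer.
k = int(0.3 * importance.numel())
prune_idx = importance.argsort()[:k]
with torch.no_grad():
    layer.weight[prune_idx] = 0.0
    if layer.bias is not None:
        layer.bias[prune_idx] = 0.0

hook.remove()
print(f"Zeroed {k} of {importance.numel()} channels")
```

In practice the per-neuron scores would come from the attribution method itself, and pruned channels would be removed structurally (and the model fine-tuned) rather than simply zeroed, but the scoring-then-thresholding loop above is the core idea the guests describe.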
Whether you're a machine learning researcher, AI engineer in medtech, or working on explainable AI (XAI) for regulated environments, this conversation unpacks how to build models that are both efficient and interpretable—ready for the real world.
📄 Read the full paper: https://arxiv.org/abs/2507.08330