
Dennis Wei from IBM on In-Context Explainability and the Future of Trustworthy AI


About this audio

Dennis Wei, Senior Research Scientist at IBM specializing in human-centered trustworthy AI, speaks with Pitt's HexAI podcast host Jordan Gass-Pooré about his work on trustworthy machine learning, including the interpretability of machine learning models, algorithmic fairness, robustness, causal inference, and graphical models.


Focusing on explainable AI, they discuss in depth the explainability of large language models (LLMs), the emerging field of in-context explainability, and IBM's new In-Context Explainability 360 (ICX360) toolkit. They explore research project ideas for students, touch on personalizing explainability outputs for different users, and consider leveraging explainability to guide and optimize LLM reasoning. They also discuss IBM's interest in collaborating with university labs on explainable AI in healthcare, as well as related IBM work on the steerability of LLMs and on combining explainability with steerability to evaluate model modifications.


This episode provides a deep dive into explainable AI, exploring how the field's cutting-edge research is contributing to more trustworthy applications of AI in healthcare. The discussion also highlights emerging research directions ideal for stimulating new academic projects and university-industry collaborations.
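For readers new to the topic, the sketch below illustrates one common flavor of in-context explainability: perturbation-based attribution, where each segment of the input context is removed in turn and the resulting drop in the model's confidence in its answer is taken as that segment's importance. This is a generic illustration of the idea, not the ICX360 API; the function names, the toy overlap scorer, and the example segments are hypothetical stand-ins for an LLM log-probability score.

```python
# Minimal sketch of perturbation-based in-context explainability.
# Importance of a context segment = drop in the answer's score when
# that segment is ablated. The toy scorer below stands in for an
# LLM's log P(answer | context).
import re
from typing import Callable, List, Tuple

def context_attributions(
    segments: List[str],
    answer: str,
    score: Callable[[str, str], float],
) -> List[Tuple[str, float]]:
    """Leave-one-out attribution over context segments."""
    base = score(" ".join(segments), answer)
    attributions = []
    for i in range(len(segments)):
        ablated = " ".join(segments[:i] + segments[i + 1:])
        attributions.append((segments[i], base - score(ablated, answer)))
    return attributions

if __name__ == "__main__":
    # Toy scorer: fraction of answer words present in the context,
    # a crude proxy for the model's confidence in the answer.
    def toy_score(context: str, answer: str) -> float:
        ctx_words = set(re.findall(r"[a-z']+", context.lower()))
        ans_words = re.findall(r"[a-z']+", answer.lower())
        return sum(w in ctx_words for w in ans_words) / len(ans_words)

    segments = [
        "The patient was prescribed metformin.",
        "Follow-up is scheduled in six weeks.",
        "Blood glucose levels remain elevated.",
    ]
    for seg, imp in context_attributions(
        segments, "metformin for elevated glucose", toy_score
    ):
        print(f"{imp:+.2f}  {seg}")
```

Running this prints the largest importance for the segments that actually support the answer (the prescription and the glucose finding) and roughly zero for the scheduling sentence, which is the kind of context-level attribution that in-context explainability methods aim to produce for real LLMs.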


Guest profile: https://research.ibm.com/people/dennis-wei

ICX360 Toolkit: https://github.com/IBM/ICX360
