Trustworthy AI: De-risk business adoption of AI

Author(s): Pamela Gupta

About this audio

Description: Creating trust in AI is a complex, hard problem: it is not obvious what Trustworthy AI means or how it can be operationalized. This podcast demystifies Trustworthy AI, its efficient adoption, and how to leverage it to reduce risk in AI programs.
McKinsey research indicates that companies seeing the biggest bottom-line returns from AI (those that attribute at least 20 percent of EBIT or profitability to their use of AI) are more likely than others to follow Trustworthy AI best practices, including explainability. Further, organizations that establish digital trust among consumers through responsible practices such as making AI explainable are more likely to see their annual revenue and profitability grow at rates of 10 percent or more.

© 2025 Trustworthy AI : De-risk business adoption of AI
Economics
Episodes
  • AI Governance for AI Value
    Dec 1 2025

    75% of companies are now using generative AI. But only a third have responsible controls in place. That's not just a statistic—it's a ticking time bomb.

    Today, I'm speaking with Dr. Paul Dongha, Head of Responsible AI at NatWest Group and co-author of the newly released 'Governing the Machine.' He's spent three decades bridging AI innovation with ethical implementation in one of the world's most regulated industries. If you want to know how to make AI governance an accelerator rather than a blocker, this is the conversation you need to hear.

    If you're navigating the EU AI Act, building assurance platforms, or trying to earn customer trust while scaling AI, this conversation provides the roadmap. We compare notes on my AI TIPS model for operationalizing AI governance and the framework that Ray Eitel-Porter, Paul Dongha, and Miriam Vogel present in Governing the Machine.

    Can Trustworthy AI help de-risk the adoption of AI? Can it be instrumental in helping organizations gain a competitive edge and promote better business outcomes, including accelerated innovation with AI?
    With extensive experience in global industry leadership across business strategy, technology, and cybersecurity, Pamela helps clients create a strategic approach to achieving business value with AI by adopting a holistic, risk-based approach to AI trust. She defined 8 essential pillars of Trustworthy AI. Read more at the Trustedai.ai website.

    Her insights have shaped the way we look at the impact of cyberwarfare on business, strategies for efficient digital transformation, and governance views on algorithmic failures.

    Join Pamela as she delves into her signature framework, AI TIPS, standing for Artificial Intelligence Trust, Integrity, Pillars and Sustainability. This podcast is all about operationalizing governance and building Trustworthy AI systems from the ground up.

    For questions or comments on this podcast, reach out to me.

    54 min
  • AI Cyber Threats at Warp Speed: Decoding the Attack Flow with MITRE ATLAS
    Oct 30 2025

    Is your organization ready for the AI Cybersecurity threat wave? What is the role of AI Cybersecurity in a holistic AI Governance program?

    What are the Industry partnerships from MITRE that every organization should be aware of and why?

    The landscape of AI risk is evolving at an accelerated rate, demanding a security framework built specifically for the unique attack surfaces of Machine Learning and Generative AI. Join host Pamela Gupta as she welcomes Walker Dimon, the MITRE ATLAS Lead, who is focused on advancing security for these rapidly evolving AI systems.

    This conversation reveals the critical flow and severity of modern AI threats:

    • Mapping the Adversary's Path: The MITRE ATLAS Matrix organizes the progression of attack tactics, providing practitioners with a common language and taxonomy for AI threats.

    • New, Realized Threats: The focus has shifted from predictive AI attacks (like data poisoning) to complex generative AI exploits. Walker explains that ATLAS techniques are only added if they are "realized"—meaning there is real-world evidence of actual adversaries using these TTPs against victim systems.

    • The LLM Evolution: Learn about the need for new attack taxonomies, including the recent addition of triggered injection, to capture the delayed adversarial behavior unique to complex agentic AI systems.

    • Walker explains how CISOs can immediately use ATLAS for threat modeling by mapping data flows and user access points to the matrix.

    • ATLAS also serves as a resource for mitigation, offering strategies and exemplars such as using open-repository guardrail packages (e.g., NeMo Guardrails) to define boundary conditions and prevent system compromise.

    Tune in to understand the dynamic nature of AI risks and get actionable guidance on leveraging the MITRE ATLAS Matrix to build trustworthy, safe, and secure AI systems. We discuss red teaming, prompt injection attacks, and the newly introduced "triggered injection" category. The agentic AI attack I covered in depth in my last episode was an example of this new category.

    Also, Pamela poses a lightning question: which AI security myth should we retire, and what is the most under-hyped attack vector?

    Walker’s response may surprise you.

    Finally, thanks to our sponsor RecordPoint; you can learn more about their unified data and governance platform.

    42 min
  • Business Impact of Weaponized AI Agents
    Oct 21 2025

    I'm Pamela Gupta—2025 Joseph J. Wasserman Award Honoree, the highest honor in information security and risk governance. I'm globally ranked number three in Risk Management and number seven in Cybersecurity by Thinkers360.

    But here's what I'm most proud of: I help organizations turn AI from a risk into revenue. In my work across 120 countries and with Fortune 500 companies, I've operationalized AI governance frameworks that don't just check compliance boxes—they enable business teams to launch AI initiatives in 60 days instead of staying stuck for months.

    I created the AI TIPS framework—Trust, Integrity, Pillars, and Sustainability—four years before NIST published their AI Risk Management Framework. I've advised the U.S. Department of Defense on AI strategy. I've built AI Centers of Excellence for critical infrastructure companies. And I've designed governance systems on platforms like IBM watsonx that automate policy enforcement while enabling innovation at scale.

    My mission is simple: de-risk AI adoption so organizations can confidently embrace the most transformative technology of our generation. Because when AI governance is done right, it's not a barrier, it's an accelerator.

    Today we're going to talk about one of the most significant AI security vulnerabilities discovered in 2024, and why it matters to every organization deploying AI agents.

    This is ForcedLeak. CVSS 9.4. Critical severity. It affected Salesforce Agentforce, a platform used by thousands of enterprise customers.

    This episode is for any and every organization to hear and act on as AI gets integrated into every product globally.

    Questions or comments? Contact me at https://www.linkedin.com/in/buildingtrustedaiholistically/

    37 min