Episodes

  • An Environmental Grounding with Masheika Allgood
    Dec 10 2025

    Masheika Allgood delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.

    Masheika and Kimberly discuss her path from law to AI; AI as an embodied infrastructure; forms of beneficial AI; whether the GenAI math maths; narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed-loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; and staying positive and finding the thing you can do.

    Masheika Allgood is an AI Ethicist and Founder of AllAI Consulting. She is a well-known advocate for sustainable AI development and a contributor to the IEEE P7100 Standard for Measurement of Environmental Impacts of Artificial Intelligence Systems.

    Related Resources

    • Taps Run Dry Initiative (Website)
    • Data Center Advocacy Toolkit (Website)
    • Eat Your Frog (Substack)
    • AI Data Governance, Compliance, and Auditing for Developers (LinkedIn Learning)
    • A Mind at Play: How Claude Shannon Invented the Information Age (Referenced Book)

    A transcript of this episode is here.

    57 min
  • Your Digital Twin Is Not You with Kati Walcott
    Nov 26 2025

    Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.

    Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.

    Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.

    Related Resources

    • The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior (LinkedIn Article)

    A transcript of this episode is here.

    53 min
  • No Community Left Behind with Paula Helm
    Nov 12 2025

    Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone.

    Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others’ knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

    Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.

    Related Resources

    • Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
    • Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6

    A transcript of this episode is here.

    52 min
  • What AI Values with Jordan Loewen-Colón
    Oct 29 2025

    Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.

    Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone’s radar; failures of policy and regulation; positive signals; and getting educated and fostering the best AI has to offer.

    Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab, which is dedicated to ensuring AI serves the public good and not just private gain.

    Related Resources

    • HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
    • AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication

    A transcript of this episode is here.

    52 min
  • Agentic Insecurities with Keren Katz
    Oct 15 2025

    Keren Katz exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.

    Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI Security and Governance report; ransomware 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; and proactive AI/agentic security policies and incident response plans.

    Keren Katz is the Senior Group Manager of Threat Research, Product Management and AI at Tenable, and a contributor at both the Open Worldwide Application Security Project (OWASP) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.

    Related Resources

    • The Silent Breach: Why Agentic AI Demands New Oversight (article)
    • State of Agentic AI Security and Governance (whitepaper): https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/
    • The LLM Top 10: https://genai.owasp.org/llm-top-10/

    A transcript of this episode is here.

    49 min
  • To Be or Not to Be Agentic with Maximilian Vogel
    Oct 1 2025

    Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.

    Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule, not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs, not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; and AI agents as a support team and the implications for human work.

    Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.

    Related Resources

    • Medium (blog): https://medium.com/@maximilian.vogel

    A transcript of this episode is here.

    51 min
  • The Problem of Democracy with Henrik Skaug Sætra
    Sep 17 2025

    Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society.

    Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.

    Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at the University of Oslo. He is also the CEO of Pathwais.eu, which connects strategy, uncertainty, and action through scenario-based risk management.

    Related Resources

    • Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
    • How to Save Democracy from AI (Book – Norwegian): https://www.norli.no/9788202853686
    • AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
    • Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL

    A transcript of this episode is here.

    54 min
  • Generating Safety Not Abuse with Dr. Rebecca Portnoff
    Aug 20 2025

    Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.

    Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn’s Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.

    Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn’s seminal Safety by Design paper, bookmark the Research Center to stay updated and support Thorn’s critical work by donating here.

    Related Resources

    • Thorn’s Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/
    • Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/
    • Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report

    A transcript of this episode is here.

    47 min