Episodes

  • Privacy, Identity and Trust in C2PA, An Explainer Series - Part II: Identity in C2PA
    Sep 25 2025

    If you're looking for an accessible overview of how C2PA (aka Content Credentials) works technically and how it relates to privacy, identity, and trust, this is it!

    In our first episode in this explainer series about C2PA, we shared an overview of how C2PA works and what it does in relation to privacy, identity, and trust. In this second episode, we dig deeper into identity in C2PA. You'll learn about what sorts of users want identity to be attached to content metadata (and who don't), and about the risks and tradeoffs of identity in relation to C2PA. And you'll hear from Pam Dixon, World Privacy Forum's executive director, about existing digital identity systems, regulations and infrastructure around the world and how they could relate to C2PA.

    This episode of Privacy on the Ground features music by Liam Back and Speedtest. The Privacy on the Ground intro theme features music by Pangal.

    Featured in this episode:

    • Kate Kaye, Deputy Director of World Privacy Forum
    • Pam Dixon, Executive Director of World Privacy Forum
    27 min
  • Privacy, Identity and Trust in C2PA: An Explainer Series (Part 1)
    Sep 3 2025

    If you're looking for an accessible overview of how C2PA works technically and how it relates to privacy, identity, and trust, this is it!

    Imagine a system that automatically generates detailed data showing where the digital images, videos and documents we encounter came from, who made them, how they have changed, who owns the rights to their use, and even whether AI was used in their creation.

    Some say C2PA (Coalition for Content Provenance and Authenticity) promises to be just that. C2PA is a technical framework for connecting digital media content such as images and videos to data about the origins of and changes made to that content. But it is not just a "content labeling" system. C2PA is intended to provide signals for gauging trustworthiness of content, kind of like provenance documentation indicating the authenticity of an oil painting or showing how some ancient artifact changed hands over time.

    But how does C2PA really work? How does it relate to privacy, identity, and trust? And what could its use mean for our information and data ecosystem? It's too early to know whether C2PA will be one of those behind-the-scenes systems that shift the tectonic plates of our digital media ecosystem. But it's the right time to take a step back and assess what we do know about C2PA and what it could mean – not just for the future of digital information but for our connections to it.

    This episode of Privacy on the Ground features music by Liam Back and Speedtest. The Privacy on the Ground intro theme features music by Pangal.

    24 min
  • What We Learn When We Put AI Governance Tools to Use
    Jul 25 2025

    We know from World Privacy Forum's 2023 report on AI governance tools, Risky Analysis, that these tools can have problems and should be assessed before they're deployed. But what do we learn about AI governance tools when they are actually put to use? This was the focus of recent research discussed in a paper by World Privacy Forum deputy director Kate Kaye. In this short episode of Privacy on the Ground, Kaye discusses her research, which she recently presented at the Fourth European Workshop on Algorithmic Fairness, an academic conference in the Netherlands.

    This episode of Privacy on the Ground features music by Maciej Sadowski. The Privacy on the Ground intro theme features music by Pangal.

    Read Kate Kaye's paper, "Uncovering Areas for AI Governance Tools Refinement through Real-World Use Case Analysis from Canada, Chile and Singapore," in the Proceedings of the Fourth European Workshop on Algorithmic Fairness:
    https://proceedings.mlr.press/v294/kaye25a.html

    Read World Privacy Forum's 2023 report on AI governance tools, "Risky Analysis: Assessing and Improving AI Governance Tools, An international review of AI Governance Tools and suggestions for pathways forward":
    https://worldprivacyforum.org/posts/new-report-risky-analysis-assessing-and-improving-ai-governance-tools/

    15 min
  • Lawyer Will Tao on the Real-World Impacts of AI and Canada's Algorithmic Impact Assessments on Immigrants
    Jul 12 2025

    Will Tao knows first-hand how automated, algorithmic and machine learning systems used by Canada's government affect lives. The founder of Heron Law Offices in Burnaby, British Columbia and co-founder of AIMICI (the AI Monitor for Immigration in Canada and Internationally) practices immigration, refugee and citizenship law in Canada. He has watched as these systems automatically determine or inform decisions affecting the lives of his clients, sometimes influencing whether they can legally work, and even whether they must separate from their spouses or children. In this interview recorded in November 2024, World Privacy Forum's Kate Kaye talks with Tao about how the use of algorithmic systems by Canada's immigration agency affects his clients, his experiences with Canada's Algorithmic Impact Assessments, and what he hopes to see change in relation to AI use and AI governance in Canada.

    Featured in this episode:

    • Will Tao, Founder of Heron Law Offices in Burnaby, British Columbia and co-founder of AIMICI (the AI Monitor for Immigration in Canada and Internationally)
    • Kate Kaye, Deputy Director of World Privacy Forum

    This episode of Privacy on the Ground features music by Maciej Sadowski. The Privacy on the Ground intro theme features music by Pangal.

    44 min
  • Assessing Chile's Medical Claims Model: An AI Governance Metrics Deep Dive with Mariana Germán
    May 28 2025

    When governments create AI governance policy tools, how are they used in real-world situations? What does the process of assessing a machine learning model used by a government agency look like? In this episode of Privacy on the Ground, you'll hear all about it from an insider: Mariana Germán, a researcher in the Ethical Algorithms Project at GobLab UAI, the public innovation laboratory at Chile's Universidad Adolfo Ibáñez's School of Government. Germán and the team at GobLab helped assess a machine learning model in development at Chile's health agency, the Department of Social Security Superintendence (SUSESO), to help decide medical claims. In this full interview recorded in September 2024, Germán and World Privacy Forum Deputy Director Kate Kaye dig deep into the metrics and measurements used to assess the model and its risks of producing discriminatory decisions, discussing the caveats of the AI governance tools, measures and metrics themselves, and how they were applied.

    Featured in this episode:

    • Mariana Germán, researcher in the Ethical Algorithms Project at GobLab, the public innovation laboratory at Chile's Universidad Adolfo Ibáñez's School of Government
    • WPF's deputy director and Privacy on the Ground host and producer Kate Kaye

    This episode of Privacy on the Ground features music by Maciej Sadowski. The Privacy on the Ground intro theme features music by Pangal.

    56 min
  • Why Rodrigo Moya Changed His Mind about Chile's AI Governance Tool for Assessing a Medical Insurance Claims AI Model
    Apr 29 2025

    Inside Chile's Department of Social Security Superintendence — the country's social security and medical insurance agency — medical claims processors hold the livelihoods and future health of thousands of people in their hands. They are responsible for deciding whether or not the government should pay wages when workers are on medical leave or cover other expenses such as occupational mental health related costs.

    Like many government agencies these days, the agency, known by its acronym SUSESO, has begun to use machine learning models to help its limited staff process a high volume of medical claims. The idea is to streamline and in some cases automate certain parts of that claims evaluation process.

    The use of AI, and of the tools used to govern and assess these systems, has upended traditional government processes. And in Chile, SUSESO project manager Rodrigo Moya is caught in the middle. Moya heads up the Digital Transformation, Innovation and Project Unit in SUSESO's Technology and Operations Department. He must balance project time and resource constraints with the need to analyze the risks and impacts of AI.

    In this episode of Privacy on the Ground, you'll hear the story of how Moya and others at SUSESO have used Chile's AI governance tool requiring assessment of AI systems as part of the AI procurement process, and about how Moya has navigated tensions regarding use of automation when it comes to risky government decision making affecting people's lives.

    Featured in this episode:

    • Rodrigo Moya, head of the Digital Transformation, Innovation and Project Unit in the Technology and Operations Department at SUSESO
    • Mariana Germán, researcher in the Ethical Algorithms Project at GobLab, the public innovation laboratory at Chile's Universidad Adolfo Ibáñez's School of Government
    • WPF's deputy director and Privacy on the Ground host and producer Kate Kaye

    This episode of Privacy on the Ground features music by Maciej Sadowski. The Privacy on the Ground intro theme features music by Pangal.

    29 min
  • How AI Governance Tools Put Policy into Practice in Canada and Chile
    Apr 15 2025

    There's no shortage of principles and policies for governing AI from governments and NGOs around the world. But how do those organizations put their principles and policies into practice? It's that practical side of AI governance that has been a key focus of our work at World Privacy Forum for more than two years.

    Rather than look only at government policies, in early 2023 we went layers deeper, looking at the tools that governments and NGOs around the world—from Canada to Chile to Ghana to New Zealand to Singapore—have developed for actually implementing those AI policies.

    Since then, we have observed actual use of these tools to understand how they govern and measure AI and spot where there's room for improvement. Key to that work has been talking to people who have actually used those AI governance tools, including people in Canada and Chile.

    WPF's forthcoming Privacy on the Ground series—AI Governance Tools on the Ground—features talks with some of those people. In this episode introducing the series, you'll hear WPF's founder and executive director, Pam Dixon, along with WPF's deputy director and Privacy on the Ground host and producer Kate Kaye, discuss what led to this work and how AI governance tools have evolved.

    This episode of Privacy on the Ground features music by Maciej Sadowski. The Privacy on the Ground intro theme features music by Pangal.

    21 min
  • Emotion Recognition and What Nazanin Andalibi's Research Tells Us about Its Impacts
    Jan 29 2025

    Emotion recognition is baked into all sorts of software and systems many of us use or experience every day, from video call systems measuring the "mood" at a work meeting, to systems used to gauge distraction at school, or impairment or anger of drivers inside their cars. Despite their increasing proliferation, emotion recognition systems, and the data use embedded in them, create significant privacy impacts.

    What is emotion recognition? Would fixing inaccuracy problems in these systems alleviate the potential harms they enable? Should emotion-related data be recognized as a sensitive type of information, along with health, financial and other sensitive data? How might policymakers address the potential harms of emotion recognition? Dr. Nazanin Andalibi, Assistant Professor at the University of Michigan School of Information, has a lot to say about all this, and she has the research to back it up.

    World Privacy Forum Deputy Director Kate Kaye interviewed Dr. Andalibi in June 2024 in Rio de Janeiro, Brazil. This episode of Privacy on the Ground features music by Old Wave. The Privacy on the Ground intro theme features music by Pangal.

    39 min