
The People's AI: The Decentralized AI Podcast

Author: Jeff Wilser

About this audio

Who will own the future of AI? The giants of Big Tech? Maybe. But what if the people could own AI, not the Big Tech oligarchs? This is the promise of Decentralized AI. And this is the podcast for in-depth conversations on topics like decentralized data markets, on-chain AI agents, decentralized AI compute (DePIN), AI DAOs, and crypto + AI. Hosted by Jeff Wilser, a veteran tech journalist (WIRED, TIME, CoinDesk), host of the "AI-Curious" podcast, and lead producer of Consensus' "AI Summit." Season 3 is presented by Vana.

© 2025 The People's AI: The Decentralized AI Podcast
Episodes
  • The Invisible (and Underpaid) Data Workers Behind the "Magic" of AI
    Dec 3 2025

    Who are the invisible human data-workers behind the “magic” of AI, and what does their work really look like?

    In this episode of The People's AI, presented by Vana, we pull back the curtain on AI data labeling, ghost work, and content moderation with former data worker and organizer Krystal Kauffman and AI researcher Graham Morehead. We hear how low-paid workers around the world train large language models, power RLHF safety systems, and scrub the worst content off the internet so the rest of us never see it.

    We trace the journey from early data labeling projects and Amazon Mechanical Turk to today’s global workforce of AI data workers in the US, Latin America, Kenya, India, and beyond. We talk about trauma, below-minimum-wage pay, and the ethical gray zones of labeling surveillance imagery and moderating violence. We also explore how workers are organizing through projects like the Data Workers Inquiry at the Distributed AI Research Institute (DAIR), and why data sovereignty and user-owned data are part of the long-term solution.

    Along the way, we ask a simple question with complicated answers: if AI depends on human labor, what do those humans deserve?

    Timestamps:

    • 0:02 – Krystal’s life as an AI data worker and the “10 cents a minute” rule
    • 2:40 – What is data labeling, and why AI can’t exist without it
    • 6:20 – RLHF, safety, and the hidden workforce grading AI outputs
    • 9:53 – Amazon Mechanical Turk and building Alexa, image datasets, and more
    • 14:42 – Labeling border crossings and the ethics of unknowable end uses
    • 25:00 – Kenyan content moderators, trauma, and extreme exploitation
    • 32:09 – Turker organizing, Turker-run ratings, and early resistance
    • 33:12 – DAIR, the Data Workers Inquiry, and workers investigating their own workplaces
    • 36:43 – Unionization, political pressure, and reasons for hope
    • 41:05 – Why humans will keep “labeling” AI in everyday life for years to come

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    45 min
  • From Nude Robot Photos to The New York Times Suing OpenAI: How AI Feeds on Your Data, Your Life
    Nov 19 2025

    What if your robot vacuum accidentally leaked naked photos of you onto Facebook—and that was just the tip of the iceberg for how your data trains AI?

    In this episode of The People’s AI, presented by Vana, we kick off Season 3 with a deep-dive primer on the real stakes of AI and data: in our homes, in our work, and across society. We start with a jaw-dropping story from MIT Technology Review senior reporter Eileen Guo, who uncovered how images from “smart” robot vacuums—including a woman on a toilet—ended up in a Facebook group for overseas gig workers labeling training data.

    From there, we zoom out: what did this investigation reveal about how AI systems are actually trained, who’s doing the invisible labor of data labeling, and how consent quietly gets stretched (or broken) along the way? We hear from Professor Alan Rubel about how seemingly mundane data—from smart devices to license-plate readers—feeds powerful surveillance infrastructures and tests the limits of long-standing privacy protections.

    Then we move into the workplace. Partners Jennifer Maisel and Steven Lieberman of Rothwell Figg walk us through the New York Times’ landmark lawsuit against OpenAI and Microsoft, and why they see it as a fight over whether copyrighted work—and the broader creative economy—can simply be ingested as free raw material for AI. We explore what this means not just for journalists, but for anyone whose job involves producing text, images, music, or other digital output.

    Finally, we widen the lens with Michael Casey, chairman of the Advanced AI Society, who argues that control of our data is now inseparable from individual agency itself. If a small number of AI companies own the data that defines us, what does that mean for democracy, power, and the risk of a “digital feudalism”?

    We cover:

    • How a robot vacuum’s “beta testing” led to intimate photos being shared with gig workers abroad
    • Why data labeling and annotation work—often done by low-paid workers in crisis-hit regions—is a critical but opaque part of the AI supply chain
    • How consent language like “product improvement” quietly expands to include AI training
    • The New York Times’ legal theory against OpenAI and Microsoft, and what’s at stake for copyright, fair use, and the creative class
    • How AI-generated “slop” can flood the internet, dilute original work, and undercut creators’ livelihoods
    • Why everyday workplace output—emails, docs, Slack messages, meeting transcripts—may become fuel for corporate AI systems
    • The emerging risks of pervasive data capture, from license-plate tracking to always-on devices, and the pressure this puts on Fourth Amendment protections
    • Michael Casey’s argument that data control is a fundamental human right in the digital age—and what a more decentralized, user-owned future might look like

    Guests

    • Eileen Guo – Senior Reporter, MIT Technology Review
    • Professor Alan Rubel – Director, Information School, University of Wisconsin
    • Jennifer Maisel – Partner, Rothwell Figg, counsel to The New York Times
    • Steven Lieberman – Partner, Rothwell Figg, lead counsel in the NYT v. OpenAI/Microsoft case
    • Michael Casey – Chairman, Advanced AI Society

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    34 min
  • Preserving Privacy in the Age of AI, w/ Marta Belcher and Jiahao Sun
    Aug 8 2025

    How do we protect privacy in an AI-powered world?

    As AI systems become increasingly powerful, they’re also becoming increasingly invasive. The stakes are no longer theoretical — they’re immediate and personal. From hospitals and law firms to small construction firms, businesses across industries are facing a pressing dilemma: how can we unlock the benefits of AI without compromising sensitive data?

    In this episode of The People’s AI, presented by Gensyn, we explore two leading approaches to privacy-preserving AI. First, we speak with Marta Belcher, President of the Filecoin Foundation and a longtime advocate for civil liberties in technology. She breaks down how centralized AI systems threaten privacy and how decentralized, open-source models — like Filecoin — can provide a better alternative. We also dig into why overzealous regulation could backfire and how the stakes go far beyond crypto and into mainstream business.

    Then, we shift to a more technical conversation with Jiahao Sun, CEO of Flock, a startup pioneering federated learning and blockchain-based governance. He walks us through how decentralized training models are already being used in hospitals in the UK and Korea — and what it will take to make private, local, user-controlled AI the norm.

    We cover:

    • How centralized AI supercharges surveillance risk
    • Why federated learning and encryption may hold the key
    • The case for decentralized AI in healthcare and beyond
    • Why tokenomics, staking, and governance matter for AI trust
    • What a privacy-first future of agents and personal models could look like

    This isn’t just a crypto or Web3 issue — it’s a business imperative.
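    The federated-learning idea Jiahao Sun describes can be sketched in a few lines: each site trains a copy of the model on its own data and shares only the updated weights, which a coordinator averages, so raw records never leave the hospital. This is a generic FedAvg illustration under made-up data, not Flock's or Gensyn's actual implementation; every name and number here is hypothetical.

    ```python
    # Minimal federated averaging (FedAvg) sketch for a 1-D linear model
    # y ≈ w * x. Sites share only weights, never their raw (x, y) pairs.
    # Illustrative only; not any real framework's API.

    def local_step(w, data, lr=0.1):
        """One gradient-descent step on a site's private data."""
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        return w - lr * grad

    def federated_average(weights):
        """Coordinator aggregates by averaging the submitted weights."""
        return sum(weights) / len(weights)

    # Two "hospitals" holding private datasets drawn from y = 2x.
    site_a = [(1.0, 2.0), (2.0, 4.0)]
    site_b = [(3.0, 6.0), (4.0, 8.0)]

    w = 0.0  # shared global model
    for _ in range(50):  # each round: local training, then aggregation
        w = federated_average([local_step(w, site_a), local_step(w, site_b)])

    print(round(w, 2))  # converges toward the true slope, 2.0
    ```

    The privacy gain is structural: the coordinator sees only two floats per round, not the underlying records, which is why this pattern suits regulated settings like the UK and Korean hospital deployments mentioned above.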

    Flock:
    https://www.flock.io

    Filecoin:
    https://filecoin.io

    About Gensyn:

    Gensyn is a protocol for machine learning computation. It provides a standardised way to execute machine learning tasks over any device in the world. This aggregates the world's computing supply into a single network, which can support AI systems at far greater scale than is possible today. It is fully open source and permissionless, meaning anyone can contribute to the network or use it.

    Gensyn - LinkedIn - Twitter - Discord

    53 min