Episodes

  • AI Agents Shift, Not SAVE, Your Time (Don't Be Fooled by Marketing Hype)
    Dec 10 2025

    What happens when you automate away a six-hour task? You don't get more free time ... you just do more work.

    In this impromptu conversation, Kimberly and Jessica break down what agentic AI actually does, why the "time savings" narrative misses the point entirely, and how to figure out which workflows are worth automating.

    WHAT WE COVER:

    • What agentic AI actually is (and how it's different from ChatGPT)
    • Jessica's real invoice automation workflow: how she turned 6 hours of manual work into an AI agent task
    • The framework for identifying automatable workflows (repetitive, skill-free, multi-step tasks)
    • Why this beats creative AI work: no judgment calls, just execution
    • The Blackboard experiment: what happens when an agent does something you didn't ask it to do
    • Security & trust: passwords, login credentials, and where your data actually goes
    • Enterprise-level agent solutions (and why they're not quite ready yet)
    • The uncomfortable truth: freed-up time doesn't mean fewer hours—it means more output
    • How detailed instruction manuals prepared Jessica for prompt engineering
    • The human bottleneck: why your whole organization has to move at the same speed
    • Why marketing and research are next on the chopping block
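    The triage framework above (repetitive, no judgment calls, multi-step) can be sketched as a simple checklist. This is only an illustration of the episode's criteria; the workflow names and fields below are hypothetical, not taken from Jessica's actual setup:

    ```python
    # Toy triage for automation candidates, per the episode's three criteria:
    # a workflow is worth handing to an agent when it is repetitive, needs no
    # human judgment, and spans multiple steps.
    from dataclasses import dataclass

    @dataclass
    class Workflow:
        name: str
        repetitive: bool      # done the same way on a regular schedule
        needs_judgment: bool  # requires human calls an agent can't make
        steps: int            # number of discrete steps in the workflow

    def is_automation_candidate(w: Workflow) -> bool:
        """Apply all three criteria from the episode's framework."""
        return w.repetitive and not w.needs_judgment and w.steps > 1

    invoicing = Workflow("monthly invoicing", repetitive=True,
                         needs_judgment=False, steps=6)
    strategy = Workflow("annual strategy memo", repetitive=False,
                        needs_judgment=True, steps=4)

    print(is_automation_candidate(invoicing))  # True
    print(is_automation_candidate(strategy))   # False
    ```

    The point of the checklist is the same as the episode's: execution-only workflows pass, anything with judgment calls fails.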

    TOOLS MENTIONED:

    • ChatGPT Pro with Agents — https://openai.com/chatgpt/
    • Perplexity Comet (agentic browser) — https://www.perplexity.ai/comet
    • Zoho Billing — https://www.zoho.com/billing/
    • Constant Contact — https://www.constantcontact.com
    • Zapier — https://zapier.com
    • Elicit (systematic reviews & literature analysis) — https://elicit.com
    • Corpus of Contemporary American English — https://www.english-corpora.org/coca/
    • Descript — https://www.descript.com
    • Canva — https://www.canva.com
    • Riverside.fm — https://riverside.fm

    TIMESTAMPS:

    • 0:00 — Opening & guest cancellation
    • 1:18 — Podcast website & jingle development (and why music taste is complicated)
    • 6:34 — What is agentic AI? Jessica's invoice automation example
    • 10:33 — Why this use case actually works
    • 14:15 — The Blackboard incident (when the agent went off-script)
    • 16:21 — Security concerns: passwords, login credentials, and trust
    • 18:35 — Why speed doesn't matter (as long as it's faster than the human bottleneck)
    • 19:27 — Enterprise solutions on the horizon
    • 20:57 — United Airlines cease-and-desist letters for replica training sites
    • 22:27 — Why Kimberly can't use agents in her CCRC work
    • 25:21 — How to identify your automatable workflows (the practical framework)
    • 27:57 — Research automation with Elicit & corpus linguistics
    • 30:45 — The core insight: AI shifts time, it doesn't save it
    • 34:10 — Organizational bottlenecks & human capacity limits
    • 35:08 — Pit & Peach (staying in your own canoe)

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    38 min
  • Once You See It, You Can't Unsee It: The Enshittification of Tech Platforms
    Nov 26 2025

    In this conversation, Kimberly Becker and Jessica Parker explore the concept of 'enshittification,' as articulated by Cory Doctorow in his book Enshittification: Why Everything Suddenly Got Worse and What To Do About It, as it relates to generative AI and tech platforms. They discuss the stages of platform development, the shift from individual users to business customers, and the implications of algorithmic changes on user experience.

    The conversation also explores the work of AI researchers Emily M. Bender and Timnit Gebru, whose paper "On the Dangers of Stochastic Parrots" raised critical questions about the limitations and risks of large language models. The hosts explore the role of data privacy, the impact of AI on labor, the need for regulation, and the dangers of market consolidation, using case studies like Amazon's acquisition and eventual shutdown of Diapers.com and Google's Project Maven controversy.

    Key Takeaways

    • Enshittification refers to the degradation of tech platforms over time
    • The shift from individual users to business customers can lead to worse outcomes for end users
    • Data privacy is a critical concern as companies monetize user interactions
    • AI is predicted to significantly displace workers in coming years
    • Regulation is necessary to protect consumers from unchecked corporate power
    • Market consolidation can stifle competition and innovation
    • Recognizing these patterns is essential for navigating the tech landscape

    Further Reading & Resources

    • Cory Doctorow's Pluralistic blog
    • The Internet Con: How to Seize the Means of Computation
    • 2024 Tech Layoffs Tracker

    • Cory Doctorow on Enshittification
    • Enshittification book
    • "On the Dangers of Stochastic Parrots" by Bender & Gebru
    • Amazon/Diapers.com case study
    • Google Project Maven controversy
    • AI job displacement tracker

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    58 min
  • Maternal AI and the Myth of Women Saving Tech
    Nov 19 2025

    In this conversation, we sit down with Dr. Michelle Morkert, a global gender scholar, leadership expert, and founder of the Women’s Leadership Collective, to unpack the forces shaping women’s relationship with AI.

    We begin with research indicating that women are 20–25% less likely to use AI than men, but quickly move beyond the statistics to explore the deeper social, historical, and structural reasons why.

    Dr. Morkert brings her feminist and intersectional perspective to these questions, offering frameworks that help us see beyond the surface-level narratives of gender and AI use. This conversation is less about “women using AI” and more about power, history, social norms, and the systems we’re all navigating.

    If you’ve ever wondered why AI feels different for women—or what a more ethical, community-driven approach to AI might look like—this episode is for you.

    💬 Guest: Dr. Michelle Morkert – https://www.michellemorkert.com

    📚 Books & Scholarly Works Mentioned

    • Global Evidence on Gender Gaps and Generative AI: https://www.hbs.edu/ris/Publication%20Files/25023_52957d6c-0378-4796-99fa-aab684b3b2f8.pdf
    • Pink Pilled: Women and the Far Right (Lois Shearing): https://www.barnesandnoble.com/w/pink-pilled-lois-shearing/1144991652l
    • Scary Smart (Mo Gawdat – maternal AI concept): https://www.mogawdat.com/scary-smart


    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    1 hr 1 min
  • The Containment Problem: Why AI and Synthetic Biology Can't Be Contained
    Nov 5 2025

    In this episode, Jessica teaches Kimberly about the "containment problem," a concept that explores whether we can actually control advanced technologies like AI and synthetic biology.

    Inspired by Mustafa Suleyman's book The Coming Wave, Jessica and Kimberly discuss why containment might be impossible, the democratization of powerful technologies, and the surprising world of DIY genetic engineering (yes, you can buy a frog modification kit for your garage).

    What We Cover:

    • What is the containment problem and why it matters
    • The difference between AGI, ASI, and ACI
    • Why AI is fundamentally different from nuclear weapons when it comes to containment
    • Synthetic biology: from AlphaFold to $1,099 frog gene editing kits
    • The geopolitical arms race and why profit motives complicate containment
    • How technology democratization gives individuals unprecedented power
    • Whether complete AI containment is even possible (spoiler: probably not)
    • The modern Turing test and why perception might be reality

    Books & Resources Mentioned:

    • Empire of AI by Karen Hao
    • DeepMind documentary

    Key Themes:

    • Technology inevitability vs. choice
    • The challenges of regulating rapidly evolving technologies
    • Who benefits from AI advancement?
    • The tension between innovation and safety


    Follow Women Talking About AI for more conversations exploring the implications, opportunities, and challenges of artificial intelligence.

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    53 min
  • Refusing the Drumbeat
    Oct 18 2025

    On saying no to “inevitable” AI—and what we say yes to instead.

    Kimberly and Jessica recently sat down with Melanie Dusseau and Miriam Reynoldson for an episode of Women Talkin’ ’Bout AI. We were especially looking forward to this conversation because Melanie and Miriam are our first guests who openly identify as “AI Resisters.” The timing also felt right: both of us have been reexamining our own stance on AI in education, how it intersects with learning, writing, and creativity, and the more distance we’ve had from running a tech company, the more critical and curious we’ve become.

    This episode digs into big, thorny questions:

    • What Melanie calls “the drumbeat of inevitability” that pressures educators to adopt AI
    • Miriam’s post-digital view of what it means to live in a world completely entangled with technology
    • Our shared inquiry into who actually benefits when AI tools promise to make everything faster and more efficient
    • Data ethics, creative integrity, and the growing movement of educators saying no to automation, not out of fear but out of care for human learning and connection

    It’s a thoughtful, challenging, and hopeful conversation—and we hope you enjoy it as much as we did.

    About our guests: Melanie is an Associate Professor of English at the University of Findlay and a writer whose work spans poetry, plays, and fiction. Miriam is a Melbourne-based digital learning designer, educator, and PhD candidate at RMIT University whose research explores the value of learning in times of digital ubiquity.

    Melanie and Miriam are co-authors of the Open Letter from Educators Who Refuse the Call to Adopt GenAI in Education, which has collected over 1,000 signatures and was featured in an article by Forbes. Melanie is also the author of the essay Burn It Down, which advocates for AI resistance in the academy. We highly recommend reading both before diving into the episode.

    • Melanie's personal website and University of Findlay profile
    • Miriam’s personal website and blog "Care Doesn't Scale"
    • Signs Preceding the End of the World by Yuri Herrera
    • Asimov’s Science Fiction
    • Ursula K. Le Guin
    • Ray Bradbury

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    1 hr 13 min
  • Hallucinations, Hype, and Hope: Rebecca Fordon on AI in Legal Research
    Oct 11 2025

    In this episode of Women Talkin’ ’Bout AI, we sit down with Rebecca Fordon — law librarian, professor, and board member of the Free Law Project — to talk about how generative AI is transforming legal research, education, and the meaning of “expertise.”

    Rebecca helps us cut through the hype and ask harder questions: What problem are we really trying to solve with AI? Why are we using certain tools, and do we even know what data they’re built on?

    We talk about:

    🔹 How AI is reshaping the practice of legal research and what it means for the next generation of lawyers.
    🔹 Why hallucinated case law and “certainty amplification” reveal deeper problems of trust and transparency.
    🔹 The tension between speed and substance, and how “saving time” can actually shift where thinking happens.
    🔹 The expert pipeline problem: what happens when AI replaces the messy, formative parts of learning?
    🔹 How law librarians (and educators everywhere) are taking on the role of translators, bridging human judgment and machine outputs.
    🔹 The open-access movement in law and how the Free Law Project is democratizing legal data.

    At its heart, this episode is about reclaiming curiosity, caution, and critical thinking in a field that depends on precision, and remembering that faster isn’t always smarter.


    Learn more:
    🔗 Free Law Project: https://free.law

    🔗 AI Law Librarians: https://ailawlibrarians.com

    🔗 Aaron Tay's musings about librarianship: https://musingsaboutlibrarianship.blogspot.com/

    🔗 Refusing GenAI in Writing Studies: A Quickstart Guide: https://refusinggenai.wordpress.com/


    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    50 min
  • The Gender Gap in GenAI: Usage, Power, and Whose Voices Count
    Sep 2 2025

    In this episode of Women Talkin’ ‘Bout AI, we start by discussing the findings of a 2024 study "Global Evidence on Gender Gaps and Generative AI" (🔗 below). One overall finding is that women are 20–25% less likely than men to use generative AI, which unspools into something bigger: a story about power, voice, and who gets to shape the future.

    We also discuss our own experiences in tech, noticing how the gender gap in AI isn’t just about access to tools. It’s about what counts as legitimate work, whose voices are amplified, and how cultural scripts around “cheating,” confidence, and authority get absorbed into the most influential technologies of our time.

    We talk about:

    🔹 Why women’s hesitation around AI isn’t simply resistance, but often a reflection of ethics and identity.
    🔹 How underrepresentation today could mean future AI systems are trained on a distorted mirror of humanity.
    🔹 What it means to think of AI as both a child we’re raising and a cultural intermediary that’s already reshaping our sense of normal.
    🔹 The WEIRD AI framework: WEIRD is a term from psychology that stands for Western, Educated, Industrialized, Rich, and Democratic. Most AI systems, generative models especially, are trained on corpora that overrepresent WEIRD voices and underrepresent everyone else.
    🔹 Practical ways women can experiment, reclaim, and band together in communities of practice.
    🔹 If AI is the new baseline for productivity and creativity, then the absence of women’s voices isn’t just a gap, it’s a risk of silence becoming the default.

    Learn more:

    🔗 Gender gap study: https://www.hbs.edu/faculty/Pages/item.aspx?num=66548
    🔗 Mo Gawdat's book Scary Smart: https://www.mogawdat.com/scary-smart
    🔗 Geoffrey Hinton Says AI Needs Maternal Instincts: https://www.forbes.com/sites/pialauritzen/2025/08/14/geoffrey-hinton-says-ai-needs-maternal-instincts-heres-what-it-takes/


    💙 Follow us on our Substack: Women Writin' 'Bout AI: https://substack.com/@womenwritinboutai

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    51 min
  • Competing with Free: Why We Closed Moxie
    Aug 25 2025

    In this episode, we open up about something we haven’t shared publicly before: our decision to shut down Moxie, the startup we spent years building.

    We talk honestly about what led to that choice—the excitement of early growth, the challenges of raising money as non-technical founders, and the impossible reality of competing with free tools from tech giants like Google, OpenAI, and Microsoft.

    This isn’t just a story about one company. It’s about trust, expertise, failure, and the messy human side of working with generative AI in education and research. Along the way, we reflect on what we wish we’d known earlier, how burnout shaped our decisions, and what we’ve learned about ourselves through the process of letting go.

    What you’ll hear in this episode:

    • Why we ultimately decided to shut down Moxie
    • The pressures of fundraising and pitching as non-technical founders
    • The gap between hype and reality with AI in education
    • Lessons on trust, expertise, and failure in both startups and academia
    • How we’re processing life and work after Moxie

    If you’ve ever wondered what it really feels like to close the doors on something you’ve poured yourself into, or you’re navigating your own questions about AI, startups, or burnout—you’ll find some resonance here.

    Leave us a comment or a suggestion!

    Support the show

    Contact Jessica or Kimberly on LinkedIn:

    • Jessica's LinkedIn
    • Kimberly's LinkedIn

    58 min