Épisodes

  • When AI Goes Wrong, Who Pays? The Problem With Ghost Authority | ChatGPT Health, Pricing & Fake News
    Jan 16 2026

    When AI goes wrong, you pay for it – with your money, your data privacy, and sometimes your health. This Slop World episode pulls apart ghost authority and the dark side of artificial intelligence: broken AI ethics, surveillance pricing, and what happens when nobody is accountable for the systems running our lives. Juan and Kate break down how companies hide behind “the algorithm” while quietly exploiting data protection gaps, health privacy loopholes, and dynamic pricing schemes you never agreed to.

    From AI security failures and digital privacy you thought you had to OpenAI's ChatGPT Health and medical AI delivering life‑changing decisions, we’re asking the only question that matters: when these systems screw up, who pays the price?🫟 ADDITIONAL RESOURCES

    • When Google’s AI gets it wrong, real people pay the price: https://www.oaoa.com/people/when-googles-ai-gets-it-wrong-real-people-pay-the-price/
    • Minnesota Solar Company Sues Google Over AI Summary: https://www.govtech.com/public-safety/minnesota-solar-company-sues-google-over-ai-summary
    • Canadian Musician Ashley MacIsaac Wants to 'Stand Up' To Google After Being Falsely Accused of Forced Contact Offenses by AI Overview: https://ca.billboard.com/business/legal/ashley-macisaac-google-defamation
    • The Price is Rigged - Today, Explained | Podcast on Spotify: https://open.spotify.com/episode/49PSPtP1neuga7kvBYakIx
    • Instacart’s AI-Enabled Pricing Experiments May Be Inflating Your Grocery Bill - Consumer Reports:https://www.consumerreports.org/money/questionable-business-practices/instacart-ai-pricing-experiment-inflating-grocery-bills-a1142182490/
    • Instacart ends AI pricing test that charged shoppers different prices for the same items - Los Angeles Times: https://www.latimes.com/business/story/2025-12-22/instacart-ends-ai-pricing-test-that-charged-shoppers-different-prices-for-same-items
    • Introducing ChatGPT Health | OpenAI: https://openai.com/index/introducing-chatgpt-health/?video=1151655050
    • OpenAI launches ChatGPT Health in US sparking privacy concerns: https://www.digit.fyi/openai-launches-chatgpt-health-in-us-sparking-privacy-concerns/
    • OpenAI: Health Privacy Notice: https://openai.com/policies/health-privacy-policy/

    🫟 TOPICS00:00 Ghost Authority: Why Nobody Is Responsible When AI Messes Up01:42 Algorithmic Accountability: A Checklist to Protect Your Decisions 02:31 Google AI Overview: The Minnesota Solar Company Hallucination 03:19 Reputation Ruined: The AI Hallucination That Cost a Musician His Career05:42 Smart Research: How to Use ChatGPT, Gemini & Claude Without Being Fooled08:35 Surveillance Pricing: Why the Internet Charges You More Than Your Neighbor10:41 Instacart and Uber: The Backlash Against Dynamic Pricing 12:42 Save Money: Simple Tricks to Beat Hidden Algorithmic Pricing14:15 The Urgency Trap: How Companies Profit From Your Stress and Fear15:34 AI in Healthcare: Your Medical Data and Health Privacy Risks16:10 Juan Reacts: OpenAI’s ChatGPT Health Trailer 17:41 AI in Healthcare: Could Your Private AI Chats Raise Your Rates?21:21 The Fine Print: What OpenAI Actually Does With Your Medical Data23:39 AI Health: Why AI Can’t Tell Real Science From Internet Myths25:42 Data Protection: How to Anonymize Your Medical Test Results 27:23 Slow Down: Why Being Fast Online Makes You a Target for AI Scams30:03 The Bus Stop Test: A Simple Rule for Trusting Any AI Tool🫟 ABOUT SLOP WORLDJuan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    Voir plus Voir moins
    31 min
  • AI 2027 Project: Are Tech's Biggest Names Secretly Scared? Let's Talk About It
    Jan 8 2026

    Are the very minds building AI secretly predicting our doom? AI 2027 is a real scenario being debated by the people building Artificial General Intelligence (AGI). In this episode, we dissect the leap from current LLMs to Superintelligence and why tech leaders are pivoting toward a "Building God" metaphysical flex.

    Is the 2027 timeline real or just smoke and mirrors? We get real about the immediate Artificial Intelligence risks that matter right now: the end of the self-made middle class, why Universal Basic Income (UBI) might not work as well as Sam Altman claims, and the massive AI backlash brewing for 2026.


    🫟 ADDITIONAL RESOURCES

    - AI 2027: https://ai-2027.com/

    - Doom Stack Rank: https://storage.googleapis.com/doom-stack-rank/index.html


    🫟 THE FOLKS BEHIND AI 2027

    - Daniel Kokotajlo is a former OpenAI researcher. His past AI forecasts have proven accurate, and he has been recognized by TIME100 and The New York Times.

    - Eli Lifland is a co-founder of AI Digest. He has conducted research on AI robustness and ranks first on the RAND Forecasting Initiative all-time leaderboard.

    - Thomas Larsen founded the Center for AI Policy and previously conducted AI safety research at the Machine Intelligence Research Institute.

    - Romeo Dean is completing a concurrent bachelor’s and master’s degree in computer science at Harvard. He previously served as an AI Policy Fellow at the Institute for AI Policy and Strategy.


    🫟 TOPICS

    00:00 Intro: The Great AI Divide (Extinction vs. Utopia)

    02:23 The AI 2027 Roadmap Explained

    03:05 Artificial General Intelligence (AGI) & Self-Improvement

    04:20 US vs. China: The Race Against AI Safety

    06:25 Future of Humanity: Will We Be Glorified Tamagotchis?

    07:21 Universal Basic Income (UBI): Will It Work or Not?

    09:50 AI Ethics: Algorithmic Bias & IP Theft

    10:15 Economic Risks: The AI Wealth Gap

    11:55 Why 2026 Will Be The Year of AI Backlash

    12:14 Superintelligence: The Obsession with "Building God"

    15:50 Preparing for the Future of AI (Philosophy)

    16:48 2026 Goals: Kate & Juan's Resolutions


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.


    #AGI #ArtificialIntelligence #AI2027

    Voir plus Voir moins
    20 min
  • AI Holiday Hacks: Kids, Screens & What’s Setting Off Our Alarms
    Dec 16 2025

    AI is coming to your holiday table whether you like it or not.

    In this episode, Juan and Kate share a practical AI holiday playbook for parents and families, focused on AI safety, AI for kids, and real-world holiday use cases that won’t turn dinner into a boiler room.

    They cover AI holiday hacks that can make family gatherings easier, including safe ways to entertain kids with AI, how to talk to grandparents about AI without scaring them, and which AI topics will instantly derail the room. Share your best (or worst) AI holiday conversation in the comments!


    🫟 ADDITIONAL RESOURCES

    Create new holiday traditions with AI: https://www.microsoft.com/en-us/microsoft-365-life-hacks/everyday-ai/create-new-holiday-traditions

    ‘It’s so crushing’: US families navigate divide over politics during the holidays: https://www.theguardian.com/us-news/2024/dec/23/family-politics-holiday


    🫟 TOPICS

    00:00 Why AI Keeps Coming Up at Family Holidays

    00:29 The AI Holiday Playbook Strategy

    01:29 Using AI to Entertain Kids: Helpful or Risky?

    02:41 Low-Risk AI Activities Kids Love

    03:33 Family Tech Safety: When AI Crosses a Line

    04:53 How to Explain AI Safety to Your Family

    06:46 Why AI Apps Want Faces and Family Data

    07:31 Big Tech’s Take on AI Holiday Traditions

    09:54 AI for Crafts & DIY Instructions

    10:45 The Holiday Health Tracking Fail

    11:36 AI Red Flags: Politics & Surveillance

    12:53 Parenting Safety: The Bus Station Analogy

    14:38 Economic Fears & The AI Bubble

    16:03 AI Trends: Art vs. Slop Debate

    18:51 A Simple Rule for Smarter AI Conversations


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    Voir plus Voir moins
    20 min
  • Prompt Injection Attacks: Why AI Browsers Aren't Safe
    Dec 5 2025

    How much security are you willing to trade for convenience? Juan and Kate break down how prompt injection attacks exploit AI browsers like ChatGPT Atlas and Perplexity Comet, and why invisible instructions inside webpages can hijack your agents without you knowing.

    We also discuss the resume hack going viral, the difference between direct vs. indirect prompt injection, and the real strategic trade-offs between convenience and LLM security.


    🫟 ADDITIONAL RESOURCES

    - Prompt injection: A visual, non-technical primer for ChatGPT users: https://www.linkedin.com/pulse/prompt-injection-visual-primer-georg-zoeller-tbhuc/

    - AI browsers are here, and they're already being hacked: https://www.nbcnews.com/tech/tech-news/ai-browsers-comet-openai-hacked-atlas-chatgpt-rcna235980

    - Using an AI Browser Lets Hackers Drain Your Bank Account Just by Showing You a Public Reddit Post: https://futurism.com/ai-browser-hackers-drain-bank-account-public-reddit-post


    🫟 TOPICS

    00:00 - Why AI Browsers Like Atlas and Comet Are a Security Risk

    00:50 - Invisible Instructions Hijacking Your AI Agent

    01:51 - Prompt Injection Explained for Beginners

    02:39 - The Hack That Exposes AI Browser Weaknesses

    03:40 - The Resume Hack: Watch Your Data Get Stolen

    04:43 - Phishing Attack Using Simple Meta Tags

    05:20 - Hidden Malicious Prompts in Metadata & PDFs

    06:00 - Direct Injection: Forcing Models Past Guardrails

    06:41 - Indirect Injection: Embedded Instructions for Agents

    07:22 - We're Playing With Fire: AI Browser Security Is a Mess

    09:03 - Why AI Agents Get Manipulated So Easily

    12:55 - ChatGPT Atlas & Perplexity Comet: Can We Trust These Browsers?

    14:13 - What is Your Cost of Convenience? The Risks of AI Automation

    16:01 - Why First-Gen AI Agents Will Always Be Flawed


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    Voir plus Voir moins
    17 min
  • Your AI Assistant Is Your Worst Distraction
    Nov 14 2025

    AI productivity tools were supposed to help, but they often end up getting in the way of getting any work done.

    Microsoft says the average worker gets hit 275 times a day, and many of those dings now come from the same AI tools that promised to keep us focused. Juan and Kate talk about how AI for business has turned into a distraction machine, how Copilot and similar tools push prompts no one asked for, and whether engagement metrics are driving this productivity mess.


    🫟 ADDITIONAL RESOURCES

    Microsoft, Work Trend Index Special Report "Breaking Down the Infinite Workday": https://www.microsoft.com/en-us/worklab/work-trend-index/breaking-down-infinite-workday


    🫟 TOPICS

    00:00 When Interruptions Take Over Your Workday

    00:08 Why AI Tools Keep Pulling Your Attention Away

    00:35 Copilot And The Problem With “Helpful” Prompts

    01:32 Why SaaS Tools Bake In Interruptions

    03:20 Every App Trying To Teach You At Once

    05:54 Your Attention As The Real Resource

    06:52 Engagement Metrics vs. Productivity

    07:50 The All-In-One AI Tools Ecosystem Theory

    09:13 Why SaaS Tools Won’t Give Up Notifications

    12:47 What People Really Do With AI at Work

    13:57 Using AI Personas To Stress-Test Your Ideas

    14:48 AI For Data Storytelling

    16:31 One Easy Step To Level Up With AI

    18:42 The Real Gap In AI Productivity At Work

    19:48 Real-Time Interruption: Meet Ramón

    20:22 How AI Could Handle Most Executive Decisions

    22:17 One More Thing...


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    Voir plus Voir moins
    23 min
  • Is OpenAI’s Strategy Broken? We Talk Through Sam Altman’s (Desperate?) Bet
    Nov 6 2025

    OpenAI’s gone full YOLO: agents that work for you, a Sora app that feels like TikTok, an AI browser that wants to replace Chrome, and a new “adult freedom” stance. Juan and Kate dig into the logic, or lack of it, behind OpenAI’s everything-everywhere strategy, and why even its biggest users are starting to push back.


    🫟 Additional Resources

    The AI Resisters (Axios) https://www.axios.com/2025/10/19/ai-resistance-students-coders

    Workforce Outlook: The Class of 2026 in the AI Economy https://joinhandshake.com/themes/handshake/dist/assets/downloads/network-trends/class-of-2026-outlook.pdf

    Zuckerberg signals Meta won’t open source all of its ‘superintelligence’ AI models https://techcrunch.com/2025/07/30/zuckerberg-says-meta-likely-wont-open-source-all-of-its-superintelligence-ai-models/


    🫟 Topics

    00:00 – Intro

    00:07 – OpenAI’s new playbook: agents, Sora, and Stargate

    00:39 – AI agents everywhere: from dev tools to browsers

    02:52 – Building AGI or burning cash? What’s OpenAI’s real plan?

    06:00 – The difference between open source and closed AI models

    06:22 – Meta vs OpenAI: Competing to own AI’s Future

    09:35 – The rise of AI resistance: workers, coders, students push back

    11:24 – Using AI tools you don’t trust

    13:51 – The vibe-coding trap

    14:40 – Human-made content becoming the new luxury

    18:30 – Where’s your line in the sand with AI? Ethics and trust

    19:48 – Smarter ways to use AI

    21:49 – Puppies & babies, our weekly fix of Slop


    🫟 About Slop World

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    Voir plus Voir moins
    24 min
  • Is AI Workslop Taking an Emotional Toll on Workers?
    Oct 30 2025

    AI-generated work slop is spreading, and it’s turning offices into zombie zones. In this Halloween Special, Juan and Kate dig into how automation at work and AI productivity tools are creating “zombie workers,” why Sam Altman is flirting with the D3ad Internet Theory, and how corporate AI culture is quietly eroding creativity.

    Watch the full episode for a very special Halloween-themed curated slop.


    🫟 Topics:

    00:00 – Halloween Special: AI, zombies, and slop

    00:42 – The rise of zombie workers and workplace automation

    02:39 – The emotional cost of AI-generated work slop

    03:42 – Why companies fail at AI learning and workforce training

    07:00 – The “cat’s out of the bag” moment for corporate AI

    07:38 – Can smarter workforce learning fix AI fatigue?

    09:24 – Less is more: human value in AI workplaces

    11:09 – The D3ad Internet Theory: AI’s ghost in the machine

    13:37 – Engagement farming and the end of real content

    15:08 – Is Sam Altman breaking the internet on purpose?

    18:10 – How AI filters shape online reality

    20:28 – The hidden cost of easy AI answers

    21:30 – AI Slop of the Week: Halloween Edition


    🫟 About Slop World

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.


    #OpenAI #AIethics #aiproductivity

    Voir plus Voir moins
    24 min
  • AI Security Risks: Vibe Coding Exploits, Deepfakes & New Scams
    Oct 23 2025

    October is Cybersecurity Awareness Month, and Juan and Kate talk about one of the most controversial aspects in the AI arms race: SECURITY. From the rise of “vibe coding” (AI that writes your code for you), to AI-powered scams, they unpack how the "move fast and break things" approach is opening -once again?- the door to major exploits, both in computers and humans.


    🫟 About Slop World

    Juan and Kate plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    Subscribe to @slopworldpodcast on YouTube and wherever you get your podcasts.


    🫟 Timestamps:

    00:00 – It's Cybersecurity Awareness Month!

    01:43 – The 3 Biggest AI Security Gaps

    02:39 – What Is AI Vibe Coding?

    05:05 – Is Vibe Coding a Security Nightmare?

    07:03 – What If AI Went Down Tomorrow?

    09:51 – AI-Powered Scams and Social Engineering

    14:20 – Who's the Sloppiest? Meta AI vs. Sora 2

    16:20 – The Rise of AI Slop Social Platforms

    20:27 – Are We Training Ourselves to Accept Fake Content?

    Voir plus Voir moins
    23 min