Episodes

  • E165: STUDY Shows NFL Favors the Chiefs — Lead Researcher Explains
    Nov 1 2025

    Finance professor Spencer Barnes explains research showing postseason officiating systematically favors the Mahomes-era Chiefs—consistent with subconscious, financially driven “regulatory capture,” not explicit rigging.

    Guest bio: Dr. Spencer Barnes is a finance professor at UTEP. He co-authored “Under Financial Pressure” with Brandon Mendez (South Carolina) and Ted Dischman, using sports as a transparent lab to study regulatory capture.

    Topics discussed (in order):

    • Why the NFL is a clean testbed for regulatory capture
    • Data/methods: 13,136 defensive penalties (2015–2023), panel dataset, fixed-effects
    • Postseason favoritism toward Mahomes-era Chiefs
    • Magnitude and game impact (first downs, yards, FG-margin games)
    • Subjective vs objective penalties (RTP, DPI vs offsides/false start)
    • Regular season vs postseason differences
    • Dynasty checks (Patriots/Brady; Eagles/Rams/49ers)
    • Rigging vs subconscious bias
    • Ratings, revenue (~$23B in 2024), media incentives
    • Gambling’s rise post-2018 and bettor implications
    • Taylor Swift factor (not tested due to data window)
    • Ref assignment opacity; repeat-crew effects
    • Tech/replay reform ideas
    • Broader finance lesson on incentives and regulation

    Main points & takeaways:

    • Core postseason result: Chiefs ~20 percentage points more likely than peers to gain a first down from a defensive penalty.
    • Subjective flags: ~30% more likely for KC in playoffs (RTP, DPI).
    • Size: ~4 extra yards per defensive penalty in playoffs—small per play, decisive at FG margins.
    • Regular season: No favorable treatment; slight tilt the other way.
    • Ref carryover: Crews with a prior KC postseason official show more KC-favorable outcomes the next year.
    • Not universal to dynasties: Patriots/Brady and other near-dynasties don’t show the same postseason effect.
    • Mechanism: No claim of rigging; consistent with implicit bias under financial incentives.
    • Policy: Use tech (skycam, auto-checks for false start/offsides), limited challenges for subjective calls, transparent ref advancement.
    • General lesson: When regulators depend financially on outcomes, redesign incentives to reduce capture and protect fairness.

    Top 3 quotes:

    • “We make no claim the NFL is rigging anything. What we see looks like implicit bias shaped by financial incentives.” — Spencer Barnes
    • “It only takes one call to swing a postseason game decided by a field goal.” — Spencer Barnes
    • “If there’s money on the line, you must design the regulators’ environment so incentives don’t quietly bend enforcement.” — Spencer Barnes

    Links/where to find the work: Spencer Barnes on LinkedIn (search: “Spencer Barnes UTEP”); the paper “Under Financial Pressure” in the Financial Review (paywalled) and as a free working paper on SSRN (search the title).

    🎙 The Pod is hosted by Jesse Wright
    💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
    📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
    ⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

    Thanks for listening!

    1 hr 2 min
  • E164: The Real Reason You Can Speak: Explained by Evolutionary Biologist - Dr. Madeleine Beekman
    Oct 29 2025

    How human babies, big brains, and social life likely forced Homo sapiens to invent precise speech ~150–200k years ago—and what that means for learning, tech, and today’s kids.

    Guest Bio:
    Madeleine Beekman is a professor emerita of evolutionary biology and behavioral ecology at the University of Sydney and author of Origin of Language: How We Learned to Speak and Why. She studies social insects, collective decisions, and the evolution of communication.

    Topics Discussed:

    • Why soft tissues don’t fossilize; language origins rely on circumstantial evidence
    • Three clocks for timing (~150–200k years): anatomy; trade/complex tech/art; phoneme “bottleneck”
    • Why Homo sapiens (not Neanderthals) likely had full speech
    • Language as a “virus” tuned to children; pidgin → creole via kids
    • Second-language learning: immersion over translation
    • Bees/ants show precision scales with ecological stakes
    • Evolutionary chain: bipedalism → narrow pelvis + big brains → helpless infants → precise speech
    • Ongoing human evolution (archaic DNA, altitude, Inuit lipid adaptations)
    • Flynn effect reversal, screens, AI reliance, anthropomorphism risks
    • Reading, early interaction, and the Regent honeyeater “lost song” lesson
    • Universities, online classes, and “degree over learning”

    Main Points:

    • Multiple evidence lines converge on speech emerging with anatomically modern humans ~150–200k years ago.
    • Anatomical and epigenetic clues suggest only Homo sapiens achieved full vocal speech.
    • Extremely dependent infants created strong selection for precise, teachable communication.
    • Children’s brains shape languages; kids regularize grammar.
    • Communication precision rises when mistakes are costly (bee-dance analogy).
    • Humans continue to evolve; genomes show selected archaic introgression and local adaptations.
    • Tech-driven habits may erode cognition and language skill; reading matters.
    • AI is a tool that imitates human output; humanizing it can mislead and harm, especially for teens.
    • Start early: talk, read, and interact face-to-face from birth.

    Top Quotes:

    • “Only Homo sapiens was ever able to speak.”
    • “Language will go extinct if it can’t be transmitted from brain to brain—the best host is a child.”
    • “The precision of communication is shaped by how important it is to be precise.”

    🎙 The Pod is hosted by Jesse Wright
    💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
    📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
    ⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

    Thanks for listening!

    1 hr 11 min
  • E163: Why AI Still Loses to Humans: Renowned Psychologist Explains - Dr. Gerd Gigerenzer
    Oct 25 2025

    A candid conversation with psychologist Gerd Gigerenzer on why human judgment outperforms AI, the “stable world” limits of machine intelligence, and how surveillance capitalism reshapes society.

    Guest bio: Dr. Gerd Gigerenzer is a German psychologist, director emeritus at the Max Planck Institute for Human Development, a leading scholar on decision-making and heuristics, and an intellectual interlocutor of B. F. Skinner and Herbert Simon.

    Topics discussed:

    • Why large language models rely on correlations, not understanding
    • The “stable world principle” and where AI actually works (chess, translation)
    • Uncertainty, human behavior, and why prediction doesn’t improve much
    • Surveillance capitalism, privacy erosion, and “tech paternalism”
    • Level-4 vs. level-5 autonomy and city redesign for robo-taxis
    • Education, attention, and social media’s effects on cognition and mental health
    • Dynamic pricing, right-to-repair, and value extraction vs. true innovation
    • Simple heuristics beating big data (elections, flu prediction)
    • Optimism vs. pessimism about democratic pushback
    • Books to read: How to Stay Smart in a Smart World, The Intelligence of Intuition; “AI Snake Oil”

    Main points:

    • Human intelligence is categorically different from machine pattern-matching; LLMs don’t “understand.”
    • AI excels in stable, rule-bound domains; it struggles under real-world uncertainty and shifting conditions.
    • Claims of imminent AGI and fully general self-driving are marketing hype; progress is gated by world instability, not just compute.
    • The business model of personalized advertising drives surveillance, addiction loops, and attention erosion.
    • Complex models can underperform simple, well-chosen rules in uncertain domains.
    • Europe is pushing regulation; tech lobbying and consumer convenience still tilt the field toward surveillance.
    • The deeper risk isn’t “AI takeover” but the dumbing-down of people and loss of autonomy.
    • Careers: follow what you love—humans remain essential for oversight, judgment, and creativity.
    • Likely mobility future is constrained autonomy (level-4) plus infrastructure changes, not human-free level-5 everywhere.
    • To “stay smart,” individuals must reclaim attention, understand how systems work, and demand alternatives (including paid, non-ad models).

    Top quotes:

    • “Large language models work by correlations between words; that’s not understanding.”
    • “AI works well where tomorrow is like yesterday; under uncertainty, it falters.”
    • “The problem isn’t AI—it’s the dumbing-down of people.”
    • “We should become customers again, not the product.”

    🎙 The Pod is hosted by Jesse Wright
    💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
    📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
    ⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

    Thanks for listening!

    1 hr 4 min
  • E162: He Built a Billion-View Empire: Now He Warns Social Media Rewires Your Brain - Richard Ryan
    Oct 22 2025

    How a tech insider who helped build billion-view machines explains the attention economy’s playbook—and how to guard your mind (and data) against it.

    Guest bio:
    Richard Ryan is a software developer, media executive, and tech entrepreneur with 20+ years in digital. He co-founded Black Rifle Coffee Company and helped take it public (~$1.7B valuation; $396M revenue in 2023). He’s built multiple apps (including a video app released four years before YouTube) with millions of downloads, launched Rated Red to 1M organic subscribers in its first year, and runs a YouTube network—led by FullMag (2.7M subs)—that has surpassed 20B views.

    Topics discussed:

    • The attention economy and 2012 as the mobile/monetization inflection point
    • Algorithm design, engagement incentives, and polarization
    • Personal costs (anxiety, comparison traps, body dysmorphia, addiction mechanics)
    • Privacy and data brokers, smart devices, cars, geofencing
    • Policy ideas (digital rights, accountability, incentive realignment)
    • Practical defenses (digital detox, friction, community, gratitude, boundaries)
    • Careers, college, and meaning in an AI-accelerating world

    Main points:

    • Social platforms optimize time-on-device; “For You” feeds exploit threat/dopamine loops that keep users anxious and engaged.
    • 2012 marked a shift from tool to extraction: mobile apps plus partner programs turned attention into a tradable commodity.
    • Outrage and filter bubbles are amplified because drama wins in the algorithmic reward system.
    • Privacy risk is systemic: data brokers, vehicle SIMs, and IoT terms build behavioral profiles beyond traditional warrants.
    • Individual resilience beats moral panic: measure use, do a 30-day reset, add friction, and invest in offline community and gratitude.
    • Don’t mortgage your life to debt or trends; pursue adaptable, meaningful work—every field is vulnerable to automation.
    • Societal fixes require incentive changes (digital rights, simple single-issue bills, real accountability), not just complaints.

    Top 3 quotes:

    • “In 2012, you went from using your iPhone to the iPhone using you.”
    • “If you can’t establish boundaries and adhere to them, you have a problem.”
    • “The spirit of humanity shines in the face of adversity—we love an underdog story, and this is the underdog story.”

    🎙 The Pod is hosted by Jesse Wright
    💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
    📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
    ⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

    Thanks for listening!

    1 hr 12 min
  • E161: From Rome to Right Now: What History Gets Wrong About Collapse - Dr. Luke Kemp
    Oct 15 2025

    Dr. Luke Kemp, an existential risk researcher at the University of Cambridge, shows how today’s plutocracy and tech-fueled surveillance imperil society—and what we can do to build resilience.

    Guest bio:
    Dr. Luke Kemp is an Existential Risk Researcher at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge and author of Goliath’s Curse: The History and Future of Societal Collapse. His work examines how wealth concentration, surveillance, and arms races erode democracy and heighten global catastrophic risk.

    Topics discussed:

    • The “Goliath” concept: dominance hierarchies vs. vague “civilization”
    • Are we collapsing now? Signals vs. sudden shocks
    • Inequality as the engine of fragility; lootable resources & data
    • Tech’s role: AI as accelerant, surveillance capitalism, autonomous weapons
    • Nuclear risk, climate links, and system-level causes of catastrophe
    • Democracy’s erosion and alternatives (sortition, deliberation)
    • Elite overproduction, factionalism, and arms/resource/status “races”
    • Collapse as leveler: winners, losers, and myths about mass die-off
    • Practical pathways: leveling power, wealth taxes, open democracy

    Main points:

    • “Civilization” consistently manifests as stacked dominance hierarchies—what Kemp calls the Goliath—which naturally concentrate wealth and power over time.
    • Rising inequality spills into political, informational, and coercive power, making societies brittle and less able to correct course.
    • Existential threats are interconnected; AI, nukes, climate, and bio risks share causes and amplify each other.
    • AI need not be Skynet to be dangerous; it speeds arms races, surveillance, and catastrophic decision cycles.
    • Collapse isn’t always apocalypse; often it fragments power and improves life for many outside the elite core.
    • Durable safety requires leveling power: progressive/wealth taxation, stronger democracy (especially sortition-based, deliberative bodies), and curbing surveillance and arms races.

    Top 3 quotes:

    • “Most collapse theories trace back to one driver: the steady concentration of wealth and power that makes societies top-heavy and blind.”
    • “AI is an accelerant—pouring fuel on the fires of arms races, surveillance, and extractive economics.”
    • “If we want a long future, we don’t just need tech fixes—we need to level power and make democracy real.”

    🎙 The Pod is hosted by Jesse Wright
    💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
    📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
    ⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

    Thanks for listening!

    1 hr 17 min
  • E160: How North Korea’s Dictatorship Endures: Historian Fyodor Tertitsky Explains
    Oct 11 2025

    A deep dive with historian Dr. Fyodor Tertitsky on how North Korea’s dynasty survives—through isolation, terror, and nukes—and why collapse or unification is far from inevitable.

    Guest bio:
    Fyodor Tertitsky, PhD, is a Russian-born historian of North Korea and a senior research fellow at Kookmin University (Seoul). A naturalized South Korean based in Seoul, he is the author of Accidental Tyrant: The Life of Kim Il-sung. He speaks Russian, Korean, and English, has visited North Korea (2014, 2017), and researches using Soviet, North Korean, and Korean-language sources.

    Topics discussed:

    • Daily life under extreme authoritarianism (no open internet, monitored communications, mandatory leader portraits)
    • Kim Il-sung’s rise via Soviet backing; historical fabrications in official narratives
    • 1990s famine, loss of sponsors, rise of black markets and bribery
    • Nukes/missiles as regime-survival tools; dynasty continuity vs. unification
    • Why German-style unification is unlikely (costs, politics, identity; waning support in the South)
    • Regime control stack: isolation, propaganda “white list,” terror, collective punishment
    • Reliability of defectors’ accounts; sensationalism vs. fabrication
    • Research methods: multilingual archives, leaks, captured docs, propaganda close-reading
    • Elite wealth vs. citizen poverty; renewed patronage via Russia
    • Coups/assassination plots, succession uncertainty
    • North Korean cyber ops and crypto theft
    • “Authoritarian drift” debates vs. media hyperbole in democracies
    • Life in Seoul: safety, civility, culture

    Main points:

    • North Korea bans information by default and enforces obedience through fear.
    • Elites have everything to lose from change; nukes deter regime-ending threats.
    • Unification would be socially and fiscally seismic; absent a Northern revolution, it’s improbable.
    • Markets and graft sustain daily life while strategic sectors get resources.
    • Collapse predictions are guesses; stable yet brittle systems can still break from shocks.
    • Defector claims need case-by-case verification; mass CIA scripting is unlikely.
    • Archival evidence shows key “facts” were retrofitted to build the Kim myth.
    • Democracy’s victory isn’t automatic—citizens and institutions must defend it.

    Top 3 quotes:

    • “There is no internet unless the Supreme Leader permits it—and even then, someone from the secret police may sit next to you taking notes.”
    • “They will never surrender nuclear weapons—nukes are the guarantee of the regime’s survival.”
    • “The triumph of democracy is not automatic; there is no fate—evil can prevail.”

    🎙 The Pod is hosted by Jesse Wright
    💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
    📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
    ⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

    Thanks for listening!

    59 min
  • E159: Laziness Is a Myth: How Hustle Culture Hijacked Your Life
    Oct 4 2025

    Dr. Devon Price unpacks “the laziness lie,” how AI and “bullshit jobs” distort work and higher ed, and why centering human needs—not output—leads to saner lives.

    Guest bio: Devon Price, PhD, is a Clinical Associate Professor of Psychology at Loyola University Chicago, a social psychologist, and a writer. Price is the author of Laziness Does Not Exist, Unmasking Autism, and Unlearning Shame, focusing on burnout, neurodiversity, and work culture.

    Topics discussed:

    • The laziness lie: origins and three core tenets
    • AI’s effects on output pressure, layoffs, and disposability
    • Overlap with David Graeber’s Bullshit Jobs and status hierarchies
    • Adjunctification and incentives in academia
    • Demographic cliff and the sales-ification of universities
    • Career choices in an AI era: minimize debt and stay flexible
    • Remote work’s productivity spike and boundary erosion
    • Burnout as a signal to rebuild values around care and community
    • Gap years, social welfare, and redefining “good jobs”
    • Practicing compassion toward marginalized people labeled “lazy”

    Main points:

    • The laziness lie equates worth with productivity, distrusts needs/limits, and insists there’s always more to do, fueling self-neglect and stigma.
    • Efficiency gains from tech and AI are converted into higher expectations rather than rest or shorter hours.
    • Many high-status roles maintain hierarchy more than they create real value; resentment often targets meaningful, low-paid work.
    • U.S. higher ed relies on precarious adjunct labor while admin layers swell, shifting from education to a jobs-sales funnel.
    • In a volatile market, avoid debt, build broad human skills, and choose adaptable paths over brittle credentials.
    • Remote work raised output but erased boundaries; creativity requires rest and unstructured time.
    • Burnout is the body’s refusal of exploitation; recovery means reprioritizing relationships, art, community, and self-care.
    • A humane society would channel tech gains into shorter hours and better care work and infrastructure.
    • Revalue baristas, caregivers, teachers, and artists as vital contributors.
    • Everyday practice: show compassion—especially to those our culture labels “lazy.”

    Top three quotes:

    • “What burnout really is, is the body refusing to be exploited anymore.” — Devon Price
    • “Efficiency never gets rewarded; it just ratchets up the expectations.” — Devon Price
    • “What is the point of AI streamlining work if we punish humans for not being needed?” — Devon Price

    🎙 The Pod is hosted by Jesse Wright
    💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
    📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
    ⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

    Thanks for listening!

    59 min
  • E158: Post-Plagiarism University: Replacing Humans with AI—Belonging Dips, GPAs Slide, Integrity Erodes
    Sep 27 2025

    Dr. Joseph Crawford unpacks how AI is reshaping higher education: eroding student belonging, redefining assessment in a post-plagiarism era, and raising the stakes for soft skills.

    Guest bio:
    Dr. Joseph “Joey” Crawford is a Senior Lecturer in Management at the University of Tasmania and ranks among the top 1% of most-cited researchers globally. His work centers on leadership, student belonging, and the role of AI in higher education, and he serves as Editor-in-Chief of a leading education journal.

    Topics discussed:

    • AI in higher education and the “post-plagiarism” era
    • Student belonging, loneliness, and mental health impacts
    • Massification of education (8% → 30% → 50.2% participation)
    • Programmatic assessment vs. essays/exams
    • COVID-19’s lasting effects on campus culture and learning
    • Recorded lectures, flipped learning, and in-person tradeoffs
    • Soft skills, leadership education, and employability
    • Academic integrity, peer review, and AI misuse by faculty
    • Labor shortages, graduate readiness, and industry pathways
    • Social anxiety, AI “friendship,” and GPA outcomes

    Main points & takeaways:

    • AI substitutes human support: Heavy chatbot use can provide a sense of social support but correlates with lower belonging and reduced GPA compared to human connections.
    • Belonging matters: Human social support predicts higher well-being and better academic performance; AI support does not translate into belonging.
    • Post-plagiarism reality: Traditional lecture-plus-essay or multiple-choice assessment is increasingly unreliable for verifying authorship.
    • Assessment is shifting: Universities are exploring programmatic assessment—fewer, higher-stakes integrity checks across a degree instead of every course.
    • Massification pressures quality: Participation in Australia rose from 8% (1989) to 30% (2020) to 50.2% (2021), straining rigor and prompting curriculum simplification and grade inflation.
    • COVID + ChatGPT = double shock: Online habits and interaction anxiety from the pandemic compounded with AI convenience, reducing peer-to-peer engagement.
    • Less face time: Many business courses dropped live lectures; students now get ~2 fewer hours in class per subject, raising the bar for workshops to build soft skills.
    • Workforce mismatch: Employers want communication and leadership; graduates often lack mastery because entry-level “practice” tasks are automated.
    • Faculty risks too: Using AI to draft peer reviews can embed weak scholarship into training corpora and distort future models.
    • Pragmatic advice: Don’t fear AI—use it—but replace lost micro-interactions with real people and deliberately practice human skills (e.g., leadership, psychology).

    Top quotes:

    • “We’re in a post-plagiarism world where knowing who wrote what is a real challenge.”
    • “Some students are replacing librarians, peers, and support staff with bots—they’re fast, infinitely friendly, and never judge.”
    • “AI social support doesn’t create belonging—and that shows up in grades.”
    • “The lecture isn’t gone, but in many programs it’s recorded—and students now get less in-person time.”
    • “Don’t substitute AI-created efficiency with more work—substitute it with more people.”

    🎙 The Pod is hosted by Jesse Wright
    💬 For guest suggestions, questions, or media inquiries, reach out at https://elpodcast.media/
    📬 Never miss an episode – subscribe and follow wherever you get your podcasts.
    ⭐️ If you enjoyed this episode, please rate and review the show. It helps others find us.

    Thanks for listening!

    1 hr 20 min