
AI 2027 Project: Are Tech's Biggest Names Secretly Scared? Let's Talk About It

Are the very minds building AI secretly predicting our doom? AI 2027 is a real scenario being debated by the people building Artificial General Intelligence (AGI). In this episode, we dissect the leap from current LLMs to Superintelligence and why tech leaders are pivoting toward a "Building God" metaphysical flex.

Is the 2027 timeline credible or just smoke and mirrors? We get real about the Artificial Intelligence risks that matter right now: the end of the self-made middle class, why Universal Basic Income (UBI) might not work as well as Sam Altman claims, and the massive AI backlash brewing for 2026.


🫟 ADDITIONAL RESOURCES

- AI 2027: https://ai-2027.com/

- Doom Stack Rank: https://storage.googleapis.com/doom-stack-rank/index.html


🫟 THE FOLKS BEHIND AI 2027

- Daniel Kokotajlo is a former OpenAI researcher. His past AI forecasts have proven accurate, and he has been recognized in the TIME100 and by The New York Times.

- Eli Lifland is a co-founder of AI Digest. He has conducted research on AI robustness and ranks first on the RAND Forecasting Initiative all-time leaderboard.

- Thomas Larsen founded the Center for AI Policy and previously conducted AI safety research at the Machine Intelligence Research Institute.

- Romeo Dean is completing a concurrent bachelor’s and master’s degree in computer science at Harvard. He previously served as an AI Policy Fellow at the Institute for AI Policy and Strategy.


🫟 TOPICS

00:00 Intro: The Great AI Divide (Extinction vs. Utopia)

02:23 The AI 2027 Roadmap Explained

03:05 Artificial General Intelligence (AGI) & Self-Improvement

04:20 US vs. China: The Race Against AI Safety

06:25 Future of Humanity: Will We Be Glorified Tamagotchis?

07:21 Universal Basic Income (UBI): Will It Work or Not?

09:50 AI Ethics: Algorithmic Bias & IP Theft

10:15 Economic Risks: The AI Wealth Gap

11:55 Why 2026 Will Be The Year of AI Backlash

12:14 Superintelligence: The Obsession with "Building God"

15:50 Preparing for the Future of AI (Philosophy)

16:48 2026 Goals: Kate & Juan's Resolutions


🫟 ABOUT SLOP WORLD

Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.


#AGI #ArtificialIntelligence #AI2027
