
If Anyone Builds It, Everyone Dies
Why Superhuman AI Would Kill Us All
Narrator(s): Rafe Beckley
Author(s): Eliezer Yudkowsky, Nate Soares
About this audiobook
"May prove to be the most important book of our time.”—Tim Urban, Wait But Why
The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
What the critics say
“The most important book of the decade. This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fueled by wishful thinking."—Max Tegmark, author of Life 3.0: Being Human in the Age of AI
“If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can.”—Tim Urban, cofounder, Wait But Why
“The best no-nonsense, simple explanation of the AI risk problem I've ever read.”—Yishan Wong, former CEO of Reddit
The book gives the basic case for why we’d expect that building ever more powerful artificial intelligences would eventually amount to humanity committing suicide (unless the world stops). I found it accessible and concrete, obviously sincere and careful as well as personal, and exceptionally respectful of my time and attention. This is a book I will definitely pass around (gonna need a paper copy too!) and ask my friends to read. It is a call to rise to the occasion, one that isn’t obscured by technical jargon and isn’t afraid to come out and say what many experts think: “if anyone builds it, everyone dies.” This is the call to action that the general public needs. Let’s read it, circulate it, and face together what may be the greatest challenge humanity has ever confronted. Since the solution requires many people to pay attention, to intellectually engage, and to viscerally get it, I am so glad there is a book out there that makes the case this well.
If enough people read it, maybe we’ll get our civilizational act together :)
-----------
Mild spoilers:
Regarding the scenario, I don’t want to spoil it, but I definitely felt outrage and a rejection of the dehumanization it depicts (while finding it plausible). I think many people who say they are excited about the ever-increasing capabilities of AI systems fail to visualize what it would mean to live in a world where these forces simply kept going. I am grateful that the book at least let me process one concrete case of how things might turn out.
There were many “aha!” moments during this book. Perhaps the biggest one (certainly the least expected) is the discussion of nuclear reactors.
Lots of emotions