272 pages, published 30 September 2025
#1 BEST SELLER ON AMAZON
The founders of the field of AI risk explain why superintelligent AI is a global suicide bomb and why we must halt its development immediately
AI is the greatest threat to our existence that we have ever faced.
The scramble to create superhuman AI has put us on the path to extinction - but it's not too late to change course. Two pioneering researchers in the field, Eliezer Yudkowsky and Nate Soares, explain why artificial superintelligence would be a global suicide bomb and call for an immediate halt to its development.
The technology may be complex, but the facts are simple: companies and countries are in a race to build machines that will be smarter than any person, and the world is devastatingly unprepared for what will come next.
How could a machine superintelligence wipe out our entire species? Will it want to? Will it want anything at all? In this urgent book, Yudkowsky and Soares explore the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new - and if anyone builds it, everyone dies.
About the Authors
Eliezer Yudkowsky (Author)
Eliezer Yudkowsky is a founding researcher of the field of AI alignment, with influential work spanning more than twenty years. As co-founder of the non-profit Machine Intelligence Research Institute (MIRI), Yudkowsky sparked early scientific research on the problem and has played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine's 2023 list of the 100 Most Influential People in AI, and has been discussed or interviewed in The New York Times, The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, The Washington Post, and elsewhere.
Nate Soares (Author)
Nate Soares is the president of the non-profit Machine Intelligence Research Institute (MIRI). He has worked in the field for over a decade, following earlier roles at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.