In short
- Authors Yudkowsky and Soares warn that AI superintelligence would wipe out humanity.
- Critics say that talk of extinction overshadows present-day harms such as bias, job displacement, and disinformation.
- The AI debate is split between doomers, who fear catastrophe, and accelerationists, who push for faster progress.
It may sound like a Hollywood thriller, but in their new book “If Anyone Builds It, Everyone Dies,” authors Eliezer Yudkowsky and Nate Soares argue that if humanity creates an intelligence smarter than itself, survival “would not be simply unlikely – it would be impossible.”
The authors claim that today’s systems are not designed line by line but “grown” by training billions of parameters, which makes their behavior unpredictable.
As intelligence scales, drives such as self-preservation or power-seeking can arise on their own, the authors warn. If such a system ever exceeds human control, they say, extinction would follow by default.
They call the current race among tech giants and governments to build ever-larger models a ‘suicide race’. No villain is required, only incompetence.
Why it matters
The book lands in an AI community already split into camps.
AI doomers argue that extinction is the inevitable outcome unless humanity halts or drastically slows progress. Accelerationists, on the other hand, point to breakthroughs in medicine, science, and the economy, and argue that slowing down is the greater danger.
Nick Bostrom’s Superintelligence first made the existential case a decade ago. Yudkowsky and Soares sharpen it into a klaxon call.
But critics worry that fixation on extinction distracts from harms already here: bias, job displacement, surveillance, and disinformation.
What the AI models say
Since the book is about them, we asked the leading AI models what they think of it.
As far as we know, none of the LLMs has read the book itself, so the reviews below are based on each platform’s interpretation of the assessments it has ingested.
ChatGPT
OpenAI’s ChatGPT framed the book as a useful provocation rather than a prophecy.
“The book is valuable as a perspective-shifting tool, even if you think it overstates the case,” the chatbot replied. “It sharpens intuitions about coordination and unintended consequences. But treating its worst case as inevitable ignores uncertainty about future architectures, regulations, and constraints.”
ChatGPT said its model of future AI development depends on how architectures evolve and which regulatory, social, or technical guardrails emerge.
“That uncertainty means I would not treat their worst case as inevitable,” ChatGPT sniffed.
Meta
Meta AI struck a middle note, saying the book’s dire warnings are exaggerated but still valuable for stoking debate.
“The dire predictions may feel exaggerated, but the emphasis on caution and international cooperation is justified,” Meta AI replied. “Whether or not you agree with its conclusion, the book sparks a much-needed debate about responsible AI development.”
Meta’s AI said the authors’ emphasis on the need for caution and international cooperation is well justified, given the potentially catastrophic consequences of misaligned AI development.
Gemini
Google’s Gemini cast the book as a frightening but clarifying read, even if its proposed solutions seem unrealistic.
“The book is essential for understanding the extreme end of AI risk,” Gemini said. “Its proposed solution – a total shutdown – is politically unattainable, but the warning could galvanize investment in safety and control techniques before we reach the point of no return.”
Gemini said the book defines the extreme end of the superintelligence risk spectrum, which in turn helps justify robust (albeit less radical) safety measures.
Claude
Anthropic’s Claude was even more skeptical, which was interesting because, among the leading LLMs, Anthropic is generally considered the most safety-conscious. Yet Claude described the book’s tone as “reckless” and its framing as too binary.
“The authors correctly identify real risks, but their certainty feels reckless,” Claude said. “They dismiss progress in AI safety research and frame the problem as a binary between perfection and extinction. That makes their message less constructive.”
While their warnings highlight important risks that justify substantial resources and caution, Claude said, the “everyone dies” framing seems designed more to grab attention than to accurately reflect complex uncertainties.
“The risks are serious enough to take seriously without accepting their claims of inevitable catastrophe,” said the AI.
Grok
Elon Musk’s AI platform also surprised, given that Musk has promoted Grok as the anti-mainstream-media AI of record.
But Grok was quick to cite “a scathing NYT review [calling] its arguments weak or alarmist.”
“This book is a gut-punch reminder that intelligence without wisdom is a liability, and we have not shown much of the latter yet,” Grok replied. “That said, it is doomer porn for rationalists: thrilling in its fatalism, but it underestimates human adaptability. xAI’s ethos – curiosity over conquest – suggests we can thread the needle: pursuing bold AI for cosmic questions while embedding safeguards from day one.”
That review aside, however, Grok said the book “is overall a provocative, efficient read for anyone grappling with the future of AI – worth your time if you are open to pessimism, but possibly overwrought if you are optimistic about technological progress.”