
PART III

THE EXISTENTIAL RISK

Chapter 6 illustration: original drawing by Mahigan Lepage • AI coloring

Chapter 6

The Risk of Extinction

The economic scenario described above is, in my view, the most likely outcome—provided we let current trends continue without the technical and legislative guardrails needed to distribute the power and wealth generated by AGI.

Likely, that is, assuming we are still here to experience it.

To those new to AI safety, this possibility may seem extreme or alarmist. Yet, it is a scenario considered plausible by Yoshua Bengio, Geoffrey Hinton, Elon Musk (xAI, Grok), and Sam Altman (OpenAI, ChatGPT). All have stated that AI poses an existential risk to humanity.

Musk frequently cites a 10 to 20% probability of catastrophe. However, he maintains that if xAI does not develop an AI aligned with the "truth"—specifically his highly contestable version of it—less scrupulous actors will win the race. We saw a glimpse of this "truth" last year when Grok launched into far-right tirades and christened itself Mecha-Hitler...

As for Altman, although he founded OpenAI with a safety mission, his credibility has been tarnished by the exodus of key researchers from the company's "Superalignment" team. Figures like co-founder Ilya Sutskever, Jan Leike, and Daniel Kokotajlo resigned, condemning a corporate culture that prioritized commercial products over safety7.

Yet, in May 2023, Altman signed a statement from the Center for AI Safety (CAIS) alongside Bengio, Hinton, Demis Hassabis (Google DeepMind), and Dario Amodei (Anthropic). It asserted that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war"8.

It is hard to imagine the Sam Altman of today discussing an "extinction risk from AI," but the record stands. What came of that 2023 declaration? Essentially nothing. But it proves, at the very least, that concern over extinction isn't limited to the fringe theories of so-called "doomers."

In fact, surveys show that many researchers assign a significant probability, commonly written P(doom), to a doomsday scenario. In the largest study to date (involving 2,778 experts), the median probability of AI causing human extinction is about 5%. More than a third of researchers (38%) place this risk at 10% or higher9.

To put these numbers in perspective: in the nuclear industry, a catastrophic-accident risk (of the kind seen at Chernobyl or Fukushima) greater than 0.0001% per reactor-year is considered unacceptable.
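To make the order-of-magnitude gap concrete, here is a minimal, purely illustrative calculation assuming only the two figures cited above (the 5% median and the 0.0001% threshold); the variable names are mine, not taken from the survey or any safety standard:

    # Illustrative arithmetic only: compare the median expert P(doom)
    # with the accident-risk level treated as unacceptable in nuclear safety.
    median_p_doom = 0.05            # ~5% median extinction probability (survey cited above)
    nuclear_threshold = 0.000001    # 0.0001%, the cited acceptability limit

    ratio = median_p_doom / nuclear_threshold
    print(f"Median P(doom) is about {ratio:,.0f} times the nuclear threshold.")
    # Output: Median P(doom) is about 50,000 times the nuclear threshold.

In other words, under these assumptions, the risk level that the median surveyed researcher attaches to AI is tens of thousands of times higher than what any other safety-critical industry would tolerate.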


7 Kokotajlo subsequently co-authored a forecasting scenario titled AI 2027 (ai-2027.com), which details the mechanisms that could lead to a loss of control, as well as the safeguards that could prevent it. First published in April 2025, AI 2027 anticipated several developments that are now unfolding: the US-China race, the focus on automating programming work, AI companies' stated goal of automating AI research itself (recursion), since confirmed by recent announcements, the acceleration of robotics, and so on.
8 "Statement on AI Risk", open letter, Center for AI Safety (aistatement.com).
9 Katja Grace et al., "Thousands of AI Authors on the Future of AI", Journal of Artificial Intelligence Research 84:9, 2025. arxiv.org/abs/2401.02843

Planned Human Obsolescence