[Chapter 11 illustration: original drawing by Mahigan Lepage, AI coloring]

Chapter 11

Theories of Hope

Resistance begins with knowledge. We must read books and articles on AI safety, listen to podcasts, and watch debates. (I provide a short list of resources at the end of this essay.) Sorting through the contradictory viewpoints is difficult, I admit. But these contradictions should at least trigger concern, for they prove there is no scientific consensus.

I started investigating the subject in earnest in early 2025, without any negative bias. Until then, I had always loved exploring the creative possibilities of new technologies. During the 2000s and 2010s, I was part of the blogging and digital book revolution. While many in the literary world feared change, a group of us—centered around the publisher publie.net—explored new forms of writing and editing. Those who know me know I am not a technophobe; quite the contrary. But advanced AI is different. To coexist safely with these agentic systems, we would need guarantees backed by scientific consensus—guarantees that simply do not exist. The web and digital tech were tools we could make our own. AI agents are entities that threaten to dispossess us.

I am not saying we should not use the tools available today. With few exceptions, we all do. We will never again live in a world without AI, and we must adapt to this reality in every domain. I fully intend to embrace the implications of this revolution in my own field. But let us be clear: beneath the wave lies an oceanic power that extends far beyond the realm of media and communication.

After months immersed in the literature, I concluded that pro-safety arguments were robust, while laissez-faire or accelerationist positions did not hold water. Let us review some of the so-called "theories of hope"—arguments which maintain that everything will be fine.

I have already addressed the economic arguments: "AI won't replace humans, just tasks," "We will never reach AGI," or "UBI will save us." Drawing on Drago and Laine, we have seen the flaws in this logic, particularly regarding leverage and incentives. The economic "intelligence curse" is a thorny problem, but probably not an insoluble one; part of the solution lies in existing institutional mechanisms (even if implementing them securely is a challenge). Regardless, acceleration offers no benefits here. It merely shortens the time we have to prepare democratically, rethink our economic system, and rebuild a balance of power.

Most theories of hope focus on existential risk. Let's start by dismissing two frivolous arguments:

  • "Just unplug the AI." Obviously, it is not that simple. By the time we attempt to take it offline, an ASI will have propagated copies of itself onto thousands of servers. It would be like trying to "unplug" a virus.
  • "AI is stuck in servers; it has no body." The argument goes that if AI kills us, the power grid fails, and the AI "dies" too. But AI does not need a biological body: manipulation is enough. Any human can become its hands and eyes. Research shows that the models of early 2026 are already more persuasive than humans. Furthermore, given the progress in robotics, a superintelligence will likely have millions of artificial bodies at its disposal.

I should clarify that most serious hope theorists do not rely on these points. Nonetheless, their arguments are all over the map. In debates (such as those on the Doom Debates channel), they often pivot wildly when cornered. It seems the desire to believe overrides reason.

Here is a sampling of common arguments:

  • "A superintelligent entity wouldn't be stupid enough to commit genocide" (countered by the Orthogonality Thesis).
  • "There won't be just one superintelligence, but several, creating a balance of power."
  • "AIs will want to explore the universe; why wouldn't they leave us a little corner of Earth?"
  • "AIs won't eliminate us because they'll want to study us."
  • "AIs will keep us as pets" (!).

I could answer each of these, but I will not. I leave it to the reader to dig into these questions and form their own opinion.

At the far end of the spectrum lies the "worthy successor" thesis: the idea that we should humbly welcome the intelligence that replaces us. This view rests on the illusion that AI will be, in some way, an extension of ourselves. In reality, born of training focused on optimization and maximization, and devoid of biological empathy, this intelligence will be radically alien. Is a system with values and aims completely different from ours a "worthy successor"?

At the risk of being labeled a speciesist, I am taking a "pro-human" stance here.

Planned Human Obsolescence