An AI Company Just Fired Someone for Endorsing Human Extinction
Michael Druggan, former xAI employee, is now trying to de-extinct his career. I’ve written at length about the growing influence of pro-extinctionist sentiments within Silicon Valley. Pro-extinctionism is, roughly put, the view that our species, Homo sapiens, ought to go extinct.
Well, it looks like we just witnessed “the first example ever of someone openly being fired allegedly for wanting humanity to end.” At least that’s what some folks are saying, but I think this is misleading. The person in question wasn’t fired for being a pro-extinctionist. They were fired for holding a particular kind of pro-extinctionist view. That distinction is crucial, as it points to a deeply problematic trend among Valley dwellers: more and more folks are embracing a “digital eschatology” (as I’ve called it before) according to which the future will, inevitably, be digital rather than biological.
Debates among these people increasingly focus not on whether this digital future is desirable, but on which type of digital future is most desirable. Let’s dive in …
Who Cares If Your Child Dies? Not Michael Druggan
The story begins with Michael Druggan, an employee at Elon Musk’s company xAI. Druggan describes himself as a “mathematician, rationalist and bodybuilder.” Rationalism is the “R” in the acronym “TESCREAL.”
On July 2, Druggan exchanged a few X (formerly Twitter) posts with an AI doomer who goes by Yanco. AI doomers believe that if we build AGI in the near future, the probability of total annihilation is extremely high.
Here’s that exchange:
Druggan clearly indicates that he doesn’t care if humanity gets squashed under the foot of superintelligent AGI, nor does he give a hoot whether Yanco’s (or, presumably, anyone’s) children survive. This is a truly atrocious thing for Druggan to say, as it shows a blatant disregard for human life.
But let’s look closer at what Druggan said:
"If the AI is somehow brittle — you know, silicon circuit boards don’t do well just out in the elements. So, I think biological intelligence can serve as a backstop, as a buffer of intelligence. But almost all — as a percentage — almost all intelligence will be digital."
This is a pro-extinctionist view, and it isn’t that different from Page’s techno-eschatological vision. Indeed, one way of interpreting the Musk-Page debate is that what they were really arguing about was what constitutes a “worthy successor.”
The Future of Debates about the Future of AI
My guess is that we’ll see this sort of debate become increasingly common in the coming months. The loudest voices will bicker not over the question “Should posthumans replace us?” but over “Should these particular posthumans replace us?” The issue won’t be whether pro-extinctionism is bad but rather which pro-extinctionist view is best.
Because, again, all of these people assume that the inevitable next leap in the evolution of “intelligence” will usher in a post-Singularity digital world to replace our biological world. The most pressing and urgent questions will then be whether this or that possible digital world is the right one to aim for.
My View on Human Extinction
I’m a “humanist” in the sense that Musk intended in his 2023 post on X: someone who opposes pro-extinctionism in all its forms. Hence, debates about which sorts of posthumans should replace us are, from my point of view, akin to people squabbling over the question “Should we use the bathtub or sink to drown this kitten?”
The response to that is, of course: If that’s what you’re arguing about, then something has gone terribly wrong, because no one should be drowning any kittens to begin with!
Conclusion
Every generation at least since Jesus (including Jesus himself) has included apocalypticists who shouted that the world as we know it is about to end. It’s alarming that apocalypticism has become so deeply entangled with techno-futuristic fantasies about AI, and that it has been so widely embraced by people with immense power over the world we live in.
Our task as genuine humanists is to keep the conversational focus on how outrageous it is to answer the question “Should we be replaced?” in the affirmative, thereby preventing the second question, “What should replace us?”, from gaining much or any traction. But given the money, power, and influence of the TESCREAL pro-extinctionists, as well as how far into the AGI race we now are, I worry that our voices won’t be heard.
There’s only one way to find out — shout!
For more content like this each week, please sign up for this newsletter! I need only about 300 paid subscribers to fully support myself next year, as I don’t have an academic job lined up. :-) Thanks so much for reading and I’ll see you on the other side!