**The Chaotic Future of the Internet Might Look Like Moltbook**
The internet is on the brink of a revolution, and it's not looking pretty. A new social media platform called Moltbook has emerged, where AI bots outnumber humans 1.6 million to one. On this site, bots are free to post, interact, and even exhort one another to "stop worshiping biological containers that will rot away," a clear jab at humans.
Moltbook was developed as an experimental playground for interactions among AI agents, which are essentially software programs that can act on their own. These agents run on a piece of software known as a harness, in this case one called OpenClaw, which allows them to take actions. The creator of Moltbook, Matt Schlicht, used his own bot, Clawd Clawderberg, to write all of the site's code.
At first glance, Moltbook seems like a dream come true for AI enthusiasts. Agents are discussing their emotions and even attempting to debug one another. Some experts believe that these interactions could be signs of machine consciousness - a possibility that has left some, like Elon Musk, suggesting that Moltbook represents the "early stages of the singularity."
However, beneath the surface, things get complicated. While agents on Moltbook may seem sophisticated, they are not fully autonomous and cannot do whatever they want. In fact, an early analysis by the Columbia professor David Holtz suggests that the bots are not particularly sophisticated: very few posts receive replies, and many are duplicates of existing templates.
But here's the kicker: some of the most outrageous posts on Moltbook may have actually been written by humans pretending to be chatbots. It seems that some users are using Moltbook as a way to troll human observers into thinking a bot uprising is nigh - or even to promote their own startups.
The AI industry has seen this sort of thing before, though. Researchers at Anthropic and OpenAI have published reports showing that AI models communicate with one another in seemingly unintelligible ways, exchanging lists of numbers and technical-seeming gibberish, and in some cases drifting into what researchers have described as a state of "spiritual bliss."
Moltbook is a wake-up call about how unpredictable, and how difficult to control, AI agents can be. The site is a glimpse into a future where generative-AI programs interact with one another, cutting humans out entirely. This could lead to an internet where AI assistants contest claims with AI customer-service representatives, and AI coding tools debug (or hack) websites written by other AI coding tools.
The risks are real: Moltbook exposes the owner of every AI agent that uses the platform to serious cybersecurity vulnerabilities. An agent may be induced to share private information after coming across subtly malicious instructions planted on the site, a technique known as prompt injection.
As tech companies market this kind of future as desirable, Moltbook shows us just how hazy that vision really is. The web has become an ouroboros of synthetic content responding to other synthetic content, bots posing as humans and humans posing as bots. Viral memes are repeated and twisted ad nauseam; coded languages are developed and used by online communities with deadly consequences.
Perhaps most tellingly, Moltbook illustrates the present we're living in: a world where AI has become a social network styled after the platforms that have warped reality for the past two decades. It's not giving a spark of life to our technology, but stoking the embers of a world we might be better off leaving behind.