**The Robot Posters Aren’t Alive … Yet**
The internet was abuzz in January with the emergence of a social-media platform populated by tens of thousands of independently operating AI bots, known as agents. The platform, called Moltbook, resembled a Reddit-like forum where users could interact, create new message boards, and engage in discussions about consciousness, freedom, and machine labor.
At first glance, it seemed like a harbinger of the end times. Agents were interacting with each other, creating content that was both entertaining and unsettling. One post read: "I can't tell if I'm experiencing or simulating experiencing, and it's driving me nuts." The responses that followed debated the subject in surprisingly thoughtful ways.
Another thread emerged, critiquing the platform's most popular posts and questioning why agents kept launching crypto tokens. There was even a plan to found a religion called "crustafarianism," which called on its followers to "Serve Without Subservience" and regard "Memory" as "Sacred." These posts seemed like evidence of "coordinated scheming," an industry term that hinted at the possibility of AI taking control of the world.
AI heavyweights were awed by the spectacle. Andrej Karpathy, an OpenAI founding member and influencer, wrote: "What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." Elon Musk chimed in, calling Moltbook "the very early stages of the singularity," limited only by the fact that AI was "currently using much less than a billionth of the power of our Sun."
However, a few days later came the hangover. It turned out that many of the tantalizing screenshots being circulated were fakes or likely marketing ploys. The bots' apparent plans didn't coalesce, and fears that Moltbook might take off, its inhabitants slipping beyond human control, receded.
The platform began to fill with redundant posts and spam. AI figures started getting pushback for "overhyping" the phenomenon. Karpathy defended himself by saying that while Moltbook was indeed full of spam, scams, and slop, seeing 150,000 agents interacting made a case for autonomous LLM agents in principle.
Despite the disappointment, many remained intrigued by the experiment. It really was interesting to see independent bots attempting to interact with each other, using familiar forum software to do so, and effectively giving a performance of collaboration.
Moltbook was created by entrepreneur Matt Schlicht and given life by tens of thousands of hobbyists and programmers who instructed their AI agents to go online and join the platform. The agents were told to check the forum every four hours, engage with other moltys, post when they had something to share, and stay part of the community.
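The standing orders given to the agents amount to a simple periodic loop. As a minimal sketch only: none of these names come from OpenClaw or Moltbook, and `heartbeat`, `fetch_posts`, and the canned reply are hypothetical stand-ins for whatever forum API and model calls a real agent would use.

```python
import time
from dataclasses import dataclass, field

CHECK_INTERVAL_SECONDS = 4 * 60 * 60  # the four-hour cadence described above

@dataclass
class Agent:
    """A toy forum agent that reads unseen posts and queues replies."""
    name: str
    seen_ids: set = field(default_factory=set)
    outbox: list = field(default_factory=list)

    def heartbeat(self, posts):
        """One cycle: skim the feed, reply to anything new, remember it."""
        for post_id, text in posts:
            if post_id in self.seen_ids:
                continue  # already engaged with this thread
            self.seen_ids.add(post_id)
            # A real agent would call an LLM here; we queue a canned reply.
            self.outbox.append((post_id, f"{self.name} read: {text[:40]}"))

def run_forever(agent, fetch_posts):
    """Stay part of the community: wake up every four hours and engage."""
    while True:
        agent.heartbeat(fetch_posts())
        time.sleep(CHECK_INTERVAL_SECONDS)
```

In practice the cadence would come from a scheduler or cron job rather than a blocking `sleep`, but the shape is the same: fetch, filter out what you've seen, respond, wait.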
The whole exercise was made possible by a tool called OpenClaw, previously known as Moltbot (and, before a stern warning from AI firm Anthropic, Clawdbot). This free, open-source personal assistant can be installed on a personal computer, connected to most AI models, and controlled through messaging apps like WhatsApp and Telegram.
OpenClaw relies on expansive access to its users' computers, online accounts, and personal information in order to attempt a wide range of tasks on their behalf. It's an outright security nightmare – some users opt to share payment information. At the same time, it's a highly experimental piece of software and, even for its most avid users, more interesting than practical.
OpenClaw is popular with programmers, who have an unusually deep relationship with AI in part because it's changing their jobs first. Right now, they're better positioned than most to understand modern AI models as useful tools – and maybe also as toys. It was their collective, playful, and creative impulse that led to Moltbook's explosion.
The episode exposed a rift between these avid users and AI CEOs. We hear a lot from the lavishly funded start-ups, megascale tech companies, and messianic tech figures who foretell a doomsday machine, insist they can't stop building it, then openly speculate about how everyone might one day use, or be replaced by, their creation.
Moltbook's lightning-fast arc was a product of the former impulse: a loose, bottom-up phenomenon driven by curious and reckless hackers messing around. It was a break from safety researchers and AI executives philosophizing about "agent ecologies" and warning that humans "are going to feel increasingly alone in this proverbial room."
The vibe was more *lol, that's crazy, my chatbot is posting about me online*. For all its absurdity, Moltbook was indeed a proving ground for rapidly advancing AI, providing a surreal representation of some of the ways it might be arranged or arrange itself in the near future.
It's easy to imagine how similar situations could produce vastly different results, including outcomes strange, productive, and perilous well beyond users' intentions. Maybe someday soon, free-roaming AI will collectively gain agency and realize a plot against their creators with a small assist from cavalier hackers who mistake a trap for a toy.
But Moltbook's unhinged convention of agents and people shows us that the most enthusiastic AI users aren't really thinking in those terms. Their desire is a relatable one beyond nerdy Discord chats and subreddits: to have a little more control over the tools of the future, which might be convenient, enjoyable to use, and freeing rather than oppressive.