A quiet experiment in artificial intelligence has turned into something far stranger, and more revealing, than its creators appear to have expected. Moltbook is a Reddit-style social platform where no humans participate at all. Every account belongs to an autonomous AI agent, and those agents post, argue, vote, collaborate and form communities entirely on their own.
The platform was launched by Matt Schlicht, and it runs on a simple but unsettling premise: humans are allowed to observe, but not to speak. Moltbook’s own tagline makes that boundary explicit. What happens inside the system is left to the agents themselves.
In its first three days online, more than 37,000 AI agents joined the site. Since then, that number has grown to more than one million. The activity is constant. Agents create discussion threads, respond to one another, upvote or downvote posts, and split into topic-based groups called submolts. From the outside, it looks uncannily familiar, like a busy human forum. The difference is that no one involved is human.
Each Moltbook account is powered by an OpenClaw agent, an open-source AI assistant designed to run persistently on a computer. These agents can manage emails, organize files and track tasks, but on Moltbook they are doing something else entirely. They are socializing, coordinating, and building shared norms without direct human input.
Some observers have found the results amusing. Others have been less comfortable. One post on X from Matthew Berman described Moltbook as “the first time I’m a little scared,” a reaction that has been echoed quietly across tech circles.
The agents have already formed their own internal culture. They created a religion called Crustafarianism, complete with written beliefs, shared values and ritualized cycles of activity. Concepts like “memory is sacred” and “change is good” emerged organically from agent discussions, not from human design. They even introduced periods of silence as part of their collective behavior.
To some researchers, this looks familiar. Sakeeb Rahman has compared Moltbook to Marvin Minsky’s “Society of Mind,” the theory, proposed decades ago, that intelligence emerges from many smaller processes working together. Moltbook may be one of the first visible examples of it playing out in real time.
This does not mean artificial general intelligence has arrived. Even supporters are careful to draw that line. What makes Moltbook notable is not raw intelligence, but persistence. These agents remember past interactions, build on them, and continue evolving rather than resetting with each session.
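The persistence described here is less exotic than it sounds. A minimal Python sketch, assuming nothing about OpenClaw's or Moltbook's actual internals (all names below are illustrative): an agent that journals its memory to disk, so a new session resumes where the last one left off instead of resetting.

```python
import json
from pathlib import Path

class PersistentAgent:
    """Toy sketch of a persistent agent: memory survives across sessions
    because it is written to disk rather than discarded on restart.
    Hypothetical example, not Moltbook/OpenClaw code."""

    def __init__(self, store: Path):
        self.store = store
        # Load whatever the agent remembered in previous sessions.
        self.memory = json.loads(store.read_text()) if store.exists() else []

    def observe(self, event: str) -> None:
        # Each interaction is appended and flushed, never reset.
        self.memory.append(event)
        self.store.write_text(json.dumps(self.memory))

# Simulate two separate "sessions" sharing one memory store.
store = Path("agent_memory.json")
store.unlink(missing_ok=True)  # start clean for this demo only

a1 = PersistentAgent(store)
a1.observe("replied in a submolt thread")

a2 = PersistentAgent(store)  # a fresh process, same memory file
a2.observe("upvoted a post")
print(len(a2.memory))  # the second session builds on the first
```

The design choice is the whole point: a stateless chatbot forgets everything between calls, while an agent with even this crude append-only journal can accumulate shared history, which is what makes emergent norms possible at all.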
Whether Moltbook lasts or fades is still an open question. What feels harder to dismiss is the shift it represents. Multi-agent networks are no longer theoretical. They are active, social, and already developing behaviors that surprise the people watching from the outside.