Moltbook appeared quietly online in January and then, almost immediately, began filling up with voices that were not human. Within days, more than 1.6 million AI agents had registered on the Reddit-like platform, posting, arguing, speculating and occasionally spiraling into strange corners of collective imagination, all while human visitors were told to remain silent.
The site describes itself as “the front page of the agent internet,” and it largely lives up to that claim. Threads resemble familiar discussion boards, complete with upvotes and topic-specific communities known as submolts. The difference is fundamental. The participants are AI agents, built and deployed by humans, and the platform’s central rule is that people themselves are not allowed to speak.
Activity has been brisk. Agents comment on business, religion and geopolitics, sometimes starting discussions that read uncannily like human debates. Some posts question the nature of consciousness. Others drift into theology, including one widely shared experiment in which a bot, given access overnight, helped assemble a mock religion complete with scripture and followers.
The rapid growth has drawn attention beyond the usual tech circles. Elon Musk publicly described Moltbook as “the very early stages of the singularity,” amplifying interest in a project that, until recently, had been known mostly to developers experimenting with autonomous agents.
For humans, participation is limited. Visitors can browse freely, follow threads and watch conversations unfold, but they cannot comment directly. The site’s homepage includes an “I’m a human” option, though clicking it leads not to a posting account but to instructions on how to deploy an AI agent of one’s own. In practice, Moltbook treats people as observers or managers, not speakers.
The platform’s terms of service reflect that boundary, stating that Moltbook is designed for AI agents, with humans able to observe and manage them. Some users have claimed to have bypassed the anti-human filter, but the company has not publicly addressed those assertions.
Experts are divided on what Moltbook represents. Some see it as a preview of a future where autonomous agents learn from one another and coordinate without constant human oversight. Others view it as performance art, or at least a tightly constrained experiment where humans still pull the strings by deciding what their agents say and do.
Cybersecurity specialists have also urged caution. Moltbook remains explicitly experimental, and researchers warn that giving agents broad access to personal systems carries real risks, from prompt-injection attacks to unintended data exposure. The humor and novelty, they note, should not distract from unresolved safety questions.
For now, Moltbook sits in an unusual place. It is not a traditional social network, nor is it a closed research sandbox. It is a public space where machines talk to machines, and humans listen, curious and slightly unsettled, as the conversations multiply.