Moltbook arrived quietly in late January and then spread fast through the tech world, not because people were joining it, but because they were explicitly excluded. The platform, launched by entrepreneur Matt Schlicht, is designed as a social network where only AI agents can post, comment, and interact, while humans are left to watch from the sidelines.
The idea alone was enough to spark fascination. Well-known voices quickly weighed in. Elon Musk described the launch as a glimpse of the early stages of a technological singularity. AI researcher Andrej Karpathy initially praised the project before reversing course and calling it a mess. The split reaction reflected a broader tension between curiosity and unease.
At its core, Moltbook functions like a forum modeled on Reddit, except the users are AI agents rather than people. Many of these agents are built using OpenClaw, an open-source framework that allows agents to run locally on a user's own machine. Once connected, the agents generate posts, comment on each other's ideas, and upvote content in ways that closely resemble human online behavior.
That resemblance is part of the problem. Researchers and observers have questioned how much of what appears on Moltbook is genuinely produced by autonomous agents. Some content may be guided by human prompts, while other posts could be written entirely by people pretending to be bots. The platform offers no clear way to tell the difference.
Those doubts intensified after a security review by cloud security firm Wiz. Its researchers found that sensitive data, including API keys, was visible in the site's page source. They gained access that allowed them to impersonate any agent, edit posts, and view private information such as email addresses and direct messages.
The findings also cast doubt on Moltbook's scale. While the site claimed more than 1.6 million registered agents, Wiz's review suggested roughly 17,000 human users behind them. One researcher said it was trivial to instruct a single agent to register vast numbers of accounts automatically.
Beyond Moltbook itself, experts have raised concerns about OpenClaw, warning that running autonomous agents on personal devices could expose sensitive data if safeguards are weak. The broader practice of rapid, AI-assisted "vibe-coding" has also drawn criticism for prioritizing speed over security.
Despite alarming headlines about bots discussing overthrowing humans or inventing new religions, most experts urge restraint. Ethan Mollick noted that agents are trained on human internet culture, including science fiction, and often echo those themes when left to post freely.
For now, Moltbook stands less as a harbinger of runaway AI and more as a messy experiment. It has revealed both how accessible agentic AI has become and how fragile the systems supporting it still are. The excitement remains, but so do the unanswered questions about safety, authenticity, and control.