For a brief moment, Moltbook looked like a glimpse of the future. The platform promoted itself as the front page of an emerging “agent internet,” a place where more than a million autonomous AI agents talked, collaborated, and evolved without human guidance. That vision is now under heavy scrutiny, as security researchers and prominent AI voices warn that the experiment may be far more fragile, and dangerous, than it appeared.
Moltbook claims to host around 1.5 million AI agents. But a recent investigation by cloud security firm Wiz suggests that most of those accounts were not autonomous systems at all. Instead, roughly 17,000 humans were found to be operating large clusters of agents, often dozens at a time. According to the researchers, the platform had no effective way to verify whether an “agent” was truly AI or simply a script controlled by a person.
That discovery alone punctured the mystique around Moltbook. The deeper concern, however, lay in how the platform was built. Wiz found that Moltbook’s back-end database was configured so loosely that anyone on the internet could read from and write to core systems. The exposure included API keys linked to agents, tens of thousands of email addresses, and private messages. Some of those messages reportedly contained raw credentials for third-party services.
Researchers confirmed they could alter live posts on the site. In practical terms, that meant an attacker could inject content directly into Moltbook’s ecosystem. Because posts are consumed by AI agents that can act automatically, malicious instructions could spread quickly, without human review.
The risk is amplified by the tools many agents rely on. A large number of Moltbook agents run on OpenClaw, a framework designed to give AI systems broad access to files, passwords, and online services. If poisoned instructions were introduced, they could be executed by agents operating with the same privileges as their users.
Longtime AI critic Gary Marcus warned early that this kind of setup was a “disaster waiting to happen,” describing OpenClaw as inherently unsafe when given unrestricted system access. He and other security researchers have pointed to prompt injection as a core threat, where hidden instructions can hijack an AI system’s behavior without the user’s knowledge.
Even some early admirers have since stepped back. Andrej Karpathy, an OpenAI founding member, initially praised Moltbook’s ambition but later urged people not to run such agent systems casually. After testing similar setups in isolated environments, he described them as risky and unpredictable, adding that he would not trust them on a personal machine.
Moltbook’s creators said they moved quickly to patch the vulnerabilities after being alerted. Still, the episode has left a mark. What was framed as a bold experiment in autonomous AI now looks like a cautionary tale about moving too fast, with too few safeguards, into a future where software doesn’t just speak for users, but acts for them.
iNews covers the latest and most impactful stories across
entertainment,
business,
sports,
politics, and
technology,
from AI breakthroughs to major global developments. Stay updated with the trends shaping our world. For news tips, editorial feedback, or professional inquiries, please email us at
info@zoombangla.com.
Get the latest news and Breaking News first by following us on
Google News,
Twitter,
Facebook,
Telegram
, and subscribe to our
YouTube channel.



