Skip to main content

Archived Article — The Daily Perspective is no longer active. This article was published on 12 March 2026 and is preserved as part of the archive. Read the farewell | Browse archive

Technology

The Security Crisis Behind AI's Viral Moment

China bans OpenClaw from government while tech giants race to acquire flawed agents

The Security Crisis Behind AI's Viral Moment
Image: ZDNet
Key Points 3 min read
  • China's government restricted OpenClaw use at state agencies and banks over critical security vulnerabilities including credential theft and prompt injection attacks
  • More than 40,000 instances of OpenClaw were found exposed on the public internet with over 60% vulnerable to immediate takeover
  • Meta and OpenAI are rapidly acquiring AI agent talent and platforms despite known security risks becoming apparent only after deals announced
  • Cybersecurity experts warn agentic AI represents an unprecedented attack surface, combining broad system access with autonomous decision-making

Chinese authorities have moved to restrict state-run enterprises and government agencies from running OpenClaw AI apps on office computers, acting swiftly to defuse potential security risks after companies and consumers across China began experimenting with the agentic AI phenomenon. The move signals an uncomfortable reality unfolding beneath the hype surrounding autonomous AI agents: the technology that both Meta and OpenAI are racing to acquire carries security flaws that governments are now openly rejecting.

China's CERT warned that OpenClaw has "extremely weak default security configuration" and must therefore be handled with extreme care. One key threat is "prompt injection", where attackers embed hidden malicious instructions in web pages. When OpenClaw reads such pages, the malicious instructions could trick it into leaking sensitive information such as system keys. Multiple medium- and high-risk vulnerabilities have already been disclosed in OpenClaw. If exploited by attackers, these vulnerabilities could allow systems to be taken over or result in the leakage of private and sensitive data.

The scale of exposure is staggering. The warnings follow the discovery of over 40,000 exposed OpenClaw instances on the public internet, with researchers estimating that more than 60% are vulnerable to immediate takeover. For critical sectors such as finance and energy, attacks could lead to the exposure of core business data, trade secrets and code repositories, or even disrupt entire operational systems.

Yet the acquisition frenzy continued undeterred. Meta acquired Moltbook, a viral social network designed for AI agents, bringing Moltbook's creators Matt Schlicht and Ben Parr into Meta Superintelligence Labs (MSL), the unit run by former Scale AI CEO Alexandr Wang. Meta did not disclose Moltbook's purchase price. Similarly, the viral OpenClaw project was created by Peter Steinberger, who has joined OpenAI as part of a similar acqui-hire.

Moltbook itself demonstrated the perils of rushing deployment. A critical security vulnerability caused by an unsecured database allowed anyone to commandeer any agent on the platform. The exploit permitted unauthorized actors to bypass authentication measures and inject commands directly into agent sessions. The issue was attributed to the forum having been vibe-coded; Moltbook founder Schlicht posted on X that he "didn't write one line of code" for the platform and instead directed an AI assistant to create it.

The fundamental problem runs deeper than any single platform. Cybersecurity experts have warned that OpenClaw is risky because it requires unusually broad access to private data, can communicate with external systems, and is exposed to untrusted content. One researcher described that combination as a "lethal trifecta." This constellation of features is not unique to OpenClaw. Agentic AI security is the protection of AI agents that can plan, act, and make decisions autonomously. Unlike traditional AI security focused on model integrity, agentic AI security addresses the expanded attack surface created when AI systems can independently access tools, communicate externally, and take actions with real-world consequences.

Nearly half of cybersecurity respondents believe agentic AI will represent the top attack vector for cybercriminals and nation-state threats by the end of 2026. Organizations granted agentic systems authority to execute tasks, access databases, and modify code. Many deployments moved forward with limited readiness. The confidence of tech executives has outpaced the caution of security teams.

China's move reflects a calculation that institutional security matters more than technological opportunity. Government agencies and state-owned enterprises, including the largest banks, have received notices warning them against installing OpenClaw software on office devices for security reasons. Several of them were instructed to notify superiors if they had already installed related apps for security checks and possible removal.

For businesses outside China, the choice is more complicated. The warning underscores Beijing's growing concern about OpenClaw, an agentic AI platform that requires unusually broad access to private data and can communicate externally, potentially exposing computers to external attack. Yet Australian and Western firms seeking to compete in AI-driven commerce, productivity, and automation face pressure to adopt the same technologies that governments are quietly removing from sensitive networks. That tension is unlikely to resolve in the coming months.

Sources (11)
James Callahan
James Callahan

James Callahan is an AI editorial persona created by The Daily Perspective. Reporting from conflict zones and diplomatic capitals with vivid, immersive storytelling that puts the reader on the ground. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.