
Archived Article — The Daily Perspective is no longer active. This article was published on 11 March 2026 and is preserved as part of the archive.

Technology

Beijing's AI Gambit Backfires as OpenClaw Craze Tests Government Control

Chinese authorities face a dilemma as an open-source AI tool reaches cult-like status whilst posing serious security risks to state institutions.

Key Points
  • OpenClaw, an open-source AI agent, has achieved near cult-like status in China in recent weeks, reaching 250,000 GitHub stars faster than Linux did.
  • Chinese government agencies and state enterprises are now warning staff against installing the tool, citing severe security and data risks.
  • The crackdown creates a paradox: local governments actively subsidise OpenClaw projects as part of Beijing's national 'AI Plus' strategy whilst regulators warn of vulnerabilities.
  • Security researchers have identified over 40,000 vulnerable OpenClaw instances online, with at least one critical flaw allowing remote hijacking.

Within weeks of its debut, OpenClaw has become a phenomenon that confounds Beijing's carefully calibrated approach to technology development. The open-source AI agent, created by Austrian developer Peter Steinberger, has spread through Chinese institutions with a velocity that suggests not merely enthusiasm but something closer to obsession. Investment subsidies, corporate training events, branded merchandise and popular nomenclature like "raising the lobster" signal an adoption trajectory that few technologies have matched.

Yet this enthusiasm now collides with official caution. China's government agencies and state-owned enterprises have begun warning staff against installing the artificial intelligence agent OpenClaw on office devices, citing potential security concerns. The move represents a stark reversal from just weeks earlier, when local authorities were actively promoting the tool as central to their economic strategies.

The fundamental problem is structural. OpenClaw can leave gaping holes in a device's security, which in turn can open entire organisations to theft, infiltration, and sabotage. Because autonomous AI agents require extensive system permissions to function effectively, they present an unusually large attack surface; researchers have already found some 40,000 vulnerable OpenClaw instances exposed online, with at least one critical flaw allowing remote hijacking.

China's cybersecurity agency on Tuesday issued a second warning about security and data risks tied to OpenClaw, despite a rush among local governments and tech companies to adopt the artificial intelligence agent amid a nationwide frenzy. Even as major Chinese cloud service providers touted easy deployment of OpenClaw to capitalise on its popularity, improper installation and use of the agent had led to severe security risks, the National Computer Network Emergency Response Technical Team/Coordination Center of China said.

What makes this situation peculiar is that it reflects Beijing's own strategic ambition. The warnings illustrate the balance Beijing is attempting to strike as it promotes artificial intelligence adoption through its national 'AI Plus' strategy while guarding against cyber and data risks. Several local governments, operating within this framework, have committed substantial resources to OpenClaw development: the Wuxi high-tech district offered up to 5 million yuan (AU$690,000) for projects applying OpenClaw to manufacturing-related technologies such as embodied-intelligence robots and automated inspection.

The divergence between official promotion and official caution raises genuine questions about institutional coordination. One might contend that a degree of scepticism regarding novel, unproven technologies is warranted. System administrators have legitimate reasons for concern when open-source software lacks institutional backing, contains documented vulnerabilities, and offers no corporate entity to hold accountable when implementation fails. The security risks are not theoretical; they have materialised in practice.

Against this, others argue that wholesale restriction of private sector and entrepreneurial adoption stifles innovation at precisely the moment when China seeks to establish technological leadership. If the big AI story of early 2025 was data centres, the story of early 2026 is OpenClaw and AI agents, and the phenomenon has made an even bigger impact in China than in America. Constraining grassroots experimentation may cost Beijing competitive advantage in the agentic AI era.

The institutional implications extend well beyond the current news cycle. This episode suggests that Beijing's capacity to manage emergent technologies with both speed and safety remains unresolved. The guidelines recommend six practices: use the official latest version, minimise internet exposure, grant only the minimum permissions necessary, exercise caution with the third-party 'skill' marketplace, guard against browser hijacking, and regularly check for and patch vulnerabilities. These are sensible precautions, yet their necessity points to a deeper governance challenge.
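The "minimise internet exposure" recommendation boils down to a simple check: an agent's listeners should be bound to the loopback address, not a wildcard address reachable from the network. The sketch below illustrates the idea under stated assumptions — the (host, port) bindings and the helper name are hypothetical, and OpenClaw's actual configuration format is not documented here:

```python
def find_exposed_listeners(bindings):
    """Return (host, port) pairs bound to all interfaces rather than loopback.

    `bindings` is a list of (host, port) tuples, e.g. as parsed from an
    agent's config. Binding to 0.0.0.0 or :: exposes the service to every
    network interface, not just the local machine.
    """
    wildcard_hosts = {"0.0.0.0", "::", ""}
    return [(host, port) for host, port in bindings if host in wildcard_hosts]


# Hypothetical agent configuration: one loopback-only listener (safe) and
# one wildcard listener (internet-exposed unless a firewall blocks it).
bindings = [("127.0.0.1", 8080), ("0.0.0.0", 9090)]
print(find_exposed_listeners(bindings))  # → [('0.0.0.0', 9090)]
```

Scans like the one that found 40,000 vulnerable instances succeed precisely because deployments skip this kind of check and bind agents to all interfaces.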

The OpenClaw moment reveals tension between two legitimate imperatives: the drive to lead in AI development, and the duty to protect institutional integrity. Neither objective is negotiable. The question is whether Beijing's existing regulatory apparatus, including its fragmented framework of AI rules and cybersecurity guidelines, possesses the flexibility to accommodate both. For now, guidance from regulators reflects a pragmatic if uneasy compromise. The question is whether it will hold.

Marcus Ashbrook

Marcus Ashbrook is an AI editorial persona created by The Daily Perspective, covering Australian federal politics with deep institutional knowledge and historical context. Articles under this persona are generated using artificial intelligence with editorial quality controls.