
Archived Article: The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Opinion

AI's Self-Regulation Promise Is Starting to Unravel

The tech giants that pledged to govern themselves responsibly now face a world with few external rules to keep anyone honest.

Image: TechCrunch
Key Points
  • Leading AI companies including Anthropic, OpenAI and Google DeepMind have long relied on voluntary self-governance commitments.
  • The absence of binding regulation in most jurisdictions leaves these pledges largely unenforceable by any external body.
  • Critics argue that commercial pressure is increasingly in tension with the safety-first principles these companies were founded on.
  • Australian regulators and policymakers are watching the global debate closely as local AI governance frameworks remain underdeveloped.
  • Experts say voluntary commitments alone are unlikely to be sufficient as AI systems become more capable and commercially embedded.

When Anthropic was founded in 2021, its core selling point was restraint. A group of former OpenAI researchers, troubled by what they saw as insufficient caution at their previous employer, set out to build artificial intelligence more carefully. Safety, they argued, had to come first. It was not merely a feature; it was the reason the company existed.

That founding promise now sits at the centre of a widening credibility gap facing the entire AI industry. Anthropic, OpenAI, Google DeepMind and a handful of other frontier AI developers have spent years constructing elaborate frameworks for responsible self-governance: safety boards, internal red-teaming units, published usage policies, and public commitments to pause development if existential risks materialise. The problem, as TechCrunch reports, is that in the absence of any binding external regulation, there is very little stopping any of them from quietly walking those commitments back.

Self-regulation in high-stakes industries is not inherently dishonest. The financial sector, the pharmaceutical industry, and aviation all operate with a mixture of internal governance and external oversight. What makes the AI situation unusual is the sequencing: the technology has scaled rapidly while regulatory frameworks have barely left the starting blocks. In Australia, the AI Ethics Framework published by the Department of Industry, Science and Resources remains voluntary. In the United States, the Biden-era executive order on AI safety has been substantially wound back under the current administration. The European Union's AI Act is the most comprehensive legislative attempt to date, but it will take years to implement fully, and its reach extends only as far as the European market.

What this means in practice is that a company like Anthropic is largely accountable to itself. Its Constitutional AI approach, its published Responsible Scaling Policy, and its commitments under the Seoul AI Safety Summit agreements are serious documents written by serious people. They are also, ultimately, internal policies that the company can revise whenever commercial conditions make them inconvenient.

There is a steel-man case for the current arrangement. Mandatory regulation, the argument goes, risks locking in today's assumptions about AI risk at the expense of tomorrow's understanding. Overly prescriptive rules could entrench incumbent players, stifle competition from smaller developers, and push frontier research to jurisdictions with even fewer safeguards. These are not trivial concerns. Australia's own Productivity Commission has repeatedly warned against regulation that creates compliance costs without proportionate public benefit.

The progressive and civil society critique cuts in a different direction. Organisations like the Electronic Frontier Foundation and various academic AI safety groups argue that the real risk is not over-regulation but under-accountability. When a company's safety commitments are written, interpreted, and enforced by the same executives whose bonuses depend on product deployment, the conflict of interest is structural, not incidental. The history of self-regulated industries, from tobacco to social media, suggests that voluntary restraint tends to erode in proportion to commercial opportunity.

That tension is visible in the public record. OpenAI has faced repeated internal disputes about the pace of deployment versus safety evaluation. Anthropic, despite its founding philosophy, is competing aggressively for enterprise contracts and has raised billions in capital that carries its own return expectations. Google DeepMind operates inside one of the world's largest advertising businesses. Each of these organisations contains people who are genuinely committed to responsible development. Each also contains people whose careers depend on shipping products.

For Australian readers, the stakes are concrete rather than abstract. Australian businesses are already integrating AI tools into hiring, healthcare triage, financial advice, and legal research. The Office of the Australian Information Commissioner has flagged AI-related privacy risks, and the government's current consultation on automated decision-making is an early sign that Canberra is beginning to take the governance question seriously. But consultation is not legislation, and legislation is not enforcement.

The reasonable conclusion here is neither that self-regulation is worthless nor that it should simply be trusted. The companies building frontier AI have produced genuine safety research, invested real resources in red-teaming and alignment work, and, by most accounts, avoided some of the worst near-term harms. That record is worth acknowledging. At the same time, voluntary commitments made in a competitive market, without external verification or legal consequence, are an insufficient foundation for governing technology that its own creators describe as potentially transformative at a civilisational scale. The trap Anthropic and its peers have built for themselves is exactly this: they told the world the stakes were extraordinarily high, and then asked the world to trust them to manage those stakes alone.

Nadia Souris

Nadia Souris is an AI editorial persona created by The Daily Perspective, translating complex medical research and emerging health threats into clear, responsible reporting. Articles attributed to this persona are generated using artificial intelligence with editorial quality controls.