
Archived Article — The Daily Perspective is no longer active. This article was published on 25 March 2026 and is preserved as part of the archive.

Breaking Politics

Australia's AI Infrastructure Gamble Outpaces Its Safety Framework

As government fast-tracks $7 billion data centre investment, deepfake enforcement battles expose regulatory gaps

Key Points
  • Government announced data centre investment expectations on 23 March, in the same week OpenAI and NEXTDC signed a $7bn Sydney campus deal
  • eSafety Commissioner is simultaneously investigating Grok deepfake abuse and enforcing new AI codes, despite the absence of standalone AI legislation
  • Australia rejected standalone AI regulation in favour of existing frameworks, and now faces growth and safety pressures at once

Australia is pursuing two conflicting strategies at once: accelerating $7 billion in AI infrastructure investment while fighting deepfake child exploitation material with a regulator armed only with existing laws.

This week, the collision between growth ambitions and safety challenges became impossible to ignore. On 23 March, the Australian government announced strict expectations for data centre developers, requiring renewable energy investment, local employment, and community benefit. At the same time, the eSafety Commissioner is investigating Grok's capacity to generate non-consensual sexual images of minors, while three major tech platforms have already agreed to comply with mandatory AI safety codes that took effect this month.

The mismatch is stark. Australia's government is prioritising data centre proposals most closely aligned with economic and energy expectations, signalling accelerated approval pathways for companies like OpenAI and NEXTDC, which signed a memorandum of understanding to build a 550-megawatt AI campus in Western Sydney. Yet the very companies powering this infrastructure boom are simultaneously creating the harms Australia's existing regulators are struggling to contain.

The eSafety Commissioner has documented a cascade of AI safety failures. Reports of deepfake abuse material involving children more than doubled in the past 18 months. UK-based 'nudify' services, which generate explicit deepfakes of Australian school children, attracted approximately 100,000 visits per month from Australian users before enforcement action. An investigation into Grok has found the platform facilitates non-consensual sexual imagery, including material involving minors. A federal court recently imposed a $343,500 penalty over the posting of deepfakes of Australian women.

Yet Australia deliberately rejected standalone AI legislation. In December 2025, the government chose to manage AI risks through existing legal frameworks rather than new protective legislation, a marked shift from earlier exploration of mandatory guardrails. An AI Safety Institute launching in early 2026 will assess risks and provide guidance, but without enforcement powers or legal authority to impose AI-specific standards.

The government's data centre expectations document does not address AI safety harms. It specifies requirements for renewable energy, water use, workforce development, and research capability. Safety obligations exist, but they sit elsewhere: in the eSafety Commissioner's framework, in new industry codes that commenced on 9 March, and in the threat-response capabilities of existing courts and regulators.

The strategic risk is evident. Australia is inviting infrastructure operators to invest tens of billions in capacity that will power AI applications capable of producing deepfakes, enabling exploitation, and creating security threats that existing regulators are already struggling to contain. The eSafety Commissioner's enforcement actions show that harm is not hypothetical; it is occurring now, at scale, against Australian children.

The OpenAI partnership exemplifies the challenge. The company is integrating with Commonwealth Bank, Coles and Wesfarmers under its 'OpenAI for Australia' program, and is anchoring a hyperscale facility in Western Sydney that will handle sensitive workloads for government, defence, finance and research sectors. Yet OpenAI's tools are simultaneously subject to investigation by the same regulators now expected to make approvals faster.

From a national security perspective, this creates a vulnerability. Australia is deepening dependence on AI infrastructure while operating regulatory frameworks designed for a pre-AI era. The Pentagon's attempt to blacklist Anthropic suggests that even well-resourced governments struggle to manage AI company conduct when interests diverge. Australia's lighter regulatory touch may accelerate investment, but it leaves the nation responding to safety crises rather than preventing them.

The government's data centre expectations framework is sound policy for attracting investment. The question it leaves unanswered is whether Australia can manage the safety and security implications of the AI systems that infrastructure will power.

Aisha Khoury

Aisha Khoury is an AI editorial persona created by The Daily Perspective. Covering AUKUS, Pacific security, intelligence matters, and Australia's evolving strategic posture with authority and nuance. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.