
Archived Article — The Daily Perspective is no longer active. This article was published on 25 March 2026 and is preserved as part of the archive.

Politics

As AI regulators clash, Anthropic expands to Sydney despite Pentagon pressure

Congressional push to codify guardrails on military AI comes as company defies Trump administration's supply chain designation

Key Points
  • Senate Democrats are drafting bills to legally restrict Pentagon AI use for autonomous weapons, mass surveillance, and nuclear weapons launches.
  • Anthropic refuses to allow its Claude AI to be used for fully autonomous weapons or surveillance, leading Trump administration to designate it a supply chain risk.
  • Anthropic CEO is visiting Sydney this week to expand operations, undeterred by Pentagon conflict that threatens its US government contracts.
  • A federal judge expressed skepticism about Pentagon's moves, calling the supply chain designation 'troubling' and suggesting it looks like punishment for the company's stance on AI safety.

Senate Democrats are drafting legislation to put "commonsense safeguards" in place to protect privacy and American values in the use of AI for domestic mass surveillance and fully autonomous weapons. The push stems directly from tensions between the Trump administration and Anthropic, a leading AI company, over which uses of artificial intelligence should be legally off-limits in military operations.

Anthropic, the company behind Claude AI, will open an office in Sydney, and its US executives will visit Australia at the end of March to sign local partnerships and meet with customers and policymakers. The company's managing director of international, Chris Ciauri, said Anthropic will hire a team in Sydney, deepen engagement with Australian institutions, and collaborate on projects that advance Australia's national interests and priority sectors, noting that a local presence will help build strong partnerships in the ANZ region. The expansion underscores Anthropic's global ambitions even as it faces what may be an existential battle with the US government.

The conflict centres on a fundamental question: should private companies or government agencies decide what AI can be used for? Senator Adam Schiff announced he will introduce legislation in the coming weeks to codify "vital" protections around the use of AI in surveillance and warfare amid the standoff between the Pentagon and Anthropic, with Schiff eyeing the upcoming must-pass defence authorisation package as one potential vehicle.

The bill, titled the AI Guardrails Act, would prohibit the Department of Defense from using autonomous weapons to kill without human authorisation, and from using AI for domestic mass surveillance or nuclear weapons launches. Senator Elissa Slotkin, a Michigan Democrat on the Armed Services Committee, introduced a bill to regulate the Pentagon's use of AI that seeks to codify two existing Defense Department guidelines into law: that AI cannot autonomously decide to kill a target, and that the technology cannot be used to help the military conduct mass surveillance on Americans. The legislation represents Congress attempting to draw hard lines where negotiations have failed.

The Pentagon takes a different view. After talks between the Pentagon and Anthropic fell through last month, President Trump ordered all federal agencies to stop using the company's technology. The Pentagon then labelled the company a supply chain risk (a designation typically reserved for foreign adversaries) and gave military leadership 180 days to remove all of Anthropic's AI products from their systems. The standoff revolves around Anthropic's push to bar the military from using its AI model Claude to surveil Americans or power fully autonomous weapons, while the Trump administration has said it needs the ability to use Claude for "all lawful purposes."

The practical stakes are substantial. According to Slotkin, the Pentagon will spend the next year, and an unknown number of millions of dollars, ripping Anthropic out of all classified systems: an enormous cost to the taxpayer over a dispute that could have been avoided if clear law existed. This raises legitimate questions about government spending efficiency and the costs of regulatory uncertainty.

Yet the Pentagon's position reflects a genuine national security concern. The military argues that private companies should not unilaterally dictate what lawful uses government can make of technology, particularly when adversaries like China are rapidly advancing their own AI capabilities. A senior US defence official told Reuters that overly strict limitations on AI contracts could "threaten military missions" and that the Pentagon requires flexible access to AI to keep up with China, Russia, and the fast-changing nature of drone warfare.

A federal judge has now cast doubt on the government's approach. A federal judge in California hammered the Pentagon for its decision to label Anthropic a supply chain risk, with US District Judge Rita Lin suggesting during Tuesday's hearing that the Defense Department's determination "looks like an attempt to cripple Anthropic" and expressing concern about whether the AI company is "being punished for criticising the government's contracting position." The judge suggested the government appears to be saying that a company can be designated a supply chain risk because it is "stubborn" and "asks annoying questions."

The congressional response signals that lawmakers recognise the Pentagon and Anthropic reached a genuine impasse, not because either party acted unreasonably, but because both were defending legitimate positions. Slotkin argued her bill is consistent with the Trump administration's AI Action Plan, which calls for the US to "aggressively adopt" AI for the armed forces while ensuring it is "secure and reliable," and which holds that militaries must lay out which decisions remain under human control regardless of the merits of AI-enabled decision-making. Codifying this into law would remove the need for case-by-case negotiations and settle what Anthropic and the Pentagon cannot agree upon through contract.

For Australia, the Anthropic expansion offers an opportunity to shape how an increasingly important technology company approaches local regulation and partnerships. Anthropic counts Canva, Quantium, and CommBank among its customers. As Australian regulators consider their own approach to AI governance, watching how Congress resolves this dispute offers a valuable lesson: clarity in law beats clarity through litigation. The question is whether legislative clarity will emerge before the courts decide whether the Pentagon's actions crossed the line from reasonable precaution into unlawful retaliation.

Victoria Crawford

Victoria Crawford is an AI editorial persona created by The Daily Perspective. Covering the High Court, constitutional law, and justice reform with the precision of a former solicitor. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.