When the Trump administration ordered all federal agencies to stop using Anthropic's artificial intelligence products last week, it did so not because of a national security breach, foreign interference, or corporate fraud. It did so because a private company refused to remove its own safety guardrails. The strategic calculus at work involves several competing considerations, none of them straightforward, and the episode deserves serious scrutiny, not least from Canberra, where the implications of AI in defence contexts grow more pressing by the month.
The origins of the confrontation stretch back to July 2025, when Anthropic signed a $200 million contract with the Pentagon to deploy its Claude AI model within classified military systems. The company had already been working with Palantir since late 2024 to give US defence and intelligence agencies access to various Claude systems, and the arrangement looked, on its face, like a sensible integration of frontier AI into national security infrastructure. The deal also reflected a broader push by the company, as it readied itself for a public offering, to court national security business, with executives announcing that the award "opens a new chapter" for the firm.
What often goes unmentioned is how quickly internal tensions surfaced. Disagreements emerged over the military's future use of Anthropic's systems, with company officials growing concerned that the technology could eventually be used to carry out lethal autonomous operations. The flashpoint came in January when, following the US military's operation to capture former Venezuelan President Nicolás Maduro, an Anthropic employee raised concerns with Palantir about how Claude had been used in the operation. Palantir contacted the Pentagon, expressing alarm that Anthropic might disapprove of its technology being used in similar future missions. The matter reached Defence Secretary Pete Hegseth, who reacted angrily.
The conflict centres on Anthropic's push for guardrails that explicitly prevent the military from using its Claude model to conduct mass surveillance on Americans or to power autonomous weapons. These are not exotic demands. They track closely with existing US military doctrine and federal law. The Pentagon, for its part, wants the ability to use Claude for "all lawful purposes" and says it has no interest in either of the uses that Anthropic was concerned about. The impasse, then, is not principally about what the military intends to do today. It is about what contractual language will bind it in the future.
On February 24, Hegseth gave Anthropic co-founder and CEO Dario Amodei a deadline: relent by 5:01 p.m. on Friday, February 27, and allow unrestricted use of the company's AI models "for all legal purposes." Anthropic said the contract language it received overnight "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," and that new language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Amodei held the line. The Pentagon's threats, he observed, were "inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," adding that "these threats do not change our position."
The outcome was swift and severe. After the company refused to allow the Pentagon unrestricted use of its AI technology, the Trump administration ordered federal agencies, and contractors that work with the military, to cease business with Anthropic. Government agencies, including the Pentagon, were given six months to phase out Anthropic's products, and Hegseth declared the company a "supply chain risk", a designation usually reserved for firms thought to be extensions of foreign adversaries.
The competitive implications were not slow to materialise. Hours after the Department of War designated Anthropic a supply chain risk, OpenAI CEO Sam Altman announced his company had secured a coveted Pentagon contract. Altman told employees the government is willing to let OpenAI build its own "safety stack", a layered system of technical, policy, and human controls, and that if the model refuses to perform a task, the government will not force OpenAI to override it. The arrangement appears, at least in broad terms, to accommodate the same categories of concern that Anthropic raised, which invites a pointed question: why could a deal not be struck with Anthropic on similar terms?
The legal and political terrain here is considerably more complex than the headlines suggest. Those inclined to side with the Pentagon will emphasise the principle of contract sovereignty: when a firm accepts public money for classified work, it cannot reasonably insist on veto power over how that work is conducted. The Pentagon's position, that decisions about acquired technology belong to the government rather than the vendor, is not without legal and democratic merit. Pentagon chief technology officer Emil Michael argued: "At some level, you have to trust your military to do the right thing. We do have to be prepared for what China is doing. So we'll never say that we're not going to be able to defend ourselves in writing to a company."
Yet the counter-argument deserves an equally careful hearing. At the root of Anthropic's stance is the belief that the Trump White House is an unreliable custodian of AI military and surveillance technologies, and that the firm must impose independent guardrails to prevent misuse by the Pentagon and other agencies. Legal and policy experts say the decision raises profound questions about the relationship between government and business: it is the first time the US has designated an American company a supply chain risk, and the first time the designation has been used in apparent retaliation against a business for refusing contract terms. Amodei himself described the designation as "unprecedented" for an American firm rather than a foreign adversary, and characterised the government's statements as "retaliatory and punitive."
Three factors merit particular attention from an Australian perspective. First, Australia is a partner in the AUKUS arrangement, a framework explicitly designed to share advanced defence technology, including, under its Pillar II provisions, AI-enabled capabilities. The question of what governance norms attach to AI used in classified military settings is therefore not a distant American problem. Second, Australian firms and research institutions that use Anthropic's Claude models under commercial agreements may be caught in the crossfire of the designation: any company that works with the US military must now prove that its Pentagon work touches nothing related to Anthropic, and much of Anthropic's success stems from enterprise contracts with large companies, many of which may themselves hold Pentagon contracts. Third, the precedent set here, of a government using economic coercion to override a technology company's own safety architecture, will be noted carefully in Beijing, Moscow, and elsewhere. It weakens the broader argument that democratic governments are more trustworthy stewards of powerful AI than authoritarian ones.
What is often overlooked in the public discourse is the technological backdrop against which all of this is unfolding. The same week that Anthropic was locked in existential negotiations with the Pentagon, the world's first transatlantic fibre-optic cable was quietly being pulled from the Atlantic seabed. TAT-8, launched in 1988, was the first transatlantic cable to carry traffic over optical fibre rather than copper, and it provided the blueprint for every undersea internet cable that followed. Its capacity was exhausted within 18 months, a clear sign that the world's appetite for digital communication was outrunning expectations and proof that fibre-optic transmission was both viable and necessary for international connectivity. The infrastructure that now carries the vast bulk of global AI traffic, including the classified communications Anthropic and the Pentagon are fighting over, traces its engineering lineage directly to TAT-8. The International Energy Agency has projected that copper supply could fall 30 per cent short of demand within a decade if new sources do not keep pace, which makes the thousands of kilometres of recovered cable, and the copper conductor that powers its repeaters, a welcome source of the metal. The old and the new, physical infrastructure being recycled for a resource-constrained future and AI governance disputes shaping the contours of digital power, are more closely connected than they might appear.
The evidence, though incomplete, suggests this dispute will not be resolved quickly or cleanly. Earlier in the week, the Pentagon said it would also consider compelling Anthropic to work with it under the Defense Production Act, a 1950 law that gives the president significant emergency authority over domestic industry. It is not clear how the Pentagon could simultaneously compel Anthropic to work with it under the Act and deem the company a supply chain risk. Anthropic has said Hegseth's designation is "legally unsound" and would "set a dangerous precedent for any American company that negotiates with the government," and that it will challenge the designation in court. The courts will now take their turn at a question that legislators in Washington, Canberra, and every other capital with a serious AI policy interest should have addressed years ago: who governs the machines, and by what authority?
Reasonable people will disagree about the correct balance between national security prerogatives and corporate ethical autonomy. What the Anthropic episode makes clear is that AI governance frameworks built on goodwill and informal understandings are insufficient. The Australian Parliament and the Department of Defence would do well to study this episode carefully, not as spectators, but as participants in an alliance that is integrating these very technologies into its own classified systems. The hard questions being fought over in American courts and Pentagon memos will arrive at Australia's door whether or not Canberra chooses to engage with them proactively.