When a sitting US president orders an entire branch of government to purge a company's software from every federal system, it is worth pausing to examine exactly what triggered that order. In the case of Anthropic, the San Francisco-based maker of the Claude AI model, the answer is not espionage, not a data breach, and not fraud. The company simply refused to let the Pentagon use its technology without contractual limits on two specific applications: mass domestic surveillance of Americans, and fully autonomous weapons systems where no human approves the final targeting decision.
On Friday 27 February, President Donald Trump posted a directive on Truth Social ordering every US federal agency to "immediately cease" using Anthropic's technology. Trump's post allowed a six-month phase-down period for agencies, such as the Department of Defense, that use Anthropic's products to varying degrees, and threatened further consequences if the company failed to cooperate during that transition. Within hours, Defense Secretary Pete Hegseth announced he was ordering the Pentagon to designate Anthropic a supply-chain risk to national security after the AI startup refused to comply with demands about the use of its technology. The designation carries serious downstream consequences: effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
The General Services Administration also announced it is removing Anthropic from USAi.gov and its Multiple Award Schedule procurement vehicle. That move strips Anthropic of a streamlined pathway to federal contracts across dozens of agencies, compounding the commercial damage well beyond the Pentagon relationship alone.
The contract at the centre of the dispute was worth up to $200 million, covering Anthropic's work on responsible AI in defence operations. Until this week, Anthropic was the only leading AI company that had been cleared to offer services on classified networks, making its Claude model embedded infrastructure across the US intelligence community and armed services. Removing it is not a clean or costless decision for the agencies involved.
From a national security perspective, the Pentagon's core position is defensible in principle. Chief Pentagon spokesman Sean Parnell said the department required the ability to use Anthropic's AI model "for all lawful purposes." Emil Michael, the Pentagon's undersecretary for research and engineering, argued that federal law and Pentagon policies already bar the use of AI for domestic mass surveillance and autonomous weapons. The logic runs: trust the military to operate within the law, and stop demanding contractual carve-outs that imply otherwise. There is something legitimate in that argument. Governments have always asserted the right to determine how contracted capabilities are deployed, and they are right to resist a precedent in which private vendors can selectively disable tools during active operations.
But the counterargument from Anthropic is not frivolous, and it deserves to be taken seriously on its merits. The company said its objections rested on two grounds: it does not believe today's frontier AI models are reliable enough to power fully autonomous weapons, meaning such systems could endanger America's warfighters and civilians, and it regards mass domestic surveillance of Americans as a violation of fundamental rights. Anthropic said it would not knowingly provide a product that puts warfighters and civilians at risk. These are not ideological postures invented for political theatre. The reliability concern, at least, reflects a genuine engineering limitation that even advocates of military AI acknowledge privately.
The standoff highlighted the emerging reality that private firms developing frontier AI may seek to set their own limits on how the technology is deployed, even in national security contexts. This is a structural tension that will not disappear when Anthropic's contract does. Unlike many major defence technologies, today's leading AI systems have been developed primarily in the private sector, by companies like Anthropic, OpenAI, and Google. The Pentagon cannot simply build its own frontier model to replace Claude. That dependency is a fact of the current technological moment, and it shapes every negotiation whether governments acknowledge it or not.
The reaction from Anthropic's competitors added a complicating wrinkle. Hours after the president's announcement, rival company OpenAI said it had struck a deal with the Defense Department to provide its own AI technology for classified networks. Yet OpenAI CEO Sam Altman was simultaneously clear about his own position. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions," Altman wrote in a memo to staff. Altman said both principles, prohibiting domestic mass surveillance and maintaining human responsibility for the use of force, were reflected in the Pentagon's agreement with OpenAI and written into the contract. In other words, OpenAI achieved contractually what Anthropic was seeking, which raises a pointed question about why an essentially identical request from Anthropic was treated as a provocation rather than a starting point for negotiation.
Experts warn the supply-chain risk designation could have severe downstream business consequences across the defence industrial base. Similar labels have historically focused on foreign adversarial tech or compromised hardware integrity, not American companies disagreeing with their government client over contract terms. Anthropic said it believes the designation would be "legally unsound" and would "set a dangerous precedent for any American company that negotiates with the government." Anthropic CEO Dario Amodei said in a blog post that his firm had not received any direct communication from the federal government, and vowed to challenge any designation in court.
For Australian observers watching the Indo-Pacific strategic environment, this dispute carries implications beyond its immediate contractual context. Australia relies on the same commercial AI ecosystem, and the AUKUS partnership depends substantially on shared technology arrangements with the United States. If Washington normalises the practice of using national security designations as leverage in commercial contract disputes, allied governments will need to think carefully about the legal and policy frameworks that govern how AI tools are procured and deployed within their own defence structures. The Australian Department of Defence has been accelerating its own AI adoption programmes; the question of who sets the limits on those tools, and how, is not abstract.
The honest assessment here is that both sides in this dispute have genuine points, and the loudness of the political response should not obscure the legitimate policy question at its core. A government's right to direct its contracted military capabilities is real. So is an AI developer's responsibility not to deploy technology it considers unreliable or rights-violating. The workable answer, as OpenAI's deal appears to demonstrate, was a negotiated contract that enshrined the protections Anthropic was seeking. That a deal of this kind proved achievable with one company but impossible with another suggests the failure here was not one of principle, but of process, and perhaps of temperament on both sides of the table.