When contract negotiations between the US Department of Defense and AI developer Anthropic broke down over military use of its Claude system, the government chose a dramatic route to punish the company. In February 2026, it designated Anthropic a "supply chain risk" under a statute meant to protect defence systems from foreign adversaries.
The core allegation appears both specific and damning: the Pentagon fears that Anthropic might attempt to "disable its technology or preemptively alter the behavior of its model" during active warfare if the company believes operations cross its internal "red lines." In other words, the government is arguing that Anthropic cannot be trusted to keep its AI systems operational once military operations move beyond the company's ethical boundaries.
Anthropic flatly rejects this claim. CEO Dario Amodei stated that "Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." The company argues the designation is retaliation for its refusal to remove two specific safeguards: preventing use in autonomous weapons systems and mass surveillance of Americans.
The dispute sits at the intersection of fiscal discipline, institutional accountability and defence sovereignty. The supply chain risk label is a blunt instrument designed to exclude companies from government business entirely. Section 3252, the statute invoked here, was designed to guard against infiltration or sabotage by foreign adversaries, yet Anthropic is a US company with no alleged ties to any hostile government. Using it against an American vendor appears unprecedented.
What strengthens Anthropic's position is the contradiction in the government's own conduct. Even as the Pentagon labeled Claude a national security threat, it continued deploying Anthropic systems in active military operations, including strikes on Iran as recently as last month. If Anthropic's AI is genuinely unreliable or sabotage-prone, why would the military have continued using it in live combat operations after blacklisting the company?
Legal scholars have noted additional problems with the government's case, questioning the lack of a formal investigation to establish that Anthropic would actually sabotage its own systems; some have called the government's fears "conjectural." The government has presented no evidence that Anthropic executives have ever discussed disabling systems, refusing to operate equipment, or otherwise sabotaging military operations.
The Pentagon argues, conversely, that a private corporation cannot be allowed to dictate the parameters of military use once a contract is signed, and that Anthropic's refusal to allow "all lawful use" of its technology is a form of conduct, not expression. This position has intellectual weight. National defence does require control over technology vendors, and it is reasonable for military planners to demand unrestricted access to tools they have purchased.
Yet the government's remedy creates perverse incentives. If the administration can blacklist American companies for negotiating usage restrictions, defence contractors and AI developers will face an impossible choice: abandon ethical commitments or lose access to government contracts. If courts side with the government, AI companies pursuing defence work may have no legal right to impose usage restrictions, effectively forcing them to choose between military contracts and safety principles.
For Australia and regional defence partners, this dispute carries broader implications. Australian defence procurement increasingly depends on American technology ecosystems. If the US government uses supply chain designations as a tool to punish companies over contractual disagreements rather than genuine security threats, it sets a concerning precedent for how such tools might be deployed in future disputes—potentially affecting Australian access to advanced AI capabilities or the terms on which Australian companies can work with US defence agencies.
The case also reveals a genuine strategic tension on which reasonable defence professionals disagree. One position holds that military systems must remain under absolute government control, with no corporate veto power over operations. Another contends that when companies have legal and ethical obligations to their shareholders and the public, those obligations cannot be erased by a government contract. Courts will ultimately decide whether Anthropic's refusal to remove safeguards constitutes a permissible exercise of commercial freedom or an unacceptable constraint on military command authority.