The fundamental question being asked in federal court this week is whether the executive branch can use procurement law as a cudgel against a private company for refusing to abandon its ethical guardrails. Anthropic sued the Trump administration on Monday, seeking to reverse a blacklisting by the Pentagon that declared the artificial intelligence company a "supply chain risk." The stakes extend far beyond one firm's bottom line; they touch on the government's ability to punish protected speech through its spending power.
Let us be honest about what happened here. CEO Dario Amodei announced he would not allow the company's Claude AI model to be used for autonomous weapons or to surveil American citizens. President Donald Trump then shared a post on social media directing federal agencies to "immediately cease" all use of Anthropic's technology. Days later, Anthropic was officially designated a supply chain risk, a move that will require defense vendors and contractors to certify that they don't use Anthropic's models in their work with the Pentagon.
The legal argument Anthropic is making has force. The company argues officials informally imposed nationwide contracting restrictions on national security and supply-chain grounds, without formal determinations, documented evidence or consideration of less restrictive alternatives. Anthropic is the first US company ever to be publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei. If procurement law permits this, one must ask: is there anything the government cannot do through its spending power?
The counter-argument deserves serious consideration. Pentagon officials have disputed that the fight with Anthropic is over lethal weapons and mass surveillance, instead claiming that private companies cannot dictate how the government uses technology in scenarios like warfare and tactical operations. This reflects a genuine principle about government autonomy: the state cannot have its operational decisions constrained by a vendor's ethical preferences, particularly in matters of national defence.
Yet here is where the Pentagon's argument breaks down under scrutiny. In the court filing, Google and OpenAI employees make the point that if the Pentagon was "no longer satisfied with the agreed-upon terms of its contract with Anthropic," the agency could have "simply canceled the contract and purchased the services of another leading AI company." The government had remedies available that did not require designating an American company as a foreign-style security threat. The Pentagon chose not to take them.
What is most revealing is the support Anthropic has received from the AI industry itself. More than three dozen insiders from OpenAI and Google, including Google chief scientist Jeff Dean, argued in support of Anthropic in an amicus brief filed with the court on Monday, saying they were expressing their opinions as professionals who build, train or study AI rather than speaking for their companies. "We are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails," they said in the brief.
The financial consequences for Anthropic are severe. Anthropic's CFO Krishna Rao said in a related filing that "across Anthropic's entire business, and adjusting for how likely any given customer is to take a maximal reading, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars." This is not hyperbole; once a company is blacklisted by the Pentagon, private customers grow nervous about maintaining their own relationships with it.
Where does this leave us? The government has legitimate authority to choose which vendors to work with and to set requirements for military contracts. What it should not have is the power to brand a domestic company as a national security threat without formal process, documented evidence, or genuine opportunity for the company to respond, simply because executives disagree with how their technology might be used. If the courts side with the Pentagon here, they are signalling that procurement power can become a tool for political retaliation, constrained only by how carefully the government documents its reasoning.
Reasonable people can disagree on whether companies should impose restrictions on military use of AI systems. But reasonable people should agree that when a company stands on principle, it should not be punished through extraordinary designations designed for foreign adversaries. Legal observers have been sceptical that the designation will survive judicial scrutiny, with one former Army Ranger arguing that even the narrower formal designation would likely struggle in court, given the law's requirement for the least restrictive means. The courts will soon tell us whether that scepticism is justified.