Anthropic sued the Department of Defense and other federal agencies on Monday over the Trump administration's decision to label the AI company a "supply chain risk." The dispute centers on a fundamental question of government power: should executive officials be able to blacklist an American company that refuses to comply with military demands, or does the law constrain such authority?
The stakes are unusually high and the precedent concerning. This is the first time the federal government is known to have used the designation against a U.S. company; it has historically been reserved for foreign adversaries such as Huawei and carries severe commercial consequences.
The Pentagon issued the supply chain risk designation after negotiations to update its contract with Anthropic broke down over two red lines the company insisted on: that its AI tools would not be used for mass surveillance of U.S. citizens, and that they would not be used for autonomous weapons. The Pentagon, however, wants to use Anthropic's AI for "all lawful purposes," saying it cannot allow a private company to dictate how its tools are used in a national security emergency.
Yet the most striking element of this dispute is not the conflict itself but the industry response. More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic's lawsuit against the U.S. Defense Department, according to court filings. "The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean.
This is remarkable. OpenAI and Google are Anthropic's direct competitors, yet their researchers are defending the company in court. "If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the brief reads. "And it will chill open deliberation in our field about the risks and benefits of today's AI systems."
The timing is telling. The Trump administration on February 27 ordered federal agencies and military contractors to halt business with Anthropic after the company refused to let the Pentagon use its technology without restrictions. OpenAI struck a deal with the Pentagon just hours after the Trump administration's order. For rival engineers to intervene suggests they fear the precedent more than they celebrate Anthropic's isolation.
Anthropic CEO Dario Amodei said the formal designation letter the company received indicates its customers will be restricted from using Claude only in work directly related to their Pentagon contracts. The company contends it is being punished for negotiating honestly with the government rather than capitulating. It says its two lawsuits are not meant to force the government to work with Anthropic, but to prevent officials from blacklisting companies over policy disagreements.
The government's position is that military commanders cannot allow civilian contractors to dictate operational doctrine. The company's position is that congressional procurement law does not authorize weaponizing trade restrictions against domestic firms that maintain safety standards. Courts will now decide which constraint on executive power proves stronger: the military's strategic flexibility or statutory limits on designation authority.
The complaint says Anthropic is being harmed "irreparably" and that the government's actions could jeopardize "hundreds of millions of dollars" in revenue. The outcome will shape not just Anthropic's future but whether technology companies can negotiate with the government at all.