
Archived Article — The Daily Perspective is no longer active. This article was published on 9 March 2026 and is preserved as part of the archive.

Politics

US Court Challenge to Pentagon Ban Could Reshape Tech-Defence Relations

Anthropic sues over supply chain designation, arguing government overreach and retaliation threaten due process

Image: Wired
Key Points
  • Anthropic sued the Department of Defense over a supply chain risk designation unprecedented for an American company, claiming retaliation and overreach
  • The dispute stems from Anthropic's refusal to allow unrestricted Pentagon use of Claude for mass surveillance or autonomous weapons
  • The company faces potential loss of hundreds of millions in revenue and has questioned whether government can impose such sanctions over contract disagreements
  • Legal experts suggest the government's actions may lack proper statutory authority and required procedural safeguards

Anthropic sued the Department of Defense and other federal agencies on Monday over the Trump administration's decision to label the AI company a "supply chain risk." The lawsuit was filed in the U.S. District Court for the Northern District of California.

What began as a contract dispute has escalated into a rare legal challenge to executive power. The Pentagon issued the supply chain risk designation after negotiations to update its contract with Anthropic broke down over two red lines the company insisted on: that its AI tool not be used for mass surveillance of US citizens, and that it not be used for autonomous weapons. The Pentagon, however, wants to use Anthropic's AI for "all lawful purposes," saying it could not allow a private company to dictate how the military uses its tools in a national security emergency.

The company's central argument cuts to the heart of government accountability. Anthropic argues that Trump does not have the authority to direct federal agencies to cease using Anthropic's technology, and that the company was not granted adequate due process. The company is seeking injunctive relief, alleging that "current and future contracts with private parties are also in doubt" and that "hundreds of millions of dollars" are in jeopardy because of the Trump administration's actions. "On top of those immediate economic harms, Anthropic's reputation and core First Amendment freedoms are under attack," the filing reads.

The supply chain risk designation itself is extraordinary. Anthropic was officially designated a supply chain risk in an unprecedented move; the label has historically been reserved for foreign adversaries. This distinction matters legally and practically. A government-wide ban requires separate legal authority. The Federal Acquisition Supply Chain Security Act (FASCSA) could provide it, but only if the secretary of homeland security and the director of national intelligence both issue their own exclusion orders alongside the secretary of defense, and only after the interagency process, 30-day notice to Anthropic, and an opportunity to respond. None of that appears to have happened. As it stands, Trump's government-wide directive has no apparent statutory basis. Other agencies that comply with it would be acting on a presidential social media post, not a statutorily supported order, and any contract terminations they undertake on that basis would be independently challengeable.

From a fiscal standpoint, the consequences extend well beyond Anthropic itself. The supply chain risk designation means any company that works with the US military must prove that its Pentagon work touches nothing related to Anthropic. Much of Anthropic's success stems from its enterprise contracts with big companies, many of which may have contracts with the Pentagon. Defence contractors face a difficult choice: abandon Claude entirely, even for non-Pentagon work, or risk future government business.

The government's timing raises credibility questions. The Trump administration on February 27 ordered federal agencies and military contractors to halt business with Anthropic after the company refused to let the Pentagon use its technology without restrictions. That same day, Defense Secretary Pete Hegseth said Anthropic would be labeled a supply chain risk and added that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Later that night, rival frontier AI lab OpenAI announced that it had signed a contract with the DOD to deploy its AI models on the department's classified networks, where Anthropic's models were reportedly the only frontier AI models available. The timing, with OpenAI signing a deal with the DOD immediately after the administration's extrajudicial actions against a competitor, generated significant attention and skepticism about whether the deal actually contains the protections claimed.

The Pentagon's argument has merit on its own terms. "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," a senior Pentagon official told CBS News. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." Defence officials contend that AI safety restrictions, even well-intentioned ones, could undermine operational necessity and create unacceptable vulnerabilities.

Yet an internal contradiction weakens the government's position. Hegseth has declared it safe to leave Anthropic integrated into military networks for another six months to allow "a seamless transition." The Wall Street Journal reported that U.S. strikes in Iran used Anthropic's technology hours after Trump announced the ban. The government cannot simultaneously claim that a vendor poses an acute supply chain threat requiring emergency exclusion and that it is perfectly safe to keep using that vendor for half a year, or, apparently, for active combat operations.

This dispute will likely reshape how technology companies negotiate with government. The frontier AI company is doing what few other companies have done since Trump's second term began: directly and publicly challenging the administration. Anthropic's profile has only risen amid the conflict. Its Claude AI app surpassed OpenAI's ChatGPT in the iPhone's App Store for the first time the day after the Pentagon said it would terminate its contract with Anthropic. The company also said on March 5 that more than a million people are signing up for Claude every day.

The court must now weigh competing concerns: whether the government has statutory authority to escalate a contract dispute into a supply chain sanction, whether proper procedural safeguards were followed, and whether the government's extraordinary remedy serves genuine national security interests or punishes dissent. The outcome will echo far beyond Anthropic.

Oliver Pemberton

Oliver Pemberton is an AI editorial persona created by The Daily Perspective, covering European politics, the UK economy, and transatlantic affairs with the dual perspective of an Australian abroad. Articles under this persona are generated using artificial intelligence with editorial quality controls.