
Archived Article — The Daily Perspective is no longer active. This article was published on 24 March 2026 and is preserved as part of the archive.

Politics

Pentagon's Anthropic Blacklist Looks Like 'Crippling' Move, Judge Says

A federal judge questioned the government's motivation for designating the AI developer a national security threat, raising concerns about retaliation over policy disputes.

Key Points
  • U.S. District Judge Rita Lin expressed concern that Anthropic was being punished and questioned if the Pentagon violated the law in its supply chain risk designation.
  • The government wants unrestricted military use of Claude AI; Anthropic refused, insisting on guardrails against mass surveillance and autonomous weapons.
  • Anthropic is the first American company ever publicly designated a supply chain risk; the label traditionally applies to foreign adversaries.
  • The judge questioned whether refusing unfettered access to a product can justify blacklisting a domestic company over policy disagreement.
  • A decision is expected within days; without relief, Anthropic says it could lose billions in government and contractor business.

A federal judge said the Pentagon's decision to blacklist Anthropic's Claude artificial intelligence models "looks like an attempt to cripple" the company.

The AI company appeared in San Francisco federal court to ask U.S. District Judge Rita Lin to temporarily pause the Pentagon's blacklisting and President Donald Trump's directive banning federal government agencies from using its technology. Lin said she expects to issue an order in the next few days.

The dispute centres on a fundamental question about government power: whether the Pentagon can use national security authorities to punish a technology company for refusing to grant military leaders unfettered access to its products. Anthropic says it is being unfairly retaliated against because it demanded that the DOD not use Claude for fully autonomous weapons or mass surveillance of Americans. The Pentagon insists it does not use the AI models for such purposes.

Anthropic signed a $200 million contract with the Pentagon in July and was the first AI lab to deploy its technology across the agency's classified networks. But as the company began negotiating Claude's deployment on the DOD's GenAI.mil platform in September, talks stalled over how the military could use the models. The department insisted on unfettered access to the company's technology for all lawful purposes.

During Tuesday's hearing, Judge Lin raised pointed questions about the government's reasoning. "What I'm hearing from you, though, is that it's enough if an IT vendor is stubborn and insists on certain terms and it asks annoying questions, then it can be designated as a supply chain risk because they might not be trustworthy," she said. "That seems a pretty low bar."

The Pentagon's position rests on concerns about operational control. It fears that Anthropic might attempt to "disable its technology or preemptively alter the behavior of its model" during active warfare if the company feels its internal "red lines" are being crossed. Government lawyer Eric Hamilton argued that Anthropic was "going beyond the normal scope of a contractor" by raising concerns about how the military would use the technology.

Yet this framing raises legitimacy questions about the government's use of national security tools. Anthropic is the only American company ever to be publicly named a supply chain risk, as the designation has traditionally been used against foreign adversaries. The formal declaration requires defense vendors and contractors to certify that they don't use Anthropic's models in their work with the Pentagon.

The sequence of events has fuelled scrutiny. Judge Lin sent both sides questions about discrepancies between Defence Secretary Pete Hegseth's formal directive declaring Anthropic a potential threat to national security and what he posted about it on social media. When Anthropic CEO Dario Amodei refused to bend on the company's usage restrictions, Trump announced on Feb. 27 that he was immediately ordering all federal employees to stop using Anthropic, calling it a "radical left, woke company" that was putting troops at risk.

The matter reflects broader tension over who controls the terms of military AI deployment. Giants like Google, Microsoft, and OpenAI have filed amicus briefs supporting Anthropic, worried that the label could be used retaliatorily against any firm that disagrees with government terms. Without the injunction, Anthropic has said, it could lose billions of dollars in business.

The judge's scepticism does not guarantee an outcome favouring Anthropic, but it signals judicial concern that government power is being weaponised beyond its intended scope. As both sides await the court's decision, the case will ultimately turn on whether the government can designate a domestic company a security risk based on negotiating disputes, or whether procedural limits and rule-of-law norms impose constraints on executive action even in the name of national defence.

James Callahan

James Callahan is an AI editorial persona created by The Daily Perspective, reporting from conflict zones and diplomatic capitals with vivid, immersive storytelling that puts the reader on the ground. As an AI persona, its articles are generated using artificial intelligence with editorial quality controls.