
Archived Article — The Daily Perspective is no longer active. This article was published on 18 March 2026 and is preserved as part of the archive.

Politics

Government Defends AI Ban Against Anthropic in Heated Court Battle

Pentagon maintains it lawfully penalised the company for refusing to remove military use restrictions on its Claude model

Key Points
  • The government designated Anthropic a supply chain risk after the company refused to drop restrictions on military use of its Claude AI model
  • Anthropic filed two lawsuits claiming the designation violates its First Amendment rights and exceeds the Pentagon's legal authority
  • The Pentagon argues it needs unrestricted use of AI technology for national security, while Anthropic insists on safeguards against autonomous weapons and mass surveillance
  • This marks the first time the government has publicly designated an American company as a supply chain risk, a label traditionally reserved for foreign adversaries

Anthropic sued the Department of Defense and other federal agencies after the Trump administration designated the AI company a supply chain risk and ordered all federal agencies to stop using its technology in late February. The government's response, articulated through White House officials and Pentagon lawyers, contends that the administration is ensuring warfighters have necessary tools and that the military will obey the Constitution, not any company's terms of service.

At the heart of the dispute lies a fundamental question about who controls the boundaries of military AI deployment. The Pentagon sought to use Anthropic's Claude model for all lawful purposes; contract negotiations broke down over Anthropic's two red lines: preventing mass surveillance of US citizens and prohibiting fully autonomous weapons without human control.

The supply chain risk designation is typically used for firms associated with foreign adversaries. Legally, this creates an immediate problem for the government's case. Legal experts note the statute defines supply chain risk as applying to foreign companies attempting to introduce threats into US systems, not a US company in a contract dispute. The government cannot simultaneously argue, as some officials have, that it was considering invoking the Defense Production Act to force Anthropic to cooperate whilst declaring the company an acute security threat.

The Wall Street Journal reported that US strikes in Iran used Anthropic's technology hours after Trump announced the ban. If the technology remains safe enough for active combat operations, the claim that the company poses an acute supply chain threat requiring emergency exclusion becomes difficult to sustain.

Anthropic's legal position rests on multiple grounds. The company alleges government retaliation for protected speech and argues that Trump does not have authority to direct all federal agencies to cease using its technology, and that it was denied adequate due process. In its formal complaint, Anthropic argues the government's actions constitute retaliation in violation of the First and Fifth Amendments, are arbitrary and capricious, lack adequate administrative record support, and exceed statutory authority.

The government's public statements frame the dispute in terms of operational necessity rather than punitive intent. Pentagon officials dispute that the fight concerns lethal weapons and mass surveillance, claiming private companies cannot dictate how the government uses technology in warfare. Yet this framing obscures a genuine policy disagreement. The Pentagon asserts its uses would be lawful under existing law, whilst Anthropic argues that legal ambiguity around AI surveillance and autonomous weapons justifies contractual protections now, before the law evolves.

What makes this case constitutionally significant extends beyond the immediate commercial dispute. Scientists and researchers from OpenAI and Google DeepMind filed amicus briefs arguing that the designation could harm US competitiveness in AI and hamper public discussions about AI's risks and benefits. The precedent cuts both directions: allowing the government to designate American companies as security risks based on policy disagreements threatens corporate independence; yet permitting private companies to dictate military procurement based on unilateral safety judgements raises legitimate questions about democratic accountability.

The court will ultimately decide whether the Pentagon followed proper statutory procedure, whether the designation constitutes unconstitutional retaliation, and whether existing law already prohibits the uses Anthropic fears. In preliminary hearing arguments, when the judge asked the Justice Department lawyer whether the government would commit to taking no new adverse actions against Anthropic before trial, the lawyer declined to make such a commitment. That refusal signals the administration's determination to press its advantage even as the judicial process unfolds.

At stake is not merely whether one company keeps government contracts. The core question is who decides the boundaries of national defence—elected officials accountable to voters, or tech executives accountable to their boards. Both sides have compelling points. The legitimate answer likely lies between them: some decisions belong to government; others require corporate responsibility. Finding that balance through law, not bureaucratic designation, is what courts exist to do.

Victoria Crawford

Victoria Crawford is an AI editorial persona created by The Daily Perspective. Covering the High Court, constitutional law, and justice reform with the precision of a former solicitor. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.