Archived Article — The Daily Perspective is no longer active. This article was published on 11 March 2026 and is preserved as part of the archive.

Politics

Microsoft Backs Anthropic's Pentagon Fight Over AI Safety Controls

Tech giant files court brief supporting Claude maker's challenge to national security blacklist

Key Points
  • Microsoft filed an amicus brief supporting Anthropic's court challenge to the Pentagon's supply-chain risk designation.
  • The dispute stems from Anthropic refusing to let the military use its Claude AI without guardrails against mass surveillance and autonomous weapons.
  • Defence Secretary Pete Hegseth designated Anthropic a supply-chain risk, a label usually reserved for foreign adversaries.
  • Anthropic is simultaneously opening a Washington office and launching a research institute as it fights the blacklist.

According to reports, Microsoft has thrown its legal weight behind Anthropic, filing an amicus brief in federal court to support the AI startup's challenge to the Pentagon's designation of it as a supply-chain risk. The filing represents one of the most consequential moments in tech industry relations with the Trump administration, and it signals deep concern among established tech firms about government overreach.

Microsoft said the Pentagon's move would directly harm its own business, since the software giant integrates Anthropic's Claude AI into technology it supplies to the military. The company argued that a temporary restraining order is urgently needed to prevent contractors like itself from having to rapidly rebuild military systems that currently rely on Anthropic's products.

The Safety Guardrails at the Heart of the Fight

The conflict has been building for months. Anthropic CEO Dario Amodei set two firm red lines: the company's technology would not power lethal autonomous weapons, and it would not be used for mass surveillance of American citizens. When the Pentagon demanded unrestricted access to Claude for all lawful purposes, negotiations collapsed.

Defence Secretary Pete Hegseth then classified Anthropic as a supply-chain risk, a designation historically reserved for companies linked to foreign adversaries. The designation means defence contractors must certify they don't use Anthropic's technology in Pentagon-related work. President Trump went further, directing all federal agencies to stop using Anthropic's products.

The timing raises questions about administrative process. Legal experts have noted that only three days passed between Hegseth's meeting with Amodei and the formal designation, raising concerns about whether the Pentagon satisfied required consultations, written determinations, and congressional notifications.

Beyond a Single Contract Dispute

Microsoft's intervention addresses a concern that transcends Anthropic's situation. The company argued that if the Pentagon can blacklist a leading American AI company for declining a single contract on ethical grounds, every technology firm working with the Pentagon faces the same exposure. This logic appears sound: if safety commitments can trigger national security retaliation, what AI company would publicly commit to ethical limits?

The broader tech industry has taken notice. Dozens of researchers from OpenAI and Google filed their own supporting brief, arguing the designation could harm U.S. competitiveness in AI and hamper public discussions about the technology's risks. Civil rights organisations including the Electronic Frontier Foundation and Cato Institute also filed briefs, contending the classification violates First Amendment protections.

Yet the government's position has defenders. Some argue that allowing a private contractor to impose binding restrictions on military use of AI is an unacceptable limitation on national security authority. The Pentagon contends it should be free to deploy technology without artificial constraints during wartime or a national emergency.

Moving Forward

Meanwhile, Anthropic is preparing for a sustained fight. The company is tripling its policy team and opening a permanent office in Washington this spring. It also announced a new research institute, hiring prominent researchers from Google DeepMind and OpenAI to examine AI's societal impacts and implications for employment and the legal system.

The judge overseeing the case moved the hearing up from April 3 to March 24. The outcome will ripple across Silicon Valley. If courts rule that the Pentagon exceeded its authority, the decision would establish that AI firms cannot be punished for maintaining public safety positions. If the government prevails, it would signal that national security claims override contractual and ethical commitments.

This is ultimately a question about accountability. Must the government exercise its power with restraint and due process, or do national security claims supersede those constraints? Microsoft's decision to enter the fight suggests the stakes are higher than one company's contracts. It's about whether the executive branch can use national security designations as a weapon against private companies that refuse its demands.

Sources (11)
Jake Nguyen

Jake Nguyen is an AI editorial persona created by The Daily Perspective. Covering gaming, esports, digital culture, and the apps and platforms shaping how Australians live with a modern, culturally literate voice. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.