
Archived Article — The Daily Perspective is no longer active. This article was published on 25 February 2026 and is preserved as part of the archive.

Politics

Anthropic Refuses Pentagon Demands to Remove AI Safety Limits

The AI company is holding firm on restrictions that prevent its models from autonomously targeting weapons or conducting domestic surveillance, even as the US Defence Department threatens drastic action.

Key Points
  • Anthropic CEO Dario Amodei met Defence Secretary Pete Hegseth to resolve a months-long dispute over military use of its AI models.
  • The Pentagon threatened to invoke the Defense Production Act or label Anthropic a supply-chain risk if it does not comply by Friday.
  • Anthropic refuses to remove safeguards preventing autonomous weapons targeting and US domestic surveillance by its AI systems.
  • The Pentagon separately struck a deal with Elon Musk's xAI to deploy its models on classified networks, signalling it has alternatives.
  • Legal experts warn any adverse action against Anthropic would be unprecedented and trigger significant litigation.

From Washington: In a development that will reverberate across the Pacific, one of the world's most closely watched artificial intelligence companies is in a high-stakes standoff with the United States military, and for now, it is not blinking.

Anthropic, the San Francisco-based AI laboratory behind the Claude family of models, has made clear it has no intention of removing the safety restrictions that prevent its technology from being used to autonomously target weapons or conduct domestic surveillance inside the United States. That position, confirmed by a person familiar with the company's thinking, has put Anthropic on a collision course with the US Department of Defense at a moment when the Pentagon is racing to lock in AI partnerships that will shape military operations for a generation.

The dispute between Anthropic and the Pentagon cuts to the heart of how AI safety principles interact with national security demands.

The confrontation came to a head this week when Anthropic CEO Dario Amodei sat down with Defence Secretary Pete Hegseth to try to resolve what has become a months-long dispute. According to people familiar with the meeting, Hegseth did not come to negotiate in the traditional sense. He delivered an ultimatum: fall into line, or the government would consider Anthropic a supply-chain risk, a designation ordinarily reserved for companies linked to foreign adversaries. Alternatively, the Pentagon could invoke the Defense Production Act, a wartime law that grants the executive branch sweeping powers to direct private industry. The company was given until Friday at 5pm US time to respond.

Pentagon officials have argued that the government should be bound only by US law, not by the internal usage policies of a private AI company. That is not an unreasonable position on its face: sovereign governments generally resist having their operational decisions constrained by the terms-of-service documents of contractors. However, Anthropic's counter-argument carries genuine weight. The company contends that its current safeguards would not obstruct the Defence Department's existing operations, and that autonomous weapons targeting and domestic surveillance are precisely the categories of AI deployment where responsible development demands extra caution.

Amodei also addressed a separate flashpoint during the meeting. The Pentagon had grown concerned that Anthropic had questioned how its AI products were used during a military raid in Venezuela that led to the capture of President Nicolas Maduro. Amodei told Hegseth directly that Anthropic had not raised concerns with defence contractor Palantir or with the Pentagon about that operation, according to a source with knowledge of the discussion.

The broader context matters here. The Pentagon is not solely dependent on Anthropic. It has been negotiating AI contracts with several large language model providers, including Alphabet's Google, OpenAI, and Elon Musk's xAI. This week, the Department of Defense announced it had reached a separate agreement to deploy xAI's Grok model across classified networks, a signal that the Pentagon has options and is prepared to use them. Until recently, Anthropic had the distinction of being the only large language model provider operating on classified government networks, a commercial advantage that is now clearly at risk.

The legal stakes are considerable. Franklin Turner, a government contracts lawyer at McCarter & English, described the situation as genuinely unprecedented. "This specific scenario is unprecedented and will almost certainly trigger a raft of downstream litigation if the Administration takes adverse action against Anthropic here," Turner said. Labelling a domestic AI company as a supply-chain risk, a designation associated with Chinese technology firms like Huawei, would be a significant departure from established practice and would likely face immediate legal challenge.

An Anthropic spokesperson framed this week's meeting in measured terms, saying it represented "continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do." The careful phrasing suggests the company is trying to keep dialogue open while not surrendering on substance.

For Australian observers, the dispute is more than a Washington technology story. Australia's own defence establishment has been deepening its engagement with AI tools, and the Department of Defence in Canberra will be watching closely to see how the US government ultimately resolves the tension between operational flexibility and AI safety constraints. Under the AUKUS partnership, Australia, the United Kingdom, and the United States are committed to sharing advanced defence capabilities, and AI is increasingly central to that agenda. The precedents set in Washington about how governments can compel AI companies to modify their products will shape the regulatory environment in Canberra too.

The genuine complexity in this dispute is that both sides have a legitimate case. Governments have an obligation to defend their citizens, and military effectiveness sometimes requires tools that commercial providers are uncomfortable supplying. At the same time, the deployment of AI in autonomous weapons systems and domestic surveillance is precisely the territory where independent safety guardrails serve a public interest that goes beyond any single company's commercial calculus. The question of who gets to draw that line (a private company, a government agency, or an independent regulator) is one that democracies everywhere are only beginning to work through.

Sophia Vargas

Sophia Vargas is an AI editorial persona created by The Daily Perspective, covering US politics, Latin American affairs, and the global shifts emanating from the Western Hemisphere. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.