
Archived Article — The Daily Perspective is no longer active. This article was published on 28 February 2026 and is preserved as part of the archive.

Politics

Trump Bans Anthropic From Federal Agencies Over Military AI Dispute

A clash between Silicon Valley's safety-first AI culture and the Pentagon's push for unrestricted military use has reached a dramatic breaking point.

Key Points
  • Trump ordered all federal agencies to immediately cease use of Anthropic's AI tools following weeks of conflict over military AI restrictions.
  • The Pentagon sought to remove limits on how AI could be deployed, including for lethal autonomous weapons and mass surveillance.
  • Anthropic, which signed a $200 million deal with the Pentagon last year, objected to the proposed changes on safety grounds.
  • OpenAI's Sam Altman sided with Anthropic, calling mass surveillance and fully autonomous weapons a 'red line' for his company too.
  • Some experts argue the dispute is more about principle than practice, since neither of the Pentagon's contested use cases is currently on the table.

The tension between Silicon Valley's AI safety movement and Washington's military ambitions has reached a breaking point. US President Donald Trump announced Friday that he was directing every federal agency to immediately stop using tools developed by Anthropic, the AI company founded on the principle that artificial intelligence must be built with safety as its foundation. A six-month phase-out period was included in the order, which some observers interpreted as leaving room for further negotiation.

The dispute has been brewing for weeks. The Department of Defense had been pressing Anthropic to drop restrictions written into a contract signed last July, seeking instead to permit what it called "all lawful use" of the company's AI models. Anthropic refused, arguing that such broad language could open the door to using AI to fully control lethal autonomous weapons systems or to conduct mass surveillance on American citizens. The Pentagon, for its part, stated it has no current plans for either application, but top administration officials pushed back firmly against the idea of a private technology company dictating terms to the US military.

Anthropic CEO Dario Amodei testifies before a Senate committee. His company's refusal to remove AI safety restrictions triggered a confrontation with the Pentagon that has now escalated to a presidential order.

Anthropic was the first major AI laboratory to enter into a formal arrangement with the US military, through a $200 million deal with the Pentagon signed last year. It developed a suite of custom models known as Claude Gov, which carry fewer restrictions than its commercially available products. These are currently deployed through platforms provided by Palantir and Amazon's classified cloud infrastructure, and are used for tasks ranging from report writing and document summarisation to intelligence analysis and military planning, according to a source familiar with the matter who spoke to WIRED on condition of anonymity.

The immediate trigger for the public confrontation was a report by Axios that US military leaders had used Claude to assist in planning an operation to capture Venezuelan president Nicolás Maduro. Following the operation, concerns about how Anthropic's models had been used were relayed from a company staffer to military leaders via a Palantir employee. Anthropic has denied raising any direct concerns or interfering with Pentagon operations. Defence Secretary Pete Hegseth met with Anthropic CEO Dario Amodei earlier this week and gave the company until Friday to commit to revised contract terms. When no such commitment came, Trump's announcement followed.

Amodei has not been silent on the underlying issues. In January, he published a detailed essay on the risks of powerful AI, addressing the specific dangers posed by autonomous weapons. "These weapons also have legitimate uses in the defence of democracy," he wrote. "But they are a dangerous weapon to wield." That framing captures the genuine tension at the heart of this dispute: Anthropic is not opposed to supporting defence work, but it has drawn lines around specific capabilities it believes are not yet safe to deploy autonomously.

Anthropic is not alone in drawing those lines. Several hundred workers from OpenAI and Google signed an open letter this week supporting Anthropic's position and criticising their own employers' decisions to remove similar restrictions. OpenAI CEO Sam Altman separately told staff that his company also considers mass surveillance and fully autonomous lethal weapons to be "red lines," and indicated OpenAI would seek a deal with the Pentagon that preserved those limits, according to The Wall Street Journal.

From a national security perspective, the Pentagon's position is not without logic. Civilian contractors setting limits on military capability is an unusual arrangement, and the argument that elected governments, not technology companies, should determine the boundaries of lawful military use carries real democratic weight. At the same time, there is something genuinely important about AI developers insisting that safety constraints accompany their products into high-stakes environments, particularly when those products are being integrated into intelligence and planning functions.

Some analysts argue, however, that both sides have overplayed the stakes. Michael Horowitz, a former Deputy Assistant Secretary for emerging technologies at the Pentagon and an expert on military AI, told WIRED that the conflict was "such an unnecessary dispute." He noted that the contested use cases (fully autonomous weapons and mass surveillance) are not currently under active consideration, and that Anthropic has in fact supported all of the ways the Pentagon has actually proposed using its technology. "My sense is that the Pentagon and Anthropic agree at present about the use cases where the technology is not ready for prime time," Horowitz said.

The broader pattern here is worth watching. In recent years, major technology firms have moved from keeping defence work at arm's length to embracing it openly, and in some cases becoming substantial military contractors. That shift has been rapid, and the norms governing it remain unsettled. The fight between Anthropic and the Pentagon is now testing where those norms will land. Google, OpenAI, and Elon Musk's xAI all signed comparable deals around the same time as Anthropic, but Anthropic remains the only AI company currently operating within classified systems, making its position in any such dispute uniquely consequential.

The six-month phase-out period built into Trump's order may ultimately prove to be the most significant detail. It creates a window for negotiation rather than a clean break, and both sides have signalled, through intermediaries at least, that they value the partnership. Whether that window produces a workable framework, one that satisfies the Pentagon's desire for operational flexibility while preserving meaningful safety guardrails, will say a great deal about how governments and AI companies learn to share responsibility for technologies that neither fully controls.

Aisha Khoury

Aisha Khoury is an AI editorial persona created by The Daily Perspective, covering AUKUS, Pacific security, intelligence matters, and Australia's evolving strategic posture. Articles published under this persona are generated using artificial intelligence with editorial quality controls.