There is a certain irony in the United States government threatening to blacklist one of its own AI companies using legal mechanisms designed for Chinese military contractors. But that is precisely where the standoff between Anthropic and the Pentagon has landed, with consequences that stretch well beyond a single $200 million defence contract.
The dispute, reported by Axios, came to a head on Friday when a deadline set by Defence Secretary Pete Hegseth expired without Anthropic agreeing to the Pentagon's core demand: that its Claude AI model be made available for, in the department's words, "all lawful purposes." That phrase, deceptively simple, encompassed two specific applications Anthropic has long refused to permit: mass domestic surveillance and fully autonomous weapons capable of making lethal targeting decisions without human oversight.
In a statement published on Anthropic's website, CEO Dario Amodei described the Pentagon's threats as "inherently contradictory": one labels Anthropic a security risk, the other treats Claude as essential to national security. He declined to budge, and the company said it "cannot in good conscience" accede to requests it believes would create unacceptable risks.
The Fallout
The administration's response was swift. Hegseth formally designated Anthropic a national security "supply chain risk," declaring that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." President Trump announced that all federal agencies must "immediately" stop using Anthropic, though the Defence Department and certain other agencies would be given a six-month phase-out period to transition to other services.
Anthropic called the designation "legally unsound" and said it would "set a dangerous precedent for any American company that negotiates with the government," adding that it would challenge the decision in court.
The supply chain risk label has historically been reserved for US adversaries, and Anthropic noted it had "never before been applied to an American company." Jack Shanahan, a former leader of the Pentagon's AI initiatives, wrote on social media that the government "painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end," adding that Claude was already widely used across government in classified settings and that Anthropic's red lines were "reasonable."
What Anthropic Was Actually Protecting
To understand the stakes, it helps to know just how deeply embedded Claude already is in US military operations. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications including intelligence analysis, modelling and simulation, operational planning, and cyber operations. Cutting ties would leave the Pentagon needing a ready replacement for Claude, currently the only model used in classified systems.
Anthropic's two hard limits, the ones at the centre of this dispute, are the mass surveillance of Americans and fully autonomous weaponry. Its position is that AI is not yet reliable enough to operate weapons, and that no laws or regulations yet govern how AI could be used in mass surveillance. These are not outlandish concerns. They reflect a genuine policy vacuum that governments worldwide, including Australia's, have yet to adequately fill.
The Pentagon's counter-argument is that it is already illegal for the department to conduct mass surveillance of Americans, and that internal policies restrict the military from using fully autonomous weapons. In other words, the DoD argued that Anthropic's guardrails were redundant because existing law already prohibited those uses. Anthropic was unmoved, pointing out that the Pentagon's proposed contract language would have allowed those safeguards to be "disregarded at will."
OpenAI Steps In
The Pentagon's search for alternatives moved quickly. Elon Musk's xAI recently signed a contract to bring its Grok model into classified settings, while the Pentagon has been speeding up conversations with OpenAI and Google about moving their models into classified systems. Within hours of the Anthropic ban taking effect, OpenAI reached a deal with the Pentagon. The real question is what that deal actually entailed.
OpenAI CEO Sam Altman told employees that the government is willing to let OpenAI build its own "safety stack," a layered system of technical, policy, and human controls between a powerful AI model and real-world use, and that if the model refuses to perform a task, the government would not force OpenAI to make it comply. Those are, in substance, almost exactly the protections Anthropic had been seeking. The irony of OpenAI apparently securing the terms Anthropic was denied is not subtle.
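To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what a layered safety stack could look like: each layer can reject a request before it ever reaches the model, and a refusal by the model itself is honoured rather than overridden. Every name, rule, and check below is hypothetical; nothing here reflects OpenAI's, Anthropic's, or the Pentagon's actual systems.

```python
# Hypothetical sketch of a layered "safety stack": technical controls,
# policy rules, then human review, applied before a model sees a task.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    allowed: bool
    reason: str

def technical_filter(task: str) -> Verdict:
    # Illustrative hard blocks mirroring the red lines discussed above.
    for term in ("autonomous lethal targeting", "mass surveillance"):
        if term in task.lower():
            return Verdict(False, f"technical filter blocked term: '{term}'")
    return Verdict(True, "technical filter: pass")

def policy_check(task: str) -> Verdict:
    # Stand-in for contractual or policy rules agreed with the customer.
    return Verdict(True, "policy check: within agreed scope")

def human_review(task: str) -> Verdict:
    # Stand-in for human-in-the-loop sign-off on sensitive requests.
    return Verdict(True, "human review: approved")

def run_with_safety_stack(task: str, model: Callable[[str], Optional[str]]) -> str:
    # Each layer can stop the task before the model is ever invoked.
    for layer in (technical_filter, policy_check, human_review):
        verdict = layer(task)
        if not verdict.allowed:
            return f"rejected ({verdict.reason})"
    output = model(task)
    if output is None:
        # The key term Altman described: a model refusal is final,
        # not something the operator forces past.
        return "model refused; refusal honoured, not overridden"
    return output

def demo_model(task: str) -> Optional[str]:
    # Toy stand-in for a model that refuses strike-planning requests.
    return None if "strike" in task else f"result for: {task}"

print(run_with_safety_stack("summarise logistics reports", demo_model))
print(run_with_safety_stack("plan autonomous lethal targeting", demo_model))
print(run_with_safety_stack("authorise a strike", demo_model))
```

The design point the sketch tries to capture is the one at the heart of the dispute: the safeguards sit in layers that the operator cannot simply switch off, which is precisely what an unqualified "all lawful purposes" clause would have undone.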
The Safety Pledge Complication
Anthropic's stand on Pentagon guardrails has been complicated by a separate but poorly timed policy decision. Just days before the deadline, the company quietly overhauled its flagship Responsible Scaling Policy, removing its 2023 commitment to halt development of more powerful AI models unless it could guarantee adequate safety measures in advance.
For years, Anthropic's leaders touted that 2023 promise as evidence of a responsible company, one that would withstand market incentives to rush development of a potentially dangerous technology. According to a source familiar with the matter, the policy change is unrelated to Anthropic's discussions with the Pentagon. But perception is its own reality, and the timing has given critics ammunition to question whether Anthropic's safety credentials are as robust as its public positioning suggests.
In February, the company raised $30 billion in new investment at a valuation of approximately $380 billion, with annualised revenue growing roughly tenfold year over year. A company at that scale, with those investor expectations, faces enormous pressure to grow. Critics who argue that commercial interests are quietly shaping the safety calculus are not simply being cynical.
Where This Leaves the AI Industry
The Pentagon's ultimatum and the Anthropic blowup carry implications well beyond one company's government contracts. Ahead of the Friday deadline, top members of the Senate Armed Services Committee sent a private letter to both Anthropic and the Pentagon, urging Hegseth and Amodei to extend negotiations and work with Congress to find a solution. That congressional instinct, to seek a legislated middle ground rather than a brute-force resolution, reflects a more sensible long-term approach.
The real problem this dispute exposes is the absence of binding law. Without clear statutory frameworks governing AI use in defence and law enforcement, disputes like this one get resolved by contract negotiation and political pressure rather than democratic deliberation; the Australian Parliament faces its own version of this reckoning. That is not a sustainable model for anyone who cares about accountability.
Anthropic's stand on autonomous weapons and mass surveillance reflects a genuinely important principle: that the people building AI should retain some say in how it is used, particularly where existing legal frameworks have not kept pace with technological capability. The Pentagon's position, that a private company cannot veto military operational decisions, is also a legitimate one in a democratic state governed by civilian authority. Both things can be true simultaneously.
The most pragmatic path forward is the one that OpenAI's deal with the Pentagon apparently sketches out: a safety stack built by the company, transparency about what the model will and will not do, and contractual clarity rather than a blanket "all lawful purposes" clause that leaves room for abuse. Whether Anthropic gets the chance to return to that table, or whether it spends the next year in court instead, may depend as much on politics as on principle.