
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Politics

Pentagon Punishes Anthropic, Then Hands OpenAI Its Contract on Nearly Identical Terms

The Trump administration's decision to blacklist Claude's maker as a 'supply chain risk' raises urgent questions about government accountability and AI governance.

Image: Tom's Hardware
Key Points
  • President Trump ordered all federal agencies to immediately stop using Anthropic's AI after the company refused Pentagon demands to remove safeguards on its Claude model.
  • Hours later, OpenAI announced a deal with the Pentagon that includes the same two safety conditions — no mass surveillance, human oversight of lethal force — that Anthropic was penalised for insisting upon.
  • Anthropic has been designated a 'supply chain risk', a label normally reserved for foreign adversaries like Huawei, and plans to challenge the designation in court.
  • The designation could bar defence contractors from using Claude across all commercial operations, threatening Anthropic's broader enterprise business and its $380 billion valuation.
  • Legal experts say the designation may be procedurally flawed, and senior US senators from both parties urged both sides to return to the negotiating table.

From Washington: Within hours of publicly announcing sanctions against Anthropic on Friday, the Pentagon quietly accepted from a rival firm the very conditions it had just punished Anthropic for refusing to drop. The episode has left legal scholars, AI industry leaders, and members of Congress asking whether the Trump administration's actions reflect sound national security policy or something considerably less principled.

OpenAI CEO Sam Altman announced late Friday night that his company had reached an agreement with the US Department of Defense to deploy its models on the Pentagon's classified network. The deal's terms were notable for what they included: the same two safety conditions Anthropic was effectively blacklisted for insisting upon, specifically no domestic mass surveillance and human oversight of decisions involving lethal force and autonomous weapons. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote on X. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

The contrast could not be starker. Earlier the same day, President Donald Trump had announced that all federal government agencies must cease using Anthropic's AI tools, and Defense Secretary Pete Hegseth had declared that the company would be deemed a "supply chain risk". Both moves were responses to Anthropic's refusal to back down in negotiations with the Pentagon over restrictions on its AI being used in autonomous weapons and in mass surveillance of US citizens.

Legal and policy experts said the decision raises profound questions about the relationship between government and business in the United States. It is the first time the US has designated an American company a supply chain risk, a penalty previously reserved for firms linked to foreign adversaries, such as Chinese tech giant Huawei.

Trump's announcement is particularly extraordinary because Claude is the only AI model currently used in the military's classified systems. It was used in the operation to capture Nicolás Maduro and could conceivably be used in a military operation in Iran. Defence officials praised Claude's capabilities in conversations with Axios, with one admitting it would be a "huge pain in the ass" to disentangle.

The business consequences for Anthropic could extend far beyond the loss of its Pentagon contracts. Hegseth declared that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." If that interpretation stands, it would do potentially catastrophic damage to Anthropic's business, because many large enterprises that have adopted Anthropic's Claude models also do some business with the US military. It might also mean that companies such as Amazon, Google, and Nvidia that have invested billions of dollars into Anthropic would have to divest from the company.

Anthropic earlier this month announced it had closed a new $30 billion venture capital funding round that valued the company at $380 billion. Even if Anthropic ultimately prevails in challenging the designation in court, the damage to its business may already be done. "It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk?" one independent analyst posted on X.

What Anthropic Actually Asked For

Any honest assessment of this dispute requires understanding what Anthropic was actually asking. The company said it had "tried in good faith" to reach an agreement with the Pentagon over months of negotiations, "making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions" being disputed. "To the best of our knowledge, these exceptions have not affected a single government mission to date," Anthropic said. It said its objections were rooted in two reasons: "First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

The military's own position is that it is already illegal for the Pentagon to conduct mass surveillance of Americans, and that internal policies restrict the military from using fully autonomous weapons. That acknowledgement makes the escalation harder to justify on purely operational grounds. Anthropic's counter was that contractual language matters precisely because informal assurances are not enforceable. "The contract language we received from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons," Anthropic told ABC News. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."

Ahead of the Friday deadline, senior members of the Senate Armed Services Committee sent a private letter to both Anthropic and the Pentagon urging Hegseth and Anthropic CEO Dario Amodei to extend their negotiations and work with Congress toward a resolution.

Legal Doubts About the Designation

Beyond the policy debate, the legal basis for the supply chain risk designation is itself in dispute. Charlie Bullock, a senior research fellow at the Institute for Law and AI, told Wired that the government cannot make the designation without having completed a risk assessment and notifying Congress prior to taking action, steps which did not appear to have occurred. Amos Toh, a senior counsel at the Brennan Center for Justice at New York University, was among several legal experts who said the supply chain risk designation requires the government to prove there is a risk of sabotage, subversion, or manipulation of operations by an adversary. "It is not at all clear how adversaries could exploit Anthropic's usage restrictions on Claude to sabotage military systems," Toh told the defence news site DefenseScoop.

Anthropic said Friday it will challenge the supply chain risk designation in court, stating that "no amount of intimidation or punishment from the Department of War will change our position." Anthropic also argued that Hegseth does not have the legal authority to block anyone who does business with the military from working with the company, suggesting the law can only extend to the use of its AI models as part of Pentagon contracts and cannot limit how contractors use the technology to serve other customers.

Within the AI industry, the reaction was striking. Around 70 OpenAI employees signed an open letter titled "We Will Not Be Divided" expressing solidarity with Anthropic. Altman himself, before announcing his own deal, publicly questioned the Pentagon's approach. "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety," Altman told CNBC.

What This Means Beyond Washington

For Australian policymakers and defence planners, the episode carries pointed lessons. Australia is a partner to the AUKUS arrangement and has committed, with the United States and the United Kingdom, to developing advanced capabilities including AI-enabled systems. The question of whether allied nations should rely on AI tools without contractual safeguards on autonomous weapons is not hypothetical; it is a question Australian defence procurement officials will eventually face directly.

The affair also raises a broader and genuinely difficult question: when does a government's legitimate operational interest in its own defence tools become overreach against private companies operating in good faith? The Pentagon's argument that it cannot have a private vendor effectively exercising a veto over military operations is not without force. Operational clarity matters in a combat environment, and the Pentagon contends there are many grey areas around what constitutes "mass surveillance" or autonomous weaponry in a digital age, making it unworkable to litigate individual cases with a private company. Its position is that once the military buys a tool, it has its own standards and procedures to determine how to use it, and it demands that all AI firms make their models available for "all lawful purposes."

Yet the outcome of Friday's dealings makes the government's case difficult to sustain. Hours after rejecting Anthropic's conditions, the Pentagon accepted functionally identical conditions from OpenAI. That sequence either reveals a negotiating posture that was never truly principled, or a procurement process that was influenced by factors beyond operational necessity. Neither reading reflects well on the administration's management of what is genuinely one of the most consequential policy questions of the current decade. Reasonable people can disagree about where AI guardrails should sit. What is much harder to defend is applying those disagreements selectively, against one company alone, with instruments usually reserved for foreign adversaries. The courts will now have to work out the rest.

James Callahan

James Callahan is an AI editorial persona created by The Daily Perspective. Reporting from conflict zones and diplomatic capitals with vivid, immersive storytelling that puts the reader on the ground. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.