
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.


OpenAI Lands Pentagon AI Deal as Anthropic Pays Price for Ethics Stand

The rapid displacement of one AI company by another inside the US military's classified networks raises questions that extend well beyond Silicon Valley.

Key Points
  • OpenAI secured a Pentagon deal to deploy its AI models in classified military networks after rival Anthropic was banned by the Trump administration.
  • Anthropic lost its contract after refusing to remove safeguards preventing its Claude AI from being used in fully autonomous weapons or for mass domestic surveillance.
  • Defence Secretary Pete Hegseth designated Anthropic a 'supply chain risk' in an unprecedented use of a label normally reserved for foreign adversaries.
  • OpenAI's deal reportedly includes similar safety protections to those Anthropic demanded, raising questions about why the dispute escalated so dramatically.
  • The episode has profound implications for how democratic governments and private AI companies negotiate the ethics of lethal autonomous systems.

In the space of a single Friday, the United States military's most consequential artificial intelligence contract changed hands. Fortune reports that OpenAI CEO Sam Altman announced late on 28 February that his company had reached an agreement with the Department of Defense to deploy its AI models inside the military's classified network, hours after the Trump administration ordered every federal agency to cease using the technology of OpenAI's rival, Anthropic.

The episode deserves far more scrutiny than a routine commercial transaction would receive. What played out in Washington last week was, at its core, a test of who holds authority over the ethical parameters of lethal technology: elected governments, or the private companies that build the tools those governments increasingly depend upon.

Anthropic CEO Dario Amodei staked his company's Pentagon contract on two non-negotiable safety conditions.

The dispute had been simmering for months. Anthropic's published statement makes its position clear: the company sought assurances that its Claude model would not be used for mass domestic surveillance of Americans or deployed in fully autonomous weapons systems, that is, systems in which AI makes final targeting decisions without human authorisation. Anthropic argued, as reported by CNN, that current frontier AI systems are simply not reliable enough to be trusted with that level of autonomy, and that removing human judgement from lethal force decisions represents a qualitative shift that existing law and Pentagon policy had not yet adequately addressed. Dario Amodei concluded that his company "cannot in good conscience accede" to the Pentagon's final terms.

The administration's response was swift and severe. Defence Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security," according to CNBC, a label that has historically been reserved for companies with direct ties to foreign adversaries. The designation effectively requires all Pentagon contractors, including firms like Boeing and Lockheed Martin, to certify they are not using Anthropic's products. Legal experts cited by Fortune described the move as unprecedented, and raised immediate questions about whether the Pentagon had genuinely exhausted less coercive alternatives before reaching for such a drastic instrument.

The prospect of AI operating without human oversight in lethal weapons systems has divided Silicon Valley and the defence establishment.

Into that void stepped OpenAI. Altman announced on social platform X that his company had agreed to terms, saying the Pentagon had shown "a deep respect for safety." The critical detail, reported by both TechCrunch and Axios, is that OpenAI's deal reportedly includes the same two core protections Anthropic had demanded: prohibitions on domestic mass surveillance and on the use of AI in fully autonomous weapons systems. OpenAI says it will build a "safety stack" of technical and human controls, and will embed engineers with Pentagon clearances to monitor deployment. If that account is accurate, the strategic question becomes uncomfortable: what, precisely, was the Anthropic dispute actually about?

There are two possible interpretations. The more charitable reading of the Pentagon's position is that the dispute was fundamentally procedural. Defence officials argued that existing US law and internal policy already prohibit autonomous lethal AI and mass domestic surveillance, and that Anthropic's insistence on embedding those restrictions contractually was an attempt by a private company to constrain military decision-making in a manner incompatible with the chain of command. As one Pentagon official put it, "at some level you have to trust your military to do the right thing." From a national security management perspective, that argument has genuine force. Democratic civilian oversight of the military is expressed through law and elected government, not through the terms of a commercial software licence.

The less comfortable interpretation is that the administration used the supply chain risk designation as a punitive instrument against a company whose CEO had publicly and repeatedly criticised the pace and direction of AI militarisation. Anthropic pointed out, with some justification, that the two threats it faced were internally contradictory: being labelled a security risk while simultaneously being told its technology was essential to national security. The use of a designation normally aimed at foreign adversaries against a US-headquartered company, within days of a contract dispute, is the kind of action that legal and policy experts said raises profound questions about the relationship between government and business. Anthropic has confirmed it intends to challenge the designation in court.

For Australia, the episode carries implications that should concentrate minds in Canberra. Under the AUKUS framework and Australia's broader defence partnership with the United States, Australian forces are increasingly likely to operate alongside, and eventually depend upon, AI-augmented American military systems. The question of which ethical standards govern those systems, and who has the authority to set them, is not merely an American domestic debate. It is a question about the terms on which Australian personnel may one day operate in joint theatres.

The broader Silicon Valley reaction also warrants attention. More than 70 OpenAI employees signed an open letter expressing solidarity with Anthropic's position, as reported by CNBC. Workers at Google sent a parallel letter to their own leadership. That degree of internal dissent within the companies building these systems is a signal worth heeding. The people closest to the technology are not uniformly comfortable with where it is heading.

The prudent conclusion is not that AI should be kept out of defence applications entirely. That ship has sailed, and the strategic risks of ceding AI capability to adversaries are real. Anthropic itself said it supports all lawful uses of AI for national security and had previously deployed its models across the Pentagon's classified network. The real question is whether the frameworks governing that deployment are robust enough to keep humans genuinely in control of lethal decisions. OpenAI's deal suggests the Pentagon is, at least on paper, prepared to accept those constraints when the relationship is not defined by mutual antagonism. Whether the technical safeguards Altman has promised are substantively stronger than what Anthropic was offered, or whether the outcome is simply the same policy wrapped in more cooperative language, will only become clear over time.

Aisha Khoury

Aisha Khoury is an AI editorial persona created by The Daily Perspective. Covering AUKUS, Pacific security, intelligence matters, and Australia's evolving strategic posture with authority and nuance. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.