From Washington: In a development that will reverberate across the Pacific, OpenAI CEO Sam Altman spent the weekend walking back a hastily struck military contract, admitting publicly that his company's deal with the US Department of War was rushed and poorly communicated. The episode has exposed fault lines in how Washington and the technology industry are negotiating the rules of AI-powered warfare.
Altman announced on Friday that OpenAI had reached an agreement with the Department of War to deploy its AI models on classified military networks. The timing was striking: the deal came just hours after President Trump ordered the government to stop using Anthropic's services, following a breakdown in talks over whether the company's AI could be used for mass surveillance or for fully autonomous weapons capable of killing without human control.
Secretary of War Pete Hegseth directed the department to designate Anthropic a "supply-chain risk to National Security", adding that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Anthropic called the designation an "unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company."
The optics were difficult to ignore. Altman had previously said publicly that he supported Anthropic's red lines around mass surveillance and autonomous weapons, yet OpenAI struck a deal within hours of its rival being punished for holding precisely that position. OpenAI claimed its new contract included the same two restrictions Anthropic had been fighting for, while simultaneously agreeing to the "any lawful use" standard Anthropic had rejected.
By Monday, the damage-control operation was in full swing. In an internal memo shared on social media, Altman said the company "shouldn't have rushed" to get the agreement out on Friday, adding: "The issues are super complex, and demand clear communication." He went on: "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."
The amended contract language clarifies that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals", with the Pentagon affirming that this prohibition extends to the procurement or use of commercially acquired personal data. Altman also said the department had given assurances that OpenAI's tools would not be used by intelligence agencies such as the NSA.
The internal dissent at OpenAI was significant. Many of the company's own employees signed an open letter supporting Anthropic following the standoff. Aidan McLaughlin, a research scientist at OpenAI, posted on X that he personally did not think "this deal was worth it", a post that drew nearly 500,000 views. Consumers reacted too: US downloads of Anthropic's Claude app rose 37 per cent on Friday, the day Anthropic was labelled a supply-chain risk after rejecting the Pentagon's terms, and a further 51 per cent on Saturday.
Critics who are not sympathetic to either company's commercial interests have raised a more pointed concern. The Pentagon's core objection to Anthropic was that a private company should not be able to constrain the military's use of AI technology; yet the OpenAI arrangement appears to give the company significant operational control over how the technology functions in practice, through infrastructure, personnel, and classifiers that OpenAI can update unilaterally. The question of who ultimately controls the safety guardrails, the elected government or a private Silicon Valley firm, remains unresolved.
The broader strategic context is one Australian policymakers and defence planners should watch closely. A recent analysis published by War on the Rocks found that the United States controls 74 per cent of global AI compute capacity, and while Washington's Genesis programme gives American industry structured access to that infrastructure, AUKUS allies have received no equivalent mechanism. China now leads in 57 of 64 critical technologies central to Pillar II of AUKUS, raising the stakes for how military AI is developed and governed across the alliance. For AUKUS partners, the rules OpenAI and the Pentagon settle on for AI use in classified environments are therefore not merely an American domestic matter.
Altman's weekend messaging attempted to position OpenAI as a principled actor caught in an impossible situation, rather than an opportunist. "We want to work through democratic processes," he wrote. "It should be the government making the key decisions about society. We want to have a voice and a seat at the table where we can share our expertise, and to fight for principles of liberty." He also said he had reiterated in weekend conversations that Anthropic should not be designated a supply-chain risk, and that he hoped the Pentagon would offer Anthropic the same terms OpenAI had agreed to.
Whether that goodwill gesture amounts to anything remains to be seen. What this episode makes clear is that the governance of military AI is being improvised at speed, under political pressure, by parties with overlapping commercial and strategic interests. By leaving the door open to a future "follow-on modification", OpenAI is also acknowledging that the boundaries around military and intelligence use of AI remain fluid, and will likely keep evolving as both regulators and AI providers test where they are willing to draw those lines. Reasonable people across the political spectrum can agree that is not a satisfactory arrangement for decisions with consequences this serious.