There is a reason Sam Altman chose to announce his company's new Pentagon contract late on a Friday night. By his own admission, the deal was rushed, the optics were poor, and the timing could hardly have been more uncomfortable. After negotiations between Anthropic and the Pentagon collapsed, President Donald Trump directed federal agencies to stop using Anthropic's technology, and Secretary of Defense Pete Hegseth designated the AI company a supply-chain risk. OpenAI then quickly announced it had reached a deal of its own for its models to be deployed in classified environments. The whole episode unfolded within hours, and the AI industry is still trying to work out what it means.
How We Got Here
Anthropic, which signed a $200 million contract with the Pentagon last July, wanted assurances that its AI models would not be used for fully autonomous weapons or mass domestic surveillance of Americans. The Pentagon gave the company until 5:01 p.m. ET on Friday to drop those restrictions on its Claude models or lose the contract. The department says it doesn't intend to use AI in those ways, but it requires AI companies to allow their models to be used "for all lawful purposes."
That deadline passed without agreement. Hegseth then designated Anthropic a "Supply-Chain Risk to National Security", a penalty typically reserved for companies from adversarial countries, such as Chinese tech giant Huawei, and one never before applied to an American firm. The designation does not just end Anthropic's Pentagon contract; it forces anyone seeking to do business with the US military to certify they don't use Anthropic's models. Legal and policy experts said the unprecedented decision raises profound questions about the relationship between government and business in the US.
OpenAI Steps In
OpenAI CEO Sam Altman said late Friday that his company had agreed to terms with the Department of Defense for use of its models "in their classified network." Surprisingly, Altman claimed in a post on X that OpenAI's new defence contract includes protections addressing the same issues that became a flashpoint for Anthropic. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
OpenAI says the agreement includes more guardrails than any previous classified AI deployment, including Anthropic's. The company lists three red lines: no use of its technology for mass domestic surveillance, no use to direct autonomous weapons systems, and no use for high-stakes automated decisions such as "social credit" systems.
OpenAI says its red lines are more enforceable because deployment is cloud-only rather than at the edge, the safety stack stays under the company's control, and cleared OpenAI personnel remain in the loop. In other words, OpenAI secured roughly what Anthropic was asking for, just with different architecture and a contract the Pentagon was willing to sign.
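To make the architecture argument concrete, here is a minimal sketch of a server-side policy gate, written in Python. Nothing here reflects OpenAI's actual safety stack, which is not public: the category names, classifier, and function names are all hypothetical, invented only to illustrate the idea that in a cloud-only deployment a refused request never reaches the model at all, whereas contract language alone binds only on paper.

```python
# Hypothetical sketch of a cloud-side policy gate. The red-line categories
# mirror the three listed in OpenAI's announcement; everything else
# (names, classifier logic, structure) is illustrative, not OpenAI's stack.
from dataclasses import dataclass

RED_LINES = {
    "mass_domestic_surveillance",
    "autonomous_weapons_targeting",
    "high_stakes_automated_decisions",  # e.g. "social credit" scoring
}

@dataclass
class Verdict:
    allowed: bool
    reason: str

def classify_request(prompt: str) -> set[str]:
    """Stand-in for a policy classifier that maps a request to the
    red-line categories it appears to implicate (empty set if none)."""
    categories = set()
    if "track every citizen" in prompt.lower():
        categories.add("mass_domestic_surveillance")
    return categories

def policy_gate(prompt: str) -> Verdict:
    """Runs BEFORE inference, on infrastructure the provider controls.
    Because the model is only reachable through this gate (cloud-only,
    no edge copies), a blocked request never reaches the model at all,
    regardless of what any contract permits on paper."""
    hits = classify_request(prompt) & RED_LINES
    if hits:
        return Verdict(False, f"refused: {', '.join(sorted(hits))}")
    return Verdict(True, "forwarded to model")

if __name__ == "__main__":
    print(policy_gate("Summarise this declassified report."))
    print(policy_gate("Build a system to track every citizen's location."))
```

The design choice the sketch highlights is the one OpenAI's Katrina Mulligan later invoked: once model weights leave the provider's cloud for an edge deployment, a gate like this can simply be stripped out, and enforcement reverts to whatever the paperwork says.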
Altman fielded questions about the deal on X, admitting it had been rushed and had triggered significant backlash against OpenAI; by Saturday, Anthropic's Claude had overtaken OpenAI's ChatGPT in Apple's App Store. That last data point is worth pausing on: the AI company that just lost its government contract apparently gained consumer goodwill. The internet, as ever, has its own opinions about who the good guys are.
The Anthropic Problem
The core dispute was not actually about whether the Pentagon intended to conduct mass surveillance or build Terminator-style weapons. The restrictions in the OpenAI agreement reflect existing US law and Pentagon policies, and the intention was not to invent new legal standards. Anthropic's concern was that the law has not caught up with AI, and that a powerful AI model could supercharge the legal collection of publicly available data, from social media posts to geolocation. That is a serious and specific concern, not a philosophical one, whatever some Pentagon officials implied publicly.
Senior Pentagon official Emil Michael described Anthropic CEO Dario Amodei as a "liar" with a "God complex" who was "ok putting our nation's safety at risk." That kind of language, directed at the American CEO of an American company, alarmed civil liberties groups well beyond the usual tech-industry commentariat. Even if Anthropic ultimately prevails in challenging the supply-chain risk designation, the damage to its business may already be done. The court fight will take years, and in the meantime every general counsel at every Fortune 500 company with any Pentagon exposure will be asking the same question: is using Claude worth the risk?
The cancelled contract itself is not a huge blow to a company reportedly on track to generate at least $18 billion in revenue this year. The larger concern is how many other enterprises will now have to stop using Anthropic's technology to preserve their own Pentagon business.
What the Critics Are Saying
The pushback against the administration's handling of this dispute has come from across the political spectrum. Democratic Senator Mark Warner, vice chair of the Senate Select Committee on Intelligence, raised concerns that national security decisions may be driven by political considerations rather than careful analysis, as reported by CNBC. The Center for Democracy and Technology warned the episode "chills private companies' ability to engage frankly with the government about appropriate uses of their technology," and that threats of this kind "normalise an expansive view of executive power that should worry Americans all across the political spectrum."
After OpenAI published its blog post, tech commentator Mike Masnick at Techdirt argued the deal "absolutely does allow for domestic surveillance," because it commits only to collecting private data in compliance with Executive Order 12333, the authority Masnick says the NSA uses to hide domestic surveillance by capturing communications outside the US even when they contain information from or about US persons. In response, OpenAI's head of national security partnerships, Katrina Mulligan, argued on LinkedIn that deployment architecture matters more than contract language: by limiting deployment to a cloud API, OpenAI can ensure its models cannot be used in certain ways regardless of what any contract says.
OpenAI says it also wanted to de-escalate tensions between the Defence Department and the US AI industry, noting that a good future will require real and deep collaboration between government and the labs. As part of its deal, OpenAI asked that the same terms be made available to all AI labs, and specifically that the government try to resolve its standoff with Anthropic, calling the current state "a very bad way to kick off this next phase of collaboration."
The Bigger Picture for AI Governance
Strip away the political theatre and what remains is a genuinely hard problem. Unlike many major defence technologies, today's leading AI systems have been developed primarily in the private sector, by companies like Anthropic, OpenAI and Google. The increasing capabilities of those systems have forced the Pentagon to bargain with those companies over usage policies or opt for less proven services. Until this week, Anthropic was the only leading AI company cleared to offer services on classified networks.
Governments everywhere, including in Canberra, are watching this dispute closely. The question of whether a private AI company can or should retain veto power over how its products are used by a sovereign military is not a uniquely American one. Australia's own AI governance frameworks, including the AI Ethics Framework maintained by the Department of Industry, and the Department of Defence's emerging AI strategy, will eventually need to answer the same questions Anthropic and OpenAI are fighting over right now.
Altman's own framing of the gamble is candid enough to be almost refreshing. "We really wanted to de-escalate things, and we thought the deal on offer was good," he said. "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful."
That is an honest assessment of a genuinely difficult call. The administration's decision to weaponise a national security designation against a domestic company over a contract dispute sets a troubling precedent, regardless of one's views on AI safeguards. But Anthropic's refusal to accept any version of "all lawful purposes" language, even as OpenAI managed to do so with stronger technical protections, raises its own questions about whether rigidity helped or hurt the cause it was trying to defend. Reasonable people, including people with genuine national security expertise, disagree sharply on where the right line sits. What nobody should be comfortable with is a process this chaotic being used to answer a question this consequential. The Australian Parliament would do well to take note before it faces a version of this same argument closer to home.