From Washington: In a development that will reverberate across the Pacific, hundreds of engineers and researchers at two of America's most powerful technology companies signed a public letter this week backing a rival firm's refusal to hand the US military unrestricted access to artificial intelligence tools. The letter, titled "We Will Not Be Divided", was addressed to the leadership of Google and OpenAI, urging them to stand with Anthropic against what the signatories described as an attempt by the Pentagon to coerce AI companies into abandoning their ethical guardrails.
As reported by Engadget, the open letter had gathered more than 450 signatures at the time of publication, with roughly 400 from Google employees and the remainder from OpenAI. About half of the signatories chose to attach their names publicly; the rest remained anonymous, though all were verified as current employees. The organisers described themselves as unaffiliated with any AI company, political party, or advocacy group.
The letter is the latest chapter in an extraordinary confrontation between Anthropic and US Defense Secretary Pete Hegseth. At the heart of the dispute are two restrictions that Anthropic CEO Dario Amodei has long insisted must remain in place: a prohibition on using the company's AI model, Claude, for domestic mass surveillance of American citizens, and a refusal to allow the technology to power fully autonomous weapons systems that could make lethal decisions without human approval. As Amodei has written publicly, these represent lines that no AI company should cross.
The Pentagon holds contracts worth up to $200 million each with Anthropic, OpenAI, Google DeepMind, and Elon Musk's xAI, awarded last year to develop frontier AI capabilities for both warfighting and enterprise defence applications. Anthropic's Claude was the only model cleared for the military's most sensitive classified work. That distinction gave the company both leverage and vulnerability when Hegseth issued an ultimatum demanding that Anthropic accept an "all lawful use" clause that would, in Anthropic's reading, allow its safety restrictions to be overridden at will.
A Deadline, a Blacklist, and an Awkward Resolution
When Anthropic refused to comply by the 5:01 pm Friday deadline, the consequences were swift and severe. President Donald Trump ordered all US government agencies to immediately cease using Anthropic's products, with a six-month phase-out period for those already embedded in government systems. Hegseth followed by formally designating Anthropic a "supply chain risk to national security," a designation that, according to ABC News, effectively bars every defence contractor, supplier, and partner from conducting commercial activity with the company. Anthropic said the designation was "legally unsound" and pledged to challenge it in court.
The irony arrived hours later. OpenAI CEO Sam Altman announced that his company had struck a deal with the Pentagon to deploy its models in classified networks, and that the agreement included the same core restrictions Anthropic had fought for: prohibitions on domestic mass surveillance and fully autonomous weapons. According to Axios, Altman had earlier told his employees in an internal memo that he would draw the same red lines as Anthropic. As he told CNBC on Friday morning, he did not personally believe "the Pentagon should be threatening DPA against these companies" — a reference to threats to invoke the Korean War-era Defense Production Act to compel Anthropic's compliance regardless of its wishes.
The result was, as legal analysts at Lawfare noted, a striking contradiction: the Pentagon had simultaneously declared Anthropic a risk so severe it must be expelled from all government systems, and so essential that the government had considered invoking emergency wartime powers to compel its cooperation. The logic does not quite hold together, and it invites the question of whether this was principally a dispute about safety policy or about who holds authority over contract terms.
The Legitimate Tension Beneath the Drama
Defenders of the Pentagon's position make a serious argument that deserves honest consideration. Governments, not private corporations, bear democratic responsibility for national security decisions. Allowing a technology vendor to impose unilateral restrictions on how the military conducts operations sets a precedent with real consequences. As one defence analyst told NPR, contractors do not ordinarily dictate to the Defense Department how their products may be used, "because otherwise you'd be negotiating use cases for every contract." There is a principled case that elected governments, accountable to the public, should be the final arbiters of what is lawful in a theatre of war.
The counter-argument is equally substantial. Anthropic's two restrictions were not broad policy demands; they were narrow, specific limits targeting applications that most legal scholars and AI safety researchers regard as genuinely dangerous: AI systems making irreversible lethal decisions without human oversight, and tools that could be turned on a country's own population. As Amodei argued, existing law has not kept pace with AI's capacity to supercharge the collection of publicly available data, from social media to geolocation records, in ways that could constitute surveillance without technically violating current statutes.
The fact that OpenAI ultimately secured a deal containing essentially the same protections Anthropic had requested suggests the Pentagon's objections were not purely operational. Hegseth's characterisation of Anthropic's guardrails as "woke AI" and his Under Secretary's personal attack on Amodei as a man with a "God complex" were not the language of dispassionate contract negotiation. They were political signals, and the AI industry's response, from the workers' open letter to Altman's public solidarity, reflected a recognition that the industry as a whole was being tested.
What This Means for Australia
For Australian policymakers and defence planners, this saga carries direct relevance. Australia is a partner in the AUKUS pact, whose second pillar covers deep cooperation on advanced technologies, including AI. The question of which AI systems will be cleared for use in classified defence environments, and on what terms, is not merely an American domestic matter. If the US military's approach to AI governance becomes the baseline for allied nations, Australia will need to form clear views on where its own red lines sit.
The Albanese government has been developing its own AI governance frameworks, and the Australian AI Ethics Framework published by the Department of Industry already identifies human oversight and the avoidance of harm as core principles. Whether those principles would survive the kind of pressure Anthropic faced is a question worth asking now, before it becomes urgent.
Reasonable people can disagree about where private companies' responsibilities end and governments' authority begins. What this week demonstrated is that the technology industry's most capable AI systems are now so deeply embedded in national security infrastructure that the two cannot be cleanly separated. The workers who signed that open letter were not being naive idealists; they were acknowledging a reality their employers cannot afford to ignore. And neither can the governments that increasingly depend on these tools to project power. Finding workable rules for this new terrain, through legislation, alliance frameworks, and genuine negotiation rather than ultimatums, is the only approach that will hold.