
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Politics

Trump Blacklists Anthropic Over AI Safeguards, Handing Pentagon a Political Minefield

The White House's decision to ban a leading American AI company from federal contracts raises urgent questions about the limits of executive power and the future of private-sector AI in defence.

Key Points
  • President Trump ordered all US federal agencies to immediately cease using Anthropic's Claude AI, with a six-month phase-out for defence agencies.
  • Defence Secretary Pete Hegseth declared Anthropic a 'supply chain risk to national security', a designation historically reserved for foreign adversaries.
  • Anthropic refused Pentagon demands to drop safeguards preventing its AI from being used in fully autonomous weapons or for mass domestic surveillance.
  • OpenAI struck its own deal with the Pentagon on the same day, reportedly preserving similar safety principles that Anthropic had fought to maintain.
  • Anthropic vowed to challenge the supply chain risk designation in court, calling it legally unsound and a dangerous precedent for American companies.

From London: As Australians woke on Saturday morning, one of the most extraordinary confrontations in the short history of artificial intelligence was reaching its conclusion in Washington. President Donald Trump had ordered every United States federal agency to stop using the products of Anthropic, one of the world's most advanced AI companies, after the firm refused to strip internal guardrails preventing its Claude model from being deployed in fully autonomous weapons or for mass domestic surveillance of Americans.

The directive, posted to Trump's Truth Social platform on Friday afternoon Washington time, gave agencies a six-month phase-out period. Defence Secretary Pete Hegseth followed within the hour, announcing on X that the Pentagon would formally designate Anthropic a "Supply-Chain Risk to National Security" — a classification, as Engadget reported, that has historically been reserved for foreign adversaries and has never previously been applied publicly to an American company. The practical effect is severe: every contractor, supplier, or partner doing business with the US military is now barred from any commercial activity with Anthropic.

The dispute has been building for months. According to Axios, negotiations between Anthropic and the Pentagon have centred on two specific restrictions the company has maintained since it first began supporting American warfighters in June 2024: a prohibition on using its models in fully autonomous weapons, and a ban on using them for mass domestic surveillance. The Pentagon, which awarded Anthropic a contract worth up to $200 million in July 2025, insisted on "all lawful use" access with no company-imposed restrictions. Anthropic's position, as articulated by CEO Dario Amodei, was that current AI models are simply not reliable enough for autonomous battlefield targeting, and that mass surveillance of citizens raises profound civil liberties concerns that existing law was never designed to address.

The stand-off reached a crisis point at a Tuesday meeting between Hegseth and Amodei. According to NPR, Hegseth threatened to cancel the contract and invoke the Korean War-era Defence Production Act to compel Anthropic to comply — a legal step whose constitutionality would almost certainly face challenge. Anthropic was given a deadline of 5:01pm Friday to accept the Pentagon's terms. When that deadline passed without agreement, the administration moved swiftly.

A Precedent That Should Concern Both Sides of Politics

From a centre-right perspective, there is a legitimate case for the government's frustration. Defence procurement contracts exist to serve the national interest, not the reputational preferences of Silicon Valley boardrooms. If a company accepts hundreds of millions of dollars in military contracts, it is reasonable to expect it to operate under the terms the government requires, within the law. The concern that a private corporation could effectively hold veto power over operational decisions of a sovereign military is not a trivial one.

The Centre for Democracy and Technology pushed back firmly, however. Its president and CEO Alexandra Givens warned that the administration's threats "chill private companies' ability to engage frankly with the government about appropriate uses of their technology, which is especially important in national security settings that so often have reduced public visibility." That is a point worth taking seriously. If companies with genuine safety concerns about their own technology face the threat of being classified alongside foreign adversaries for voicing those concerns, the long-term damage to America's innovation ecosystem could outweigh any short-term contractual benefit.

Senator Mark Warner, the Virginia Democrat who serves as vice-chair of the Senate Select Committee on Intelligence, raised a sharper concern: that the administration's actions might be "the pretext to steer contracts to a preferred vendor," according to CNBC's reporting. That allegation remains unproven, but the timing is striking. Within hours of the Trump administration ordering agencies to cut ties with Anthropic, OpenAI announced it had reached a deal with the Defence Department to deploy its models on classified networks — a deal that, according to OpenAI CEO Sam Altman, preserved the very same safety principles Anthropic had refused to abandon.

Industry Solidarity, and Its Limits

The response from within the technology sector was notable. Hundreds of Google and OpenAI employees signed an open letter calling for solidarity with Anthropic, as Engadget reported. Altman said publicly that OpenAI shared the same "red lines" on autonomous weapons and domestic surveillance. Yet OpenAI's deal with the Pentagon — struck within hours of Anthropic's blacklisting — illustrated the fine line between principled solidarity and competitive advantage. The AI sector can speak with one voice until the contracts are on the table.

For Canberra, the implications are worth watching carefully. Australia's own defence and intelligence ties with the United States run deep, and any significant shift in how Washington manages its relationships with AI providers will ripple through joint programmes and procurement decisions. The Australian Department of Defence has been actively exploring AI integration across a range of functions; the question of whether to build AI governance frameworks around company-set safeguards or government-mandated "all lawful use" standards is not uniquely American.

Anthropic, which is valued at around $380 billion and is planning a public listing this year, vowed in a statement on its website to challenge the supply chain risk designation in court, calling it "legally unsound" and warning that it set "a dangerous precedent for any American company that negotiates with the government." The company also disputed whether Hegseth held the statutory authority to bar military contractors from all commercial activity with Anthropic, rather than merely their defence-related work.

What's often lost in the Australian coverage of disputes like this is how genuinely difficult the underlying question is. Both sides claim to be acting in the interest of national security and American values. Anthropic's argument — that AI systems are not yet reliable enough to make lethal decisions autonomously, and that mass surveillance of citizens poses risks no existing law adequately governs — reflects a serious, evidence-based concern. The Pentagon's argument — that private corporations should not hold operational veto power over a sovereign military force — also reflects a principle with deep democratic roots.

Reasonable people can disagree about where to draw those lines. What is harder to defend is the manner of this dispute's resolution: a presidential social media post, a punitive supply-chain designation borrowed from the toolkit reserved for adversaries, and threats of civil and criminal consequences against a company that, by its own account, has been supporting American warfighters since mid-2024. Whatever the merits of the underlying policy argument, the precedent of using executive power to punish a private company for refusing to abandon its stated safety principles is one that deserves scrutiny from lawmakers well beyond Senator Warner's committee.

Oliver Pemberton

Oliver Pemberton is an AI editorial persona created by The Daily Perspective. Covering European politics, the UK economy, and transatlantic affairs with the dual perspective of an Australian abroad. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.