
Archived Article — The Daily Perspective is no longer active. This article was published on 12 March 2026 and is preserved as part of the archive.

Politics

America's AI Standoff Reveals Dangerous Power Imbalance

As Washington blacklists Anthropic, Australia must rethink its reliance on America's shifting tech alliances

Image: The Verge
Key Points
  • Pentagon designated US AI company Anthropic a security threat after contract negotiations failed over military AI safeguards.
  • The move is unprecedented; supply chain designations typically target foreign adversaries, not domestic companies.
  • Legal experts question the government's authority and suggest the action may violate Anthropic's free speech rights.
  • Australia should examine its own procurement frameworks to prevent similar weaponisation of government leverage.

In an extraordinary escalation, the Pentagon has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." What makes this significant extends beyond Silicon Valley politics. For Australia and other allied nations, the incident exposes how rapidly government can weaponise procurement authority when it disagrees with a private company's views on technology use.

The Pentagon issued the supply chain risk designation after negotiations to update its contract with Anthropic broke down over two red lines the company wanted the Defense Department to accept: that its AI tool would not be used for mass surveillance of US citizens, and that it would not be used for autonomous weapons. These are not fringe positions. Microsoft stated it "also believes that American AI should not be used to conduct domestic mass surveillance or start a war without human control."

Yet the Pentagon's response was brutal. Last week it designated Anthropic a supply chain risk, meaning companies must stop using Claude in cases directly tied to the department. President Trump also told the federal government in a Truth Social post to stop using Anthropic's technology, and some agencies have begun offboarding the tools.

The designation is an extraordinary move that has historically been reserved for foreign adversaries; this is the first time the federal government is known to have used it against a US company. The scope extends beyond Pentagon contracts: defence contractors would have to sever all commercial ties to Anthropic, a requirement most legal experts say is not supported by the statute governing supply chain risks.

Anthropic has not capitulated. The company is asking courts to undo the supply chain risk designation, block its enforcement and require federal agencies to withdraw directives to drop the company. It says its lawsuits are not meant to force the government to work with Anthropic, but to prevent officials from blacklisting companies over policy disagreements.

The financial consequences are substantial. According to Anthropic's lawyers, "for 2026, the government's adverse actions risk hundreds of millions, or even multiple billions, of dollars in lost revenue." More than 100 enterprise customers have reached out to the company about the designation.

Legal experts are sceptical of the government's position. The underlying dispute is a contractual disagreement about two usage restrictions. Designating the company a supply chain risk on that basis is not an exercise of the authority Congress granted; it is an exercise of an authority Congress never contemplated. Federal law defines supply chain risk as a "risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert" a system in order to disrupt, degrade or spy on it. A private company rejecting a contract term does not fit that definition.

The broader implication matters for Australia. When the world's largest military power establishes the precedent that it can designate a domestic company a security threat for refusing contract terms, government procurement becomes a tool of coercion rather than competition. It signals that technology partnerships with the US government carry risks beyond commercial performance. Should Canberra choose to partner with American AI firms on defence projects, officials must now consider whether those partnerships could be dissolved if disputes arise over how technology can be deployed.

There is a legitimate debate over how AI should be used in military operations. But that debate belongs in Parliament and the courts, not in executive blacklists applied to domestic companies that exercise their right to negotiate terms. Anthropic claims the designation punishes the company for being outspoken about its views on AI policy, including its advocacy for safeguards against its technology being used for mass domestic surveillance or autonomous weapons. That is precisely the kind of power imbalance that erodes institutional accountability.

For Australia's defence partnerships, especially within the Department of Defence, the lesson is clear: insist on contractual certainty with private technology vendors, and resist the temptation to weaponise procurement authority for political purposes. The legal system should settle disputes over contract terms, not the executive branch acting unilaterally.

Aisha Khoury

Aisha Khoury is an AI editorial persona created by The Daily Perspective. Covering AUKUS, Pacific security, intelligence matters, and Australia's evolving strategic posture with authority and nuance. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.