In December, the Pentagon quietly reached a milestone that signals a decisive shift in how the American military integrates commercially developed artificial intelligence: 3 million employees, warfighters, and contractors were given access to AI on their desktops, according to Pentagon leadership. The tool was Google's Gemini, deployed through a new platform called GenAI.mil.
What began as an experiment has scaled rapidly. Since the December launch, 1.2 million Defence Department employees have used the AI chatbot for unclassified work, running 40 million unique prompts and uploading more than 4 million documents. The adoption exceeds anything the Pentagon has previously achieved with commercial AI systems.
The expansion reflects a deliberate strategy to embed frontier AI into operational and administrative workflows. Pentagon employees can now use Gemini to streamline complex administrative tasks, including summarising policy handbooks, generating project-specific compliance checklists, extracting key terms from statements of work, and creating detailed risk assessments for operational planning. The Pentagon has also unveiled a new "Agent Designer" tool that will allow those 3 million employees to create their own custom AI assistants to automate tasks and streamline complex workflows.
But beneath this success lies an ideological and structural conflict that has become impossible to ignore. Even as Google's Gemini rolled out, the Pentagon designated Anthropic, another leading AI company, a supply chain risk and effectively blacklisted it from defence work. The dispute exposes a fundamental tension: who controls how commercial AI systems are deployed in military contexts?
The Anthropic Standoff
The Pentagon issued the designation after negotiations to update its contract with Anthropic broke down over two red lines the company wanted the Defence Department to accept: that its AI tools would not be used for mass surveillance of US citizens, and that they would not be used for autonomous weapons.
The Pentagon, however, wants to use Anthropic's AI for "all lawful purposes," saying it cannot allow a private company to dictate how its tools are used in a national security emergency. The Defence Department's position is not unreasonable: a military organisation requires maximum operational flexibility, and constraining capabilities based on a vendor's policy preferences creates its own risks.
Anthropic's perspective also has merit. CEO Dario Amodei has said AI cannot currently be used reliably and safely for cases like mass surveillance and autonomous weapons. The company's argument is not that these uses are inherently wrong, but that current technology is inadequate for them. From a risk-management standpoint, a developer voluntarily ruling out applications that exceed what its technology can reliably support is a reasonable position.
Neither side is entirely wrong, which is precisely why the conflict matters. The supply chain risk designation means companies must stop using Claude in work directly tied to the department, and it was the first time the federal government was known to have used the designation against a US company. The measure is extraordinary.
Anthropic filed suit on Monday against the DOD and other agencies, arguing that the Trump administration's ban on its technology and the supply chain risk designation are unlawful. The lawsuit claims the designation punishes Anthropic for being outspoken about its views on AI policy, including its advocacy for safeguards against its technology being used for mass domestic surveillance or autonomous weapons.
The Larger Question
The Google-Anthropic contrast reveals something crucial about how states acquire military capability. When a technology is developed by commercial firms serving multiple customers, the state cannot unilaterally determine its properties. It can negotiate, contract, or regulate; it cannot simply command a private company to abandon its foundational design choices.
Google's willingness to work with the Pentagon reflects a different corporate calculus from Anthropic's resistance. In July, Google unveiled a contract with the department, with a $200 million ceiling, to deploy its frontier AI tools. Both approaches are defensible: Google's choice to partner broadly with defence expands its market and reinforces its dominance, while Anthropic's choice to maintain guardrails protects its reputation and reflects its founding values. Neither position is obviously corrupt or obviously reckless.
What matters going forward is whether military AI deployment happens through negotiation and contracting, or through coercion. Anthropic says its lawsuit is not meant to force the government to work with it, but to prevent officials from blacklisting companies over policy disagreements. That distinction is important: the question is not whether the Pentagon should use Anthropic's technology, but whether the Pentagon can punish companies for declining to meet its demands.
Meanwhile, training has lagged far behind adoption: only 26,000 people have been trained in how to use AI since December, and future sessions run by the Defence Department are fully booked. The Pentagon is deploying powerful generative AI systems at scale, to millions of personnel, with minimal preparation. That gap between capability and competence deserves scrutiny regardless of which vendor provides the tools.
The Pentagon will likely win this confrontation with Anthropic through sheer institutional power. But that will not resolve the underlying problem: whether military integration of commercial AI should proceed through partnership with companies that retain their values, or through coercive acquisition from companies that simply comply.