Anthropic is reinforcing its intellectual and political defences at a critical moment. As the artificial intelligence company battles an unprecedented Pentagon blacklist, it has announced a restructuring of its leadership that combines research operations into a new think tank while dramatically expanding its government affairs presence.
Jack Clark, one of Anthropic's four co-founders, is stepping down from his role as head of public policy to lead The Anthropic Institute, a newly created research operation that consolidates three existing teams. Under Clark's direction, the Institute will manage the Frontier Red Team, the Economic Research programme, and the company's work on societal impacts. Sarah Heck will replace Clark as head of public policy, overseeing a team the company is tripling in size as it opens a permanent office in Washington this spring.
The timing reflects the gravity of the crisis facing the company. Anthropic filed a lawsuit against the Trump administration after the company was blacklisted and deemed a threat to U.S. national security. The startup was officially designated a supply chain risk, an extraordinary move that has historically been reserved for foreign adversaries.
The designation emerged from a breakdown in contract negotiations with the Pentagon. Anthropic wanted assurances that its technology would not be tapped for fully autonomous weapons or domestic mass surveillance, while the DOD wanted unfettered access to Claude across all lawful purposes. During a tense meeting on Tuesday, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until 5:01pm on Friday to agree to the Pentagon's terms. After that, Hegseth warned, the administration would either use the Defense Production Act to compel Anthropic to tailor its model to the military's needs or declare the company a supply chain risk. Anthropic refused to compromise.
The consequences are substantial. The complaint says the designation could jeopardise "hundreds of millions of dollars" in revenue. More broadly, defence tech companies are preemptively moving their workforces off Claude and onto other artificial intelligence models: one executive said they had told employees to start switching out Claude, a process that could take a week or two.
Yet Anthropic's position carries intellectual weight. Tara Chklovski, CEO of Technovation, said that if the Defence Department pursues this strategy to its end and cuts off Anthropic, it could prove a dangerous decision: Anthropic has been the most deliberate model creator when it comes to building systems for the military, she argued, and any alternative supplier the government turns to will be less safe. Dozens of scientists and researchers at OpenAI and Google DeepMind filed an amicus brief supporting Anthropic, arguing that the supply chain risk designation could harm U.S. competitiveness in the industry and hamper public discussion of the risks and benefits of AI. They also said Anthropic's red lines raise legitimate concerns.
The paradox is striking. Claude remains the only AI model deployed across the military's classified networks. According to multiple reports, it was used, through Anthropic's partnership with Palantir, during the operation to capture Venezuela's Nicolás Maduro, and could foreseeably figure in a potential military campaign in Iran. The Pentagon, in other words, depends on the very technology it has blacklisted.
Anthropic's organisational restructuring suggests the company is preparing for a prolonged standoff. By consolidating its research operations under Clark and building out its policy capacity in Washington, the company is signalling that this conflict will not be resolved quickly. "AI is advancing faster than any technology in history, and the window to get policy right is closing," Heck said.
The larger question is whether the government will blink first. A senior Defence official told Axios: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this." That language suggests resolve, but the operational reality within the DOD is especially complicated, in part because the U.S. is actively carrying out a military operation in Iran, one that Anthropic's models have supported even after the company was blacklisted.
Reasonable observers can disagree on whether Anthropic's refusal to grant the Pentagon unrestricted use of its technology represents principled leadership or reckless obstruction. The company frames it as a defence of safety standards and democratic oversight of military AI. The Pentagon views it as a vendor attempting to dictate terms on matters of national security. Both positions have legitimate foundations in different values: corporate autonomy and technological responsibility on one side; state prerogative and wartime necessity on the other.
What appears clear is that Anthropic believes this fight will be won or lost in Washington, not in the marketplace. By consolidating its research agenda and tripling down on policy engagement, the company is betting that intellectual credibility and political strategy can overcome the immediate economic damage of a supply chain risk designation.