From Washington: Anthropic's high-stakes clash with the Pentagon has produced an unexpected dividend for the AI company in the only market that truly matters: paying customers. As federal pressure on the startup intensifies, Claude is converting business users at a rate that outpaces its rival OpenAI, signalling a broader shift in how enterprises choose their artificial intelligence tools.
In February, Anthropic's business software subscriptions grew 4.9 percent month over month, whilst OpenAI's subscription share fell 1.5 percent, according to data from Ramp, a fintech company. OpenAI's decline marked the largest single-month loss for any AI model company since Ramp began tracking business AI adoption.
The numbers reveal genuine momentum beyond the headline drama. Nearly one in four businesses on Ramp now pays for Anthropic, up from one in 25 a year ago, according to an economist at the tracking firm, and Anthropic is winning roughly 70 percent of new business deals. OpenAI still leads in total business subscription market share, at 34.4 percent to Anthropic's 24.4 percent, though the gap narrows monthly.
The context matters. In late January, tensions erupted between Anthropic and the Defence Department over whether the company could maintain guardrails restricting its models from military use in mass surveillance or autonomous weapons. Reuters reported the rift centred on Anthropic's refusal to remove model safeguards to make its systems more amenable to military applications. Rather than capitulate, Anthropic pushed back publicly at the end of February, despite the political cost.
That defiance proved commercially potent. On March 4, Washington designated Anthropic a supply chain risk to US national security; even so, the Defence Department dispute coincided with a surge in Claude installations and ChatGPT removals. Claude overtook ChatGPT to become the most downloaded free app in Apple's App Store. For many users, especially in knowledge work and software development, the distinction mattered: a company unwilling to compromise on safety guardrails signalled a different set of values.
Not everyone applauds Anthropic's stance, and the underlying tension is real. The Pentagon says it does not intend to use AI for mass surveillance or autonomous weapons, but requires AI companies to allow their models to be used "for all lawful purposes". OpenAI struck a deal on those terms within days of Anthropic's refusal. Why the Defence Department reached an accommodation with OpenAI but not Anthropic remains unclear, though government officials have for months criticised Anthropic for what they characterise as an excessive concern with AI safety.
The economics create a genuine dilemma for investors and policymakers. Anthropic's annual revenue run rate now stands at $14 billion, after the company raised another $30 billion in February. Yet in a court filing, Anthropic's CFO Krishna Rao said the company has generated more than $5 billion in revenue since entering the commercial market, suggesting the headline run-rate figure overstates current cash generation. The Pentagon designation could slash Anthropic's 2026 revenue by billions of dollars if contractors abandon the platform to preserve their federal contracts.
Anthropic is doubling down on professional infrastructure, stacking integrations with financial terminals and developer tools, rather than chasing consumer scale. This niche focus shields it somewhat from government pressure on consumer products whilst positioning it as indispensable in regulated sectors and financial services. One Google principal engineer publicly acknowledged that Claude reproduced a year of architectural work in one hour; Microsoft has widely adopted Claude Code internally across major engineering teams.
The question facing Anthropic is whether commercial momentum can survive legal and regulatory assault. The Claude maker filed two complaints against the Department of Defence on Monday, in California and in Washington, D.C. Legal experts suggest the government's case may be undermined by a mismatch between the law invoked and Anthropic's conduct, by internal contradictions in the Pentagon's behaviour, and by evidence that its decision may have been driven by animus rather than security. Courts move slowly; businesses move fast. If the supply chain designation persists for months, defence contractors will migrate to alternatives for institutional reasons, not principle.
For Australian technology leaders watching this unfold, the conflict illustrates a genuine tension between government authority and commercial autonomy. Anthropic could have chosen the path of least resistance, accepted the Pentagon's terms, and deferred to lawyers to manage downstream risks. Instead, it chose transparency and principle. The market has rewarded that choice, at least in the near term. Whether voters, courts, and defence officials ultimately side with the company remains an open question.