In a San Francisco courtroom this week, a federal judge delivered a sharp rebuke to the Trump administration's unprecedented decision to blacklist an American artificial intelligence company. U.S. District Judge Rita Lin issued the ruling on Thursday, two days after lawyers for Anthropic and the U.S. government appeared in court for a hearing. The moment carried the weight of constitutional principle colliding with national security doctrine, and the court sided with the technology company.
Anthropic had sued the administration to reverse its blacklisting by the Pentagon and President Donald Trump's directive banning federal agencies from using its Claude models, seeking a preliminary injunction to pause those actions and prevent further monetary and reputational harm as the case unfolds. Lin granted it, indefinitely blocking the Pentagon's effort to "punish" Anthropic by labeling it a supply chain risk and severing government ties with the AI company, ruling that those measures ran roughshod over its constitutional rights. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote in a stinging 43-page ruling.
The designation represents an extraordinary moment in American technology and defence policy. Anthropic is the first American company to be publicly named a supply chain risk; the label has historically been reserved for foreign adversaries. It requires defence contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military.
The conflict did not emerge from nowhere. Anthropic signed a $200 million contract with the Pentagon in July, but as the company began negotiating Claude's deployment on the DOD's GenAI.mil AI platform in September, talks stalled. The DOD wanted Anthropic to grant the Pentagon unfettered access to its models across all lawful purposes, while Anthropic wanted assurance that its technology would not be used for fully autonomous weapons or domestic mass surveillance. The two failed to reach an agreement, and now, the dispute will be settled in court.
During the hearing, the judge's scepticism toward the government's position became evident. "Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor," Lin said Tuesday. "I don't see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law."
The government's defence rested on concerns about future risk. Eric Hamilton, lawyer for the U.S. government, said Tuesday the DOD had "come to worry that Anthropic may in the future take action to sabotage or subvert IT systems," which is why the company was designated a supply chain risk. Yet the judge found this rationale unconvincing: "The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press,'" she wrote.
In her ruling, Lin articulated a tension that defines this case: institutions claim they need absolute freedom to operate as they see fit, while companies claim the right to set boundaries on how their creations are used. The Pentagon contends it cannot allow contractors to dictate policy; Anthropic argues no private company should be punished for contractual disagreement expressed through public statement.
The practical stakes are considerable. Without the injunction, Anthropic has said in court filings, it could lose billions of dollars in business and suffer further reputational harm. Microsoft threw its support behind Anthropic on Tuesday, saying a judge should issue a restraining order blocking the Pentagon's designation of the AI company as a supply chain risk "for all existing contracts." Such an order, Microsoft said, would "enable a more orderly transition and avoid disrupting the American military's ongoing use of advanced AI."
Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the government to appeal. The underlying litigation will continue, and deeper questions about the government's power to silence private contractors remain unsettled. Yet on this preliminary question, the court has signalled that the administration's move likely crossed constitutional lines. Whether appeals courts agree will shape how government and technology companies negotiate the boundary between national security and free speech in the years ahead.