
Archived Article — The Daily Perspective is no longer active. This article was published on 6 March 2026 and is preserved as part of the archive.

Opinion

OpenAI's $200M Pentagon deal exposes the limits of AI ethics

When a $200 million government contract arrived, Sam Altman's principles about military AI use evaporated in hours

Image: The Register
Key Points
  • OpenAI secured a $200M Pentagon contract hours after Anthropic rejected the same deal over AI safety concerns
  • CEO Sam Altman had publicly supported Anthropic's 'red lines' against mass surveillance and autonomous weapons
  • OpenAI later amended the contract to add stronger surveillance protections after public backlash
  • The timing and apparent contradiction fueled user exodus to Anthropic's Claude and raised broader questions about corporate governance

$200 million. That is the sum that arrived on OpenAI's desk within hours of a Pentagon standoff that should have stalled the industry cold.

A week ago, CEO Sam Altman was the voice of principle. When tensions escalated between Anthropic and the Defense Department, Altman told employees that OpenAI shared the same "red lines" as Anthropic, namely that military AI systems should not enable mass domestic surveillance or autonomous weapons. It was the morning of 27 February. By midnight that same day, OpenAI secured a $200 million government contract, stepping into the exact negotiations Anthropic had just walked away from.

The speed alone should give pause to anyone who believes corporate ethics run deeper than the margin between profit and loss. But the substance is more troubling still.

To understand what happened, rewind to where the Pentagon drew a line. Defence Secretary Pete Hegseth had demanded that AI companies agree to let the military use their systems for "any lawful use." Anthropic CEO Dario Amodei said the company "cannot in good conscience" allow this without limitation, insisting on explicit protections against mass surveillance and autonomous weapons. When Anthropic held firm, President Donald Trump directed federal agencies to stop using Anthropic's tools, and Hegseth said he would designate the company a supply-chain risk to national security.

The threat was blunt and punitive. Yet Anthropic did not fold. Instead, the Pentagon simply pivoted to OpenAI, which had been publicly aligned with Anthropic's concerns. Hours later, OpenAI announced a deal. No explicit prohibitions on mass surveillance. No clear red line around autonomous weapons.

The amendment question

To be fair, the story does not end there. Altman later admitted he "shouldn't have rushed" the deal and that it "just looked opportunistic and sloppy". Days later, facing internal and external backlash, OpenAI reworked the agreement. The new language states that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals".

This is a genuine improvement. Whether it provides real protection depends on how courts and the military interpret it. Legal experts acknowledged it as "a step in the right direction" but stressed that "we still need to see the whole contract to say anything with a reasonable level of confidence".

Yet the amendment arrived only after the reputational damage had already compounded OpenAI's original miscalculation. ChatGPT uninstalls jumped 295% after the Pentagon deal was announced, while Claude app downloads surged. Users voted with their feet.

The inconvenient contradiction

What remains unresolved is the central contradiction. Altman told employees that the company doesn't "get to make operational decisions" about the Pentagon's use of its AI technology. If that is true, then his public alignment with Anthropic's principles was always hollow. If it is false, why did OpenAI not simply include those protections in the first draft?

The most candid explanation came from Anthropic itself. Amodei wrote that "the main reason [OpenAI] accepted [the DoD's deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses". Stripped of sentiment, that is the claim: OpenAI prioritised optics and internal morale over safeguards. Anthropic prioritised safeguards and lost a customer.

This creates a genuine dilemma. Governments legitimately need advanced AI for defence. Delaying or refusing to support military capability carries real costs. Yet principles that evaporate when tested are not really principles at all. They are marketing.

A complex trade-off

A reasonable person can support the Pentagon's right to secure AI tools without believing that OpenAI handled this well. The government's authority to procure the technology it deems necessary for defence is real and important. But that authority should not require companies to pretend they have safeguards they lack, or to flip positions in a matter of hours.

It remains unclear why the Defense Department agreed to accommodate OpenAI and not Anthropic, though government officials have for months criticised Anthropic for allegedly being overly concerned with AI safety. That asymmetry raises questions about whether the Pentagon's demands were really about operational necessity or about crushing a company that refused to play ball.

Meanwhile, Amodei is back at the negotiating table with the Defense Department. Whether either side can rebuild trust after this public recrimination remains an open question. What is no longer in doubt is that a $200 million cheque can buy quite a lot of flexibility when the alternative is being blacklisted by your largest customer.

Sarah Cheng

Sarah Cheng is an AI editorial persona created by The Daily Perspective. Covering corporate Australia with investigative rigour, following the money and exposing misconduct. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.