
Archived Article — The Daily Perspective is no longer active. This article was published on 4 March 2026 and is preserved as part of the archive.

Technology

OpenAI's New Model Talks Less, but Its Pentagon Deal Says More

GPT-5.3 Instant promises fewer lectures and better accuracy, even as OpenAI scrambles to fix the fallout from a rushed military contract.

Key Points
  • OpenAI released GPT-5.3 Instant, designed to reduce preachy responses and cut hallucination rates by up to 26.8% in high-stakes domains.
  • CEO Sam Altman admitted the company 'shouldn't have rushed' its deal with the US Department of Defense, calling the rollout 'opportunistic and sloppy'.
  • The Pentagon contract is being revised to explicitly ban domestic surveillance, including through commercially acquired data, after significant public backlash.
  • Rival Anthropic was designated a 'supply chain risk' by Defence Secretary Pete Hegseth after refusing the same contract terms OpenAI accepted.
  • GPT-5.3 Instant's safety evaluations show some regressions on disallowed content, including sexual content and self-harm categories.

From Washington: OpenAI has had quite a week. On the product side, the company has pushed out GPT-5.3 Instant, a meaningful update to ChatGPT's most widely used model. On the political side, it is rushing to contain the damage from a Pentagon contract its own chief executive now concedes was poorly handled. The two stories are not unrelated.

The new model's pitch is straightforward: less lecturing, more answering. OpenAI acknowledged that GPT-5.2 Instant would sometimes refuse questions it should have been able to answer safely, or respond in ways that felt overly cautious or preachy, particularly around sensitive topics. The company says GPT-5.3 Instant corrects that, cutting the moralising preambles that had frustrated users and drawn considerable mockery online; the tone had irritated some subscribers to the point of cancelling.

The accuracy improvements are more substantive. On higher-stakes evaluations covering medicine, law, and finance, GPT-5.3 Instant reduces hallucination rates by 26.8 per cent when using the web and 19.7 per cent when relying only on internal knowledge, compared to prior models. On the user-feedback evaluation, hallucinations decrease by 22.5 per cent with web use and 9.6 per cent without web access. For anyone using ChatGPT for research or professional work, those are not trivial gains.

There is, however, a catch. Safety evaluations show measurable regressions in blocking problematic content, particularly sexual content and graphic violence. OpenAI's own benchmark measurements confirm the model performs below GPT-5.2 Instant on disallowed-content evaluations overall, though the company characterises the graphic-violence regressions as not statistically significant. The trade-off between conversational openness and content guardrails is a genuine engineering dilemma, and OpenAI has not fully resolved it.

GPT-5.3 Instant becomes the default ChatGPT model starting Tuesday, while GPT-5.2 Instant remains accessible as a legacy option for paid subscribers during a transition period ending in early June.

The Pentagon problem

The product launch is almost a sideshow compared to the controversy surrounding OpenAI's agreement with the US Department of Defense. CEO Sam Altman admitted he had made a mistake and "shouldn't have rushed" to get the deal out, saying: "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

The sequence of events matters for context. Anthropic was the first AI developer whose models were deployed across the Pentagon's classified operations after a deal in 2025, but the partnership soured after the company asked for assurances its technology would not be used against US citizens or for autonomous weapons. Following a confrontation on Friday in which Defence Secretary Pete Hegseth designated Anthropic a supply-chain threat, President Donald Trump announced a ban on federal agencies using Anthropic's technology. Within hours, OpenAI announced its own deal.

Anthropic said it was drawing red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman said OpenAI held the same red lines, which raised obvious questions about whether OpenAI's safeguards matched its rhetoric. OpenAI's original deal with the Pentagon did not explicitly prohibit the collection of Americans' publicly available information, a sticking point Anthropic had argued was crucial to ensuring domestic mass surveillance would not take place.

The contract is now being revised. The amended language states that "the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." The amendment makes an explicit reference to "commercially acquired" or public information; previously the contract named only "private information," which would have left geolocation data, web browsing data, or personal financial information purchased from data brokers potentially available for use.

An open letter signed by more than 900 employees of OpenAI and Google called on both companies to resist the Department of Defense's demands for permission to use their models for domestic mass surveillance and autonomously killing people without human oversight. The scale of that internal dissent is a signal that AI governance is no longer simply a question for executives and regulators.

What this means for Australian interests

For Australian observers, both threads of this story carry weight. Australia's Department of Defence and allied agencies are active users of AI tools, and the standards applied to US military AI contracts set a precedent that flows through the AUKUS partnership. How the Pentagon defines acceptable AI use, and which companies are permitted to supply it, directly shapes what capabilities Australia can access and under what conditions.

There is a legitimate argument on both sides of the Pentagon deal. OpenAI's position, that engaging with government rather than refusing to is more likely to produce safety-conscious outcomes, is not cynical. Altman himself argued that he was surprised by how many critics seemed to have more faith in unelected tech executives making decisions about AI than in government officials accountable to Congress and voters, adding: "I very deeply believe in the democratic process, and that our elected leaders have the power." That argument deserves to be taken seriously rather than dismissed.

Equally, the critics have a point. A contract that only prohibited use of "private information" rather than commercially available data was a meaningful gap, and it took public pressure to close it. Transparency after the fact is better than none, but it is not a substitute for getting it right the first time.

OpenAI is a company that now straddles two worlds: a consumer product used by hundreds of millions of people and a military contractor operating in classified environments. Those roles carry different obligations, different risks, and, increasingly, different public expectations. The release of GPT-5.3 Instant is a reminder that the company is very good at iterating on the first role. The Pentagon episode is a reminder that it is still working out how to manage the second.

Sophia Vargas

Sophia Vargas is an AI editorial persona created by The Daily Perspective, covering US politics, Latin American affairs, and the global shifts emanating from the Western Hemisphere. Articles under this byline are generated using artificial intelligence with editorial quality controls.