The Pentagon is planning to have AI companies train versions of their models for military use on classified information, the first indication that firms like OpenAI and xAI could train government-specific models directly on classified data, according to reporting from MIT Technology Review.
This goes beyond current practice: generative AI models used in classified environments can answer questions, but they don't currently learn from the data they see. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a defence official who spoke on background.
The shift reflects the military's push for speed. The US is aiming to become an "AI-first" warfighting force, according to statements from Secretary of Defense Pete Hegseth earlier this year, and the Pentagon has been racing to incorporate more AI since a Hegseth memo in January. Generative AI has ranked lists of targets and recommended which to strike first, and it has been used in more administrative roles, like drafting contracts and reports.
The Pentagon would train copies of AI models while remaining the sole owner of any data used in training; in rare cases, someone from the AI company could be granted the appropriate security clearance to see classified information. Before allowing this new training, the Pentagon intends to evaluate how accurate and effective models are when trained on unclassified data, like commercially available satellite imagery.
If the initiative goes ahead, the department would likely train models from OpenAI and xAI, both of which recently signed agreements with the Pentagon. OpenAI announced an agreement on February 28 for the military to use its technologies in classified settings, and Elon Musk's company xAI has reached a deal for the Pentagon to use its model Grok in such settings.
The security risks are substantial, though experts disagree on their severity. Aalok Mehta, who directs the Wadhwani AI Center at the Center for Strategic and International Studies and previously led AI policy efforts at Google and OpenAI, says the biggest risk is that classified information a model is trained on could resurface for anyone using it. That would be a problem if many different military departments, each with its own classification levels and information needs, were to share the same AI.
The concern is not foreign exposure. Mehta notes that keeping the information contained from the broader world is relatively straightforward: "If you set this up right, you will have very little risk of that data being surfaced on the general internet or back to OpenAI." The government already has infrastructure for this; the data-analytics firm Palantir has won contracts to build a secure environment through which officials can ask AI models about classified topics without sending the information back to AI companies.
Rather, the risk is internal. "You can imagine, for example, a model that has access to some sort of sensitive human intelligence—like the name of an operative—leaking that information to a part of the Defense Department that isn't supposed to have access to that information," Mehta says.
Pushback from Anthropic has complicated the Pentagon's plans. After the two sides disagreed over whether the company could restrict the military's use of its AI, the Defense Department designated Anthropic a supply-chain risk, and President Trump demanded on social media that the government stop using its AI products within six months. Anthropic is fighting the designation in court.
Anthropic had said it was refusing to remove safeguards that prevented its technology from being used for US domestic mass surveillance or to program autonomous weapons, which can attack targets without human intervention.
The Pentagon, by contrast, agreed to OpenAI's company principles that its technology would not be used for "domestic mass surveillance" or for "autonomous weapon systems", affirming that humans would take "responsibility for the use of force".
The broader institutional challenge remains: the January memo suggests that the department may take an approach to compliance with AI security requirements that favours speed and fewer constraints on use. That creates tension between the Pentagon's acknowledged need to move quickly and the novelty of the security risks involved in embedding classified intelligence in AI systems that may be shared across the defence establishment.