Caitlin Kalinowski, a senior robotics leader at OpenAI since November 2024, has departed over concerns about the company's military negotiations. Her resignation puts a spotlight on the governance questions surrounding AI companies' role in military applications at a moment when the industry is racing to secure government contracts.
In a social media post, Kalinowski wrote that OpenAI did not take enough time before agreeing to deploy its AI models on the Pentagon's classified cloud networks, saying that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." While she expressed "deep respect" for OpenAI CEO Sam Altman and the team, she said the company announced the Pentagon deal "without the guardrails defined."
The broader context matters here. Just over a week ago, OpenAI revealed its partnership with the Pentagon, following failed talks between the Department of War and Anthropic, which had sought safeguards to prevent its AI from being used for mass domestic surveillance or fully autonomous weapons. By Altman's own admission, OpenAI's deal with the Department of Defense was "definitely rushed," and "the optics don't look good." After the Anthropic negotiations collapsed on Friday, President Donald Trump directed federal agencies to stop using Anthropic's technology after a six-month transition period, and Secretary of Defense Pete Hegseth said he was designating the AI company as a supply-chain risk.
After the agreement was signed, Altman said the contract incorporates protections similar to those that were a point of contention in Anthropic's negotiations: two of OpenAI's core safety principles, a ban on domestic mass surveillance and a requirement that humans remain responsible for the use of force (including in autonomous weapons systems), are reflected in the Pentagon agreement. OpenAI has published selected contract language that it says establishes these safeguards, though the full text remains unpublished.
However, significant scepticism persists among observers. Brad Carson, a former congressman and general counsel of the Army who now leads a Washington policy group, noted that "OpenAI has said that the Department of War contractually agreed not to use ChatGPT in agencies that surveil American people," but added that "they refuse to release to the public this contractual provision," leading him to conclude: "I think this provision doesn't really exist, and they are just trying to fake it." A former Pentagon official who worked on military artificial intelligence applications told The Intercept that the caveats around "intentional" surveillance are worryingly unclear: "That's the get out of jail free card right there. The language gives them enough flexibility to still do whatever the fuck they want, more or less, and then say, whoops, sorry, didn't mean to."
The governance question Kalinowski raised cuts to the heart of a real tension. AI companies do face pressure to support national security work, yet the speed with which OpenAI moved, the lack of transparency about the final safeguards, and the competitive dynamics that followed Anthropic's rejection all suggest a process that prioritised speed over careful deliberation. Whether the safeguards OpenAI claims to have secured will hold up in practice, and whether they constitute adequate oversight, remains uncertain without public access to the contract itself.
For Australian readers, this story matters because it illustrates how decisions made by American tech firms about military AI applications can set precedent globally. The frameworks these companies accept now will influence how governments worldwide approach AI and national security. The balance between supporting legitimate defence interests and maintaining meaningful civilian oversight of powerful technologies is not a problem unique to the United States; Australia will face identical questions as it develops its own AI strategy.