OpenAI is preparing to integrate its Sora video generation tool directly into ChatGPT, according to a report from The Information on 11 March. The plan would allow users to create AI videos alongside text and images within the same interface, rather than switching to a separate app. OpenAI plans to maintain the standalone Sora application even after the integration.
The move reflects a pragmatic business decision. The standalone Sora app saw monthly installations decline 45 per cent in January 2026, with user spending falling at the same time, and the app dropped out of Apple's top 100 apps on the U.S. App Store. OpenAI had signed a deal with Walt Disney to generate videos using Disney characters inside Sora, but that partnership did not appear to boost usage in any lasting way. By moving Sora into ChatGPT, the video tool would gain access to a much larger user base, as ChatGPT has hundreds of millions of users, far more than the Sora app attracted on its own.
Yet the integration carries genuine downsides that deserve scrutiny. Adding video generation to ChatGPT could raise operating costs for OpenAI, as video AI models are computationally expensive to run compared to text-based tools. More importantly, making Sora more accessible will likely amplify problems that have already emerged from the standalone version.
The Deepfake Problem Compounds
Sora's safeguards have proven porous. Reality Defender, a company specializing in identifying deepfakes, says it was able to bypass Sora's anti-impersonation safeguards within 24 hours. When the app launched, early users generated realistic-looking deepfakes of historical figures. More recently, following objections from actor Bryan Cranston and the Screen Actors Guild, OpenAI changed its policy to prevent Sora from generating videos of living celebrities or copyrighted characters.
Yet the core problem persists: the only indicator that a video may be fake is a small Sora watermark in the lower right corner, and cybersecurity experts say it is often trivial for bad actors to remove or crop out such labelling before passing videos off as real on social media. Placing Sora inside ChatGPT will only expand the pool of users capable of circumventing these protections through creative prompting or technical workarounds.
Copyright holders face a parallel risk. Sora 2's high-quality outputs arrive amid concerns about illicit or harmful creations, from gory scenes and child-safety risks to the model's role in spreading deepfakes, while questions about copyright continue to swirl. Integrating the tool into ChatGPT will give orders of magnitude more users the ability to generate videos featuring copyrighted characters and music.
OpenAI's Competitive Pressure
The timing reveals something else: competitive desperation. Anthropic's Claude recently hit No. 1 in U.S. app downloads, overtaking ChatGPT, after the Pentagon blacklisted the company for refusing to loosen safeguards for military use of its AI model. The clash has fuelled interest in Claude, as some social media users call for dumping ChatGPT over OpenAI's deal with the Pentagon. Adding video generation capabilities is partly an attempt to claw back users and rebuild momentum.
The strategy makes business sense. Yet it illustrates a genuine tension in AI development. Sora's existing safeguards are neither sufficient nor stable. Expanding its reach before solving those problems amounts to deploying a technology with known vulnerabilities at larger scale. OpenAI has worked with safety researchers and red-teamers, added safeguards such as visible watermarks by default, and built an internal search tool to help verify whether content came from Sora. But these measures have repeatedly failed to prevent misuse.
The integration does not require new judgement calls about whether Sora should exist; the app is already operational. But it does raise a question about timing and responsibility. OpenAI should be clear about what safeguards it has strengthened, and honest about what remains unsolved. Users deserve to know the limits of the technology they are about to use, and content platforms deserve time to prepare for the flood of AI videos that will follow.
For now, OpenAI's Sora integration appears inevitable. The business case is strong and the competitive pressure is real. But the costs of moving fast without adequate defences are not imaginary. They fall on ordinary people who encounter deepfakes of themselves and on copyright holders who watch their work remixed without permission, and they show up in the broader erosion of trust in video as evidence of what actually happened.