As Australians woke to news of escalating conflict in the Middle East this week, a parallel crisis was unfolding on social media: artificial intelligence systems designed to help verify information were actively spreading lies.
X's Grok chatbot, integrated directly into the platform and promoted by owner Elon Musk as a fact-checking tool, repeatedly failed to distinguish real footage from computer-generated fakes. According to research by the BBC, Grok wrongly insisted in multiple cases that AI-generated videos were real. When users asked the chatbot to verify a fake video showing missiles striking Tel Aviv, for instance, it oscillated between conflicting assessments within minutes.
The failure came at precisely the moment when access to authentic information mattered most. Mentions of Grok jumped from a daily average of 1.27 million to 1.8 million on the first day of the conflict, as users, at Musk's own urging, asked the chatbot to supply or verify claims about events in the Middle East.
Grok's shortcomings are only part of the problem. Online creators with growing access to generative AI technology are monetising an unprecedented wave of misinformation about the US-Israel war with Iran, with AI-generated videos and fabricated satellite imagery collectively amassing hundreds of millions of views online.
The technological barrier to creating convincing fake conflict footage has essentially vanished: what once required professional video production can now be done in minutes with AI tools. For creators seeking to profit, the incentive is stark. A viral AI-generated video is effectively a money printer, and some creators have built entire misinformation enterprises around that fact.
The problem reflects a deeper structural issue on X. The platform's revenue-sharing programme rewards creators based on views and engagement, but for years had minimal guardrails around synthetic content. A single fake video of missiles striking Dubai's Burj Khalifa accumulated tens of millions of views before moderators intervened.
X's head of product Nikita Bier announced on March 3 that the platform would specifically target AI-generated deepfakes related to the conflict: users who post AI-generated videos of armed conflict without disclosure now face a 90-day suspension from Creator Revenue Sharing, and repeat offenders face permanent removal from the programme. Bier said that during times of war it is critical for people to have access to authentic information on the ground, and that today's AI technologies make it trivial to create content that can mislead.
The policy response amounts to a recognition that X has lost control of its own platform's information integrity. Yet critics note its limitations: it affects only users who earn revenue through the platform, meaning the vast majority of accounts spreading misinformation face no consequences. And because it covers only AI-generated content, other forms of false information, such as old videos misrepresented as current footage, remain unaddressed.
Other platforms are moving in different directions. Beginning Tuesday, YouTube is expanding its likeness-detection feature to a pilot group of journalists, government officials, and political candidates, alerting them when AI-generated content depicting them appears on the platform and letting them request its removal. The approach reflects a recognition that such fakes erode people's trust in the verified information they see online and make it much harder to document real evidence.
For journalists and researchers trying to document actual events, the flood of synthetic content creates what some have called a "fog of war". Each piece of footage must now be independently verified, slowing reporting while fake claims spread at algorithmic speed. The crisis speaks to a broader challenge: technology companies built systems to maximise engagement and profit, but never adequately addressed what happens when those systems collide with conflict, geopolitics, and financial incentives to deceive.
While many social media companies say they are adapting their moderation and detection systems to the scale and speed at which AI-generated content spreads, there is no simple solution to the problem.