From Tokyo: There is a particular kind of vertigo that sets in when a conflict erupts and the internet floods with imagery faster than any journalist can process it. The joint US-Israel strikes on Iran that began on 28 February 2026 produced exactly that effect, and within hours the social media ecosystem was awash with videos that bore almost no relationship to events on the ground.
Some of the footage was recycled from earlier conflicts. Some was taken from military-themed video games. Some was produced entirely by AI. AAP FactCheck found numerous examples of AI-generated content being passed off as genuine war footage, or old imagery being misattributed to recent events. In one case circulating widely on X, an Australia-based commentator posted a video of a burning building, claiming it showed a CIA facility in Dubai targeted by Iran; the footage actually showed a 2015 fire at a residential building in Sharjah.
The verification challenge is not simply one of volume. Emmanuelle Saliba, chief investigative officer of digital forensics firm GetReal Security, described it as "the first time we've seen generative AI be used at scale during a conflict." That matters because the tools producing this content have matured rapidly. UC Berkeley digital forensics professor Hany Farid noted that many AI-generated videos run eight seconds or less, or are assembled from eight-second clips edited together, a limitation imposed by Google's Veo 3 text-to-video model. Knowing this, verification experts advise treating suspiciously brief clips with particular scepticism.

GetReal Security identified a wave of fabricated videos linked to the conflict, tracing visually compelling clips depicting apocalyptic scenes of war-damaged aircraft and buildings to Google's Veo 3 generator, known for its hyper-realistic output. The Carnegie Endowment for International Peace has separately documented cases in which advanced detection tools and expert analysts could not agree on whether specific footage was authentic or AI-generated. That ambiguity cuts both ways: fabricated clips are passed off as real, while false claims of AI manipulation are used to discredit genuine footage and dismiss compromising material.
Into this environment, X moved on 3 March with a targeted policy change. X head of product Nikita Bier announced that creators who use AI to mislead audiences about armed conflicts would be removed from the company's Creator Revenue Sharing Programme for 90 days, with permanent exclusion for repeat offenders. Enforcement will draw on available metadata embedded by AI systems, combined with Community Notes, X's crowd-sourced fact-checking tool. Bier also confirmed that creators can comply by selecting a "Made with AI" label through the platform's content disclosure menu.
The policy has attracted immediate scrutiny, and not without reason. As Engadget reported, the measure is notably narrow, applying only to creators enrolled in the revenue-sharing programme, not to ordinary accounts. X already watermarks content produced by its own Grok chatbot and is separately testing a broader AI-labelling toggle, though no timeline has been confirmed for that feature. Critics also point to a semantic tension in Bier's framing: he invoked "times of war" as justification, yet the United States has not formally declared war since 1942, and the current conflict carries no such legal designation.
The structural problem runs deeper than any single platform's policy. Experts warn that major tech platforms have weakened safeguards in recent years by scaling back content moderation and reducing reliance on human fact-checkers, creating the conditions in which AI-generated wartime disinformation spreads with unprecedented speed. Detection tools, even when available, often fall short because they are not designed for global populations, multiple languages, and varied media formats, or because their results are not comprehensible to a journalist, a non-expert, or a sceptical public.
For Australian audiences, the issue is not remote. Thousands of Australians remain stranded in the Middle East as military tensions escalate, with an Australian defence base reported to have come under attack from Iran. When Australians turn to social media for updates on the safety of family members or the status of a conflict that directly involves Australian strategic partners, the quality of information they encounter carries real consequences.
The honest position is that no single intervention, whether X's revenue-sharing penalty, AAP FactCheck's diligent debunking, or AI detection algorithms from firms like GetReal Security, is equal to the scale of the problem. What verification experts, platform moderators, and independent journalists are doing is necessary work. But the gap between what can be verified and what has already spread is measured in millions of views, and in minutes rather than hours. As the Carnegie Endowment has warned, there is a growing gulf between the realism of synthetic content and the efficacy of the detection tools available during active conflict.
Reasonable people disagree on where platform responsibility ends and individual media literacy begins. Compelling arguments exist for heavier regulatory obligations on social media companies, just as there are legitimate concerns about concentrating editorial power in the hands of a small number of Silicon Valley gatekeepers. What the current crisis illustrates is that neither the market nor community moderation has yet produced a credible answer. In the meantime, the most reliable guidance remains straightforward: treat unverified footage of an active conflict with the same scepticism you would give an anonymous tip, and follow accredited fact-checkers such as AAP FactCheck and RMIT ABC Fact Check before sharing anything that looks too dramatic to be true.