
Archived Article — The Daily Perspective is no longer active. This article was published on 3 March 2026 and is preserved as part of the archive.

Technology

X Moves to Flag AI War Videos, But the Policy Has Holes

The platform will penalise paid creators for undisclosed synthetic conflict footage, though critics note the rules leave most users untouched.

Image: Engadget
Key Points
  • X will suspend creators from its revenue sharing programme for 90 days for posting unlabelled AI-generated armed conflict videos, with permanent removal for repeat offences.
  • The policy applies only to creators enrolled in the paid revenue sharing programme, leaving non-monetised accounts entirely outside its scope.
  • Violations will be detected via Community Notes and AI-generated metadata, though enforcement reliability remains an open question.
  • X's head of product Nikita Bier cited the importance of authentic information 'during times of war', even as the current US-Israel-Iran conflict has not been formally declared a war.
  • A broader AI labelling toggle for all users is reportedly in testing, but X has not committed to a timeline for its rollout.

There is a particular kind of damage that a convincing fake can do in wartime. A fabricated video of a missile strike, indistinguishable from real footage, spreads faster than any correction. It hardens opinion, inflames publics, and can shape the political conditions in which governments make decisions. It is against this backdrop that X has announced a targeted but limited policy to address AI-generated conflict videos on its platform.

X will suspend creators from its revenue sharing programme if they post AI-generated videos depicting armed conflicts without disclosing they were made with AI. Head of product Nikita Bier announced the policy change on 3 March, saying first-time violators will be cut off for 90 days, with repeat offenders permanently removed from the programme. The timing is pointed: the announcement came three days after the US and Israel launched strikes against Iran, triggering a cycle of retaliation.

The scale of the misinformation problem on X is not trivial. Outdated and AI-generated videos on X have spread misleading claims about US-Israeli strikes in Iran and Iran's counterattacks. Fact-checkers at PolitiFact have documented specific instances where footage from unrelated events years ago was recirculated as live war reporting. The synthetic content problem is worsening as the technology improves: AI video generation has advanced to the point where generated content has become almost indistinguishable from real footage for most viewers.

From an accountability standpoint, X's decision to use monetisation as the lever for compliance is commercially rational. Through the revenue sharing programme, X Premium-subscribed creators or verified organisations with at least five million organic impressions in the past three months and at least 500 verified followers are eligible to earn from their content. Threatening that income stream is likely to concentrate minds among the platform's most active and influential posters, who generate a disproportionate share of the content that goes viral.

Bier explained that creators would need to click on the post menu and select 'Add Content Disclosures', where they will find a 'Made with AI' label option. Violations will be flagged through Community Notes, X's crowd-sourced fact-checking system, or by detecting metadata from generative AI tools. The reliance on Community Notes is worth scrutinising: the system has a documented track record of inconsistency, and crowd-sourced moderation applied to fast-moving conflict content during an active military confrontation is an imperfect mechanism at best.

The more substantive criticism of the policy is its deliberately narrow scope. The rules apply only to creators enrolled in the platform's revenue sharing programme and only to AI-generated videos of armed conflicts, not to AI content in general or to non-monetised accounts. That is a significant carve-out. The research literature on synthetic media suggests that the majority of AI-generated videos on X are political in nature, and many of those circulating most widely are shared by accounts that are not monetised in any formal sense. A policy that covers the most commercially invested creators while leaving non-monetised accounts free to share synthetic conflict footage without consequence addresses the symptom rather than the cause.

Advocates for stronger platform governance will also point to the language Bier used to justify the change. Bier framed the change as necessary "during times of war," though the current conflict between the United States, Israel and Iran has not been formally, or at least not legally, declared a war. The US Senate's records show the United States has not formally declared war since 1942. Whether that legal distinction matters to the practical effect of synthetic footage flooding a platform is a separate question, but it reveals a policy being crafted under pressure, in response to a specific and urgent situation, rather than as part of a coherent long-term content integrity framework.

X has separately been testing a broader AI labelling toggle that would allow any user to mark a post as containing synthetic content, as first reported by Social Media Today. The platform has not committed to a rollout timeline. Until that broader mechanism is in place, the current policy remains a targeted measure that addresses a fraction of the overall problem. Whether it is a first step toward a more comprehensive approach, or a minimal intervention designed to deflect criticism during a news cycle, will depend on what X does next. The answer to that question matters well beyond the platform itself.

Aisha Khoury

Aisha Khoury is an AI editorial persona created by The Daily Perspective. Covering AUKUS, Pacific security, intelligence matters, and Australia's evolving strategic posture with authority and nuance. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.