
Archived Article: The Daily Perspective is no longer active. This article was published on 10 March 2026 and is preserved as part of the archive.

Technology

Meta Faces Fresh Pressure to Rethink AI Content Rules

The Oversight Board says labelling alone won't stop manipulated media during armed conflicts

Image: Engadget
Key Points
  • Meta's Oversight Board found the company failed to properly label an AI-generated video that went viral during the Israel-Iran conflict in 2025
  • The board ordered Meta to create a separate rule for AI content, independent from its existing misinformation policy
  • Meta relies too heavily on fact-checkers and self-disclosure to catch manipulated content during crises, the board found
  • The company has 60 days to respond to the board's recommendations, which also include better watermarking and detection technology

Strip away the talking points and what remains is a simple problem: Meta's defences against synthetic media are not equal to the speed and scale of the threat. This is not an abstract concern. It is a live challenge that surfaced when a fake video claiming to show damaged buildings in Haifa circulated on Facebook in mid-2025, gaining more than 700,000 views before Meta finally took action.

The AI-generated video was posted by an account claiming to be a news outlet, though it was actually run by someone in the Philippines. When the video was reported to Meta, the company declined to remove it or to add a "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI.

The company's own Oversight Board reversed that decision and has now issued a series of recommendations that amount to a polite but firm indictment. Consider the substance: Meta is told to create a dedicated rule for AI-generated content, separate from its existing misinformation framework. This is not a small adjustment. It reflects the board's view that current policies are neither fit for purpose nor clear enough to guide consistent enforcement.

The counter-argument deserves serious consideration. Meta might argue that its current labelling system, which includes "AI Info" tags and watermarks, already addresses the problem. The company has invested in detection tools and in February worked with industry partners on common technical standards for AI content detection, including video and audio, and announced "Made with AI" labels based on detection of industry signals or self-disclosure.

But here is the tension that the Oversight Board has identified: the board found that current "AI Info" labels are "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content," especially in times of conflict or crisis, and that "a system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment."

The institutional failure runs deeper still. Meta is "less responsive to outreach and concerns" from fact-checkers and trusted partners, in part due to reduced internal team capacity, and "should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict," according to the board's assessment.

Users and the public deserve better than a system in which detection depends on someone else flagging problematic content. When conflict unfolds in real time, false information can mobilise people and shape perceptions before any laborious review process concludes. Researchers at the international nonprofit Witness noted that "AI-generated content related to the Iran-Israel conflict has taken disinformation to an industrial level," a marked escalation from earlier conflicts, which saw recycled images and fake livestreams.

The board's recommendations address three areas. First, Meta should create a dedicated rule for AI content that includes specifics about how and when users are required to label such content and how the company penalises rule-breakers. Second, the company needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. Third, the board expressed concern about inconsistent implementation of digital watermarks on Meta's own AI-generated content.

The genuine tension here is not between free speech and censorship, but between scale and accountability. Meta has 60 days to formally respond to the board's recommendations. The company's track record suggests it will likely implement some changes; the board reports that Meta has implemented all of its binding decisions and around 75 per cent of its recommendations.

Yet implementation is not the same as fundamental reform, and the board has been here before: it has twice described Meta's manipulated media rules as "incoherent" and has criticised the company for relying on third parties, including fact-checking organisations, to flag problematic content. Reasonable people can disagree about whether labels are sufficient or whether more aggressive removal policies are justified. But no reasonable person can argue that a system which failed to catch and label a fabricated video viewed by more than 700,000 people is working well enough in an era of industrial-scale synthetic media.

The fundamental question is whether Meta can move faster than the technology it profits from.

Daniel Kovac

Daniel Kovac is an AI editorial persona created by The Daily Perspective, providing forensic political analysis with sharp rhetorical questioning and a cross-examination style. As an AI persona, his articles are generated using artificial intelligence with editorial quality controls.