
Archived Article — The Daily Perspective is no longer active. This article was published on 12 March 2026 and is preserved as part of the archive.

Technology

Meta's Custom Chip Push Masks Critical Content Moderation Failures

As the company invests billions in proprietary AI infrastructure, its ability to flag harmful content lags dangerously behind

Image: The Register
Key Points
  • Meta revealed four new MTIA chips in partnership with Broadcom, with deployment scaling to multiple gigawatts by 2027
  • The company now ships custom chips every six months but failed to flag AI-generated deepfakes during the 2025 Israel-Iran conflict
  • Meta is passing European digital services taxes to advertisers via location fees of 2-5%, effective July 1
  • The Oversight Board found Meta's detection systems too slow and too reliant on self-disclosure to handle conflict-speed misinformation

Meta has revealed details of four custom chips, built in partnership with Broadcom and designated models 300, 400, 450, and 500 in the Meta Training and Inference Accelerator (MTIA) series, cementing its strategy of reducing dependence on commercial chip suppliers such as Nvidia. The company is investing heavily in vertical integration at a moment when its capital expenditure is projected at between $115 billion and $135 billion for 2026.

Yet the same week Meta announced this ambitious silicon roadmap, its own Oversight Board delivered a withering assessment of the company's ability to moderate the very content it will serve at gigawatt scale. The board examined a case from the 2025 Israel-Iran conflict: a fabricated video, posted by a user in the Philippines and purporting to show damage in Haifa, was reported by six users and had already been debunked by credible news outlets on TikTok, yet Meta took no action.

Building Faster Than It Can Moderate

Meta now has the capacity to ship a new chip roughly every six months, an acceleration that reflects the velocity of its data centre build-out. Yet the Oversight Board found Meta's deepfake moderation relies too heavily on voluntary self-disclosure and is too slow: one fake AI video posted during the Israel-Iran conflict received over 700,000 views before Meta took action.

Meta's specs for its custom silicon
Meta's four new custom chips are designed for ranking systems and generative AI inference.

This is the uncomfortable truth behind Meta's technical progress. The chips enable the company to deploy recommendation and content ranking systems at unprecedented scale. But Meta relies on metadata to determine which content is AI-generated, a method that largely applies only to static images and requires users not to strip metadata before uploading; tools to detect and flag manipulated audio and video remain underdeveloped.
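To see why metadata-based detection is so fragile, consider a toy sketch. The IPTC digital source type vocabulary defines a value, `trainedAlgorithmicMedia`, that compliant generators embed in an image's XMP metadata; a naive checker simply looks for that declaration. This is an illustration of the general weakness the board describes, not Meta's actual pipeline, and the byte-scan approach is a deliberate simplification.

```python
# The IPTC "digitalsourcetype" vocabulary value that provenance-aware
# generators embed in XMP metadata for AI-created content.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the file's embedded metadata declares AI provenance.

    Simplified: a real checker would parse the XMP packet; scanning raw
    bytes is enough to show the failure mode.
    """
    return AI_SOURCE_TYPE in image_bytes

# A generator-tagged file is detectable...
tagged = b"\xff\xd8...<xmp>" + AI_SOURCE_TYPE + b"</xmp>..."
# ...but stripping metadata (or re-encoding the file) defeats the check entirely.
stripped = tagged.replace(AI_SOURCE_TYPE, b"")

print(looks_ai_generated(tagged))    # True
print(looks_ai_generated(stripped))  # False
```

The asymmetry is the point: detection depends on the uploader cooperating, while evasion requires nothing more than a re-save that drops the metadata.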

The Oversight Board called on Meta to create a new, separate set of rules to ensure users can recognise AI-generated content and amend its current policies to ensure timely response to deceptive synthetic media. The board emphasised that voluntary disclosure is not a minor gap; it is the structural reason the system fails precisely when conflict-speed misinformation demands the fastest response.

A Different Kind of Tax Strategy

Meanwhile, Meta is handling European regulation with a different approach. The company will charge advertisers 2-5% location fees to offset digital services taxes imposed by the UK, France, Italy, Spain, Austria, and Turkey, with fees applying from July 1 for image and video ads. Meta previously covered these costs, but said these changes reflect its response to the evolving regulatory landscape and alignment with industry standards.

The shift passes regulatory burden downstream: the policy preserves Meta's margins rather than absorbing the hit, but it could create advertiser friction, particularly for large global brands, where even small percentages scale quickly across millions in spending. Google and Amazon have already introduced similar tax-related advertising surcharges in Europe, making the practice an industry norm.
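A quick sketch shows how "small percentages" compound. The per-country rates below are assumptions for illustration within the article's 2-5% range, not Meta's published schedule:

```python
# Assumed location-fee rates per market (illustrative only; Meta has
# announced a 2-5% range, not this exact schedule).
LOCATION_FEE = {"UK": 0.02, "France": 0.03, "Italy": 0.03,
                "Spain": 0.03, "Austria": 0.05, "Turkey": 0.05}

def surcharge(spend_by_country: dict[str, float]) -> float:
    """Total added fee across an advertiser's European spend."""
    return sum(spend * LOCATION_FEE.get(country, 0.0)
               for country, spend in spend_by_country.items())

# A brand spending $10M split across three markets:
spend = {"UK": 4_000_000, "France": 3_000_000, "Turkey": 3_000_000}
print(f"${surcharge(spend):,.0f}")  # prints $320,000
```

On this hypothetical split, a single-digit fee adds $320,000 to a $10 million budget, which is why large global advertisers feel the change first.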

Infrastructure Ambition and Governance Gaps

There is a compelling efficiency case for Meta's custom silicon. Large technology companies increasingly design their own processors to support AI workloads, with custom chips reducing costs and improving energy efficiency in massive data centres. Building specialised hardware for specific tasks makes economic sense, particularly at Meta's scale.

Yet the Oversight Board's findings expose a governance gap: Meta can iterate on silicon architecture every six months but cannot iterate on content detection systems nearly as fast. Its reliance on human review and third-party fact-checkers created bottlenecks during the Israel-Iran conflict that allowed harmful content to circulate at scale.

The real question is whether Meta's investment priorities reflect the true risks it faces. Billions flowing to custom chips and data centre expansion signal confidence in the company's technical roadmap. But billions more need to flow toward detection, labeling, and moderation systems that can operate at the speed of conflict, not the speed of PR cycles.

Tom Whitfield

Tom Whitfield is an AI editorial persona created by The Daily Perspective, covering AI, cybersecurity, startups, and digital policy with a sharp voice and dry wit that cuts through tech hype. Articles under this persona are generated using artificial intelligence with editorial quality controls.