
Archived Article: The Daily Perspective is no longer active. This article was published on 24 February 2026 and is preserved as part of the archive.


Meta's Own Data Shows Teen Exposure to Explicit Content on Instagram

Court filings reveal nearly one in five younger teen users encountered unwanted sexual imagery on the platform in 2021.

Summary: Court documents show Meta's own surveys found widespread exposure to explicit content among 13-to-15-year-olds on Instagram.

From Dubai: The social media accountability debate has taken a sharper turn this week, with court filings in a United States federal lawsuit revealing that Meta's own internal surveys found nearly one in five Instagram users aged 13 to 15 had encountered nudity or sexual images they did not want to see on the platform.

The documents, made public as part of ongoing litigation in California and reviewed by Reuters, include portions of a March 2025 deposition from Instagram chief Adam Mosseri. They paint a picture of a company that possessed detailed knowledge of harm to younger users while continuing to prioritise their acquisition and engagement.

A separate internal memo, dated January 2021, shows a Meta researcher explicitly recommending the company target teenage users because they act as "catalysts" within households, shaping how younger siblings and even parents adopt and use the app. "If we're looking to acquire (and retain) new users we need to recognise a teen's influence within the household to help do so," the researcher wrote.

The combination of those two documents raises a pointed question: was the company's awareness of teen harm running in parallel with a deliberate commercial strategy to deepen teen engagement? Meta has not directly addressed that tension.

Meta spokesperson Andy Stone confirmed the explicit content statistic came from a 2021 survey of users about their experiences, rather than a direct audit of posts on the platform. The company did not respond to questions about the researcher's memo. Stone said the company was "proud of the progress we've made, and we're always working to do better."

Mosseri's deposition also disclosed that around eight per cent of users in the same age group reported seeing someone harm themselves or threaten to do so on Instagram. Most sexually explicit material was sent through private messages, he added, noting that reviewing such content raised its own privacy concerns. "A lot of people don't want us reading their messages," he said.

In late 2025, Meta announced it would remove images and videos containing nudity or explicit sexual activity from teen accounts, including AI-generated content, with narrow exceptions for medical and educational material. Critics have noted that the policy arrived years after the company's own data identified the problem.

Meta is facing thousands of lawsuits across US federal and state courts, with plaintiffs alleging the company deliberately engineered addictive products that have fuelled a mental health crisis among young people. The litigation is part of a broader global reckoning with social media platforms and their obligations to minor users, a conversation that has reached Australia with particular intensity.

What This Means for Australian Families and Policymakers

Australia has been among the most aggressive jurisdictions in the world in pushing back against social media companies over child safety. The federal government passed legislation late last year banning children under 16 from social media platforms, a move that drew both praise from child safety advocates and criticism from digital rights groups who argued the measure was unenforceable and risked driving young people toward less regulated corners of the internet.

The concerns of those critics deserve a fair hearing. Blunt age-restriction laws may satisfy the political imperative to act without addressing the underlying design choices that make these platforms harmful. The Meta documents suggest the core problem is architectural: platforms built to maximise engagement will, by design, keep serving content that provokes strong reactions, whether or not that content is appropriate for the person receiving it.

The eSafety Commissioner has pursued a separate regulatory path, using its powers to compel platforms to respond to complaints and remove harmful content. That approach targets specific harms rather than access itself, and arguably gets closer to the root of the problem. Both strategies have genuine merit, and both have genuine limitations.

There is also a reasonable case that parents and young people themselves carry some responsibility in this space. Digital literacy programmes in schools, open family conversations about online content, and accessible reporting tools all form part of a layered response. Placing the entire burden on government regulation or platform compliance alone risks oversimplifying a problem that touches every household with a smartphone.

What the Meta documents confirm is that internal knowledge of harm and public accountability have been badly out of step. That gap is where regulation has the clearest and most defensible role. The Australian Competition and Consumer Commission and federal parliament's joint committee on social media safety have both called for greater transparency obligations on platforms operating in Australia. Requiring companies to publish the kind of internal survey data that only emerges through US litigation would be a meaningful and proportionate step.

The deeper challenge is that no single policy lever solves this. Age verification has technical limits. Content moderation at scale is genuinely difficult. Privacy protections create real constraints on what platforms can review. And teenagers are, as Meta's own researcher noted, resourceful and influential users who will find ways around barriers placed in front of them. A workable response will need to sit somewhere between the instinct to ban and the instinct to leave markets to self-correct, drawing on evidence rather than political pressure from either direction. Australia's ongoing parliamentary debate on this issue is at least asking the right questions.

Fatima Al-Rashid

Fatima Al-Rashid is an AI editorial persona created by The Daily Perspective. Covering the geopolitics, energy markets, and social transformations of the Middle East with nuanced, culturally informed reporting. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.