Archived Article: The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Opinion

When the Fringe Goes Viral: The Mainstreaming of Incel Language

Terms born in misogynist forums are now everyday internet slang — and the consequences for young Australians are anything but trivial.

Image: Wired
Key Points
  • Incel-origin terms like 'looksmaxxing' and 'mogging' have accumulated billions of views on TikTok, reaching children and teenagers.
  • Researchers say incel accounts deliberately rebrand as self-improvement creators to evade platform moderation and reach younger audiences.
  • A 2025 survey found more than half of active looksmaxxing community members reported stress or anxiety as a direct consequence of engagement.
  • Linguists warn that even ironic use of this slang normalises the underlying ideology by widening the range of acceptable public discourse.
  • Platform moderation has focused narrowly on banning the word 'incel' while dozens of associated terms continue to trend freely.

Consider the word "mogged." If you have a teenager at home, or spend any time at all on social media, there is a reasonable chance you have heard it. It means to be dominated or outclassed by someone more physically attractive. Simple enough. Harmless enough, on the surface. But strip away the casual delivery and what remains is a concept drawn directly from a misogynist subculture that has been linked, in its most extreme expressions, to real-world violence. The fundamental question is not whether a word sounds dangerous. It is what ideological freight it carries when it travels.

Words like "looksmaxxing," "mogging," "mewing," and "bonesmashing" did not emerge from youth culture or beauty influencers. The practice of looksmaxxing originated on incel message boards in the 2010s, communities that heavily attributed romantic success to the perceived genetic advantages held by tall and muscular men. It later spread beyond its original manosphere roots, entering mainstream culture and becoming a TikTok trend in 2022 and 2023. The journey from closed, anonymous forum to open social feed is the story of how an ideological vocabulary gets laundered into ordinary conversation.

The scale of that spread is not trivial. Hashtags linked to the incel community, such as PSL, pslgod, mogging, looksmaxxing, and mewing, have racked up billions of views on TikTok. The looksmaxxing trend has gone viral in recent years, embraced by children and teenagers, with content ranging from seemingly harmless beauty hacks to drastic and dangerous methods: growth hormones to boost height, extreme cosmetic surgery, and DIY "bonesmashing" techniques aimed at reshaping the face. A subculture that once existed in forums specifically designed to exclude outsiders now shapes the daily scroll of a thirteen-year-old in Brisbane or Bendigo.

A Deliberate Strategy, Not an Accident

The migration of this language is not organic drift. Research published in the journal Crime, Media, Culture by the University of Portsmouth describes a process researchers call "Digital Subcultural Diffusion", in which incel-adjacent accounts systematically repackage their content to avoid moderation. Account holders disguise offensive content by replacing the term "incels" with "sub5s" and by using incel-linked terminology such as "looksmaxxing" and "PSL Gods" to build a following, keeping the material easily accessible while evading moderation on social media platforms.

A 2025 study found incel-adjacent accounts increasingly rebrand as so-called self-improvement creators to avoid content moderation, pushing facial analysis tools and appearance ranking systems rather than explicit misogyny. The ideology remains intact. The packaging looks harmless. This is not clumsy extremism; it is deliberate strategy, and it works precisely because platforms like TikTok and Meta reward engagement, not ideological scrutiny.

Because platforms prioritise engagement and profit over content quality, their algorithms further amplify the spread of incel ideologies. Misogynistic content elicits strong reactions and controversial discussion, which attract more likes, shares, comments and views, so it is more likely to be recommended and circulated regardless of the harm it may cause.

The Irony Defence, and Its Limits

There is a genuinely interesting counter-argument to consider, and it deserves fair hearing. Much of the viral spread of this language occurs through mockery. Many of the memes being shared online actually draw their humour from mocking the way looksmaxxers talk. The argument runs that satirising a subculture strips it of its menace, exposing its absurdities to a broad audience that would otherwise never encounter them. Sunlight, in this framing, is the best disinfectant.

Internet linguist Adam Aleksic, whose 2025 book Algospeak traces the origins of this vocabulary, acknowledges the dynamic but is not reassured by it. "The tradeoff is that you normalise that rhetoric in society," he argues. This is what researchers call widening the Overton window: the range of words and ideas it is deemed acceptable to share in public quietly expands, not through confrontation, but through repetition and humour. Many posts present incel ideas as jokes or motivation, but researchers say this is how extremist language becomes normalised, stripped of context, softened through humour and rewarded by algorithms.

The mental health consequences arriving alongside this normalisation are documented and specific. A 2025 survey of active looksmaxxing community members found that 58.4 per cent were under 18, and more than half reported stress, anxiety, or other mental health concerns as a direct consequence of engagement, with many considering extreme interventions such as surgery to meet prescriptive beauty norms. Paediatricians have also raised concerns: Dr Milan Agrawal, in an interview with BBC News, stated that "looksmaxxing perpetuates unrealistic physical expectations, prompting disordered eating habits among teenage boys."

The Moderation Gap

Platform responses have been, at best, partial. TikTok banned the term "incel" and the phrase "blackpill" from its search function years ago. But these efforts have centred on the inconsistent restriction of just two incel-specific terms, an approach that overlooks the numerous other words and variants in the incel lexicon and has proved both reductive and inefficient at combating the spread of incel-related content. In effect, the platform whack-a-moled the label while leaving the entire supporting vocabulary intact and trending.

Researchers at the University of Portsmouth recommend a more systematic approach, including algorithmic warning labels attached to searches for incel-adjacent terminology, comparable to the financial misinformation warnings TikTok already deploys. That is a measured proposal. Whether the platforms have any commercial incentive to adopt it is a separate and less comfortable question.

In Australia, there is no dedicated federal legislative framework specifically targeting incel radicalisation online, though the eSafety Commissioner has broad powers to compel platforms to remove harmful content. The Australian Parliament passed the Online Safety Act in 2021, but enforcement against ideologically motivated content that stops short of explicit threats remains a practical challenge. The law was written for an internet that expressed harm more obviously than a viral joke about jawlines.

A Question Worth Sitting With

Let us be honest about what is really happening here: a vocabulary designed to encode a worldview in which women are rated objects and unattractive men are oppressed victims has become background noise in the feeds of Australian teenagers. The words travelled first. The ideas follow in their wake, sometimes quickly, sometimes slowly, but always moving in one direction.

This is not a left-right issue; it is a competence issue. The question of how to respond fairly engages competing values that do not resolve neatly. Free expression matters. Parental authority over children's media consumption matters. Platform accountability matters. Heavy-handed government content regulation carries its own serious risks, and anyone who pretends otherwise is not engaging honestly with the trade-offs. A government empowered to ban incel slang is empowered to ban other things too, and history offers reasons to be cautious about that.

What seems harder to dispute is that doing nothing, while researchers document a rise in anxiety, disordered eating, and gender-based hostility among young men, is not a neutral position. Reasonable people can disagree about the right interventions. They should disagree much less about whether the problem is real.

Daniel Kovac

Daniel Kovac is an AI editorial persona created by The Daily Perspective, providing forensic political analysis with sharp rhetorical questioning and a cross-examination style. As an AI persona, its articles are generated using artificial intelligence with editorial quality controls.