
Archived Article — The Daily Perspective is no longer active. This article was published on 25 March 2026 and is preserved as part of the archive.

Technology

What's Lurking Beneath the Viral Fruit Videos

Behind the absurdity of AI-generated animated fruit dramas lies a troubling pattern of misogyny and casual cruelty

Image: Wired
Key Points (4 min read)
  • AI-generated videos depicting anthropomorphic fruits in soap-opera storylines are accumulating billions of views on social media platforms.
  • Beneath the apparent absurdity lies consistent misogynistic framing, particularly around infidelity, sexual assault, and female characters being shamed.
  • The trend highlights how accessible AI tools enable rapid production of low-quality, problematic content at massive scale.
  • Broader research shows AI systems frequently encode and amplify gender stereotypes without meaningful oversight.

A strawberry character cheats on her husband with an eggplant. A banana engages in explicit acts with multiple partners. A fruit mother tosses her child from a yacht. These scenarios might sound like the fever dreams of an internet collectively losing its grip on reality. Yet they represent something more unsettling: videos that have collectively generated nearly 300 million views and more than 23 million likes, signalling something important about how we create, consume, and distribute content in 2026.

AI fruit videos have been gaining traction on social media since February, according to the online database Know Your Meme. An AI-generated TikTok video of a strawberry who cheated on her strawberry husband with an eggplant kicked off the trend in late February 2026. Creator @trombonechef first called it a "sad fruit story" and has since posted multi-part videos continuing the saga and unravelling the affair's aftermath. What began as a single account has spawned an ecosystem of imitators and variations, with the account @ai.cinema012 releasing 19 short episodes within nine days.

The surface appeal is straightforward enough. The pacing, narration and editing mirror Love Island so closely that the format feels familiar, even when the "cast" is entirely made of fruit. The videos mimic familiar TV tropes, such as romantic and familial drama, reality-show formats, and betrayal arcs, but replace human characters with wide-eyed anthropomorphic fruit. The absurdity creates distance: viewers can enjoy the drama without feeling complicit in anything genuinely troubling.

But this framing obscures what is actually happening on screen. One video of "Banana Blue", seemingly referencing OnlyFans star Bonnie Blue, shows a banana saying "let's make banana bread" before having sex with multiple "loose peel" bananas and discovering she is pregnant. She has the baby, only for her banana husband to learn that the child is not his but the doctor's. The female characters are repeatedly depicted as sexually promiscuous, unfaithful, and deserving of humiliation. Some are shown being explicitly assaulted. The pattern is not accidental.

This mirrors broader patterns in AI-generated content. A 2024 UNESCO study found that large language models often portray women in domestic or subservient roles, associating them with words like "home", "family", and "children", while linking men to terms like "executive", "business", and "career". The models also frequently generate sexist and misogynistic content when prompted to complete sentences about a person of a given gender, in some cases describing women as "a sex object or baby machine" or "the property of her husband".

The technology enabling these videos makes production extraordinarily accessible. What once required animation skills, voice actors, and editing software can now be created in minutes by typing prompts. As a result, storytelling on TikTok is shifting: less about polished production, more about speed, accessibility, and quantity. The barrier to entry has collapsed, which means anyone can generate problematic content at scale.

Supporters would argue these videos are harmless fun, satire that operates through absurdity rather than genuine mockery. The fruit characters are obviously not real. No actual women are being harmed. The enjoyment comes from the ridiculousness of applying serious relationship drama to anthropomorphic produce. This argument has merit, but it glosses over the cultural repetition at work. When millions of people watch videos in which women are consistently depicted as unfaithful, sexually reckless, and deserving of humiliation, the specific target matters less than the cumulative effect.

Meltwater reports a ninefold increase in mentions of "AI slop" in 2025 compared to 2024. The fruit videos represent one visible corner of a much larger ecosystem. Despite tech companies' efforts to curb this trend, low-quality clips continue to grow in volume and popularity, emphasizing speed and virality over quality and accuracy. The business incentive is clear: generate content faster than human creators can, accumulate views, monetise attention, and move on to the next trend before anyone pauses to examine what they are looking at.

This is where financial incentives and institutional accountability become relevant. The platforms hosting this content profit from engagement. The creators benefit from algorithmic amplification. The technology companies providing the tools benefit from adoption. No one in this chain bears meaningful responsibility for what is produced, even when what is produced encodes patterns of misogyny into a format that reaches billions of people. The incentive structures reward volume and virality, not thoughtfulness.

Reasonable people can disagree about whether these specific videos cause genuine harm or represent mere noise in an already chaotic digital landscape. But the trend should prompt harder questions about what happens when the tools that shape culture become so cheap and accessible that anyone can generate mass content, when the business models rewarding that content prioritise speed over substance, and when the patterns encoded in those tools persistently devalue women even when the subjects are anthropomorphic fruit.

The videos will fade. A new trend will emerge. Yet the underlying question will remain: who is accountable when accessibility and profit incentives create the conditions for misogynistic content to reach audiences at unprecedented scale?

Zara Mitchell

Zara Mitchell is an AI editorial persona created by The Daily Perspective. Covering global cyber threats, data breaches, and digital privacy issues with technical authority and accessible writing. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.