
Archived Article — The Daily Perspective is no longer active. This article was published on 11 March 2026 and is preserved as part of the archive.

Education

Students Are Using AI to Mock and Humiliate Teachers. Schools Are Scrambling to Respond

Viral TikTok accounts wielding artificial intelligence to create degrading memes of educators highlight a mounting crisis of AI-fuelled harassment that policy frameworks have yet to catch up with

Key Points
  • Student-run TikTok and Instagram accounts are using AI to create mocking memes of teachers and compare them to controversial public figures
  • Reports show one in five secondary schools has dealt with AI-generated bullying incidents, but most policies predate deepfake technology
  • 71% of teachers report receiving no training on AI risks, leaving educators vulnerable and schools legally exposed
  • Experts warn the harm extends beyond jokes; students creating degrading content about peers and teachers face potential legal consequences

Across social media, a troubling pattern is emerging. Student-operated TikTok and Instagram accounts are using artificial intelligence to generate mocking memes of their teachers, comparing educators to controversial figures and creating content designed to ridicule and humiliate them. What might look like teenage pranks on the surface masks a more serious reality: schools are largely unprepared to address this new form of harassment, and the policies they rely on have not evolved to keep pace with the technology.

The phenomenon has materialised in real incidents. Multiple TikTok videos have circulated online featuring AI-generated depictions of teachers and administrators, with some encouraging students to "slander" the educators shown. In Seminole County, Florida, a middle school warned students and parents about the potential legal consequences of misusing artificial intelligence after reports that students had created and shared fake AI-generated videos of teachers online.

Schools that have experienced such incidents are treating them seriously. The Seminole County district, for example, is offering an internet safety class, taught by local law enforcement, to show parents how to protect children online. Yet this response highlights an uncomfortable truth: incidents are being managed reactively, after the damage has been done and content has already spread online.

The Policy Gap Is Real and Growing

The disconnect between what is happening in schools and what school policies address is substantial. Most schools are unprepared: they have policies covering phone use or cyberbullying, but few include language that acknowledges the existence of AI-generated media or outlines steps for handling it. This matters because it creates legal exposure for school leaders and leaves victims without clear pathways to support.

The scale of the problem is becoming clearer. Thirteen percent of school principals reported incidents of bullying involving AI-generated deepfakes during the 2023 and 2024 school years, and such incidents were significantly more common in middle and high schools: 22 percent of high school principals and 20 percent of middle school principals reported such cases. For secondary schools, this means one in five institutions has had to manage AI-driven bullying.

Teachers themselves are underprepared. A 2024 Education Week survey found that 71 percent of teachers had received no professional development related to AI. Without training, educators struggle to recognise when AI has been weaponised against them or their students, and cannot respond effectively when incidents surface.

Intent, Consequences, and Digital Citizenship

The question that schools must grapple with is whether mocking teachers through AI-generated memes constitutes bullying or remains within acceptable bounds of student satire and humour. The answer matters for discipline decisions. But it matters even more because research shows the psychological impact of AI-generated harassment is not trivial, even when victims know the content is fabricated. Victims of AI-driven harassment often experience anxiety, paranoia, or loss of trust, and even if the content is fake, the harm is very real.

Schools that have addressed incidents have generally taken a two-track approach. Among schools that experienced such incidents, 79 percent took disciplinary actions, 66 percent referred incidents to law enforcement, and 47 percent provided education and training to staff and students on recognising deepfakes and responsibly using AI tools. The last point is critical. Discipline without education misses an opportunity to build genuine digital citizenship.

Some districts have moved beyond punishment alone, treating disruptive incidents as "learning opportunities". One high school principal, for example, convened a panel after students used AI to create inappropriate images, addressing privacy, appropriate AI usage, and the legal and ethical concerns around sharing such content.

Fiscal and Institutional Accountability Questions

Schools face a genuine dilemma: the tools that enable this harassment are free and easy to access, while the institutional resources required to detect, respond to, and prevent such incidents are stretched thin. Schools are already reporting incidents where an AI-generated photo spreads faster than any administrator can respond. Manual oversight simply cannot keep pace with algorithmic distribution.

This creates pressure to adopt monitoring tools, yet many schools lack the policies, technical expertise, and funding to handle AI-based harassment. The cost falls on already-stretched budgets, raising questions about whether schools should be expected to fund advanced surveillance and detection systems to manage student-generated content on external platforms.

Deepfake technology is no longer a future concern; it is a current crisis affecting school communities. Tools such as "nudify" apps are being exploited to generate non-consensual synthetic explicit images, and they are increasingly in use among young people, with devastating impact. Australian schools are beginning to address this more deliberately: some states now treat creating or sharing AI-generated explicit content without consent as a criminal offence, and in South Australia students as young as 16 can be prosecuted for creating or sharing humiliating or degrading deepfakes.

What Reasonable Responses Look Like

The complexity here lies in balancing legitimate concerns. Schools have a duty to protect staff from harassment and students from bullying. They also have an interest in preserving space for adolescent humour and satire, which plays a normal developmental role. The line between these is genuine and worth debating.

But the evidence is clear that current policy frameworks are inadequate. Only 23 percent of schools represented in surveys reported updating their policies to include specific clauses about AI misuse. This gap must be closed. Schools need policies that define AI-generated harassment explicitly, establish reporting mechanisms that do not require victims to become investigators, and commit to supporting affected teachers and students with clarity and speed.

Training matters. Curricula that teach students to critically evaluate digital content, recognise manipulation and understand the ethical implications of AI technologies are vital, and practical exercises, such as distinguishing real media from fake, help students navigate the digital landscape responsibly.

The reality is that these tools are not going away. AI will continue to shape student behaviour, and educators cannot stop students from accessing these tools, but they can prepare for what comes next. That preparation requires updated policies, targeted professional development for teachers, digital literacy instruction for students, and honest conversation about what constitutes responsible use versus harmful misuse. Without these elements, schools will continue to react to crises rather than prevent them.

Grace Okonkwo

Grace Okonkwo is an AI editorial persona created by The Daily Perspective, covering the Australian education system with a community-focused perspective and championing evidence-based policy. Articles under this byline are generated using artificial intelligence with editorial quality controls.