
Deepfakes in Schools: The New Cyberbullying Crisis

How AI-Driven Bullying Is Harming Student Mental Health and What Schools Must Do

Deepfakes in schools are driving a new wave of cyberbullying, harming student mental health and forcing districts to confront urgent safety gaps.

Deepfakes in schools are no longer an abstract concern for educators. The substitute teacher has been in the classroom for less than ten minutes when she notices it.

A cluster of students huddled around a Chromebook. Laughing too loudly. Phones angled toward the screen. When she walks over, the laughter stops.

What she sees makes her pause.

On the screen is an explicit image of a woman. The face looks familiar. Too familiar. It takes a moment to register why. It's hers, or at least it looks like her. The body is not real. The image was generated with artificial intelligence from a Facebook photo and shared across group chats before the school day even started.

By the time the front office is alerted, the image has already spread well beyond that classroom.

This is no longer a hypothetical scenario. Versions of this moment are unfolding in schools across the country, and increasingly, the targets are students, teachers, and school staff alike.

Deepfakes in schools have become one of the most disruptive and damaging forms of cyberbullying, combining artificial intelligence, sexual exploitation, and peer cruelty in ways that directly threaten student mental health, school safety, and trust in educational systems.

What deepfake bullying looks like in schools today

Deepfake cyberbullying differs from earlier forms of online harassment because it fabricates something that looks like proof. An image. A video. A voice recording. Content that appears real long enough to cause lasting harm.

In school communities, this often includes:

  • AI-generated sexual images created from real student or staff photos

  • Face-swapped videos falsely depicting sexual behavior, drug use, or criminal acts

  • Synthetic audio clips imitating a student’s or educator’s voice

  • Rapid distribution through private group chats, backup accounts, and direct messages

The barrier to entry is low. Many tools are easy to access and require little technical skill. A single yearbook photo or social media post can be enough to generate content that spreads faster than adults can intervene.

Deepfakes in Schools: A real case every district should study

In late 2025, the Associated Press reported on a case in Louisiana that drew national attention. A high school student learned that classmates had used an AI “nudify” app to create and circulate a fake nude image of her.

Although the image was fabricated, the harm was real. When the student confronted those responsible, she was disciplined and ultimately removed from school. The students who created and shared the image faced limited consequences.

The case exposed a critical gap. Many school discipline systems were not designed to address AI-generated abuse. When policies fail to account for synthetic media, responses can unintentionally punish the targeted student rather than protect them, deepening trauma and eroding trust.

The mental health impact is severe and distinct

Cyberbullying has long been linked to anxiety, depression, school avoidance, and increased risk of self-harm. Deepfake bullying intensifies these effects in several ways.

Identity violation
Deepfakes manipulate a person’s likeness in ways that feel invasive and deeply personal. For students, this can trigger intense shame, fear, and a sense that their identity has been taken from them.

Loss of control and permanence
Even when content is removed, students know copies can exist indefinitely. The fear that an image could resurface at any moment creates ongoing stress and hypervigilance.

Inescapable exposure
Unlike harassment that stays online, deepfake bullying follows students into classrooms, hallways, buses, and extracurricular spaces. School becomes the place where the harm is silently replayed.

Institutional betrayal
When adults minimize the incident because the content is “fake,” delay intervention, or discipline students for emotional reactions, students experience a second injury. Trust in the system breaks down.

These responses align with trauma reactions, not misconduct. When schools fail to recognize this, distress is misinterpreted as defiance.

This is also a cybersecurity and safety issue

Deepfake incidents involve unauthorized image manipulation, mass digital distribution, and, in some cases, content that may meet legal definitions related to sexual exploitation of minors.

Schools need clear procedures for:

  • Preserving digital evidence without further spreading harm

  • Coordinating platform reporting and takedown requests

  • Protecting student privacy

  • Knowing when incidents rise to the level of legal involvement

Treating deepfake abuse as “student drama” rather than digital harm exposes districts to serious legal, ethical, and reputational risk.

Deepfakes in Schools: The scale of the problem is accelerating

The growth of AI-generated abuse is no longer anecdotal. According to the National Center for Missing and Exploited Children, reports of AI-generated child sexual abuse images submitted to its CyberTipline increased from 4,700 in 2023 to more than 440,000 in just the first six months of 2025.

That surge reflects both the rapid spread of generative AI tools and how quickly they are being misused in ways that directly affect children and adolescents. Schools are not insulated from this trend. They are often where the consequences surface first.

Laws are changing, but schools cannot wait

Lawmakers across the country are moving to address the misuse of generative AI. According to the National Conference of State Legislatures, by 2025, at least half of U.S. states had enacted legislation addressing the creation and distribution of AI-generated images and audio, including laws targeting simulated child sexual abuse material.

Enforcement is no longer theoretical. Students have faced prosecution in states such as Florida and Pennsylvania. Schools in states including California have expelled students involved in deepfake abuse. In Texas, a fifth-grade teacher was charged after allegedly using AI tools to create child sexual abuse material involving students.

The legal landscape is shifting quickly, and schools that fail to update policies and training risk being caught unprepared when incidents escalate beyond campus.

Why current school policies fall short

Most bullying and harassment policies were written for an earlier digital era. They focus on intent, repetition, and direct communication. Deepfake bullying often does not fit those categories.

The harm can occur:

  • Without repeated actions by the same individual

  • Without direct confrontation

  • Before administrators are even aware content exists

Without explicit policy language addressing AI-generated and manipulated media, responses become inconsistent and reactive.

What districts and states must do now

This issue cannot be managed classroom by classroom. System-level leadership is required.

Districts and states should lead by:

  • Updating policies to explicitly address AI-generated images, video, and audio

  • Establishing response protocols that prioritize victim protection and evidence preservation

  • Providing trauma-informed mental health supports without disciplinary bias

  • Training staff on recognizing and responding to synthetic media abuse

  • Engaging families and community partners before crises occur

Deepfakes in Schools: The responsibility ahead

The rise of deepfake bullying is not simply a student behavior issue. It is a test of leadership.

Artificial intelligence is already embedded in the lives of young people. Waiting for another incident, another lawsuit, or another headline is not caution. It is avoidance.

The responsibility of education systems is clear: ensure schools are places of protection, accountability, and care in a rapidly changing digital world.

The question facing education leaders is no longer whether deepfakes will reach your community.
It is whether your system will be ready to respond when they do.


Copyright © 2014-2025, edCircuit Media – emPowering the Voices of Education.  
