Deepfakes in schools are no longer an abstract concern for educators. A substitute teacher has been in the classroom for less than ten minutes when she notices it.
A cluster of students huddled around a Chromebook. Laughing too loudly. Phones angled toward the screen. When she walks over, the laughter stops.
What she sees makes her pause.
On the screen is an explicit image of a woman. The face looks familiar. Too familiar. It takes a moment to register why: it is hers, or at least it looks like hers. The body is not real. The image was generated with artificial intelligence, stitched together from a photo pulled from her Facebook page and shared across group chats before the school day even started.
By the time the front office is alerted, the image has already spread well beyond that classroom.
This is no longer a hypothetical scenario. Versions of this moment are unfolding in schools across the country, and increasingly, the targets are students, teachers, and school staff alike.
Deepfakes in schools have become one of the most disruptive and damaging forms of cyberbullying, combining artificial intelligence, sexual exploitation, and peer cruelty in ways that directly threaten student mental health, school safety, and trust in educational systems.
Deepfake cyberbullying differs from earlier forms of online harassment because it fabricates something that looks like proof. An image. A video. A voice recording. Content that appears real long enough to cause lasting harm.
In school communities, this often includes:
AI-generated sexual images created from real student or staff photos
Face-swapped videos falsely depicting sexual behavior, drug use, or criminal acts
Synthetic audio clips imitating a student’s or educator’s voice
Rapid distribution through private group chats, backup accounts, and direct messages
The barrier to entry is low. Many tools are easy to access and require little technical skill. A single yearbook photo or social media post can be enough to generate content that spreads faster than adults can intervene.
In late 2025, the Associated Press reported on a case in Louisiana that drew national attention. A high school student learned that classmates had used an AI “nudify” app to create and circulate a fake nude image of her.
Although the image was fabricated, the harm was real. When the student confronted those responsible, she was disciplined and ultimately removed from school. The students who created and shared the image faced limited consequences.
The case exposed a critical gap. Many school discipline systems were not designed to address AI-generated abuse. When policies fail to account for synthetic media, responses can unintentionally punish the targeted student rather than protect them, deepening trauma and eroding trust.
Cyberbullying has long been linked to anxiety, depression, school avoidance, and increased risk of self-harm. Deepfake bullying intensifies these effects in several ways.
Identity violation
Deepfakes manipulate a person’s likeness in ways that feel invasive and deeply personal. For students, this can trigger intense shame, fear, and a sense that their identity has been taken from them.
Loss of control and permanence
Even when content is removed, students know copies can exist indefinitely. The fear that an image could resurface at any moment creates ongoing stress and hypervigilance.
Inescapable exposure
Unlike harassment that stays online, deepfake bullying follows students into classrooms, hallways, buses, and extracurricular spaces. School becomes the place where the harm is silently replayed.
Institutional betrayal
When adults minimize the incident because the content is “fake,” delay intervention, or discipline students for emotional reactions, students experience a second injury. Trust in the system breaks down.
Targeted students’ reactions in these situations are trauma responses, not misconduct. When schools fail to recognize this, distress is misinterpreted as defiance.
Deepfake incidents involve unauthorized image manipulation, mass digital distribution, and, in some cases, content that may meet legal definitions related to sexual exploitation of minors.
Schools need clear procedures for:
Preserving digital evidence without further spreading harm
Coordinating platform reporting and takedown requests
Protecting student privacy
Knowing when incidents rise to the level of legal involvement
Treating deepfake abuse as “student drama” rather than digital harm exposes districts to serious legal, ethical, and reputational risk.
The growth of AI-generated abuse is no longer anecdotal. According to the National Center for Missing and Exploited Children, reports of AI-generated child sexual abuse images submitted to its CyberTipline increased from 4,700 in 2023 to more than 440,000 in just the first six months of 2025.
That surge reflects both the rapid spread of generative AI tools and how quickly they are being misused in ways that directly affect children and adolescents. Schools are not insulated from this trend. They are often where the consequences surface first.
Lawmakers across the country are moving to address the misuse of generative AI. According to the National Conference of State Legislatures, by 2025, at least half of U.S. states had enacted legislation addressing the creation and distribution of AI-generated images and audio, including laws targeting simulated child sexual abuse material.
Enforcement is no longer theoretical. Students have faced prosecution in states such as Florida and Pennsylvania. Schools in states including California have expelled students involved in deepfake abuse. In Texas, a fifth-grade teacher was charged after allegedly using AI tools to create child sexual abuse material involving students.
The legal landscape is shifting quickly, and schools that fail to update policies and training risk being caught unprepared when incidents escalate beyond campus.
Most bullying and harassment policies were written for an earlier digital era. They focus on intent, repetition, and direct communication. Deepfake bullying often does not fit those categories.
The harm can occur:
Without repeated actions by the same individual
Without direct confrontation
Before administrators are even aware content exists
Without explicit policy language addressing AI-generated and manipulated media, responses become inconsistent and reactive.
This issue cannot be managed classroom by classroom. System-level leadership is required.
Districts and states should lead by:
Updating policies to explicitly address AI-generated images, video, and audio
Establishing response protocols that prioritize victim protection and evidence preservation
Providing trauma-informed mental health supports without disciplinary bias
Training staff on recognizing and responding to synthetic media abuse
Engaging families and community partners before crises occur
The rise of deepfake bullying is not simply a student behavior issue. It is a test of leadership.
Artificial intelligence is already embedded in the lives of young people. Waiting for another incident, another lawsuit, or another headline is not caution. It is avoidance.
The responsibility of education systems is clear: ensure schools are places of protection, accountability, and care in a rapidly changing digital world.
The question facing education leaders is no longer whether deepfakes will reach your community.
It is whether your system will be ready to respond when they do.