The rise of deepfake cyberbullying poses a growing problem for schools

Schools are facing a growing problem of students using artificial intelligence to transform innocent images of classmates into sexually explicit deepfakes.

The fallout from the spread of the manipulated photos and videos can create a nightmare for the victims.

The challenge for schools was highlighted this fall when AI-generated nude images swept through a Louisiana middle school. Two boys ultimately were charged, but not before one of the victims was expelled for starting a fight with a boy she accused of creating the images of her and her friends.

"While the ability to alter images has been available for decades, the rise of A.I. has made it easier for anyone to alter or create such images with little to no training or experience," Lafourche Parish Sheriff Craig Webre said in a news release. "This incident highlights a serious concern that all parents should address with their children."

Here are key takeaways on the rise of AI-generated nude images and how schools are responding.

More states pass laws to address deepfakes

The prosecution stemming from the Louisiana middle school deepfakes is believed to be the first under the state's new law, said Republican state Sen. Patrick Connick, who authored the legislation.

The law is one of many across the country taking aim at deepfakes. In 2025, at least half the states passed laws addressing the use of generative AI to create seemingly realistic, but fabricated, images and sounds, according to the National Conference of State Legislatures. Some of the laws address simulated child sexual abuse material.

Students also have been prosecuted in Florida and expelled in other states. One fifth grade teacher in Texas also was charged with using AI to create child pornography of his students.

Deepfakes become easier to create as technology evolves

Deepfakes started as a way to humiliate political opponents and young starlets. Until the past few years, people needed some technical skills to make them realistic, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue.

"Now, you can do it on an app, you can download it on social media, and you don't have to have any technical expertise whatsoever," he said.

He described the scope of the problem as staggering. The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its cyber tipline soared from 4,700 in 2023 to 440,000 in just the first six months of 2025.

Experts fear schools aren鈥檛 doing enough

Sameer Hinduja, the co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and get better at explaining them. That way, he said, "students don't think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity."

He said many parents assume that schools are addressing the issue when they aren't.

"So many of them are just so unaware and so ignorant," said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University. "We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn't happening amongst their youth."

Trauma from AI deepfakes can be particularly harmful

AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said.

Many victims become depressed and anxious, he said.

"They literally shut down because it makes it feel like, you know, there's no way they can even prove that this is not real, because it does look 100% real," he said.

Parents are encouraged to talk to students

Parents can start the conversation by casually asking their kids if they've seen any funny fake videos online, Alexander said.

Take a moment to laugh at some of them, like Bigfoot chasing after hikers, he said. From there, parents can ask their kids, "Have you thought about what it would be like if you were in this video, even the funny one?" And then parents can ask if a classmate has made a fake video, even an innocuous one.

"Based on the numbers, I guarantee they'll say that they know someone," he said.

If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, the founder and CEO of an organization that educates people on responsible social media use and has helped schools develop policies. She said many kids fear their parents will overreact or take their phones away.

She uses the acronym SHIELD as a roadmap for how to respond. The "S" stands for "stop" and don't forward. "H" is for "huddle" with a trusted adult. The "I" is for "inform" any social media platforms on which the image is posted. "E" is a cue to collect "evidence," like who is spreading the image, but not to download anything. The "L" is for "limit" social media access. The "D" is a reminder to "direct" victims to help.

"The fact that that acronym is six steps I think shows that this issue is really complicated," she said.

___

The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

Copyright © 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
