San Diego High Schools Rocked as AI Deepfakes Target Teen Girls

Explicit AI-generated deepfakes are no longer sci-fi for local teens. They are turning up inside high schools, circulating in group chats and on social apps, and are often used to harass, humiliate and blackmail girls. Teachers and counselors say they have little training and almost no clear playbook for what to do when a fabricated image or video suddenly surfaces.

As reported by the San Francisco Chronicle, Emma Le and Stephanie Choi, executives at the Center for Gender Equitable AI, write that Title IX placards at Canyon Crest Academy list “computer-generated images of a sexual nature” but never use the words “deepfakes” or “AI.” That kind of wording gap can leave victims unsure whether what happened to them is officially covered. The op-ed also notes that the federal TAKE IT DOWN Act and some district updates have so far had limited effect on campus awareness. Le and Choi say their youth-led STOP campaign aims to deliver posters, trainings and policy language directly into schools.

How Widespread Is It?

A report from the Center for Democracy & Technology found that roughly 40 percent of students said they were aware of a deepfake depicting someone from their school. According to Education Week, an EdWeek Research Center survey of more than 1,100 educators found that 67 percent believed students had been misled by a deepfake and that 56 percent had received no training on how to respond. In other words, kids know these fakes are out there, and most adults in charge are still flying blind.

Patchwork Policies Leave Big Gaps

Policies and technical defenses vary district by district, and the changes often do not show up where students actually look for help. The San Francisco Chronicle reports that the San Dieguito Union High School District updated its student technology rules in December 2024 and adopted a formal AI policy in January 2026. Yet advocates say those protections remain invisible in the places students would actually see them: on posters in school bathrooms, in counseling offices and in code-of-conduct notices. That uneven rollout creates a patchwork where some students are effectively covered and others, sometimes just across town, are not.

Legal Context

The bipartisan TAKE IT DOWN Act, signed into law on May 19, 2025, criminalizes the publication of nonconsensual intimate images, including AI-generated deepfakes, and requires covered platforms to remove such content within 48 hours, as outlined by Congress.gov. Legal analysts and privacy advocates warn that the strict takedown timeline could be difficult for platforms to implement without risking over-removal or drawn-out verification battles, a concern highlighted in reporting from The Associated Press. On paper, the law looks tough; in practice, it could still turn into a slog for victims trying to clean up the mess in real time.

Youth-Led Campaign Pushes Schools to Act

The Center for Gender Equitable AI’s STOP campaign, which launched this month, distributes posters, workshop materials and a model policy framework designed specifically to help K–12 schools identify and respond to explicit deepfakes. STOP’s guidance urges schools to “Say Something,” “Take it Down” and “Offer Support,” and it recommends clear reporting channels and prompt removal procedures, according to the group’s website. The basic idea is to give schools a simple script so staff are not improvising policy in the middle of a crisis.

What Schools Should Do Now

Experts recommend that districts update Title IX notices to mention AI-generated images explicitly, fund targeted teacher professional development and create clear, confidential reporting pathways for students. Training quality is a concern: the EdWeek Research Center found that only a small share of educators rated their deepfake training as good or excellent, underscoring the need for better professional development. Filtering known “nudify” sites on school networks and pairing technical controls with counseling and legal resources can blunt immediate harms while longer-term policy work continues.
