
L.A. school district probes inappropriate images shared at Fairfax High. More AI abuse?

[Photo: Fairfax High School, where authorities are investigating allegations that inappropriate photos were circulated. (Google Maps)]

Los Angeles school officials are investigating allegations that inappropriate photos were “created and disseminated within the Fairfax High School community,” according to a district statement, in what appears to be the latest alleged misuse of technology by students.

Last week, Laguna Beach High School administrators announced that they had launched an investigation after a student allegedly created and circulated “inappropriate images” of classmates through the use of artificial intelligence.

In January, five Beverly Hills eighth-graders were expelled for their involvement in the creation and sharing of fake nude pictures of classmates. The students superimposed pictures of classmates’ faces onto nude bodies generated by artificial intelligence. In total, 16 eighth-grade students were targeted by the pictures, which were shared through messaging apps, according to the district.


It was not immediately clear if AI was used in the incident at Fairfax High. The L.A. Unified School District did not provide that information in its statement.

“These allegations are taken seriously, do not reflect the values of the Los Angeles Unified community and will result in appropriate disciplinary action if warranted,” the district said in the statement, which went out to parents Tuesday afternoon.

Based on a preliminary investigation, “the images were allegedly created and shared on a third-party messaging app unaffiliated with Los Angeles Unified,” the district stated.

District officials called attention to their efforts to provide “digital citizenship” lessons to students from elementary through high school. In the statement, officials said the nation’s second-largest school system “remains steadfast in providing training on the ethical use of technology — including AI — and is committed to enhancing education around digital citizenship, privacy and safety for all in our school communities.”

In similar investigations, local police departments have been involved. L.A. Unified did not disclose whether Los Angeles police or school police are involved in its investigation, or whether any disciplinary actions have been taken.

Deepfake technology can be used to combine photos of real people with computer-generated nude bodies. Such fake images can be produced using a cellphone.


A 16-year-old high school student in Calabasas said a former friend used AI to generate pornographic images of her and circulated them, KABC-TV reported last month. In January, AI-generated sexually explicit images of Taylor Swift were distributed on social media.

If a California student shares a nude photo of a classmate without consent, the student could conceivably be prosecuted under state laws dealing with child pornography and disorderly conduct, experts say. But these laws would not necessarily apply to an AI-generated deepfake.

Several federal bills have been proposed, including one that would make it illegal to produce and share AI-generated sexually explicit material without the consent of the individuals portrayed. Another bill would allow victims to sue.

In California, lawmakers have proposed extending prohibitions on revenge porn and child porn to computer-generated images.

School districts are trying to get a handle on the technology. This year, the Orange County Department of Education began leading monthly meetings with districts to talk about AI and how to integrate it into the education system.
