Jenna Ortega revealed that she quit Twitter (now X) after explicit AI-generated images of her as a teenager became rampant.
“I hate AI,” Ortega, known for her roles in Wednesday and Tim Burton’s upcoming Beetlejuice, said of her decision in an interview with The New York Times.
“I mean, AI can be used for incredible things. I think I saw the other day that artificial intelligence can detect breast cancer four years before it progresses. That’s amazing. Let’s just leave it at that. Did I like creating a Twitter account when I was 14 because I had to, and seeing salacious edited content of myself as a child? No. That’s horrifying. That’s corrupting. That’s wrong.”
The dark side of AI and social media
Ortega’s first encounter with the dark side of social media came when she was 12 years old and received unsolicited explicit photos from a follower, marking the beginning of a series of harrowing experiences.
“I used to have a Twitter account, and someone said, ‘Yeah, go ahead and create your own image,’” Ortega recalled. “I ended up deleting it a couple of years ago because after the show, a ton of crazy images and pictures were posted and it was just chaos, so I deleted it.”
In March, Facebook and Instagram ran ads using blurred deepfake nude images of an underage Ortega to promote an AI app.
Pop star Taylor Swift was also targeted by the technology in January, when sexually explicit deepfake images of her proliferated on X, leading to her name temporarily becoming unsearchable on the platform.
Despite the difficulties, Ortega remains grateful for the lessons she’s learned throughout her career. “I have some regrets, my parents have some regrets, but when I look back, I wouldn’t change anything,” she said of beginning her acting career as a child.
Increased regulation and technology companies’ response
Recent advances in generative AI platforms have led to an increase in the creation of non-consensual explicit images. Last year, the Internet Watch Foundation issued an urgent warning after receiving information that US high school boys were creating deepfake nude photos of their female classmates.
Several US states have laws banning deepfakes, but enforcement has been difficult as these images still appear at the top of search results on popular search engines.
“Microsoft has a long-standing commitment to promoting child safety and removing illegal and harmful content from our services,” a Microsoft spokesperson said in a statement. “We removed this content and remain committed to strengthening our defenses and protecting our services from inappropriate content and behavior online.”
Google also addressed the issue, saying, “Google Search has strong protections in place to limit access to abhorrent content that depicts CSAM [child sexual abuse material]. Content that sexually depicts or exploits minors may violate our policies. These systems also work against synthetic CSAM imagery. We proactively detect, remove and report such content in Search, and we also have additional safeguards in place to filter and demote content that sexualizes minors.”