Jenna Ortega revealed that she deleted her Twitter (now known as X) account after receiving explicit images of herself as a child generated by artificial intelligence.
What happened: Ortega, 21, recounted her experience during a conversation on The New York Times podcast “The Interview.” She expressed her disdain for AI and said she was sent AI-generated images of herself as a child on Twitter.
“I hate AI,” Ortega said when asked about the technology, which can create realistic images and videos, including explicit deepfakes. She described the experience of being targeted as horrifying and corrupting.
Ortega, known for her roles in “Stuck in the Middle” and “Jane the Virgin,” said she was encouraged to join Twitter to build her image, but when she was 12, she received her first direct message – an unsolicited sexual photo.
She deleted the app “about two or three years ago” after an influx of such images, which she described as disturbing and disgusting.
“Did I, at 14, create a Twitter account because I had to, and enjoy looking at salacious edited content of myself as a child? No. It’s horrifying. It’s corrupting. It’s wrong,” she said in the interview.
In response to cases like these, Rep. Joseph Morelle (D-NY) introduced the “Preventing Deepfakes of Intimate Images Act” in 2023, which would criminalize the nonconsensual sharing of digitally altered sexual images. The bill is currently before the House Judiciary Committee.
Why it matters: The rise in explicit AI-generated content is prompting action from big tech companies and policymakers. In April, Meta Platforms Inc. followed the Oversight Board’s recommendation to expand its policy for labeling AI-generated content across Facebook, Instagram and Threads to cover photos and audio in addition to videos.
In May, OpenAI introduced new tools to detect images created by its DALL-E generator, including watermarking techniques to better identify AI-generated content, alongside Model Spec, a document outlining how its future AI models should behave.
By August, Google had taken steps to combat explicit deepfake content appearing in search results: The tech giant introduced new online safety features to simplify the removal of explicit deepfakes and prevent them from ranking high in search results.
Not all AI developments have been positive, however. In mid-August, Elon Musk’s Grok AI chatbot drew criticism for allowing users to generate disturbing images, including inappropriate depictions of politicians and celebrities. The controversy highlighted the ongoing challenges of regulating AI-generated content.
Ortega is not the first person to face such issues: Earlier this year, Taylor Swift was targeted with similar AI-generated explicit content on Twitter, which temporarily blocked searches for her name.
Last week, former President Donald Trump caused controversy by sharing a doctored image on his Truth Social platform that falsely suggested pop star Taylor Swift and her fans were endorsing his presidential campaign.
Captioned “I accept!”, Trump’s post featured an AI-generated election poster of Swift wearing a patriotic top hat, with the message, “Taylor wants you to vote for Donald Trump.”
Image courtesy of Shutterstock
This story was produced with Benzinga Neuro and edited by Kaustubh Bagalkote.
Market news and data provided by Benzinga API
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.