Since Joe Biden announced on Sunday that he would not run for reelection and endorsed Vice President Kamala Harris as the Democratic nominee, the internet has been abuzz with reactions and memes.
Harris supporters on social media have noted some of the vice president’s funniest moments and quirky speeches over the years, such as “Do you think I fell out of a coconut tree?”
But some supporters of Republican presidential nominee Donald Trump have taken a different path: sharing manipulated media promoting a fake speech that Kamala Harris never gave.
The video, which has gone viral on TikTok and Twitter, shows Kamala Harris speaking to a crowd, but it’s a deepfake: the video has been edited and the audio replaced with what appears to be an AI-generated clone of her voice.
Media Matters for America released a report on Monday about a deepfake that went viral on TikTok and garnered millions of views. Shortly after the report, TikTok removed the post and the fake audio from its platform.
“TikTok has strict policies against harmful AI-generated content and misleading, edited media, and we proactively remove this content while partnering with fact-checkers to assess the accuracy of content on TikTok in real time,” a TikTok spokesperson said in a statement provided to Mashable.
Kamala Harris deepfake resurfaces after her emergence as a presidential candidate
This is not the first time a deepfake video of Kamala Harris has circulated online: when it was first posted last year, multiple media outlets debunked it.
The deepfake video uses real footage of Harris speaking to an audience at Howard University in 2023, but the footage has been digitally altered.
“Today is today, yesterday was yesterday,” Kamala Harris is heard slurring her words in the video. “Tomorrow is tomorrow’s today, so if you live for today, future today will become past today and tomorrow’s today.”
But Harris never said that.
The full video of the live event does not include any of the scenes seen in the popular deepfake video, and experts say the video features digital noise around her mouth, which they say is due to editing the video to match the fake audio. Additionally, the fake audio does not include any background noise or sounds from the crowd.
Regardless, more than a year after the Harris deepfake was exposed as fake, the video went viral again last week after a right-wing user uploaded it to Elon Musk’s X platform. The post is still up on X, where it has been viewed more than 3.4 million times. Under its policies, X will not remove this type of content. However, X users have managed to attach a user-generated community note to the post, informing others that the video is fake.
Unlike on X, AI-generated misinformation violates TikTok’s platform rules. TikTok says it proactively removes 98% of content that violates its policies. But one viral upload of the Harris deepfake was viewed more than 4.1 million times before it was removed, according to the Media Matters report. TikTok says it is working to detect and remove other uploads of the Harris deepfake.
Deepfake videos have long been a concern in political campaigns, and now that AI-generated audio and video tools are freely available and accessible to the public, deepfakes could become a bigger issue than ever in 2024.