It’s only been a few days since X (formerly Twitter), the Elon Musk-owned social media site, unveiled the latest version of its artificial intelligence chatbot, Grok. The new update, Grok-2, released on August 13, lets users generate AI images from simple text prompts. The problem is that this model lacks the standard safety guardrails found in other popular AI models. Simply put, you can ask Grok to do almost anything, and it will.
Grok is a generative artificial intelligence model: a system that learns from data and creates new content based on what it has learned. Over the past two years, advances in data processing and computer science have made AI models extremely popular in the tech sector, with both startups and larger companies like Meta developing their own versions of the technology. But in X’s case, its progress has been marked by concerns from users and experts that the AI bot is going too far. Since Grok’s update, X has been flooded with wild user-generated AI content, including some of the most widely shared imagery depicting politicians.
There have been countless examples of suggestive and violent content in the days since, including an AI image of former President Donald Trump caressing a pregnant Vice President Kamala Harris, and an image of Mickey Mouse and Musk standing in a pool of blood while holding AK-47s. When a concerned X user pointed out the AI bot’s unbridled abilities, Musk brushed it off, calling Grok “the most fun AI in the world.” Now, when users flag political content, Musk simply responds with “cool” or a laughing emoji. In one instance, when an X user posted an AI image of Musk supposedly pregnant with Trump’s child, X’s owner replied with still more laughing emojis, writing, “Well, if you live by the sword, you should die by the sword.”
Well, if you live by the sword, you should die by the sword🤣🤣
— Elon Musk (@elonmusk) August 15, 2024
As researchers continue to advance the field of generative AI, the ethical implications of the technology remain a subject of debate and growing concern. During this US presidential election season, experts have warned that AI could be used to spread damaging falsehoods to voters. Musk, in particular, has been heavily criticized for sharing manipulated content. In July, an X user posted a digitally altered video clip of Vice President Harris that used a cloned version of her voice to claim that President Joe Biden is “demented” and that Harris is the “ultimate diversity hire.” Musk shared the post with his 194 million X followers without any disclaimer that it was manipulated, contrary to X’s own guidelines, which prohibit “synthetic, manipulated, or out-of-context media that may mislead or confuse people and cause harm.”
Other generative models have had issues in the past, but the most popular ones, like ChatGPT, enforce much stricter rules about the images users can generate. OpenAI, the developer of that model, does not allow users to generate images of named politicians or celebrities, and its guidelines also prohibit using its AI to develop or depict weapons. X users, by contrast, report that Grok generates images promoting violence and racism, including ISIS flags, politicians wearing Nazi insignia, and dead bodies.
Nikola Banovic, an associate professor of computer science at the University of Michigan, Ann Arbor, told Rolling Stone that Grok’s problem isn’t just the lack of guardrails in its model, but its widespread accessibility as a bot that can be used with little to no training or tutorials.
“There’s no question that there are dangers when these tools are available to the general public. They could be used effectively to spread misinformation and disinformation,” he said. “What’s particularly challenging is that [models] are approaching the ability to generate things that are really realistic and maybe even plausible, but the public may not have the media or AI literacy to spot them as disinformation. We’re now approaching a stage where only by looking more closely at some of these images and understanding their context can the public see that an image is not real.”
Representatives for X did not respond to Rolling Stone’s request for comment. Grok-2 and its smaller counterpart, Grok-2 mini, are currently in beta, available only to paying X Premium subscribers, but the company has announced plans to continue developing the models further.
“This comes back to a broader discussion about the norms and ethics of creating [and deploying] these kinds of models,” Banovic adds. “We rarely hear the question, ‘What is the liability of the AI platform owners who are adopting these kinds of technologies and deploying them to the general public?’ I think this is something we need to have a discussion about.”