What happened? Elon Musk’s X found itself in hot water this week after it emerged that the platform had been using public user data by default to train its AI chatbot, Grok — a move that has sparked backlash from both users and regulators over data privacy concerns.
The controversy erupted when some eagle-eyed X users noticed a new option buried in the platform’s privacy settings that allowed them to opt out of having their data used to train Grok. Outrage quickly spread across the platform as users realized that their posts and interactions were already being used for AI training purposes, likely without their knowledge or consent.
X has remained largely mum on the details: In a brief tweet, the platform’s safety account confirmed that “all X users can control whether their public posts are used to train Grok,” but did not clarify when this option was introduced or when the data collection actually began.
All X users can control whether their public posts can be used to train the AI search assistant Grok. This option is available in addition to the existing control over whether Grok-related interactions, inputs, and results are utilized. This setting…
– Safety (@Safety) July 26, 2024
The privacy page for the web version of X states that “X’s posts, and user interactions, inputs, and results on Grok” may be used by both X and its AI service provider, xAI, for “training and fine-tuning purposes.”
Everything you do on Twitter is being used to train the generative AI Grok, without your consent. It’s on by default and embedded in your settings without your knowledge.
Just to be safe, turn this off under “Data Sharing and Personalization” and clear your history. pic.twitter.com/Ix9C8PHxqZ
– Theo (@tprstly) July 26, 2024
Though it’s hidden in the fine print, X’s privacy policy has technically allowed this kind of data use since at least September 2023.
Aside from the user backlash, regulators have also taken issue with X’s sneaky behaviour. As reported by The Guardian, the UK’s Information Commissioner’s Office (ICO) said it was “enquiring” into X. Meanwhile, X’s lead EU regulator, the Irish Data Protection Commission (DPC), said it was “surprised” by the default setting, given that it had already been in discussions with X about the platform’s data processing for AI tools like Grok.
Under GDPR rules, companies cannot use pre-selected default settings to assume consent to potentially invasive data processing.
AI chatbots are no strangers to shady data practices, and Grok is no exception. To churn out human-like responses, these models need to crunch loads of data — from books to websites to social posts — even if that means treading into grey areas of copyright. In April, a report emerged showing that OpenAI had trained its large language models by transcribing more than 1 million hours of YouTube videos; Google reportedly did the same for its Gemini model.
Anyway, here’s how to opt out of this mess via the platform’s settings (on the website, but not on the mobile app).
1. Click “More” in the navigation panel and select “Settings and Privacy.”
2. Click “Privacy and Safety.”
3. Scroll down to the “Data Sharing and Personalization” section and select “Grok.”
4. Uncheck the box “Allow Grok posts, interactions, inputs, and results to be used for training and fine-tuning.”
Alternatively, you can visit the Grok settings page directly. Currently, mobile app users can’t opt out, but X says this setting will be added soon.