Elon Musk’s social media platform, X, is facing a series of privacy complaints after it used European Union users’ data to train AI models without obtaining consent. This issue came to light when a vigilant social media user noticed a setting indicating that X had started processing regional user data for its Grok AI chatbot.
The Irish Data Protection Commission (DPC), which oversees X’s compliance with the EU’s General Data Protection Regulation (GDPR), expressed “surprise” at this discovery. The GDPR mandates that any use of personal data must have a valid legal basis, and violations can result in fines of up to 4% of global annual turnover. The nine complaints, filed with data protection authorities in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland, and Spain, allege that X failed to meet this requirement by using Europeans’ posts for AI training without their consent.
Max Schrems, chairman of the privacy rights nonprofit noyb, which is supporting the complaints, commented on the situation, saying, “We have seen countless instances of inefficient and partial enforcement by the DPC in the past years. We want to ensure that Twitter fully complies with EU law, which — at a bare minimum — requires asking users for consent in this case.”
The DPC has already initiated some actions against X’s data processing for AI model training, including legal action in the Irish High Court to seek an injunction to stop the data use. However, noyb argues that the DPC’s efforts are inadequate, highlighting that X users have no way to request the deletion of “already ingested data.” In response, noyb has filed GDPR complaints in Ireland and eight other countries.
The complaints assert that X lacks a valid basis for using the data of approximately 60 million EU users for AI training without their consent. The platform appears to be relying on “legitimate interest” as its legal basis for the AI-related processing, but privacy experts argue that consent is necessary.
“Companies that interact directly with users simply need to show them a yes/no prompt before using their data. They do this regularly for lots of other things, so it would definitely be possible for AI training as well,” Schrems suggested.
In June, Meta halted a similar plan to process user data for AI training after noyb supported some GDPR complaints and regulators intervened.
Because X began using user data for AI training without notifying anyone, the activity went unnoticed for several weeks. According to the DPC, X was processing Europeans’ data for AI model training between May 7 and August 1.
Users of X gained the ability to opt out of the processing through a setting added to the web version of the platform, seemingly in late July. Prior to this, there was no way to block the processing — and users can hardly opt out of data collection they do not know is happening.
The GDPR is designed to protect Europeans from unexpected uses of their information that could affect their rights and freedoms. In challenging X’s legal basis, noyb references a judgment by the Court of Justice of the European Union last summer, which related to a complaint against Meta’s use of data for ad targeting. The judges ruled that a legitimate interest legal basis was not valid for that use case and that user consent should be obtained.
Noyb also points out that providers of generative AI systems often claim they are unable to comply with other core GDPR requirements, such as the right to be forgotten or the right to obtain a copy of personal data. These concerns are also part of ongoing GDPR complaints against OpenAI’s ChatGPT.