Elon Musk’s X targeted with nine privacy complaints after grabbing EU users’ data for training Grok
Elon Musk’s X, the platform formerly known as Twitter, is facing mounting scrutiny in Europe over its data practices. Nine privacy complaints have been filed against the platform, alleging that it illegally collected EU users’ data to train its AI chatbot, Grok.
The complaints, filed with European data protection authorities, cite concerns over the lack of transparency and consent regarding the use of user data for AI training. They claim that X’s privacy policy, which was updated in June, fails to adequately inform users about the extent to which their data is being used for AI purposes.
One of the complaints, filed in Austria, highlights the “substantial risks” posed by using personal data for AI development without explicit user consent, arguing that the practice violates the General Data Protection Regulation (GDPR), the EU’s strict data privacy law.
“The complaints are about the lack of transparency and the absence of clear consent from users,” said Max Schrems, founder of the privacy advocacy group Noyb, which filed several of the complaints. “X seems to be using user data for AI training without any real understanding or control for the users involved.”
This development comes amid growing concern over the ethical and legal implications of AI development and the use of personal data. The EU is actively working on legislation to regulate AI, with a particular focus on transparency and data privacy.
X has not yet publicly responded to the complaints, though the platform has previously defended its data practices, stating that it uses only publicly available data for AI training. The complaints allege that this claim is misleading, as X collects vast amounts of personal data from its users, including their posts, likes, and direct messages.
European data protection authorities are now investigating the complaints. If X is found to be in violation of the GDPR, it could face hefty fines and be forced to change its data practices. The case could have significant implications for the future of AI development and the use of user data in Europe.