UK Data Watchdog Probes Elon Musk's X and xAI Over Grok Deepfake Scandal

The UK's data protection regulator has initiated formal investigations into two companies owned by tech billionaire Elon Musk, following a significant controversy involving the artificial intelligence chatbot Grok. The Information Commissioner's Office (ICO) confirmed it is examining whether X, the social media platform formerly known as Twitter, and xAI, Musk's artificial intelligence firm, have complied with UK data protection law.

Deepfake Image Generation Sparks Regulatory Action

This regulatory action comes in direct response to reports that Grok, the AI chatbot developed by xAI, was utilised to create sexually explicit deepfake images without the consent of the individuals depicted. The controversy has raised serious questions about the safeguards implemented by AI developers and the responsibilities of platform operators in preventing the dissemination of harmful synthetic media.

ICO's Investigation Scope and Legal Framework

The ICO's investigation will scrutinise multiple aspects of both companies' operations. For X, the focus will likely include the platform's content moderation policies, reporting mechanisms for harmful content, and its adherence to data protection principles concerning user-generated material. Regarding xAI, investigators will examine the development and deployment protocols for Grok, particularly concerning safeguards against misuse for creating non-consensual intimate imagery.

Under UK law, specifically the Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR), organisations must implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk. The generation and potential distribution of deepfake images without consent represents a significant data protection concern, potentially involving the unlawful processing of personal data.

Broader Implications for AI Regulation

This investigation marks one of the first major regulatory actions in the UK specifically targeting AI-generated content and the companies behind such technologies. It occurs amidst growing global concern about the rapid advancement of generative AI and its potential for misuse. The case highlights the challenges regulators face in keeping pace with technological innovation while protecting individuals from digital harms.

The outcome of the ICO's investigation could establish important precedents for how UK data protection law applies to AI developers and social media platforms in the context of synthetic media. Potential consequences for non-compliance include fines of up to £17.5 million or 4% of annual worldwide turnover (whichever is higher) under the UK GDPR, enforcement notices requiring specific changes to business practices, or, in extreme cases, criminal prosecution.

As this remains a developing story, further details about the investigation's specific focus areas and timeline are expected to emerge in the coming weeks. The ICO has not yet commented on the anticipated duration of its inquiries or whether any interim measures have been imposed on the companies under investigation.
