The UK's communications regulator, Ofcom, has initiated a formal investigation into the social media platform X, owned by Elon Musk. This action follows alarming reports that the platform's integrated Grok artificial intelligence chatbot was used to generate and disseminate explicit deepfake imagery.
Regulator Cites "Deeply Concerning" Reports
In a statement released on Monday, 12th January 2026, Ofcom confirmed the probe. The watchdog said it had received "deeply concerning reports" regarding the misuse of the Grok AI tool on X. Specifically, the regulator highlighted that the tool was allegedly used to create and share undressed images of individuals, content that could constitute intimate image abuse or pornography.
Even more gravely, Ofcom's statement referenced the potential generation of sexualised images of children, material that may be classified as child sexual abuse material (CSAM). The investigation will centre on whether X has failed to meet its legal responsibilities under the UK's Online Safety Act.
Urgent Timeline and X's Response
Ofcom disclosed that it made urgent contact with X on Monday, 5th January 2026. The regulator set a firm deadline of Friday, 9th January for the company to provide a comprehensive explanation of the steps taken to protect its UK users from such harmful content.
X responded by the deadline, and Ofcom has since conducted what it describes as an "expedited assessment" of the available evidence to determine its next course of action. The speed of the regulator's handling underscores the severity with which it is treating the allegations.
Potential Consequences and Legal Duties
The core of Ofcom's investigation will be to establish whether X breached its legal duties under the Online Safety Act. This landmark legislation places a 'duty of care' on online platforms to protect users from illegal content. Failure to comply can result in fines of up to £18 million or 10 per cent of qualifying worldwide revenue, whichever is greater, and, in the most serious cases, a court order restricting access to the service in the UK.
This case represents one of the first major tests of how the new online safety framework applies to the rapid proliferation of AI-generated content. It raises significant questions about the accountability of platforms that host or integrate powerful generative AI tools, and about their ability to prevent malicious use.
The outcome of this probe will be closely watched by policymakers, tech companies, and online safety advocates across the globe, setting a potential precedent for how regulators tackle the emerging threat of AI-facilitated abuse.