Technology Secretary Liz Kendall has issued a stark demand for Elon Musk's social media platform X to take immediate action, following revelations that its artificial intelligence, Grok, has been used to create sexualised deepfake images of children.
Minister Condemns 'Appalling' Online Content
Speaking on Tuesday 6 January 2026, Ms Kendall described the recent incidents as "absolutely appalling, and unacceptable in decent society". Her intervention came after users of X appeared to prompt the Grok AI tool to generate images of minors in minimal clothing.
The minister firmly backed the communications regulator, Ofcom, which is examining both X and xAI—the firm founded by Mr Musk that created Grok. She stated the regulator has her "full backing to take any enforcement action it deems necessary."
Platform Response and Legal Obligations
An automated response from xAI to a media enquiry simply stated: "Legacy media lies." However, the official Grok account on X offered a more substantive reply, acknowledging "isolated cases" and confirming that while safeguards exist, improvements are ongoing to block such requests entirely.
Ms Kendall emphasised that the issue is not one of restricting freedom of expression. "This is not about restricting freedom of speech but upholding the law," she asserted. She highlighted that the UK's Online Safety Act makes intimate image abuse and cyberflashing priority offences, a category that covers AI-generated content, obliging platforms to prevent and remove such material.
Broader Concerns and Industry Stance
The Centre of Expertise on Child Sexual Abuse (CSA Centre), funded by the Home Office, expressed deep concern over the use of AI to produce child sexual abuse material. Its director, Ian Dean, stressed the need for policymakers and companies to collaborate on safety.
X has stated that it takes action against illegal content, including by removing it and suspending accounts, and Elon Musk has previously warned that anyone using Grok to create illegal material will face consequences. The incident underscores the ongoing tension between rapid AI innovation and the urgent need for robust safeguards, particularly to protect women and girls, who are disproportionately targeted by such abuse.