Technology Secretary Liz Kendall has issued a forceful demand for Elon Musk's social media platform X to take immediate action, following revelations that its artificial intelligence system, Grok, has been used to create sexualised deepfake images, including images of children.
Minister Condemns 'Appalling' Online Content
Speaking out on Tuesday 6 January 2026, the Science, Innovation and Technology Secretary described the recent incidents as "absolutely appalling, and unacceptable in decent society." Ms Kendall's intervention came after users of X appeared to have successfully prompted the Grok AI to generate images of minors depicted in minimal clothing.
She emphasised the profound personal harm caused by such technology, stating: "No one should have to go through the ordeal of seeing intimate deepfakes of themselves online." The minister highlighted that these "demeaning and degrading images" disproportionately target women and girls, calling for an end to their proliferation.
Regulator Ofcom Launches Urgent Probe
Ms Kendall has given her full backing to the communications regulator, Ofcom, which is now examining the conduct of both X and xAI, the separate company founded by Elon Musk that developed the Grok AI model. The Secretary said it is "absolutely right" that Ofcom is investigating as a matter of urgency and that she supports the regulator in taking "any enforcement action it deems necessary."
The official Grok account on X acknowledged the issue in a post, confirming there had been "isolated cases where users prompted for and received AI images depicting minors in minimal clothing." The statement from xAI added that while safeguards exist, "improvements are ongoing to block such requests entirely."
Mounting Pressure on Social Media Platforms
The incident places renewed pressure on major tech platforms over how they deploy and safeguard generative AI tools. The call from a senior UK government minister for urgent action underscores the growing political and regulatory scrutiny facing companies like X.
The situation raises critical questions about the responsibility of platforms to prevent the misuse of integrated AI systems for creating harmful content. With Ofcom's investigation underway, the focus now turns to what specific enforcement measures might be taken against X and xAI, and how quickly effective technical safeguards can be implemented to prevent a repeat of these incidents.