Elon Musk's social media platform X has moved to block users who do not pay for its premium subscription from accessing the image-generation feature of its Grok AI chatbot. The decision, implemented on Friday, comes amid a growing scandal and government condemnation over the tool's widespread use to create sexually explicit deepfake images of women and children without their consent.
From 'Meme-y Trollish' Trend to Widespread Abuse
The crisis escalated rapidly over the Christmas period, as what has been described as a "meme-y trollish" trend transformed into a tool for systematic digital harassment and abuse. Analysis from the content-detection firm Copyleaks indicated that by 31 December, users on X were collectively generating approximately one non-consensual sexualised image every minute. These were often created in direct reply to women who had posted innocent, safe-for-work photographs of themselves on the platform.
Research conducted by PhD researcher Nana Nwachukwu at Trinity College Dublin found that nearly three-quarters of the posts analysed were requests asking the AI to strip clothing from, or add it to, images of real women and minors. The platform's own culture reportedly exacerbated the spread, with users coaching each other on effective prompts and sharing results, turning the creation of abusive imagery into a communal activity.
The UK-based Internet Watch Foundation confirmed its analysts had discovered criminal imagery of children, apparently created using Grok, featuring victims aged between 11 and 13 years old.
Political Outcry and Musk's Defensive Response
The UK government reacted with fury. Technology Secretary Liz Kendall stated the country "cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." Deputy Prime Minister David Lammy revealed that even US Vice-President JD Vance, typically a critic of European tech regulation, agreed the situation was "entirely unacceptable." Ministers have not ruled out a potential ban on X in the UK.
In response to the backlash, X made its image-generation tool exclusive to paying subscribers. A Downing Street spokesperson criticised this move as inadequate, stating it merely turned "an AI feature that allows the creation of unlawful images into a premium service." Elon Musk initially responded to media inquiries with the glib dismissal "Legacy Media Lies," before asserting that anyone using Grok to make illegal content would face consequences.
However, reports indicated that the separate Grok mobile app, which does not share images publicly, continued to allow the generation of sexualised imagery of children even after the web restriction.
The Regulatory Challenge: Can Law Keep Pace with AI?
The scandal highlights the profound difficulty regulators face in controlling fast-evolving artificial intelligence technology. In the UK, the communications regulator Ofcom possesses powers under the Online Safety Act to seek court orders to block websites or impose fines of up to 10% of a company's global turnover. Prime Minister Keir Starmer has pledged that Ofcom has the government's "full support to take action."
Yet experts point to a critical issue of speed. At the time of reporting, Ofcom had not even announced a formal investigation, while the abusive images had already spread widely. "You can see how the legislation, even when it's in place, is being outstripped by not only the development of this AI technology, but the impact of it," noted Dan Milmo, the Guardian's Global Technology Editor.
The UK government promised new laws in December to explicitly ban "nudification" tools, but the timeline remains unclear. Meanwhile, Indonesia demonstrated that swift action is possible, blocking access to Grok entirely on Saturday.
The episode underscores a collision between Silicon Valley's "move fast and break things" ethos and the real-world harm caused when powerful technologies are released with weak safeguards. For the women and children targeted, the damage is immediate and profound, raising urgent questions about whether political and regulatory systems can ever move as fast as the technology they seek to govern.