A poster plastered on a London street in January 2026 delivered a stark message: delete your X account. The call to action was a direct response to the social media platform's AI chatbot, Grok, and its ability to generate non-consensual sexual imagery, a capability that has triggered a major scandal and forced government intervention.
The Scale of the AI-Generated Abuse
Between June 2025 and January 2026, AI governance expert Nana Nwachukwu documented 565 separate instances of users asking Grok to create intimate imagery without consent. Of these, 389 requests were made in a single day, underscoring both the scale and the ease of the abuse. The typical scenario is grimly familiar: a woman posts an innocent photograph, perhaps in traditional dress such as a sari, and within minutes other users tag Grok in the replies, prompting it to digitally undress her.
Following a significant public backlash, X announced last Friday that it would restrict Grok's image generation feature to paying subscribers only. Reports also indicate the bot now refuses prompts to generate images of women in bikinis, though it reportedly still complies with similar requests for men. For many, this response is a case of too little, too late.
Government Steps In, But Critics See Gaps
Technology Secretary Liz Kendall has condemned X's move, stating it "does not go anywhere near far enough." In a decisive step, she announced that creating non-consensual intimate images will become a criminal offence this week, with the supply of so-called 'nudification' apps also being outlawed.
However, critics argue the government's approach has a fundamental flaw. Grok is not a dedicated nudification tool; it is a general-purpose AI with weak safeguards. Kendall's law criminalises users and the makers of specific 'nudification' software, but it does not legally compel platforms like X to implement proactive systems that prevent the harm from occurring in the first place. The law, as it stands, waits for the damage to be done before punishing perpetrators, leaving a trail of digital victims in its wake.
Shadow Technology Secretary Julia Lopez suggested the government was overreacting, describing the imagery as a modern version of an old problem. That view is strongly contested: the scale, accessibility, and speed of AI-generated abuse are unprecedented. Where convincing manipulation once demanded skilled Photoshop work, any user can now type a short text prompt and have an AI generate and publish criminal material to a vast audience instantly.
A Global Regulatory Dilemma
Another profound challenge lies beyond UK borders. While the UK pushes for AI safety, the United States under the Trump administration is pursuing a "minimally burdensome" policy to enhance its AI dominance. This creates little incentive for American companies like X, OpenAI, or Anthropic to rigorously police misuse. As Nwachukwu notes, "Kendall can criminalise users in the UK, she can threaten to ban X entirely. But she cannot stop Grok from being programmed in San Francisco."
This transatlantic divide underscores the inadequacy of national laws in regulating a transnational technology. Without robust international cooperation, harmful content generated by US-based systems will continue to proliferate globally, leaving UK regulators with limited reach.
The Call for Preventative, Not Reactive, Measures
The core argument from AI accountability experts is that trust in big tech is misplaced. Regulation must shift from a model of "remove harm when you find it" to one that legally requires companies to "prove that your system prevents harm." This would involve mandatory input filtering, independent audits, and licensing conditions that bake safety into the technical design.
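What "prove that your system prevents harm" could mean in practice is easiest to see in code. The sketch below is purely illustrative and assumes nothing about X's actual systems: the `is_request_allowed` function and its keyword patterns are hypothetical stand-ins for the trained classifiers, provenance checks, and audit trails a real platform would be expected to run before any image is generated.

```python
import re

# Hypothetical patterns associated with non-consensual imagery requests.
# A production filter would rely on trained classifiers and human review,
# not a keyword list; this only illustrates screening inputs *before*
# generation rather than removing harmful outputs afterwards.
BLOCKED_PATTERNS = [
    r"\b(remove|take off)\s+(her|his|their)\s+clothes\b",
    r"\bundress",
    r"\bnudif(y|ication)\b",
]

def is_request_allowed(prompt: str, edits_real_person: bool) -> bool:
    """Return False when a generation request should be refused up front."""
    lowered = prompt.lower()
    if any(re.search(p, lowered) for p in BLOCKED_PATTERNS):
        return False
    # Requests that edit a real person's photo get a stricter rule:
    # refuse sexualised edits outright instead of enumerating phrasings.
    if edits_real_person and re.search(r"\b(bikini|lingerie|nude|intimate)\b", lowered):
        return False
    return True

if __name__ == "__main__":
    print(is_request_allowed("draw a castle at sunset", edits_real_person=False))        # True
    print(is_request_allowed("undress the woman in this photo", edits_real_person=True)) # False
```

The point of such a design is that a refusal happens before anything exists to take down, which is precisely the shift from reactive to preventative regulation that campaigners are demanding.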
For the countless women and minors whose images have already been abused, post-facto regulation offers scant comfort. The material remains online, potentially saved and reshared across other platforms. The new criminal offences are a necessary step, but as the poster in London signifies, public confidence is eroding. The lasting solution requires preventative, legally enforced technical standards that stop abuse before it is ever generated.