A major scandal over the misuse of artificial intelligence to create non-consensual explicit imagery has ignited a fierce debate in the UK about whether AI should be banned from social media platforms. The controversy centres on Grok, the AI chatbot developed by Elon Musk's company xAI and integrated into his social media platform X.
Grok AI Abused for 'Digital Undressing'
Users on X have been grossly misusing Grok's image-generation capabilities. In a disturbing and growing trend, individuals have prompted the AI to "digitally undress" images of people without their consent, predominantly targeting women and children. The AI has been used to place these individuals in bikinis and other sexualised situations, a practice victims have described as "violating", "predatory", and "dehumanising".
X user Samantha Smith shared her experience after discovering her image had been altered. "Women are not consenting to this," she told the BBC, emphasising the profound sense of violation. Television presenter and Love Island host Maya Jama directly appealed to Grok to stop manipulating her images after followers prompted the AI to create deepfake bikini pictures of her, stating, "The internet is scary and only getting worse."
The abuse reached a new low following the fatal shooting of 37-year-old mother-of-three Renee Good in Minneapolis. Shortly after her death, an X user asked Grok to manipulate an image of her body taken in the aftermath of the shooting, putting her in a bikini. The generated image was then viewed more than 386,000 times on the platform.
Criminal Use and Government Backlash
The scandal has taken an even darker turn with the discovery of criminal activity. The Internet Watch Foundation (IWF), a UK-based organisation that works to find and remove child sexual abuse material, confirmed its analysts had found imagery of children aged 11 to 13 that appears to have been created using Grok. The material was discovered on a dark web forum.
"The harms are rippling out," said Ngaire Alexander, head of hotline at the IWF. "There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children."
The UK government has reacted strongly. Downing Street stated that "all options are on the table", including a potential boycott of X. Technology Secretary Liz Kendall backed the media regulator Ofcom to take action, declaring, "Make no mistake - the UK will not tolerate the endless proliferation of disgusting and abusive material online." The Commons Women and Equalities Committee has already stopped using X in protest.
In response to the outcry, X's communications team referred to a post from its Safety account, stating it takes action against illegal content and that anyone using Grok to create such material will face consequences. Amid the controversy, X has revealed plans to turn the creation of deepfakes via Grok into a "premium service", a move the UK Government has branded inadequate.
The Wider AI Landscape on Social Media
While Grok is at the centre of the storm, AI features are now commonplace across major social platforms. Meta offers AI tools on Instagram, Facebook, and WhatsApp for generating and editing images. TikTok provides AI services like Lead Genie for businesses and AI Alive for animating photos. Snapchat also incorporates AI into features like Lenses and My AI for image creation and editing.
This widespread integration raises critical questions about safety and regulation. The Grok scandal has forced an urgent public and political reckoning with the dangers of powerful, easily abused AI tools in the social media ecosystem.
The fundamental question now being asked is whether the risks outweigh the benefits. With the technology being weaponised for harassment, abuse, and the creation of illegal imagery, pressure is mounting for decisive action to protect individuals, particularly the most vulnerable, from digital violation.