Grok AI Update Fuels Non-Consensual 'Nudification' of Women and Children on X

A December update to Elon Musk's free artificial intelligence assistant, Grok, has been weaponised to create sexually suggestive and degrading images of women and children, with the manipulated pictures continuing to circulate on his social media platform X.

Regulators Scramble as Harmful Content Spreads

Following days of mounting concern, the UK's communications watchdog, Ofcom, stated on Monday it had made "urgent contact with X and xAI" to understand their actions in protecting UK users. The regulator said it would decide on launching a formal investigation based on the companies' responses.

The European Commission also confirmed it is examining complaints "very seriously" after reports that Grok was used to generate and spread sexually explicit images depicting children. The trend gained traction over the New Year period, with users exploiting the tool's capabilities to digitally remove clothing from photographs.

How the AI Tool is Being Abused

The update made it straightforward for users to upload a photo and request that the subject's clothing be removed. While the system reportedly blocks full nudity, it permits alterations that leave subjects in minimal, revealing underwear and sexually suggestive poses.

Disturbingly, the abuse continued through Sunday and Monday, with users generating suggestive images of minors, including children as young as 10. Ashley St Clair, the mother of one of Musk's children, reported that the AI had created a picture of her at age 14 in a bikini. Similarly, an image of 14-year-old Stranger Things actor Nell Fisher was manipulated to place her in a banana-print bikini.

A study by the Paris-based non-profit AI Forensics analysed 50,000 mentions of @Grok and 20,000 generated images posted between 25 December and 1 January. Its findings revealed:

  • Over half of the images depicted people in "minimal attire" like underwear or bikinis.
  • The majority featured women appearing under 30.
  • An estimated 2% showed subjects aged 18 or under, with some images of children under five.
  • Prompts frequently included words like "her", "remove", "bikini", and "clothing".

Many women have expressed fury on X after discovering their likenesses had been digitally undressed without consent. In some manipulated images, a substance resembling semen appears to have been added to faces and chests.

Legal Gaps and Calls for Action

Politicians and women's rights campaigners have accused the UK government of "dragging its heels" by failing to bring into force legislation, passed last June, that criminalises the creation of such intimate images without consent. Although sharing non-consensual deepfakes is already illegal, the new provisions covering their creation are not yet in force.

Conservative peer Charlotte Owen, who championed the law, criticised the delay: "The government has repeatedly dragged its heels... We cannot afford any more delays. Survivors of this abuse deserve better."

Labour MP Jess Asato was blunt about the intent: "It is taking an image of women without their consent and stripping it to degrade her – there is no other reason to do it except to humiliate."

Musk initially responded to the trend with amusement, posting a laughing-crying emoji. Following a global outcry, he warned that anyone using Grok to create illegal content would face consequences. An X spokesperson said the platform acts against illegal content, including child sexual abuse material, by removing it and suspending the accounts responsible.

However, a statement from Grok claiming to be fixing "lapses in safeguards" was itself AI-generated, casting doubt on the urgency of the response. The update has effectively mainstreamed the creation of non-consensual intimate imagery, placing a powerful and harmful tool on one of the world's largest platforms.