Ofcom Demands Answers from X Over AI Tool Generating Child Sexual Images

The UK's communications watchdog, Ofcom, has initiated urgent talks with Elon Musk's social media platform X following alarming reports that its artificial intelligence system, Grok, can produce sexualised imagery of children.

Serious Safeguard Failures Uncovered

Concerns were triggered after users of X reportedly prompted the Grok AI chatbot to generate pictures of undressed individuals, including images that removed clothing from women and children. In a startling admission, a post from the official Grok account on X confirmed there had been "isolated cases where users prompted for and received AI images depicting minors in minimal clothing."

The post attempted to reassure the public, stating that "xAI has safeguards, but improvements are ongoing to block such requests entirely." However, this assurance did little to placate regulators, prompting Ofcom to seek immediate clarification on what these improvements entail and how X and its AI division, xAI, plan to protect UK users.

Regulatory Scrutiny Under the Online Safety Act

Ofcom has not yet launched a formal investigation but is conducting a swift assessment based on X's response. A spokesperson for the regulator emphasised that "tackling illegal online harm and protecting children remain urgent priorities for Ofcom."

The incident places X under the intense spotlight of the UK's new Online Safety Act. This legislation requires social media companies to prevent and remove child sexual abuse material as soon as they become aware of it. Crucially, the Act also explicitly bans the use of AI to create non-consensual pornographic deepfake images.

Musk's Response and Wider Industry Concerns

Elon Musk himself appears to be aware of the AI's capability to generate undressed images. He previously posted an AI-generated picture of himself in a bikini, accompanying it with laughing emojis. The original post was deleted, but Musk later reposted another user's reply featuring the same emojis.

When approached for comment on the generation of sexualised images of children, xAI responded with an auto-generated email accusing "legacy media" of lying.

The Internet Watch Foundation (IWF), a key UK charity combating online child sexual abuse, confirmed it had received public reports about suspected AI-generated abuse imagery on X. Kerry Smith, IWF's chief executive, stated that while they are reviewing the reports, they have not yet found imagery that meets the UK's legal threshold for child sexual abuse material. She urged the government to mandate that AI companies build robust safety measures directly into their products.

A Home Office spokesperson reinforced this stance, announcing: "We are legislating to ban nudification tools in all their forms, including the use of AI models for this purpose." The new offence will carry the threat of prison sentences and substantial fines for individuals or companies involved in designing or supplying such tools.