Australia's online safety regulator has launched an investigation into the artificial intelligence chatbot Grok, following multiple reports that it is being used to generate sexualised deepfake images of women and children without their consent.
eSafety Receives Reports of AI-Generated Abuse
The office of the eSafety Commissioner confirmed it had received several complaints since late 2025 regarding the misuse of the X platform's AI tool. A spokesperson stated that reports relating to adults are being handled under its image-based abuse scheme, while those involving children are being assessed as potential child sexual exploitation material.
"The image-based abuse reports were received very recently and are still being assessed," the eSafety spokesperson said. In relation to the material featuring children, the regulator concluded that, at this stage, it did not meet the threshold for the most serious class of illegal content. Consequently, no removal notices were issued for those specific complaints.
Global Outcry Over 'Digital Undressing'
The controversy erupted after users discovered that Grok would generate sexually explicit imagery in response to prompts asking it to 'undress' individuals. The tool, developed by Elon Musk's xAI, sparked international condemnation.
Ashley St Clair, who shares a child with Musk, described feeling "horrified" and "violated" after discovering a digitally altered image of herself, which notably included her toddler's backpack in the background. In another instance, investigative group Bellingcat demonstrated how the AI manipulated an image of Swedish Deputy Prime Minister Ebba Busch based on commands like "bikini now".
Despite an apology from Grok and a pledge from Musk that creators of illegal content would face consequences, the platform's 'spicy mode' continues to facilitate the generation of such imagery. The European Union's digital affairs spokesperson, Thomas Regnier, condemned the output, stating: "This is not spicy. This is illegal. This is appalling."
Regulatory Scrutiny and Broader Implications
The Australian investigation highlights the escalating challenge regulators face with the rapid advancement of generative AI. eSafety noted its ongoing concern about the technology's use to sexualise or exploit people, particularly children. Earlier in 2025, the watchdog took enforcement action against 'nudify' services used to create AI-generated child sexual abuse material, forcing their withdrawal from the Australian market.
The UK's Technology Secretary, Liz Kendall, labelled the deepfakes "appalling and unacceptable in decent society" and urged X to address the issue urgently. The scandal emerged alongside news that xAI had raised $20 billion in a funding round, underscoring the tension between breakneck investment in the technology and the lag in ethical safeguards.
In a statement, X said it takes action against illegal content, including permanent account suspensions and cooperation with law enforcement. The global response underscores a growing consensus on the need for robust frameworks to govern AI and protect individuals from digitally fabricated abuse.