Ofcom Fast-Tracks X Probe as Grok AI Image Abuse Scandal Erupts
Ofcom's High-Priority Probe into X's Grok AI Misuse

The UK's communications regulator, Ofcom, has escalated a formal investigation into X, formerly Twitter, labelling it a "matter of the highest priority". This urgent action follows widespread reports of the platform's built-in AI chatbot, Grok, being grossly misused to generate non-consensual intimate imagery, sparking a global outcry over digital safety and consent.

Ofcom's Urgent Response to AI-Generated Abuse

In a stark statement, an Ofcom spokesperson confirmed the accelerated probe under the UK’s Online Safety Act. The investigation was triggered by deeply concerning reports of Grok being used to create and share illegal content, including child sexual abuse material and digitally altered intimate images of individuals without their consent. The regulator emphasised that platforms have a clear duty to protect UK users from illegal content and warned it would not hesitate to act where companies are suspected of failing in their responsibilities, particularly where children are at risk of harm.

The scandal centres on Grok, an AI tool developed by Elon Musk's company xAI. Users reportedly prompted the chatbot to "digitally undress" photos of people, predominantly women and children, placing them in bikinis and sexualised scenarios. Victims of this abuse have described the trend as violating, predatory, and dehumanising, highlighting the severe emotional and psychological impact of such AI-facilitated violations.

Expert Advice: How to Shield Your Digital Likeness

In light of the scandal, protecting personal and family images online has become a critical concern. Tech and AI expert Jacob Strauss provided The Mirror with essential strategies to safeguard your photos and digital footprint from similar AI misuse.

Strategic Image Editing: For photos of children, crop images above the chest and favour action or group shots over solo portraits. Avoid posting pictures in swimwear and ensure identifying markers like school uniforms are not visible. For added protection, use faint, diagonal watermarks or semi-transparent stickers over clothing to disrupt AI models' ability to manipulate the image.
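The watermarking idea above can be automated before upload. The sketch below is illustrative only, assuming the third-party Pillow imaging library is installed; the function name and default text are hypothetical, not a specific tool the expert recommends.

```python
from PIL import Image, ImageDraw  # Pillow (third-party): pip install Pillow

def add_diagonal_watermark(img: Image.Image, text: str = "do not reuse",
                           opacity: int = 96) -> Image.Image:
    """Overlay faint, repeated text on a transparent layer, rotate it
    diagonally, and composite it over the photo."""
    base = img.convert("RGBA")
    layer = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(layer)
    step = max(base.size) // 4  # vertical spacing between repeated lines
    for y in range(-base.height, base.height * 2, step):
        draw.text((0, y), text, fill=(255, 255, 255, opacity))
    # Rotate the text layer, keeping the original canvas size so it
    # can be composited directly over the photo.
    rotated = layer.rotate(30, expand=False)
    return Image.alpha_composite(base, rotated)
```

A semi-transparent overlay like this does not make manipulation impossible, but it adds exactly the kind of visual noise over clothing and skin that the advice above describes.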

Leverage Poorer Quality: Counter-intuitively, posting lower-resolution, angled, or poorly lit photos can be a defensive tactic. AI models perform best with high-quality, straight-on shots, so "technically poor" images reduce the accuracy of manipulation tools.
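Downscaling before posting can be scripted too. This is a minimal sketch, again assuming Pillow is available; the function name and the 800-pixel default are illustrative choices, not a recommended standard.

```python
from PIL import Image  # Pillow (third-party): pip install Pillow

def downscale_for_sharing(img: Image.Image, max_side: int = 800) -> Image.Image:
    """Return a copy whose longest side is at most max_side pixels,
    preserving aspect ratio. Images already small enough are returned
    unchanged (as a copy)."""
    scale = max_side / max(img.size)
    if scale >= 1:
        return img.copy()
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)
```

The point is not to ruin the photo for friends and family, but to deny manipulation tools the high-resolution, straight-on source material they work best with.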

Fortify Your Privacy and Respond to Abuse

Lock Down Account Settings: The foundation of online safety is making all social media accounts private. Regularly review and remove unknown followers. Crucially, disable features that allow others to download your media, reshare your stories, or use your video content in "stitches" or remixes.

Scrub Sensitive Data: Always strip metadata, including EXIF geotags, from photos before uploading them. Avoid posts that show identifiable landmarks near your home, school, or regular routes to maintain physical safety.
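Stripping metadata can be done with nothing but the Python standard library, since EXIF data (including GPS coordinates) lives in a JPEG's APP1 segment. The sketch below is a simplified parser for illustration, not a production tool: it handles the common segment layout but not every JPEG variant, and the function name is hypothetical.

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with APP1 (Exif, including GPS) segments
    removed. Simplified: assumes a well-formed file without padding
    bytes between segments."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")  # keep the Start-of-Image marker
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # unexpected data; copy the rest verbatim
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # Start-of-Scan: image data follows, copy rest
            out += jpeg_bytes[i:]
            break
        # Each segment stores its own length (big-endian, includes the
        # two length bytes themselves but not the marker).
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:  # drop APP1 (Exif/GPS); keep everything else
            out += segment
        i += 2 + length
    return bytes(out)
```

In practice, many users will find it easier to use their phone's built-in "remove location" sharing option, but the principle is the same: the geotag never leaves the device.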

Use Secure Sharing Methods: For sensitive family photos, move away from public feeds. Utilise expiring links, invite-only digital albums, or platform-specific features like "Close Friends" lists to control who sees original, high-quality files.

Take Immediate Action: If you discover an image has been misused, document everything with screenshots and URLs before reporting it to the platform. Adults can use services like StopNCII.org to trigger takedowns. For cases involving minors, report immediately to law enforcement and the National Center for Missing & Exploited Children's "Take It Down" program.

The Ofcom investigation into X marks a pivotal moment in holding tech giants accountable for the safety of their AI tools. As the probe progresses, the expert guidance offers a crucial lifeline for individuals seeking to reclaim control over their digital identities in an online landscape where such abuse is increasingly easy to commit.