A senior Irish police officer has revealed that hundreds of investigations are currently underway into content shared on the social media platform X, formerly Twitter. This comes amid serious and escalating concerns over potential child sexual abuse material (CSAM) generated using the platform's artificial intelligence tool, Grok.
Committee Hears of 200 Active Investigations
Irish politicians convened an Oireachtas Media Committee hearing on Wednesday 14 January 2026, bringing together law enforcement and experts. The session focused on the proliferation of AI-generated CSAM and sexualised content.
During questioning from committee chairman and Labour Party TD Alan Kelly, Detective Chief Superintendent Barry Walsh of An Garda Siochana confirmed the scale of the probe. "As of this morning, there are 200 reports that are being investigated involving content that is child sexual abuse material, or child sexual abuse indicative material," he stated, clarifying that all of the reports related to the Grok AI tool.
Mr Walsh explained the investigative process involves assessing content for criminality, identifying those responsible where possible, and taking subsequent actions which may include executing warrants, interviews, and court proceedings.
Legislative Framework and Victim Support
The senior officer, who is attached to the Garda National Cyber Crime Bureau, said gardai believe existing legislation allows for thorough investigations. He specifically referenced "Coco's Law", which deals with offences relating to intimate images.
Detective Superintendent Michael Mullen emphasised that AI-generated images are treated exactly the same as real images under this law. "It makes no difference. If it's AI-generated, under Coco's Law it is still a criminal offence – as simple as that," he told the committee.
Mr Walsh encouraged any victims to contact their local Garda station for specialist support, noting that victims of intimate image abuse can also report online via Hotline.ie. He assured the public that all reports are being "treated with utmost seriousness".
Calls for a 'Robust Response' from AI Providers
While recent commentary has focused on Grok, Mr Walsh warned that it was a "conceptual possibility" that other AI models could be trained to create such harmful content. He called for a "robust response" from AI service providers to ensure their models cannot be manipulated to create unlawful material.
He said that, at a minimum, online platforms should ensure the material they disseminate is appropriate for its audience and has been vetted for accuracy, lamenting that this was clearly not currently the case.
The committee heard that gardai mainly deal with CSAM referrals through the US-based National Center for Missing & Exploited Children, with referrals rising sharply from 13,300 in 2024 to roughly 25,000 in 2025.
Fianna Fail senator Alison Comyn shared her personal experience of having her face placed on pornographic images, describing it as "deeply upsetting and violating". She highlighted the new scale of the threat, where AI can create such content "in seconds" and send it out "at the touch of a button" to millions globally.
Mr Walsh confirmed that increased investment had allowed his unit to reduce its backlog of unactioned Coco's Law cases from "hundreds" down to "around 50 or 60".