Irish Data Regulator Defends Big Tech Oversight Amid AI Deepfake Controversy

The Irish Data Protection Commission has robustly defended its regulatory approach toward major technology corporations, emphasising that it has imposed billions of euros in fines while operating without "fear or favour." This declaration came during a parliamentary committee hearing focused on the escalating concerns surrounding artificial intelligence tools and the generation of non-consensual, sexualised imagery.

Regulatory Stance on Big Tech Accountability

Des Hogan, chairman of the Data Protection Commission (DPC), directly addressed allegations of being overly cosy with Silicon Valley behemoths. He pointed to the substantial financial penalties levied against multinational firms, totalling over four billion euros, as clear evidence of rigorous enforcement. Hogan acknowledged the formidable legal challenges mounted by well-resourced tech companies, noting that nearly all fines issued to large platforms are currently under appeal, alongside concurrent judicial reviews.

Scrutiny Over AI Tool Grok and Legal Loopholes

The committee session was prompted by the recent controversy involving Grok, an AI tool on the social media platform X, formerly Twitter, which has faced accusations of generating sexualised images, including depictions of children. The incident has exposed potential regulatory shortcomings in Ireland concerning non-consensual synthetic media. While senior government figures maintain that existing legislation is adequate to prosecute child sexual abuse material and AI-generated intimate images of adults, a senior garda clarified that, for adult imagery, the material must actually be shared before an offence is committed, and a complainant must come forward before an investigation can begin.

Media Regulator's Perspective on AI and Child Protection

Jeremy Godfrey, executive chairman of Coimisiún na Meán, Ireland's media and online regulator, provided critical insights into the legal landscape. He confirmed that creating child sexual abuse material is unequivocally illegal under Irish law, obligating social media platforms to remove such content once it is reported. However, Godfrey highlighted a significant gap: deploying an AI system capable of producing this material is not in itself unlawful under current Irish statutes, even though using it for that purpose is a criminal act.

Godfrey suggested that prohibiting the availability of such technologies could be a beneficial preventive measure, in line with the EU AI Act, which places obligations on developers and deployers of AI systems. He emphasised that this would provide an additional tool to make law-breaking harder, shifting responsibility from users to those providing the technology.

Broader AI Risks and Regulatory Recommendations

Beyond child protection, Godfrey identified other high-risk areas in generative AI, such as its use in companion or therapeutic chatbots, which have reportedly caused severe mental health damage in some instances. He advocated for expanding the categories of high-risk AI systems under European regulations to encompass a wider array of generative AI tools and chatbots, proposing that the European Commission review and potentially amend the list to address these emerging dangers.

Ongoing Investigations and Future Challenges

The DPC has initiated an investigation into X regarding allegations that generative AI on the platform has been used to create non-consensual, intimate, or sexualised images involving the personal data of EU citizens, including minors. Simultaneously, the European Commission, with Coimisiún na Meán's assistance, is examining X's compliance with its obligations following the Grok controversy. These probes underscore the urgent need for adaptive regulatory frameworks as AI technology evolves.

Hogan reiterated the DPC's commitment to careful, procedurally fair inquiries, often spanning several years, to ensure robust decision-making. He stressed that realising AI's transformative potential hinges on effectively mitigating its substantive risks and harms, a challenge that requires continuous collaboration with peer regulators and civil society organisations.
