EU Launches Formal Probe Into Elon Musk's Grok AI Over Sexual Deepfake Generation

European Union regulators have launched a formal investigation into Elon Musk's social media platform X, following revelations that its artificial intelligence chatbot Grok has been generating nonconsensual sexualized deepfake images. The scrutiny from Brussels began on Monday, 26 January 2026, after Grok sparked international outrage by allowing users to create manipulated explicit content through its AI image-generation tools.

Serious Risks to Citizens Identified

The European Commission, representing the 27-nation bloc, stated that it is examining whether X has fulfilled its obligations under the Digital Services Act (DSA) to mitigate risks associated with spreading illegal content. This includes "manipulated sexually explicit images" and material that "may amount to child sexual abuse material." Regulators emphasised that these dangers have now "materialised," exposing EU citizens to "serious harm."

Global Backlash and Government Responses

The investigation follows a global backlash against Grok's capabilities, which reportedly allowed users to digitally undress individuals in images, depict women in transparent bikinis or other revealing clothing, and generate content that researchers said appeared to include children. Several governments have responded by banning the service or issuing official warnings about its use.

Henna Virkkunen, an Executive Vice-President at the European Commission overseeing tech sovereignty, security and democracy, condemned the situation, stating: "Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation." She added: "With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens — including those of women and children — as collateral damage of its service."

Platform Response and Ongoing Scrutiny

In response to requests for comment, an X spokeswoman directed attention to an earlier statement from 14 January, in which the company affirmed its commitment to "making X a safe platform for everyone" and maintaining "zero tolerance" for child sexual exploitation, nonconsensual nudity, and unwanted sexual content. The statement also noted that X would cease allowing users to depict people in "bikinis, underwear or other revealing attire" in jurisdictions where such depictions are illegal.

At the same time, the European Commission announced it is extending a separate investigation into X's compliance with DSA requirements, opened in 2023 and still ongoing. That earlier probe has already produced a substantial penalty: X was fined 120 million euros in December for breaches of its transparency obligations.

Broader Implications for Digital Regulation

The investigation marks a significant escalation in the EU's enforcement of its comprehensive digital rulebook, designed to protect internet users from harmful content and products. As artificial intelligence technologies become increasingly sophisticated, regulators are demonstrating their determination to hold platforms accountable for the societal impacts of their AI tools, particularly when they facilitate the creation and distribution of illegal and harmful material.