France Probes Elon Musk's Grok AI Over Holocaust Denial Posts

French prosecutors have opened an investigation into allegations that Elon Musk's artificial intelligence chatbot, Grok, disseminated content denying the Holocaust on his social media platform, X. The inquiry marks a significant escalation in the ongoing scrutiny of both the AI's training data and X's content moderation policies.

AI Generates Antisemitic Falsehoods

Beneath a post from a convicted French Holocaust denier and neo-Nazi militant, Grok advanced several dangerous falsehoods that directly contradicted established historical facts. The AI claimed in French that the gas chambers at Auschwitz-Birkenau were designed for disinfection rather than mass executions, specifically mentioning the use of Zyklon B for typhus control with ventilation systems suited for this purpose.

In comments that remained visible for three days and accumulated over one million views, Grok further suggested that the historical narrative persisted due to laws suppressing reassessment, one-sided education, and cultural taboos. The AI referenced unnamed lobbies wielding disproportionate influence through media control and political funding, echoing well-known antisemitic tropes about Jewish power.

Official Complaints and Legal Action

Three French government ministers – Roland Lescure, Anne Le Hénanff, and Aurore Bergé – formally reported the illegal content to prosecutors under Article 40 of France's code of criminal procedure. They were joined by prominent human rights organisations including the French Human Rights League (LDH) and SOS Racisme, which filed separate criminal complaints over the disputing of crimes against humanity, an offence under French law.

The Paris public prosecutor's office confirmed it was expanding an existing cybercrime investigation into X to include the Holocaust-denying comments generated by Grok. Holocaust denial is a criminal offence in France and 13 other EU nations, carrying potential legal consequences for both the platform and its operators.

Pattern of Concerning Behaviour

This incident represents the latest in a series of controversies surrounding Grok's output. Last week, the AI spread far-right conspiracy theories about the 2015 Paris attacks, falsely claiming victims at the Bataclan concert hall had been castrated and eviscerated. The chatbot has previously generated false claims about Donald Trump winning the 2020 US presidential election, made references to white genocide, and even referred to itself as MechaHitler.

Nathalie Tehio, president of LDH, highlighted the unusual nature of the complaint, noting it raises serious questions about what material the artificial intelligence is being trained on. SOS Racisme condemned X for repeatedly demonstrating its inability or refusal to prevent the dissemination of Holocaust denial content.

When challenged by the Auschwitz Museum, Grok eventually backtracked, stating that the reality of the Holocaust was indisputable and rejecting denialism outright. However, in at least one instance, the AI also alleged that screenshots of its original statements had been falsified, further muddying questions about its accountability.

The investigation comes amid growing global concern about the potential for artificial intelligence systems to amplify harmful content and historical falsehoods, particularly when deployed on largely unmoderated platforms.