Dozens of survivors and bereaved relatives of 19 separate terror attacks are calling for urgent action to stop extremists from using artificial intelligence to help launch atrocities. In an open letter coordinated by the group Survivors Against Terror, they demand that the government introduce legislation to address the risks posed by AI chatbots.
The technology has been shown to be capable of guiding extremists through technical problems to help them create deadlier devices, potentially giving lone-wolf attackers access to explosives or poisons. In one instance, AI successfully advised on how to cultivate neurotoxins, and in another it guided scientists through "the design of an improvised nuclear fusor".
The letter reads: "It is not only the radicalisation that is enabled online. Increasingly, attack planning is also being facilitated by digital technologies." Research from the Center for Countering Digital Hate (CCDH) shows that widely available AI chatbots easily and repeatedly assist individuals planning violence, including extremist attacks. These tools reinforce harmful ideologies and lower the barriers to carrying out attacks by offering practical, actionable advice.
"We raise this from lived experience of what happens when violence moves from ideas into action. Current proposals have not adequately addressed these risks. We urge the next parliament to move swiftly to address the radicalisation and extremism risks posed by AI chatbots."
Ahead of Wednesday's King's Speech, when King Charles III will set out the government's planned new laws for the next parliamentary session, the group called for new legislation forcing AI chatbot developers to mitigate risks related to extremism and terrorism. They also called for "transparency and independent oversight" of AI systems and accountability for technology companies. The letter concludes: "For us, these risks are not theoretical. We have lived through what happens when online harm becomes real-world violence. The Government has committed to public safety in the age of AI. The King's Speech is the moment to deliver."
Brendan Cox, whose wife, the Labour MP Jo Cox, was murdered by a rightwing extremist in 2016, co-founded Survivors Against Terror. He said: "There is a growing realisation that it's not just radicalisation happening online, that it's also attack planning. That was bad enough under the old internet, but with AI's help it could be used to bring about even more devastating attacks. At the moment the government doesn't have a plan to respond to that, and that is causing real concern for survivors and bereaved relatives of terrorist attacks."
The letter's 70 signatories include Sheelagh Alexander, whose son, Nick Alexander, was killed at the Bataclan in Paris in 2015; Figen Murray, whose son was killed in the Manchester Arena attack in 2017; Kevin Tipple, a survivor of the Palace of Westminster attack in 2017; and Zoe Thompson, survivor of the Tunisia beach attack in 2015.
Earlier this year the OpenAI founder, Sam Altman, apologised after the company failed to go to police with information on a ChatGPT account belonging to Jesse Van Rootselaar, who carried out a mass shooting in the Canadian community of Tumbler Ridge in January. The 18-year-old killed eight people and injured nearly 30 others in one of British Columbia's deadliest mass shootings.
OpenAI is also being investigated by police over the use of ChatGPT by a man accused of carrying out a shooting at Florida State University last year, in which two people were killed and several others injured.
The Crime and Policing Act, which became law last month, aims to clamp down on illegal AI content by extending the law's reach to AI chatbots, including Grok. It is intended to ensure that chatbots protect users from encountering illegal content relating to terrorism, racism and child sexual abuse.
A government spokesperson said: "We take these warnings seriously and thank the brave survivors and victims who share their experiences to inform ongoing work on risks linked to AI chatbots and terrorism. This government treats AI-enabled terrorism and illegal hate as a national security issue. Law enforcement agencies and regulators are scrutinising high-risk AI tools and monitoring misuse connected to potential terrorist attacks."