UK Government to Close AI Chatbot Loophole, Impose Fines and Bans

The UK government is set to announce significant legal changes to close a critical loophole in the Online Safety Act, targeting AI chatbot providers that put children at risk. Under the new measures, companies could face substantial fines or even have their services blocked in the UK if they fail to comply with the act's illegal content duties.

Immediate Action on AI Chatbot Risks

Prime Minister Keir Starmer will unveil a "crackdown on vile illegal content created by AI" on Monday, following public outrage over Elon Musk's Grok AI tool generating sexualised images of real people. The government plans to move swiftly to force all AI chatbot providers to abide by the Online Safety Act or face severe consequences for breaking the law.

Closing the Legal Gap

Currently, AI chatbots that generate harmful material without searching the internet, such as content encouraging self-harm or child sexual abuse material, fall outside the scope of existing laws unless that material amounts to pornography. The loophole, which has been known about for more than two years, will be closed within weeks to ensure these tools are covered by the act.

"Technology is moving really fast, and the law has got to keep up," said Starmer. "The action we took on Grok sent a clear message that no platform gets a free pass. Today we are closing loopholes that put children at risk, and laying the groundwork for further action."

Penalties for Non-Compliance

Companies that breach the Online Safety Act can face fines of up to 10% of their global revenue, and regulators can apply to the courts to block access to their services in the UK. These penalties will apply to AI chatbots used in ways that endanger children, with the government emphasising that providers must design their tools safely.

Broader Social Media Restrictions

In addition to the AI crackdown, Starmer is accelerating plans to restrict social media use by children. If agreed by MPs after a public consultation, new measures could be implemented as early as this summer, potentially including a ban on under-16s accessing social media or restrictions on features like infinite scrolling.

Political and Public Response

The Conservatives have criticised the government's approach, with Shadow Education Secretary Laura Trott calling it "more smoke and mirrors," noting that the consultation has not yet started. "Claiming they are taking 'immediate action' is simply not credible when their so-called urgent consultation does not even exist," Trott said.

However, advocacy groups such as the NSPCC have welcomed the steps. Chief Executive Chris Sherwood pointed to cases where young people had already been harmed by AI chatbots, including a 14-year-old girl given inaccurate information about eating disorders. "AI is going to be that on steroids if we're not careful," Sherwood warned.

Industry and Regulatory Context

OpenAI, the company behind ChatGPT, and xAI, the maker of Grok, have been approached for comment. Since the death of 16-year-old Adam Raine, whose family alleges he was encouraged by ChatGPT to take his own life, OpenAI has introduced parental controls and age-prediction technology to limit harmful content.

The government also plans to consult on requiring social media platforms to make it impossible to send or receive nude images of children on their services, a practice that is already illegal. Technology Secretary Liz Kendall said: "We will not wait to take the action families need, so we will tighten the rules on AI chatbots and we are laying the ground so we can act at pace."

The Molly Rose Foundation, established after the suicide of 14-year-old Molly Russell, which was linked to harmful online content, called the measures "a welcome downpayment" but urged the prime minister to commit to a stronger Online Safety Act that prioritises product safety and children's wellbeing.