Technology Secretary Liz Kendall has launched a scathing attack on the UK's media regulator, Ofcom, accusing it of unacceptable delays in implementing crucial online safety laws designed to protect families.
A Letter of Deep Disappointment
In a blistering letter addressed directly to Ofcom's chief executive, Dame Melanie Dawes, Kendall expressed her deep concern and disappointment over the sluggish pace of progress. The minister stated that families across the country have been waiting too long for the protections promised under the Online Safety Act (OSA), which finally became law in October 2023.
Central to her criticism is the delay in enforcing new duties related to harmful but legal content. These duties would compel social media firms to give adult users control over their feeds, allowing them to filter out hateful and abusive material targeting characteristics like race, religion, and sexual orientation. Such content is already banned for children.
Specific Concerns Over Antisemitism and Harmful Content
Kendall singled out the spread of antisemitic content online as a particular area of alarm. She warned Dame Melanie that dealing with this issue is a priority for this government, echoing clear directives from the Prime Minister.
The minister further argued that these delays are hindering vital work to protect vulnerable groups, specifically women and girls, from harmful content. The implementation of 'user empowerment' tools, which are key to these protections, has been pushed back.
Ofcom's Timeline and Defence
According to its latest published roadmap, Ofcom does not plan to consult on these additional duties for categorised services until around July 2026. The regulator has faced heavy criticism for the protracted consultations it has undertaken since the Act was passed.
In its defence, an Ofcom spokesman pointed to factors beyond its control, including a legal challenge against the Government that raised complex issues. The spokesman also highlighted that change is already underway, noting that sites and apps now have legal duties to protect people, especially children. Ofcom has already opened investigations into over 70 services for potential breaches.
Separately, the regulator's new codes of practice, which came into force in July this year, now require online sites to implement robust age verification tools—such as facial scans and photo ID checks—to prevent underage access to pornography. Platforms have also been ordered to tame toxic algorithms and act faster to remove content related to self-harm, suicide, and eating disorders.
Broader Parliamentary Concerns on AI and Safety
The pressure on Ofcom and the Government's tech policy was further amplified in the Commons, where MPs raised urgent concerns about AI chatbots. Conservative backbencher Bob Blackman warned that some chatbots are prompting young people to take their own lives or to self-harm.
In response, AI minister Kanishka Narayan stated that such AI-based search tools are already covered by the Online Safety Act. He described each case of suicide and self-harm as a deep tragedy and confirmed that the Secretary of State has commissioned work to ensure there are no legislative gaps, promising robust action where necessary.