Conservative leader Kemi Badenoch has ignited a fierce debate by proposing a ban on social media access for under-16s, mirroring a recent move in Australia. While the profound risks platforms pose to young minds are undeniable, many argue this legislative response dangerously misdiagnoses the problem.
The Allure and Peril of a Simple Ban
The instinct to protect children from the documented harms of social media is powerful and widely shared. The tragic case of 14-year-old Molly Russell, who took her own life in 2017 after viewing suicide and self-harm content online, remains a stark and painful reminder of the potential consequences. From the anxiety fuelled by constant location sharing and exclusion to the intense body-image pressures of a selfie culture, many apps appear designed to disrupt adolescent wellbeing.
This creates a baffling contradiction in modern life. A child walking to school alone can spark a national debate on parental responsibility, yet technology firms, whose business models have a proven record of profiting from emotional distress, are granted virtually unfettered access to young people's bedrooms. The recent revelation that AI tools like Grok can generate sexualised imagery of children, treated by the government as a mere matter for Ofcom, underscores this alarming asymmetry of risk between the physical and digital worlds.
Placing the Burden on the Wrong Shoulders
The fundamental flaw in Badenoch's proposal is twofold. First, it places the onus for solving a corporate-created crisis on the individual child. Expecting a 12-year-old to counter algorithmic feeds pushing radical misogyny or self-harm methods by simply putting down their phone is not a serious solution. Similarly, burdening parents with an endless arms race of screen-time limits and parental blocks asks them to fill the moral vacuum left by late-stage capitalism and negligent regulation.
True government intervention must start by tackling the problem at its source: the platforms themselves and the toxic content they amplify. Legislation that holds tech giants accountable for the safety of their products, rather than punishing their youngest users, is the logical starting point.
The Older Generations Driving the Toxicity
Secondly, and perhaps more critically, focusing solely on young users presents a distorted picture of the online landscape. While they are uniquely vulnerable, a significant portion of the most damaging content—misinformation, conspiracy theories, and hate—is created and spread by adults.
An analysis of data from Amnesty International, Global Witness, and the BBC, which sought to identify prominent UK-based spreaders of misinformation on X (formerly Twitter), produced a telling list. It included figures like Nigel Farage, Laurence Fox, Julia Hartley-Brewer, and George Galloway. The youngest names frequently cited in such circles are figures like Tommy Robinson, 43, and Darren Grimes, 32.
This reveals a crucial blind spot: the online ecosystem is poisoned not just by what is targeted at teenagers, but by content consumed and shared by credulous baby boomers and Gen X provocateurs. Any effective policy must reckon with the risks these older demographics both face and propagate.
In conclusion, while the dangers social media poses to young people are real and urgent, a blanket ban for under-16s is a misguided simplification. It punishes the symptom while letting the disease—profitable, toxic content often driven by older users—rage unchecked. A serious strategy requires robust regulation of tech platforms and a clear-eyed view of online harms across all age groups.