Parents across the United Kingdom are set to receive new notifications from Instagram regarding their children's activity on the social media platform. The alerts will trigger if a teenager repeatedly attempts to search for terms related to suicide or self-harm within a short period of time.
Enhanced Parental Supervision Tools
Instagram, which introduced dedicated Teen Accounts for users under 16 in 2024 to limit contacts and restrict content, is now rolling out this additional feature. In the coming weeks, parents who enrol in the platform's supervision tool will begin receiving notifications about their teen's search behaviour.
The notifications will activate when a young user searches for phrases that promote suicide or self-harm, suggest intentions of self-injury, or include explicit terms like 'suicide' or 'self-harm'. While Instagram already blocks such searches, the new system aims to make parents aware of repeated attempts, indicating potential distress.
Delivery Methods and Global Rollout
Alerts will be dispatched to parents via email, text message, or WhatsApp, based on available contact information, alongside in-app notifications. This initiative will launch next week in the United Kingdom, United States, Australia, and Canada, with plans to expand to other regions later this year.
Instagram has stated that it will also provide accompanying information to help parents support their teens and navigate sensitive conversations effectively.
Campaigner Criticism and Concerns
However, the announcement has faced immediate backlash from online safety advocates. Andy Burrows, chief executive of the Molly Rose Foundation, labelled the plans as "clumsy" and warned that "flimsy" notifications could leave parents panicked and ill-prepared.
"Every parent would want to know if their child is struggling, but these flimsy notifications will leave parents panicked and ill-prepared to have the sensitive and difficult conversations that will follow," Burrows emphasised.
He argued that responsibility should not be shifted onto parents, urging tech firms such as Instagram to proactively address risks by improving their algorithms, which he claims still recommend harmful content related to depression, suicide, and self-harm to vulnerable young people.
UK Government Context and Legislative Action
This development comes amid heightened governmental efforts to tackle the online harms crisis in the UK. The Online Safety Act already requires platforms to prevent children from accessing harmful content, including suicide and self-harm material.
Nevertheless, campaigners have persistently highlighted gaps in the legislation, calling for more robust measures to ensure healthier online experiences for young people.
In response, the Government is scheduled to launch a children's digital wellbeing consultation next month. The three-month initiative will gather evidence on potential measures, such as social media bans and restrictions on addictive apps, to improve online safety for minors.
Technology Secretary Liz Kendall has pledged swift action following the consultation, echoing Prime Minister Sir Keir Starmer's commitment to act in "months, not years" to protect young people from the dangers of addictive social media platforms.