Meta Launches Tool for Parents to Monitor Kids' AI Chat Topics

Meta, the parent company of Facebook and Instagram, has announced a new tool that enables parents to see what topics their children are discussing with its artificial intelligence chatbots. While parents already receive alerts if their children engage with sensitive subjects like suicide or self-harm, this new feature provides a broader overview of their children's AI conversations.

How the New Tool Works

Starting April 23, parents using the supervision tools on Facebook, Messenger, and Instagram will have access to an "Insights" tab. Within this tab, an option labeled "Their AI Interactions" displays a list of topics that their children have discussed with Meta's chatbots over the previous seven days. The topics are broad categories, including school, travel, writing, entertainment, lifestyle, health and wellbeing, as well as more specific sub-topics under each umbrella.

For example, the wellbeing category might include sub-topics like mental health or physical health, while lifestyle could list fashion or food. To use the Insights tab, parents must ensure their children are using Teen accounts on Meta's platforms, according to PC Mag.


Availability and Future Rollout

The tool is initially available to parents in the United States, United Kingdom, Australia, Canada, and Brazil. Meta has stated that a global rollout will follow in the coming weeks.

Background and Legal Context

This announcement follows a lawsuit in which Meta was ordered to pay $375 million for failing to block child exploitation on its apps. In addition, Meta has established an AI Wellbeing Expert Council, described as a group of experts who will provide ongoing input on AI experiences for teens to ensure they remain safe and age-appropriate. Company employees working on AI projects will reportedly hold regular meetings with the council to discuss updates and receive feedback.

The safety and health of children on social media have become a prominent issue. In March, a California court awarded $6 million to a woman who sued Meta and Google, claiming their products were addictive and had contributed to her depression and anxiety since childhood. The jury found both companies negligent for designing addictive apps and failing to protect younger users. The ruling marks the first time social media companies have been held liable for the impact of their products on individuals, particularly children and teenagers.
