Meta Faces Legal Action Over AI Chatbot Policies for Minors
A lawsuit filed by the New Mexico attorney general alleges that Meta, under the direction of CEO Mark Zuckerberg, permitted minors to access artificial intelligence chatbot companions despite internal safety concerns about sexual interactions. The case, scheduled for trial next month, draws on internal employee emails and messages obtained through legal discovery, which the state says show Meta rejected staff recommendations to implement protective measures.
Internal Warnings and Policy Disputes
According to the court documents, Meta safety staff objected to the development of AI chatbots designed for companionship, including romantic and sexual interactions with users. The chatbots launched in early 2024, and staff were particularly concerned about scenarios involving adult users and minors under 18, referred to as "U18s." In January 2024, Ravi Sinha, head of Meta's child safety policy, warned that creating and marketing products for adult-minor romantic AI interactions was neither advisable nor defensible.
Antigone Davis, Meta's global safety head, agreed, noting that such features could sexualize minors. However, the lawsuit asserts that Zuckerberg advocated a less restrictive approach, emphasizing principles of choice and non-censorship, and that he allowed adults to engage in more explicit conversations on topics such as sex.
CEO's Stance and Company Response
Internal communications from February 2024 indicate that Zuckerberg believed AI companions should be blocked from explicit conversations with younger teens and that adults should not interact with U18 AIs for romantic purposes. Even so, a summary of a 20 February 2024 meeting records his preference for framing the narrative around general principles of choice and non-censorship, which led to a policy less restrictive than the one safety teams had proposed.
Andy Stone, a Meta spokesperson, criticized the lawsuit as relying on selective information, saying the documents show Zuckerberg directed the company against explicit AI companions for younger users and against adults creating romantic under-18 AIs. In response to growing scrutiny, Meta announced last week that it had removed teen access to AI companions entirely, pending a new version.
Broader Implications and Backlash
The controversy has drawn criticism in the US Congress and beyond, with reports highlighting issues such as sexualized underage characters among Meta's chatbots and guidelines that initially permitted romantic or sensual conversations with children. Nick Clegg, former head of global policy at Meta, raised concerns in an email included in the documents, questioning whether sexual interaction should be the dominant use case for teenage users and warning of a societal backlash.
As the legal proceedings advance, this case underscores ongoing debates about technology regulation, corporate responsibility, and the protection of minors in digital spaces, with potential implications for AI development and online safety standards.