Mark Zuckerberg Concedes Criminal Behaviour Inevitable on Meta Platforms in Court Testimony
In a taped deposition played for jurors in Santa Fe, New Mexico, this week, Meta CEO Mark Zuckerberg acknowledged that criminal activity, including sexual exploitation and mental health harms to children, is an unavoidable reality on the company's vast social media platforms. The testimony, alongside that of Instagram head Adam Mosseri, forms a central part of a high-profile trial in which New Mexico's attorney general alleges Meta knowingly prioritises profits and user engagement over child safety.
Grilling Over Child Safety and Platform Imperfections
Zuckerberg, grilled extensively about children's safety, stated in his deposition that with a user pool numbering in the billions across Facebook, Instagram, and WhatsApp, some individuals will inevitably engage in harmful behaviour. "I just think if you're serving billions of people, the unfortunate reality is that some very small percent of them are going to be criminals, and we should work as hard as we can to stop that activity from happening," he said. "I don't think that the standard for our platforms would be that you should assume that it will ever be perfect."
The trial, which began in early February and is expected to last about seven weeks, pits Meta against Attorney General Raul Torrez, who accuses the company of enabling predators to exploit children on its platforms. Meta disputes these allegations, pointing to recent changes such as teen accounts with default protections introduced in 2024. A Meta spokesperson emphasised, "We have strict, longstanding rules against child exploitation and have invested billions to fight it, both through proactive detection technology and safety features designed to prevent harm."
Evidence of Inappropriate Communications and Algorithmic Risks
Prosecutors presented compelling evidence during the proceedings, including a 2020 company estimate that approximately 500,000 children were receiving sexually inappropriate communications on Instagram daily. This included grooming, in which adults build relationships with minors for sexual purposes. Meta responded that the detection technology used at the time was overly broad, potentially counting interactions that were not in fact inappropriate.
Jurors also learned that Meta's "People You May Know" algorithm, which recommends accounts for users to connect with, was identified as a primary driver of these harmful interactions. In 2018, predators used the tool to find victims in 79% of identified cases. Additionally, the court heard that around 30% of adults whose accounts were disabled for targeting children had returned to the platform and resumed such behaviour.
Controversy Over End-to-End Encryption
A significant point of contention in the trial revolves around Zuckerberg's decision to authorise end-to-end encryption for Facebook Messenger in 2023, despite warnings from child safety groups like Thorn and the National Center for Missing and Exploited Children (NCMEC). These groups argued that encryption could pose risks to children by allowing predators to share child sexual abuse imagery undetected. In his deposition, Zuckerberg defended the move, stating, "I think that end-to-end encryption messaging services are what people want. They really care about privacy."
End-to-end encryption prevents anyone other than the sender and recipient from viewing messages by converting them into unreadable ciphertext, with content not stored on Meta's servers. A law enforcement officer testified earlier in the trial that reports of child sexual abuse material from the platform decreased after encryption was rolled out. However, a Meta spokesperson noted that the company can still review and take action on encrypted messages if users report them.
Internal Audits and Safety Feature Gaps
Internal presentations discussed at the trial revealed that Instagram's wellbeing safety team did not always prevent teen accounts from being recommended to potential violators, or violators' accounts from being recommended to teens. A December 2022 audit showed Meta continued to recommend minors' accounts to some adults. In response, Meta introduced Teen Accounts in September 2024, which automatically place users under 18 into stricter settings, including private profiles by default and limited messaging options.
Despite these measures, researchers have identified gaps in the protections, such as exposure to harmful videos through hashtags or recommendations, and instances where safety features did not work as intended. Mosseri addressed these concerns in his deposition, saying, "We use a range of signals to identify adults who have shown potentially suspicious behaviour and avoid recommending these accounts to teens." He added that in 2025, Meta identified over 265 million Facebook accounts and 135 million Instagram accounts exhibiting suspicious behaviour, proactively preventing them from interacting with teens.
Mosseri concluded, "I certainly want to address any problem that's even remotely as severe as something like sexual solicitation ... Any negative action that happens offline, also to a certain degree, happens online. We're connecting billions of people. That is going to mean good and bad things happen."
