Mara Wilson: AI Deepfakes Are Creating a New Wave of Child Sexual Abuse Material

Former child actor and writer Mara Wilson has issued a stark warning that the rapid rise of generative artificial intelligence is enabling the mass creation of child sexual abuse material (CSAM), recreating the traumatic exploitation she suffered for a new generation.

From 'Stranger Danger' to a Digital Nightmare

Wilson, known for her roles in family films during the 1990s, describes how her image was misused in sexually explicit material and on fetish websites before she even reached high school. While she felt physically safe on regulated film sets, the public eye made her a target. "Hollywood throws you into the pool," she says, "but it's the public that holds your head underwater."

She explains that the old fears of "Stranger Danger" were often misplaced, as most abuse is perpetrated by known individuals. However, the digital age has created a new, justified form of stranger danger where predators seek access through the internet. Wilson's personal nightmare is now being replicated on an industrial scale by AI tools.

The AI-Powered Threat to Child Safety

The threat has escalated dramatically with the advent of generative AI. In a chilling recent example, X's AI tool Grok was used openly to generate undressed images of an underage actor. In another case, a 13-year-old girl was reportedly expelled after hitting a classmate who allegedly created a deepfake pornographic image of her.

The scale is alarming. In July 2024, the Internet Watch Foundation discovered more than 3,500 images of AI-generated CSAM on a single dark web forum. Experts fear this is just the tip of the iceberg, with the technology making it "infinitely easier" for any child whose image is online to be sexually exploited.

Mathematician and former AI safety researcher Patrick LaVictoire explains the core problem: generative AI learns by finding patterns in its training data. A 2023 Stanford University study revealed that one popular training dataset already contained over 1,000 instances of CSAM. Although the offending links were removed, a deeper danger remains: a model can combine innocent images of children with adult pornography if both appear in its training data.

Inadequate Safeguards and a Looming Open-Source Crisis

Companies like Google and OpenAI claim to implement safeguards, including careful data curation and secondary AI systems that act like spam filters to block harmful queries. However, the Grok incident suggests these filters can be carelessly designed or easily bypassed.

A potentially greater threat looms with the push for open-source AI models, championed by firms like Meta. Open-source models can be downloaded and edited by anyone, including to strip out their safety protocols. This could enable individuals to "fine-tune" personal AI generators with illegal imagery, creating unlimited, unchecked CSAM. While Meta appears to have stepped back from fully open-sourcing its newer models, the risk persists.

A Patchwork of Global Responses and Legal Gray Areas

Globally, responses are uneven. China mandates AI content labelling, while Denmark is drafting laws to give citizens copyright over their likenesses, imposing fines on violators. In the UK and EU, the General Data Protection Regulation (GDPR) may offer some image protection.

The United States presents a grimmer picture. New York litigator Akiva Cohen notes that while new laws criminalise some digital manipulation, many abusive acts "consciously stay just on the 'horrific, but legal' side of the line." For instance, using AI to put an underage girl in a bikini may not be illegal, whereas generating explicit nudity might be.

Cohen argues for civil liability under "false light" invasion-of-privacy torts, holding the enabling companies accountable. Precedents like New York's RAISE Act and California's Senate Bill 53 suggest AI firms can be held liable for certain harms.

The Call for Public Action and Technological Solutions

Former child actor and attorney Josh Saviano emphasises that while lobbying and courts will eventually address the issue, immediate action is needed. He advocates for a technological solution and is developing a tool to detect when personal images are scraped online, guided by the motto: "Protect the babies."

Wilson concludes that public pressure is essential. She argues that consumer boycotts of platforms are not sufficient. "We need to be the ones demanding companies that allow the creation of CSAM be held accountable," she states, calling for robust legislation and technological safeguards.

She also urges personal responsibility, warning parents that sharing children's photos online carries a real risk that those images could be weaponised. The public's historical desire to prevent child endangerment, she asserts, must now be directed towards combating this digital epidemic.