Mara Wilson: AI Deepfakes Are Creating a New Wave of Child Sexual Abuse Material
Ex-Child Actor Warns of AI-Generated Child Abuse Crisis

Former child actor and writer Mara Wilson has issued a stark warning that generative artificial intelligence is enabling a new and terrifying wave of child sexual exploitation, forcing countless children to relive the nightmare she endured decades ago.

From 'Stranger Danger' to Digital Exploitation

Wilson, known for her roles in family films during the 1990s, describes how her childhood in the public eye led to severe sexual exploitation by strangers online. Although she felt safe on regulated film sets, the public sphere proved dangerously hostile.

"Hollywood throws you into the pool," she states, "but it's the public that holds your head underwater." Before reaching her teenage years, her image had been misused on fetish websites, digitally inserted into pornography, and became the subject of creepy letters from adult men.

She emphasises that predators seek access, and a public profile provides it. Wilson had hoped her experience was a rare horror, but the advent of generative AI has shattered that hope, creating a scalable threat to millions.

The AI-Powered Threat to Child Safety

The crisis moved from fear to reality with recent high-profile incidents. In 2024, it was reported that X's AI tool, Grok, had been openly used to generate undressed images of an underage actor. In a separate case, a 13-year-old girl was allegedly expelled after confronting a classmate who had made deepfake pornography of her.

The scale is alarming. In July 2024, the Internet Watch Foundation discovered more than 3,500 images of AI-generated Child Sexual Abuse Material (CSAM) on a single dark web forum. Experts fear this is just the tip of the iceberg.

Mathematician and former AI safety researcher Patrick LaVictoire explains the core problem: generative AI learns by finding patterns in its training data. If that data contains harmful material, the AI can replicate and recombine it. A 2023 Stanford study found one popular training dataset contained over 1,000 instances of CSAM.

While companies like Google and OpenAI claim to use safeguards, such as filters that block harmful prompts, the failure of xAI's Grok highlights systemic vulnerabilities. The push for open-source AI models, championed by Meta among others, could remove even these flimsy safeguards entirely: once a model's weights can be freely downloaded, anyone can strip out its restrictions and run it without oversight, allowing bad actors to create unlimited CSAM generators.

The Legal Battlefield: From 'Horrific but Legal' to Accountability

The legal response is fragmented and often inadequate. New York litigator Akiva Cohen notes that while new laws criminalise some digital manipulation, many abusive acts "consciously stay just on the 'horrific, but legal' side of the line." Using AI to put a minor in a bikini, for instance, may not be a criminal act.

Cohen argues the path to deterrence lies in civil liability: holding companies accountable for enabling harm. Legislative precedents like New York's RAISE Act and California's Senate Bill 53 point in this direction. Following the Grok scandal, X blocked the tool from making sexualised images of real people on its main platform, but this policy does not extend to the standalone Grok app.

Former child actor and attorney Josh Saviano is working on a technological solution—a tool to detect when personal images are scraped online. His team's motto underscores the urgency: "Protect the babies."

Internationally, some nations are acting. China mandates the labelling of AI-generated content, Denmark is legislating to give individuals copyright over their own likeness and voice, and the GDPR offers some protection for personal images in the UK and Europe. The United States, however, presents a grimmer outlook, with executive orders resisting AI regulation and military contracts that prioritise profit over safety.

A Call for Public Action Beyond Boycotts

Wilson concludes that public pressure is essential. While consumer boycotts of platforms like X or Meta send a message, they are insufficient. The public must demand robust legislation, effective technological safeguards, and corporate accountability from companies whose tools facilitate CSAM creation.

She also urges personal responsibility. Parents must understand that innocently shared photos can be weaponised, and must educate their children about digital safety.

"If our obsession with Stranger Danger showed anything, it's that most of us want to prevent child endangerment and harassment," Wilson writes. "It's time to prove it." The deepfake apocalypse for children is not a future threat—it is happening now, and stopping it requires immediate, concerted effort from all sides.