New Powers to Test AI Models for Child Safety Safeguards
The UK government has introduced a significant legal change that will grant technology companies and child protection agencies explicit permission to examine artificial intelligence models for their ability to produce child abuse imagery. The measure, announced as an amendment to the Crime and Policing Bill, addresses the alarming rise in AI-generated child sexual abuse material (CSAM) reported by safety watchdogs.
Reports of AI-generated CSAM have more than doubled in the past year, rising from 199 instances in 2024 to 426 in 2025, according to recent data. The legislation will allow designated organisations to proactively test the underlying technology behind popular AI tools, including chatbots such as ChatGPT and video generators such as Google's Veo 3, to ensure they contain adequate safeguards against the creation of illegal child sexual abuse content.
Stopping Abuse Before It Happens
Kanishka Narayan, the minister for AI and online safety, emphasised that this move is "ultimately about stopping abuse before it happens." He explained that under strict conditions, experts can now identify risks in AI models at an early stage, adding: "When I hear about children experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst parents."
The legal change was necessary because existing laws make it illegal to create or possess CSAM, even for testing purposes. Previously, authorities could only intervene after AI-generated abusive material had been created and uploaded online. This new approach aims to prevent the problem at its source by enabling pre-emptive safety testing.
Narayan recently visited Childline's London base, where he listened to a simulated call demonstrating how AI technology is being used to blackmail teenagers through sexualised deepfakes.
Rising Threat of AI-Generated Abuse Material
The Internet Watch Foundation, which monitors CSAM online, reported that instances of the most serious category A abuse material rose from 2,621 images or videos to 3,086 over the same period. Its data shows that girls account for 94% of subjects in illegal AI images identified in 2025, while depictions of newborns to two-year-olds rose sharply from just five in 2024 to 92 in 2025.
Kerry Smith, chief executive of the Internet Watch Foundation, described the legislation as "a vital step to make sure AI products are safe before they are released." She warned that "AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material."
Childline has released statistics showing that counselling sessions mentioning AI, chatbots and related terms have quadrupled compared with the same period last year. Between April and September this year, the service delivered 367 such sessions. The conversations revealed a range of AI-related harms, including:
- Using AI to rate weight, body and looks
- Chatbots discouraging children from speaking to safe adults about abuse
- Online bullying with AI-generated content
- Blackmail using AI-faked images
Half of the AI mentions in 2025 sessions related to mental health and wellbeing, including children using chatbots for support and AI therapy apps. The new legislation represents a crucial development in the ongoing battle to protect children in an increasingly digital world, where artificial intelligence presents both extraordinary opportunities and significant risks.