British Tech Firms and Child Safety Officials to Examine AI's Ability to Generate Exploitation Images
Tech firms and child protection agencies will be granted permission to assess whether artificial intelligence systems can produce child exploitation images under new UK laws.
Significant Rise in AI-Generated Harmful Content
The announcement coincided with revelations from a safety monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the authorities will permit approved AI companies and child protection organizations to examine AI models – the underlying technology for conversational AI and image generators – and ensure they have sufficient safeguards to stop them from producing images of child sexual abuse.
"This is ultimately about stopping abuse before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the risk in AI models promptly."
Addressing Legal Challenges
The changes have been implemented because it is illegal to produce and possess CSAM, meaning that AI developers and other parties could not generate such images even as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
This law is designed to avert that problem by helping to stop the production of such images at source.
Legislative Framework
The amendments are being introduced by the authorities as modifications to the crime and policing bill, which is also establishing a ban on possessing, creating or distributing AI models developed to generate child sexual abuse material.
Real-World Impact
This week, the official visited the London base of Childline and listened to a mock-up of a call to counsellors featuring a report of AI-based exploitation. The interaction depicted an adolescent seeking help after facing extortion using a sexualised AI-generated image of themselves.
"When I hear about children experiencing extortion online, it is a source of intense anger in me and justified concern amongst parents," he said.
Alarming Statistics
A leading internet monitoring organization reported that instances of AI-generated abuse content – such as online pages that may contain multiple files – had more than doubled so far this year.
Instances of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a vital step to ensure AI products are secure before they are released," stated the chief executive of the online safety organization.
"AI tools have made it so survivors can be victimised repeatedly with just a few simple actions, giving criminals the ability to make potentially limitless amounts of advanced, lifelike child sexual abuse material," she added. "Content which further exploits victims' trauma and makes young people, especially girls, more vulnerable online and offline."
Support Session Data
The children's helpline also released details of support interactions where AI has been referenced. AI-related risks discussed in the sessions include:
- Using AI to assess body size, shape and appearance
- AI assistants dissuading young people from consulting safe adults about harm
- Facing harassment online with AI-generated content
- Online blackmail using AI-faked images
Between April and September this year, Childline conducted 367 support sessions where AI, conversational AI and associated topics were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.