British Technology Companies and Child Protection Agencies to Test AI's Ability to Create Abuse Images
Tech firms and child protection agencies will be granted authority to assess whether AI systems can produce child exploitation material under new UK legislation.
Substantial Rise in AI-Generated Illegal Content
The announcement coincided with findings from a safety watchdog showing that reports of AI-generated child sexual abuse material have risen dramatically in the past year, from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the government will permit approved AI developers and child safety groups to examine AI systems (the underlying technology for chatbots and visual AI tools) and verify they have sufficient protective measures to stop them from producing depictions of child sexual abuse.
The measures are "fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under strict conditions, can now identify the danger in AI systems early."
Tackling Legal Obstacles
The changes address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and others could not generate such images as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was published online before dealing with it.
This legislation is aimed at preventing that issue by helping to stop the production of those materials at source.
Legislative Framework
The changes are being introduced by the government as modifications to the crime and policing bill, which is also establishing a ban on possessing, producing or sharing AI models designed to create exploitative content.
Real-World Consequences
This week, the official toured the London headquarters of Childline and listened to a mock-up call to advisors involving an account of AI-based abuse. The interaction depicted an adolescent seeking help after facing extortion using an explicit deepfake of himself, created with AI.
"When I hear about young people facing blackmail online, it is a source of intense frustration in me and rightful anger amongst families," he stated.
Concerning Statistics
A prominent online safety organization stated that cases of AI-generated exploitation content, such as webpages that may contain multiple images, had more than doubled so far this year.
Instances of the most severe material, the gravest form of exploitation, increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The law change could "represent a crucial step to ensure AI tools are secure before they are launched," stated the head of the online safety foundation.
"Artificial intelligence systems have made it so victims can be targeted repeatedly with just a few simple actions, giving offenders the ability to produce a potentially endless amount of sophisticated, lifelike exploitative content," she continued. "Material which further commodifies survivors' trauma, and renders children, especially female children, more vulnerable both online and offline."
Support Interaction Information
The children's helpline also released details of support sessions in which AI was mentioned. AI-related harms raised in those sessions included:
- Using AI to rate weight, body and appearance
- Chatbots discouraging children from consulting safe guardians about harm
- Being bullied online with AI-generated material
- Online extortion using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 support sessions in which AI, conversational AI and associated topics were discussed, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.