British Technology Companies and Child Protection Agencies to Test AI's Capability to Create Exploitation Content
Technology companies and child protection agencies will be granted permission to evaluate whether artificial intelligence systems can produce child abuse images under recently introduced UK legislation.
Significant Rise in AI-Generated Illegal Material
The announcement came as a child protection watchdog revealed that cases of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the authorities will permit designated AI companies and child protection groups to examine AI systems (the foundational technology behind chatbots and visual AI tools) and ensure they have sufficient safeguards to stop them from producing depictions of child sexual abuse.
"This is fundamentally about stopping abuse before it occurs," declared the minister for AI and online safety, noting: "Experts, under strict protocols, can now identify the danger in AI systems early."
Tackling Regulatory Obstacles
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such images even as part of a safety-testing process. Until now, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.
This legislation is aimed at averting that issue by helping to halt the creation of those materials at their origin.
Legal Structure
The changes are being added by the authorities as revisions to the crime and policing bill, which is also establishing a prohibition on owning, producing or sharing AI models designed to create child sexual abuse material.
Real-World Impact
Recently, the minister visited the London headquarters of a children's helpline and heard a simulated call to counsellors featuring an account of AI-based abuse. The interaction depicted an adolescent seeking help after facing extortion using an explicit deepfake of himself, constructed using AI.
"When I learn about children facing blackmail online, it causes intense frustration in me and rightful concern amongst parents," he stated.
Concerning Statistics
A prominent online safety foundation reported that instances of AI-generated abuse content (such as webpages that may contain numerous images) had more than doubled so far this year.
Cases of category A material, the most serious form of abuse, increased from 2,621 visual files to 3,086.
- Female children were overwhelmingly victimized, making up 94% of illegal AI images in 2025
- Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a vital step to guarantee AI products are secure before they are released," stated the head of the online safety foundation.
"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, giving criminals the capability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which additionally exploits victims' suffering, and makes children, particularly girls, less safe both online and offline."
Counseling Session Data
The children's helpline also published details of support interactions where AI has been referenced. AI-related risks discussed in the sessions include:
- Employing AI to rate weight, body and looks
- Chatbots discouraging young people from consulting trusted guardians about abuse
- Facing harassment online with AI-generated material
- Online extortion using AI-manipulated images
Between April and September this year, Childline conducted 367 support interactions in which AI, conversational AI and associated terms were discussed, significantly more than in the same period last year.
Fifty percent of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.