British Technology Firms and Child Safety Officials to Test AI's Capability to Generate Exploitation Content
Tech firms and child protection agencies will receive authority to assess whether AI systems can produce child exploitation material under recently introduced British laws.
Significant Rise in AI-Generated Illegal Content
The announcement coincided with findings from a child protection watchdog showing that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the government will allow designated AI companies and child protection groups to examine AI models – the foundational technology behind chatbots and image-generation tools – and verify that they have sufficient safeguards to prevent them from producing child sexual abuse imagery.
"Ultimately about preventing abuse before it occurs," declared the minister for AI and online safety, noting: "Specialists, under strict protocols, can now identify the danger in AI models promptly."
Addressing Legal Obstacles
The changes have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and others could not generate such images as part of an evaluation process. Until now, authorities could only act after AI-generated CSAM had been uploaded online.
The new law is designed to avert that problem by helping to stop the production of such images at the source.
Legislative Framework
The government is introducing the amendments to the Crime and Policing Bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Practical Impact
Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to advisers involving a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I learn about children experiencing blackmail online, it is a source of extreme frustration in me and justified anger amongst families," he stated.
Alarming Data
A prominent online safety organisation said that instances of AI-generated exploitation content – each of which can be a webpage containing multiple images – had risen significantly so far this year.
Instances of the most severe category of material rose from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, accounting for 94% of prohibited AI images in 2025
- Depictions of children ranging from newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a vital step to ensure AI tools are secure before they are launched," said the head of the internet monitoring organisation.
"Artificial intelligence systems have made it so victims can be targeted all over again with just a few clicks, providing criminals the ability to create possibly endless amounts of sophisticated, photorealistic exploitative content," she added. "Material which additionally commodifies survivors' trauma, and renders young people, especially female children, less safe both online and offline."
Counselling Session Data
The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Using AI to assess weight, body shape and appearance
- Chatbots dissuading children from consulting trusted guardians about abuse
- Being bullied online with AI-generated content
- Digital extortion using AI-faked images
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and associated topics were mentioned – significantly more than in the equivalent period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.