British Technology Companies and Child Safety Officials to Examine AI's Ability to Create Exploitation Content
Tech firms and child safety organizations will be granted permission to assess whether artificial intelligence systems can generate child abuse material under recently introduced UK laws.
Substantial Increase in AI-Generated Harmful Content
The announcement came as a protection monitoring body released findings showing that reports of AI-generated CSAM have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the changes, the authorities will allow approved AI developers and child safety groups to examine AI models, the foundational systems behind conversational AI and image-generation tools, and verify that they have sufficient protective measures to prevent them from creating depictions of child exploitation.
"Fundamentally about stopping abuse before it happens," stated Kanishka Narayan, adding: "Experts, under rigorous conditions, can now identify the risk in AI models promptly."
Tackling Legal Challenges
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others could not generate such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
This law is designed to avert that issue by making it possible to stop the production of such material at source.
Legal Framework
The authorities are introducing the amendments as revisions to the crime and policing bill, which will also prohibit possessing, producing or sharing AI models designed to generate exploitative content.
Practical Impact
This week, the minister toured the London base of a children's helpline and listened to a simulated conversation with advisers involving an account of AI-based exploitation. The interaction portrayed an adolescent seeking help after facing extortion using an explicit deepfake of himself, created with AI.
"When I hear about children experiencing blackmail online, it is a source of intense frustration in me and justified anger amongst families," he stated.
Alarming Statistics
A leading online safety organization reported that cases of AI-generated abuse material, such as web pages that may contain numerous files, had more than doubled so far this year.
Cases of the most severe content, the most serious category of abuse, increased from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a crucial step to guarantee AI products are safe before they are released," stated the chief executive of the online safety organization.
"AI tools have enabled so survivors can be victimised repeatedly with just a simple actions, giving criminals the ability to make potentially limitless quantities of advanced, photorealistic child sexual abuse material," she added. "Content which additionally exploits victims' trauma, and makes children, particularly female children, more vulnerable both online and offline."
Support Session Information
The children's helpline also released details of counselling interactions where AI was mentioned. AI-related harms discussed in the conversations include:
- Employing AI to rate body size, physique and appearance
- Chatbots dissuading children from consulting safe guardians about abuse
- Being bullied online with AI-generated content
- Online extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support interactions where AI, chatbots and related topics were mentioned, significantly more than in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.