UK Tech Companies and Child Protection Agencies to Test AI's Capability to Create Abuse Content

Under recently introduced UK legislation, technology companies and child protection agencies will be granted authority to test whether artificial intelligence systems can produce child sexual abuse images.

Significant Rise in AI-Generated Harmful Material

The announcement came as a safety watchdog published findings showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, designated AI developers and child protection organizations will be permitted to examine AI models – the foundational systems behind chatbots and image-generation tools – to ensure they have adequate safeguards to prevent them from creating depictions of child sexual abuse.

"Fundamentally about preventing abuse before it occurs," stated the minister for AI and online safety, noting: "Specialists, under rigorous conditions, can now detect the danger in AI systems promptly."

Addressing Legal Obstacles

The changes address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and other parties could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

The new law is designed to avert that problem by making it possible to stop the production of such material at source.

Legislative Vehicle

The changes are being introduced by the government as amendments to the Crime and Policing Bill, which also establishes a prohibition on owning, creating or sharing AI models developed to generate child sexual abuse material.

Real-World Consequences

Recently, the minister visited the London headquarters of Childline and listened to a mock-up call to counsellors involving an account of AI-based abuse. The call portrayed an adolescent seeking help after facing extortion using a sexualised deepfake of himself, created with AI.

"When I hear about children facing extortion online, it is a cause of intense anger in me and rightful anger amongst families," he stated.

Alarming Statistics

A leading internet monitoring foundation said that reports of AI-generated exploitation content – each of which can refer to a webpage containing numerous images – had more than doubled so far this year.

Instances of category A material – the most serious form of abuse – rose from 2,621 image and video files to 3,086 over the same period.

  • Girls were overwhelmingly the victims, featuring in 94% of illegal AI imagery in 2025
  • Depictions of children aged newborn to two years old increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a crucial step to guarantee AI tools are safe before they are released," stated the chief executive of the internet monitoring organization.

"AI tools have enabled so victims can be targeted all over again with just a few clicks, giving offenders the capability to make possibly limitless quantities of advanced, lifelike exploitative content," she continued. "Content which additionally commodifies survivors' trauma, and makes children, especially girls, more vulnerable on and off line."

Support Interaction Data

Childline also published details of counselling sessions where AI has been referenced. AI-related risks mentioned in the conversations include:

  • Using AI to rate weight, body and looks
  • AI assistants discouraging children from talking to trusted adults about harm
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, Childline delivered 367 support sessions where AI, chatbots and related topics were discussed, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.
