British Technology Companies and Child Protection Agencies to Test AI's Capability to Generate Exploitation Images

Technology companies and child safety organisations will be authorised to evaluate whether AI tools can generate child abuse images under new British legislation.

Significant Increase in AI-Generated Illegal Content

The announcement came as findings from a safety watchdog showed that cases of AI-generated child sexual abuse material have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the government will allow approved AI developers and child protection groups to inspect AI models – the foundational technology for chatbots and image generators – and ensure they have adequate safeguards to prevent them from producing depictions of child sexual abuse.

The change is "ultimately about preventing exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the risk in AI systems early."

Addressing Legal Challenges

The amendments have been introduced because creating and possessing CSAM is against the law, meaning that AI developers and other parties cannot generate such content even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.

This legislation is designed to prevent that problem by helping to stop the creation of those images at source.

Legal Structure

The amendments are being added by the authorities as modifications to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI models designed to create exploitative content.

Practical Impact

This week, the official visited the London headquarters of a children's helpline, where he listened to a mock-up call to advisors featuring a report of AI-based abuse. The interaction portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of himself created using AI.

"When I hear about children facing extortion online, it is a source of extreme anger in me and justified anger amongst families," he stated.

Alarming Data

A leading internet monitoring foundation reported that cases of AI-generated exploitation material – such as online pages that may include multiple images – had more than doubled so far this year.

Cases of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI images in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a crucial step to guarantee AI products are safe before they are launched," commented the chief executive of the internet monitoring foundation.

"AI tools have made it so survivors can be victimised repeatedly with just a few clicks, giving offenders the capability to create potentially endless amounts of sophisticated, lifelike exploitative content," she added. "Material which additionally commodifies survivors' suffering, and makes young people, particularly girls, more vulnerable both online and offline."

Counseling Interaction Data

The children's helpline also released details of counselling interactions where AI has been mentioned. AI-related risks discussed in the conversations include:

  • Using AI to evaluate body size, physique and appearance
  • Chatbots discouraging children from consulting trusted guardians about harm
  • Being bullied online with AI-generated material
  • Online extortion using AI-faked pictures

Between April and September this year, the helpline delivered 367 counselling interactions in which AI, conversational AI and associated topics were discussed, significantly more than in the equivalent timeframe last year.

Half of the references to AI in the 2025 sessions were connected with mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Charles Rodriguez
