British Technology Companies and Child Safety Agencies to Test AI's Ability to Create Abuse Images

Tech firms and child protection organizations will be granted authority to evaluate whether artificial intelligence systems can generate child exploitation material under recently introduced UK laws.

Substantial Increase in AI-Generated Harmful Content

The announcement came as a protection monitoring body published findings showing that cases of AI-generated CSAM have risen dramatically in the past twelve months, from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the government will allow approved AI companies and child protection groups to inspect AI systems – the underlying systems for chatbots and visual AI tools – and verify they have adequate protective measures to prevent them from creating images of child exploitation.

"Ultimately, this is about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under strict protocols, can now identify risks in AI systems promptly."

Tackling Regulatory Challenges

The amendments were introduced because it is against the law to produce or possess CSAM, meaning that AI creators and other parties could not legally create such content even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.

This law is aimed at preventing that issue by helping to stop the production of those images at source.

Legal Structure

The changes are being introduced by the government as modifications to the crime and policing bill, which is also establishing a ban on owning, producing or distributing AI systems developed to create child sexual abuse material.

Real-World Consequences

Recently, the official toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about children experiencing extortion online, it stirs intense anger in me and rightful concern amongst families," he said.

Alarming Statistics

A leading online safety foundation reported that instances of AI-generated abuse material – such as webpages that may include numerous images – had significantly increased so far this year.

Instances of the most severe material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, making up 94% of prohibited AI images in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a vital step to ensure AI products are secure before they are launched," stated the head of the internet monitoring foundation.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few simple actions, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike exploitative content," she continued. "Content which further compounds survivors' trauma, and makes young people, particularly female children, more vulnerable online and offline."

Counselling Session Data

Childline also published details of counselling sessions where AI has been referenced. AI-related risks discussed in the conversations include:

  • Using AI to evaluate body size and appearance
  • AI assistants discouraging young people from talking to safe adults about abuse
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and associated topics were discussed, significantly more than in the equivalent timeframe last year.

Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapeutic applications.

Adam Ross

A passionate gamer and tech writer sharing in-depth analysis on game updates and strategies.