Technology companies and child safety organizations will be given the power to test whether AI systems can produce child sexual abuse material (CSAM) under recently introduced UK legislation.
The announcement came as a child protection monitoring body released findings showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, designated AI companies and child protection organizations will be authorised to inspect AI models – the technology underlying conversational AI and image generators – and verify that they have adequate safeguards to prevent them from creating images of child exploitation.
"Fundamentally about preventing abuse before it occurs," stated the minister for AI and online safety, noting: "Specialists, under rigorous protocols, can now identify the risk in AI models promptly."
The amendments were introduced because producing and possessing CSAM is illegal, meaning AI developers and other parties have been unable to generate such images even as part of a testing process. Until now, authorities had to wait until AI-generated CSAM appeared online before they could act.
The legislation aims to avert that problem by making it possible to stop the creation of such images at the source.
The government is introducing the changes as amendments to criminal justice legislation, which also establishes a ban on possessing, creating or distributing AI systems designed to generate exploitative content.
This week, the minister visited the London headquarters of Childline and listened to a mock-up of a call to counsellors reporting AI-related abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about children experiencing blackmail online, it is a source of extreme anger in me and justified concern amongst parents," he said.
A leading online safety organization said that reports of AI-generated abuse content – individual web pages that can each contain multiple images – had risen significantly so far this year.
Instances of category A content – the most severe form of exploitation material – rose from 2,621 images and videos to 3,086.
The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are released," commented the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have enabled so survivors can be targeted all over again with just a simple actions, providing offenders the capability to make possibly endless amounts of advanced, lifelike exploitative content," she continued. "Content which additionally commodifies survivors' trauma, and makes children, particularly girls, more vulnerable on and off line."
Childline also released details of counselling sessions in which AI was mentioned and the AI-related harms children discussed in those conversations.
Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related topics were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.