UK Technology Firms and Child Protection Agencies to Test AI's Capability to Generate Abuse Content

Tech firms and child safety agencies will receive authority to evaluate whether AI systems can produce child abuse material under recently introduced British laws.

Significant Rise in AI-Generated Harmful Material

The announcement came as a protection watchdog reported that cases of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the government will allow approved AI developers and child safety groups to inspect AI models – the underlying technology for chatbots and image generators – and ensure they have sufficient safeguards to prevent them from creating images of child sexual abuse.

"Fundamentally about stopping abuse before it happens," declared Kanishka Narayan, adding: "Experts, under strict conditions, can now detect the danger in AI systems early."

Addressing Legal Obstacles

The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such images as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it. This legislation aims to prevent that problem by enabling authorised testers to stop the production of those images at their origin.

Legal Structure

The authorities are introducing the changes as revisions to criminal justice legislation, which also implements a prohibition on owning, producing or distributing AI models designed to create child sexual abuse material.

Practical Consequences

This week, the official toured the London base of Childline and listened to a mock-up call to counsellors involving a report of AI-based abuse.
The interaction depicted an adolescent requesting help after facing extortion using an explicit deepfake of themselves, created with AI.

"When I learn about young people experiencing extortion online, it is a source of intense anger in me and rightful anger amongst parents," he stated.

Alarming Data

A prominent internet monitoring organisation reported that cases of AI-generated abuse material – such as online pages that may include numerous images – had more than doubled so far this year. Instances of category A material – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

- Female children were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025.
- Portrayals of infants and toddlers increased from five in 2024 to 92 in 2025.

Industry Reaction

The legislative amendment could "constitute a vital step to guarantee AI products are safe before they are launched," stated the head of the internet monitoring foundation.

"AI tools have made it so survivors can be victimised all over again with just a few simple actions, giving criminals the capability to create potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued.

"Content which additionally exploits victims' suffering, and makes children, especially girls, less safe both on and offline."

Support Session Information

The children's helpline also published details of counselling sessions where AI was mentioned. AI-related risks discussed in the sessions include:

- Using AI to evaluate weight, body and appearance
- Chatbots dissuading young people from talking to trusted guardians about harm
- Being bullied online with AI-generated material
- Digital blackmail using AI-manipulated images

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, conversational AI and associated topics were mentioned, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including young people using AI assistants for support and turning to AI therapy applications.