AI Child Sexual Abuse Material Safeguards Strengthened by Proposed UK Law

By Claire Donovan, News Editor

The UK government is set to introduce new legislation empowering designated bodies, including the Internet Watch Foundation (IWF) and AI developers, to proactively test artificial intelligence models for their potential to generate child sexual abuse material (CSAM) before such content appears online. This marks a significant step in combating online child exploitation, shifting the focus from reactive removal to prevention.

Proposed Legal Framework and Its Implications

Under amendments expected to be added to the Crime and Policing Bill, AI developers and child protection organizations will be authorized to safely and securely assess AI systems for their propensity to produce illegal content such as CSAM, extreme pornography, and non-consensual intimate images. The government plans to establish a panel of experts to oversee these testing processes to ensure compliance with safety protocols and legal boundaries.

Technology Secretary Liz Kendall emphasized the importance of integrating child safety measures into AI system design, stating, “These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk. By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”

The Role of the Internet Watch Foundation and Impact on AI Safety

The IWF, responsible for removing hundreds of thousands of child abuse images each year, supports this move as crucial for early intervention. Kerry Smith, CEO of the IWF, described the legislation as a “vital step to make sure AI products are safe before they are released” and welcomed the capacity to address abusive content creation at its origin rather than after proliferation.

This approach aligns with ongoing efforts to develop technological tools that can detect AI-generated abuse images. The College of Policing has highlighted the evolving nature of AI-generated child sexual abuse material, noting that computer-generated images—even non-realistic or cartoon-style depictions derived from real photographs—can now be prosecuted as indecent images under case law. However, detection technology still needs refinement to keep pace with increasingly sophisticated abuses.

Rising Prevalence of AI-Generated Child Sexual Abuse Material

Recent data from the IWF reveals that reports of AI-generated CSAM have more than doubled over the past year, with the most severe content (Category A)—including images involving penetrative sexual activity and sadism—rising from 2,621 to 3,086 items. This category now represents 56% of all illegal material flagged, up from 41% the previous year. Girls remain overwhelmingly the victims, comprising 94% of all detected AI-generated abuse images in 2025.

These trends underscore growing concerns about AI’s misuse in facilitating child exploitation online. The NSPCC has urged the government to make the testing of AI models mandatory rather than optional, arguing that only compulsory regulations will ensure child protection is embedded in AI design. Rani Govender, NSPCC’s child safety online policy manager, noted, “To make a real difference for children, this cannot be optional. Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design.”

Context and Broader Implications

International law enforcement and child advocacy groups have increasingly acknowledged AI’s role in generating and distributing illegal content, which complicates traditional policing methods. The UK’s proactive regulatory model positions it as a global leader in the prevention of AI-enabled child exploitation. The legislative initiative reflects a necessary adaptation to rapid advances in AI technology, to which traditional legal frameworks have struggled to keep pace.

For instance, the College of Policing highlights that while AI detection capabilities are still developing, expert visual analysis remains critical in distinguishing AI-generated abuse from real imagery. The deeper integration of AI safety into development processes may help stem broader issues of violence against women and girls by tackling the problem at its root.

For more on efforts to tackle child exploitation, see the International Crime and Policing updates on AI abuse prevention at Reuters. Additional reporting on AI regulations and child protection can be found at Globally Pulse.
