The government has announced new laws designed to prevent AI models from being used to create child sexual abuse material.

Under the new legislation, the government said designated bodies like AI developers and child protection organisations, such as the Internet Watch Foundation (IWF), will be empowered to scrutinise AI models and ensure safeguards are in place to prevent them generating child sexual abuse material.

The announcement comes as research published by the IWF on Wednesday reveals that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

According to the watchdog, there has also been a rise in depictions of infants, with images of 0–2-year-olds surging from five in 2024 to 92 in 2025.

The government said that because creating and possessing this material is currently a criminal offence, developers cannot carry out safety testing on AI models, and images can only be removed after they have been created and shared online.

The new legislation, which the government claims is one of the first of its kind in the world, enables the robust testing of AI systems’ safeguards from the outset.

The government added that the new laws will allow organisations to check models have protections against extreme pornography and non-consensual intimate images.

While possessing and generating child sexual abuse material is already illegal under UK law, the government said that improvements in AI image and video generation capabilities present a growing challenge.

The new measures aim to make it more difficult to circumvent safeguards and prevent the misuse of AI models.

Additionally, the government will bring together a group of experts in child safety and AI to ensure that this work is carried out safely and securely.

The group will help design the safeguards needed to protect sensitive data, prevent any risk of illegal content being leaked, and support the wellbeing of researchers involved.

“We will not allow technological advancement to outpace our ability to keep children safe,” said technology secretary Liz Kendall. “By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”
