New Law Allows Experts to Test AI Tools for Child Abuse Images Before They're Released
The UK is overhauling child safety agencies' powers to scrutinize potentially harmful artificial intelligence tools. The change is aimed at tackling the alarming rise in AI-generated child sexual abuse material (CSAM): according to a recent report, cases of such imagery more than doubled from 199 in 2024 to 426 in 2025.
Under the new legislation, tech companies and designated child safety organizations will be permitted to evaluate the AI models behind chatbots like ChatGPT and video generators like Google's Veo 3. The aim is to ensure these tools are built with robust safeguards that prevent them from creating child abuse images.
Experts involved in the process hope it will surface risks early, allowing developers to address problems before their tools reach the public. Child safety advocates stress that AI-generated CSAM already poses a significant threat to children both online and offline.
The Internet Watch Foundation reported that 94% of recent illegal AI-generated images depicted girls, and that children from newborns to two-year-olds are increasingly being portrayed. Survivors of abuse have also been revictimized through AI tools, which can create sophisticated, photorealistic child abuse material at the click of a button.
Childline has seen a rise in counseling sessions in which AI is mentioned as a concern. The helpline delivered 367 such sessions between April and September this year, four times as many as in the same period last year. Many children are being bullied online with AI-generated content or blackmailed with AI-faked images.
The government believes that testing AI tools before they are released can help prevent the creation of child abuse images at source. The changes to the Crime and Policing Bill also introduce a ban on possessing, creating, or distributing AI models designed to generate CSAM.
Experts and child safety organizations have welcomed the change, emphasizing its potential to prevent further harm to children.