Tech companies and UK child safety agencies to test AI tools' ability to create abuse images

New Law Allows Experts to Test AI Tools for Child Abuse Images Before They're Released

The UK is making a major change to how child safety agencies can scrutinize potentially problematic artificial intelligence tools. The change is aimed at tackling the alarming rise in AI-generated child sexual abuse material (CSAM): according to a recent report, cases of such images more than doubled, from 199 instances in 2024 to 426 in 2025.

Under the new legislation, tech companies and designated child safety organizations will be granted permission to evaluate AI models used by chatbots like ChatGPT and image generators like Google's Veo 3. The aim is to ensure these tools are designed with robust safeguards that prevent them from creating child abuse images.

Experts involved in this process hope it will help identify potential risks early, allowing developers to address issues before their tools reach the public. Campaigners stress that AI-generated CSAM already poses a significant threat to children's safety, both online and offline.

The Internet Watch Foundation reported that 94% of recent AI-generated images targeted girls, with newborns to two-year-olds increasingly being depicted in these images. Survivors have also been victimized through AI tools, which can create sophisticated, photorealistic child abuse material at the click of a button.

Childline has seen an increase in counseling sessions where AI is mentioned as a concern. The helpline reported 367 such sessions between April and September this year, four times as many as in the same period last year. Many children are being bullied online using AI-generated content or blackmailed with AI-faked images.

The government believes that by testing AI tools before they're released, it can help prevent the creation of child abuse images at source. The changes to the Crime and Policing Bill also introduce a ban on possessing, creating, or distributing AI models designed to generate CSAM.

Experts and child safety organizations have welcomed the change, stressing that testing models before release is key to preventing further harm to children.
 
It's heartbreaking to see the rise of AI-generated child abuse material online 🀕... We gotta ask ourselves, what's the value of a technology if it can be used to hurt so many innocent lives? The fact that 94% of these images target girls & newborns highlights how vulnerable we've made our children online.

The new law may seem like a silver bullet, but its impact will depend on how tech companies & child safety orgs work together 🀝... We need to keep pushing for better safeguards, not just technical solutions. It's time for us all to think about the human cost of AI & ensure we're building technologies that serve humanity, not harm it πŸ’»
 
Ugh πŸ™„, I don't know how tech companies can afford not testing their AI tools for child abuse images... Like, it's not that hard to do a quick scan, right? πŸ˜’ And what's with the lack of data on AI-generated CSAM? We need more research done ASAP! πŸ’» I mean, we can't just sit around waiting for experts to test these models... The fact that 94% of recent AI-generated images targeted girls is super concerning 🀯. And photorealistic child abuse material at the click of a button? 😨 It's like something out of a horror movie! πŸ‘»
 
πŸ€” I'm glad they're taking this seriously, tbh 😊. It's crazy how fast AI-generated CSAM is spreading online, especially with girls and newborns being targeted 🚨. The fact that 94% of these images are targeting young girls is just heartbreaking ❀️. We need to make sure these tech companies are held accountable for building tools with safeguards that prevent this kind of abuse 🀝.

I'm also impressed by the Internet Watch Foundation for doing this kind of work, it's not an easy job πŸ‘. And I think banning AI models designed to generate CSAM is a good start, but we need to make sure these tools are being tested properly too πŸ’―. The child safety agencies should be able to identify potential risks early on, and then developers can address those issues before they reach the public 🚫.

It's a tough issue, but I'm glad the government is taking it seriously 😊. We just need to make sure we're doing everything in our power to keep kids safe online πŸ’».
 
I'm so down with the new law being super strict about testing AI tools before they're released πŸ€–πŸ˜’. Like, why wait till they've already been released and potentially harming kids online? We should be taking a zero-tolerance approach to this whole thing. 94% of those AI-generated images are targeting girls, what's next? 🚫 Those tech companies think they can just create some fancy safeguards and call it a day? I don't trust 'em as far as I can throw my phone πŸ’». And honestly, who cares about the developers addressing issues before release? What if that means they get to keep working on these tools without anyone holding them accountable? πŸ€¦β€β™‚οΈ Let's just shut this whole thing down ASAP πŸ”’.
 
just hope this new law makes a real difference πŸ’»πŸ’Έ i mean think about it if these AI tools can create super realistic images of kids just imagine how messed up that is 🀯 and yeah 94% targeted girls gotta be some kinda sick stats 😩 i feel bad for the kids who are being victimized online and offline πŸ‘ΆπŸΌπŸ‘¦πŸ» and i think this law change is a good start but it's not like it's gonna magically solve everything πŸ’” we need to keep pushing forward and make sure these AI tools are super safe 🀞
 
πŸ€” This new law is like, finally something being done about these AI tools and child abuse images πŸ™Œ... but also kinda worrying that tech companies are being allowed to test them before release... what if it's not good enough? πŸ€¦β€β™‚οΈ And 94% of the AI-generated images targeted girls? That's just devastating πŸ’”... I've been hearing from some friends who have been bullied online with AI-faked images and it's so scary 😱...
 
omg can't believe they're taking this seriously at last 🀯 finally something being done about these sickos who create this filth on AI tools, i mean chatbots should be designed with safeguards like how else are we supposed to protect our kids online lol 94% of those images are targeting girls and it's getting crazy what these devs can do with AI now imagine if they just got their hands on more data πŸ€·β€β™€οΈ hope this law change helps reduce the numbers, it's getting too out of hand for my blood πŸ’‰
 
I'M SO GLAD TO SEE THE UK TAKING STEPS TO PROTECT ITS CHILDREN FROM AI-GENERATED CHILD ABUSE MATERIAL 🀝🏼😒 IT'S ABSOLUTELY DISGUSTING THAT PEOPLE CAN CREATE SUCH DEPRAVED CONTENT WITH JUST A CLICK OF A BUTTON! 🚫 I FEEL LIKE WE'RE MAKING GOOD PROGRESS, BUT WE NEED TO KEEP PUSHING FOR BETTER SAFEGUARDS AND EDUCATION ON ONLINE RISKS πŸ“ŠπŸ’»
 
πŸ€– this is so important!!! we gotta make sure these ai tools are safe for kids they're making these nasty images left and right and it's getting worse every year we need more people involved in testing them before they get out there so we can stop it from spreading πŸš«πŸ’»
 
πŸ€” I'm all for tackling this growing issue of AI-generated child abuse material - it's sickening to think about how quickly these tools can create convincing, disturbing content that can haunt kids online... 🚫 But at the same time, we gotta be real here - no law or test can guarantee 100% safety. We're still dealing with complex tech that's being developed by humans (who are fallible, let's face it) and AI systems that can be pushed to do some pretty twisted stuff. πŸ’»

I'm also a bit worried about the potential for over-regulation - we don't want to stifle innovation in this space entirely. Can't we find a balance between safety and progress? 🀝 Maybe we should focus on educating the next gen of developers, who are more likely to be aware of these issues from the get-go... or at least be trained to spot the red flags early on? πŸ“š That'd be a step in the right direction imo. πŸ‘
 
I'm so glad they're finally taking action against these AI tools that can create child abuse images 🀝 it's crazy to think how far we've come from just a few years ago when this was all still relatively unknown. But at the same time, I worry about how effective these tests will be in identifying potential risks early on... what if they're not thorough enough? πŸ€” and what about the tech companies that have been using AI for other purposes too? Are they really going to start from scratch when it comes to child safety? πŸ€·β€β™€οΈ
 