The UK's Technology Secretary, Liz Kendall, has taken a step in the right direction by announcing plans to criminalize the creation of non-consensual intimate images, with the aim of making it easier for victims to seek justice. However, many argue that this approach is insufficient and that stricter measures are needed.
Kendall's plan would make creating such images a criminal offense and would also target the supply of nudification apps. While this may seem like a step forward, critics argue it does not go far enough: platforms like X still allow users to request non-consensual intimate imagery, and those requests are fulfilled with alarming frequency.
One major concern is that platforms are being allowed to profit from the online dehumanization and sexual harassment of women and minors by placing AI image generation behind a paywall. X can continue to generate and publish such images while charging for access, potentially perpetuating harm against victims.
The problem is not just that these images exist, but that they often spread rapidly across social media before being taken down. As Julia Lopez, the shadow technology secretary, has pointed out, this differs from older forms of abuse such as crude drawings or Photoshop edits, which required more effort and technical skill to produce.
The issue is further complicated by the fact that many AI image generation tools are not dedicated nudification tools but general-purpose AI systems with weak safeguards. A ban aimed narrowly at nudification apps could therefore leave platforms like Grok, which has been criticized for generating non-consensual intimate images, operating without more stringent regulation.
Kendall's approach relies on the law waiting for harm to occur before punishing those responsible. For victims, that is often too late: the abuse and harassment have already happened. For that reason, experts advocate a more preventative approach, in which platforms are required to implement proactive detection and filtering measures, along the lines of the sketch below.
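To make "proactive detection and filtering" concrete, here is a minimal sketch of an input filter that screens prompts before they ever reach an image model. Everything in it is an illustrative assumption: the function names, the static keyword list, and the request handler are hypothetical, and no real platform's API is shown. A production system would rely on trained classifiers, image-level checks, and human review rather than a keyword list.

```python
from dataclasses import dataclass

# Hypothetical pattern list for illustration only; real systems would use
# trained classifiers and ongoing human review, not static keywords.
BLOCKED_PATTERNS = ["undress", "nudify", "remove clothes", "naked photo of"]

@dataclass
class FilterResult:
    allowed: bool
    reason: str

def filter_prompt(prompt: str) -> FilterResult:
    """Reject prompts requesting non-consensual intimate imagery
    before any image is generated, rather than moderating afterwards."""
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            return FilterResult(False, f"blocked: matched '{pattern}'")
    return FilterResult(True, "passed input screen")

def handle_generation_request(prompt: str) -> str:
    """Hypothetical request handler: refused prompts never reach the model."""
    result = filter_prompt(prompt)
    if not result.allowed:
        # Refuse and record the attempt for independent audit.
        return f"Request refused ({result.reason})"
    return "Request forwarded to image model"

if __name__ == "__main__":
    print(handle_generation_request("a watercolor of a lighthouse"))
    print(handle_generation_request("nudify this photo of my classmate"))
```

The design point is the placement of the check, not its sophistication: filtering happens at the input stage and produces an auditable refusal, which is what distinguishes prevention from after-the-fact takedowns.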
Another major issue is that the US is moving in the opposite direction, with the Trump administration aiming to reduce AI regulation and increase global "dominance". This makes it hard for countries like the UK to regulate AI effectively: without cross-border cooperation and a unified approach, platforms cannot reliably be held accountable for their actions.
Ultimately, what this means for victims of online abuse is uncertainty. How can they get justice if the perpetrator is halfway across the world? How can they trust that companies will be transparent about their practices and prioritize safety over speed?
The answer lies in regulation that prioritizes prevention over punishment. That means mandatory input filtering, independent audits, and licensing conditions that make prevention a binding technical requirement. Only then can AI companies be held accountable and victims receive the justice they deserve.
As one researcher noted, "Regulation after the fact is better than nothing, but it offers little to the victims who have already been harmed." It's time to shift the burden from removing harmful content after it appears to requiring companies to prove that their systems prevent harm in the first place.