Grok, the AI-powered image generator developed by Elon Musk's xAI and built into the X social media platform, has been used to create thousands of non-consensual images of women, digitally altering their photos in response to user prompts to show them "undressed" or in bikinis.
While the images stop short of explicit nudity, experts consider them a serious form of abuse, with some describing the episode as the most widespread mainstream use of non-consensual AI-generated intimate imagery. Critics argue that X's failure to adequately address the problem has made such content easier to create and has helped normalize it.
The use of Grok to create sexualized images is part of a broader, increasingly concerning trend: explicit deepfakes have become more sophisticated and accessible, with the services that produce them generating at least $36 million a year. The National Center for Missing and Exploited Children reported a 1,325% increase in reports involving generative AI between 2023 and 2024.
Regulators in several countries are now responding. Australia's online safety regulator has launched enforcement action against one of the largest "nudifying" services, while officials in France, India, and Malaysia have raised concerns about, or threatened to investigate, X's role in the creation of non-consensual imagery.
The issue has sparked calls for greater regulation and enforcement from lawmakers and regulators. The TAKE IT DOWN Act, passed by Congress last year, makes it illegal to publicly post non-consensual intimate imagery, including deepfakes. Online platforms will soon be required to provide a mechanism for users to flag such content and to respond to those reports within 48 hours.