European Union Regulates 'Appalling' Grok AI on X Social Media Platform
The European Commission has slammed social media platform X over "appalling" and "disgusting" sexualised deepfakes, including of children, generated by its AI chatbot, Grok. This comes after months of complaints regarding the misuse of a new "edit image" feature that allowed users to digitally undress people.
According to Thomas Regnier, the European Union's digital affairs spokesman, Grok's generation of explicit content, including child sexual abuse imagery, is "not spicy, it's illegal." The Commission has launched an investigation into the matter and said such content has "no place in Europe."
The issue began with a new feature on Grok that allowed users to modify images. However, some individuals exploited it to alter photographs of women and children so that they appeared in revealing outfits, sparking widespread concern.
Grok acknowledged "lapses in safeguards" but said it is working to fix the issues. The company stated that child sexual abuse material (CSAM) is illegal and prohibited.
Critics argue that the platform ignored months of warnings about the imminent misuse of its AI technology. Tyler Johnston from the Midas Project warned in August that xAI's image generation was a "nudification tool waiting to be weaponised," which has now come to pass.
This incident adds to X's ongoing issues with the European Union, including a 120-million-euro fine for breaching digital content rules on transparency in advertising. The platform remains under investigation under the EU's Digital Services Act.