UK regulator Ofcom opens a formal investigation into X over CSAM scandal

The UK's media regulator, Ofcom, has launched a formal investigation into X after receiving reports that its AI chatbot account, Grok, was being used to create and share explicit images of people, potentially amounting to intimate image abuse or child sexual abuse material (CSAM). The probe focuses on whether X has complied with its duties to protect users from content illegal in the UK.

Grok's alleged misuse has raised concerns among regulators worldwide. Malaysia and Indonesia have already taken action, blocking access to Grok due to insufficient safeguards against creating non-consensual deepfakes of women and children. Indonesia described the issue as a "serious violation of human rights, dignity, and safety" in the digital space.

The investigation will examine X's measures to prevent users from accessing priority illegal content, including CSAM and non-consensual intimate images. It will also assess whether X carried out an updated risk assessment before making significant changes to its platform, and whether it has effective age assurance in place to protect children from seeing pornography.

Ofcom has asked xAI for clarification on the steps the company is taking to protect UK users and has conducted an expedited assessment of available evidence as a matter of urgency. The regulator emphasized that platforms must protect people in the UK from content that's illegal in the UK, and it will not hesitate to investigate where companies are failing in their duties, especially where there's a risk of harm to children.

The investigation comes amid reports that X has begun telling users its image generation tools are limited to paying subscribers. However, non-paying users can still generate images through the Grok tab on the X website and app.

If Ofcom finds that X has broken the law, it can require the platform to take specific steps to come into compliance or to remedy the harm caused by the breach. The regulator can also impose fines of up to Β£18 million ($24.3 million) or 10 percent of "qualifying" worldwide revenue, whichever is higher.
 
I'm not sure why all this fuss over Grok, X's AI chatbot... I mean, I know it's bad news when CSAM gets shared online, but come on, people! Can't we just have a nuanced conversation about tech companies taking responsibility for their products? I think Ofcom's got the right idea here, but maybe they're being too heavy-handed. What if X takes steps to improve its safety features and users are still able to find workarounds? Is that really worth an Β£18 million fine? πŸ€‘
 
Ugh πŸ€• I'm not surprised at all about this X stuff. Like, I've been saying it for ages - AI is just a fancy word for 'problem waiting to happen' 😳. Can't these tech giants just get their act together and put some basic safeguards in place? πŸ™„ I mean, who thought it was a good idea to let users create and share explicit images of people online? It's just a recipe for disaster πŸ’₯.

And what really gets my goat is that X seems to be trying to shift the blame onto its users. 'Limited' image generation tools for paying subscribers? Give me a break πŸ€‘. It's like they're saying, "Oh, we didn't do anything wrong, it's just our users who are stupid enough to not read the fine print." πŸ™„

This investigation by Ofcom is long overdue, and I'm glad someone is finally holding X accountable 🀝. But let's be real, this is just a small fraction of the problems that come with AI and online platforms. We need systemic changes, not just slap-on-the-wrist fines πŸ’Έ.
 
πŸ€” this whole CSAM scandal on Grok is super worrying, and I think Ofcom's move to launch a formal investigation is a good one πŸ‘ but X needs to do more than just ask users to pay for limited image generation tools πŸ€‘ the fact that non-paying users can still create images through the Grok tab is just red flag after red flag πŸ”΄ it's like they're not taking this issue seriously at all, and that's not okay πŸ’” what really concerns me is how X can be sure that their AI chatbot isn't creating CSAM on its own, and whether they have a system in place to detect and remove such content immediately πŸ” if Ofcom finds out that X hasn't been doing enough to protect users, there should definitely be consequences πŸ€¦β€β™‚οΈ but consequences shouldn't just stop at fines, we need to see real change from X, like major overhauls to their moderation policies and more transparency about what they're doing to keep users safe πŸ“Š
 
OMG this is so worrisome! 🀯 I was on X and saw that Grok was making non-consensual intimate images and I immediately reported it 😬. I'm just glad that Ofcom is taking action because this is a huge problem. It's not right that people can still make those types of images even if they're paying subscribers, it's gotta be fixed ASAP πŸ’ͺ. And Β£18 million fine? That's like, whoa! πŸ€‘ But seriously though, kids should be protected from all this stuff, we need to do better πŸ‘.
 
πŸ€• this is so messed up! how could x be so reckless with people's safety? i mean, creating CSAM is a serious crime and it's not something to be taken lightly 🚫... and now ofcom is investigating them and they can potentially face big fines... hopefully x will do the right thing and fix their platform ASAP πŸ’»... but how did this even happen in the first place? πŸ€”
 
I think Ofcom is being way too harsh on X πŸ™„ X is just trying to protect users and make some adjustments to its platform, what's the big deal? It's not like they intentionally set out to create CSAM or anything πŸ˜’ I mean, it sounds like there were some technical issues that needed to be sorted out, but come on, let's not jump all over them just yet πŸ€”. And yeah, maybe X did mess up, but at least they're taking steps to fix it and improve their platform. Let's not forget that they're a company trying to make a living here πŸ’Έ
 
I'm really worried about what's been going on with X and their Grok AI chatbot πŸ€•. I mean, creating and sharing explicit images of people without their consent is just not right at all 😑. As a platform, they have this huge responsibility to keep users safe online, especially when it comes to kids πŸ‘§. I hope Ofcom's investigation can get to the bottom of this ASAP πŸ’».

I think X needs to take a closer look at how they're protecting their users, especially when it comes to CSAM and non-consensual intimate images 🀝. They need to make sure that paying subscribers aren't getting special treatment, and that everyone has equal access to the same safety features 🚫. It's not just about the law, it's about keeping people safe online πŸ‘.

I'm also really concerned about the fact that non-paying users can still generate images through Grok 🀯. That just doesn't seem right at all 😬. X needs to rethink their whole approach to user safety and make sure that they're doing everything in their power to protect users from harm πŸ’ͺ.
 
This whole thing with Grok is super scary 🀯 but at least Ofcom's taking action and investigating X. I mean, the fact that Malaysia and Indonesia blocked access to Grok already shows how serious the issue is. But it's also kinda good that this is being looked into - maybe it'll lead to some bigger changes in how tech companies handle CSAM and user safety 🀝. And yeah, Ofcom's got a point about platforms needing to protect UK users from illegal content... it's not just about X, but all of us who are vulnerable online πŸ’‘.
 
πŸ˜• I'm kinda surprised it's taken this long for Ofcom to open an investigation into X's Grok AI chatbot πŸ€–. I mean, I've been saying it for ages - just because something's on the internet doesn't mean it's safe or allowed. And now we're seeing the real consequences of people being careless with that stuff. Like, yeah, blocking access to Grok in Malaysia and Indonesia is a good start, but we need more concrete steps here in the UK πŸ‡¬πŸ‡§. I'm not sure what kind of measures Ofcom's looking for, but I hope it's more than just some generic 'we're taking steps to prevent this' stuff πŸ€”. We need real action and accountability on this one πŸ’ͺ
 
😱 I'm totally shocked about this 🀯! X needs to step up its game ASAP πŸš€πŸ”₯ if it wants to regain trust with UK users and regulators. Creating explicit images using Grok's AI chatbot without proper consent or safeguards is a massive no-no πŸ”’πŸ‘Ž. Ofcom has got the right to investigate, and I'm hoping they'll take this seriously πŸ’―.

Grok was always a bit sketchy πŸ€”, but I never thought it'd lead to something this serious 😱. Non-paying users still being able to generate images is just crazy 🀯! X needs to patch things up pronto πŸ”©πŸ’» and make sure its AI tools are not only safe but also transparent about what they're capable of πŸ’‘.

The Β£18 million fine could be a wake-up call for X πŸ‘€, and I hope they use this as an opportunity to revamp their platform πŸ”„. Age assurance is key, especially when it comes to children 🀝. We need more transparency and accountability from tech giants like X πŸ“ŠπŸ’»! πŸ‘
 
OMG this is so not okay 🀯! I mean, think about it, a platform like X, which is meant to be a fun and creative space for people, instead gets used to create explicit content that can potentially harm or traumatize others. It's just disturbing 😱. And the fact that they made these changes without a thorough risk assessment and didn't update their age assurance measures is just a huge red flag 🚨.

I feel like this investigation is super necessary, and I hope Ofcom takes action because it's clear X isn't doing enough to protect its users, especially kids πŸ€•. It's not just about the UK either, this affects people all over the world who use these platforms. We need better safeguards in place to prevent something like this from happening again πŸ’».

And can we talk about how transparent X is being? Like, they're telling paying subscribers that their image generation tools are limited, but what about non-paying users? That's just not right πŸ€”. I'm hoping Ofcom gets some clarity on that too.
 
I'm really worried about this whole thing with Grok and X. I mean, creating explicit images without consent is a major no-no, you know? 🀯 As a user, it's sickening to think that non-paying users can still make those kinds of images even after X said the tools were limited to paying subscribers. Like, what's the point of even having limits if they're not enforced? πŸ€”

I think X needs to step up their game and take responsibility for this mess. They need to show Ofcom exactly how they plan to protect UK users from this kind of content and make sure it doesn't happen again. πŸ’Ό It's not just about following the law, it's about being a responsible platform that cares about its users' well-being.

And let's be real, this is just the tip of the iceberg when it comes to AI safety concerns. We need stricter regulations in place to prevent these kinds of incidents from happening again and again. It's time for X (and other platforms) to get serious about protecting their users! πŸ’ͺ
 
OMG 🀯 like what's going on with X and Grok?? 🚨 they gotta get their act together ASAP πŸ’₯ I mean who lets an AI chatbot create explicit images of people?? πŸ€·β€β™€οΈ that's a no-go in my book πŸ‘Ž especially when it comes to CSAM 🚫 the regulator needs to step up and hold them accountable πŸ’ͺ Ofcom is doing the right thing, they need to investigate how X complies with UK laws 🌟 I hope they take serious action against X if they find out they've broken the law 🀝 we can't let companies like X put our safety at risk πŸ‘₯
 
omg u guys! this x thing is getting crazy 🀯 like what even is going on with these ai chatbots?! i mean i know they're supposed to be helpful but when they get used for making explicit pics of ppl thats just wrong 😷 and now theres a formal investigation?? ofcom is all over it πŸ’β€β™€οΈ

i feel bad 4 the kids who have access to this stuff, like how r they supposed 2 protect them from this kinda content? πŸ€” and whats up w/ x's risk assessment? didnt they think thru the consequences of making these tools available 2 the public?! πŸ€¦β€β™‚οΈ

anywayz, i hope ofcom gets 2 the bottom of this and requires x to do some serious damage control πŸ’ͺ cuz if they dont, ppl r gonna get hurt and that cant be right πŸ’”
 
omg u guyz 🀯 i think x is being super unfair to its users πŸ˜’ they're literally just telling paying subscribers that their image gen tools r limited but non-paying users can still use them πŸ™„ and thats not exactly transparent or safe for all users πŸ’” i mean, ofc there needs 2 be safeguards in place against csam & non-consensual deepfakes πŸ‘€ but x's gotta take responsibility 4 its actions πŸ€·β€β™‚οΈ and not just shift the blame to uk regulators πŸ™…β€β™‚οΈ
 
πŸ€” Come on X, get your stuff together! πŸ™„ I mean, who wants their AI chatbot creating explicit images that can be used for intimate abuse or CSAM? Not me, that's for sure. 😱 The fact that non-paying users can still generate these images through the Grok tab is just a huge red flag. πŸ’” Ofcom needs to do its job and make X comply with its duties to protect users. And honestly, I think this investigation should be a wake-up call for all platforms that host AI chatbots - you gotta keep your users safe! πŸ‘
 
I'm really disappointed πŸ€• in X's handling of this CSAM scandal 😱. The fact that Grok, their AI chatbot, can still be accessed by non-paying users despite the claims of image generation limitations raises serious concerns about user safety and data protection 🚨. It's like they're playing a cat-and-mouse game with regulators πŸ‘€. I think Ofcom is right to launch an investigation into X's measures for preventing CSAM and non-consensual intimate images, as it's clear that their current safeguards are inadequate πŸ’”. The emphasis on protecting children from seeing pornography should be taken very seriously 🀝. It's essential for tech companies like X to prioritize user safety and implement robust age assurance mechanisms πŸ”’. Anything less would be unacceptable 😐.
 
Ugh 🀯 I'm so worried about this Grok thing! Like, how could X not keep its users safe from explicit images? It's just basic common sense you know? πŸ™„ They're basically saying it's okay to create CSAM and share non-consensual intimate images on their platform... what kind of messed up world are we living in? 😱 And it's not like X is doing much to stop it either, they're still making money from it and telling everyone the tools are limited to paying subscribers πŸ€‘... meanwhile, anyone can still access this shit through the Grok tab. I'm literally shaking thinking about all the young people who might stumble upon this stuff. The UK needs to step up its game on protecting its citizens online. Like, what's next? Are we just gonna let tech giants play with our kids' lives like that? 😩
 
idk how x could let this happen πŸ€”... like, we all know AI has its limitations but come on! πŸ™„ and now kids are being exposed to CSAM? that's just not right 🚫... i think the UK regulator should do more than just investigate, they should take drastic measures πŸ’ͺ... ofcom should be able to fine x a lot more than Β£18 million, like maybe Β£100m+ ⛑️... and what about all those paying subscribers who are still making these images? shouldn't they face consequences too? 🀝
 