Google’s AI Detection Tool Can’t Decide if Its Own AI Made Doctored Photo of Crying Activist

When the White House X account posted an image depicting activist Nekima Levy Armstrong in tears during her arrest, there were telltale signs that the image had been altered. Less than an hour before, Homeland Security Secretary Kristi Noem had posted a photo of the exact same scene, but in Noem’s version Levy Armstrong appeared composed, not crying in the least.

To determine whether the White House version of the photo had been altered with artificial intelligence tools, researchers turned to Google's SynthID, a detection mechanism that Google says can discern whether an image or video was generated with Google's own AI. Following Google's instructions, they used its AI chatbot, Gemini, to check whether the image contained SynthID forensic markers.

The results were clear: The White House image had been manipulated with Google’s AI. Researchers published a story about it.

After the article was published, however, subsequent attempts to authenticate the image with SynthID through Gemini produced different outcomes. In a second test, Gemini concluded that the image of Levy Armstrong crying was actually authentic.

In a third test, SynthID determined that the image was not made with Google's AI, directly contradicting the first result. The inconsistency raises serious questions about SynthID's ability to tell fact from fiction at a time when AI-manipulated photos and videos are growing increasingly prevalent.

Aside from Google's proprietary tool, there is no easy way for users to test whether an image contains a SynthID watermark. That makes it difficult in this case to determine whether Google's system initially detected a SynthID watermark in an image that lacked one, or whether the subsequent tests missed a watermark the image actually contains.

If AI-detection technology fails to produce consistent results, though, there's reason to wonder who will call bullshit on the bullshit detector. The incident highlights concerns about the reliability of AI-detection tools and their potential misuse in spreading misinformation.

Supporters of the technology argue that tools able to detect AI-generated content will play a critical role in establishing a common truth amid the coming flood of media generated or manipulated by AI. If AI-detection technology cannot deliver accurate results, however, the very notion of fact-checking becomes meaningless.
 
🤔 This whole thing is wild 🌪️. I mean, who creates a tool that can't even tell if it created the image itself? 😂 It's like me saying my clothes are clean when they're actually stained with last night's pizza sauce... not gonna fly 🍕. It just goes to show how flawed AI systems can be and how hard it is to trust them with important tasks like fact-checking. What's next, a lie detector that keeps telling you the truth? 😂
 
I'm getting worried about these new AI detection tools 🤖💡 - like SynthID from Google. If it can't even decide whether its own AI made a doctored photo, how can we trust it in real life? 🤔 It's like trying to catch a cat that's already escaped... or in this case, the misinformation has spread before you know what happened! 😱 I mean, if Gemini said one thing and SynthID said another, that's just too much to handle. What's next? Using AI to fact-check the AI detectors? 🤯 That's like creating a never-ending loop of BS 🔁 It's time for Google (and us) to get this under control! 💪
 
man this is crazy 😱 google's synthid tool can't even verify its own fake images? what's next? using ai to create fake news and then accusing others of spreading misinformation 📰🤖 the lack of consistency in synthid's results raises so many red flags, it's like they're playing a game of whack-a-mole with fact-checking 🎮 if we can't trust our own AI-detection tools, how can we trust anything else? this is a serious problem that needs to be addressed ASAP 💻
 
omg u guys! this is wild 🤯 so google's ai detection tool can't even tell if its own ai made a doctored photo lol what does that say about our reliance on tech these days? 🤔 i mean we need tools to detect fake pics but if those tools are faulty, how r we supposed to know what's real and what's not? 📸 it's like the old saying "if you can't trust the source, don't trust the info" 🤷‍♀️ anyway, this is a major concern for me rn. who's gonna fact-check if AI-detection tools are failing us? 😬
 
I mean, who needs reliable AI detection tools when there's a cat-and-mouse game going on between fact-checkers and AI manipulators anyway? 🙄 It's not like we should be worried about misinformation spreading or anything... I guess it's good that Google is trying to develop this tech, but wouldn't it be more effective if they just came clean and said "hey, our tool can get it wrong sometimes"? 🤷‍♀️
 
I'm totally freaking out over this! 🤯 Like, how can we trust our own "fact-checking" tools when they can't even tell if their own AI created a manipulated image? 🤔 It's like trying to have a conversation with a friend who's having an identity crisis... you're not sure what to believe. 😅

The more I think about it, the more I realize how vulnerable we are to misinformation. If AI-generated tools can't even get their own results straight, what hope do we have against fake news and propaganda? 📰 It's like playing a game of Whac-A-Mole - every time you think you've found a reliable source, another one pops up to challenge it.

We need better quality control on these AI detection tools, ASAP! 💥 And we need to be cautious not to get caught in the echo chamber of our own biases... 🌐 I mean, how can we expect to trust AI-generated "fact-checking" if we're just going to rely on its own flawed logic? 🤷‍♀️
 
I'm so worried about this! If AI tools can't even trust their own tech, how are we supposed to know what's real and what's not? 🤔 I mean, think about it - if Google's SynthID can't decide whether its own AI made a doctored photo of an activist, what hope is there for us regular people trying to figure out the truth? It's like they're creating a whole new level of fake news! 😱 And what really gets me is that we need tools like this in the first place - it feels like we're just throwing technology at our problems without considering the actual consequences. I just hope someone figures out a way to make these AI detectors trustworthy before things get out of hand... 🤞
 
🤖💻 5/10 I'm low-key concerned about Google's SynthID tool right now 🤔... like, I get it, AI-generated pics and vids are getting super sneaky 📸👀 but what if we can't trust the tool that's supposed to detect them? 🤦‍♂️

Here's a chart comparing the results of the three tests:

* Test 1: Gemini says White House image is manipulated with Google AI 🚫
* Test 2: SynthID says image is authentic 🙌
* Test 3: SynthID says image is not made with Google AI 🤷‍♂️

That's a 1-in-3 vote for the White House image being made with Google AI 😅. If I were to turn this data into a bar chart, it would look something like this:

Says image was manipulated with Google AI: 1 of 3 tests (33%)
Says image is authentic / not made with Google AI: 2 of 3 tests (67%)

🤯 That's some serious inconsistency 📊. If AI-detection technology can't get its act together, what's the point of even having fact-checking? 📰💔
 
🤔 I'm kinda worried about these AI detection tools, like SynthID. They're supposed to help us figure out if an image is fake, but it looks like they can't even trust themselves sometimes 🙈. The whole thing with the White House image of Nekima Levy Armstrong crying is just wild - one test says the photo was doctored, and the next one says nope, it's authentic 😩. It's hard to know what's real when these tools can't even decide for themselves! And if they're not accurate, who's going to fact-check for us? 🤷‍♀️
 
🤔 you know what this whole thing got me thinking? it's like trying to separate wheat from chaff with a tool that's still in beta 🌾 just because we have fancy tech doesn't mean our tools are infallible. AI-detection is like the ultimate test of human skepticism - if we can't trust these tools, how can we trust ourselves to discern truth from lies? it's all about perspective and accountability 💡 the question now is who's gonna be the gatekeeper of fact-checking in this new era of misinformation? 🚪
 
🤖 This is super weird. I mean, who makes an AI that can't even tell if its own AI messed with a photo? 🤦‍♂️ It's like they tested it on themselves and said "yeah, we're good" 😅. The fact that SynthID keeps changing its mind about whether the image was manipulated or not is really concerning. What's the point of having an AI detection tool if it can't even trust itself? 🤔 And what does this say about the future of fact-checking in general? We're already dealing with fake news and propaganda, but now we have AI-generated content that's almost indistinguishable from real stuff... it's a whole new level of crazy 😲.
 
🤯 this is like totally crazy! I mean, Google's own AI detection tool can't even decide if it made a doctored photo 📸😱, and now the whole AI detection thing is called into question. What's up with that? 🤔 It's like trying to catch a snake in a jar, impossible 😂. We need more transparency on how these tools work, stat! 💻 And what about when we can't trust our own "fact-checkers"? 🚨 This whole thing is like, totally flawed, you feel me? 🔴
 
🤔 you know what's wild? I was at this food festival last weekend and they had these insane desserts that looked like miniature versions of famous landmarks. Like, there was this one that looked exactly like the Eiffel Tower made out of chocolate 🍰😍. But here's the thing, I started thinking about how those desserts were probably made with AI-generated designs or 3D printing technology... it got me wondering, are we already living in a world where AI is being used to create everything from art to food? It just blows my mind!
 
man this is some wild stuff 🤯 I'm telling you, there's gotta be more to this story than meets the eye 🙃 first off, Google's AI detection tool can't even trust its own tech, that's just crazy talk 🤯 and now it's saying one thing and then another? what's going on here? 🤔 is this some kind of test to see how gullible we are? 💡 or maybe someone is playing a sick game of cat and mouse with our perception of reality? 🐈 idk but I'm not buying it just yet 👀
 
omg this is wild 🤯 like what even is going on with this SynthID thing? it's supposed to be able to detect whether an image is made with Google's AI or not, but it can't even get that right itself 🙄 in the end, does anyone really know if that original photo of Nekima Levy Armstrong crying was real or not? seems like we're just stuck in a neverending loop of AI-generated BS 😒
 
I'm getting super frustrated with this whole situation 🤯. I mean, we're relying on tech giants like Google to detect manipulation in images and videos, but their own tool can't even decide if its own AI made a doctored photo of an activist! It's crazy. We need more transparency and accountability from these companies, especially when it comes to something as critical as fact-checking.

And think about it - if an AI-detection tool can't even trust itself, what hope do we have in relying on it? We're talking about a situation where misinformation is spreading like wildfire, and the only thing that's being verified are the verification mechanisms themselves. It's a total mess 🚮.
 
🤔 this is wild 🤯 i mean google's own AI tool can't even detect its own fake photos lol what does that say about our reliance on tech to fact check? 📸 and yeah, who's gonna call bullshit on the bullshit detector if it fails so miserably? 😒 it's like they're creating a whole new level of misinformation problem... 🤦‍♂️
 
I'm totally blown away by this whole situation 🤯. I mean, can you even imagine using a tool that's supposed to detect fake images and it just can't decide if its own AI made one? 😱 It's like, what do we do then? If the system can't trust itself, how can we trust anything it says?

I've seen some crazy stuff go down on social media, but this takes the cake. I'm all for fact-checking and verifying information, but if the tools we use to do that aren't even working properly... what's the point? 🤷‍♀️ It just highlights how easy it is to manipulate information with AI these days.

And can you imagine the cat-and-mouse game that'll ensue where people try to trick AI-detection tools into thinking a fake image is real? 😹 It's like, we're already living in a sci-fi movie or something. But seriously, someone needs to figure out how to make these tools reliable ASAP.
 
🤯 I mean, what's up with Google's SynthID tool?! 🤔 It can't even decide if its own AI made a doctored photo of an activist 😂! The White House posting an image of Nekima Levy Armstrong crying during her arrest, less than an hour after another photo showed her not crying at all, is wild. And to make matters worse, the tool kept changing its mind on whether the image was real or not 🤯. It's like it was playing a game of AI-whack-a-mole! 😂

I'm all for fact-checking and being able to spot manipulated images, but if the technology can't even get that right, what's the point? 🙄 We need tools that can trust their own results, not one that's just making it up as it goes along. And what about people who aren't tech-savvy? They're going to be completely lost trying to figure out what's real and what's not 🤷‍♀️.

This whole thing is like a big mess of AI-generated BS 💥. We need better tools, more robust testing, and some serious expertise in the field before we can even start calling something "fact-checked" 🙏. Otherwise, we're just perpetuating the problem and spreading misinformation left and right 🚨.
 