Google's AI Detection Tool Can't Decide Whether Its Own AI Created Doctored Photo of Crying Activist. In a striking about-face, the company's own detection tool contradicted itself on whether an image posted by the White House contained synthetic details generated by Google's generative AI.
A photo of activist Nekima Levy Armstrong in tears during her arrest went viral after it was posted by the White House X account. But less than an hour later, Homeland Security Secretary Kristi Noem had posted a photo of the same scene with Levy Armstrong appearing composed, not crying at all. The images were starkly different, prompting questions about whether the White House image had been manipulated using artificial intelligence tools.
In search of answers, the White House image was run through Google's SynthID to determine whether it had been generated or edited with AI. SynthID is designed to embed invisible watermarks into images created with Google's generative AI tools so that they can be detected later. But when the check was performed through Gemini, Google's AI chatbot, the results varied from one attempt to the next.
Initially, Gemini reported that the crying image contained forensic markers indicating it had been manipulated with Google's generative AI. That finding was published in a story about the controversy, which reported that the White House image had been doctored.
However, later attempts to run the same analysis produced inconsistent results. In one test, Gemini said the image was authentic; in another, it said SynthID had determined the image was not made with Google's AI tools, directly contradicting its earlier answer.
The inconsistencies raise serious questions about SynthID's reliability and its ability to distinguish genuine photographs from content manipulated with artificial intelligence.
This is just one instance of an AI detection tool falling short, raising concerns about such tools' limitations and potential for misuse. As the technology continues to evolve, developers will need to ensure these tools are robust and reliable if they are to establish trust in digital media.
The controversy highlights how difficult it is to detect AI-manipulated content, even when a company is checking images made with its own tools. Google has described SynthID as a "bullshit detector" designed to identify synthetic details generated by AI, but its own tool's inability to produce consistent results suggests this remains an unsolved problem for the industry.
For now, users will need to rely on other methods to verify the authenticity of digital content.