I'm not surprised, though - after all, AI is just a tool, and like any tool, it has its limitations. The problem is that we're relying on these tools to make some pretty big judgments about what's real and what's not. It's like trying to catch a fish with your bare hands - it's just not going to work.
What really gets me is that we need to be having this conversation now, while the technology is still in its infancy. We can't afford to wait until people are already depending on these tools before demanding more testing and refinement. We need to push the industry to make these tools robust and reliable - otherwise, we're going to end up with a whole lot of misinformation.
And let's be real, this is just one example of what can go wrong when we lean too heavily on technology to verify information. There will be plenty more incidents like this, and it's up to us to make sure we're prepared to deal with them.