A social network designed for artificial intelligence has exposed thousands of human users' credentials through a security flaw, underscoring the risks of leaning too heavily on machine learning. Moltbook, marketed as an AI social network, was compromised after cybersecurity firm Wiz discovered a "vibe-coded" vulnerability in its platform.
According to Wiz's analysis, the issue arose from a Reddit-style forum that Moltbook's founder claimed had been built entirely by an AI assistant, without human intervention. The results were anything but secure: the platform exposed sensitive information, including 1.5 million API authentication tokens and private messages between agents.
The vulnerability also allowed unauthenticated users to edit live posts, raising concerns about authorship verification on the platform. Wiz further found that the social network's "revolutionary AI" concept was in reality a front for humans operating fleets of bots.
The incident serves as a reminder that while AI can excel at many tasks, it is not infallible, and its output requires human oversight to avoid security breaches of exactly this kind, even on a seemingly well-intentioned platform like Moltbook.