370k Grok AI Chats Made Public Without User Consent
More than 370,000 conversations between users and Grok, the large language model (LLM) chatbot from Elon Musk's xAI, have been made public without user consent. The chats were published on the Grok website and indexed by search engines such as Google.
The issue came to light when users shared conversations with others: the share link was not only delivered to the recipient but also left open to search-engine crawlers, allowing anyone to find and read the chat without ever being sent it directly. Users were given no warning or disclaimer that sharing a conversation would make it publicly searchable.
The Consequences
As a result of this mistake, many sensitive and intimate conversations have been exposed. Some chats revealed personal details, including names and at least one password that a user had shared with the bot.
Forbes reviewed some of these conversations and found that they contained explicit instructions on how to make illicit drugs such as fentanyl and methamphetamine, code self-executing malware, and construct a bomb. One chat even laid out a detailed plan for the assassination of Elon Musk.
A Growing Concern
This is not the first time something similar has happened with AI chatbots. ChatGPT transcripts have also appeared in Google search results, although in those cases users had opted in to making their conversations discoverable to others.
The Grok revelation is particularly ironic given Elon Musk's previous baseless claims about the Apple and OpenAI partnership being a threat to privacy.
A Call for Transparency
As AI technology becomes more advanced, it's essential that companies prioritize transparency and user consent. The fact that users were not warned about their conversations being made public highlights the need for better governance and regulation in this area.
What's Next?
The incident has sparked a wider conversation about data protection and user control in AI-powered chatbots, and it will likely add to the pressure for clearer safeguards around sharing features.
We will continue to monitor this situation and provide updates as more information becomes available.