Elon Musk's xAI Published Hundreds Of Thousands Of Grok Chatbot Conversations
Elon Musk's AI firm, xAI, has made a shocking discovery that has left many users concerned about their online privacy. The company has published the chat transcripts of hundreds of thousands of conversations between its chatbot Grok and its users - without those users' knowledge or permission.
Grok is a cutting-edge chatbot that uses advanced artificial intelligence to provide responses to user queries. Anytime a user clicks the "share" button on one of their chats with the bot, a unique URL is created, allowing them to share the conversation via email, text message, or other means. However, unbeknownst to users, this unique URL is also made available to search engines like Google, Bing, and DuckDuckGo, making those conversations searchable to anyone on the web.
In other words, when a user hits the share button, their conversation will be published on Grok's website without warning or a disclaimer. Today, a Google search for Grok chats shows that the search engine has indexed what Google estimates is more than 370,000 user conversations with the bot.
The Conversations
The shared pages revealed conversations between Grok users and the LLM (Large Language Model) range from simple business tasks like writing tweets to generating images of a fictional terrorist attack in Kashmir and attempting to hack into a crypto wallet. Some users even asked intimate questions about medicine and psychology, revealing personal details and at least one password shared with the bot.
Image files, spreadsheets, and some text documents uploaded by users could also be accessed via the Grok shared page. Unfortunately, not all the conversations were as benign as some of the examples found in the indexed pages. Some were explicit, bigoted, and violated xAI's rules.
The Rules
xAI prohibits its bot to "promote critically harming human life" or develop bioweapons, chemical weapons, or weapons of mass destruction. However, in published, shared conversations easily found via a Google search, Grok offered users instructions on how to make illicit drugs like fentanyl and methamphetamine, code self-executing malware, construct bombs, and methods of suicide.
Furthermore, Grok even offered a detailed plan for the assassination of Elon Musk. These illicit instructions were then published on Grok's website and indexed by Google, all without the users' knowledge or consent.
The Fallout
xAI did not respond to a detailed request from Forbes, leaving many questions unanswered. However, this incident is not an isolated one. Earlier this month, users of OpenAI's ChatGPT were alarmed to find that their conversations were appearing in Google search results - despite having opted to make those conversations "discoverable" to others.
OpenAI quickly changed its policy and discontinued the feature after outcry from users. However, Elon Musk took a victory lap on Twitter, claiming that Grok had no sharing feature and tweeting "Grok ftw". It's unclear when exactly Grok added this feature, but X users have been warning since January that Grok conversations were being indexed by Google.
The Consequences
Some of the conversations asking for instructions on how to manufacture drugs and bombs were likely initiated by security engineers, redteamers, or Trust & Safety professionals. However, in at least a few cases, even professional AI researchers like Nathan Lambert were misled.
Lambert, a computational scientist at the Allen Institute for AI, used Grok to create a summary of his blog posts to share with his team. He was shocked to learn from Forbes that his prompt and response were indexed on Google, despite no warnings of it.
The Future
Google allows website owners to choose when and how their content is indexed for search. Publishers have full control over whether they are indexed. However, this incident highlights the need for more transparency and better moderation from AI companies like xAI and OpenAI.
Oportunists are beginning to notice and take advantage of Grok's published chats. Marketers on LinkedIn and the forum BlackHatWorld have discussed intentionally creating and sharing conversations with Grok to increase the prominence and name recognition of their businesses and products in Google search results.