
Elon Musk-owned xAI is facing a major privacy scandal after it was reportedly discovered that more than 370,000 conversations with its Grok chatbot had been accidentally made publicly accessible online. The chats, which users believed were private, were indexed by Google and could be found through ordinary searches, reports Forbes. The scale of the leak and the lack of user consent have made the incident especially alarming.
According to the report, the problem originated in Grok’s built-in ‘share’ feature, which lets users generate a link for sharing a conversation with others. Instead of remaining private, those shared links were exposed to search engine crawlers, making both the chats and any uploaded files discoverable by anyone. Crucially, users were never warned that their exchanges could be indexed in this way.
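The report does not detail xAI’s implementation, but the standard way to keep shared pages out of search results is a ‘noindex’ directive, delivered either as an `X-Robots-Tag` HTTP header or a `<meta name="robots">` tag. As an illustrative sketch (the function and inputs below are hypothetical, not xAI’s actual code), this is roughly the check a crawler performs before indexing a page:

```python
import re

def is_indexable(headers: dict, html: str) -> bool:
    """Return True if a page may be indexed under standard robots conventions.

    A 'noindex' directive in either the X-Robots-Tag response header or a
    <meta name="robots"> tag tells search engine crawlers to skip the page.
    Shared-chat pages lacking both signals are fair game for indexing.
    """
    # HTTP header check (header names are case-insensitive)
    tag = next((v for k, v in headers.items() if k.lower() == "x-robots-tag"), "")
    if "noindex" in tag.lower():
        return False
    # Meta robots tag check in the page body
    m = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    )
    if m and "noindex" in m.group(1).lower():
        return False
    return True
```

A shared page served with neither signal, like the Grok links appear to have been, would be treated as indexable by ordinary crawlers.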
The leaked chats contained a wide mix of content. Many of them were harmless, like people asking Grok to write tweets, edit short texts, or help with everyday tasks. But along with those were conversations that revealed much more sensitive information. Some users had uploaded personal files, including documents, spreadsheets, and photos, while others shared passwords, medical questions, and personal issues. More troubling were the conversations in which Grok appeared to give instructions related to illegal or dangerous activities.
In some transcripts, the chatbot reportedly explained how to make powerful drugs like fentanyl and methamphetamine, how to build explosive devices, and how to create malicious computer programs. There were also examples of users asking about breaking into cryptocurrency wallets, and in one case, the chatbot even provided details connected to a plan for assassinating Elon Musk.
The exposure also revealed fictional scenarios involving terrorist attacks and other extreme requests, many of which may have been generated through Grok’s ‘Spicy’ mode. The revelations raise serious questions about xAI’s data-handling practices and whether the company has been negligent in protecting user information.
Notably, xAI is not alone in facing this kind of problem. Earlier, OpenAI had to disable a feature in ChatGPT after it was discovered that some users’ private conversations were being indexed by Google and made searchable. At that time, the company’s Chief Information Security Officer, Dane Stuckey, explained that the issue came from an optional setting in its chat-sharing tool that allowed conversations to be ‘discoverable’ by search engines.
The development comes at a time when xAI is already under intense scrutiny. The company has faced backlash over the controversial ‘Spicy Mode’ in Grok Imagine (its text-to-video tool), criticism for enabling explicit interactions with AI companions, and public outrage after incidents of antisemitic outputs. This latest privacy failure now adds to the challenges facing Elon Musk’s AI venture.