Hundreds of thousands of conversations with Elon Musk’s Grok chatbot have been discovered in Google Search results, while Chinese developer DeepSeek has released its V3.1 model as a direct rival to GPT-5, together raising urgent questions about privacy, competition and security in the AI market.
How the Grok Leak Happened
The exposure of Grok transcripts was first reported by Forbes, which identified more than 370,000 indexed conversations, and later confirmed by the BBC, which counted nearly 300,000 visible through Google Search. The totals differ depending on how and when the search engine indexed the material, but both point to a vast volume of data becoming public without users’ awareness.
The cause appears to lie in how Grok’s “share” feature works: each time a user chose to share a chat, Grok generated a unique webpage containing the transcript. Because those pages carried nothing telling crawlers to stay away, search engines such as Google indexed them automatically. What many users may have assumed would be a private or semi-private link was, in fact, publicly available to anyone searching the web.
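To illustrate the mechanism, the short Python sketch below checks a page for the two signals well-behaved crawlers respect before indexing it, an X-Robots-Tag response header and a robots meta tag. If neither says “noindex” (and robots.txt does not exclude the path), a search engine that discovers the link will generally treat the page as indexable. The URL and page structure here are assumptions for illustration only, not xAI’s actual implementation.

```python
import requests

# Hypothetical shared-chat URL used purely for illustration; not a real xAI endpoint.
SHARED_CHAT_URL = "https://example.com/share/abc123"

def indexing_signals(url: str) -> dict:
    """Check the two signals most crawlers respect before indexing a page."""
    resp = requests.get(url, timeout=10)
    html = resp.text.lower()

    # 1. An HTTP response header such as "X-Robots-Tag: noindex"
    header_blocks = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()

    # 2. A <meta name="robots" content="noindex"> tag in the page itself
    #    (a crude substring check, good enough for a sketch)
    meta_blocks = 'name="robots"' in html and "noindex" in html

    return {
        "x_robots_tag_noindex": header_blocks,
        "meta_robots_noindex": meta_blocks,
        # If neither signal is present, a crawler that discovers the link
        # will typically go on to index the page.
        "likely_indexable": not (header_blocks or meta_blocks),
    }

if __name__ == "__main__":
    print(indexing_signals(SHARED_CHAT_URL))
```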
What Was Revealed?
Reports indicate that the published material varied widely in content. Some transcripts showed harmless exchanges, such as requests for meal plans or password suggestions. Others contained much more sensitive prompts, including questions about medical issues, details about mental health, and even confidential business information.
More troubling still, some indexed conversations reportedly included Grok’s responses to attempts at “red teaming” the system, essentially testing its limits. These produced instructions for making illicit drugs, coding malware and building explosives. In at least one case, the chatbot provided a detailed description of an assassination plot.
This mixture of personal data, sensitive queries and dangerous instructions underlines the scale of the problem. Once indexed, such material can be copied, cached and shared indefinitely, making it difficult (if not impossible) to remove entirely.
Risks for Users and Businesses
For individual users, the risk is obvious. Even if names and account details are obscured, prompts often contain information that could identify a person, their health status, or their location. Privacy experts also warn that conversations about relationships, finances or mental wellbeing could resurface years later.
For businesses, the implications may be more severe still. Many companies now experiment with AI tools for drafting documents, brainstorming ideas or even testing security scenarios. If such exchanges end up publicly indexed, they could inadvertently reveal trade secrets, security weaknesses or sensitive commercial plans. For regulated sectors like healthcare and finance, this creates potential compliance issues.
A Setback for Musk’s AI Venture
For xAI, the start-up behind Grok, this discovery is particularly awkward. Grok has been marketed as a distinctive alternative to established players like OpenAI’s ChatGPT or Google’s Gemini, with direct integration into Musk’s social platform X. However, the exposure of hundreds of thousands of conversations clearly undermines that positioning, fuelling questions over whether xAI has adequate safeguards in place.
The incident is also notable because Musk had previously criticised rivals over similar missteps. Earlier this year, OpenAI briefly allowed shared ChatGPT conversations to be indexed before reversing course after user complaints. At the time, Musk mocked the issue and celebrated Grok as safer. The latest revelations make that claim harder to sustain.
Not the First Time
This is not the first case of chatbot transcripts spreading more widely than users expected. OpenAI’s trial of a shareable ChatGPT link caused uproar earlier in the year, and Meta’s AI tool has faced similar criticism for publishing shared chats in a public feed.
What sets the Grok case apart, however, is the apparent scale and duration. The indexing appears to have been ongoing for months, creating a large reservoir of material in search engines. The mix of personal information with instructions for harmful activity adds another layer of controversy.
China’s DeepSeek Raises the Stakes
Just as Grok is making the news for the wrong reasons, China’s AI sector has added a new dimension to the debate with the release of DeepSeek V3.1, an open-weight model that experts say matches GPT-5 on some benchmarks while being priced to undercut it. The model was quietly launched via WeChat and posted on the Hugging Face platform, and has been optimised to perform well on Chinese-made chips. This reflects Beijing’s determination to build advanced AI systems without relying on Western hardware, particularly Nvidia GPUs, which are increasingly subject to U.S. export controls.
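“Open-weight” means the model files themselves can be downloaded and run locally rather than accessed only through a vendor’s API. As a rough illustration of what that access typically looks like with the Hugging Face transformers library (the repository id below is an assumption based on the reported release, and a model of this size needs a large multi-GPU server rather than ordinary hardware):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative repository id; check the actual model card on Hugging Face before use.
# A ~685-billion-parameter model requires substantial multi-GPU hardware to run at all.
MODEL_ID = "deepseek-ai/DeepSeek-V3.1"  # assumption for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",      # use the dtype stored in the checkpoint
    device_map="auto",       # shard across whatever accelerators are available
    trust_remote_code=True,  # open-weight releases often ship custom model code
)

prompt = "Summarise the trade-offs of mixture-of-experts models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The practical point is that nothing in this workflow depends on the original vendor once the weights are downloaded, which is exactly why open-weight releases shift the competitive and governance picture.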
Technically, DeepSeek V3.1 is striking because of its architecture. With 685 billion parameters, it sits at the level of many so-called “frontier” models, yet its mixture-of-experts design means only a fraction of those parameters activates for any given query, cutting costs and energy use. It also combines fast recall with step-by-step reasoning in a single system, a hybrid approach that only the very top commercial models have offered until now.
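The mixture-of-experts idea can be made concrete with a small, purely illustrative PyTorch sketch (this is not DeepSeek’s code): a router scores a pool of expert networks for each token and only the top few actually run, so the compute used per query is a fraction of the total parameter count.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Illustrative mixture-of-experts layer: many experts, few active per token."""

    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.router(x)                  # (tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalise over the chosen experts only

        out = torch.zeros_like(x)
        # Only the top-k experts run for each token; the rest stay idle,
        # which is why compute per query is a fraction of total parameters.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(5, 64)            # five token embeddings
print(TinyMoELayer()(tokens).shape)    # torch.Size([5, 64])
```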
The release clearly has some significant competitive implications. OpenAI chief executive Sam Altman admitted that Chinese open-source models such as DeepSeek influenced his company’s decision to publish its own open-weight models this summer. If developers can access a powerful model for a fraction of the cost, the balance of adoption may tilt quickly, especially outside the United States.
Security and Governance Concerns Around DeepSeek
While DeepSeek’s technical capabilities are impressive, it seems that some serious security concerns remain. For example, Cisco researchers previously flagged critical flaws in the DeepSeek R1 model that made it vulnerable to prompt attacks and misuse, with tests showing a 100 per cent success rate in bypassing safeguards. Researchers have also observed the model internally reasoning through restricted queries but censoring its outputs, raising questions about hidden biases and control.
For businesses, the central issue here is data governance. UK security experts warn that using DeepSeek for workplace tasks is effectively the same as sending confidential information directly into mainland China, where it may be subject to state access and outside the reach of UK and EU data protection laws. Surveys of UK security leaders show widespread concern, with six in ten believing tools like DeepSeek will increase cyber attacks on their organisations, and many calling for government guidance or outright restrictions.
It should be noted that some countries have already acted. For example, South Korea has suspended new downloads of DeepSeek over privacy compliance, while Germany and Australia have imposed limits on its use in government and critical sectors. In the U.S., senators have urged a formal investigation into Chinese open models, citing risks around data security and intellectual property.
What This Means for the AI Market
The Grok exposure shows how a simple design oversight can turn into a mass privacy failure, while DeepSeek’s release highlights how quickly competition can shift when cost and accessibility align with national industrial strategy. Together they underscore a market where the pace of innovation is outstripping safeguards, leaving both users and regulators struggling to keep up.
Expert and Privacy Group Reactions
Researchers and privacy advocates have warned that the Grok incident highlights a growing structural risk. Dr Luc Rocher of the Oxford Internet Institute has called AI chatbots a “privacy disaster in progress,” noting that sensitive human information is being exposed in ways that current regulation has not kept pace with.
Carissa Véliz, an ethicist at Oxford University, has similarly argued that technologies which fail to clearly inform users about how their data is handled are eroding public trust. She stresses that users deserve transparency and choice over whether their data is made public.
These warnings are consistent with long-standing concerns that AI providers are moving faster than regulators, with weak or inconsistent controls around data sharing.
Who Is Responsible?
Google has confirmed that website owners, not search engines, determine whether content is indexed. In practice, this means responsibility lies with xAI, which could have prevented the problem by blocking the shared pages from being indexed. So far, xAI has not issued a detailed statement, leaving open questions about when the feature was introduced, why there were no clear warnings for users, and what steps will now be taken.
What Can Be Done Now?
The most immediate step would be for xAI to change how its share feature works, either by making links private by default or adding technical restrictions to stop indexing. Privacy experts also stress the need for clearer disclaimers so users understand the risks before sharing.
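One common way to apply such a restriction, sketched below in Python with Flask against a hypothetical /share/<id> route rather than xAI’s real endpoint, is to serve shared pages with a “noindex” robots meta tag and an X-Robots-Tag header so that search engines skip them even when the URL is publicly reachable.

```python
from flask import Flask, make_response

app = Flask(__name__)

# Hypothetical route standing in for a "shared chat" page; not xAI's real endpoint.
@app.route("/share/<chat_id>")
def shared_chat(chat_id):
    page = f"""<!doctype html>
<html>
  <head>
    <meta name="robots" content="noindex, noarchive">
    <title>Shared conversation {chat_id}</title>
  </head>
  <body>Transcript would be rendered here.</body>
</html>"""
    resp = make_response(page)
    # Belt and braces: the header covers crawlers that never parse the HTML.
    resp.headers["X-Robots-Tag"] = "noindex, noarchive"
    return resp

if __name__ == "__main__":
    app.run()
```

Making links private by default, for example behind authentication or expiring tokens, would go further still, since a noindex directive only keeps pages out of search results rather than out of reach.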
For users, the advice is to avoid using the share button altogether until changes are made. Screenshots or secure document sharing may be safer alternatives for distributing chatbot outputs. However, people whose conversations are already exposed face a harder challenge because, even if pages are taken down, cached versions may persist online.
DeepSeek’s rise shows that these issues are not limited to one provider. With its open-weight release, concerns focus less on accidental exposure and more on where data is processed and how it may be governed under Chinese law. Security specialists warn, therefore, that uploading business information into DeepSeek could mean that sensitive material is stored in mainland China, beyond the reach of UK or European compliance frameworks. For companies, this means risk management must cover both inadvertent leaks, as with Grok, and structural governance gaps, as with DeepSeek.
For regulators and policymakers, the combined picture will likely feed into calls for stronger oversight of AI services, particularly as businesses increasingly rely on them for sensitive tasks. Voluntary measures may no longer be enough in a landscape where user data can be published at scale with a single click or transmitted to jurisdictions with very different rules on privacy and access.
What Does This Mean For Your Business?
The Grok exposure highlights the risks of rapid AI deployment without basic data protections, while DeepSeek’s open-weight advance illustrates how quickly competition can shift the ground beneath established players. For UK businesses, the lesson is that generative AI tools cannot be treated as safe environments for sensitive or commercially valuable information unless they are placed behind clear enterprise guardrails. Any organisation using these tools should assume that prompts and outputs could be made public, and should ensure that procurement, data governance frameworks and regulatory compliance are in place before rolling out AI systems at scale.
Privacy advocates and academics have been clear that these events illustrate systemic flaws, not isolated mistakes. Governments are already responding, with bans and suspensions in some countries and calls for investigations in others, and further measures are likely if risks are not addressed. For xAI, the task is to regain trust by fixing its sharing features. For DeepSeek, the challenge is to prove that low-cost open models can also deliver robust safeguards. For the AI industry as a whole, the message is that transparency, data protection and security must move from the margins to the centre of product design. Without that shift, trust from users and businesses will remain fragile, and the adoption of these tools will be held back.