Featured Article : 300,000+ Grok Chats Exposed Online
Hundreds of thousands of conversations with Elon Musk’s Grok chatbot have been discovered in Google Search results, while China has released DeepSeek V3.1 as a direct rival to GPT-5, together raising urgent questions about privacy, competition and security in the AI market.
How the Grok Leak Happened
The exposure of Grok transcripts was first reported by Forbes, which identified more than 370,000 indexed conversations, and later confirmed by the BBC, which counted nearly 300,000 visible through Google Search. The numbers differ slightly depending on how the search engine indexes the material, but both point to a vast volume of data becoming public without users’ awareness.
It appears that the cause lies in how Grok’s “share” feature works. Each time a user chose to share a chat, Grok generated a unique webpage containing the transcript. Because those pages were not blocked from being crawled, search engines such as Google indexed them automatically. What many users may have assumed would be a private or semi-private link was, in fact, publicly available to anyone searching the web.
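The prevention is well understood: a page that should not appear in search results can declare itself non-indexable. Below is a minimal Python sketch, using only the standard library, of how a shared-transcript handler might set the relevant signals. The handler and page content are hypothetical illustrations, not xAI's actual implementation:

```python
from http.server import BaseHTTPRequestHandler

def share_page_headers() -> dict:
    """Headers a shared-transcript page could send so that crawlers
    neither index the page nor follow links from it."""
    return {
        "Content-Type": "text/html; charset=utf-8",
        # X-Robots-Tag is honoured by major search engines and keeps
        # the page out of results even if other sites link to it.
        "X-Robots-Tag": "noindex, nofollow",
    }

class ShareHandler(BaseHTTPRequestHandler):
    """Hypothetical handler for a /share/<id> transcript page."""

    def do_GET(self):
        self.send_response(200)
        for name, value in share_page_headers().items():
            self.send_header(name, value)
        self.end_headers()
        # The equivalent meta tag covers crawlers that only read HTML.
        self.wfile.write(
            b'<html><head><meta name="robots" content="noindex, nofollow">'
            b"</head><body>Shared transcript</body></html>"
        )
```

A `robots.txt` rule blocking the share path would stop crawling, but it is the `noindex` directive that actually keeps already-discovered pages out of search results.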
What Was Revealed?
Reports indicate that the published material varied widely in content. Some transcripts showed harmless exchanges, such as meal plans or password suggestions. Others contained much more sensitive prompts, including questions about medical issues, details about mental health, and even confidential business information.
More troubling still, some indexed conversations reportedly included Grok’s responses to attempts at “red teaming” the system, essentially testing its limits. These produced instructions for making illicit drugs, coding malware and building explosives. In at least one case, the chatbot provided a detailed description of an assassination plot.
This mixture of personal data, sensitive queries and dangerous instructions underlines the scale of the problem. Once indexed, such material can be copied, cached and shared indefinitely, making it difficult (if not impossible) to remove entirely.
Risks for Users and Businesses
For individual users, the risk is obvious. Even if names and account details are obscured, prompts often contain information that could identify a person, their health status, or their location. Privacy experts also warn that conversations about relationships, finances or mental wellbeing could resurface years later.
For businesses, the implications may be more severe still. Many companies now experiment with AI tools for drafting documents, brainstorming ideas or even testing security scenarios. If such exchanges end up publicly indexed, they could inadvertently reveal trade secrets, security weaknesses or sensitive commercial plans. For regulated sectors like healthcare and finance, this creates potential compliance issues.
A Setback for Musk’s AI Venture
For xAI, the start-up behind Grok, this discovery is particularly awkward. Grok has been marketed as a distinctive alternative to established players like OpenAI’s ChatGPT or Google’s Gemini, with direct integration into Musk’s social platform X. However, the exposure of hundreds of thousands of conversations clearly undermines that positioning, fuelling questions over whether xAI has adequate safeguards in place.
The incident is also notable because Musk had previously criticised rivals over similar missteps. Earlier this year, OpenAI briefly allowed shared ChatGPT conversations to be indexed before reversing course after user complaints. At the time, Musk mocked the issue and celebrated Grok as safer. The latest revelations make that claim harder to sustain.
Not the First Time
This is not the first case of chatbot transcripts spreading more widely than users expected. OpenAI’s trial of a shareable ChatGPT link caused uproar earlier in the year, and Meta’s AI tool has faced similar criticism for publishing shared chats in a public feed.
What sets the Grok case apart, however, is the apparent scale and duration. The indexing appears to have been ongoing for months, creating a large reservoir of material in search engines. The mix of personal information with instructions for harmful activity adds another layer of controversy.
China’s DeepSeek Raises the Stakes
Just as Grok makes headlines for the wrong reasons, China’s AI sector has added a new dimension to the debate with the release of DeepSeek V3.1, an open-weight model that experts say matches GPT-5 on some benchmarks while being priced to undercut it. The model was quietly launched via WeChat and posted on the Hugging Face platform, and has been optimised to perform well on Chinese-made chips. This reflects Beijing’s determination to build advanced AI systems without relying on Western hardware, particularly Nvidia GPUs, which are increasingly subject to U.S. export controls.
Technically, DeepSeek V3.1 is striking because of its architecture. With 685 billion parameters, it sits at the level of many so-called “frontier” models. However, its mixture-of-experts design means only a fraction of the model activates when answering a query, cutting costs and energy use while combining fast recall with step-by-step reasoning in a single system. This hybrid approach is something only the very top commercial models have offered until now.
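The mixture-of-experts idea can be illustrated in a few lines: a router scores all experts for a given input, but only the top-k actually run, so compute per query scales with k rather than with the total parameter count. A toy sketch in pure Python, with made-up single-number "experts" standing in for large sub-networks (this is the general technique, not DeepSeek's actual routing):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_scores, k=2):
    """Route input x to the top-k experts only.

    experts: list of callables (stand-ins for large sub-networks)
    router_scores: one raw routing score per expert for this input
    Returns the gate-weighted sum of the activated experts' outputs
    and the indices of the experts that actually ran.
    """
    gates = softmax(router_scores)
    top = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:k]
    # Renormalise the gates over the selected experts only.
    z = sum(gates[i] for i in top)
    out = sum(gates[i] / z * experts[i](x) for i in top)
    return out, top

# Eight tiny "experts"; only two of them run for any given input.
experts = [lambda x, i=i: x * (i + 1) for i in range(8)]
scores = [0.1, 2.0, 0.3, 0.0, 1.5, 0.2, 0.1, 0.0]
y, active = moe_forward(3.0, experts, scores, k=2)
```

With k=2 of 8 experts active, only a quarter of the "parameters" do any work per query, which is the source of the cost and energy savings the article describes.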
The release clearly has some significant competitive implications. OpenAI chief executive Sam Altman admitted that Chinese open-source models such as DeepSeek influenced his company’s decision to publish its own open-weight models this summer. If developers can access a powerful model for a fraction of the cost, the balance of adoption may tilt quickly, especially outside the United States.
Security and Governance Concerns Around DeepSeek
While DeepSeek’s technical capabilities are impressive, it seems that some serious security concerns remain. For example, Cisco researchers previously flagged critical flaws in the DeepSeek R1 model that made it vulnerable to prompt attacks and misuse, with tests showing a 100 per cent success rate in bypassing safeguards. Researchers have also observed the model internally reasoning through restricted queries but censoring its outputs, raising questions about hidden biases and control.
For businesses, the central issue here is data governance. UK security experts warn that using DeepSeek for workplace tasks is effectively the same as sending confidential information directly into mainland China, where it may be subject to state access and outside the reach of UK and EU data protection laws. Surveys of UK security leaders show widespread concern, with six in ten believing tools like DeepSeek will increase cyber attacks on their organisations, and many calling for government guidance or outright restrictions.
It should be noted that some countries have already acted. For example, South Korea has suspended new downloads of DeepSeek over privacy compliance, while Germany and Australia have imposed limits on its use in government and critical sectors. In the U.S., senators have urged a formal investigation into Chinese open models, citing risks around data security and intellectual property.
What This Means for the AI Market
The Grok exposure shows how a simple design oversight can turn into a mass privacy failure, while DeepSeek’s release highlights how quickly competition can shift when cost and accessibility align with national industrial strategy. Together they underscore a market where the pace of innovation is outstripping safeguards, leaving both users and regulators struggling to keep up.
Expert and Privacy Group Reactions
Researchers and privacy advocates have warned that the Grok incident highlights a growing structural risk. Dr Luc Rocher of the Oxford Internet Institute has called AI chatbots a “privacy disaster in progress,” noting that sensitive human information is being exposed in ways that current regulation has not kept pace with.
Carissa Véliz, an ethicist at Oxford University, has similarly argued that technologies which fail to clearly inform users about how their data is handled are eroding public trust. She stresses that users deserve transparency and choice over whether their data is made public.
These warnings are consistent with long-standing concerns that AI providers are moving faster than regulators, with weak or inconsistent controls around data sharing.
Who Is Responsible?
Google has confirmed that website owners, not search engines, determine whether content is indexed. In practice, this means responsibility lies with xAI, which could have prevented the problem by blocking the shared pages from being indexed. So far, xAI has not issued a detailed statement, leaving open questions about when the feature was introduced, why there were no clear warnings for users, and what steps will now be taken.
What Can Be Done Now?
The most immediate step would be for xAI to change how its share feature works, either by making links private by default or adding technical restrictions to stop indexing. Privacy experts also stress the need for clearer disclaimers so users understand the risks before sharing.
For users, the advice is to avoid using the share button altogether until changes are made. Screenshots or secure document sharing may be safer alternatives for distributing chatbot outputs. However, those people whose conversations are already exposed face a harder challenge because, even if pages are taken down, cached versions may persist online.
DeepSeek’s rise shows that these issues are not limited to one provider. With its open-weight release, concerns focus less on accidental exposure and more on where data is processed and how it may be governed under Chinese law. Security specialists warn, therefore, that uploading business information into DeepSeek could mean that sensitive material is stored in mainland China, beyond the reach of UK or European compliance frameworks. For companies, this means risk management must cover both inadvertent leaks, as with Grok, and structural governance gaps, as with DeepSeek.
For regulators and policymakers, the combined picture will likely feed into calls for stronger oversight of AI services, particularly as businesses increasingly rely on them for sensitive tasks. Voluntary measures may no longer be enough in a landscape where user data can be published at scale with a single click or transmitted to jurisdictions with very different rules on privacy and access.
What Does This Mean For Your Business?
The Grok exposure highlights the risks of rapid AI deployment without basic data protections, while DeepSeek’s open-weight advance illustrates how quickly competition can shift the ground beneath established players. For UK businesses, the lesson is that generative AI tools cannot be treated as safe environments for sensitive or commercially valuable information unless they are placed behind clear enterprise guardrails. Any organisation using these tools should assume that prompts and outputs could be made public, and should ensure that procurement, data governance frameworks and regulatory compliance are in place before rolling out AI systems at scale.
Privacy advocates and academics have been clear that these events illustrate systemic flaws, not isolated mistakes. Governments are already responding, with bans and suspensions in some countries and calls for investigations in others, and further measures are likely if risks are not addressed. For xAI, the task is to regain trust by fixing its sharing features. For DeepSeek, the challenge is to prove that low-cost open models can also deliver robust safeguards. For the AI industry as a whole, the message is that transparency, data protection and security must move from the margins to the centre of product design. Without that shift, trust from users and businesses will remain fragile, and the adoption of these tools will be held back.
Tech Insight : How Your ‘Metadata’ Helps Scammers
In this tech insight, we look at how hidden metadata (embedded in files, emails, images and documents) is increasingly being used by scammers to profile, deceive and attack UK businesses, and how firms can protect themselves.
What Is Metadata and Why Does It Matter to Businesses?
Metadata is often described as “data about data”. It is the invisible layer of information attached to digital content, i.e. emails, Word documents, PDFs, spreadsheets, photographs, that describes how, when and by whom the file was created. Most people only ever see the visible content, but underneath lies a wealth of additional detail.
For example, a photo shared externally might contain GPS coordinates and the type of device used. Also, a Word document may carry the author’s name, the company domain, editing history and even internal file paths. Emails include headers that record the sending IP address, the mail server used, and the route taken.
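Email headers are a concrete case: Python's standard library can pull routing metadata out of any raw message, which is exactly the kind of passive inspection an attacker performs on a leaked or forwarded email. The message below is invented for illustration:

```python
from email import message_from_string

raw = """\
Received: from mail.example-corp.co.uk (10.0.4.22) by mx.example.net
From: Jane Doe <jane.doe@example-corp.co.uk>
To: supplier@example.net
Subject: Q3 invoice run
X-Mailer: Microsoft Outlook 16.0

Please see the attached invoice schedule.
"""

msg = message_from_string(raw)

# Each header leaks something useful to an attacker: internal
# hostnames and IP ranges, the naming convention for staff
# addresses, and the mail client in use.
recon = {
    "internal_host": msg["Received"].split()[1],
    "sender": msg["From"],
    "mail_client": msg["X-Mailer"],
}
```

No intrusion is needed: everything in `recon` was carried along with an ordinary email.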
Therefore, as highlighted by these examples, this invisible data matters because criminals do not always need to break into a system to learn about it. Metadata provides them with an information-rich trail they can use to understand how a business operates, who works there, and what technologies are in place. For UK businesses already facing high levels of phishing and fraud, this exposure creates another avenue for attack.
How Scammers Exploit Metadata
– Mapping the Organisation
In the early stages of a cyberattack, reconnaissance is everything and metadata is a valuable source of intelligence, helping attackers map out how an organisation works. For example, email headers can reveal communication patterns between staff. File metadata can identify the software tools a business relies on. Author names, revision histories and internal folder structures point to job roles and responsibilities. All of this can help scammers to build a picture of the target before any overt intrusion is attempted.
– Spear Phishing and Business Email Compromise
Metadata turns generic phishing into precision-engineered deception. For example, fraudsters can use internal project names or document formats drawn from metadata to make their phishing emails look authentic. In Business Email Compromise (BEC) scams, where criminals impersonate senior executives or trusted partners, metadata-derived details lend credibility and increase the likelihood of success.
The scale of phishing in the UK highlights the danger. For example, the UK Cyber Security Breaches Survey 2025 found that 43 per cent of businesses suffered a cyberattack or breach in the past year, equating to around 612,000 organisations. Of these, 85 per cent identified phishing as the cause, making it by far the leading threat. Also, separate research by Visa reports that 41 per cent of UK SMEs suffered fraud in the last year, with phishing, invoice scams and bank hacks the most common methods.
– Document-Level Social Engineering
Documents uploaded to websites or sent externally can inadvertently expose staff names, revision histories and company systems. Attackers use these details to craft fake invoices, letters or reports that look convincing. Security firm Outpost24 has shown how document metadata can reveal usernames, shared drive paths and software versions, all of which can be weaponised in targeted scams.
Real-World Lessons from Metadata
Several cases over the past two decades show how metadata, often overlooked in day-to-day business use, can surface in ways that expose sensitive information or provide attackers with a clear advantage.
– Merck Vioxx Litigation. In a landmark legal case, Microsoft Word documents disclosed revision histories showing that negative clinical trial results had been deleted. While not a cyberattack, it underlines how damaging metadata can be when exposed.
– Public Document Reconnaissance. Researchers at cybersecurity company Outpost24 demonstrated how simple metadata inspection of public files can expose organisational hierarchies and IT systems, effectively handing attackers a blueprint for intrusion.
– Email Metadata Inference. Academic studies have shown how even anonymised email metadata can reveal relationships between employees, peak activity times and internal workflows, demonstrating the power of metadata even without direct content access.
The Bigger Picture
The 2025 Cyber Security Breaches Survey also revealed that ransomware incidents, though less common than phishing, doubled from 0.5 per cent to 1 per cent of UK businesses, affecting nearly 19,000 firms. Meanwhile, cyber-enabled fraud hit 3 per cent of businesses, with average losses of £5,900, rising to £10,000 when excluding zero-cost cases.
Visa’s SME research shows that fraud cost small firms an average of £3,808 each, while the UK’s National Crime Agency continues to highlight phishing and social engineering as dominant forms of cyber-enabled crime.
These findings illustrate how metadata sits at the heart of many of today’s most prevalent attacks. By offering a hidden but rich data source, it makes phishing easier to personalise and fraud more convincing.
Metadata’s Dual Role
It should be noted, however, that metadata is not always a liability. Investigators and compliance officers use it to verify documents, trace timelines and detect manipulation. Revision histories, for example, can prove when a file was altered, while consistent timestamps across files can support fraud detection.
The problem is that criminals are equally aware of this. Fraudsters often scrub or alter metadata to conceal tampering, complicating detection efforts. Shift Technology has noted that this deliberate scrubbing is now a common tactic to cover fraudulent activity.
For businesses, the challenge is striking a balance: retain metadata internally where it supports compliance and investigation, but ensure sensitive metadata is removed before documents are shared externally.
Practical Steps for Businesses
Thankfully, there are straightforward measures that both individual employees and organisations can take to reduce the risks posed by metadata exposure. For example:
User-Level Actions
– Remove metadata before sharing externally. Tools such as Microsoft Office’s “Inspect Document” or PDF sanitisation features can strip out hidden data.
– Use VPNs when remote working. This helps mask IP addresses that could otherwise be logged in email headers.
– Be wary of attachments. Metadata-driven spear phishing makes fraudulent documents look highly credible.
– Provide staff training. Employees must understand that even ordinary files can carry sensitive metadata that exposes the business.
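The first step above can be sketched directly. A .docx file is just a ZIP archive, and the author fields live in `docProps/core.xml`. A minimal standard-library sketch of the mechanism (Office's own "Inspect Document" does this far more thoroughly, also clearing `docProps/app.xml` and revision data):

```python
import io
import re
import zipfile

def strip_docx_author(data: bytes) -> bytes:
    """Return a copy of a .docx (a ZIP archive) with the author
    fields in docProps/core.xml blanked out.

    This sketch only shows the mechanism; a real sanitiser would
    also clear company, editing-time and revision metadata.
    """
    src = zipfile.ZipFile(io.BytesIO(data))
    out_buf = io.BytesIO()
    with zipfile.ZipFile(out_buf, "w", zipfile.ZIP_DEFLATED) as out:
        for item in src.infolist():
            content = src.read(item.filename)
            if item.filename == "docProps/core.xml":
                text = content.decode("utf-8")
                # Blank the creator and last-modified-by elements.
                text = re.sub(r"(<dc:creator>).*?(</dc:creator>)", r"\1\2", text)
                text = re.sub(
                    r"(<cp:lastModifiedBy>).*?(</cp:lastModifiedBy>)", r"\1\2", text
                )
                content = text.encode("utf-8")
            out.writestr(item, content)
    return out_buf.getvalue()
```

The same ZIP-plus-XML structure applies to .xlsx and .pptx files, so one sanitisation routine can cover the whole Office family.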
Organisational Controls
– Enforce metadata hygiene policies. Configure systems to automatically remove sensitive properties from outgoing files.
– Conduct metadata audits. Regularly check websites, shared drives and repositories to ensure sensitive details are not exposed.
– Harden email systems. Configure Microsoft 365 and other platforms to minimise metadata leakage, anonymise IPs and encrypt communications.
– Preserve metadata for internal use. Maintain full records for audit, compliance and fraud detection, while ensuring only sanitised files leave the organisation.
What Does This Mean For Your Business?
Metadata has become one of the least visible but most powerful tools in the arsenal of cybercriminals. What appears to be an ordinary email or document can, in fact, provide scammers with all the intelligence they need to plan their next move. For UK businesses already contending with phishing, invoice fraud and cyber-enabled crime on a large scale, the risk is not theoretical but immediate. The figures from recent surveys underline the point that metadata is often the hidden enabler of attacks that are already costing firms time, money and trust.
The picture is complicated by the fact that metadata is also useful. Security teams, regulators and auditors depend on it to investigate wrongdoing and prove authenticity. Stripping it away entirely can weaken fraud detection and compliance efforts, while leaving it exposed can give criminals the information they need. This balancing act is one that every organisation, large or small, must now face.
For business leaders, the message is clear. Metadata management can no longer be treated as a technical afterthought. It must be factored into security policies, training programmes and compliance strategies. Firms that take proactive steps will not only reduce their exposure to scams but also strengthen their ability to investigate incidents and demonstrate resilience. Those that fail to act risk leaving themselves open to increasingly sophisticated fraud that leverages the very information they generate every day.
Beyond individual businesses, the issue has wider implications. For example, regulators, technology providers and law enforcement agencies all have a stake in how metadata is handled. The growing use of artificial intelligence in both cyber defence and criminal activity means metadata is likely to play an even larger role in the future. For the UK economy, where small and medium-sized enterprises form the backbone, raising awareness and embedding good practice will be crucial in reducing vulnerability across the board.
News : New AI ‘Always-On’ Smart Glasses With ‘Infinite Memory’
Two former Harvard students are preparing to launch a pair of ‘always-on’ AI smart glasses that record and transcribe every conversation, offering wearers an unprecedented digital memory and real-time information, but also raising concerns about privacy and surveillance.
Who Is Behind the Project?
The device, named Halo X, has been developed by AnhPhu Nguyen and Caine Ardayfio, who left Harvard to pursue the venture. The duo previously made the news when they built a facial recognition app capable of identifying strangers using Meta’s Ray-Ban glasses. That earlier experiment was intended to highlight privacy risks, but their latest work shifts focus to turning wearable AI into a mainstream productivity tool.
Backers
Backed by $1 million in seed funding from US investors, including Pillar VC and Soma Capital, the pair are pitching Halo X as a way to extend human intelligence. They describe the glasses as offering “infinite memory” by capturing and processing every spoken word in real time.
How the Glasses Work
Halo X looks like conventional eyewear, but its frame conceals microphones and a discreet display. Conversations are captured continuously, transcribed by speech recognition software, and then fed through AI systems that provide real-time prompts, reminders, or supporting information via the lens.
The transcription engine is provided by California-based firm Soniox, while reasoning comes from Google’s Gemini model, and internet search is integrated through Perplexity. The company says audio is deleted once transcribed, rather than stored, and stresses that it is working towards stronger compliance and encryption measures to reassure future buyers.
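The described flow (capture audio, transcribe it, discard the raw recording, keep the text) can be sketched as a simple pipeline. Everything below is a stand-in: `transcribe` is a placeholder, not the Soniox API, and no real model is called:

```python
def transcribe(audio_chunk: bytes) -> str:
    """Placeholder speech-to-text; a real device would call an
    external engine (Soniox, in Halo X's case)."""
    return f"[transcript of {len(audio_chunk)} bytes of audio]"

class TranscriptPipeline:
    """Buffer audio, convert it to text, then delete the audio,
    mirroring the claim that raw recordings are not retained."""

    def __init__(self):
        self.audio_buffer: list[bytes] = []
        self.transcript: list[str] = []

    def capture(self, chunk: bytes) -> None:
        self.audio_buffer.append(chunk)

    def flush(self) -> None:
        for chunk in self.audio_buffer:
            self.transcript.append(transcribe(chunk))
        # Raw audio is discarded once text exists; only the
        # transcript feeds downstream reasoning and search.
        self.audio_buffer.clear()

pipe = TranscriptPipeline()
pipe.capture(b"\x00" * 1600)  # a short, hypothetical audio chunk
pipe.flush()
```

Note that even in this design the transcript itself persists, which is why regulators treat transcription as data processing regardless of whether the audio survives.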
Why ‘Always-On’ Matters
The standout difference is that Halo X does not require activation. For example, unlike Meta’s Ray-Bans or Snap’s Spectacles, which focus on recording photos or short clips, Nguyen and Ardayfio’s glasses are designed to listen continuously. The idea is that conversations should never be missed, whether in meetings, social interactions or chance encounters.
For the founders, this constant operation appears to be a key way to make the product genuinely useful and convenient for users. By eliminating the need to press record or take manual notes, they believe the glasses can function as a true cognitive assistant, capturing every word and supplying relevant data instantly.
Used For What?
The potential business applications are wide-ranging. For example, in meetings, the glasses could create a full transcript without a separate note-taker. In sales, staff could receive live prompts about a client’s history or preferences. In medicine or law, professionals could rely on transcripts to support record-keeping.
For companies, the attraction is that the ability to document and analyse conversations automatically could save time and improve accuracy. However, the risks are equally apparent. For example, in industries where confidentiality is paramount, the presence of an always-listening device could be problematic, and firms would need to establish strict policies on when and how such glasses could be used.
A Challenge to Established Players
By launching at $249, Halo X is priced far below devices like Apple’s Vision Pro, which retails at over $3,000. While those products emphasise immersive mixed reality, Halo X focuses squarely on augmenting everyday communication. This positions the glasses as less of an entertainment device and more as a professional tool, potentially creating a new niche in the wearables market.
For larger rivals such as Meta, Google and Apple, the arrival of Halo X shows that smaller startups can still push wearable AI in radical directions. Whether consumers accept the “always-on” trade-off remains to be seen, but the glasses represent a more practical, lightweight alternative to bulkier headsets.
Privacy and Security at the Forefront
The design has inevitably raised concerns about privacy. Unlike Meta’s glasses, Halo X does not include a recording light or other visible indicator. While the company insists that audio is deleted after transcription, critics argue that the very act of constant listening could erode expectations of privacy in public or private settings.
In the US, several states have two-party consent laws that make it illegal to record a conversation without permission from all participants. In the UK, covert recording in workplaces or client meetings could also breach both legal and ethical standards. For businesses, allowing staff to wear Halo X may therefore carry compliance risks as well as reputational ones.
Practical Limitations
Aside from privacy, technical factors may also determine the glasses’ fate. For example, battery life is one question. Continuous recording and processing could quickly drain power, making it difficult to wear the glasses all day. Comfort and social acceptability are other issues. Google Glass failed partly because wearers faced social backlash, and some analysts suggest the same could happen with Halo X if people feel uncomfortable being around someone whose glasses are always listening.
On the technical front, accuracy in noisy environments remains another test. Speech recognition systems have improved dramatically, but background noise, multiple speakers and varied accents can still reduce reliability. For the glasses to gain traction in professional settings, they will need to deliver consistently accurate transcriptions and responses.
Regulation and Adoption Challenges
It should be noted that, useful features aside, for UK businesses and regulators, Halo X poses some immediate questions. Under the General Data Protection Regulation (GDPR), the recording and processing of personal data requires a lawful basis, and covert capture of conversations could put companies in breach of strict compliance rules. The UK’s Information Commissioner’s Office (ICO) has previously warned firms against deploying surveillance technology without clear justification, and wearable devices such as Halo X may well fall into that category.
Sectors that deal with highly sensitive information, from financial services to healthcare, would face particular scrutiny. Employers would need to weigh the potential productivity benefits against the risk of breaching confidentiality or data protection law. Even if Halo X deletes recordings after transcription, the act of processing still constitutes data handling, meaning it falls under regulatory oversight.
At the same time, adoption patterns are likely to differ by industry. Technology and creative sectors, which often embrace early experimentation, may be more open to trialling the glasses. By contrast, regulated professions such as law and medicine may take a more cautious approach until clearer guidelines are established.
On the competitive side, Halo X enters a market where the biggest technology firms are betting heavily on wearables. Apple’s Vision Pro and Meta’s Ray-Ban glasses emphasise entertainment and communication, while Microsoft continues to back its HoloLens for enterprise. By focusing on continuous transcription and contextual intelligence, Halo X is carving out a different niche, but it remains to be seen whether customers will accept the trade-offs required by an always-on design.
What Does This Mean For Your Business?
For UK businesses, the decision to engage with technology like Halo X will hinge on balancing potential productivity gains against legal, ethical and reputational risks. The glasses could transform how meetings are run and how records are kept, offering speed and convenience that current tools cannot match. Yet the same features that make them useful also create liability. Firms operating in sectors with strict confidentiality obligations may find adoption more of a risk than an opportunity unless stronger safeguards are developed.
For regulators, Halo X represents the next stage in a wider debate over wearable AI. Questions around informed consent, data processing, and acceptable limits on surveillance are not new, but devices that operate constantly and silently bring these issues to the surface in sharper terms. Authorities such as the ICO will almost certainly be pressed to clarify how existing rules apply, and whether new measures may be required.
Competitors and investors will also be watching closely. If Halo X gains traction, larger players may be forced to rethink their own wearables strategies and consider more enterprise-focused designs. If the glasses falter, it will reinforce the view that the public is not ready to accept always-on recording in daily life. Either way, the launch underlines how quickly AI is moving beyond phones and laptops into devices worn on the body, with all the social and commercial consequences that brings.
For individual users, the promise of “infinite memory” is enticing but the trade-offs are stark. To wear Halo X is to invite a layer of surveillance into every interaction, whether or not others consent. That tension between utility and intrusion will decide whether the glasses become an accepted business tool or another ambitious idea that fails to gain social acceptance.
News : UK Backs Down In Apple Privacy Row
The UK government has backed down from its demand that Apple create a “back door” into its encrypted systems, ending a high-profile dispute that drew in Washington and sparked widespread criticism from privacy campaigners and industry experts.
How the Row Began
The confrontation began late last year when the UK Home Office issued Apple with a “technical capability notice” under the Investigatory Powers Act. This law (also known as the “snooper’s charter”) allows the government to compel technology companies to assist law enforcement in accessing data to investigate serious crimes such as terrorism and child sexual abuse.
The notice required Apple to make encrypted customer data available to authorities on demand. What made it unusual was its global scope, i.e. the order applied not just to British customers but potentially to Apple users anywhere in the world, including in the United States.
The demand clashed directly with Apple’s Advanced Data Protection (ADP) tool, launched in 2022, which provides end-to-end encryption for iCloud backups. Once activated, not even Apple itself can access the contents of a user’s iCloud files, photos, notes or reminders. For law enforcement, this meant some data would be completely beyond reach. For Apple, complying with the UK’s order would have meant deliberately undermining its own encryption.
Apple responded by withdrawing ADP for new customers in the UK, saying it was “deeply disappointed” and would “never build a backdoor or master key” to its products. At the same time, it launched a legal challenge to the government’s order at the Investigatory Powers Tribunal, with a hearing scheduled for early 2026.
Escalation Into a Transatlantic Dispute
What might have remained a UK legal battle soon escalated into an international row. Because the UK’s notice applied worldwide, it raised the possibility of British authorities accessing the data of American citizens.
US leaders reacted strongly. President Donald Trump accused Britain of “behaving like China” and publicly told Prime Minister Keir Starmer: “You can’t do this.” Vice President JD Vance called the demand “crazy”, warning that it risked creating a vulnerability in US technology that could be exploited by hostile states. Tulsi Gabbard, the US Director of National Intelligence, was equally blunt, saying the order “would have encroached on our civil liberties”.
Behind the scenes, senior American officials pressed London to change course. According to the Financial Times, Vice President Vance personally intervened during a recent visit to the UK, negotiating what US officials later described as a “mutually beneficial understanding” that the order would be withdrawn.
The UK Retreats
On 19 August, Gabbard confirmed in a post on X that the UK had “agreed to drop its mandate for Apple to provide a ‘back door’ that would have enabled access to the protected encrypted data of American citizens”. She added that she had been working with President Trump and Vice President Vance “to ensure Americans’ private data remains private and our constitutional rights and civil liberties are protected”.
The Home Office has refused to confirm or deny her claim, citing a long-standing policy not to comment on operational matters. However, multiple British officials told reporters that the issue was “settled” and that London had “caved” to US pressure.
Whether the technical capability notice will be formally withdrawn, amended to target only UK citizens, or left in place but unenforced remains unclear. Legal experts have pointed out that limiting access to UK citizens’ data alone may be technologically unrealistic, since Apple’s cloud systems do not distinguish by nationality.
Why the Government Backed Down
Several factors contributed to the reversal. The most immediate was diplomatic pressure from Washington. With Trump's administration already imposing tariffs on European goods and pressing allies on defence spending, the UK government may have had little appetite for a damaging rift over encryption policy. Some would also point to Apple CEO Tim Cook's recent financial and symbolic outreach to President Trump, and to Prime Minister Starmer's wish to avoid tariff increases following recent negotiations, as reasons why this may have been one fight the UK government thought it best not to have at this time.
Another factor was the risk to Britain’s global reputation. Legal experts and business groups had warned that forcing Apple to break encryption could deter companies from operating in the UK, damaging the country’s status as a safe destination for data. Charlotte Wilson, head of enterprise at Check Point Software, described the original order as “hugely damaging”, saying that once a master key to encrypted data exists “criminal groups and hostile states will try to exploit it too”.
Civil liberties organisations also appear to have played a role. Liberty and Privacy International had both launched legal action against the government, arguing that creating a back door would be unlawful and reckless. Sam Grant, Liberty’s director of external relations, called the reported U-turn “hugely welcome”, warning that such powers would put campaigners, minority groups and politicians at heightened risk of targeting.
What It Means for Apple and Its Users
For Apple, the retreat could be seen as a vindication of its long-standing stance on encryption. The company has repeatedly argued that any deliberate weakness, even one intended for law enforcement, could eventually be exploited by criminals or foreign governments.
It is now likely that Apple will reinstate Advanced Data Protection for new UK customers, although the company has not yet confirmed its plans. If it does, British businesses and individuals will again be able to benefit from the highest level of iCloud encryption, aligning with customers elsewhere in the world.
For UK businesses in particular, the move has real significance. For example, end-to-end encryption is increasingly seen as a baseline requirement for protecting sensitive intellectual property, financial data and client communications. Any perception that the UK was a weak link could have harmed firms’ ability to meet international compliance standards or reassure overseas partners.
Lingering Concerns
Despite the climbdown, some critics argue that the underlying problem remains. The Investigatory Powers Act still contains provisions allowing the government to issue similar notices in future. Jim Killock, executive director of the Open Rights Group, said: “The UK’s powers to attack encryption are still on the law books, and pose a serious risk to user security and protection against criminal abuse of our data.”
There are also unanswered questions about whether other technology companies have been served with similar demands. WhatsApp, for example, has said it has not received such a notice, but secrecy provisions mean firms cannot always disclose whether they have been targeted.
Another unresolved issue is whether Britain will seek to revise its order in a way that applies only to UK citizens. Privacy experts caution that such an approach could still create risks, since once a back door exists, it cannot easily be limited to one group of users.
The Wider Picture
The dispute highlights the tension between governments’ desire for access to digital evidence and technology companies’ commitment to protecting user privacy. Governments argue that encryption can provide cover for criminals and terrorists, while companies and privacy advocates insist that undermining encryption would weaken security for everyone.
For the UK government, the episode has also shown the limits of its extraterritorial powers. While the Investigatory Powers Act gives British authorities the ability to issue global data access demands, enforcing them against multinational firms without international support is fraught with difficulty.
For the United States, the outcome demonstrates the strength of its leverage over allies when civil liberties and the interests of its technology sector are at stake. Gabbard framed the UK’s reversal as a victory for American citizens’ rights, while Senator Ron Wyden described it as “a win for everyone who values secure communications”.
What Does This Mean For Your Business?
The UK government’s retreat may settle the immediate dispute, but it leaves many questions unanswered about how far states can and should go in seeking access to private data. The fact that London backed down only after sustained US pressure shows the difficulty of enforcing extraterritorial demands when they clash with the interests of powerful allies and companies. It also underlines that encryption has become more than a technical feature: it is now a geopolitical fault line between privacy, commerce and national security.
For Apple, the outcome strengthens its position as a global defender of encryption and restores confidence among its UK customers, many of whom had been left without the strongest level of iCloud protection. Businesses in particular stand to gain if Advanced Data Protection is reinstated, since they rely heavily on secure storage and communications to safeguard sensitive information. The reassurance that the UK will not compel Apple to weaken its systems may also help British firms demonstrate compliance with international standards and maintain trust with overseas partners.
For the UK government, however, the episode risks being seen as a climbdown that exposes the limits of its investigatory powers. Ministers continue to argue that strong surveillance powers are essential to combat threats such as terrorism and child abuse, yet critics have been quick to say that Britain has undermined its own credibility by pushing for a back door it could not deliver. Privacy campaigners and technology experts will also point out that the Investigatory Powers Act still contains the same provisions, leaving open the possibility that a future government may attempt a similar move.
The wider implications go beyond Apple. Other technology companies will be weighing what this episode means for their own obligations under UK law and whether they too could face demands that clash with global privacy protections. Civil liberties groups will continue to press for reforms to prevent governments from seeking back doors in the first place, while law enforcement agencies are likely to warn that criminals will continue to exploit encryption to hide their activities. What is clear is that this confrontation has highlighted the difficulty of balancing privacy, security and international diplomacy, and it will not be the last time these issues collide.
Company Check : Businesses Choose Proxies As VPNs Face Rising Scrutiny
A growing number of UK companies are moving away from VPNs and adopting proxy services instead, while regulatory pressures and changing business needs reshape the digital tools used for online operations.
Regulatory Drivers Behind the Shift
The trigger for this apparent trend has been the UK’s Online Safety Act, which came into force on 25 July 2025. The law requires platforms hosting user-generated content to carry out risk assessments, enforce strict age verification, and prevent users from bypassing restrictions. Ofcom, the regulator tasked with enforcement, has flagged VPNs as a potential loophole in these measures. This has left businesses increasingly wary about relying on them, even though the government has said there are no immediate plans to ban VPN services outright.
VPN usage in the UK has nevertheless surged in the wake of the Act. For example, figures show Proton VPN sign-ups rose by 1,800 per cent and Nord Security registrations climbed by 1,000 per cent in the days following the new rules, while VPN apps dominated the UK App Store rankings. However, what once looked like a straightforward privacy tool has now become a focal point for regulators. Companies that rely on VPNs for tasks such as market research, competitive analysis, or data collection are finding themselves exposed to new compliance risks and potential regulatory scrutiny.
Proxy Demand Surge
This uncertainty has pushed many businesses to explore alternatives. Proxies, which route traffic through an intermediary server without encrypting it in the same way as a VPN, have emerged as a preferred option for a growing range of enterprises. Data from Decodo, a global proxy provider, shows UK proxy users have increased by 65 per cent since the Act was introduced, with proxy traffic rising by 88 per cent.
Industry leaders suggest this is not just a temporary workaround but a reflection of a more deliberate strategy. “Companies around the globe are getting smarter about how they operate in highly competitive landscapes. Instead of just picking the most popular tools, they’re choosing what actually works best for them,” said Vytautas Savickas, CEO at Decodo.
Examples of Proxy Providers
Several proxy companies are now leading the market for UK businesses. For example, Oxylabs and Bright Data are recognised for their scale, offering millions of residential, datacentre, mobile and ISP IP addresses worldwide, including large UK pools. Decodo, formerly Smartproxy, has become popular for its balance of affordability and ease of use, while Webshare provides reliable service and even a free tier for smaller tasks. SOAX specialises in city-level targeting across the UK, making it useful for location-sensitive operations, while IPRoyal and Rampage Proxies are seen as accessible entry-level choices. Together, these firms illustrate how far the proxy market has developed, offering tools that range from budget options to enterprise-grade services.
How Much Do Proxies Cost?
Pricing for proxies varies depending on type, usage volume and provider. As an example, at the lower end, services like Rampage Proxies start at around $1 per gigabyte for residential proxies, while SOAX charges about $3.60 per gigabyte on smaller plans, dropping closer to $2.50 for high-volume commitments. Enterprise providers such as Oxylabs and Bright Data are typically in the $3–$4 per gigabyte range. In practical terms, this means a small business might spend £100–£300 per month, with medium-sized operations budgeting £300–£1,000, and large enterprises paying upwards of £1,200. By contrast, business VPNs usually charge per user, between $7 and $18 a month, which is cost-effective for secure team access but less suited to the high-volume, region-specific tasks where proxies are increasingly used.
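To make the arithmetic above concrete, the two billing models can be compared with a short script. The rates and usage figures below are purely illustrative, taken from the ranges quoted above rather than any provider's actual price list:

```python
# Compare the two billing models described above: proxies are
# typically billed per gigabyte transferred, business VPNs per user.
# All figures are illustrative, not real provider quotes.

def proxy_monthly_cost(gb_per_month: float, rate_per_gb: float) -> float:
    """Monthly cost of a per-gigabyte proxy plan."""
    return gb_per_month * rate_per_gb

def vpn_monthly_cost(users: int, rate_per_user: float) -> float:
    """Monthly cost of a per-seat business VPN plan."""
    return users * rate_per_user

# A small team moving ~100 GB/month at an entry-level $1/GB rate
proxy_cost = proxy_monthly_cost(100, 1.0)

# The same ten-person team on a mid-range $12-per-user VPN plan
vpn_cost = vpn_monthly_cost(10, 12.0)

print(f"Proxy: ${proxy_cost:.2f}/month, VPN: ${vpn_cost:.2f}/month")
```

The comparison shows why the choice depends on workload: per-seat VPN pricing scales with headcount, while per-gigabyte proxy pricing scales with data volume, so high-volume scraping or verification tasks quickly favour one model over the other.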
Technical and Strategic Benefits
One reason proxies are becoming more attractive is the greater level of control they provide. VPNs typically encrypt traffic and route it through a single tunnel, which is valuable for privacy but less useful for certain business functions. Proxies, on the other hand, allow more granular routing and customisable access. This means organisations can target data collection by location, test region-specific websites, or monitor competitors without triggering the same kinds of red flags that VPN use often does.
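As a minimal sketch of what per-region routing looks like in practice: many providers expose region-specific gateways that clients pass to an HTTP library such as `requests` via its `proxies` parameter. The gateway hostname, port and credentials below are hypothetical placeholders, not any real provider's endpoints:

```python
# Build a requests-style proxies mapping that routes traffic through
# a (hypothetical) region-specific proxy gateway. The hostname, port
# and credentials are illustrative placeholders only.

def region_proxies(region: str, user: str, password: str) -> dict:
    """Return a proxies dict for the requests library, targeting
    a hypothetical gateway for the given region code."""
    gateway = f"{region}.gateway.example-proxy.com:10000"
    url = f"http://{user}:{password}@{gateway}"
    # requests expects one entry per URL scheme it should proxy
    return {"http": url, "https": url}

uk = region_proxies("uk", "demo_user", "demo_pass")
# A real request would then look like:
#   requests.get("https://example.com", proxies=uk, timeout=10)
print(uk["https"])
```

Swapping the region code swaps the exit location, which is the granular, per-request control the paragraph above describes; a VPN tunnel, by contrast, typically routes all of a machine's traffic through one endpoint at a time.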
For example, in eCommerce, proxies are being used for price tracking and ad verification, ensuring that online campaigns appear correctly in different regions. In finance and fintech, they help detect fraudulent activity by simulating access from multiple jurisdictions. In digital marketing, SEO teams rely on proxies to monitor search results from specific countries.
As Gabriele Verbickaitė, Product Marketing Manager at Decodo, explained: “More organisations in the UK are investing time in understanding the tools that power secure and efficient online operations. Most companies test out different solutions, providers, and do their research on proxies and VPNs, and they’re also making more informed, strategic choices.”
Innovative Proxy Types
It should also be noted here that the technology itself has matured rapidly. For example, modern proxy services are no longer niche or unstable tools but come bundled with enterprise-grade security features and user-friendly platforms. Companies can now choose from residential proxies, which mimic the IP addresses of home users; mobile proxies, which use cellular networks; ISP proxies, which combine stability with speed; and datacentre proxies, which are optimised for scale and performance.
This variety gives firms options that align with their specific objectives. Residential and mobile proxies, for example, are harder to detect and block, making them useful for ad verification or web scraping. ISP and datacentre proxies, by contrast, are better suited to tasks requiring speed and high volumes of data. Vaidotas Juknys, Head of Commerce at Decodo, said: “UK businesses are quickly adopting proxy services, moving beyond simple VPNs to more advanced setups that offer greater control over their online activity. It’s no longer just about staying private – performance and reliability are now just as important.”
Security Trade-Offs
However, despite their appeal, proxies are not without risks. The key difference is that proxies do not encrypt traffic in the same way that VPNs do, leaving data potentially more exposed to interception or monitoring. For businesses dealing with sensitive information, this can create vulnerabilities if additional protections are not in place.
Free or poorly managed proxies pose even greater concerns. Studies have shown that many free proxy services are either unstable or actively malicious. Research published last year (University of Maryland and the Max Planck Institute for Informatics) found that only around 34.5 per cent of free proxies tested were active, with many exposing users to adware, credential theft, or malware. For this reason, security experts warn that firms should treat proxies as part of a broader, layered security strategy rather than a like-for-like replacement for VPNs.
At the same time, critics note that the rapid adoption of proxies could create its own regulatory flashpoints. Just as VPNs are being scrutinised for their role in bypassing restrictions, widespread proxy use may eventually attract similar attention. Privacy campaigners argue that this arms race between regulation and circumvention tools risks undermining trust in digital services altogether.
Some Key Challenges and Criticisms
The transition also raises some practical challenges. For example, businesses must ensure that the proxy providers they use have robust security and compliance standards. Unlike VPNs, which are relatively standardised, the proxy market is fragmented, with varying levels of reliability and transparency among providers. Companies that depend heavily on proxies for data-driven decision-making could find themselves exposed if those services are blocked, blacklisted, or compromised.
Another criticism is that while proxies offer technical advantages, they do not necessarily solve the deeper issues driving regulation in the first place. The Online Safety Act was designed to protect children and reduce harmful content online, yet businesses adopting proxies to sidestep VPN concerns may only be shifting the problem rather than addressing it.
These concerns highlight the complexity of the issue. On one side, businesses need practical tools to compete globally, collect data, and operate efficiently. On the other, regulators are pushing for tighter oversight of digital access, with VPNs and now proxies caught in the middle of the debate.
What Does This Mean For Your Business?
The evidence suggests that the move from VPNs to proxies is more than just a passing reaction to regulation. For UK businesses, proxies appear to offer some real operational advantages, from accurate regional targeting to resilience against restrictions that can disrupt data-driven work. Sectors such as eCommerce, finance and digital marketing are already embedding these services into their daily operations, treating them not as optional extras but as essential infrastructure. For many firms, the ability to monitor competitors, verify advertising, or track prices across multiple markets has become too important to risk on tools that may fall under heavier regulatory pressure.
However, the shift also carries unavoidable trade-offs. For example, proxies may deliver speed and flexibility, but they do not provide the same encryption and privacy protections as VPNs, which creates a different risk profile. This is forcing companies to rethink their wider security strategies and balance operational performance with robust safeguards. For regulators, the trend signals another layer of complexity, as proxy use could undermine some of the very protections that the Online Safety Act was intended to enforce.
What this means for UK businesses is that digital infrastructure decisions are no longer simply about cost or convenience. For proxy providers, the surge in demand represents an opportunity to cement their place in the enterprise market, but it also brings responsibility to deliver reliable, transparent and secure services. For policymakers, the growth of proxies underscores the difficulty of regulating technologies that adapt faster than legislation.
The result is a more finely balanced environment, where businesses gain new capabilities but also face new scrutiny. Proxies may now be the tool of choice for many UK firms, but their adoption highlights wider questions about how companies, regulators and consumers can navigate the shifting ground of online access and digital control.
Security Stop-Press: Cybercriminals Seeking English-Speaking Social Engineers
English-speaking social engineers are now among the most in-demand recruits on cybercriminal forums, with job ads more than doubling between 2024 and mid-2025, according to ReliaQuest.
In a model often described as “impersonation-as-a-service”, criminals can now subscribe to training, scripts, and tools that make it easier to trick employees into handing over access. Groups such as Scattered Spider and ShinyHunters have used these techniques to launch targeted account-takeover attacks, including recent breaches of Salesforce accounts at firms like Dior, Chanel, Allianz, and Google.
Experts say English remains the priority because it allows attackers to convincingly impersonate staff at global companies, giving them a clear advantage over automated phishing or generic malware.
For organisations, the best defence lies in strong identity controls and staff training. Multi-factor authentication, strict verification procedures, and regular awareness exercises can help stop employees being manipulated into giving away access.
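One of the controls mentioned above, multi-factor authentication, is commonly built on time-based one-time passwords (TOTP, RFC 6238). As a minimal sketch of how such a code is generated, the standard algorithm can be written with only Python's standard library; this is an illustration of the mechanism, not a production authentication system:

```python
# Minimal RFC 6238 TOTP sketch using only the standard library.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, t=None, digits: int = 6, step: int = 30) -> str:
    """Generate a time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count 30-second steps since the Unix epoch
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % (10 ** digits):0{digits}d}"
```

Crucially, as the social-engineering campaigns above show, the code itself is only half the defence: staff must also be trained never to read such a code out to a caller, since attackers increasingly target the human step rather than the cryptography.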