Sustainability-In-Tech : Robots Cut Strawberry Pesticides

A new wave of farm automation is aiming to cut chemical use in food production, led by California-based TRIC Robotics, whose UV-powered robots are helping strawberry growers tackle pests and disease without pesticides.

Tackling One of the Dirtiest Fruits on the Shelf

Strawberries may be a consumer favourite, but they’re also among the most chemically treated fruits in commercial farming. For example, according to the US-based non-profit Environmental Working Group’s 2024 “Dirty Dozen” list, strawberries once again topped the rankings for the highest levels of pesticide residue found on produce in the US. Despite growing demand for organic alternatives, conventional pest control practices in strawberry production remain heavily reliant on chemical sprays, often applied multiple times per week throughout the season.

It’s this issue that San Luis Obispo-based TRIC Robotics set out to address with a radically different approach. Rather than spraying crops with synthetic chemicals, the ag-tech company is using ultraviolet (UV-C) light, applied by autonomous robots operating at night, to kill pathogens and deter pests. Early results suggest the method could significantly reduce pesticide use on commercial farms while improving yield and sustainability.

Who Is Behind TRIC Robotics?

TRIC Robotics was founded in 2017 by Adam Stager, who holds a PhD in robotics. The company originally focused on developing mobile robots for law enforcement but pivoted towards agriculture in 2020 after Stager began exploring how automation could be applied to more socially impactful sectors. Through a US Department of Agriculture (USDA) commercialisation programme, he was introduced to dormant UV-light research that had not yet reached the field.

“I really wanted to do something that would have meaningful impact,” Stager told TechCrunch earlier this year. “When I discovered the potential of UV-C for farming, I saw a way to improve food production while reducing harm.”

Alongside co-founders Vishnu Somasundaram and Ryan Berard, TRIC began trialling early prototypes in strawberry fields along the US West Coast. The first robot was built in Stager’s garage and transported cross-country to farms in California, where the majority of US strawberries are grown. Since those early experiments in 2021, the company has expanded to nine robots and secured contracts with several major growers.

How the Technology Works

The system centres around large, tractor-sized autonomous robots, named Eden and Luna, which use UV-C light to control fungal and bacterial pathogens as well as insects such as spider mites. UV-C light, a short-wavelength ultraviolet radiation, damages the DNA of microorganisms, disrupting their ability to reproduce.

The robots operate exclusively at night, when UV-C is most effective and when plants are less vulnerable to stress. Each robot is equipped with adjustable booms, dosing systems, and high-resolution cameras for precision treatment. They can cover 50 to 100 acres each, moving independently through rows and adjusting to uneven terrain and plant height in real time. Vacuum systems are also fitted to remove pest residue and insects from leaves without damaging the crop.

Robots As-A-Service?

Instead of selling robots to farmers outright, TRIC offers a subscription-style “service model” in which robots are delivered, managed, and maintained by the company. Farmers pay roughly the same as they would for conventional spraying but avoid the need for pesticides, re-entry delays, or additional labour.

Environmental and Operational Benefits

The approach offers a clear environmental upside: reduced pesticide use. This, in turn, means less chemical runoff into soil and waterways, lower risk to pollinators and other beneficial insects, and fewer residues on produce. It also supports growers aiming to meet organic standards or export restrictions tied to pesticide levels.

From a business perspective, the robots improve consistency, reduce re-spray requirements, and allow treatments to occur more frequently. TRIC claims farms using its robots have seen pesticide use fall by up to 70 per cent, with some reporting yield improvements thanks to better pest and disease control.

The autonomous machines also generate valuable data. Built-in cameras and sensors capture real-time insights on plant health and pest pressure, helping growers monitor performance and make more informed decisions.

Ambition = Automated Crop Protection

TRIC raised $5.5 million in seed funding in mid-2025, led by Version One Ventures, with backing from Garage Capital, Lucas Venture Group, and others. The investment is being used to expand the robot fleet, enhance analytics, and explore the system’s applicability to other crops beyond strawberries.

Stager says the long-term ambition is to provide “automated crop protection” across multiple types of produce. “Agriculture needs practical, scalable solutions to reduce chemical inputs and protect yields,” he told investors during the funding round. “UV-C is one of those solutions—but only if it can be applied efficiently, safely, and at scale.”

TRIC’s approach also highlights a broader shift in ag-tech away from standalone equipment sales towards service-based, data-rich models that mirror the way many farmers already procure services like spraying or fertilisation.

Others in the Field

TRIC is not alone in applying UV-C to agriculture, but its combination of automation, scale, and commercial deployment is relatively rare. One of the best-known alternatives is Norway’s Saga Robotics, whose Thorvald platform uses UV-C light to treat strawberries and grapes in Europe and the US. However, Saga’s robots are smaller, battery-powered, and typically used in research or niche applications.

Other firms, such as FarmWise and Naïo Technologies, are also building autonomous farm machinery, but these generally focus on weeding, harvesting, or mechanical cultivation rather than light-based disease control.

In the greenhouse sector, Dutch firms like Priva and Signify have experimented with UV light for fungal control in tomatoes and cucumbers, but few solutions are currently available for open-field use at scale.

This essentially positions TRIC as one of the most commercially advanced players applying UV-C at field level. Still, the space is expected to grow quickly, with McKinsey predicting that farm robotics and automation will become a $50 billion global market by 2030.

Challenges

Despite promising results, the technology is not without challenges. One concern is the potential for overuse of UV-C, which can damage plant tissue or lead to resistance in certain pest populations if not carefully managed. TRIC’s dosing systems are designed to avoid this, but it remains a technical and biological balancing act.
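
To make that balancing act concrete, the sketch below shows the basic arithmetic behind UV-C dosing: the dose delivered to a surface is simply irradiance multiplied by exposure time, and for a lamp moving over a crop row the exposure time depends on the size of the illuminated footprint and the travel speed. The figures and helper function are hypothetical and purely illustrative; they are not drawn from TRIC’s actual dosing system.

```python
# Illustrative only: a back-of-the-envelope UV-C dose estimate, not TRIC's
# actual dosing logic. Assumes a lamp of known irradiance mounted on a boom
# moving at constant speed along the crop row.

def uvc_dose_j_per_m2(irradiance_w_per_m2: float,
                      footprint_length_m: float,
                      speed_m_per_s: float) -> float:
    """Dose = irradiance x exposure time, where exposure time is the
    length of the illuminated footprint divided by travel speed."""
    exposure_time_s = footprint_length_m / speed_m_per_s
    return irradiance_w_per_m2 * exposure_time_s

# Example (made-up figures): a 30 W/m2 lamp with a 0.5 m footprint moving
# at 0.2 m/s delivers roughly 75 J/m2 per pass. Halving the speed doubles
# the dose, which is why dosing has to be tuned to avoid damaging plants.
print(uvc_dose_j_per_m2(30.0, 0.5, 0.2))
```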

Another issue is energy use. For example, although TRIC’s early robots were battery-powered, the current versions use on-board diesel generators due to limited field charging infrastructure, thereby raising questions about carbon emissions, especially for a solution marketed on sustainability grounds. TRIC has acknowledged this limitation and says future versions may explore hybrid or fully electric designs as farm infrastructure improves.

There are also operational constraints to consider. For example, the robots work best in certain field layouts and require access to well-maintained paths and consistent planting patterns, which not all farms can offer without modification. That said, TRIC’s tractor-sized form factor was deliberately chosen to mirror existing spray rigs and reduce disruption.

Also, some industry observers have questioned whether UV-C alone is actually sufficient to replace chemical sprays across a full growing season, especially in regions with high pest pressure. While results from pilot sites have been encouraging, broader third-party trials and peer-reviewed research will be key to long-term credibility.

What Does This Mean For Your Business?

If TRIC’s model continues to scale, it may bring about a change in how pest and disease control is delivered across large-scale agriculture. By offering automation as a service and avoiding upfront equipment costs, the company has lowered the barrier to adoption for growers who might otherwise resist change. That could accelerate the move away from chemical inputs in a sector long dependent on them. The fact that it’s proving cost-comparable to traditional spraying means it may not take government intervention or subsidies to push adoption forward.

For the robotics industry, TRIC’s success adds weight to the idea that task-specific, autonomous machines, especially those built around a practical service model, can find real traction in farming. This is a notable development in a space where many ag-tech ventures remain trapped in trial stages or small-scale pilots. If other crops can be treated as effectively, and if energy issues are resolved, UV-C robotics may offer a compelling template for reducing agrochemical reliance more widely.

UK farmers, especially those under pressure from changing pesticide rules and tighter sustainability requirements, may see clear potential in this approach. For example, British growers facing EU-derived regulations on maximum residue levels and soil health could benefit from a model that allows frequent treatment without chemical application or delayed re-entry. There could also be scope for adaptation to local crops such as soft fruits, leafy greens, or high-value organics, particularly where manual spraying is still dominant or increasingly expensive due to labour shortages.

Also, for UK businesses involved in food supply chains, TRIC’s methods could prove valuable. For example, as major retailers and buyers place more emphasis on sustainability, traceability, and reduced chemical use, upstream suppliers using robotic UV-C solutions may gain competitive advantage. The same applies to UK-based ag-tech firms exploring adjacent fields. The window is open for others to localise or licence similar models in the UK and Europe, or to partner with growers on collaborative trials.

However, any rollout here would need to take into account different field conditions, crop types, and infrastructure. Unlike the flat, uniform rows of California strawberry farms, many British farms are smaller, more varied in layout, and less mechanised. That may limit near-term deployment without further design iterations.

It’s also worth watching how regulators may respond. For example, UV-C is already used in food processing and healthcare, but applying it in open-field environments could raise fresh questions about environmental exposure, crop labelling, and treatment records. Clear data on safety, efficacy, and operational standards will be essential to building trust.

For now, TRIC’s model stands out as an example of how robotics, when applied thoughtfully and at the right point in the production chain, can genuinely support more sustainable agriculture. The bigger test will come as more farms take it on, and as others begin to compete on similar ground.

Tech Tip – Snooze Gmail Messages to Deal With Them Later

Busy Gmail inbox? The ‘Snooze’ feature lets you temporarily hide an email and have it reappear at a date and time when you’re ready to act on it.

How to:

– Hover over the email in your Gmail inbox.
– Click the clock icon (Snooze) on the right.
– Choose a preset time like “Tomorrow” or “Next week”, or click Pick date & time to choose your own.
– The email will disappear from your inbox and return at the scheduled time—marked as unread and flagged for attention.

What it’s for:

Keeps your inbox clear and helps you deal with non‑urgent emails at the right moment—ideal when you’re on the move, in meetings or just prioritising.

Pro‑Tip: Snoozed emails appear in your Snoozed tab (left-hand menu), so you can check or reschedule them at any time.

Featured Article : UK Public Sector / AI Partnership

The UK Government has entered into a formal partnership with OpenAI aimed at accelerating the responsible use of artificial intelligence (AI) across public services, infrastructure, and national growth zones.

What Is The Deal?

Announced on 21 July 2025, the agreement takes the form of a Memorandum of Understanding (MoU) between the Department for Science, Innovation and Technology and OpenAI, the US-based company behind ChatGPT. While not legally binding, the document outlines both sides’ intentions to deepen collaboration in areas including AI infrastructure, public sector deployment, and AI safety research.

To Transform Taxpayer-Funded Services

According to the Department, the strategic aim is to “transform taxpayer-funded services” and improve how the state uses emerging technologies. It also includes commitments to explore joint investments in regional AI growth zones, share technical insights with the UK’s AI Safety Institute, and expand OpenAI’s UK-based engineering and research operations.

Technology Secretary Peter Kyle described the move as central to “driving the change we need to see across the country – whether that’s in fixing the NHS, breaking down barriers to opportunity or driving economic growth”.

OpenAI CEO Sam Altman echoed this, saying AI is a “core technology for nation building” and that the partnership would “deliver prosperity for all” by aligning with the goals set out in the UK’s AI Opportunities Action Plan.

Why Now And Why OpenAI?

The timing reflects the government’s wider push to position Britain as a leader in AI development and deployment. This includes the £2 billion commitment to AI growth zones made earlier this year, alongside a new AI Compute Strategy and the creation of a national AI Safety Institute.

It also comes as the UK faces sluggish productivity growth, mounting public sector workloads, and strained public finances. Officials argue that automating time-consuming tasks, such as consultation analysis, document classification or civil service admin, could help free up staff to focus on more complex or sensitive work.

OpenAI’s Models Already Being Used

It’s worth noting here that OpenAI’s GPT-4o model is already being used in a Whitehall tool called “Consult”, which automatically processes responses to public consultations. The tool is said to reduce weeks of manual work to a matter of minutes, while leaving substantive decision-making to human experts.

The government’s AI chatbot “Humphrey” also uses OpenAI’s API to help small businesses navigate GOV.UK services more efficiently.
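
For context, the pattern behind tools like these is conventional API usage rather than bespoke models. The snippet below is a minimal, hypothetical sketch of sending consultation responses to an OpenAI model for summarisation using the company’s Python SDK; the prompt, data, and handling are assumptions for illustration only and do not reflect how “Consult” or “Humphrey” are actually built.

```python
# Minimal illustrative sketch of summarising consultation responses via the
# OpenAI API. This is NOT the government's "Consult" or "Humphrey" tooling;
# the prompt, model choice and data here are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

responses = [
    "I support the proposal but worry about costs for small firms.",
    "The consultation ignores rural broadband coverage entirely.",
]

completion = client.chat.completions.create(
    model="gpt-4o",  # the model named in the article
    messages=[
        {"role": "system",
         "content": "Summarise the key themes in these public consultation "
                    "responses. Flag themes only; do not make decisions."},
        {"role": "user", "content": "\n\n".join(responses)},
    ],
)

print(completion.choices[0].message.content)
```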

According to the MoU, future deployments will prioritise transparency, data protection, and alignment with democratic values. However, critics have raised concerns that key details of the deal remain vague.

A Boost for OpenAI’s UK Ambitions

For OpenAI, the partnership will, no doubt, reinforce its growing presence in the UK, which it describes as a “top three market globally” for both API developers and paid ChatGPT subscribers.

The company opened its first international office in London in 2023 and now employs more than 100 staff there. Under the new agreement, it plans to expand these operations further to support both product development and local partnerships.

OpenAI is also expected to explore building or supporting UK-based data centres and R&D infrastructure, which is a move that would enhance what the government calls the country’s “sovereign AI capability”. This concept refers to ensuring that core AI infrastructure and innovation remain under UK control rather than becoming overly reliant on US or Chinese providers.

Sam Altman has suggested that such regional investment could help stimulate jobs and revitalise communities, especially within the designated AI growth zones.

Competitors and UK Tech Firms

The announcement is likely to intensify competition among global AI providers, particularly Google DeepMind and Anthropic, both of which have also signed cooperation agreements with the UK Government in recent months.

However, some British AI firms say the government is placing too much emphasis on partnerships with dominant US players at the expense of homegrown innovation. Tim Flagg, Chief Operating Officer at UKAI, a trade body for British AI companies, previously warned that the AI Opportunities Action Plan takes a “narrow view” of who is shaping the UK’s AI future.

For example, it could mean that UK-based AI firms working on foundation models, language processing, or ethical AI frameworks may now find themselves competing for talent, attention, and influence with the likes of OpenAI, whose models and reputation already dominate the field.

Digital rights campaigners have also questioned whether the government is adequately safeguarding public interest and data security in its eagerness to court big tech firms.

Warnings Over Public Data and Accountability

One of the main criticisms of the deal is its lack of specificity on how public data may be used. While the agreement hints at technical collaboration and information-sharing, it doesn’t clarify whether UK citizens’ data will help train OpenAI’s models, or what safeguards will be in place.

Digital rights group Foxglove called the MoU “hopelessly vague”, warning that OpenAI stands to benefit from the UK’s “treasure trove of public data”. Co-Executive Director Martha Dark went further, saying that “Peter Kyle seems bizarrely determined to put the big tech fox in charge of the henhouse when it comes to UK sovereignty”.

Others have raised broader concerns about transparency and oversight. Some academics and civil service experts suggest that while AI tools may relieve public sector staff of time-consuming administrative tasks, the real challenge lies in ensuring that deployments are done ethically, with strong governance and minimal reliance on personal or sensitive data.

The AI Infrastructure Angle

Beyond public services, the deal includes plans to explore investment in AI infrastructure, a term that typically refers to the high-performance computing facilities and energy-intensive data centres required to train and deploy large AI models.

This ties into the UK’s broader push for regional development. Under the AI Growth Zone initiative, over 200 local bids have been submitted, with billions in potential investment expected. The government has confirmed that both Scotland and Wales will host zones under the AI Compute Strategy.

The partnership with OpenAI may give these ambitions extra momentum. If the company builds or co-develops infrastructure in the UK, it could significantly improve national access to compute power, a key enabler for both public and private AI innovation.

Concerns Over Sovereignty and Big Tech Influence

Despite assurances from ministers that the UK will remain in control of its AI future, there are growing calls for greater scrutiny and legislative oversight.

The UK’s Data Protection and Digital Information Bill, which is making its way through Parliament, may play a role in regulating how personal and government data can be used in AI systems. However, many campaigners believe that dedicated AI legislation, with clear public interest protections, is still lacking.

Meanwhile, the MoU’s non-binding nature means the partnership could evolve in unpredictable ways, without necessarily being subject to parliamentary approval or regulatory review.

Peter Kyle has defended the approach, arguing that “global companies which are innovating on a scale the British state cannot match” must be engaged if the UK wants to compete in the AI era.

However, for opponents, this signals a risk of policy being shaped too closely around commercial interests, rather than the public good.

What Does This Mean For Your Business?

The UK’s agreement with OpenAI may sound like a significant moment in the evolution of public sector AI strategy, but it also raises some important questions about balance, control, and accountability. For government departments under pressure to deliver more with less, AI appears to present an opportunity to reduce routine workloads, speed up processes, and direct skilled professionals toward more impactful tasks. With OpenAI’s models already embedded in tools like “Humphrey” and “Consult”, this partnership could enable deeper integration and faster iteration across critical areas such as justice, health, education, and small business support.

For UK businesses, particularly those involved in or supplying to the public sector, the partnership could bring both practical benefits and growing pressure. For example, OpenAI’s expanded presence may improve access to advanced AI tools, infrastructure, and collaborative opportunities, helping British startups and firms apply new technologies more effectively. At the same time, there is concern that prioritising partnerships with large US-based companies could marginalise smaller UK tech providers whose innovations may be better suited to local contexts but lack the scale or visibility to compete.

The deal also adds pressure on the UK to clarify how it will protect data, enforce ethical guardrails, and ensure that public interest remains front and centre. Critics argue that the lack of legally binding terms leaves room for mission creep or overreach, especially if partnerships expand without clear oversight. With public trust in digital services already under strain, transparency and accountability will be vital to ensuring these systems are not only efficient, but also fair and secure.

Ultimately, the MoU appears to reflect the government’s belief that strategic alignment with global AI leaders is essential if the UK wants to stay competitive. Whether this approach will deliver broad-based economic and societal benefit, or reinforce existing power imbalances, will depend on how well the promises of inclusion, sovereignty, and ethical standards are translated into action. For now, the UK has made its bet, and the challenge will be ensuring that it delivers for everyone.

Tech Insight : 45% Of MSPs Keep Cash To Pay Off Hackers

A new survey reveals 45 per cent of managed service providers (MSPs) are setting aside cash to pay ransomware demands, as fears over AI-fuelled cybercrime continue to mount.

MSPs Under Pressure as Ransomware Attacks Surge

The finding comes from the CyberSmart MSP Survey 2025, which examined the security posture of 900 MSPs across the UK, Europe, Australia, and New Zealand. According to the report, nearly half of those surveyed now maintain a dedicated pot of money in case they are hit by a ransomware attack, a tactic where cybercriminals encrypt a victim’s data and demand a payment for its return.

Counter To Guidance

This approach appears to run counter to guidance from insurers, governments, and law enforcement agencies, which consistently urge organisations not to pay. However, the growing scale and frequency of attacks, often powered by artificial intelligence, appear to be forcing MSPs to adopt a more pragmatic (if controversial) strategy.

“Organisations shouldn’t rely on ransomware payments; rather, they should partner with organisations that can help proactively secure them,” said Jamie Akhtar, CEO and co-founder of CyberSmart.

Be Prepared

The report’s findings highlight a deepening sense of vulnerability among MSPs, many of which provide outsourced IT and cyber-security services to small and medium-sized enterprises (SMEs). With AI-generated phishing emails, malware, and deepfakes becoming increasingly sophisticated, the pressure to be prepared for the worst has never been higher.

More Breaches, More Budgets, More Confusion

CyberSmart’s research revealed that 69 per cent of MSPs had suffered two or more cyber breaches in the last 12 months, while 47 per cent reported being hit three times or more. These incidents are not just one-off events. For example, many are the result of supply chain vulnerabilities, such as the May 2025 breach where the Dragonforce ransomware group exploited a remote monitoring and management (RMM) tool to compromise multiple MSP clients.

Faced with mounting threats, MSPs are reacting in different ways. For example, 36 per cent now rely on cyber insurance as their primary defence, while 11 per cent (worryingly) have neither cyber insurance nor a ransomware fund in place, leaving them financially and operationally exposed if attacked.

Guidance Not Clear

It seems that part of the problem is that official guidance around ransomware payments remains fragmented and unclear. While governments generally discourage paying ransoms, enforcement is inconsistent outside the public sector. “What your business is advised to do will largely depend on where you’re based and who’s advising you,” CyberSmart noted in its commentary.

This has led to a patchwork of interpretations, with some MSPs feeling they have little choice but to maintain a reserve, despite the moral and strategic risks involved.

UK Government Moves to Ban Ransomware Payments for Critical Services

In July 2025, the UK government announced proposals to ban ransomware payments for public sector bodies and operators of critical national infrastructure (CNI). The measures, introduced by the Home Office following a public consultation, would apply to organisations such as hospitals, councils, schools, and water providers, sectors where operational downtime can endanger lives.

“Ransomware is a predatory crime that puts the public at risk, wrecks livelihoods and threatens the services we depend on,” said Security Minister Dan Jarvis. “We’re determined to smash the cyber criminal business model and protect the services we all rely on.”

Private Businesses Would Need To Notify Government Before Paying

Under the proposals, private businesses would not be banned outright from paying, but would be required to notify the government before doing so. This would enable authorities to offer advice, check for potential sanctions breaches (such as paying Russian-linked gangs), and gather intelligence to disrupt criminal networks.

Cybercrime’s Business Model Under Scrutiny

The rationale behind the payment ban is to undermine the business model of ransomware gangs, which rely on victims caving in quickly to avoid reputational damage, data leaks, or prolonged disruption. However, experts have warned that banning payments, especially only for certain sectors, may not have the desired effect.

“Ransomware is largely an opportunistic crime, and most cyber criminals are not discerning,” said Jamie MacColl, a senior research fellow at the Royal United Services Institute (RUSI). “They’re unlikely to develop a rigorous understanding of UK legislation or how we designate critical infrastructure.”

Others suggest the ban could increase the stakes for victims. “If the best solution is to just turn around and say to the hackers, ‘We’re not giving in to your demands anymore,’ don’t be surprised if they double down,” said Rob Jardin, chief digital officer at NymVPN.

The British Library, one of the most high-profile public victims of ransomware in recent years, chose not to pay after an attack in October 2023 devastated its systems. “We are committed to sharing our experiences to help protect other institutions and build collective resilience,” said Chief Executive Rebecca Lawrence.

AI Attacks Are Changing the Game

Perhaps the most striking shift in this year’s CyberSmart survey is the rise of artificial intelligence as the top concern for MSPs in 2025. AI overtook ransomware itself, with 44 per cent of respondents citing it as their biggest worry, compared to 40 per cent for traditional malware and ransomware threats.

This change reflects a growing trend in how attackers operate. For example, AI tools are now being used to write convincing phishing emails, build more evasive malware, and even create deepfake audio and video to impersonate executives or support social engineering attacks.

In 2024, 67 per cent of MSPs reported falling victim to AI-enabled attacks, a figure expected to rise in 2025 as generative and agent-based AI tools become more widely available to threat actors.

However, many MSPs feel ill-equipped to counter these evolving threats, with a lack of user-friendly, AI-specific defence tools still a key issue. “MSPs are being asked to do more, with fewer tools at their disposal,” the report concludes.

Customer Expectations Are Rising, But So Is Investment

The research also showed that 84 per cent of MSPs now manage their clients’ cybersecurity infrastructure, or both their cybersecurity and broader IT estate. This shift reflects growing client expectations for MSPs to provide end-to-end protection, expectations that often come with greater scrutiny.

According to the CyberSmart research, 77 per cent of MSPs said potential customers are now evaluating their cyber credentials more carefully, especially in the procurement stage.

To meet demand, it seems that MSPs are now investing heavily. For example, 81 per cent have increased spend on hiring security specialists, and 78 per cent have upped budgets for cyber defence tools, training, and client services. Compliance is also high on the agenda, with 60 per cent hiring regulatory specialists and 64 per cent enhancing capabilities to align with frameworks such as NIS2 in the EU and the UK’s upcoming Cyber Security and Resilience Bill.

According to NCSC Director of National Resilience Jonathon Ellison, such steps are critical: “Ransomware remains a serious and evolving threat, and organisations must not become complacent. All businesses should strengthen their defences using proven frameworks such as Cyber Essentials.”

MSPs Prepared Yet Vulnerable

Despite the high rate of breaches, MSPs remain surprisingly confident in their security posture. For example, CyberSmart found that 76 per cent rate their cyber confidence as above average or higher. That said, only 20 per cent described their confidence as complete, suggesting that many know there’s room for improvement.

For businesses relying on MSPs to manage their security, the message from this research appears to be that while many providers are stepping up their game, others are still reacting to threats in ways that may not align with long-term best practice.

Co-op CEO Shirine Khoury-Haq, who oversaw the retailer’s response to a Scattered Spider ransomware attack, captured the sentiment well, saying: “What matters most is learning, building resilience, and supporting each other to prevent future harm. This is a step in the right direction for building a safer digital future.”

What Does This Mean For Your Organisation?

For MSPs and their clients, the emergence of ransomware funds could be seen as a move from aspirational resilience to operational realism. Despite official advice against paying cybercriminals, it seems that many MSPs clearly believe they cannot afford to be unprepared. With 69 per cent already breached multiple times in a single year and AI accelerating the scale and complexity of attacks, the temptation to hold a contingency reserve is understandable. However, this pragmatic stance may also entrench the very business model that governments and law enforcement are working hard to dismantle.

The UK’s proposed ransomware payment ban for public bodies and CNI highlights just how far official thinking has moved towards systemic deterrence. However, the exclusion of private businesses from that ban, and the option for them to pay under notification, risks creating an uneven response that may ultimately frustrate enforcement and dilute its impact. As Jamie MacColl pointed out, most ransomware gangs operate opportunistically and will not necessarily distinguish between regulated and unregulated targets. This raises questions about whether partial bans can realistically alter attacker behaviour.

For UK businesses, especially SMEs dependent on MSPs for protection, the findings raise difficult questions. For example, while many providers are making serious investments in tools, people, and compliance, others are still relying on reactive strategies that may offer short-term cover but little long-term assurance. The increasing scrutiny on MSPs is likely to intensify, particularly as clients seek partners who are both cyber confident and operationally transparent. Businesses must now evaluate not only whether their MSP has a ransomware plan, but also whether that plan reflects best practice or a compromise born of confusion.

For regulators, the lack of clarity and consistency around ransomware responses remains a core problem. Guidance alone is proving insufficient. A broader and more unified framework, alongside mandatory reporting, may be needed to help ensure MSPs, their clients, and their insurers are working from the same playbook. For now, the reliance on private ransomware funds points to a cyber landscape still dominated by tactical survival rather than strategic coordination.

Tech News : WhatsApp Barred From Apple Case

WhatsApp has been denied permission to join a major legal challenge over UK government demands for access to encrypted data, as a special tribunal confirms a seven-day public hearing will go ahead in 2026.

WhatsApp Shut Out of High-Stakes Encryption Fight

The Investigatory Powers Tribunal (IPT), which hears complaints about UK surveillance and investigatory powers, has rejected an application by WhatsApp to intervene in two linked legal challenges over the use of secret government powers to weaken encryption.

The challenges stem from a reported Technical Capability Notice (TCN) issued by the Home Office in January 2025. Under the UK’s Investigatory Powers Act, a TCN can compel a company to build or alter technology to ensure it can be accessed by government agencies under lawful authority.

In this case, the order reportedly demanded that Apple provide access to encrypted user data stored globally on its iCloud platform, including material protected by its Advanced Data Protection (ADP) service.

Apple responded in February by withdrawing the ADP feature from UK users, publicly stating that it would never build “a backdoor or master key” into its products. The move drew attention on both sides of the Atlantic, triggering concerns in the US about the implications for American users and businesses.

In March, Privacy International, Liberty, and two individual claimants filed a legal challenge to the secrecy and legality of the Home Office’s reported actions. Apple launched its own legal case in parallel.

Then, in April, the Home Office attempted to argue that the full case should be heard behind closed doors. This was rejected by the IPT following objections from ten media organisations. The tribunal opted instead for a novel legal approach: proceeding on the basis of “assumed facts”, allowing as much of the hearing as possible to be held in public while preserving the government’s right to “neither confirm nor deny” the existence of the order.

WhatsApp applied to intervene in both cases in June, citing the risk of a precedent that could erode the encryption protections used by billions of people. However, on 23 July, the Tribunal refused the application. A seven-day public hearing will now go ahead in early 2026, combining Apple’s case and the Privacy International-led challenge.

A Public Hearing, But Based on Assumed Facts

Although much of the government’s activities around encryption remain secret, the IPT has ruled that the bulk of Apple’s and Privacy International’s legal arguments will be heard in open court at a seven-day hearing, now scheduled for early 2026.

In a bid to balance transparency with national security, the tribunal will proceed on the basis of “assumed facts” rather than actual confirmation of the Home Office’s reported order. The government will be permitted to maintain its official “neither confirm nor deny” (NCND) position on the existence of the TCN, even though details have been widely leaked and reported.

Why?

It seems that this approach allows both Apple’s and Privacy International’s legal arguments to be made in public, without requiring sensitive details to be aired in a closed court. The IPT had previously rejected attempts by the Home Office to keep the entire case behind closed doors, following objections from a coalition of media outlets including the BBC, The Guardian and Computer Weekly.

A Frustrated WhatsApp Pushes Back

WhatsApp expressed clear frustration at the decision to exclude it from proceedings. CEO Will Cathcart previously submitted written evidence raising concerns that the UK order sets “a dangerous precedent for security technologies that protect users around the world”.

Cathcart stated: “We’ve applied to intervene in this case to protect people’s privacy globally. Liberal democracies should want the best security for their citizens. Instead, the UK is doing the opposite through a secret order.”

Following the ruling, a WhatsApp spokesperson added: “This is deeply disappointing, particularly as the UK’s attempt to break encryption continues to be shrouded in layers of secrecy. We will continue to stand up to governments that try to weaken the encryption that protects people’s private communication.”

The company has repeatedly warned that mandating backdoors, i.e. ways for governments to access encrypted systems, would compromise security not just for criminals, but for all users, exposing communications to cybercriminals and hostile states.

Apple Takes a Stand (And a Step Back)

Apple has also taken a firm stance against the Home Office’s demands. For example, in February 2025, it withdrew its Advanced Data Protection (ADP) service from UK customers, rather than comply with the TCN’s reported requirements.

ADP enables users to encrypt their iCloud backups using end-to-end encryption, meaning not even Apple can access the data. The feature remains available in other countries.
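
To illustrate the principle at stake (and only the principle, not Apple’s actual cryptography), the short sketch below shows client-side encryption with a key that never leaves the user’s device, so a cloud provider holding the ciphertext alone cannot read it. It uses the third-party Python cryptography package and is a simplified, hypothetical example.

```python
# Simplified illustration of the end-to-end principle behind features like
# ADP: data is encrypted on the device with a key the provider never sees,
# so the server only ever stores ciphertext. NOT Apple's actual design.
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()   # generated and kept on the device
cipher = Fernet(device_key)

backup = b"contacts, photos, notes..."
ciphertext = cipher.encrypt(backup)  # this is all the cloud ever holds

# Only a holder of the device key can recover the plaintext.
assert cipher.decrypt(ciphertext) == backup
```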

In a statement at the time, Apple said: “As we have said many times before, we have never built a backdoor or master key to any of our products or services, and we never will.”

Apple’s legal challenge is separate from the civil liberties group case, but will be heard during the same week as part of the IPT’s coordinated hearing.

Why This Matters and What’s at Stake

The case matters because it has significant implications for privacy, national security, and the power of democratic oversight. At its heart is a tension between the UK government’s claim that it must access encrypted data to fight terrorism and child abuse, and the tech industry’s position that weakening encryption threatens the security of everyone.

Technical Capability Notices, while rarely discussed in public, give the Home Office power to compel companies to make their systems interceptable. This can include designing or modifying services to allow for lawful access, which is something encryption advocates have long argued is incompatible with true end-to-end encryption.

Smokescreen?

Campaigners such as Privacy International argue that the UK is using national security as a “smokescreen” to bypass proper scrutiny and safeguards. Legal Director Caroline Wilson Palow criticised the government’s NCND stance, saying: “We are being forced to sustain the fiction that the order does not exist, which may hinder our ability to grapple fully with its legal ramifications.”

Privacy International’s challenge also questions the lawfulness and necessity of the regime underpinning TCNs, including whether they are being used proportionately and with sufficient parliamentary oversight.

International Repercussions and Political Fallout

It seems that the Home Office’s efforts have not only raised legal alarms but have also sparked diplomatic tensions. For example, the Financial Times recently reported that UK officials are now exploring ways to de-escalate the row with the US government, which sees the order against Apple as a breach of sovereignty.

US President Donald Trump and Director of National Intelligence Tulsi Gabbard have both condemned the UK’s actions, warning that attempts to access the encrypted data of US citizens could be considered a hostile act.

Gabbard described the move as “a clear and egregious violation”, and there have been calls in Washington for changes to the US CLOUD Act to limit the extraterritorial reach of UK orders.

What Comes Next?

The Tribunal’s case management order paves the way for a high-profile legal test in early 2026. The hearing is expected to include arguments on the legal limits of the UK’s investigatory powers, the technological realities of encryption, and whether governments can compel private firms to compromise the security of their own systems.

The hearing’s outcome may shape the future of encrypted communications not only in the UK, but globally. If the IPT upholds the TCN, it could embolden similar efforts in other jurisdictions. If it rules in favour of Apple and Privacy International, it could reinforce legal limits on surveillance powers.

While WhatsApp is now shut out of this phase of the process, the company and others offering secure communications are likely to keep pushing back, through lobbying, public advocacy, and possibly future legal action. For businesses and consumers relying on encrypted services to protect sensitive data, the stakes are high.

What Does This Mean For Your Business?

The hearing will be closely watched by UK businesses that rely on cloud services, secure messaging, and encrypted backups to safeguard client data and protect against cyber threats. If the government’s approach is upheld, it could signal the start of broader obligations on tech providers to ensure government access by design. That would pose real concerns for sectors handling sensitive information, including finance, legal services, healthcare and defence, where robust end-to-end encryption is often a regulatory or contractual expectation.

Although the Home Office claims such powers are essential for national security and criminal investigations, many critics argue (and have long done so) that the very existence of compelled access could weaken the technical integrity of services relied on by billions of people. From a commercial perspective, compliance with such orders may require re-engineering platforms, reducing user trust, or even withdrawing features entirely, as Apple has already done. For global technology firms operating in the UK, the outcome of this case could determine whether the market remains viable under increasingly intrusive obligations.

WhatsApp’s exclusion also raises questions about who gets to speak for encryption. As the leading end-to-end messaging platform, its technical perspective and global footprint might reasonably have added weight to the Tribunal’s understanding of broader risks. Its absence means the court will hear arguments from campaigners and Apple alone, but the ruling will likely affect a much wider community of providers, developers and users.

The Tribunal’s decision to hold a mostly open hearing is a rare opportunity for meaningful legal and public scrutiny of the UK’s approach to encrypted data. However, the reliance on “assumed facts” and continued insistence on neither confirming nor denying the order’s existence means that transparency will remain partial. For those on all sides of the encryption debate, that balancing act between openness and secrecy is likely to remain a defining feature of the months ahead.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a jargon-free style.
