Featured Article : UK Public Sector / AI Partnership
The UK Government has entered into a formal partnership with OpenAI aimed at accelerating the responsible use of artificial intelligence (AI) across public services, infrastructure, and national growth zones.
What Is The Deal?
Announced on 21 July 2025, the agreement takes the form of a Memorandum of Understanding (MoU) between the Department for Science, Innovation and Technology and OpenAI, the US-based company behind ChatGPT. While not legally binding, the document outlines both sides’ intentions to deepen collaboration in areas including AI infrastructure, public sector deployment, and AI safety research.
To Transform Taxpayer-Funded Services
According to the Department, the strategic aim is to “transform taxpayer-funded services” and improve how the state uses emerging technologies. It also includes commitments to explore joint investments in regional AI growth zones, share technical insights with the UK’s AI Safety Institute, and expand OpenAI’s UK-based engineering and research operations.
Technology Secretary Peter Kyle described the move as central to “driving the change we need to see across the country – whether that’s in fixing the NHS, breaking down barriers to opportunity or driving economic growth”.
OpenAI CEO Sam Altman echoed this, saying AI is a “core technology for nation building” and that the partnership would “deliver prosperity for all” by aligning with the goals set out in the UK’s AI Opportunities Action Plan.
Why Now And Why OpenAI?
The timing reflects the government’s wider push to position Britain as a leader in AI development and deployment. This includes the £2 billion commitment to AI growth zones made earlier this year, alongside a new AI Compute Strategy and the creation of a national AI Safety Institute.
It also comes as the UK faces sluggish productivity growth, mounting public sector workloads, and strained public finances. Officials argue that automating time-consuming tasks, such as consultation analysis, document classification or civil service admin, could help free up staff to focus on more complex or sensitive work.
OpenAI’s Models Already Being Used
It’s worth noting here that GPT-4o, one of OpenAI’s flagship models, is already being used in a Whitehall tool called “Consult”, which automatically processes responses to public consultations. The tool is said to reduce weeks of manual work to a matter of minutes, while leaving substantive decision-making to human experts.
The government’s AI chatbot “Humphrey” also uses OpenAI’s API to help small businesses navigate GOV.UK services more efficiently.
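For readers curious about the mechanics, integrating a model like GPT-4o into a service of this kind typically involves little more than a call to OpenAI’s chat API. The sketch below, written against the official openai Python SDK, is purely illustrative and is not the government’s actual code; the model choice, prompt, and function names are our own assumptions.

```python
# Illustrative sketch only (not the government's code): how a GOV.UK-style
# assistant might call OpenAI's chat API via the official openai Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_business_query(question: str) -> str:
    """Ask the model a small-business question about GOV.UK services."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; the model named in public reporting
        messages=[
            {"role": "system",
             "content": ("You help small businesses navigate GOV.UK services. "
                         "Answer concisely and point to the relevant service.")},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_business_query("How do I register for VAT?"))
```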
According to the MoU, future deployments will prioritise transparency, data protection, and alignment with democratic values. However, critics have raised concerns that key details of the deal remain vague.
A Boost for OpenAI’s UK Ambitions
For OpenAI, the partnership will, no doubt, reinforce its growing presence in the UK, which it describes as a “top three market globally” for both API developers and paid ChatGPT subscribers.
The company opened its first international office in London in 2023 and now employs more than 100 staff there. Under the new agreement, it plans to expand these operations further to support both product development and local partnerships.
OpenAI is also expected to explore building or supporting UK-based data centres and R&D infrastructure, a move that would enhance what the government calls the country’s “sovereign AI capability”. This concept refers to ensuring that core AI infrastructure and innovation remain under UK control rather than becoming overly reliant on US or Chinese providers.
Sam Altman has suggested that such regional investment could help stimulate jobs and revitalise communities, especially within the designated AI growth zones.
Competitors and UK Tech Firms
The announcement is likely to intensify competition among global AI providers, particularly Google DeepMind and Anthropic, both of which have also signed cooperation agreements with the UK Government in recent months.
However, some British AI firms say the government is placing too much emphasis on partnerships with dominant US players at the expense of homegrown innovation. Tim Flagg, Chief Operating Officer at UKAI, a trade body for British AI companies, previously warned that the AI Opportunities Action Plan takes a “narrow view” of who is shaping the UK’s AI future.
For example, it could mean that UK-based AI firms working on foundation models, language processing, or ethical AI frameworks may now find themselves competing for talent, attention, and influence with the likes of OpenAI, whose models and reputation already dominate the field.
Digital rights campaigners have also questioned whether the government is adequately safeguarding public interest and data security in its eagerness to court big tech firms.
Warnings Over Public Data and Accountability
One of the main criticisms of the deal is its lack of specificity on how public data may be used. While the agreement hints at technical collaboration and information-sharing, it doesn’t clarify whether UK citizens’ data will help train OpenAI’s models, or what safeguards will be in place.
Digital rights group Foxglove called the MoU “hopelessly vague”, warning that OpenAI stands to benefit from the UK’s “treasure trove of public data”. Co-Executive Director Martha Dark went further, saying that “Peter Kyle seems bizarrely determined to put the big tech fox in charge of the henhouse when it comes to UK sovereignty”.
Others have raised broader concerns about transparency and oversight. Some academics and civil service experts suggest that while AI tools may relieve public sector staff of time-consuming administrative tasks, the real challenge lies in ensuring that deployments are done ethically, with strong governance and minimal reliance on personal or sensitive data.
The AI Infrastructure Angle
Beyond public services, the deal includes plans to explore investment in AI infrastructure, a term that typically refers to the high-performance computing facilities and energy-intensive data centres required to train and deploy large AI models.
This ties into the UK’s broader push for regional development. Under the AI Growth Zone initiative, over 200 local bids have been submitted, with billions in potential investment expected. The government has confirmed that both Scotland and Wales will host zones under the AI Compute Strategy.
The partnership with OpenAI may give these ambitions extra momentum. If the company builds or co-develops infrastructure in the UK, it could significantly improve national access to compute power, a key enabler for both public and private AI innovation.
Concerns Over Sovereignty and Big Tech Influence
Despite assurances from ministers that the UK will remain in control of its AI future, there are growing calls for greater scrutiny and legislative oversight.
The UK’s Data (Use and Access) Act, which recently completed its passage through Parliament, may play a role in regulating how personal and government data can be used in AI systems. However, many campaigners believe that dedicated AI legislation, with clear public interest protections, is still lacking.
Meanwhile, the MoU’s non-binding nature means the partnership could evolve in unpredictable ways, without necessarily being subject to parliamentary approval or regulatory review.
Peter Kyle has defended the approach, arguing that “global companies which are innovating on a scale the British state cannot match” must be engaged if the UK wants to compete in the AI era.
However, for opponents, this signals a risk of policy being shaped too closely around commercial interests, rather than the public good.
What Does This Mean For Your Business?
The UK’s agreement with OpenAI may sound like a significant moment in the evolution of public sector AI strategy, but it also raises some important questions about balance, control, and accountability. For government departments under pressure to deliver more with less, AI appears to present an opportunity to reduce routine workloads, speed up processes, and direct skilled professionals toward more impactful tasks. With OpenAI’s models already embedded in tools like “Humphrey” and “Consult”, this partnership could enable deeper integration and faster iteration across critical areas such as justice, health, education, and small business support.
For UK businesses, particularly those involved in or supplying to the public sector, the partnership could bring both practical benefits and growing pressure. For example, OpenAI’s expanded presence may improve access to advanced AI tools, infrastructure, and collaborative opportunities, helping British startups and firms apply new technologies more effectively. At the same time, there is concern that prioritising partnerships with large US-based companies could marginalise smaller UK tech providers whose innovations may be better suited to local contexts but lack the scale or visibility to compete.
The deal also adds pressure on the UK to clarify how it will protect data, enforce ethical guardrails, and ensure that public interest remains front and centre. Critics argue that the lack of legally binding terms leaves room for mission creep or overreach, especially if partnerships expand without clear oversight. With public trust in digital services already under strain, transparency and accountability will be vital to ensuring these systems are not only efficient, but also fair and secure.
Ultimately, the MoU appears to reflect the government’s belief that strategic alignment with global AI leaders is essential if the UK wants to stay competitive. Whether this approach will deliver broad-based economic and societal benefit, or reinforce existing power imbalances, will depend on how well the promises of inclusion, sovereignty, and ethical standards are translated into action. For now, the UK has made its bet, and the challenge will be ensuring that it delivers for everyone.
Tech Insight : 45% Of MSPs Keep Cash To Pay Off Hackers
A new survey reveals 45 per cent of managed service providers (MSPs) are setting aside cash to pay ransomware demands, as fears over AI-fuelled cybercrime continue to mount.
MSPs Under Pressure as Ransomware Attacks Surge
The finding comes from the CyberSmart MSP Survey 2025, which examined the security posture of 900 MSPs across the UK, Europe, Australia, and New Zealand. According to the report, nearly half of those surveyed now maintain a dedicated pot of money in case they are hit by ransomware, an attack in which cybercriminals encrypt a victim’s data and demand a payment for its return.
Counter To Guidance
This approach appears to run counter to guidance from insurers, governments, and law enforcement agencies, which consistently urge organisations not to pay. However, the growing scale and frequency of attacks, often powered by artificial intelligence, appear to be forcing MSPs to adopt a more pragmatic (if controversial) strategy.
“Organisations shouldn’t rely on ransomware payments; rather, they should partner with organisations that can help proactively secure them,” said Jamie Akhtar, CEO and co-founder of CyberSmart.
Be Prepared
The report’s findings highlight a deepening sense of vulnerability among MSPs, many of which provide outsourced IT and cyber-security services to small and medium-sized enterprises (SMEs). With AI-generated phishing emails, malware, and deepfakes becoming increasingly sophisticated, the pressure to be prepared for the worst has never been higher.
More Breaches, More Budgets, More Confusion
CyberSmart’s research revealed that 69 per cent of MSPs had suffered two or more cyber breaches in the last 12 months, while 47 per cent reported being hit three times or more. These incidents are not just one-off events. For example, many are the result of supply chain vulnerabilities, such as the May 2025 breach where the DragonForce ransomware group exploited a remote monitoring and management (RMM) tool to compromise multiple MSP clients.
Faced with mounting threats, MSPs are reacting in different ways. For example, 36 per cent now rely on cyber insurance as their primary defence, while 11 per cent (worryingly) have neither cyber insurance nor a ransomware fund in place, leaving them financially and operationally exposed if attacked.
Guidance Not Clear
It seems that part of the problem is that official guidance around ransomware payments remains fragmented and unclear. While governments generally discourage paying ransoms, enforcement is inconsistent outside the public sector. “What your business is advised to do will largely depend on where you’re based and who’s advising you,” CyberSmart noted in its commentary.
This has led to a patchwork of interpretations, with some MSPs feeling they have little choice but to maintain a reserve, despite the moral and strategic risks involved.
UK Government Moves to Ban Ransomware Payments for Critical Services
In July 2025, the UK government announced proposals to ban ransomware payments for public sector bodies and operators of critical national infrastructure (CNI). The measures, introduced by the Home Office following a public consultation, would apply to organisations such as hospitals, councils, schools, and water providers, sectors where operational downtime can endanger lives.
“Ransomware is a predatory crime that puts the public at risk, wrecks livelihoods and threatens the services we depend on,” said Security Minister Dan Jarvis. “We’re determined to smash the cyber criminal business model and protect the services we all rely on.”
Private Businesses Would Need To Notify Government Before Paying
Under the proposals, private businesses would not be banned outright from paying, but would be required to notify the government before doing so. This would enable authorities to offer advice, check for potential sanctions breaches (such as paying Russian-linked gangs), and gather intelligence to disrupt criminal networks.
Cybercrime’s Business Model Under Scrutiny
The rationale behind the payment ban is to undermine the business model of ransomware gangs, which rely on victims caving in quickly to avoid reputational damage, data leaks, or prolonged disruption. However, experts have warned that banning payments, especially only for certain sectors, may not have the desired effect.
“Ransomware is largely an opportunistic crime, and most cyber criminals are not discerning,” said Jamie MacColl, a senior research fellow at the Royal United Services Institute (RUSI). “They’re unlikely to develop a rigorous understanding of UK legislation or how we designate critical infrastructure.”
Others suggest the ban could increase the stakes for victims. “If the best solution is to just turn around and say to the hackers, ‘We’re not giving in to your demands anymore,’ don’t be surprised if they double down,” said Rob Jardin, chief digital officer at NymVPN.
The British Library, one of the most high-profile public victims of ransomware in recent years, chose not to pay after an attack in October 2023 devastated its systems. “We are committed to sharing our experiences to help protect other institutions and build collective resilience,” said Chief Executive Rebecca Lawrence.
AI Attacks Are Changing the Game
Perhaps the most striking shift in this year’s CyberSmart survey is the rise of artificial intelligence as the top concern for MSPs in 2025. AI overtook ransomware itself, with 44 per cent of respondents citing it as their biggest worry, compared to 40 per cent for traditional malware and ransomware threats.
This change reflects a growing trend in how attackers operate. For example, AI tools are now being used to write convincing phishing emails, build more evasive malware, and even create deepfake audio and video to impersonate executives or support social engineering attacks.
In 2024, 67 per cent of MSPs reported falling victim to AI-enabled attacks, a figure expected to rise in 2025 as generative and agent-based AI tools become more widely available to threat actors.
However, many MSPs feel ill-equipped to counter these evolving threats, with a lack of user-friendly, AI-specific defence tools still a key issue. “MSPs are being asked to do more, with fewer tools at their disposal,” the report concludes.
Customer Expectations Are Rising, But So Is Investment
The research also showed that 84 per cent of MSPs now manage their clients’ cybersecurity infrastructure, or both their cybersecurity and broader IT estate. This shift reflects growing client expectations for MSPs to provide end-to-end protection, expectations that often come with greater scrutiny.
According to the CyberSmart research, 77 per cent of MSPs said potential customers are now evaluating their cyber credentials more carefully, especially in the procurement stage.
To meet demand, it seems that MSPs are now investing heavily. For example, 81 per cent have increased spend on hiring security specialists, and 78 per cent have upped budgets for cyber defence tools, training, and client services. Compliance is also high on the agenda, with 60 per cent hiring regulatory specialists and 64 per cent enhancing capabilities to align with frameworks such as NIS2 in the EU and the UK’s upcoming Cyber Security and Resilience Bill.
According to NCSC Director of National Resilience Jonathon Ellison, such steps are critical: “Ransomware remains a serious and evolving threat, and organisations must not become complacent. All businesses should strengthen their defences using proven frameworks such as Cyber Essentials.”
MSPs Prepared Yet Vulnerable
Despite the high rate of breaches, MSPs remain surprisingly confident in their security posture. For example, CyberSmart found that 76 per cent rate their cyber confidence as above average or higher. That said, only 20 per cent described their confidence as complete, suggesting that many know there’s room for improvement.
For businesses relying on MSPs to manage their security, the message of this research appears to be that while many providers are stepping up their game, others are still reacting to threats in ways that may not align with long-term best practice.
Co-op CEO Shirine Khoury-Haq, who oversaw the retailer’s response to a Scattered Spider ransomware attack, captured the sentiment well, saying: “What matters most is learning, building resilience, and supporting each other to prevent future harm. This is a step in the right direction for building a safer digital future.”
What Does This Mean For Your Organisation?
For MSPs and their clients, the emergence of ransomware funds could be seen as a move from aspirational resilience to operational realism. Despite official advice against paying cybercriminals, it seems that many MSPs clearly believe they cannot afford to be unprepared. With 69 per cent already breached multiple times in a single year and AI accelerating the scale and complexity of attacks, the temptation to hold a contingency reserve is understandable. However, this pragmatic stance may also entrench the very business model that governments and law enforcement are working hard to dismantle.
The UK’s proposed ransomware payment ban for public bodies and CNI highlights just how far official thinking has moved towards systemic deterrence. However, the exclusion of private businesses from that ban, and the option for them to pay under notification, risks creating an uneven response that may ultimately frustrate enforcement and dilute its impact. As Jamie MacColl pointed out, most ransomware gangs operate opportunistically and will not necessarily distinguish between regulated and unregulated targets. This raises questions about whether partial bans can realistically alter attacker behaviour.
For UK businesses, especially SMEs dependent on MSPs for protection, the findings raise difficult questions. For example, while many providers are making serious investments in tools, people, and compliance, others are still relying on reactive strategies that may offer short-term cover but little long-term assurance. The increasing scrutiny on MSPs is likely to intensify, particularly as clients seek partners who are both cyber confident and operationally transparent. Businesses must now evaluate not only whether their MSP has a ransomware plan, but also whether that plan reflects best practice or a compromise born of confusion.
For regulators, the lack of clarity and consistency around ransomware responses remains a core problem. Guidance alone is proving insufficient. A broader and more unified framework, alongside mandatory reporting, may be needed to help ensure MSPs, their clients, and their insurers are working from the same playbook. For now, the reliance on private ransomware funds points to a cyber landscape still dominated by tactical survival rather than strategic coordination.
Tech News : WhatsApp Barred From Apple Case
WhatsApp has been denied permission to join a major legal challenge over UK government demands for access to encrypted data, as a special tribunal confirms a seven-day public hearing will go ahead in 2026.
WhatsApp Shut Out of High-Stakes Encryption Fight
The Investigatory Powers Tribunal (IPT), which hears complaints about UK surveillance and investigatory powers, has rejected an application by WhatsApp to intervene in two linked legal challenges over the use of secret government powers to weaken encryption.
The challenges stem from a reported Technical Capability Notice (TCN) issued by the Home Office in January 2025. Under the UK’s Investigatory Powers Act, a TCN can compel a company to build or alter its technology so that data can be accessed by government agencies under lawful authority.
In this case, the order reportedly demanded that Apple provide access to encrypted user data stored globally on its iCloud platform, including material protected by its Advanced Data Protection (ADP) service.
Apple responded in February by withdrawing the ADP feature from UK users, publicly stating that it would never build “a backdoor or master key” into its products. The move drew attention on both sides of the Atlantic, triggering concerns in the US about the implications for American users and businesses.
In March, Privacy International, Liberty, and two individual claimants filed a legal challenge to the secrecy and legality of the Home Office’s reported actions. Apple launched its own legal case in parallel.
Then, in April, the Home Office attempted to argue that the full case should be heard behind closed doors. This was rejected by the IPT following objections from ten media organisations. The tribunal opted instead for a novel legal approach: proceeding on the basis of “assumed facts”, allowing as much of the hearing as possible to be held in public while preserving the government’s right to “neither confirm nor deny” the existence of the order.
WhatsApp applied to intervene in both cases in June, citing the risk of a precedent that could erode the encryption protections used by billions of people. However, on 23 July, the Tribunal refused the application. A seven-day public hearing will now go ahead in early 2026, combining Apple’s case and the Privacy International-led challenge.
A Public Hearing, But Based on Assumed Facts
Although much of the government’s activities around encryption remain secret, the IPT has ruled that the bulk of Apple’s and Privacy International’s legal arguments will be heard in open court at a seven-day hearing, now scheduled for early 2026.
In a bid to balance transparency with national security, the tribunal will proceed on the basis of “assumed facts” rather than actual confirmation of the Home Office’s reported order. The government will be permitted to maintain its official “neither confirm nor deny” (NCND) position on the existence of the TCN, even though details have been widely leaked and reported.
Why?
It seems that this approach allows both Apple’s and Privacy International’s legal arguments to be made in public, without requiring sensitive details to be aired in a closed court. The IPT had previously rejected attempts by the Home Office to keep the entire case behind closed doors, following objections from a coalition of media outlets including the BBC, The Guardian and Computer Weekly.
A Frustrated WhatsApp Pushes Back
WhatsApp expressed clear frustration at the decision to exclude it from proceedings. CEO Will Cathcart previously submitted written evidence raising concerns that the UK order sets “a dangerous precedent for security technologies that protect users around the world”.
Cathcart stated: “We’ve applied to intervene in this case to protect people’s privacy globally. Liberal democracies should want the best security for their citizens. Instead, the UK is doing the opposite through a secret order.”
Following the ruling, a WhatsApp spokesperson added: “This is deeply disappointing, particularly as the UK’s attempt to break encryption continues to be shrouded in layers of secrecy. We will continue to stand up to governments that try to weaken the encryption that protects people’s private communication.”
The company has repeatedly warned that mandating backdoors, i.e. ways for governments to access encrypted systems, would compromise security not just for criminals, but for all users, exposing communications to cybercriminals and hostile states.
Apple Takes a Stand (And a Step Back)
Apple has also taken a firm stance against the Home Office’s demands. For example, in February 2025, it withdrew its Advanced Data Protection (ADP) service from UK customers, rather than comply with the TCN’s reported requirements.
ADP enables users to encrypt their iCloud backups using end-to-end encryption, meaning not even Apple can access the data. The feature remains available in other countries.
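The principle behind end-to-end encryption can be illustrated in a few lines of Python. The sketch below, which uses the widely available cryptography package’s Fernet primitive, is conceptual only and bears no relation to Apple’s actual implementation; the point is simply that the key never leaves the user’s device, so the provider only ever holds ciphertext.

```python
# Conceptual sketch of client-side ("end-to-end") encryption. This is NOT
# Apple's ADP implementation; it only illustrates the underlying principle.
from cryptography.fernet import Fernet

# 1. The key is generated and kept on the user's device.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

# 2. Data is encrypted locally before it is uploaded.
backup = b"contacts, photos, notes..."
ciphertext = cipher.encrypt(backup)

# 3. Only ciphertext reaches the cloud; without device_key, the provider
#    (or anyone compelling the provider) cannot read the data.
cloud_store = {"backup.blob": ciphertext}

# 4. Restoring the backup requires the device-held key.
assert cipher.decrypt(cloud_store["backup.blob"]) == backup
```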
In a statement at the time, Apple said: “As we have said many times before, we have never built a backdoor or master key to any of our products or services, and we never will.”
Apple’s legal challenge is separate from the civil liberties group case, but will be heard during the same week as part of the IPT’s coordinated hearing.
Why This Matters and What’s at Stake
The case matters because it has significant implications for privacy, national security, and the power of democratic oversight. At its heart is a tension between the UK government’s claim that it must access encrypted data to fight terrorism and child abuse, and the tech industry’s position that weakening encryption threatens the security of everyone.
Technical Capability Notices, while rarely discussed in public, give the Home Office power to compel companies to make their systems interceptable. This can include designing or modifying services to allow for lawful access, which is something encryption advocates have long argued is incompatible with true end-to-end encryption.
Smokescreen?
Campaigners such as Privacy International argue that the UK is using national security as a “smokescreen” to bypass proper scrutiny and safeguards. Legal Director Caroline Wilson Palow criticised the government’s NCND stance, saying: “We are being forced to sustain the fiction that the order does not exist, which may hinder our ability to grapple fully with its legal ramifications.”
Privacy International’s challenge also questions the lawfulness and necessity of the regime underpinning TCNs, including whether they are being used proportionately and with sufficient parliamentary oversight.
International Repercussions and Political Fallout
It seems that the Home Office’s efforts have not only raised legal alarms but have also sparked diplomatic tensions. For example, the Financial Times recently reported that UK officials are now exploring ways to de-escalate the row with the US government, which sees the order against Apple as a breach of sovereignty.
US President Donald Trump and Director of National Intelligence Tulsi Gabbard have both condemned the UK’s actions, warning that attempts to access the encrypted data of US citizens could be considered a hostile act.
Gabbard described the move as “a clear and egregious violation”, and there have been calls in Washington for changes to the US CLOUD Act to limit the extraterritorial reach of UK orders.
What Comes Next?
The Tribunal’s case management order paves the way for a high-profile legal test in early 2026. The hearing is expected to include arguments on the legal limits of the UK’s investigatory powers, the technological realities of encryption, and whether governments can compel private firms to compromise the security of their own systems.
The hearing’s outcome may shape the future of encrypted communications not only in the UK, but globally. If the IPT upholds the TCN, it could embolden similar efforts in other jurisdictions. If it rules in favour of Apple and Privacy International, it could reinforce legal limits on surveillance powers.
While WhatsApp is now shut out of this phase of the process, the company and others offering secure communications are likely to keep pushing back, through lobbying, public advocacy, and possibly future legal action. For businesses and consumers relying on encrypted services to protect sensitive data, the stakes are high.
What Does This Mean For Your Business?
The hearing will be closely watched by UK businesses that rely on cloud services, secure messaging, and encrypted backups to safeguard client data and protect against cyber threats. If the government’s approach is upheld, it could signal the start of broader obligations on tech providers to ensure government access by design. That would pose real concerns for sectors handling sensitive information, including finance, legal services, healthcare and defence, where robust end-to-end encryption is often a regulatory or contractual expectation.
Although the Home Office claims such powers are essential for national security and criminal investigations, many critics argue (and have long done so) that the very existence of compelled access could weaken the technical integrity of services relied on by billions of people. From a commercial perspective, compliance with such orders may require re-engineering platforms, reducing user trust, or even withdrawing features entirely, as Apple has already done. For global technology firms operating in the UK, the outcome of this case could determine whether the market remains viable under increasingly intrusive obligations.
WhatsApp’s exclusion also raises questions about who gets to speak for encryption. As the leading end-to-end messaging platform, its technical perspective and global footprint might reasonably have added weight to the Tribunal’s understanding of broader risks. Its absence means the court will hear arguments from campaigners and Apple alone, but the ruling will likely affect a much wider community of providers, developers and users.
The Tribunal’s decision to hold a mostly open hearing is a rare opportunity for meaningful legal and public scrutiny of the UK’s approach to encrypted data. However, the reliance on “assumed facts” and continued insistence on neither confirming nor denying the order’s existence means that transparency will remain partial. For those on all sides of the encryption debate, that balancing act between openness and secrecy is likely to remain a defining feature of the months ahead.
Tech News : UK Supercomputer Ranks 11th Globally
The UK has switched on its most powerful supercomputer to date, Isambard-AI, a machine purpose-built for artificial intelligence research that now ranks 11th globally in the TOP500 list.
A Major Leap in UK Computing Power
Isambard-AI was officially launched in mid-July at the University of Bristol, marking a significant milestone in the UK’s push to become a global leader in AI and high-performance computing (HPC). Developed by Hewlett Packard Enterprise (HPE) using its advanced Cray EX architecture, the system is powered by more than 5,400 NVIDIA GH200 Grace Hopper Superchips and is housed within the Bristol Centre for Supercomputing.
Its raw computing performance reaches 216.5 petaflops, with a peak theoretical output of 278.6 petaflops. For comparison, one petaflop equals one quadrillion (1,000,000,000,000,000, i.e. a million billion) calculations per second. To put that in context, Isambard-AI is over ten times more powerful than the UK’s next-fastest system, London’s Njoerd supercluster.
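Those figures are easy to sanity-check. The short calculation below reproduces the comparison; note that the laptop estimate is our own rough assumption for scale, not a number from the rankings.

```python
# Back-of-envelope check of the petaflop figures quoted above.
PETA = 10**15  # one petaflop = 10^15 floating-point operations per second

sustained = 216.5 * PETA  # Isambard-AI's measured performance
peak = 278.6 * PETA       # theoretical peak output

# Rough assumption for scale: a high-end laptop manages about 1 teraflop.
laptop = 10**12
print(f"Sustained: {sustained:.3e} calculations per second")
print(f"Roughly {sustained / laptop:,.0f} times a ~1-teraflop laptop")
```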
The new machine is not just the fastest in the country: it also ranks sixth in Europe and is currently the fourth greenest supercomputer in the world, according to the Green500 sustainability rankings.
What Exactly Is a Supercomputer?
Supercomputers are specialised computing systems built to process enormous quantities of data at extremely high speed. Unlike everyday computers, which typically operate using a handful of processing cores, supercomputers use thousands, or in Isambard-AI’s case, tens of thousands, to perform vast numbers of calculations in parallel. This makes them indispensable for complex simulations, deep learning models, and data-heavy scientific research.
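That divide-and-conquer idea can be demonstrated on an ordinary machine. The toy Python sketch below splits one large sum across eight worker processes; a supercomputer applies the same principle, only across tens of thousands of cores and accelerators rather than a handful.

```python
# Toy illustration of data parallelism: split a big workload into chunks
# and compute the chunks simultaneously on separate cores.
from multiprocessing import Pool

def partial_sum(bounds: tuple[int, int]) -> int:
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # chunks run in parallel
    print(f"Sum of squares below {n}: {total}")
```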
Isambard-AI is part of the UK’s Artificial Intelligence Research Resource (AIRR), a national programme aimed at making cutting-edge computing capacity available to public researchers and innovators. This includes major UK universities, startups, and even NHS-linked projects.
Built for AI But Designed for More
Although it has been purpose-built with AI workloads in mind, Isambard-AI is also designed to accelerate scientific discovery across a range of domains. For example, early projects already underway include helping researchers at University College London develop faster, more accurate prostate cancer detection systems, and assisting scientists at the University of Liverpool in the discovery of greener, more sustainable industrial materials.
Isambard-AI is also expected to play a role in climate modelling, vaccine research, and training of large language models (LLMs), which require substantial computational resources. These capabilities align with the government’s broader ambitions to use AI to tackle national challenges, such as reducing NHS waiting times and supporting energy transition goals.
Peter Kyle, the UK’s Secretary of State for Science, Innovation and Technology, described the supercomputer as a catalyst for national progress: “Today we put the most powerful computer system in the country into the hands of British researchers and entrepreneurs… It will propel the UK to the forefront of AI discovery.”
Bristol at the Centre of UK Supercomputing
Isambard-AI is hosted at the National Composites Centre near Bristol, a strategic choice given the University of Bristol’s long-standing leadership in high-performance computing and AI research. The supercomputer’s name comes from Isambard Kingdom Brunel, the pioneering Victorian engineer whose legacy is deeply tied to Bristol through landmark projects like the Clifton Suspension Bridge and the Great Western Railway.
The university already operates another major system, Isambard 3, a CPU-based machine aimed at traditional scientific modelling. Together, the two systems provide an integrated platform for advanced research, all with an eye toward sustainability.
According to Professor Simon McIntosh-Smith, Director of the Bristol Centre for Supercomputing, “We built Isambard-AI to serve the UK research community and help solve some of the world’s toughest problems. Seeing it recognised among the world’s best is a real testament to what’s possible when brilliant people come together with a shared vision.”
He also noted the importance of partnerships in realising the project, thanking contributors including HPE, NVIDIA, Arm, DSIT, UKRI, and STFC.
Where It Ranks Globally And Why That Matters
In the June 2025 TOP500 rankings, an internationally respected benchmark for supercomputers, Isambard-AI entered the list at number 11, placing the UK firmly back on the global HPC map.
At the top of the list is El Capitan, a US-based machine boasting an actual performance of 1,742 petaflops. Other American systems, Frontier and Aurora, rank second and third respectively, both operating at the exascale level, a threshold defined as at least 1,000 petaflops. These machines are considerably more powerful, but also reflect much higher investment levels and longer development cycles.
Europe’s top contender, Germany’s JUPITER Booster, ranks fourth, while Italy’s HPC6 (6th) and Leonardo (10th), Switzerland’s Alps (8th), and Finland’s LUMI (9th) also sit in the top 10. Isambard-AI’s arrival just outside this elite group is still a substantial leap for the UK, which in recent years had slipped behind in HPC capacity.
Its global position also supports the UK’s industrial ambition. For example, as the government stated in its July announcement, the goal is not merely to use AI technologies but to become an “AI maker rather than an AI taker”.
A Publicly Funded, Open Access System
The development of Isambard-AI was funded through a £225 million government investment, part of a wider strategy to create national infrastructure for emerging technologies. The system is built to be open-access, meaning academic researchers, public institutions, and SMEs across the UK can apply for use, potentially democratising computing power that would otherwise be out of reach.
Will Work With Dawn
Isambard-AI will work in tandem with Dawn, another AI-focused machine based at the University of Cambridge, though the systems are not physically connected. Both form the initial backbone of the UK’s AIRR initiative, which aims to expand computing resources twenty-fold over the next five years.
Alongside this, the government is investing in skills development, pledging to train 1 million students and 7.5 million adults in AI-related skills in the coming years.
Challenges, Costs and Competition
Despite the achievement, Isambard-AI is not without its challenges. For example, one significant concern is energy use. Supercomputers are notoriously power-hungry, and although Isambard-AI ranks highly for energy efficiency, its environmental impact is still non-trivial. Liquid cooling systems and heat recovery features help mitigate this, but the issue remains a live one, especially as public scrutiny of AI’s environmental footprint increases.
There are also questions about how effectively such a system can be accessed and utilised outside of academia. While the machine is open to UK researchers, some have warned that access processes can be bureaucratic or overly restrictive, potentially limiting SME and startup engagement.
Another challenge lies in keeping pace with international rivals. Although Isambard-AI is the UK’s most powerful supercomputer today, its time at the top may be brief. A £750 million investment in a future exascale system in Edinburgh has already been announced, one that could launch later this decade and potentially place the UK within the top five globally.
David Hogan, NVIDIA’s European Vice President, described Isambard-AI as “a truly transformational machine”, but acknowledged that this is “just a starting point”. For Britain to retain its momentum in AI and supercomputing, further investment, collaboration and long-term strategy will be essential.
What Does This Mean For Your Business?
Looking ahead, the arrival of Isambard-AI marks a critical inflection point in the UK’s scientific and technological capabilities. With serious backing from government and academia, it gives British researchers and developers access to one of the most powerful computing tools currently available anywhere in the world. That matters not just for scientific prestige, but for practical impact. From accelerating cancer diagnostics to designing greener materials, this machine is already being used to tackle problems with far-reaching consequences.
For UK businesses, particularly in life sciences, clean tech, and AI development, the launch could lower the barriers to entry for high-performance computing. By offering open access through the national AI Research Resource, smaller firms and startups may gain capabilities previously reserved for large institutions or well-funded labs. If the system is made genuinely accessible in practice as well as in principle, it could give British tech innovators a competitive edge in a global market that increasingly depends on large-scale compute.
At the same time, the launch sends a clear signal internationally. After years of falling behind in supercomputing capacity, the UK is now back in contention. Although it still lags behind US and some European systems in raw performance, Isambard-AI has vaulted the UK into the top tier of AI infrastructure providers. The challenge now will be maintaining that momentum. With a more powerful exascale machine already planned in Edinburgh, the question will not just be how fast these systems are, but how effectively they are integrated into wider research and commercial ecosystems.
Isambard-AI shows what’s possible when public investment, private expertise and academic leadership align around a shared goal. The task now is to ensure it delivers not just world-class performance, but world-class value.
Company Check : WeTransfer Under Fire Over New Data Terms
Dutch file-sharing platform WeTransfer has sparked uproar after quietly adding language to its terms of service suggesting it could use customer files to train AI models, then swiftly removing the clause following backlash.
What Users Spotted and Why It Sparked Alarm
The controversy erupted in mid-July when eagle-eyed WeTransfer users, including high-profile creatives, flagged an update to the company’s terms of service set to take effect on 8 August 2025. In particular, Section 6.3 introduced wording that granted WeTransfer a “perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable licence” to use uploaded files for operating and developing the service, including, crucially, to “improve performance of machine learning models that enhance our content moderation process.”
To many, that appeared to signal a quiet expansion of rights that could allow WeTransfer to use (or even monetise) user-uploaded content for artificial intelligence (AI) training.
Among the concerned voices was UK children’s author and illustrator Sarah McIntyre, who took to X (formerly Twitter) to say: “I pay you to shift my big artwork files. I DON’T pay you to have the right to use them to train AI or print, sell and distribute my artwork and set yourself up as a commercial rival to me.”
It seems that such concerns weren’t unfounded. The clause appeared to echo patterns seen elsewhere in the tech world, where companies including Zoom, Adobe, Slack and Dropbox have faced recent backlash over vague or overly broad licensing updates connected to AI development. As AI tools become more powerful and accessible, the question of whose data fuels them, and with what consent, has become a flashpoint in digital rights and trust.
Why This Matters for Business Users
For many creatives and businesses, WeTransfer has long positioned itself as a privacy-respecting, user-friendly alternative to more data-hungry services. Its clean interface, strong brand identity, and explicit support for the creative industries made it especially popular with freelancers, studios, and design teams.
However, as a result of this latest incident, that trust now appears to be under scrutiny. If the AI clause had remained, businesses could have faced the uncomfortable possibility that internal documents, pitch decks, drafts, artwork, or sensitive visual assets might be used, not just to train algorithms, but potentially to inform systems well beyond the original upload. Even if restricted to content moderation purposes, the lack of clarity raised red flags.
For example, a design agency transferring client work via WeTransfer might wonder whether its bespoke assets could end up being parsed for machine learning, however indirectly. A photographer might fear her original image files could be used to train image recognition or generation tools. And a marketing firm sharing early brand materials might question what “derivative works” could technically include.
Although WeTransfer insists that no such usage has occurred, the lack of clear technical limitations in the original clause left too much room for doubt.
WeTransfer’s Response
Within days of the backlash, WeTransfer issued a formal press release clarifying its position. It insisted that the controversial clause was a misstep and that the company does “not use user content to train AI models, nor do we sell or share files with third parties.” The company acknowledged that AI had been under consideration “to improve content moderation,” but confirmed that “such a feature hasn’t been built or deployed in practice.”
The statement added: “We’ve since updated the terms further to make them easier to understand. We’ve also removed the mention of machine learning, as it’s not something WeTransfer uses in connection with customer content and may have caused some apprehension.”
Clause Now Dropped
Following the uproar, the AI-related clause was dropped entirely from an updated version of Section 6.3. The new text grants WeTransfer a royalty-free licence to use content strictly for “operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.” Importantly, it reinforces that users retain ownership and intellectual property rights over their content, and that processing complies with GDPR and other privacy regulations.
What’s Changed and What Hasn’t?
From a legal perspective, WeTransfer’s licensing terms weren’t entirely new. Earlier terms already included broad usage rights necessary to operate the service, such as the ability to scan, index, and reproduce files. However, the new inclusion of AI-specific language, especially amid public concern about AI and data usage, introduced a new level of perceived risk.
As the company explained: “The language regarding licensing didn’t actually change in substance compared to the previous Terms of Service… The change in wording was meant to simplify the terms while ensuring our customers can enjoy WeTransfer’s features and services as they were built to be used.”
Nonetheless, perception matters. For example, the way the AI clause was introduced, without technical limitations, public explanation, or opt-out options, appeared to undermine confidence at a time when many businesses are increasingly sensitive to data governance.
Broader Industry Fallout and Lessons for Tech Providers
WeTransfer is far from alone in facing scrutiny over AI terms. For example, back in 2023, Zoom had to walk back similar policy updates after suggesting it could use customer audio and video to train its AI models. Dropbox, Slack, and Adobe have all been forced to issue clarifications in recent months after terms of service changes sparked similar fears.
For regulators, the episode highlights ongoing gaps in user protection. In the UK, the ICO (Information Commissioner’s Office) has warned companies that AI development must respect explicit consent, clarity of purpose, and data minimisation, all of which could come under strain when licensing terms are broadly written.
For businesses, the incident is a reminder to read the fine print, especially as more cloud services evolve their models to incorporate generative AI, content filtering, and user analytics.
As an example, a marketing team using file-sharing services or cloud-based creative tools should now routinely assess licensing clauses for AI-related language, even if those features are not currently in use. Procurement teams may also need to establish red lines around AI usage to safeguard proprietary material.
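As a crude starting point, even a simple keyword scan can surface clauses worth escalating. The hypothetical Python sketch below flags sentences in a terms-of-service document that contain AI- or licensing-related phrases; the watch list is our own assumption, and no script is a substitute for proper legal review.

```python
# First-pass scan for AI-related licensing language in a terms-of-service
# document. Illustrative only; the keyword list is an assumption and this
# is no substitute for legal review.
import re

WATCH_PHRASES = [
    "machine learning", "artificial intelligence", "train",
    "derivative works", "perpetual", "sub-licensable", "royalty-free",
]

def flag_clauses(terms_text: str) -> list[str]:
    """Return sentences containing any watched phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", terms_text)
    return [s for s in sentences
            if any(p in s.lower() for p in WATCH_PHRASES)]

sample = ("You grant us a perpetual, royalty-free licence to use content. "
          "We may use content to improve machine learning models.")
for hit in flag_clauses(sample):
    print("FLAG:", hit)
```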
Trust Takes Time to Build And Moments to Erode
Despite WeTransfer’s efforts to clarify and course-correct, replies on social media appear to remain largely sceptical. Some users have suggested the company had been testing the waters for broader AI permissions, only to retreat when the backlash hit. Others have expressed a desire to move to alternatives, such as Swiss-based Tresorit or Proton Drive, that offer end-to-end encryption and stronger privacy guarantees.
While WeTransfer may weather the storm, the event highlights a wider issue for the tech industry: transparency around AI is no longer optional. As public awareness of AI training practices grows, even small wording changes can trigger major reputational fallout. And for companies built on the trust of creative professionals, that risk is especially acute.
What Does This Mean For Your Business?
For UK businesses and creative professionals in particular, this episode serves as a clear warning that assumptions about how cloud-based platforms handle data can no longer be taken at face value. The practical risk may have been limited in this instance, but the reputational impact is real, and the consequences of poor communication are hard to reverse. For companies that regularly transfer visual, written, or proprietary material via WeTransfer or similar services, it may prompt a review not only of terms and conditions, but of where and how sensitive files are shared in future.
For WeTransfer, the timing could hardly be worse. As demand grows for privacy-conscious alternatives in an AI-saturated market, any perception of blurred boundaries risks handing competitive advantage to rivals positioning themselves as more transparent or security-first. Providers such as Proton Drive, Filestage and Internxt are already responding to this shift, actively marketing their commitment to zero-knowledge infrastructure and end-to-end encryption.
Regulators and legal teams are also likely to be watching closely. The blurred line between operational necessity and expansive licensing is fast becoming a regulatory priority. In the UK, organisations working in regulated sectors, such as legal, health or financial services, may find that contract terms involving generative AI now trigger enhanced scrutiny from internal compliance and external auditors alike.
The broader takeaway from this story is that, as AI becomes more embedded in the digital infrastructure businesses rely on, consent must be granular, wording must be clear, and trust must be continually earned. WeTransfer’s quick backtrack may limit the immediate fallout, but it will likely be remembered as yet another sign of how easily tech companies can alienate users when they fail to communicate transparently, especially when the stakes involve creative ownership, client confidentiality, and commercial value.
Security Stop Press : Chinese Hackers Exploit SharePoint Flaws
Microsoft has confirmed that Chinese state-linked hackers are exploiting critical flaws in on-premises SharePoint servers to steal data and deploy ransomware.
The groups, known as Linen Typhoon, Violet Typhoon, and Storm-2603, are targeting government, defence, and business organisations by abusing spoofing and remote code execution vulnerabilities. Cloud-based SharePoint systems are not affected.
Victims have been reported across multiple sectors and countries, including the UK. Microsoft says the attacks allow hackers to steal credentials, disable security tools, and spread ransomware such as Warlock.
Storm-2603, a China-based group, has been observed using a malicious script called spinstall0.aspx to gain access and escalate privileges inside networks. Microsoft has warned that more attackers are likely to adopt these methods.
To stay secure, businesses using on-prem SharePoint must install Microsoft’s latest security updates, rotate ASP.NET machine keys, enable AMSI protection, and use advanced endpoint detection tools to block post-exploit activity.