Sustainability-in-Tech : UK Data Centre Cuts AI Power Use By 40 Per Cent

A UK data centre has demonstrated that artificial intelligence infrastructure can reduce its electricity consumption by up to 40 per cent in response to grid signals without interrupting critical computing workloads.

A UK-First Trial Of Flexible AI Infrastructure

The demonstration took place at Nebius’s “AI Factory” data centre near London and was conducted in partnership with National Grid, Emerald AI, the Electric Power Research Institute (EPRI), and NVIDIA. The project was designed to test whether high-performance AI infrastructure could act as a flexible energy asset rather than a fixed electricity load.

Over five days in December 2025, a cluster of NVIDIA Blackwell Ultra GPUs was subjected to more than 200 simulated grid events. These signals instructed the facility to adjust its electricity consumption under different conditions, including scenarios where the system had little or no advance warning.

According to the project’s white paper, the cluster achieved full compliance with all requested power targets and ramp-rate requirements while maintaining normal operation of key workloads. National Grid Partners described the results as evidence that high-performance AI infrastructure can operate as “a power-flexible, grid-responsive asset without disrupting mission-critical workloads.”

How The System Reduced Power Demand

The trial involved a 130 kW compute cluster running realistic AI training workloads based on open models such as Llama, Qwen and GPT-OSS. The cluster was deliberately kept busy throughout the experiment in order to simulate real production conditions.

Rather than switching servers off, the system reduced electricity consumption by dynamically managing how GPU workloads were scheduled and executed. Lower-priority tasks could be paused, delayed or temporarily slowed, allowing the cluster’s power draw to fall when grid operators requested a reduction.

This approach relies on the nature of many AI workloads. Model training and fine-tuning often run for long periods and include natural pause points, known as checkpoints, where processing can be safely interrupted without losing progress.

By contrast, latency-sensitive tasks such as inference can continue running normally while background training workloads absorb most of the power adjustments.

The orchestration software coordinating this behaviour was provided by US-based AI infrastructure company Emerald AI. Its platform interprets grid signals and automatically adjusts computing workloads so that the data centre can respond quickly to changes in electricity demand.
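
For illustration only, the scheduling idea can be sketched in a few lines of Python. This is not Emerald AI's actual software: the job names, priorities and power figures below are assumptions loosely mirroring the trial, and simply show how pausing the most deferrable workloads first could bring a cluster's draw down towards a requested target while latency-sensitive work keeps running.

```python
# Illustrative sketch, not Emerald AI's software: priority-based power shedding
# for a GPU cluster. Job names, power figures and the 130 kW total are assumptions.
from dataclasses import dataclass, field


@dataclass
class Job:
    name: str
    est_power_kw: float   # estimated power draw attributable to this job
    priority: int         # 0 = latency-sensitive inference; higher = deferrable training
    paused: bool = False  # a paused job checkpoints, then releases its GPUs


@dataclass
class Cluster:
    jobs: list[Job] = field(default_factory=list)
    idle_power_kw: float = 10.0  # baseline draw: cooling, networking, idle hosts

    def current_draw_kw(self) -> float:
        return self.idle_power_kw + sum(j.est_power_kw for j in self.jobs if not j.paused)

    def respond_to_grid_event(self, target_kw: float) -> None:
        """Pause the most deferrable running jobs until draw is at or below target."""
        for job in sorted(self.jobs, key=lambda j: j.priority, reverse=True):
            if self.current_draw_kw() <= target_kw:
                break
            if not job.paused and job.priority > 0:  # never pause latency-sensitive work
                job.paused = True  # in practice: checkpoint, then suspend or throttle

    def restore(self) -> None:
        """Resume all paused jobs once the grid event ends."""
        for job in self.jobs:
            job.paused = False


cluster = Cluster(jobs=[
    Job("inference-api", est_power_kw=20, priority=0),
    Job("llama-finetune", est_power_kw=60, priority=2),
    Job("qwen-pretrain", est_power_kw=40, priority=3),
])
print(cluster.current_draw_kw())             # 130.0 kW at full load
cluster.respond_to_grid_event(target_kw=78)  # grid requests a roughly 40 per cent reduction
print(cluster.current_draw_kw())             # 30.0 kW: shedding whole jobs overshoots the target;
                                             # finer-grained throttling would land closer to it
```

A production system would also need to respect ramp-rate limits and resume paused work in a sensible order, which is the kind of coordination the trial's orchestration layer was being tested on.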

Testing Real-World Grid Events

Some of the simulated grid signals included immediate reduction requests with no ramp-down period, forcing the system to respond rapidly. Others provided advance warning and allowed the cluster to gradually reduce its consumption.

The trial also modelled real electricity demand patterns. One scenario simulated the well-known “TV pickup” effect in the UK, where millions of households switch on kettles at half-time during major football matches or at the end of popular television programmes.

These sudden surges can add around one gigawatt of demand to the grid within minutes. During the simulation, the AI cluster automatically reduced its power consumption as demand increased, demonstrating how data centres could help stabilise electricity networks during peak usage.

National Grid Partners president Steve Smith said the results challenge assumptions about the impact of AI infrastructure on electricity systems. As he explained, “as the UK’s digital economy accelerates, there’s concern that datacentres could add pressure to an already constrained system. This trial proves the opposite can be true.”

He added that the results suggest high-performance computing facilities “don’t have to place additional strain on the grid,” but could instead contribute to more flexible and responsive electricity systems.

Why AI Power Demand Is Becoming A Major Issue

The experiment takes place against the backdrop of rapidly growing electricity demand from AI computing. Training large AI models requires enormous GPU clusters operating continuously, and global data centre power consumption is expected to rise significantly as AI adoption expands.

Grid operators are increasingly concerned that new data centres could strain already constrained electricity systems. In the UK, demand for grid connections has grown rapidly in recent years as developers race to build AI infrastructure.

Traditional data centres are usually treated as “firm loads”, meaning the electricity system must assume they will draw their full power requirements at all times. The London trial explored an alternative model in which data centres act as flexible loads that can temporarily reduce consumption during periods of grid stress.

If implemented at scale, this approach could make it easier for electricity networks to accommodate the growth of AI infrastructure while maintaining grid stability.

What Does This Mean For Your Business?

For businesses building or using AI infrastructure, the trial highlights a possible change in how data centres can interact with energy systems.

AI computing has often been criticised for its high energy consumption, particularly as demand for generative AI services continues to grow. The London trial suggests that AI infrastructure may also offer new tools for managing electricity demand more intelligently.

Flexible computing loads could allow data centres to reduce power consumption during peak demand periods or when renewable energy supply is limited. This could help organisations balance sustainability goals with the growing need for high-performance computing.

However, the model also introduces new operational considerations. Running AI infrastructure as a flexible grid resource requires sophisticated workload management systems capable of pausing or rescheduling non-critical tasks without affecting service levels.

As AI becomes more deeply integrated into business operations, the ability to manage computing workloads in ways that support both performance and energy resilience may become an important part of future data centre strategy.

Tech Tip : Check Which Apps Have Access To Your Google Or Microsoft Account

Many users grant third-party apps access to their Google or Microsoft account and then forget about them, so regularly reviewing and removing unused connections is a simple way to reduce unnecessary access to your business data.

Why This Matters

Applications often request permission to access email, files, calendars or contacts. Over time these connections accumulate as people try new tools, browser extensions or productivity apps.

Some of these integrations are legitimate and useful. Others may be unused or unnecessary, yet still retain permission to access data.

Regularly reviewing connected apps helps reduce risk and improves account security.

How To Check Connected Apps In Google

– Go to https://myaccount.google.com/security

– Scroll to the section labelled Your connections to third-party apps and services.

– Select See all connections.

– Review each app listed.

– Remove access for anything you no longer recognise or use.

How To Check Connected Apps In Microsoft

– Go to https://account.live.com/consent/Manage

– Sign in with your Microsoft account if prompted.

– Review the list of applications that have been granted permission to access your account.

– Select the app you want to review.

– Choose Remove these permissions if the app is no longer required.

If you are using a work or school account, some permissions may be managed by your organisation’s administrator.

What To Look For

When reviewing access, consider:

– Unused productivity tools.

– Old browser extensions.

– Trial software you no longer use.

– Unknown or unfamiliar applications.

Removing these connections reduces the number of pathways that could potentially access your account.

A Practical Approach

Set a reminder to review connected apps every few months. Removing unused integrations keeps your account cleaner and reduces unnecessary exposure to third-party services.

Burger King Deploys AI Headsets to Monitor Staff ‘Friendliness’

Burger King is piloting OpenAI-powered headsets in 500 US restaurants that analyse drive-thru conversations, coach staff in real time and track hospitality signals such as whether employees say “please” and “thank you”.

What Is BK Assistant and How Does It Work?

The system, known as BK Assistant, sits inside employee headsets and a connected web and app platform. At its centre is a voice-enabled AI chatbot called “Patty”, built on OpenAI technology.

From the moment a customer pulls up at the drive-thru to the point they leave, the system analyses the interaction. It can prompt staff with recipe guidance, flag low stock levels such as a drink syrup running low, and alert managers if a customer reports an issue via a QR code.

It can also detect certain hospitality phrases. Burger King has confirmed that the system can identify words such as “welcome”, “please” and “thank you” as one signal among many to help managers understand service patterns.
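
As a rough illustration of that phrase-detection idea (and only that: the real BK Assistant is built on OpenAI speech models, and the phrase list and function name below are assumptions), counting hospitality signals in a recognised transcript might look something like this:

```python
import re
from collections import Counter

# Toy illustration only: counting hospitality phrases in a drive-thru transcript.
# The phrase list and function are assumptions, not Burger King's implementation.
HOSPITALITY_PHRASES = ["welcome", "please", "thank you"]


def count_hospitality_signals(transcript: str) -> Counter:
    """Count how often each hospitality phrase appears in recognised speech."""
    text = transcript.lower()
    return Counter({
        phrase: len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
        for phrase in HOSPITALITY_PHRASES
    })


print(count_hospitality_signals(
    "Welcome to Burger King, what can I get you? Thank you, please pull forward."
))
# Counter({'welcome': 1, 'please': 1, 'thank you': 1})
```

In practice such counts would only ever be one signal among many, aggregated over time rather than attached to individual exchanges.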

Designed To Streamline Operations

Restaurant Brands International, the Miami-based parent company of Burger King, has described the platform as being “designed to streamline restaurant operations” and allow managers and teams to “focus more on guest service and team leadership”.

The company has, however, been very keen to stress that the tool is not intended to record conversations for disciplinary monitoring or score individual workers. In statements to multiple outlets, Burger King has said: “It’s not about scoring individuals or enforcing scripts. It’s about reinforcing great hospitality and giving managers helpful, real-time insights so they can recognise their teams more effectively.”

The pilot is currently running in 500 US restaurants. The wider BK Assistant platform is expected to be available to all US locations by the end of 2026.

Why Now?

Fast food is a high-volume, low-margin business where seconds matter. Drive-thru performance, order accuracy and customer satisfaction scores directly influence revenue.

AI promises to reduce friction. Recipe reminders reduce training time. Automatic menu updates prevent customers ordering out-of-stock items. Real-time alerts about stock levels and cleanliness issues allow managers to act faster.

There is also a broader industry push towards automation. Labour costs remain one of the largest operational expenses in quick-service restaurants. At the same time, recruitment and retention challenges have persisted in many markets.

Against that backdrop, using AI as a coaching and operational support tool seems to be a commercially logical decision.

The friendliness monitoring element, however, is what has triggered the strongest reaction.

Support Tool or Surveillance?

Online backlash has been swift. Some critics have described the system as dystopian, arguing that analysing staff speech risks creating a culture of constant monitoring.

Burger King has attempted to position the system as supportive rather than punitive. “We believe hospitality is fundamentally human,” the company has said. “The role of this technology is to support our teams so they can stay present with guests.”

From a management perspective, aggregated data on service patterns could be useful. From an employee perspective, the idea that an AI system is listening for key phrases raises legitimate concerns about trust and autonomy.

AI systems are not infallible. Speech recognition technology can struggle with regional accents, background noise or overlapping conversations, particularly in a busy drive-thru environment. A missed “thank you” or a misheard phrase could distort the data being fed back to managers, creating the risk of misleading signals. Over time, that kind of inaccuracy could erode confidence in the system, both for staff expected to trust it and for managers relying on it to guide decisions.

There is also the wider debate about workplace surveillance. Customer service calls have long been recorded for quality purposes, but embedding AI analysis directly into frontline headsets seems to be a real step change in visibility.

So what is really going on? In reality, this is likely to be less about politeness policing and more about data: fast-food chains are increasingly treating operational behaviour as a measurable input, and every interaction becomes a data point.

What It Means for Burger King and Its Competitors

For Burger King, the upside is operational consistency at scale. With thousands of restaurants, even marginal improvements in order accuracy or service speed can translate into significant revenue gains.

However, there’s also a reputational risk to consider here. If staff perceive the system as intrusive, morale could suffer. If customers view it as excessive monitoring, brand sentiment could be affected.

Competitors Doing It Too

Burger King is not the only fast-food company using AI. Across the sector, major brands are investing heavily in artificial intelligence as they look for gains in speed, consistency and tighter operational control.

Yum Brands, the parent company of KFC, Taco Bell and Pizza Hut, has announced partnerships with Nvidia to develop AI technologies across its restaurant estate, signalling a broader move towards data-driven kitchens and smarter front-of-house systems. McDonald’s has also experimented in this space. It previously tested automated AI order-taking at drive-thrus through a partnership with IBM before ending that trial in 2024, and has since turned to Google as it refines its AI strategy.

Quick-service restaurants are evolving into technology-led businesses, embedding AI into ordering systems, kitchen workflows and customer interactions in pursuit of efficiency and consistency at scale.

What Does This Mean For Your Business?

For UK SMEs and mid-sized organisations, this story is not really about burgers at all. It is about artificial intelligence moving out of the back office and into direct, frontline interaction with customers and staff.

Burger King is using AI to gather real-time operational data, coach teams and encourage consistent service standards. That same principle is now appearing across retail, logistics, healthcare and hospitality, where AI tools are increasingly shaping how people work rather than just analysing what has already happened.

That raises important governance questions. How exactly is the data being collected? How is it interpreted, and by whom? What visibility do managers have, and how clearly is the purpose explained to employees? These are not abstract compliance issues. They influence culture, morale and trust.

Used well, AI can remove friction, improve accuracy and support performance in ways that genuinely help staff do their jobs better. Used poorly, particularly in customer-facing roles, it can feel like constant surveillance, even if that was never the original intention.

For business owners, the lesson is not to avoid AI, but to introduce it carefully. For example, be transparent about what the system does and doesn’t do. Set boundaries and make sure the benefits are visible to staff as well as management.

Technology can analyse behaviour and surface patterns. The quality of service, however, still depends on people. That balance will define whether AI in the workplace feels empowering or intrusive.

Consumers Still Don’t Trust AI to Handle Customer Service

New research from Pegasystems and YouGov shows that most consumers in the UK and US remain wary of generative AI in customer service, preferring human interaction despite widespread corporate investment in chatbots and automated support.

What the Research Found

The study, published in February 2026 by Pegasystems Inc., a US-based enterprise AI software company, surveyed 4,748 adults across the UK and the US between 4 and 13 November 2025. The results show a widening disconnect between how confidently businesses are deploying generative AI in customer service and how comfortable consumers feel interacting with it.

Almost two-thirds of consumers (64 per cent) said they were either “not very confident” or “not at all confident” in the way businesses use generative AI when interacting with them. More than half (53 per cent) lacked confidence that organisations use generative AI responsibly.

That scepticism appears to come from lived experience. For example, almost half (46 per cent) reported that they either “rarely” or “never” get successful outcomes when their customer service interaction is AI-powered. A similar proportion (48 per cent) said they do not trust businesses to handle their customer service entirely through AI.

People Prefer Human Support Over AI

What stands out most clearly is that people still prefer human support rather than AI. According to the research, 77 per cent say they “always” or “often” achieve better outcomes when dealing only with a person. Two-thirds, 66 per cent, actively prefer human-led assistance. By contrast, just 2 per cent say they want to interact exclusively with generative AI chatbots.

Taken together, the figures suggest that while AI adoption has accelerated rapidly inside organisations, consumer confidence in those systems has not kept pace.

Why Consumers Don’t Trust AI

Simon Thorpe, Director at Pega, was clear about what is driving the unease. “AI can be transformational for customer service – but it has to live up to customer expectations,” he said in the company’s press release. “There’s a simple reason why we’re seeing a lack of consumer trust in the use of AI. There are just too many first-hand examples of businesses deploying these tools in ways that lead to dead ends and frustration.”

That frustration is now likely to be familiar to many customers. People report being stuck in automated loops, struggling to escalate to a human agent, or having to repeat information that has already been provided. Even when an issue is eventually resolved, the process can feel inefficient and impersonal.

Not Rejecting AI Outright

That said, the research suggests that consumers are not rejecting AI outright. Instead, they are reacting to how it has been introduced into customer service channels. As Thorpe added: “Businesses must build back consumer trust by moving past simple chatbots and deploying predictable AI agents that consistently get work done on behalf of customers. If businesses can use AI to make customer service faster and easier, they can drive massive new efficiencies while retaining customer trust.”

The distinction matters. The concern is less about AI existing and more about whether it delivers a reliable, transparent and genuinely helpful experience.

Consumers May Not Choose AI, But They Suspect It’s There Anyway

The research also reveals something more subtle. Although 48 per cent of respondents said they never actively choose to use generative AI in everyday tasks, many suspect they are already using it without realising it. Around 24 per cent think they probably interact with AI every day, even if they are not consciously aware of it.

That suggests a form of reluctant acceptance. People may not actively seek out AI-powered customer service, yet they understand that it is becoming embedded in daily life, from online banking and retail to travel and utilities.

AI is becoming part of everyday customer service, from chatbots and automated emails to voice systems and agent-assist tools. Yet many customers still question whether businesses are using it in ways that genuinely improve their experience. That contrast is becoming harder to ignore.

Pressure on Businesses to Deploy AI

Despite consumer scepticism, organisations face mounting internal and competitive pressure to adopt AI. Separate industry research from Gartner has found that more than nine in ten customer service leaders report being under pressure to implement AI within the year.

The commercial reasons are clear. AI promises lower operating costs, faster response times and improved self-service success. It can triage routine queries, surface relevant data for agents and operate around the clock.

For large enterprises, even marginal gains in efficiency can translate into significant savings. For smaller organisations, automation can help manage peaks in demand without expanding headcount.

However, the Pega findings suggest that cost efficiency alone will not secure customer loyalty. A separate study by Gladly and Wakefield Research has shown that even when AI or hybrid AI-to-human interactions resolve an issue, only a minority of customers say it increases their preference for the company. Customers, the report noted, “don’t resent AI… They resent wasted effort.”

That distinction matters.

Implications

For consumers, the issue is not technology in itself. It is reliability. When AI works seamlessly, it fades into the background. When it misfires or blocks access to a person, frustration rises quickly.

For frontline staff, AI systems are reshaping workflows. In the best cases, they reduce repetitive administration and surface relevant information at speed. In weaker implementations, they add another layer of process that can constrain judgement rather than support it.

For senior leaders, AI in customer service now sits at the intersection of cost control, brand perception and regulatory scrutiny, and any decisions about deployment increasingly carry reputational weight.

Organisations are therefore navigating a narrow path. They must modernise service operations while protecting customer confidence and employee engagement. That balance is becoming a defining feature of digital strategy.

What Does This Mean For Your Business?

For UK SMEs and mid-sized organisations, the message from this research is clear. Customer service automation can’t be treated as a plug-and-play efficiency project.

Before expanding AI across service channels, it’s worth asking three commercial questions. Does it genuinely improve resolution times? Does it reduce customer effort? Does it enhance, rather than restrict, human support when it matters?

The data suggests that customers are not rejecting AI outright. They are simply reacting to poor experiences. That means implementation quality is now a competitive differentiator. A well-designed hybrid model, where AI handles routine interactions and escalates intelligently to trained staff, is likely to outperform either extreme.

There is also a governance dimension here. Transparent communication about how AI is used, what data is processed and when a human can intervene will increasingly influence trust. With regulatory scrutiny of automated decision-making growing across the UK and Europe, customer service AI is unlikely to remain outside compliance conversations for long.

For growing businesses, AI offers the opportunity to extend service hours, smooth demand spikes and provide operational insight that was previously unavailable. Yet the organisations that benefit most will be those that treat AI as an augmentation layer, not a replacement for judgement.

The commercial advantage will not come from deploying more chatbots. It will come from deploying better ones, supported by people, process and clear accountability.

Instagram To Alert Parents Over Repeated Self-Harm Searches

Instagram says it will begin notifying parents if their teen repeatedly searches for suicide or self-harm-related terms within a short period, adding to its existing content controls as scrutiny of teen digital wellbeing intensifies.

How The Alerts Will Work

The new feature applies to Teen Accounts enrolled in Instagram’s parental supervision tools. If a young user repeatedly attempts to search for phrases promoting suicide or self-harm, or terms such as “suicide” or “self-harm”, a notification will be sent to their parent or guardian.

Parents will receive the alert via email, text message or WhatsApp, depending on the contact information provided, alongside an in-app notification. The alert will explain that the teen has repeatedly attempted to search for such terms within a short time window and will provide access to expert resources designed to support sensitive conversations.

Most Don’t Search For This

Meta has been keen to state that, of course, the vast majority of teens do not search for suicide or self-harm content and, if or when they do, Meta’s Instagram already blocks those searches and redirects users to helplines and support resources. The new alert mechanism is intended to flag patterns of repeated attempts rather than single queries.

In its announcement, Meta said: “We chose a threshold that requires a few searches within a short period of time, while still erring on the side of caution.” The company acknowledged the risk of unnecessary alerts but argued that “empowering a parent to step in can be extremely important.”
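
Meta has not published the exact numbers, but the “few searches within a short period of time” logic can be pictured as a simple sliding-window counter. The sketch below assumes an illustrative threshold of three searches in fifteen minutes, and all of the identifiers are hypothetical rather than Instagram's actual implementation.

```python
import time
from collections import defaultdict, deque

# Illustrative sketch of "a few searches within a short period of time".
# Meta has not published its threshold or window; three searches in fifteen
# minutes is an assumption, and every identifier here is hypothetical.
WINDOW_SECONDS = 15 * 60
SEARCH_THRESHOLD = 3

_recent_searches: dict[str, deque] = defaultdict(deque)


def record_sensitive_search(teen_id: str, now: float | None = None) -> bool:
    """Record one blocked self-harm-related search attempt and return True
    if a parental alert should be triggered for this account."""
    now = time.time() if now is None else now
    searches = _recent_searches[teen_id]
    searches.append(now)
    while searches and now - searches[0] > WINDOW_SECONDS:
        searches.popleft()  # drop attempts that fall outside the sliding window
    return len(searches) >= SEARCH_THRESHOLD


# Three attempts within a few minutes crosses the assumed threshold.
print(record_sensitive_search("teen-123", now=0))    # False
print(record_sensitive_search("teen-123", now=120))  # False
print(record_sensitive_search("teen-123", now=300))  # True -> notify parent or guardian
```

The calibration question the article raises sits exactly in those two constants: a lower threshold or longer window catches more patterns but generates more unnecessary alerts.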

When And Where?

The alerts will roll out (starting this week) in the US, the UK, Australia and Canada, with wider availability planned later in the year. Meta has also confirmed that similar notifications are being developed for certain AI-related conversations, reflecting the growing role of AI chat interfaces in teen digital behaviour.

Why Now?

The timing reflects several pressures coming together at once. Meta and other social media companies are currently facing lawsuits in US courts alleging that their platforms have contributed to harm among young users. During recent testimony in federal and state proceedings, company executives were questioned over the pace of safety feature rollouts and the effectiveness of parental controls.

At the same time, internal research disclosed in separate proceedings suggested that parental supervision tools had limited impact on compulsive social media use.

Beyond the legal context, broader behavioural trends are also likely to be playing a part in this decision. In February, a Pew Research Center survey found that 64 per cent of US teens report using AI chatbots, compared with 51 per cent of parents who believe their teen uses them. While most teens use AI to search for information (57 per cent) or get help with schoolwork (54 per cent), 16 per cent say they have used chatbots for casual conversation and 12 per cent report using them for emotional support or advice.

These figures underline why Meta’s decision to extend parental alerts to AI interactions later this year may prove significant.

Mixed Views On AI From Teens

Interestingly, Pew also found that teens’ views on AI are mixed. For example, 36 per cent expect AI to have a positive impact on them personally over the next 20 years, while 26 per cent believe its broader impact on society will be negative. That ambivalence reflects a digital environment in which technology is both a support tool and a source of concern.

Balancing Intervention and Privacy

Introducing parental alerts for repeated search behaviour raises practical questions around privacy, proportionality and effectiveness.

Meta says it analysed Instagram search behaviour and consulted its Suicide and Self-Harm Advisory Group to determine an appropriate threshold. The aim, it says, is to avoid excessive notifications that could reduce impact over time.

The company also maintains strict policies against content that promotes or glorifies suicide or self-harm and states that it hides certain sensitive content from teens even when shared by accounts they follow.

The challenge, as with many digital safeguards, is calibration. Too little intervention risks missing warning signs. Too much may undermine trust or normal adolescent privacy.

What Does This Mean For Your Business?

For organisations operating in digital platforms, education, youth services or AI development, this move illustrates how online safety, legal exposure and product design are increasingly intertwined.

Parental oversight features are no longer optional add-ons. They are becoming part of the baseline expectation for platforms used by minors. The extension of alerts into AI conversations also signals that companies view conversational systems as part of the same duty-of-care landscape as social feeds.

The Pew data adds another dimension. With 12 per cent of teens reporting use of AI for emotional support, and parents often underestimating that behaviour, organisations developing AI-enabled services will face growing scrutiny over how those systems respond to vulnerable users.

More broadly, the story reflects a shift from reactive moderation to proactive signal detection. Repeated search behaviour is being treated not just as content interaction but as a potential indicator of need.

For businesses, the implication is clear. Where products intersect with young users, mental health or AI-driven interaction, safety design must be demonstrable, measurable and defensible. The commercial risk of failing to anticipate that expectation is no longer theoretical.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
