Sustainability-in-Tech : AI Datacentres May Heat Surrounding Areas For Miles
AI datacentres built to power the rapid expansion of artificial intelligence may also be creating measurable heat increases across surrounding areas, raising new concerns about their local environmental impact as well as their energy use.
New Research Findings
A 2026 study led by researchers affiliated with the University of Cambridge examined land surface temperature data around thousands of AI datacentre locations worldwide between 2004 and 2024.
Using satellite-derived temperature measurements and location data for AI hyperscale facilities, the researchers analysed how temperatures changed before and after sites became operational. Their findings suggest that the presence of large AI datacentres is associated with a noticeable increase in surrounding land surface temperatures.
The paper states that “the land surface temperature increases by 2°C on average after the start of operations of an AI data centre,” with recorded increases ranging from as little as 0.3°C to as much as 9.1°C in some locations.
The researchers describe this phenomenon as a new form of localised warming, referring to it as the “data heat island effect”, drawing a direct comparison with the well-established urban heat island effect seen in cities.
How Far The Effect Extends
One of the most significant aspects of the study is its claim that the warming effect extends well beyond the datacentre site itself.
The analysis suggests that temperature increases can even be detected up to 10 kilometres away from AI datacentres, although the intensity reduces with distance. According to the study, “an average monthly land surface temperature increase of 1°C can be measured up to 4.5 km from the AI hyperscalers”.
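To make the distance claim concrete, the fall-off can be sketched with a simple model. The study does not publish a decay curve, so the exponential form below is purely an illustrative assumption, calibrated to the two reported figures: roughly 2°C at the facility and 1°C at 4.5 km.

```python
import math

# Illustrative sketch only: the study does not specify a decay model.
# We assume an exponential fall-off, calibrated so the average increase
# is ~2 C at the facility and halves to ~1 C by 4.5 km away.

DT_SITE_C = 2.0          # average land surface temperature increase at the site
HALF_DISTANCE_KM = 4.5   # distance at which the increase halves to 1 C
DECAY_KM = HALF_DISTANCE_KM / math.log(2)

def temp_increase_c(distance_km: float) -> float:
    """Assumed exponential decay of the warming with distance."""
    return DT_SITE_C * math.exp(-distance_km / DECAY_KM)

for d_km in (0.0, 4.5, 10.0):
    print(f"{d_km:>4} km: +{temp_increase_c(d_km):.2f} C")
```

Under this assumed curve, the increase at 10 km comes out at a little over 0.4°C, which is at least consistent with the study's claim that the effect remains detectable at that range.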
This places the scale of the effect in a similar range to traditional urban heat islands, where built environments and human activity create localised warming zones that affect surrounding areas.
The researchers argue that this spatial reach makes the phenomenon difficult to ignore when considering the broader environmental footprint of AI infrastructure.
Why Is This Happening?
At the core of the issue is energy consumption. For example, AI datacentres require vast amounts of electricity to train and run machine learning models, and a large proportion of that energy is ultimately released as heat. Cooling systems are designed to remove this heat from servers, but in doing so they transfer it into the surrounding environment.
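A back-of-the-envelope sketch shows the scale involved. The facility size and PUE figure below are illustrative assumptions, not values from the study; the only physical claim is that essentially all electricity a datacentre draws ends up as heat rejected into its surroundings.

```python
def waste_heat_mwh_per_day(it_load_mw: float, pue: float = 1.2) -> float:
    """Estimate the heat (in MWh) a facility rejects per day.

    PUE (Power Usage Effectiveness) scales the IT load up to the total
    facility draw; virtually all of that energy is ultimately dissipated
    into the surrounding environment as heat.
    """
    total_draw_mw = it_load_mw * pue
    return total_draw_mw * 24.0  # sustained MW over 24 hours -> MWh

# A hypothetical 100 MW AI campus rejects roughly 2,880 MWh of heat a day:
print(waste_heat_mwh_per_day(100.0))
```

Even a modest efficiency gain (a lower PUE) reduces the daily heat output proportionally, which is why cooling efficiency features in the mitigation strategies discussed later.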
The paper notes that the rapid expansion of AI services is driving a surge in datacentre capacity and energy demand, stating that data processing could soon become one of the most power-intensive activities globally.
It also highlights a critical sustainability challenge, observing that “AI data centres are in the vast majority relying on fossil fuel use”, meaning that rising demand for AI computing could increase both emissions and localised heat output at the same time.
How Many People Could Be Affected?
The potential scale of impact is another key concern raised in the research. By combining temperature data with population mapping, the authors estimate that “more than 340 million people could be affected by this temperature increase” worldwide, particularly those living within several kilometres of large datacentre clusters.
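The kind of exposure estimate described can be sketched very simply: sum the population living within a fixed radius of each facility. The densities and radius below are invented for illustration; a real estimate, like the one in the study, would use gridded population data and avoid double-counting where sites overlap.

```python
import math

def exposed_population(site_densities_per_km2, radius_km=4.5):
    """Crude exposure estimate: people living within radius_km of each site,
    given a mean population density (people per square km) around it.
    Ignores overlap between nearby sites, which would double-count."""
    area_km2 = math.pi * radius_km ** 2
    return sum(d * area_km2 for d in site_densities_per_km2)

# Invented mean densities around three hypothetical datacentre clusters:
densities = [1200, 300, 2500]  # people per square km
print(f"{exposed_population(densities):,.0f} people")
```

Scaled across thousands of sites, many of them near urban areas, figures in the hundreds of millions become plausible, which is the point the authors are making.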
They warn that, much like urban heat islands, this could have knock-on effects for “welfare, healthcare, and energy systems”, particularly in regions already experiencing rising temperatures or heat stress.
While these figures are based on modelling and assumptions rather than direct measurement of human exposure, they highlight the potential for AI infrastructure to influence local environments in ways that have not previously been considered.
Caveats And Limitations
Despite the striking findings, the study comes with some important limitations. For example, it has not yet been peer-reviewed, meaning its methodology and conclusions have not undergone full academic scrutiny. As with any preprint study, its results should, therefore, be treated as indicative rather than definitive.
There is also a key technical distinction in what is being measured. The study focuses on land surface temperature, which reflects how hot surfaces such as roofs, roads and ground materials become, rather than the air temperature experienced directly by people.
This means some of the observed warming may actually be linked to changes in land use, construction materials, and reduced vegetation around datacentre sites, rather than heat emissions from computing alone.
As a result, the findings are best viewed as evidence of a broader environmental effect associated with large-scale datacentre development, rather than as proof that AI processing itself is solely responsible for widespread temperature increases.
Where This Leaves AI Sustainability
The study does, however, seem to add a new dimension to the sustainability debate around AI. Whereas much of the focus to date has been on carbon emissions and electricity consumption, this research suggests that local environmental impacts, particularly heat, may also need to be considered as part of the overall footprint of AI infrastructure.
The authors themselves emphasise this point, stating that the data heat island effect “could have a remarkable influence on communities and regional welfare in the future” and should become part of the wider conversation around sustainable AI development.
They also point to potential mitigation strategies, including more energy-efficient hardware, improved cooling systems, and computational methods that reduce the energy required to train and run AI models.
What Does This Mean For Your Business?
For businesses, this is an early signal that AI infrastructure decisions are becoming more complex.
Organisations relying on AI services may soon face greater scrutiny over the environmental impact of their digital operations, particularly if sustainability reporting expands to include local effects as well as carbon emissions.
For those involved in property, planning, or infrastructure, the implications are more immediate. Large datacentre developments may need to be assessed not just in terms of energy supply and connectivity, but also their potential impact on local microclimates and surrounding communities.
At the same time, this challenge is already starting to create new opportunities. For example, several projects are exploring how waste heat from datacentres can be captured and reused rather than simply expelled into the environment. In the UK, government-backed initiatives have looked at using datacentre heat to supply district heating networks, helping to warm homes and public buildings. In Europe, schemes in countries such as Denmark and Sweden are already feeding excess heat from large datacentres into local heating systems, reducing both emissions and energy costs for nearby communities.
This means that, instead of being seen purely as energy-intensive assets, datacentres can become part of local energy ecosystems, supporting more efficient and circular use of heat. For businesses, this opens up practical opportunities around energy partnerships, sustainable building design, and participation in local heat networks.
For organisations planning new facilities, there is also a clear incentive to design with this in mind from the outset. Integrating heat recovery, selecting appropriate locations, and working with local authorities on energy reuse strategies could all become competitive advantages rather than regulatory burdens.
Broadly speaking, the research highlights an important point. AI may be digital, but the systems that power it are not. As demand for AI continues to grow, so too will the need to manage its physical footprint in a way that is sustainable, measurable, and commercially viable, not just environmentally responsible.
Video Update : How To Create Documents Using The New Copilot Word Agent
Microsoft’s Copilot in Word can turn a simple prompt into a complete document, and this video shows how it can quickly produce written content, structure it into clear sections and take care of the initial layout so you are not starting from scratch.
[Note: to watch this video without glitches or interruptions, it may be best to download it first.]
Tech Tip : Use “Open In Browser” For Unknown Files Before Downloading
Many email and cloud platforms allow you to preview files in your browser, so opening unknown documents this way first is a simple way to reduce the risk of running harmful content on your device.
Why This Matters
Unexpected attachments are one of the most common ways malware and phishing attacks reach businesses.
Opening a file directly in a desktop application can allow embedded content, such as macros or scripts, to run if enabled.
Previewing a file in your browser, where supported, limits this behaviour and gives you a chance to assess the content before downloading it.
How To Preview Files In Microsoft 365
In Outlook on the web or OneDrive:
- Click on the attachment or file.
- Select ‘Preview’ or ‘Open in browser’.
- Review the content without downloading it.
Files such as Word documents, Excel spreadsheets and PDFs will typically open in a web-based viewer.
How To Preview Files In Google Workspace
In Gmail or Google Drive:
- Click the attachment or file.
- Select ‘Preview’ (often shown as an eye icon).
- Review the file in the browser window.
You can then decide whether it is safe to download or open fully.
What To Watch For
Even when previewing files, be cautious of:
- Requests to enable editing or macros after download.
- Links inside documents that prompt further action.
- Files from unknown or unexpected senders.
If in doubt, verify with the sender before opening fully.
A Practical Approach
Use browser preview as a quick first step when dealing with unexpected files.
It only takes a moment and adds an extra layer of caution before opening content directly on your device, helping reduce the risk of accidental malware execution.
Meta And Google Found Liable In Landmark Social Media Addiction Case
A US jury has just found Meta Platforms and Google liable for harm linked to addictive platform design, marking a pivotal moment in how social media companies may be held accountable.
What Just Happened?
A Los Angeles jury has concluded that Meta and Google were responsible for harm suffered by a young woman who developed compulsive use of Meta-owned Instagram and Google’s YouTube from an early age.
In the case, the US-based plaintiff, now aged 20 and identified in court documents as “Kaley” or “KGM” (her full identity has not been publicly disclosed), said she began using YouTube at six and Instagram at nine, later experiencing anxiety, depression and body image issues. Jurors awarded $6m in damages, split between compensatory and punitive elements, and found that Instagram and YouTube had acted with what was described in court as malice, oppression or fraud.
Crucially, the jury determined that the platforms’ design was a substantial factor in causing harm, rather than focusing on the specific content viewed.
Why This Case Is Being Treated As A Milestone
What makes this so noteworthy is that it is one of the first cases of its kind to reach a full jury verdict, and it is widely seen as an early indicator of a much larger wave of litigation.
There are already more than a thousand similar claims progressing through US courts, involving families, schools and public authorities. Legal experts expect this ruling to influence how future cases are argued, how damages are assessed, and whether companies choose to settle rather than go to trial.
Some legal commentators have also framed this moment as a broader turning point for the technology sector, comparable to earlier cases in other industries where product design and long-term harm became central to accountability.
As one of the lawyers representing the plaintiff stated after the verdict, “no company is above accountability when it comes to our children,” reflecting a wider sentiment that the legal threshold for responsibility may now be changing.
The Shift From Content To Design
One of the most important aspects of the case is actually what it did not focus on. US law has long protected technology companies from liability for user-generated content, limiting legal exposure in many previous cases. Instead, this case examined how platforms are built.
This distinction could prove significant beyond this single case. Legal protections such as Section 230 in the US have historically shielded platforms from responsibility for content, but a growing focus on design may place aspects of those protections under increased scrutiny.
The plaintiff’s legal team argued that features such as infinite scrolling, autoplay videos and constant notifications were intentionally designed to maximise engagement and keep users returning. These features are now common across most digital platforms, and are often described as engagement tools.
The jury accepted that these design choices could create patterns of compulsive use, particularly among younger users. As one expert witness described during proceedings, the question at the centre of the case was effectively how platforms are designed to ensure “a child never puts the phone down,” framing the issue as one of engineering rather than behaviour.
In Their Defence
Both Meta and Google have said they disagree with the verdict and plan to appeal.
Meta has argued that mental health is complex and cannot be attributed to a single factor, while also pointing to its policies restricting under-13s from using its platforms. During testimony, its leadership maintained that their products are intended to have a positive impact.
Google’s defence focused on positioning YouTube as a video platform rather than a traditional social network, and questioned whether the usage patterns described in the case met the threshold for addiction.
These arguments are likely to form the basis of ongoing appeals and future legal disputes.
A Wider Pattern Of Legal And Political Pressure
It’s worth noting here that this verdict follows closely behind another US ruling that found Meta liable in a separate case involving child safety and harmful content exposure.
Notably, other major platforms involved in similar litigation, including TikTok and Snap, chose to settle before trial, which may indicate the level of legal and financial risk companies now associate with these claims.
At the same time, governments are increasingly exploring regulatory action. In the UK, for example, proposals to restrict social media access for under-16s are under active consideration, while Australia has already introduced measures targeting youth access and platform design.
Political leaders, including Keir Starmer, have signalled that the current approach to social media regulation may not be sufficient. He recently stated that the status quo is “not good enough,” indicating that further intervention is likely.
Campaign groups and families involved in similar cases argue that responsibility is beginning to move away from individuals and towards the companies designing these platforms.
Why This Matters Beyond Social Media
For technology companies more broadly, this case highlights a growing legal focus on how digital products are designed, not just how they are used.
Courts are increasingly treating platform design as a series of deliberate choices rather than neutral features, meaning those decisions may carry legal and ethical consequences in the same way as other product design decisions.
Many business models rely on capturing attention and encouraging repeated engagement. Techniques that support this, such as personalised recommendations and continuous content feeds, are widely used across sectors including media, retail and software.
This also seems to highlight the tension in social media platforms between user wellbeing and commercial performance. Features that maximise engagement are often closely tied to advertising revenue and platform growth, which means any legal pressure to change them could have direct business implications.
The risk here is that these same techniques could now face greater scrutiny if they are seen to contribute to harm, particularly where younger or vulnerable users are involved.
This could lead to a reassessment of how engagement is measured and prioritised within digital services.
What Does This Mean For Your Business?
This ruling signals that digital design choices are becoming a matter of legal and commercial risk, not just user experience.
For Meta Platforms, Google, and other major platforms such as TikTok and Snap Inc., it raises the prospect of sustained legal exposure. This case is widely expected to influence hundreds of similar lawsuits, increasing the likelihood of further damages, settlements, and pressure to redesign core product features that drive engagement.
Businesses that operate platforms, apps or online services should now perhaps begin to review how their products encourage user behaviour, particularly if they rely heavily on notifications, recommendations or continuous scrolling. Features that were once seen as standard may now require clearer justification, stronger safeguards, and potentially formal risk assessments, especially where younger users are involved.
There is also a broader reputational consideration here. Public expectations are changing, and organisations seen to prioritise engagement over user wellbeing may face increased scrutiny from customers, regulators and partners. For large platforms, this could translate into tighter regulation, limits on certain design practices, and closer oversight of how algorithms influence behaviour.
For companies using social media as a marketing channel, this case raises questions about long-term platform stability. Ongoing legal challenges and potential regulation could alter how these platforms operate, how audiences engage, and how data is used, particularly if engagement-driven features are restricted or redesigned.
For the largest platforms, this may ultimately lead to more fundamental changes in how products are designed, especially if courts or regulators begin to place limits on features that are closely linked to prolonged user engagement.
It seems now that accountability is expanding across the sector, and both platform providers and the businesses that rely on them will need to adapt to a landscape where design decisions, not just content, are subject to legal and regulatory scrutiny.
What Happens When Robotaxis Break Down?
A series of incidents involving Waymo’s autonomous vehicles has highlighted what happens when driverless systems fail in complex real-world situations, and how much they still rely on human intervention to recover.
A Technology Built For The Road Meets The Unexpected
Waymo’s robotaxi service has expanded rapidly across multiple US cities, now delivering hundreds of thousands of paid rides each week. The company positions its system as a fully autonomous driving service, designed to operate without a human driver behind the wheel.
However, recent incidents show that when situations fall outside expected conditions, vehicles can struggle to respond. In several reported cases, Waymo vehicles have stopped, hesitated or behaved unpredictably during emergencies, requiring intervention from police officers or other first responders.
One widely reported example from August 2025 involved a highway fire in California, where traffic was redirected in an unusual way. A Waymo vehicle was unable to adapt to the change, eventually stopping and requiring a police officer to manually move it out of the way.
When Autonomous Vehicles Cannot Proceed
The most significant issue here seems to be what happens when the system cannot decide what to do next.
Autonomous vehicles are designed to prioritise safety, which often means stopping when uncertainty is too high. While this reduces the risk of collisions, it can create new problems, particularly in fast-moving or emergency situations where standing still is not a viable option.
In multiple incidents, it seems that autonomous vehicles have effectively become obstacles in live environments, blocking traffic or delaying access for emergency services until human intervention takes place.
Human Support As The Fallback
To manage these situations, Waymo relies on human support systems behind the scenes.
The company uses Remote Assistance teams who provide contextual guidance when the vehicle encounters something it cannot resolve. According to Waymo, these workers do not drive the vehicle. Instead, they support decision-making. As the company explains, Remote Assistance agents “provide advice and support to the [vehicle] but do not directly control, steer, or drive the vehicle.”
This model is designed to ensure that the automated system remains in control at all times. However, it also means that when the system reaches its limits, recovery can depend on how effectively this human support is integrated.
Where Things Can Go Wrong
Even with this support in place, errors can still occur. For example, in one case under investigation in Austin, Texas, in January this year, a Waymo vehicle approached a stopped school bus with its warning lights active. The system requested input from a remote assistant, who is alleged to have incorrectly confirmed that it was safe to proceed. The vehicle then moved past the bus while children were boarding, an action that would normally be illegal for a human driver.
Other reported incidents show a different type of failure, where no safe path is identified at all. In these cases, vehicles have remained stationary until physically moved, sometimes by police or other first responders.
All this has led local officials to raise concerns that autonomous vehicles are placing an unexpected burden on public services. For example, in San Francisco, emergency management leaders warned that responders were becoming a default support function for autonomous vehicles, something they described as unsustainable.
Scaling The Problem Alongside The Technology
It seems that these challenges are becoming more visible as Waymo scales its operations.
The company operates thousands of vehicles and is expanding into new cities, increasing the number of unpredictable environments its systems must handle. It has said that around 70 Remote Assistance agents support a fleet delivering more than 400,000 rides per week.
In its response to US lawmakers, Waymo reiterated that Remote Assistance is limited in scope, stating that agents “provide advice only when requested by the automated driving system on an event-driven basis” and do not take control of the vehicle.
As deployment grows, the question is not whether incidents will occur, but how frequently and how effectively they can be resolved without external intervention.
Balancing Autonomy With Accountability
Waymo maintains that its system is designed to prioritise safety, even if that means stopping when conditions are unclear. The vehicle can also ignore human input if it conflicts with its own assessment, reinforcing that it remains the primary decision maker.
The company also states that “Waymo’s service does not rely on remote drivers,” emphasising that human involvement is limited and controlled.
However, the pattern of real-world incidents seems to suggest that full autonomy still depends on multiple layers of human support. When those layers are not sufficient, responsibility can extend beyond the company itself to public infrastructure and emergency services.
What Does This Mean For Your Business?
For UK businesses, this highlights a critical aspect of automation that is often overlooked, namely what happens when systems fail or reach their limits.
Autonomous technologies are not just defined by how they perform under normal conditions, but by how they behave when they cannot proceed. Stopping safely is one outcome, but in operational environments, recovery is just as important.
It seems that human oversight, fallback processes and clear responsibility models remain essential. Businesses adopting automation will, therefore, need to plan not only for success scenarios, but also for failure scenarios, including how issues are resolved quickly and safely.
There is also a wider accountability question here. When automated systems interact with public environments, any gaps in ownership can become visible very quickly.
The Waymo case shows that the real test of autonomous systems is not when everything works, but how they respond when it doesn’t.
New Nail Polish That Works On Touchscreens
A new chemistry breakthrough could allow people with long fingernails to use touchscreens, addressing a long-standing usability issue with modern devices.
Why Fingernails Don’t Work On Touchscreens
Most modern smartphones and tablets use capacitive touchscreens, which rely on tiny electrical fields across the surface of the display. When a conductive object, such as a fingertip, disrupts that field, the device registers a touch.
Fingernails, however, are not conductive. This means taps made with the nail itself are not recognised, forcing users to adjust how they interact with devices. For people with long nails, this often results in awkward movements or reduced accuracy.
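The controller's decision can be pictured as a simple baseline-and-threshold check on each electrode. The picofarad values and threshold below are illustrative only, not figures from any real touch controller.

```python
# Simplified sketch of how a capacitive touch controller registers a touch:
# compare each electrode's measured capacitance against its idle baseline.
# All values here are illustrative assumptions.

BASELINE_PF = 10.0   # idle self-capacitance of an electrode (picofarads)
THRESHOLD_PF = 0.5   # minimum shift treated as a deliberate touch

def is_touch(measured_pf: float) -> bool:
    """Register a touch when the measured capacitance rises far enough
    above the electrode's idle baseline."""
    return (measured_pf - BASELINE_PF) >= THRESHOLD_PF

print(is_touch(11.2))   # conductive fingertip: large shift, touch registered
print(is_touch(10.05))  # bare fingernail: negligible shift, ignored
```

A non-conductive nail barely perturbs the field, so the measured shift never clears the threshold, which is exactly the gap the new polish aims to close.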
The issue is actually more widespread than it first appears. It also affects individuals with heavily calloused skin, where reduced conductivity can lead to unreliable touch response.
A Chemistry-Led Solution
The new approach has been developed by a student researcher working with a supervisor at Centenary College of Louisiana and presented at a meeting of the American Chemical Society.
The idea is simple in principle: create a nail coating that allows fingernails to interact with a touchscreen in the same way as skin.
As part of the research, the team experimented with more than 50 additives across multiple nail polish formulations. Their goal was to find a combination that could introduce just enough electrical interaction to register a touch, without compromising safety or appearance.
The motivation for the work came from a real-world need. When the researchers asked people whether a touchscreen-compatible nail would be useful, they said the answer was “a resounding ‘yes, please!’”
How The Nail Polish Actually Works
Rather than making the nail directly conductive in the traditional sense, the formulation works through a different mechanism.
The researchers identified two key ingredients: taurine, commonly found in dietary supplements, and ethanolamine, a simple organic compound. When combined in a specific way, these ingredients enable a small movement of electrical charge across the nail surface.
This is enough to create a change in capacitance, allowing the touchscreen to detect contact.
According to the researchers, “our final, clear polish could be put over any manicure or even bare nails,” meaning it could integrate easily into existing cosmetic routines while also offering a functional benefit.
Why Previous Attempts Fell Short
Earlier efforts to solve this problem typically relied on adding conductive materials such as carbon nanotubes or metallic particles to nail polish.
While effective, these approaches introduced some practical challenges. For example, some materials raised safety concerns during manufacturing, while others limited the range of colours available, often resulting in dark or metallic finishes that were not commercially appealing.
The new approach avoids these issues by using more familiar chemical compounds and aiming for a clear or near-clear finish. This makes it more compatible with current consumer expectations in the beauty market.
Still Early Days, But Technically Promising
Despite the progress, the formulation is not yet ready for commercial use.
The researchers report that current versions require a relatively thick application and can feel slightly gritty. Performance is also limited, with the conductive effect lasting only a short period once applied; the researchers say they are aiming to extend this to a more practical timeframe of several days.
There are also considerations around ingredient safety, particularly with ethanolamine, which can act as a skin irritant. The team is continuing to refine the formula to improve both durability and usability.
As the researchers themselves acknowledge, “we’re doing the hard work of finding things that don’t work, and eventually, if you do that long enough, you find something that does.”
What This Means Beyond Nail Polish
While this may appear to be a niche innovation, it highlights a broader trend in product development. Small usability challenges, particularly those affecting large numbers of people, are increasingly being addressed through interdisciplinary approaches that combine chemistry, materials science and user experience design.
There is also a clear commercial angle here. The involvement of cosmetic chemistry and early industry interest suggests potential applications within the beauty sector, particularly if the product can be refined to meet consumer expectations around appearance and durability.
More broadly, it could be said to demonstrate how relatively simple chemical solutions can improve how people interact with everyday technology, without requiring changes to the devices themselves.
What Does This Mean For Your Business?
For businesses, this development is a reminder that user experience challenges often sit at the intersection of technology and human behaviour.
Opportunities can emerge not just from building new digital tools, but from improving how people interact with the ones they already use. Even small friction points, when addressed effectively, can create meaningful differentiation.
It also highlights the value of early-stage research. Innovations like this may begin as academic projects, but can quickly attract commercial interest if they solve a genuine problem in a scalable way.
Organisations that stay aware of these developments, particularly in adjacent industries, may be more likely to spot practical innovations that improve usability, accessibility and customer experience.