Sustainability-in-Tech : Google’s 100-Hour ($1 Billion) Battery to Power New Data Centre
Google has announced plans to build a new data centre in Pine Island, Minnesota, powered by wind, solar and a 300-megawatt, 100-hour iron-air battery supplied by US startup Form Energy, marking a significant test of long-duration energy storage at hyperscale.
Minnesota and the Clean Energy Structure
The project, revealed in February, will be developed in partnership with Minneapolis-headquartered electric and gas utility Xcel Energy and introduces a new contract mechanism called the Clean Energy Accelerator Charge (CEAC). Under the arrangement, Google will cover all costs associated with its electric service, with the aim of accelerating clean energy deployment without shifting costs onto other customers.
As part of the agreement, 1,400 megawatts of new wind generation and 200 megawatts of solar will be added to Xcel’s grid to support the data centre, alongside the 300 MW iron-air battery system already announced. The combination is intended to provide a more balanced solution, pairing large-scale renewable capacity with multi-day storage. Google says it will also contribute $50 million to bolster Xcel’s Capacity Connect programme, which is designed to deploy up to 200 MW of distributed battery storage across Minnesota by 2028 to strengthen grid resilience.
Google describes the partnership as an opportunity to “reimagine how data centres can be served”, positioning the project as a catalyst for electricity innovation rather than a conventional power purchase arrangement.
The New Battery
At the centre of the announcement is Form Energy’s iron-air battery, capable of delivering 300 MW continuously for up to 100 hours. Unlike lithium-ion systems, which typically discharge over four to six hours, iron-air technology is designed for multi-day storage. During discharge, the cells take in oxygen and convert iron to rust, releasing electrons; during charging, an electric current reverses the reaction and turns the rust back into iron.
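To put those numbers in context, here is a rough back-of-envelope sketch. The 150 MW average campus load is purely an assumption for illustration, as Google has not published the site’s expected demand:

```typescript
// Rough scale check for a 300 MW, 100-hour battery.
// The assumed campus load below is illustrative only, not a published figure.
const powerMW = 300;               // rated continuous discharge power
const durationHours = 100;         // rated discharge duration
const energyMWh = powerMW * durationHours;   // 30,000 MWh, i.e. 30 GWh of stored energy

const assumedCampusLoadMW = 150;   // hypothetical average demand
const daysOfBackup = energyMWh / (assumedCampusLoadMW * 24);

console.log(`Stored energy: ${energyMWh.toLocaleString()} MWh`);
console.log(`At ${assumedCampusLoadMW} MW average load: ~${daysOfBackup.toFixed(1)} days of cover`);
```

By comparison, a four-hour lithium-ion system with the same 300 MW power rating would hold around 1,200 MWh, which is why the 100-hour figure is the headline here.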
According to one source (The Information), Google’s agreement with Form Energy could be valued at around $1 billion, making it one of the most significant commercial deployments of long-duration energy storage to date.
For data centres increasingly driven by AI workloads, energy reliability is becoming as important as raw capacity. Wind and solar can provide large volumes of low-carbon electricity, but their intermittency presents operational challenges. A 100-hour battery is intended to smooth fluctuations over multiple days rather than just peak hours.
Scaling AI Without Straining the Grid
The timing is significant. Hyperscale data centre demand in the United States has surged, particularly in regions with strong renewable resources. At the same time, utilities and regulators face mounting pressure to ensure that new data centre loads do not drive up energy prices or compromise grid reliability.
In Texas, where Google has also announced new facilities, the company has highlighted a “power first” co-location model and air-cooling systems designed to limit operational water use to “only critical campus operations like kitchens.” Across the state, Google says it has contracted more than 7,800 MW of net-new energy generation and capacity through power purchase agreements.
Minnesota’s model is different because it combines large-scale renewables with long-duration storage and distributed battery networks. For Xcel Energy, which has plans to install 600 MW of energy storage by 2030, the Google partnership provides both capital and a high-profile validation of distributed capacity strategies.
Commercial and Technical Realities
While the announcement shows real ambition, several practical questions remain. Iron-air technology has been demonstrated at pilot scale, but Minnesota represents one of its first major commercial deployments. Manufacturing scale-up, cost discipline and long-term performance under real grid conditions will be closely watched.
Also, although the 100-hour battery is designed to address the challenge of multi-day variability in wind and solar output, it does not remove the need for transmission upgrades, dispatchable generation or demand management.
For Google, the commercial logic also extends beyond sustainability credentials. Securing predictable, long-term clean energy supply can reduce exposure to volatile wholesale markets and regulatory scrutiny. It also strengthens the company’s narrative that AI growth can align with decarbonisation goals rather than undermine them.
For Form Energy, the agreement provides a landmark customer and potential springboard towards a planned public listing. The company has reportedly raised over $1.4 billion to date and is building manufacturing capacity in West Virginia.
What Does This Mean For Your Business?
For most UK businesses, a 300 MW, 100-hour battery may feel far removed from everyday operations. However, energy resilience is steadily becoming a board-level issue rather than simply an operational one. As organisations expand their digital infrastructure, the questions are shifting from how much energy is consumed to how securely and predictably it can be supplied. Reliability, price stability and long-term sustainability are increasingly linked.
Long-duration storage is one potential response to that challenge. Buying renewable power helps reduce carbon intensity. Ensuring supply remains stable during prolonged periods of low wind or solar output supports operational continuity. For businesses with growing digital demands, the difference between those two objectives is becoming increasingly important.
The way the deal has been structured is also worth noting. Google has linked its expansion to additional clean capacity in a way that is intended to avoid shifting costs onto other customers. Most SMEs will never negotiate at this scale, but the underlying principle of matching growth with demonstrable energy impact is increasingly shaping procurement decisions, sustainability reporting and investor scrutiny.
At the same time, public attention on data centre energy and water use is increasing. Businesses expanding cloud and AI capabilities should expect greater transparency requirements around sourcing, efficiency and grid effects. Sustainability claims will increasingly need to be backed by operational evidence.
Minnesota will now act as a practical test. If multi-day storage performs reliably at this scale, it could strengthen the case for deeper renewable integration across energy-intensive industries. If it struggles, it will reinforce how complex the transition remains. Either way, projects like this may be shaping the framework within which future digital growth will need to operate.
Video Update : Reduce Hallucinations In ChatGPT/Copilot
Here’s a way to reduce the number of ‘hallucinations’ in the outputs of your prompts with the use of … another prompt … albeit one set up as “Custom Instructions” within the settings of your Copilot or ChatGPT setup.
[Note – To Watch This Video without glitches/interruptions, It may be best to download it first]
Tech Tip : Set Up A Passkey On Your Microsoft Or Google Account
Passkeys let you sign in without a password, dramatically reducing the risk of phishing and credential theft, and most UK business users can set one up on their Microsoft 365 or Google Workspace account in just a few minutes.
What Is A Passkey?
A passkey is a password replacement that uses your device’s built-in security, such as fingerprint, facial recognition, a PIN, or Windows Hello, to authenticate you. Instead of typing a password that could be stolen, guessed or reused, you approve the sign-in securely on your own device.
Both Microsoft and Google now support passkeys for business and personal accounts, and they are widely regarded as a major step forward in phishing-resistant authentication.
Why This Matters For Businesses
Phishing and password spraying remain two of the most common ways attackers gain access to business email and cloud systems. If a password is stolen through a fake login page or reused from another breach, it can be used immediately.
Passkeys change that. There is no password to steal, reuse or type into a fake website. Even if you land on a convincing phishing page, a passkey will not authenticate against it. For individual users, this is one of the simplest and most effective security upgrades available today.
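For readers curious about what happens under the bonnet, passkeys are built on the WebAuthn standard, and the phishing resistance comes from the credential being cryptographically bound to the site that created it. The sketch below is illustrative only (the domain, user details and options are made-up examples, not a real configuration), but it shows the kind of browser call involved:

```typescript
// Illustrative sketch of passkey registration via the browser's WebAuthn API.
// All names and values here are examples, not a real configuration.
async function registerPasskey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)),  // normally issued by the server
      rp: { id: "example.com", name: "Example Ltd" },         // credential is bound to this domain
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),       // opaque user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],    // ES256 key type
      authenticatorSelection: { userVerification: "required" }, // fingerprint, face or PIN
    },
  });
  // The private key stays on the device, and the browser will only use the
  // credential on the domain it was registered for, so a lookalike phishing
  // site cannot trigger a valid sign-in.
  console.log("Passkey created:", credential?.id);
}
```

None of this is something end users need to write. The point is simply that the secret never leaves your device and never crosses the network, which is what makes the steps below worth a few minutes of your time.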
How To Set Up A Passkey On A Microsoft Work Or School Account
- Go to https://mysignins.microsoft.com/security-info or open your Microsoft account and navigate to Security info.
- Select Add sign-in method.
- Choose Passkey from the list of options.
- Select Add and follow the on-screen prompts.
- Choose where to store the passkey, for example Windows Hello on your PC, or your mobile device.
- Complete the verification step if prompted.
Once configured, you can use your fingerprint, face, or device PIN to sign in instead of entering your password.
If you do not see Passkey as an option, your organisation’s IT administrator may need to enable it within Microsoft Entra ID first.
How To Set Up A Passkey On A Google Account
- Go to https://myaccount.google.com/security while signed in.
- Scroll to the section labelled Passkeys.
- Select Create a passkey.
- Follow the prompts to store the passkey on your device, such as your phone or laptop.
- Confirm using your device unlock method.
Google will then allow you to sign in using your device authentication rather than a traditional password.
A Practical Approach
Start with your most important accounts, especially your business email. You can keep your existing authentication methods during the transition, but moving to passkey-based sign-in removes one of the most common attack routes used against UK businesses.
This is a small change, made in your own account settings, that can significantly reduce phishing risk and strengthen your first line of defence.
Microsoft Copilot Bug Exposes Confidential Emails To AI Tool
A coding error inside Microsoft 365 Copilot briefly allowed the AI tool to read and summarise emails that businesses had explicitly marked as confidential.
A Safeguard That Didn’t Hold
In January, Microsoft detected an issue inside the “Work” tab of Microsoft 365 Copilot Chat. The problem, tracked internally as CW1226324, meant Copilot could process emails stored in users’ Sent Items and Drafts folders, even when those messages carried sensitivity labels designed to block AI access.
Inbox folders appear to have remained protected. The weakness sat in a specific retrieval path affecting Drafts and Sent Items.
Microsoft confirmed the bug was first identified on 21 January 2026. A server-side fix began rolling out in early February and is still being monitored across enterprise tenants.
The company said in a statement:
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential, authored by a user and stored within their Draft and Sent Items in Outlook desktop.”
It added:
“This did not provide anyone access to information they weren’t already authorised to see. While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access.”
That distinction matters. Microsoft’s position is that no unauthorised user gained access to restricted data. The issue was about Copilot processing information it was supposed to ignore.
How Did This Happen?
Copilot relies on what’s known as a retrieve-then-generate model. It first pulls relevant content from emails, documents or chats. It then feeds that material into a large language model to produce summaries or answers.
The enforcement point is the retrieval stage. If protected content is fetched at that stage, the AI will use it.
In this case, a code logic error meant sensitivity labels and data loss prevention policies were not correctly enforced for Drafts and Sent Items. Emails marked confidential were picked up and summarised inside Copilot’s Work chat.
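To make the architectural point concrete, here is a simplified, hypothetical sketch of a retrieve-then-generate pipeline with the label check applied (or missed) at the retrieval stage. The types and folder logic are invented for illustration and are not Microsoft’s actual code:

```typescript
// Hypothetical illustration of policy enforcement at the retrieval stage.
type Folder = "Inbox" | "Drafts" | "SentItems";

interface EmailItem {
  folder: Folder;
  subject: string;
  body: string;
  sensitivityLabel?: "Confidential" | "Internal" | "Public";
}

// Intended behaviour: confidential items are filtered out before anything
// reaches the language model, regardless of which folder they sit in.
function retrieveForAssistant(items: EmailItem[]): EmailItem[] {
  return items.filter(item => item.sensitivityLabel !== "Confidential");
}

// The kind of logic error described in the report: the label check is only
// applied on one retrieval path, so Drafts and Sent Items slip through.
function buggyRetrieve(items: EmailItem[]): EmailItem[] {
  return items.filter(item =>
    item.folder === "Inbox" ? item.sensitivityLabel !== "Confidential" : true
  );
}
```

Once an item has been retrieved, nothing downstream re-checks the label, which is why enforcement at this stage matters so much.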
That creates obvious concerns. Draft folders often contain unfinalised legal advice, internal assessments or sensitive negotiations. Sent Items frequently hold commercially sensitive exchanges.
Even if summaries stayed within the same user’s workspace, the principle of exclusion had failed.
Why The Timing Is Awkward
Microsoft has been aggressively positioning Microsoft 365 Copilot as a secure enterprise AI assistant. Businesses pay a premium licence fee on top of their Microsoft 365 subscriptions. The selling point is productivity without compromising governance.
This incident seems to undermine that message.
It also comes amid heightened scrutiny of AI tools in regulated environments. The European Parliament recently banned AI tools on some worker devices over cloud data concerns. Regulators are watching closely.
Industry analysts have long warned that the rapid rollout of enterprise AI features increases the likelihood of control gaps and configuration errors. As vendors compete to embed generative AI deeper into core productivity tools, governance frameworks are often forced to catch up. This incident reinforces a wider concern that AI functionality can move faster than internal compliance oversight.
Security researchers have previously highlighted vulnerabilities in retrieval augmented generation systems, including those used by Copilot. The lesson is consistent. If policy enforcement fails at retrieval, downstream safeguards cannot fully compensate.
What This Means For Microsoft And Its Rivals
Copilot sits at the centre of Microsoft’s enterprise AI strategy, so any weakness in its data controls lands hard. Businesses are being asked to trust an assistant that can read across emails, documents and internal chats. That trust is commercial currency.
In Microsoft’s defence, it must be said that the company moved quickly to contain the issue. The fix was applied server-side, so customers did not need to install patches, and the company says it is contacting affected tenants while monitoring the rollout. From a technical response standpoint, the reaction has been swift.
Microsoft has yet to publish tenant-level figures or detailed forensic logs showing exactly which confidential items were processed during the exposure window. For organisations with regulatory obligations, reassurance alone will not be enough. They will want clear evidence of what was accessed, when and under what controls.
Rivals will also be paying attention. Google Workspace with Gemini, Salesforce’s AI integrations and other embedded assistants rely on similar retrieval architectures. The risk exposed here is not unique to one vendor. It reflects a broader design challenge facing every platform embedding generative AI into live corporate data environments.
What Does This Mean For Your Business?
If your organisation is using Microsoft 365 Copilot, this is a governance story, not a crisis story.
Microsoft insists no unauthorised access took place and there is no evidence of data being exposed outside permitted user boundaries. That matters. Yet the episode highlights something more structural. AI controls can fail quietly inside systems businesses assume are ring-fenced.
Copilot is not a standalone chatbot. It operates across your email, documents and collaboration tools. It reads broadly. It summarises intelligently. It relies on retrieval rules working exactly as designed. When those rules misfire, even briefly, sensitive material can be processed in ways you did not intend.
That is why access decisions matter. Embedding AI into legal, HR, finance or executive workflows is not simply a productivity choice. Draft emails often contain unfiltered strategy, regulatory advice or negotiation positions. Those are precisely the communications organisations most want tightly controlled.
This is also a moment to test assumptions. Sensitivity labels and data loss prevention policies are only effective if they behave as expected under real conditions. Enabling new AI features should trigger validation, not blind trust.
Copilot can deliver genuine efficiency gains. Faster document drafting, quicker retrieval of buried information and less manual searching all translate into time saved. The value is real. Yet tools with that level of visibility into your data estate deserve the same scrutiny you would apply to any system handling commercially sensitive information.
Businesses that combine productivity ambition with disciplined oversight will benefit. Those that treat embedded AI as frictionless and risk-free may find the learning curve steeper than expected.
The Truth About Cyber Insurance
Cyber insurance has grown into a multi-billion-dollar global market, yet when a serious breach occurs, the real story often lies in the small print, the exclusions, and the security controls that should have been in place long before the policy was signed.
Once Just An Add-On
Cyber insurance was once treated as a niche add-on to professional indemnity cover. Today it sits at the centre of boardroom risk discussions. The reason is simple. Cyber incidents are no longer rare. They are routine, costly and increasingly disruptive.
So what exactly is cyber insurance, how large has the market become, and when does it actually pay out?
What Cyber Insurance Really Covers
At its core, cyber insurance is designed to cover two broad categories of loss. First-party losses include incident response, forensic investigation, legal advice, customer notification, system restoration, business interruption and, in some cases, ransom payments. Third-party cover addresses claims brought by customers, partners or regulators following data breaches or operational failures.
The detail, however, varies significantly between policies. Cover is often conditional on specific security controls being in place, such as multi-factor authentication, tested backups and patch management processes. In practice, cyber insurance now operates as a form of security gatekeeper. Insurers increasingly assess a firm’s cyber hygiene before agreeing terms or setting premiums.
How Big Is The Market?
According to Munich Re (Münchener Rückversicherungs-Gesellschaft), one of the world’s largest reinsurance companies, the global cyber insurance market was worth around $15.3 billion in 2024 and is expected to reach $16.3 billion in 2025. Munich Re projects that global premium volume could more than double by 2030, with annual growth exceeding 10 percent.
North America accounts for roughly 69 percent of global premiums, with Europe representing around 21 percent. Growth in Europe has been particularly strong over the past few years as regulatory pressure and ransomware attacks have increased awareness.
In the UK, the Association of British Insurers reported that insurers paid out £197 million in cyber claims to UK businesses in 2024. That figure represents a 230 percent increase on the previous year. Malware and ransomware accounted for 51 percent of all UK cyber claims, up from 32 percent in 2023.
These numbers underline two trends. Claims are rising sharply, and insurers are paying substantial sums.
But what do claims actually look like in practice?
Claims And Payouts
There is no universal “claim approval rate” published across the market, but available industry data offers some insight into how incidents unfold.
Coalition’s 2025 Cyber Claims Report, covering incidents in 2024 across several markets including the UK, found that 60 percent of claims arose from business email compromise and funds transfer fraud. These are not sophisticated zero-day exploits. They are often payment diversion scams targeting finance teams.
The same report noted that 44 percent of policyholders affected by ransomware chose to pay the ransom when it was deemed reasonable and necessary. Meanwhile, 56 percent of reported matters required no out-of-pocket payment from the policyholder, often because insurer-provided incident response support mitigated losses before they escalated.
The key takeaway here is that many cyber claims are not dramatic data centre shutdowns. They are invoice fraud, stolen credentials and misdirected payments.
That said, some cases have tested the boundaries of cover entirely.
When The Small Print Becomes The Story
One of the most widely reported examples of a major cyber insurance coverage dispute followed the 2017 NotPetya attack (a malware attack attributed to the Russian military). Pharmaceutical giant Merck said the malware disrupted around 40,000 machines and ultimately caused losses of approximately $1.4 billion. Several of its insurers sought to rely on traditional “war exclusion” clauses, arguing that the attack was attributable to a state actor and therefore not covered. In 2022, a New Jersey court ruled that the wording of the war exclusion did not apply to the cyber attack in question. The parties later reached a confidential settlement.
The Merck case became a landmark moment in cyber insurance interpretation. It highlighted how state-linked cyber operations can blur the boundary between criminal activity and geopolitical conflict, and exposed the limits of legacy policy wording when applied to modern cyber warfare.
Exclusions
In the wake of disputes linked to NotPetya and similar incidents, Lloyd’s of London issued a market bulletin requiring, from 31 March 2023, that standalone cyber policies include clearly defined exclusions addressing state-backed cyber attacks unless expressly covered. The intention was to reduce ambiguity around systemic cyber risk and clarify how attribution would be handled within policy terms.
Other Examples
Other incidents illustrate the potential scale of insured losses. Colonial Pipeline paid a $4.4 million ransom in 2021 following a ransomware attack, with US authorities later recovering approximately $2.3 million in cryptocurrency. CNA Financial was widely reported to have paid $40 million after a ransomware attack the same year. Norsk Hydro, by contrast, refused to pay ransom after its 2019 attack and later disclosed financial impacts in the region of $60–70 million, supported in part by insurance arrangements.
Taken together, these cases demonstrate both the scale of financial exposure and the growing legal and structural complexity surrounding cyber insurance. Insurance can provide vital financial cushioning when an attack hits, yet it can just as quickly become the subject of dispute, interpretation and courtroom argument when definitions, exclusions or attribution are tested.
Why Cyber Insurance Is Interesting Now
Three structural shifts are fundamentally reshaping the cyber insurance market and changing how organisations think about risk, cover and accountability.
First, cyber insurance is increasingly acting as a de facto regulator. Insurers demand evidence of MFA, endpoint protection, network segmentation and backup testing before binding cover. Organisations seeking insurance often upgrade security controls simply to qualify.
Second, there is a clear protection gap. Swiss Re estimates that SMEs account for around 30 percent of global cyber premiums, yet penetration rates among smaller firms remain modest. Many UK SMEs remain uninsured despite rising threat levels.
Third, systemic risk looms large. Supply chain attacks, cloud provider outages and state-linked campaigns raise questions about correlated losses. Insurers must balance growth with exposure to events that could trigger thousands of simultaneous claims.
What Does This Mean For Your Business?
For UK organisations, cyber insurance is neither a silver bullet nor a formality. It is a financial resilience tool that sits alongside prevention, not in place of it.
Policies can provide rapid access to specialist incident response teams, legal advisers and negotiators at moments of crisis. That support can materially reduce downtime and reputational damage, yet cover is conditional. Failure to implement agreed controls can jeopardise claims.
Businesses should therefore treat cyber insurance procurement as part of a broader risk management strategy. That means reviewing exclusions, understanding sub-limits for ransomware and business interruption, and aligning technical controls with policy requirements.
The market is growing, claims are increasing, and insurers are paying out significant sums. The most important lesson from the past decade is that buying cyber insurance is not the end of the story. It is the point at which scrutiny, obligations and real risk management truly begin.
Hard Drive Makers Sell Out 2026 Output To AI Data Centres
The world’s biggest hard drive manufacturers have already allocated all the units they will produce this year after hyperscale AI and cloud operators secured the bulk of available capacity.
AI Infrastructure Buys Up The Year
Western Digital and Seagate have both confirmed that their nearline hard drive production for calendar year 2026 is effectively spoken for.
Western Digital chief executive Tiang Yew Tan told analysts: “We’re pretty much sold out for calendar ’26. We have firm purchase orders with our top seven customers. And we’ve also established long-term agreements with two of them for calendar year ’27 and one of them for calendar year ’28.”
Seagate CEO Dave Mosley was equally direct: “Our nearline capacity is fully allocated through calendar year 2026, and we expect to begin accepting orders for the first half of calendar year 2027 in the coming months… multiple cloud customers are discussing their demand growth projections for calendar 2028, underscoring that supply assurance remains their highest priority.”
In simple terms, the hyperscalers have moved first and bought ahead.
Nearline drives are the high-capacity workhorses used in data centres for bulk storage. They are not consumer PC drives. They are 30TB-plus, 40TB-class disks that underpin cloud storage, AI training datasets and archival systems.
Why AI Is Driving The Squeeze
The AI boom has created a double demand curve.
Training large models requires vast amounts of storage for datasets, checkpoints and logs. Inference workloads generate new data that also needs to be stored, replicated and backed up. Cloud providers are scaling capacity aggressively.
Technology market research firm Omdia now forecasts total server spend in 2026 at around $590 billion, with data centre capex exceeding $1 trillion. The top ten cloud providers are expected to account for more than 70 percent of that spend, with AI-optimised servers representing roughly 80 percent of total server investment.
Storage sits at the heart of that build-out.
Western Digital has pivoted heavily towards this segment. Around 89 percent of its revenue now comes from cloud customers, compared with just 5 percent from consumers. This is no longer a PC storage business. It is AI infrastructure plumbing.
Implications For The Wider Market
For hyperscalers, long-term supply agreements bring certainty. For everyone else, the cupboard looks thinner.
Analysts have warned that discretionary buyers, including mid-sized enterprises and traditional server customers, may struggle to secure high-capacity drives at predictable prices. Corporate IT projects that assumed hard drives would provide a cost-effective capacity tier may need to revisit budgets.
There is also a ripple effect. AI demand has already strained DRAM and NAND flash markets. If SSD prices rise, some buyers will pivot back to HDDs for bulk storage, adding further pressure to supply.
Andrew Buss, from global market intelligence and research firm International Data Corporation (IDC), recently noted that AI growth is consuming “large amounts of fast flash-based NVMe SSDs”, pushing up prices and prompting a reconsideration of HDD-based arrays where workloads allow.
The result is an unusual reversal. Hard drives, once seen as legacy technology, are back at the centre of infrastructure planning.
Technology Race Intensifies
At the same time, the technical arms race continues.
Western Digital is pushing towards 40TB and 44TB drives this year and has outlined a roadmap to 100TB by 2029, supported by new 14-platter designs. Seagate is advancing its HAMR technology and has publicly targeted 100TB drives by the end of the decade.
These capacity gains matter. Hyperscalers want more storage per rack, per watt and per square metre. Increasing areal density and platter counts is now a strategic priority, not an incremental upgrade.
The challenge is manufacturing capacity. HDD production cannot be scaled overnight. Tooling, media, heads and assembly lines require long lead times. When hyperscalers lock in output years in advance, smaller buyers sit further back in the queue.
What Does This Mean For Your Business?
For Western Digital and Seagate, the sell-out provides revenue visibility rare in the storage sector. Multi-year agreements reduce demand uncertainty and underpin capital investment plans.
For AI infrastructure players, it reinforces concentration. The largest cloud providers are able to secure supply at scale, strengthening their competitive position.
For enterprises and SMEs, it raises practical questions. If you are planning a server refresh or building on-premise storage, availability and pricing assumptions may need adjustment.
There is also a structural concern here. When the majority of global HDD output is effectively pre-booked by a small number of hyperscalers, the market becomes less flexible. Innovation may skew even further towards the needs of AI data centres rather than general-purpose enterprise workloads.
Critics argue that the AI infrastructure boom is distorting supply chains across silicon, memory and now spinning disk. Supporters counter that it is driving investment, accelerating innovation and revitalising a technology many had written off.
What is clear here is that the humble hard drive, long overshadowed by flash, has become a strategic asset again. In an AI-first world, bulk storage is no longer a commodity. It is strategic leverage.