Sustainability-In-Tech: Why Green Hydrogen’s Global Rollout Is Struggling
Green hydrogen was expected to become one of the most important clean fuels for decarbonising heavy industry, yet many projects across Europe, the United States and Australia are now slowing, shrinking or being cancelled altogether.
What Is Green Hydrogen?
Green hydrogen is produced by splitting water into hydrogen and oxygen using renewable electricity from wind, solar or hydropower. The process uses electrolysers, which sit between the electricity supply and the water source.
There are three main types of electrolyser, which are:
1. Proton Exchange Membrane (PEM) electrolysers (such as those built by Quest One in Hamburg), which offer fast response times and compact designs.
2. Alkaline electrolysers, a more mature technology with lower upfront costs.
3. Solid oxide electrolysers, which operate at high temperatures and can achieve greater efficiencies when integrated with industrial heat, although they are still emerging commercially.
Hydrogen is already widely used in fertiliser production and oil refining, but almost all of that supply is “grey” hydrogen made from natural gas without capturing emissions. The International Energy Agency says low emissions hydrogen, which includes both green and blue hydrogen, accounts for less than one per cent of today’s global hydrogen production. Scaling green hydrogen is seen as essential for heavy industries that cannot easily electrify, such as steelmaking, chemicals, shipping fuels and long duration energy storage.
Why Germany’s Expectations Have Not Yet Materialised
As a European example, Quest One’s Hamburg facility illustrates the disconnect between ambition and reality. The factory was built to support twice as many staff as it currently employs, yet orders for electrolysers remain well below capacity. Earlier this year, the company cut roughly 20 per cent of its German workforce. Its executive vice president for customer operations said the issue is not an inability to produce but a lack of demand.
Also, it seems that the price gap is a major barrier. For example, hydrogen made from renewable electricity remains significantly more expensive than hydrogen produced from fossil fuels. Companies such as Quest One estimate that costs may fall to around four euros per kilogram later this decade, which is roughly half current German prices, but only if production scales meaningfully.
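As a rough, purely illustrative sketch of why cheap renewable power matters so much here, the short calculation below estimates the electricity cost embedded in a kilogram of green hydrogen. The electricity consumption and power prices used are assumptions for illustration only, not figures from Quest One or any source cited above.

```python
# Hypothetical back-of-the-envelope sketch of how electricity prices drive
# green hydrogen costs. All figures are illustrative assumptions, not data
# from Quest One, the IEA or any other cited source.

KWH_PER_KG_H2 = 50.0  # assumed electrolyser consumption (kWh per kg of hydrogen)

for power_price_eur_per_kwh in (0.04, 0.06, 0.08, 0.10):
    electricity_cost = KWH_PER_KG_H2 * power_price_eur_per_kwh
    print(f"Power at EUR {power_price_eur_per_kwh:.2f}/kWh -> "
          f"electricity cost ~EUR {electricity_cost:.2f}/kg "
          f"(before capital, water and compression costs)")
```

Under these assumptions the electricity bill alone ranges from roughly two to five euros per kilogram, which is why low power prices and high utilisation are so central to closing the gap with fossil-based hydrogen.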
Infrastructure Not Ready To Operate At Scale Until The 2030s
German policymakers are continuing to view hydrogen as essential for meeting climate targets. Large infrastructure is being planned, including hydrogen pipelines from the Port of Hamburg to industrial clients and new underground storage sites in salt caverns in northern Germany. It’s understood, however, that these assets will not operate at scale until the 2030s. In the meantime, companies must navigate today’s market conditions with little clarity on long term demand.
Hydrogen Better For Industry Than For Domestic Purposes
German researchers and industry advisers also highlight a second challenge, i.e., hydrogen is most valuable in heavy industrial settings with high temperature needs. It is far less efficient for heating homes or replacing petrol in passenger cars, where direct electrification performs better. However, early political debate often focused on these less suitable uses, creating public confusion and diverting attention from industrial applications that genuinely require hydrogen.
Similar Problems Across The Rest Of Europe
It seems that other European companies are encountering the same pressures. For example, ITM Power in the UK has undergone restructuring in response to losses and project delays, even as it reports growth in its order book. Its commercial progress has been held back by earlier fixed price contracts and the slow pace of customer decision making.
Also, Norway’s Statkraft, Europe’s largest renewable generator, announced earlier this year (2025) that it would stop developing new green hydrogen projects due to market uncertainty. The company said it would concentrate on a smaller set of existing projects and seek new investment partners before entering construction phases.
Norwegian electrolyser manufacturer Nel has also faced weakening order pipelines. It has reduced planned investment, postponed a new factory in the United States and acknowledged that customers are taking longer to commit to projects than previously anticipated.
In fact, more than 50 renewable hydrogen projects have been cancelled globally in the last eighteen months according to industry assessments, with most citing economics and unclear offtake agreements as the primary causes.
United States Developers Are Scaling Back
In the United States, the green hydrogen sector has benefited from generous tax credits through the Inflation Reduction Act, yet uncertainty remains, not least because of the Trump administration’s apparent opposition to green energy policies. Plug Power, for example, one of the country’s most prominent hydrogen companies, has announced job cuts and a financial restructuring programme, pointing to tougher market conditions and slower than expected equipment sales.
Also, ExxonMobil recently paused development of what would have been one of the world’s largest blue hydrogen facilities at Baytown in Texas. Its executives said the company had not secured enough long term customers willing to pay for hydrogen at a commercially viable price.
Although interest in hydrogen production hubs continues across the US, it seems that many industrial buyers remain cautious about committing to expensive new fuels when electricity prices, carbon pricing and regulatory frameworks remain unsettled.
Australia’s Export Plans Hit Obstacles
Australia once positioned itself as a major exporter of green hydrogen and green ammonia to Asia and Europe but now several projects have been delayed or cancelled. For example, Fortescue (a large mining and green energy company) has stepped back from hydrogen developments in Queensland and Arizona and announced significant write downs. The company has said it will refocus on projects with clearer commercial pathways, including green iron and battery materials.
Also, Trafigura, a global commodities trading and logistics company, has halted its Port Pirie hydrogen project in South Australia after rising costs and difficulty securing guaranteed demand from industrial buyers. Analysts in the region have noted that early expectations for exporting hydrogen at large scale were likely unrealistic without stronger international commitments from importers.
Where Hydrogen Could Succeed
Energy agencies and research groups now broadly agree on the sectors where hydrogen is indispensable. Steelmaking is the most prominent example, with several companies testing direct reduction processes that use hydrogen instead of coking coal. Chemical producers are exploring lower carbon routes to ammonia and methanol. Shipping and aviation are studying hydrogen derived fuels that can integrate with existing global energy infrastructure.
These applications can offer meaningful emissions reductions and play to hydrogen’s strengths. The challenge, however, seems to be that these industries require large volumes of low cost hydrogen delivered reliably and safely. Most are not prepared to sign long term contracts until prices fall and infrastructure is in place.
Price Remains A Central Issue
Multiple European analyses estimate that green hydrogen still costs between three and five times as much as grey hydrogen produced from fossil fuels. For example, the EU’s energy regulator reported that green hydrogen in Europe was around four times the cost of fossil based hydrogen in 2024. Electrolyser prices are falling, helped in part by strong Chinese manufacturing, but electricity costs and financing remain high.
Risk Reduction Needed
Project developers say that large scale deployment will only happen once governments introduce mechanisms that reduce risk for both suppliers and buyers. Proposals include long term contracts for difference, industrial quotas that require certain sectors to buy low carbon hydrogen and funding for hydrogen hubs where production and demand can grow together.
Other Key Growth Factors
Industry leaders argue that the next phase of hydrogen’s development depends less on technology, which has largely matured, and more on policy clarity, market stability and credible industrial demand. Despite the downturn, investment continues to rise and an increasing number of projects are progressing from early design to construction.
What Does This Mean For Your Organisation?
It seems that progress now hinges on whether governments and industry commit to clear, bankable demand rather than just broad ambition. The technology is no longer the barrier, yet producers and buyers remain stuck without the conditions they need to move forward. Developers say they can’t cut prices without large scale deployment, while industrial users say they can’t commit to long term contracts until prices fall. This loop is slowing projects across Europe, the United States and Australia and is shaping whether hydrogen becomes a major industrial fuel or stays confined to small, specialist uses.
The policy environment will help decide how quickly this gap closes. Companies need predictable frameworks, stable pricing signals and clarity over infrastructure timelines before moving beyond pilots. The current pattern of cancellations and delays shows how fragile large hydrogen investments can be without these foundations.
For UK businesses, these global setbacks really matter. For example, UK electrolyser manufacturers rely on worldwide demand to scale production, reduce costs and stay competitive. Heavy industrial users in the UK, including steel, chemicals and shipping, will also track these developments closely because their own decarbonisation plans depend on affordable low carbon fuels rather than costly niche products. Slow international progress risks higher operating costs and delayed investment decisions at home.
Energy firms, investors and policymakers face similar pressures. Building pipelines, storage and import terminals requires long term confidence in the market. Financing large hydrogen hubs demands regulatory stability. Governments must balance fiscal constraints with the need to support industries that can deliver major emissions reductions. The examples emerging from Germany, Norway, the United States and Australia illustrate how easily momentum can falter without that certainty.
The wider picture here appears to be that hydrogen still offers a credible route for cutting emissions in the hardest to electrify sectors. The potential remains significant, but the path to commercial reality is proving slower and more complex than early forecasts suggested. This is the stage at which consistent policy, coordinated infrastructure planning and targeted support for genuine industrial use cases will matter most, particularly for countries like the UK aiming to compete in future low carbon markets.
Tech Tip: Use Outlook’s “Groups” (Contact Groups) to Email Faster
Did you know you can bundle any set of contacts into a single group and send a message to everyone with just one address? It’s a huge time‑saver and eliminates the risk of forgetting a recipient.
How to create a group – Desktop (Outlook 365)
– Switch to People (the icon at the bottom of the navigation pane).
– Click New Contact Group (or New → Contact Group).
– Give the group a clear name.
– Click Add Members, choose From Outlook Contacts or From Address Book, select the people you want, then click OK.
– Click Save & Close.
How to create a group – Web (Outlook.com / Outlook on the web)
– Open People, then click New contact list (or New → Contact List).
– Name the list, click Add Members, pick contacts from your address book or type new email addresses, then hit Create.
Why it’s so handy: One click in the To field expands the whole group, keeping your message tidy and ensuring everyone gets the same info instantly. Updating a group is as easy as editing the list with no need to rewrite dozens of addresses.
Give it a try next time you need to reach a project team, club, or family list!
Major Insurers Say AI Is Too Risky to Cover
Insurers on both sides of the Atlantic are warning that artificial intelligence may now be too unpredictable to insure, raising concerns about the financial fallout if widely used models fail at scale.
Anxiety
As recently reported in the Financial Times, it seems that anxiety across the insurance sector has grown sharply in recent months as companies race to deploy generative AI tools in customer service, product design, business operations, and cybersecurity. For example, several of the largest US insurers, including Great American, Chubb, and W. R. Berkley, have now reportedly asked US state regulators for permission to exclude AI-related liabilities from standard corporate insurance policies. Their requests centre on a growing fear that large language models and other generative systems pose what the sector calls “systemic risk”, where one failure triggers thousands of claims at the same time.
What Insurers Are Worried About
The recent filings describe AI systems as too opaque for actuaries to model, with one, quoted by the Financial Times, calling LLM outputs “too much of a black box”. Actuaries normally rely on long historical datasets to predict how often a specific type of claim might occur. Generative AI has only been in mainstream use for a very short period, and its behaviour is influenced by training data and internal processes that are not easily accessible to external analysts.
The Central Fear
The industry’s central fear is not an isolated error but the possibility that a single malfunction in a widely used model could affect thousands of businesses at the same time. For example, a senior executive at Aon, one of the world’s largest insurance brokers, outlined the challenge earlier this year, noting that insurers can absorb a £300 to £400 million loss affecting one company, but cannot easily survive a situation where thousands of claims emerge simultaneously from a common cause.
The concept of “aggregation” risk is well understood within insurance. For example, cyberattacks, natural disasters, and supply chain failures already create challenges when losses cluster. However, what makes AI different is the speed at which a flawed model update, inaccurate output, or unexpected behaviour could spread across global users within seconds.
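To make the aggregation point concrete, the simple simulation below compares a year of independent claims with a single common-cause failure that hits a large share of policyholders at once. It is a purely illustrative sketch with invented numbers, not an actuarial model used by any of the insurers named above.

```python
# Illustrative sketch of "aggregation" risk: independent claims versus a
# single common-cause failure hitting many policyholders at once.
# All figures are invented for illustration, not drawn from insurer filings.
import random

random.seed(42)
POLICYHOLDERS = 10_000
AVG_CLAIM = 250_000            # assumed average loss per claim (GBP)
INDEPENDENT_CLAIM_PROB = 0.01  # assumed chance any one client claims in a year
COMMON_CAUSE_HIT_RATE = 0.30   # assumed share of clients hit if a shared AI model fails

# Scenario 1: claims arrive independently, so annual losses stay predictable.
independent_loss = sum(
    AVG_CLAIM for _ in range(POLICYHOLDERS) if random.random() < INDEPENDENT_CLAIM_PROB
)

# Scenario 2: one flawed model update affects a large share of clients at once.
common_cause_loss = int(POLICYHOLDERS * COMMON_CAUSE_HIT_RATE) * AVG_CLAIM

print(f"Independent-claims year: ~GBP {independent_loss / 1e6:.0f}m in losses")
print(f"Common-cause failure:    ~GBP {common_cause_loss / 1e6:.0f}m in losses")
```

In this toy example the independent scenario produces losses of a few tens of millions, while the common-cause scenario produces hundreds of millions from a single event, which is the shape of loss that insurers say they cannot easily absorb.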
Real Incidents Behind the Rising Concern
Several high-profile cases have highlighted the unpredictability of AI systems when deployed at scale. For example, earlier this year, Google’s AI Overview feature falsely accused a Minnesota solar company of regulatory violations and legal trouble. The business filed a lawsuit seeking $110 million in damages, arguing that the false claim caused reputational harm and lost sales. The case was widely reported across technology and legal publications and is now a reference point for insurers trying to price the risks associated with AI-driven public information tools.
Air Canada faced a different challenge in 2023 when a customer service chatbot invented a discount policy and provided it to a traveller. The airline argued that the chatbot was responsible for the mistake, not the company, but a tribunal ruled that companies remain liable for the behaviour of their AI systems. This ruling has since appeared in several legal and insurance industry analyses as a sign of where liability is likely to sit in future disputes.
Another incident involved the global engineering consultancy Arup, which confirmed that fraudsters used a deepfake of a senior employee during a video call to authorise a transfer. The theft totalled around US$25 million (roughly £20 million). This case, first reported by Bloomberg, has been used by cyber risk specialists to illustrate the speed and sophistication of AI-enabled financial crime.
It seems that these examples are not isolated. For example, industry reports from cyber insurers and security analysts show steep increases in AI-assisted phishing attacks, automated hacking tools, and malicious code generation. The UK’s National Cyber Security Centre has also noted that AI is lowering the barrier for less skilled criminals to produce convincing scams.
Why Insurers Are Seeking New Exclusions
Filings submitted to US state regulators show insurers requesting permission to exclude claims arising from “any actual or alleged use” of AI in a product or service. In fact, some requests are reported to go further, seeking to exclude losses connected to decisions made by AI or errors introduced by systems that incorporate generative models.
W. R. Berkley’s filing, for example, asks to exclude claims linked to AI systems embedded within company products, as well as advice or information generated by an AI tool. Chubb and Great American are seeking similar adjustments, citing the difficulty of identifying, modelling, and pricing the underlying risk.
AIG was mentioned by some insurers during the early stages of these discussions, although the company has since clarified that it is not seeking to introduce any AI-related exclusions at this time.
Some specialist insurers have already limited the types of AI risks they are willing to take on. Mosaic Insurance, which focuses on cyber risk, has confirmed that it provides cover for certain software where AI is embedded but does not offer protection for losses linked to large general purpose models such as ChatGPT or Claude.
What Industry Analysts Say About the Risk
The Geneva Association, the global insurance think tank, published a report last year warning that parts of AI risk may become “uninsurable” without improvements in transparency, auditability, and regulatory control. The report highlighted several drivers of concern, including the lack of training data visibility, unpredictable model behaviour, and the rapid adoption of AI across industries with varying levels of oversight.
It seems that Lloyd’s of London has also taken an increasingly cautious approach. For example, recent bulletins instructed underwriters to review AI exposure within cyber policies, noting that widespread model adoption may create new forms of correlated risk. Lloyd’s has been preparing for similar challenges on the cyber side for years, including the possibility that a global cloud platform outage or a major vulnerability could create simultaneous losses for thousands of clients.
In its most recent market commentary, Lloyd’s emphasised that AI introduces both upside and downside risk but noted that “high levels of dependency on a small number of models or providers” could increase the severity of a large scale incident.
Regulators and the Emerging Policy Debate
State insurance regulators in the US are now reviewing the proposed exclusions, which must be approved before they can be applied to policies. However, approval is not guaranteed, and regulators typically weigh the interests of insurers against the needs of businesses that require predictable cover to operate safely.
There is also a growing policy debate in Washington and across Europe about whether AI liability should sit with developers, deployers, or both. For example, the European Union’s AI Act, approved in 2024, introduces new rules for high risk AI systems and could reduce some uncertainty for insurers in the longer term. The Act requires risk assessments, transparency commitments, and technical documentation for certain types of AI models, which could help underwriters understand how systems have been trained and tested.
The UK has taken a more flexible, sector based approach so far, although its regulators have expressed concerns about the speed at which AI is being adopted. The Financial Conduct Authority has already issued guidance reminding firms that they remain responsible for the outcomes of any automated decision making systems, regardless of whether those systems use AI.
Business Risk
Many organisations now use AI for customer service, marketing, content generation, fraud detection, HR screening, and operational automation. However, if insurers continue to retreat from covering AI related losses, businesses may need to rethink how they assess and manage the risks associated with these tools.
Some analysts believe that a new class of specialist AI insurance products will emerge, similar to how cyber insurance developed over the past decade. Others argue that meaningful coverage may not be possible until the industry gains far more visibility into how models work, how they are trained, and how they behave in unexpected situations.
What Does This Mean For Your Business?
Insurers are clearly confronting a technology that’s developing faster than the tools used to measure its risk. The issue is not hostility towards AI but the absence of reliable ways to model how large, general purpose systems behave. Without that visibility, insurers cannot judge how often errors might occur or how widely they might spread, which is essential for any form of cover.
Systemic exposure remains the central concern here. For example, a single flawed update or misinterpreted instruction could create thousands of identical losses at once, something the insurance market is not designed to absorb. Individual claims can be managed, but large clusters of identical failures cannot. This is why insurers are pulling back and why businesses may soon face gaps that did not exist a year ago.
The implications for UK organisations are significant. For example, many businesses already rely on generative AI for customer service, content creation, coding, and screening tasks. If insurers exclude losses linked to AI behaviour, companies may need to reassess how they deploy these systems and where responsibility sits if something goes wrong. A misstatement from a chatbot or an error introduced in a design process could leave a firm exposed without the safety net of traditional liability cover.
Developers and regulators will heavily influence what happens next. Insurers have been clear that better transparency, audit trails, and documentation would help them price risk more accurately. Regulatory frameworks, such as the EU’s AI Act, may also make high risk systems more insurable over time. The UK’s lighter, sector based approach leaves more responsibility with businesses to manage these risks proactively.
The wider picture here is that insurers, developers, regulators, and users each have a stake in how this evolves. Until risk can be measured with greater confidence, cover will remain uncertain and may become more restrictive. The next stage of AI adoption will rely as much on the ability to understand and manage these liabilities as on the technology itself.
Microsoft Launches Fara-7B, Its New On-Device AI Computer Agent
Microsoft has announced Fara-7B, a new “agentic” small language model built to run directly on a PC and carry out tasks on screen, marking a significant move towards practical AI agents that can operate computers rather than simply generate text.
What Is Fara-7B?
Fara-7B is Microsoft’s first computer-use small language model (SLM), designed to act as an on-device operator that sees the screen, understands what is visible and performs actions with the mouse and keyboard. It does not read hidden interface structures and does not rely on multiple models stitched together. Instead, Microsoft says it works in the same visual way a person would, interpreting screenshots and deciding what to click, type or scroll next.
Compact
The model has 7 billion parameters, which is small compared with leading large language models. However, Microsoft says Fara-7B delivers state-of-the-art performance for its size and is competitive with some larger systems used for browser automation. The focus on a compact model is deliberate. For example, smaller models offer lower energy requirements, faster response times and the ability to run locally, which has become increasingly important for both privacy and reliability.
Where Can You Get It?
Microsoft has positioned Fara-7B as an experimental release intended to accelerate development of practical computer-use agents. It is openly available through Microsoft Foundry and Hugging Face, can be explored through the Magentic-UI environment and will run on Copilot+ PCs using a silicon-optimised version.
Why Build A Computer-Use SLM?
Microsoft’s announcement of Fara-7B is not that surprising, given the wider trend in AI development. The industry has now moved beyond text-only chat models to models that can act, reason about their environment and automate digital tasks. This actually reflects the growing demand from businesses and users for assistants that can complete work rather than merely describe how to do it.
There is also a strategic element. For example, Microsoft has invested heavily in AI across Windows, Azure, Copilot and its device ecosystem. Building a capable agentic model that runs directly on Windows strengthens this position and gives Microsoft a competitive answer to similar tools emerging from OpenAI, Google and other major players.
By releasing the model with open weights and permissive licensing, Microsoft is also encouraging researchers and developers to experiment, build new tools and benchmark new methods. This approach has the potential to shape the direction of computer-use agents across the industry.
How Fara-7B Has Been Developed
One of the biggest challenges in creating computer-use agents is the lack of large, high-quality data showing how people interact with websites and applications. For example, a typical task might involve dozens of small actions, from locating a button to entering text in the correct field. Gathering this data manually would be too slow and expensive at the scale needed.
Microsoft says its team tackled this by creating a synthetic data pipeline built on the company’s earlier Magentic-One framework. The pipeline generates tasks from real public webpages, then uses a multi-agent system to explore each page, plan actions, carry out those actions and record every observation and step. These recordings, known as trajectories, are passed through verifier agents that confirm the tasks were completed successfully. Only verified attempts are used to train the model.
In total, Fara-7B was trained on around 145,000 trajectories containing around one million individual steps. These tasks cover e-commerce, travel, job applications, restaurant bookings, information look-ups and many other common activities. The base model, Qwen2.5-VL-7B, was selected for its strong multimodal grounding abilities and its support for long context windows, which allows Fara-7B to consider multiple screenshots and previous actions at once.
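As an illustration of what one verified trajectory might look like, the sketch below pairs each observed screenshot with the model’s reasoning and action and records whether a verifier agent confirmed success. The field names and structure are hypothetical, since Microsoft has not published the exact data schema.

```python
# Hypothetical sketch of a training "trajectory" record, based on the
# description above. Field names and structure are illustrative only;
# Microsoft has not published Fara-7B's exact data schema.
from dataclasses import dataclass, field


@dataclass
class Step:
    screenshot_path: str   # what the agent saw at this point
    reasoning: str         # short internal reasoning before acting
    action: str            # e.g. "click", "type", "scroll", "visit_url"
    action_args: dict      # e.g. {"x": 412, "y": 263} or {"text": "2 adults"}


@dataclass
class Trajectory:
    task: str                      # natural-language task generated from a real webpage
    steps: list[Step] = field(default_factory=list)
    verified: bool = False         # set True only if a verifier agent confirms success


example = Trajectory(
    task="Find a restaurant table for two in Hamburg on Friday evening",
    steps=[Step("step_001.png", "The search box is visible", "type",
                {"selector_hint": "search box", "text": "restaurants Hamburg"})],
    verified=True,  # only verified attempts are used for training
)
```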
How Fara-7B Works In Practice
During use, Fara-7B receives screenshots of the browser window, the task description and a history of actions. It then predicts its next move, such as clicking on a button, typing text or visiting a new URL. The model outputs a short internal reasoning message and the exact action it intends to take.
Mirrors Human Behaviour By Just Looking At The Screen
This is all designed to mirror human behaviour. For example, the model sees only what is on the screen and must work out what to do based on that view. This avoids the need for extra data sources and ensures the model’s decisions can be inspected and audited.
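The observe-decide-act cycle described above can be sketched roughly as follows. This is a simplified illustration, not Microsoft’s inference code, and the three helper functions are stubs standing in for real screenshot capture, model inference and UI control.

```python
# Simplified, hypothetical sketch of the observe-decide-act loop described
# above - not Microsoft's inference code. The three helpers are stubs
# standing in for real screenshot capture, model inference and UI control.
from typing import Any


def capture_screenshot() -> bytes:
    return b""  # stub: would grab the current browser window as an image


def query_model(task: str, screenshot: bytes, history: list) -> tuple[str, dict[str, Any]]:
    return "Nothing left to do", {"type": "stop"}  # stub: model predicts reasoning + action


def execute(action: dict[str, Any]) -> None:
    pass  # stub: would click, type, scroll or open a URL


def run_task(task: str, max_steps: int = 30) -> None:
    history: list[tuple[str, dict[str, Any]]] = []
    for _ in range(max_steps):
        screenshot = capture_screenshot()                       # see only what is on screen
        reasoning, action = query_model(task, screenshot, history)
        if action["type"] == "stop":                            # model judges the task complete
            break
        execute(action)                                         # act with mouse and keyboard
        history.append((reasoning, action))                     # feed the step back as context


run_task("Compare prices for a 13-inch laptop")
```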
Strong Results
Evaluations published by Microsoft appear to show strong results. For example, on well-known web automation benchmarks such as WebVoyager and Online-Mind2Web, Fara-7B outperforms other models in its size range and in some cases matches or exceeds the performance of larger systems. Independent testing by Browserbase also recorded a 62 per cent success rate on WebVoyager under human verification.
What Fara-7B Can Be Used For
The current release is aimed at developers, researchers and technical users who want to explore automated web tasks. Typical examples include:
– Filling out online forms.
– Searching for information.
– Making bookings.
– Managing online accounts.
– Navigating support pages.
– Comparing product prices.
– Extracting or summarising content from websites.
These tasks reflect everyday processes that take time in workplaces. Automating them could, therefore, reduce repetitive admin, speed up routine workflows and improve consistency when handling high-volume digital tasks.
Also, the fact that the model is open weight means organisations can fine tune it or build custom versions for internal use. For example, a business could adapt it to handle specialist web portals, internal booking systems or industry-specific interfaces.
Who Can Use It And When?
Fara-7B is available now through Microsoft Foundry, Hugging Face and the Magentic-UI research environment. A quantised and silicon-optimised version is available for Copilot+ PCs running Windows 11, allowing early adopters to test the model directly on their devices.
However, it should be noted here that it’s not yet a consumer feature and should be used in controlled experimentation rather than in production environments. Microsoft recommends running it in a sandboxed environment where users can observe its actions and intervene if needed.
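One way such a sandbox could let a user intervene is to gate sensitive actions behind explicit confirmation, along the lines of the hypothetical sketch below. The list of “critical” action types is an assumption that mirrors Microsoft’s description of Critical Points rather than its actual implementation.

```python
# Hypothetical sketch of a human-in-the-loop gate for sensitive agent actions,
# mirroring the idea of stopping at "Critical Points". Not Microsoft's code;
# the set of critical action types below is an illustrative assumption.

CRITICAL_ACTION_TYPES = {"submit_payment", "grant_permission", "delete_data"}


def confirm_with_user(action: dict) -> bool:
    answer = input(f"Agent wants to perform {action['type']} "
                   f"with {action.get('args', {})}. Allow? [y/N] ")
    return answer.strip().lower() == "y"


def guarded_execute(action: dict, perform_action) -> bool:
    """Run an action only if it is non-critical or explicitly approved."""
    if action["type"] in CRITICAL_ACTION_TYPES and not confirm_with_user(action):
        print("Action blocked by user - the agent should stop or re-plan.")
        return False
    perform_action(action)
    return True
```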
The Benefits For Business Users
Many organisations have been cautious about browser automation due to concerns about data privacy, vendor lock-in and cloud dependency. Fara-7B’s on-device design appears to directly address these issues by keeping data local. This is especially relevant for sectors where regulatory requirements restrict the movement of sensitive information.
Running the model locally also reduces latency. For example, an agent that is reading the screen and clicking through a webpage must respond quickly, and any delay can disrupt the experience. An on-device agent avoids these delays and provides more predictable performance.
Benefits For Microsoft
For Microsoft, Fara-7B essentially strengthens its position in agentic AI, supports its Windows and Copilot+ hardware strategy and provides a foundation for future systems that combine device-side reasoning with cloud-based intelligence.
Developers
For developers and researchers, the open-weight release lowers barriers to experimentation, allowing new techniques to be tested and new evaluation methods to be developed. This may accelerate progress in areas such as safe automation, grounding accuracy and long-horizon task completion.
Challenges And Criticisms
Microsoft is clear that Fara-7B remains an experimental model with limitations. It can misinterpret interfaces, struggle with unfamiliar layouts or fail partway through a complex task. Like other agents that control computers, it remains vulnerable to malicious webpages, prompt-based attacks and unpredictable site behaviour.
There are some notable governance and security questions too. For example, businesses will need to consider how to monitor and log agent actions, how credentials are managed and how to prevent incorrect or undesired operations.
That said, Microsoft has introduced several safety systems to address these risks. The model has been trained to stop at “Critical Points”, such as payment stages or permission prompts, and will refuse to proceed without confirmation. The company also notes that the model achieved an 82 per cent refusal rate on red-team tasks designed to solicit harmful behaviour.
Early commentary has also highlighted that benchmark success does not necessarily translate directly into strong real-world performance, since live websites can behave unpredictably. Developers will need to conduct extensive testing before deploying any form of autonomous web agent in operational settings.
What Does This Mean For Your Business?
Fara-7B brings the idea of practical, controllable computer-use agents much closer to everyday reality, and the implications reach far beyond its immediate research release. The model shows that meaningful on-device automation is now possible with compact architectures rather than sprawling cloud systems. That alone will interest UK businesses that want to streamline manual web-based tasks without handing sensitive data to external services. These organisations have long relied on browser-driven processes in areas such as procurement, HR, finance and customer administration, so a tool that can take on repeatable workflows locally could offer genuine operational value if it proves reliable enough.
The wider AI market is likely to view the launch as a clear signal that Microsoft intends to compete directly in the emerging space for agentic automation. Fara-7B gives the company a foothold that it controls end to end, from the hardware and operating system through to developer tools and safety frameworks. This matters in a landscape where other players have approached computer-use agents with more closed or cloud-first designs. The open-weight release also sets a tone for how Microsoft wants the community to interact with the model, and it encourages a level of scrutiny that could shape future iterations.
In Fara-7B, developers and researchers gain a flexible platform that they can adapt, test and benchmark in their own environments. The training methodology itself, built on large scale synthetic tasks, raises important questions about how best to model digital behaviour and how to ensure that agents can generalise beyond curated datasets. These questions will continue to surface as more organisations explore automation that depends on visual reasoning rather than structured APIs.
It’s likely that stakeholders across government, regulation and security will now be assessing the risks as closely as the opportunities. For example, a system capable of taking actions on a live machine introduces new oversight challenges, from governance and auditing to resilience against hostile prompts or malicious web content. Microsoft’s emphasis on safety, refusal behaviour and Critical Points is a start, although much will depend on how reliably these mechanisms perform once the model is exposed to diverse real-world environments.
The release ultimately gives the industry a clearer view of what agentic AI might look like when it is embedded directly into personal devices rather than controlled entirely in the cloud. If the technology matures, it could affect expectations about digital assistance in the workplace, reduce friction in routine operations and extend automation to tasks that currently have no clean API-based alternative. The coming months will show whether developers and early adopters can turn this experimental foundation into stable, responsible tools that benefit businesses, consumers and the wider ecosystem.
GDS Local Launched To Link National And Local Services
A new GDS Local unit has been launched to give residents simpler, consistent access to both national and local government services through a single digital system.
What the Government Has Announced
On 22 November 2025, the Department for Science, Innovation and Technology (DSIT) unveiled GDS Local, a dedicated team within the Government Digital Service (GDS) created to support councils with digital transformation. The stated aim is to help local authorities modernise services, reform long-term technology contracts, and make better use of shared data to improve everyday tasks such as managing council tax, reporting issues in a local area, applying for school places or accessing local support.
Three Main Priorities
The government says GDS Local has been set up with three core priorities, which are:
1. To help councils connect to existing national platforms including GOV.UK One Login and the GOV.UK App. These platforms already underpin central government services such as tax, passports and benefits, and the plan is that residents will eventually only need one secure account for both national and local services.
2. To reform markets and procurement, with a clear focus on helping councils break free from restrictive long-term contracts that limit flexibility and often involve high costs for outdated systems.
3. To improve the way councils use and share anonymised data, supported by a new Government Digital and Data Hub that brings together digital and data professionals from across the public sector.
Part of “Rewire The State”
The launch actually forms part of a wider programme to “rewire the state” and address the findings of the recent State of Digital Government Review, which estimated that modernising public services could release up to £45 billion in productivity gains each year. Reports cited during the review also suggest that digital and data spending across the UK public sector remains well below international benchmarks.
Why Local Councils Are A Major Focus
Much of the UK’s recent digital modernisation has taken place at central government level. The roll-out of GOV.UK One Login, changes to HMRC’s digital services, and new online systems for benefits and health services have all progressed, yet councils have often been left to modernise in isolation. This is despite councils being responsible for many of the services people use most frequently.
Minister for Digital Government Ian Murray said this gap had persisted “for too long”, arguing that councils had not benefited from the same investment or support as central departments. Announcing the new unit, he said GDS Local would help end the “postcode lottery” for digital services and give every resident access to “modern, joined-up and reliable online services”. He described the aim as ensuring that public services “work seamlessly for people wherever they live”.
The scale of the challenge becomes clearer when looking at the underlying numbers. For example, digital spending in local government is significantly lower than the levels seen in comparable sectors internationally. Also, councils depend on ageing systems, often supplied by a small number of long-standing vendors who offer limited interoperability and hold councils in expensive, inflexible contracts. Many of these contracts are due to expire over the next decade, which the government sees as an opportunity to reshape the market and encourage more competition.
Creating A Single Account For Local And National Services
One of the most visible changes GDS Local aims to deliver is the integration of GOV.UK One Login into local services. One Login is the national secure identity system that will eventually replace dozens of separate logins across the public sector. The government argues that using this same system for councils will make services simpler for residents and more efficient for local authorities.
If fully implemented, this would allow residents to sign in to the GOV.UK App or website and access everything from council tax accounts to local housing support using the same verified identity they use for passport renewals or DVLA services. This approach is expected to reduce duplication, strengthen security, lower failure rates when people cannot remember multiple passwords, and give councils access to a modern identity system without having to build one independently.
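GOV.UK One Login is based on OpenID Connect, which means a council service would redirect residents to a central sign-in page rather than running its own identity system. The sketch below builds an illustrative authorisation request; the endpoint, client ID and redirect URI are placeholders, not real integration details.

```python
# Illustrative sketch of an OpenID Connect authorisation request of the kind
# a council service might send to a central identity provider such as
# GOV.UK One Login. The endpoint, client_id and redirect_uri below are
# placeholders, not real integration details.
import secrets
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://oidc.example-identity-provider.gov.uk/authorize"  # placeholder

params = {
    "response_type": "code",                      # standard OIDC authorisation code flow
    "client_id": "example-council-service",       # placeholder client registration
    "redirect_uri": "https://council.example.gov.uk/auth/callback",  # placeholder
    "scope": "openid email",                      # request a verified identity plus email
    "state": secrets.token_urlsafe(16),           # CSRF protection
    "nonce": secrets.token_urlsafe(16),           # replay protection for the ID token
}

login_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
print(login_url)  # the council service would redirect the resident here to sign in
```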
No Central Solutions Imposed On Councils
GDS has emphasised that this work will not involve imposing central solutions on councils. GDS Local leaders Liz Adams and Theo Blackwell said the priority is to “collaboratively extend proven platforms and expertise”, recognising the unique needs of each authority. They also stressed that councils’ own experience in designing local services will remain central to how the national platforms evolve.
Reforming Long-Term Technology Contracts
Long-standing technology contracts have been one of the biggest barriers to local digital progress. For example, many councils have been locked into multi-year agreements with a single supplier covering critical services such as revenues and benefits, social care or housing. These systems often cannot integrate easily with modern tools or data platforms, making it harder for councils to innovate or switch provider.
The government’s announcement described these arrangements as “ball and chain” contracts that “lock councils into long-term agreements with single suppliers, often paying premium prices for outdated technology”. GDS Local has been tasked with giving councils more control, increasing competition, and helping authorities choose systems that support modern digital standards.
This work will be carried out with the Local Government Association (LGA) and the Ministry of Housing, Communities and Local Government (MHCLG). The LGA has long argued that councils need more flexibility and more competitive procurement options. Its Public Service Reform and Innovation Committee chair, Councillor Dan Swords, welcomed the move and said the new unit offered “a fantastic opportunity to accelerate the pace of transformation”, making services “more accessible, efficient and tailored to local need”.
Improving How Councils Use and Share Data
Alongside GDS Local, the government has also launched the Government Digital and Data Hub, which is a central online platform for digital and data professionals across the public sector. The hub brings together staff from central government, councils, the NHS and other public bodies, offering training, career guidance, resources and a network to share expertise.
One goal of the hub is to help councils share anonymised data on issues such as homelessness, social care demand and environmental trends. The intention is to help authorities learn from one another’s approaches, scale innovation that works, and identify emerging issues earlier. GDS argues that shared learning and consistent data practices can help reduce duplication and improve service planning across regions.
Liverpool City Region As An Early Partner
Liverpool City Region has been closely involved in the early stages of GDS Local and was chosen as the location for the national launch. The region has previously developed a Community Charter on Data and AI, led by local residents, to set clear principles for responsible data use. It has also experimented with data-driven projects through initiatives such as its AI for Good programme and the Civic Data Cooperative.
Councillor Liam Robinson, the region’s Cabinet Member for Innovation, described GDS Local as “an important step forward” and said the region’s recent work showed how data and technology could be used to tackle real-world challenges such as improving health outcomes or addressing misinformation.
The launch event also highlighted the upcoming Local Government Innovation Hackathon in Birmingham, taking place on 26–27 November. The event will bring together councils, designers, technologists and voluntary organisations to explore how digital tools can help address homelessness and rough sleeping.
What Comes Next?
Councils are now being invited to register interest in working with GDS Local through discovery projects, data-sharing initiatives and early connections with GOV.UK One Login. More detailed plans are expected over the coming months as DSIT and GDS set out the next steps for integration, procurement reform and data standards.
The unit’s success will depend on how widely councils engage with it, how effectively central and local systems can be joined up, and how quickly legacy barriers can be removed.
What Does This Mean For Your Business?
All of this seems to point to a more consistent experience for residents, but the scale of change involved will test how well central and local government can work together. Councils will, no doubt, need sustained support to unwind their legacy systems, adapt to common identity standards and take advantage of shared data platforms. Some authorities are already well placed to do this, while others face steeper challenges due to funding pressures, outdated infrastructure or complex service demands. The success of GDS Local will rely on whether these differences can be narrowed rather than deepened.
The implications stretch beyond councils. For example, UK businesses that depend on timely licensing decisions, planning processes, environmental checks or local regulatory services could benefit from faster and more predictable digital systems. More consistent use of One Login may reduce administrative friction for organisations interacting with multiple authorities, and clearer data standards may help suppliers build tools that work across regions rather than creating bespoke versions for every council. There are also opportunities for technology firms to compete in a reformed procurement environment where long-term lock-in no longer dominates the market.
Residents, meanwhile, may stand to gain from simpler access to core services and a clearer sense of what to expect from their local authority regardless of where they live. Improved data sharing may also help councils respond earlier to really serious issues such as homelessness, care demand or environmental risks, which could influence wider public services including health and emergency response.
The coming months will show how quickly GDS Local can turn its priorities into practical progress. Much will depend on how well central platforms can adapt to local needs and how effectively councils can reshape contracting arrangements that have been entrenched for years. The foundations laid through this launch should give the programme a clear direction, although the real measure will be whether residents and organisations begin to notice services becoming easier, faster and more consistent across the country.
Microsoft Copilot To Leave WhatsApp In January 2026
Microsoft has announced that its Copilot chatbot will stop working on WhatsApp on 15 January 2026, when WhatsApp’s new restrictions on third party AI assistants take effect.
Why Copilot Was On WhatsApp In The First Place
Copilot was launched on WhatsApp in late 2024 as part of Microsoft’s wider effort to meet users inside the apps they already use each day. It allows people to talk to Copilot through a normal WhatsApp chat thread, asking questions, requesting explanations, drafting messages, or generating ideas. Microsoft says “millions of people” have used the WhatsApp integration since launch, showing how messaging apps have become a common first step into generative AI for mainstream users.
Operated Through The WhatsApp Business API
The chatbot operated through the WhatsApp Business API, which is the system that lets companies automate conversations with customers. Copilot’s version was “unauthenticated”, meaning users did not sign in with a Microsoft account. This made the experience fast and simple, although it meant the service was separated from users’ main Copilot profiles on Microsoft platforms.
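For context, the WhatsApp Business Platform’s Cloud API lets an organisation send a customer-service style message with a single HTTP call along the lines below. The access token, phone number ID and API version are placeholders, and this generic illustration is not how the Copilot integration was actually built.

```python
# Generic illustration of replying to a user via the WhatsApp Business
# Cloud API. The access token, phone number ID and API version are
# placeholders; this is not how Microsoft's Copilot integration was built.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"        # placeholder credential
PHONE_NUMBER_ID = "123456789012345"       # placeholder business phone number ID
URL = f"https://graph.facebook.com/v19.0/{PHONE_NUMBER_ID}/messages"

payload = {
    "messaging_product": "whatsapp",
    "to": "447700900123",                 # placeholder recipient number
    "type": "text",
    "text": {"body": "Your order has shipped and should arrive on Friday."},
}

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    json=payload,
    timeout=10,
)
print(response.status_code, response.json())
```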
Why It’s Being Removed
The removal of Copilot from WhatsApp appears to be due entirely to changes in WhatsApp’s platform rules. For example, in October 2025, WhatsApp updated its Business API terms to prohibit general purpose AI chatbots from running on the platform. These rules apply to assistants capable of broad, open ended conversation rather than bots created to support specific customer service tasks.
WhatsApp said the Business API should remain focused on helping organisations serve customers, i.e., providing shipping updates, booking information, or answers to common questions. The company made clear that it no longer intends WhatsApp to act as a distribution channel for large AI assistants created by external providers.
Several Factors, Say Industry Analysts
Industry analysts have linked the decision to several factors. For example, these include the cost of handling high volume AI traffic on WhatsApp’s infrastructure, Meta’s growing focus on consolidating data inside its own ecosystem, and the introduction of Meta AI, the company’s consumer facing assistant that is being deployed across WhatsApp, Instagram, and Messenger. Meta AI is expected to remain the only general purpose assistant users can access directly inside WhatsApp once the policy takes effect.
How The Change Will Happen
Microsoft has confirmed that Copilot will remain accessible on WhatsApp until 15 January 2026. After that date, the chatbot will stop responding and users will not be able to send new prompts through the app.
Microsoft has also warned that chat history will not transfer to any other Copilot platform. Because the WhatsApp integration did not use Microsoft’s account authentication, there is no technical link between a user’s WhatsApp conversation and their profile on the Copilot app or website. Microsoft therefore recommends exporting chats manually using WhatsApp’s built in export tool before the deadline if users want to keep a record of past conversations.
OpenAI has taken a similar approach with ChatGPT on WhatsApp, although it has said that some users may be able to link previous chats to their ChatGPT history if they used a version tied to their account. This is not an option for Copilot due to the design of the original integration.
Where Users Can Access Copilot Instead
Microsoft is directing users to three main platforms where Copilot will continue to be available, which are:
1. The Copilot mobile app on iOS and Android.
2. Copilot on the web at copilot.microsoft.com.
3. Copilot on Windows, built into the operating system.
These platforms support all of the core features users are already familiar with and introduce additional tools that were not available in WhatsApp. These include Copilot Voice for spoken queries, Copilot Vision for image understanding, and Mico, a companion style presence that supports daily tasks. Microsoft says these will form the central experience for Copilot going forward.
The Wider Effect On AI Chatbots
WhatsApp is now reported to be used by more than three billion people globally and has become an important distribution route for companies deploying AI driven tools. The updated rules now mean that all general purpose AI assistants will be removed from the platform, including ChatGPT and Perplexity, which were introduced earlier in 2025. Each provider has begun notifying users and guiding them towards their own mobile apps and websites.
OpenAI previously said more than 50 million people had used ChatGPT through WhatsApp, showing how significant the channel had become for AI adoption. Microsoft has not released its own usage figures beyond confirming “millions” of Copilot interactions on WhatsApp since launch.
Commentary from industry analysts notes that the update will reshape how external AI companies can reach users inside Meta’s ecosystem. It also creates a clearer distinction between approved business automation, which can continue, and broad AI assistants, which cannot operate inside WhatsApp under the new rules.
What The Policy Change Means For AI Developers
Developers that relied on the WhatsApp Business API to distribute general purpose assistants will no longer be able to use that channel. Companies that built workflows around WhatsApp based assistants now need to redesign their approach to comply with the updated rules. Many WhatsApp integration providers have already issued technical advice to help organisations check whether their existing use cases fall under the new restrictions or remain permitted under the “customer support” classification.
Microsoft’s public response has been measured. For example, its official statement says it is “proud of the impact” Copilot has had on WhatsApp and that it is now focused on ensuring a smooth transition for users. The company has avoided any direct criticism of WhatsApp and has instead highlighted the added functionality available in its own apps, particularly multimodal features that did not fit within WhatsApp’s interface.
What Does This Mean For Your Business?
This development shows how quickly access to mainstream AI tools can change when platform rules are updated, and it reinforces how much control large messaging platforms now have over which assistants users can reach. For UK businesses, the change means that any informal use of Copilot or ChatGPT through WhatsApp will now need to move to authenticated apps or web based tools, which may offer clearer security controls even if the transition disrupts established habits. Organisations that had started exploring AI driven workflows inside WhatsApp must check whether their implementations fall within the permitted customer support category or whether they now count as general purpose assistants that need reworking or relocating.
AI developers face tighter boundaries on where and how their models can operate, particularly when relying on platforms that sit between them and their users. This will encourage providers to invest more heavily in their own apps and operating system integrations, where they retain full control over authentication, data handling, and feature development. Users who previously relied on WhatsApp as a simple way to test or adopt generative AI will now need to shift their expectations to standalone tools that offer richer functionality but require more deliberate use.
This change also highlights how Meta is positioning its own assistant as the primary option inside WhatsApp, creating a more contained environment for general purpose AI. This will influence how consumers discover and evaluate different AI products, and it will shape how competing providers reach audiences on messaging platforms that have become central to everyday communication.