Company Check : Another Cloudflare Outage Raises Fresh Concerns
Cloudflare has suffered its second major service outage in less than a month, briefly taking a substantial portion of the internet offline and prompting renewed questions about the resilience of the infrastructure many organisations now rely on.
Friday 5 December Outage
This latest incident occurred on Friday 5 December, when websites around the world began returning blank pages, stalled login screens and 500 error messages from around 08:47 GMT. Cloudflare confirmed that the problem affected part of its global network and that a significant number of high-profile customers were impacted. Although services were largely restored by 09:12, the disruption was extensive enough to affect millions of users and thousands of online businesses during a busy weekday morning.
What Happened And Why Did It Spread So Quickly?
Cloudflare acknowledged shortly after the incident that the outage was caused by an internal change to how its Web Application Firewall processes incoming requests. The change had been deployed as part of an emergency response to a newly disclosed security vulnerability in React Server Components. The flaw, widely discussed across the software industry, could allow remote code execution in some applications built using React and Next.js. Cloudflare introduced new rules to help shield its customers from potential exploitation while they applied their own patches.
A Bug Was Triggered
During that process, a long standing bug in how the Web Application Firewall parses request bodies was triggered under the specific conditions created by the mitigation. This resulted in errors being generated within parts of Cloudflare’s network responsible for inspecting and forwarding traffic. In practice, it meant that requests processed through those systems began failing, which is why so many sites appeared blank or unresponsive.
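To illustrate the failure mode in purely hypothetical terms (Cloudflare has not published the code involved), the sketch below shows how an unhandled error in a request body parser can turn every inspected request into a server error once a new rule starts feeding it unexpected input:

```python
# Purely hypothetical sketch, not Cloudflare's actual code: an inspection
# layer whose body parser assumes every request contains a separator.

def parse_body(raw: bytes) -> dict:
    # Latent bug: raises ValueError when the separator is absent, a case a
    # newly deployed firewall rule could suddenly start producing.
    head, payload = raw.split(b"\r\n", 1)
    return {"head": head.decode(), "length": len(payload)}

def inspect_request(raw_body: bytes) -> int:
    try:
        parsed = parse_body(raw_body)
        # ... evaluate firewall rules against `parsed` here ...
        return 200
    except ValueError:
        # No safe fallback: the parsing failure surfaces to the client as
        # a server error instead of letting the request pass through.
        return 500

print(inspect_request(b"POST /login\r\n{...}"))  # 200
print(inspect_request(b"no-separator-body"))     # 500 for every such request
```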
Not A Cyber Attack
Cloudflare’s Chief Technology Officer commented publicly that this was not the result of an attack and was instead linked to logging changes implemented to help address the React vulnerability. The company has since published a technical summary of the issue, stating that it was working on a full review to prevent similar failures from recurring.
The speed of the disruption reflected Cloudflare’s central role in global web infrastructure. For example, the company provides security, performance optimisation and traffic routing services for a large proportion of internet services. This means that when a fault is introduced in a critical part of its platform, the effects can cascade quickly across many unrelated industries and geographies.
Which Services Were Impacted?
Reports from affected organisations and users indicated that large platforms such as LinkedIn, Zoom, Canva and Discord were among the most prominent names disrupted. E-commerce providers including Shopify, Deliveroo and Vinted also experienced problems. Media outlets and entertainment platforms saw outages, as did financial services and stock trading apps in some regions. Ironically, even DownDetector, the independent website that tracks service outages, was temporarily unavailable because it also runs on Cloudflare’s network.
For many businesses the disruption manifested as failed page loads, broken checkout journeys or services timing out without explanation. It should be noted that, although the outage was brief, these symptoms can have very real impacts. For example, retailers risk abandoned purchases, subscription platforms face customer frustration and organisations offering time critical services can see immediate operational strain.
How This Compares With The November Outage
The December outage arrived only weeks after Cloudflare’s previous incident on 18 November, which was far longer and affected a wider range of services. That disruption began around midday UTC and took several hours to fully resolve.
Cloudflare later explained that the November issue stemmed from an automatically generated configuration file used by its Bot Management system. A change to database permissions caused the file to grow far beyond its intended size. When the oversized file was synchronised across the network, it caused a core traffic routing module to fail repeatedly. Major services including X, ChatGPT, Spotify and large gaming platforms all experienced significant downtime.
Both The Results Of Internal Changes
It seems, therefore, that the two outages were technically unrelated. The November incident was caused by a configuration file that overwhelmed a key proxying process, while the December disruption was caused by a logic error triggered within the Web Application Firewall. However, what links them is that both were the result of internal changes aimed at improving security and performance, and both exposed fragilities within a highly automated global system.
Reactions From Cloudflare And The Wider Industry
Cloudflare has stated publicly that any outage of this scale is unacceptable and has acknowledged the frustration caused to customers. After the November incident, its chief executive promised a series of improvements to configuration handling, kill switches and automated safety checks. The fact that a second issue occurred so soon afterwards has prompted visible concern from customers and industry observers about the platform’s change control processes.
The Danger Of Relying On A Small Number Of Infrastructure Providers
Security experts have emphasised the broader lesson here, i.e., that many organisations now rely heavily on a small number of global infrastructure providers. Cloudflare’s size and technical capabilities offer benefits in terms of speed and protection from attacks, yet this scale also creates single points of failure. If a major provider experiences a fault, thousands of websites and applications can be disrupted almost instantly.
Industry groups have urged organisations to reassess their resilience strategies. Some policy specialists argue that businesses should identify where they rely on a single vendor for critical operations and explore ways to diversify. This might involve adopting multiple cloud providers, splitting content delivery across different networks or architecting applications so they degrade gracefully rather than fail outright when a dependency becomes unavailable.
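As a simple illustration of the "degrade gracefully" principle, the hedged sketch below tries a primary provider first, falls back to a secondary and finally serves a locally cached response. The URLs and cached payload are placeholders, not any specific vendor's API:

```python
# A minimal sketch of "degrade gracefully rather than fail outright": try
# a primary provider, fall back to a secondary, then serve a cached copy.
import urllib.request

PROVIDERS = [
    "https://primary.example.com/api/status",    # e.g. routed via provider A
    "https://secondary.example.net/api/status",  # e.g. routed via provider B
]
CACHED_FALLBACK = b'{"status": "degraded", "source": "local-cache"}'

def fetch_with_fallback(timeout: float = 2.0) -> bytes:
    for url in PROVIDERS:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except Exception:
            continue  # provider unreachable or erroring, try the next one
    return CACHED_FALLBACK  # stale but usable, rather than a blank page

print(fetch_with_fallback())
```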
Customers And Competitors
For Cloudflare’s customers, the December outage reinforces the need to balance performance gains with risk planning. Many organisations use Cloudflare for security filtering, caching, bot protection and traffic routing, meaning a failure in any of those layers can have immediate consequences for availability.
Also, competitors in the content delivery and cloud security sector may see renewed interest in multi provider approaches. This does not necessarily mean businesses will move away from Cloudflare, given its extensive footprint and capability, but it is likely to encourage more organisations to build redundancy around critical services.
Regulators are also likely to take note of what has happened at Cloudflare. For example, European and UK frameworks focusing on operational resilience, such as NIS2 and DORA, place increasing emphasis on understanding and mitigating third party risk. Repeated outages at a major provider may strengthen the argument for closer oversight of critical internet infrastructure and more transparent reporting requirements.
What Happens Next?
Cloudflare has said it will publish a full post incident analysis and will continue making changes to improve reliability across its platform. The company has already committed to reviewing how new security mitigations are validated before deployment, in addition to strengthening internal safeguards that determine how changes propagate across the network.
For customers and other stakeholders, the incident is another reminder that internet resilience depends not only on defending against attackers but also on managing the risks introduced by routine operational changes. The growing complexity of web infrastructure has made this increasingly challenging, and the recent outages have placed long term operational resilience firmly back on the agenda.
What Does This Mean For Your Business?
The pace of software change, the pressure to react quickly to new vulnerabilities and the scale at which providers now operate mean that even well intentioned updates can create unexpected instability. This latest incident from Cloudflare shows how a single adjustment deep inside a security layer can move rapidly through global systems and affect businesses with no direct connection to the underlying flaw. It also reinforces why resilience planning needs to be treated as a strategic priority rather than an operational afterthought.
UK businesses, in particular, face a growing need to understand how their digital supply chains actually function. Many organisations depend on Cloudflare without realising how many of their core services sit behind it. The outage demonstrated that customer experience, revenue and even internal operations can be affected within minutes if one vendor encounters a problem. These short disruptions may not make headlines for long, yet they expose gaps in continuity planning that boards and technology teams are being pushed to close, especially as regulators sharpen their expectations around third party risk.
Although Cloudflare’s competitors may now be keen to highlight the benefits of multi provider architectures and the reduced exposure this can offer, the practical reality is that Cloudflare’s scale, speed and security tooling remain difficult to replicate. Most organisations are unlikely to abandon the platform, but many will be looking for ways to introduce redundancy around it, whether by spreading workloads, adding backup routing options or designing services that fail more gracefully when a dependency falters. In other words, the market is now moving towards diversification rather than replacement.
Other stakeholders have lessons to learn from all this as well. For example, regulators will continue scrutinising outages that affect large sections of the internet, particularly where they touch financial services, transport or healthcare. Also, investors will look at whether Cloudflare can demonstrate consistent improvements after two incidents so close together. Developers and security teams across the industry may now reflect on the risks involved in rolling out urgent protections at speed, especially when the underlying software landscape is evolving as quickly as it is today.
Cloudflare remains a central pillar of global internet infrastructure, and that reality brings both advantages and pressures. Although inconvenient and costly for many businesses and their users, the recent outages do not change the importance of Cloudflare, but they do highlight how essential it has become to strengthen resilience around the entire ecosystem. This means that organisations that choose to invest in understanding their dependencies and designing for failure may be better positioned to handle future shocks, whatever their source, and will place themselves on far stronger footing as digital systems continue to grow in complexity.
Security Stop-Press: Scam Ads Reported On YouTube As Fraudsters Exploit Ad Slots
Users in several countries say they are seeing a rise in misleading adverts on YouTube, including fake government schemes, miracle health claims, inappropriate content and AI-generated promotions that lead to suspicious websites.
Many of the ads redirect to imitation news pages or fake portals designed to collect personal information or small payments. Viewers say the scams often look polished, making them harder to spot at a glance.
Security researchers warn that criminals are using malvertising techniques to slip fraudulent ads into YouTube’s automated auction system. Cheap AI tools make it easy to generate endless scam variations that bypass basic checks, even as billions of harmful ads are removed each year.
Businesses can reduce exposure by training staff to recognise suspicious promotions, avoiding links in untrusted ads and using browser protections that block known malicious domains. Clear reporting routes and strong account security help limit the chances of employees being caught out.
Sustainability-In-Tech : Why Green Hydrogen’s Global Rollout Is Struggling
Green hydrogen was expected to become one of the most important clean fuels for decarbonising heavy industry, yet many projects across Europe, the United States and Australia are now slowing, shrinking or being cancelled altogether.
What Is Green Hydrogen?
Green hydrogen is produced by splitting water into hydrogen and oxygen using renewable electricity from wind, solar or hydropower. The process uses electrolysers, which sit between the electricity supply and the water source.
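In stoichiometric terms, the overall reaction is simple, even if doing it cheaply at scale is not:

```latex
% Overall water-splitting reaction driven by renewable electricity
% (requires amsmath for \xrightarrow):
\[
2\,\mathrm{H_2O} \;\xrightarrow{\text{electrolysis}}\; 2\,\mathrm{H_2} + \mathrm{O_2}
\]
```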
There are three main types of electrolyser:
1. Proton Exchange Membrane (PEM) electrolysers (such as those built by Quest One in Hamburg), which offer fast response times and compact designs.
2. Alkaline electrolysers, which are a more mature technology with lower upfront costs.
3. Solid oxide electrolysers, which operate at high temperatures and can achieve greater efficiencies when integrated with industrial heat, although they are still emerging commercially.
Hydrogen is already widely used in fertiliser production and oil refining, but almost all of that supply is “grey” hydrogen made from natural gas without capturing emissions. The International Energy Agency says low emissions hydrogen, which includes both green and blue hydrogen, accounts for less than one per cent of today’s global hydrogen production. Scaling green hydrogen is seen as essential for heavy industries that cannot easily electrify, such as steelmaking, chemicals, shipping fuels and long duration energy storage.
Why Germany’s Expectations Have Not Yet Materialised
As a European example, Quest One’s Hamburg facility illustrates the disconnect between ambition and reality. The factory was built to support twice as many staff as it currently employs, yet orders for electrolysers remain well below capacity. Earlier this year, the company cut roughly 20 per cent of its German workforce. Its executive vice president for customer operations said the issue is not an inability to produce but a lack of demand.
Also, it seems that the price gap is a major barrier. For example, hydrogen made from renewable electricity remains significantly more expensive than hydrogen produced from fossil fuels. Companies such as Quest One estimate that costs may fall to around four euros per kilogram later this decade, roughly half current German prices of about eight euros per kilogram, but only if production scales meaningfully.
Infrastructure Not Ready To Operate At Scale Until The 2030s
German policymakers continue to view hydrogen as essential for meeting climate targets. Large infrastructure is being planned, including hydrogen pipelines from the Port of Hamburg to industrial clients and new underground storage sites in salt caverns in northern Germany. It’s understood, however, that these assets will not operate at scale until the 2030s. In the meantime, companies must navigate today’s market conditions with little clarity on long term demand.
Hydrogen Better For Industry Than For Domestic Purposes
German researchers and industry advisers also highlight a second challenge, i.e., hydrogen is most valuable in heavy industrial settings with high temperature needs. It is far less efficient for heating homes or replacing petrol in passenger cars, where direct electrification performs better. However, early political debate often focused on these less suitable uses, creating public confusion and diverting attention from industrial applications that genuinely require hydrogen.
Similar Problems Across The Rest Of Europe
It seems that other European companies are encountering the same pressures. For example, ITM Power in the UK has undergone restructuring in response to losses and project delays, even as it reports growth in its order book. Its commercial progress has been held back by earlier fixed price contracts and the slow pace of customer decision making.
Also, Norway’s Statkraft, Europe’s largest renewable generator, announced earlier this year (2025) that it would stop developing new green hydrogen projects due to market uncertainty. The company said it would concentrate on a smaller set of existing projects and seek new investment partners before entering construction phases.
Norwegian electrolyser manufacturer Nel has also faced weakening order pipelines. It has reduced planned investment, postponed a new factory in the United States and acknowledged that customers are taking longer to commit to projects than previously anticipated.
In fact, more than 50 renewable hydrogen projects have been cancelled globally in the last eighteen months, according to industry assessments, with most citing economics and unclear offtake agreements as the primary causes.
United States Developers Are Scaling Back
In the United States, the green hydrogen sector has benefited from generous tax credits through the Inflation Reduction Act, yet uncertainty remains, not least because of the Trump administration’s apparent opposition to green energy policies. Plug Power, for example, one of the country’s most prominent hydrogen companies, has announced job cuts and a financial restructuring programme, pointing to tougher market conditions and slower than expected equipment sales.
Also, ExxonMobil recently paused development of what would have been one of the world’s largest blue hydrogen facilities at Baytown in Texas. Its executives said the company had not secured enough long term customers willing to pay for hydrogen at a commercially viable price.
Although interest in hydrogen production hubs continues across the US, it seems that many industrial buyers remain cautious about committing to expensive new fuels when electricity prices, carbon pricing and regulatory frameworks remain unsettled.
Australia’s Export Plans Hit Obstacles
Australia once positioned itself as a major exporter of green hydrogen and green ammonia to Asia and Europe, but several projects have now been delayed or cancelled. For example, Fortescue (a large mining and green energy company) has stepped back from hydrogen developments in Queensland and Arizona and announced significant write downs. The company has said it will refocus on projects with clearer commercial pathways, including green iron and battery materials.
Also, Trafigura, a global commodities trading and logistics company, has halted its Port Pirie hydrogen project in South Australia after rising costs and difficulty securing guaranteed demand from industrial buyers. Analysts in the region have noted that early expectations for exporting hydrogen at large scale were likely unrealistic without stronger international commitments from importers.
Where Hydrogen Could Succeed
Energy agencies and research groups now broadly agree on the sectors where hydrogen is indispensable. Steelmaking is the most prominent example, with several companies testing direct reduction processes that use hydrogen instead of coking coal. Chemical producers are exploring lower carbon routes to ammonia and methanol. Shipping and aviation are studying hydrogen derived fuels that can integrate with existing global energy infrastructure.
These applications can offer meaningful emissions reductions and play to hydrogen’s strengths. The challenge, however, seems to be that these industries require large volumes of low cost hydrogen delivered reliably and safely. Most are not prepared to sign long term contracts until prices fall and infrastructure is in place.
Price Remains A Central Issue
Multiple European analyses estimate that green hydrogen still costs between three and five times as much as grey hydrogen produced from fossil fuels. For example, the EU’s energy regulator reported that green hydrogen in Europe was around four times the cost of fossil based hydrogen in 2024. Electrolyser prices are falling, helped in part by strong Chinese manufacturing, but electricity costs and financing remain high.
Risk Reduction Needed
Project developers say that large scale deployment will only happen once governments introduce mechanisms that reduce risk for both suppliers and buyers. Proposals include long term contracts for difference, industrial quotas that require certain sectors to buy low carbon hydrogen and funding for hydrogen hubs where production and demand can grow together.
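To make the contract for difference idea concrete, here is a simplified sketch using invented prices. Real schemes involve reference prices, indexation and volume caps that are omitted here:

```python
# Simplified two-way contract for difference (illustrative numbers only).
# The producer effectively always receives the agreed strike price: the
# counterparty tops up when the market price is below the strike, and the
# producer pays the surplus back when the market price is above it.

def cfd_settlement(strike: float, market: float, volume_kg: float) -> float:
    """Payment to the producer in EUR (negative means the producer pays)."""
    return (strike - market) * volume_kg

strike_price = 5.0  # EUR/kg agreed in the contract (hypothetical)
for market_price in (3.0, 5.0, 7.0):
    settlement = cfd_settlement(strike_price, market_price, volume_kg=1_000)
    print(f"market {market_price:.2f} EUR/kg -> settlement {settlement:+,.0f} EUR")
```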
Other Key Growth Factors
Industry leaders argue that the next phase of hydrogen’s development depends less on technology, which has largely matured, and more on policy clarity, market stability and credible industrial demand. Despite the downturn, investment continues to rise and an increasing number of projects are progressing from early design to construction.
What Does This Mean For Your Organisation?
It seems that progress now hinges on whether governments and industry commit to clear, bankable demand rather than just broad ambition. The technology is no longer the barrier, yet producers and buyers remain stuck without the conditions they need to move forward. Developers say they can’t cut prices without large scale deployment, while industrial users say they can’t commit to long term contracts until prices fall. This loop is slowing projects across Europe, the United States and Australia and is shaping whether hydrogen becomes a major industrial fuel or stays confined to small, specialist uses.
The policy environment will help decide how quickly this gap closes. Companies need predictable frameworks, stable pricing signals and clarity over infrastructure timelines before moving beyond pilots. The current pattern of cancellations and delays shows how fragile large hydrogen investments can be without these foundations.
For UK businesses, these global setbacks really matter. For example, UK electrolyser manufacturers rely on worldwide demand to scale production, reduce costs and stay competitive. Heavy industrial users in the UK, including steel, chemicals and shipping, will also track these developments closely because their own decarbonisation plans depend on affordable low carbon fuels rather than costly niche products. Slow international progress risks higher operating costs and delayed investment decisions at home.
Energy firms, investors and policymakers face similar pressures. Building pipelines, storage and import terminals requires long term confidence in the market. Financing large hydrogen hubs demands regulatory stability. Governments must balance fiscal constraints with the need to support industries that can deliver major emissions reductions. The examples emerging from Germany, Norway, the United States and Australia illustrate how easily momentum can falter without that certainty.
The wider picture here appears to be that hydrogen still offers a credible route for cutting emissions in the hardest to electrify sectors. The potential remains significant, but the path to commercial reality is proving slower and more complex than early forecasts suggested. This is the stage at which consistent policy, coordinated infrastructure planning and targeted support for genuine industrial use cases will matter most, particularly for countries like the UK aiming to compete in future low carbon markets.
Tech Tip: Use Outlook’s “Groups” (Contact Groups) to Email Faster
Did you know you can bundle any set of contacts into a single group and send a message to everyone with just one address? It’s a huge time-saver and reduces the risk of forgetting a recipient.
How to create a group – Desktop (Outlook 365)
– Switch to People (the icon at the bottom of the navigation pane).
– Click New Contact Group (or New → Contact Group).
– Give the group a clear name.
– Click Add Members, choose From Outlook Contacts or From Address Book, select the people you want, then click OK.
– Click Save & Close.
How to create a group – Web (Outlook.com / Outlook on the web)
– Open People, then click New contact list (or New → Contact List).
– Name the list, click Add Members, pick contacts from your address book or type new email addresses, then hit Create.
Why it’s so handy: One click in the To field expands the whole group, keeping your message tidy and ensuring everyone gets the same info instantly. Updating a group is as easy as editing the list with no need to rewrite dozens of addresses.
Give it a try next time you need to reach a project team, club, or family list!
Major Insurers Say AI Is Too Risky to Cover
Insurers on both sides of the Atlantic are warning that artificial intelligence may now be too unpredictable to insure, raising concerns about the financial fallout if widely used models fail at scale.
Anxiety
As recently reported in the Financial Times, it seems that anxiety across the insurance sector has grown sharply in recent months as companies race to deploy generative AI tools in customer service, product design, business operations, and cybersecurity. For example, several of the largest US insurers, including Great American, Chubb, and W. R. Berkley, have now reportedly asked US state regulators for permission to exclude AI-related liabilities from standard corporate insurance policies. Their requests centre on a growing fear that large language models and other generative systems pose what the sector calls “systemic risk”, where one failure triggers thousands of claims at the same time.
What Insurers Are Worried About
The recent filings describe AI systems as too opaque for actuaries to model, with one filing, reported by the Financial Times, describing LLM outputs as “too much of a black box”. Actuaries normally rely on long historical datasets to predict how often a specific type of claim might occur. Generative AI has only been in mainstream use for a very short period, and its behaviour is influenced by training data and internal processes that are not easily accessible to external analysts.
The Central Fear
The industry’s central fear is not an isolated error but the possibility that a single malfunction in a widely used model could affect thousands of businesses at the same time. For example, a senior executive at Aon, one of the world’s largest insurance brokers, outlined the challenge earlier this year, noting that insurers can absorb a £300 to £400 million loss affecting one company, but cannot easily survive a situation where thousands of claims emerge simultaneously from a common cause.
The concept of “aggregation” risk is well understood within insurance. For example, cyberattacks, natural disasters, and supply chain failures already create challenges when losses cluster. However, what makes AI different is the speed at which a flawed model update, inaccurate output, or unexpected behaviour could spread across global users within seconds.
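A toy calculation, using invented numbers rather than actuarial data, illustrates why aggregation worries insurers: two books of business can have identical expected losses but wildly different worst cases.

```python
# Toy illustration with invented numbers, not actuarial data: two books of
# business with identical expected losses but very different worst cases.
POLICIES = 10_000
CLAIM = 100_000   # GBP lost per affected policyholder
P_EVENT = 0.001   # annual probability of the triggering event

# Expected annual loss is the same whether failures are independent or not.
expected_loss = POLICIES * P_EVENT * CLAIM            # £1,000,000

# Independent failures: a bad year stays close to the expectation.
# Common cause: one flawed model update triggers every policy at once.
worst_case_common_cause = POLICIES * CLAIM            # £1,000,000,000

print(f"expected annual loss:    £{expected_loss:,.0f}")
print(f"common-cause worst case: £{worst_case_common_cause:,.0f}")
```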
Real Incidents Behind the Rising Concern
Several high-profile cases have highlighted the unpredictability of AI systems when deployed at scale. For example, earlier this year, Google’s AI Overview feature falsely accused an Arizona solar company of regulatory violations and legal trouble. The business filed a lawsuit seeking $110 million in damages, arguing that the false claim caused reputational harm and lost sales. The case was widely reported across technology and legal publications and is now a reference point for insurers trying to price the risks associated with AI-driven public information tools.
Air Canada faced a different challenge in 2023 when a customer service chatbot invented a discount policy and provided it to a traveller. The airline argued that the chatbot was responsible for the mistake, not the company, but a tribunal ruled that companies remain liable for the behaviour of their AI systems. This ruling has since appeared in several legal and insurance industry analyses as a sign of where liability is likely to sit in future disputes.
Another incident involved the global engineering consultancy Arup, which confirmed that fraudsters used a deepfake of a senior employee during a video call to authorise a transfer. The theft totalled around £25 million. This case, first reported by Bloomberg, has been used by cyber risk specialists to illustrate the speed and sophistication of AI-enabled financial crime.
It seems that these examples are not isolated. For example, industry reports from cyber insurers and security analysts show steep increases in AI-assisted phishing attacks, automated hacking tools, and malicious code generation. The UK’s National Cyber Security Centre has also noted that AI is lowering the barrier for less skilled criminals to produce convincing scams.
Why Insurers Are Seeking New Exclusions
Filings submitted to US state regulators show insurers requesting permission to exclude claims arising from “any actual or alleged use” of AI in a product or service. In fact, some requests are reported to go further, seeking to exclude losses connected to decisions made by AI or errors introduced by systems that incorporate generative models.
W. R. Berkley’s filing, for example, asks to exclude claims linked to AI systems embedded within company products, as well as advice or information generated by an AI tool. Chubb and Great American are seeking similar adjustments, citing the difficulty of identifying, modelling, and pricing the underlying risk.
AIG was mentioned by some insurers during the early stages of these discussions, although the company has since clarified that it is not seeking to introduce any AI-related exclusions at this time.
Some specialist insurers have already limited the types of AI risks they are willing to take on. Mosaic Insurance, which focuses on cyber risk, has confirmed that it provides cover for certain software where AI is embedded but does not offer protection for losses linked to large general purpose models such as ChatGPT or Claude.
What Industry Analysts Say About the Risk
The Geneva Association, the global insurance think tank, published a report last year warning that parts of AI risk may become “uninsurable” without improvements in transparency, auditability, and regulatory control. The report highlighted several drivers of concern, including the lack of training data visibility, unpredictable model behaviour, and the rapid adoption of AI across industries with varying levels of oversight.
It seems that Lloyd’s of London has also taken an increasingly cautious approach. For example, recent bulletins instructed underwriters to review AI exposure within cyber policies, noting that widespread model adoption may create new forms of correlated risk. Lloyd’s has been preparing for similar challenges on the cyber side for years, including the possibility that a global cloud platform outage or a major vulnerability could create simultaneous losses for thousands of clients.
In its most recent market commentary, Lloyd’s emphasised that AI introduces both upside and downside risk but noted that “high levels of dependency on a small number of models or providers” could increase the severity of a large scale incident.
Regulators and the Emerging Policy Debate
State insurance regulators in the US are now reviewing the proposed exclusions, which must be approved before they can be applied to policies. However, approval is not guaranteed, and regulators typically weigh the interests of insurers against the needs of businesses that require predictable cover to operate safely.
There is also a growing policy debate in Washington and across Europe about whether AI liability should sit with developers, deployers, or both. For example, the European Union’s AI Act, approved earlier this year, introduces new rules for high risk AI systems and could reduce some uncertainty for insurers in the longer term. The Act requires risk assessments, transparency commitments, and technical documentation for certain types of AI models, which could help underwriters understand how systems have been trained and tested.
The UK has taken a more flexible, sector based approach so far, although its regulators have expressed concerns about the speed at which AI is being adopted. The Financial Conduct Authority has already issued guidance reminding firms that they remain responsible for the outcomes of any automated decision making systems, regardless of whether those systems use AI.
Business Risk
Many organisations now use AI for customer service, marketing, content generation, fraud detection, HR screening, and operational automation. However, if insurers continue to retreat from covering AI related losses, businesses may need to rethink how they assess and manage the risks associated with these tools.
Some analysts believe that a new class of specialist AI insurance products will emerge, similar to how cyber insurance developed over the past decade. Others argue that meaningful coverage may not be possible until the industry gains far more visibility into how models work, how they are trained, and how they behave in unexpected situations.
What Does This Mean For Your Business?
Insurers are clearly confronting a technology that’s developing faster than the tools used to measure its risk. The issue is not hostility towards AI but the absence of reliable ways to model how large, general purpose systems behave. Without that visibility, insurers cannot judge how often errors might occur or how widely they might spread, which is essential for any form of cover.
Systemic exposure remains the central concern here. For example, a single flawed update or misinterpreted instruction could create thousands of identical losses at once, something the insurance market is not designed to absorb. Individual claims can be managed but large clusters of identical failures cannot. This is why insurers are pulling back and why businesses may soon face gaps that did not exist a year ago.
The implications for UK organisations are significant. For example, many businesses already rely on generative AI for customer service, content creation, coding, and screening tasks. If insurers exclude losses linked to AI behaviour, companies may need to reassess how they deploy these systems and where responsibility sits if something goes wrong. A misstatement from a chatbot or an error introduced in a design process could leave a firm exposed without the safety net of traditional liability cover.
Developers and regulators will heavily influence what happens next. Insurers have been clear that better transparency, audit trails, and documentation would help them price risk more accurately. Regulatory frameworks, such as the EU’s AI Act, may also make high risk systems more insurable over time. The UK’s lighter, sector based approach leaves more responsibility with businesses to manage these risks proactively.
The wider picture here is that insurers, developers, regulators, and users each have a stake in how this evolves. Until risk can be measured with greater confidence, cover will remain uncertain and may become more restrictive. The next stage of AI adoption will rely as much on the ability to understand and manage these liabilities as on the technology itself.
Microsoft Launches Fara-7B, Its New On-Device AI Computer Agent
Microsoft has announced Fara-7B, a new “agentic” small language model built to run directly on a PC and carry out tasks on screen, marking a significant move towards practical AI agents that can operate computers rather than simply generate text.
What Is Fara-7B?
Fara-7B is Microsoft’s first computer-use small language model (SLM), designed to act as an on-device operator that sees the screen, understands what is visible and performs actions with the mouse and keyboard. It does not read hidden interface structures and does not rely on multiple models stitched together. Instead, Microsoft says it works in the same visual way a person would, interpreting screenshots and deciding what to click, type or scroll next.
Compact
The model has 7 billion parameters, which is small compared with leading large language models. However, Microsoft says Fara-7B delivers state-of-the-art performance for its size and is competitive with some larger systems used for browser automation. The focus on a compact model is deliberate. For example, smaller models offer lower energy requirements, faster response times and the ability to run locally, which has become increasingly important for both privacy and reliability.
Where Can You Get It?
Microsoft has positioned Fara-7B as an experimental release intended to accelerate development of practical computer-use agents. It is openly available through Microsoft Foundry and Hugging Face, can be explored through the Magentic-UI environment and will run on Copilot+ PCs using a silicon-optimised version.
Why Build A Computer-Use SLM?
Microsoft’s announcement of Fara-7B is not that surprising, given the wider trend in AI development. The industry has now moved beyond text-only chat models to models that can act, reason about their environment and automate digital tasks. This reflects the growing demand from businesses and users for assistants that can complete work rather than merely describe how to do it.
There is also a strategic element. For example, Microsoft has invested heavily in AI across Windows, Azure, Copilot and its device ecosystem. Building a capable agentic model that runs directly on Windows strengthens this position and gives Microsoft a competitive answer to similar tools emerging from OpenAI, Google and other major players.
By releasing the model with open weights and permissive licensing, Microsoft is also encouraging researchers and developers to experiment, build new tools and benchmark new methods. This approach has the potential to shape the direction of computer-use agents across the industry.
How Fara-7B Has Been Developed
One of the biggest challenges in creating computer-use agents is the lack of large, high-quality data showing how people interact with websites and applications. For example, a typical task might involve dozens of small actions, from locating a button to entering text in the correct field. Gathering this data manually would be too slow and expensive at the scale needed.
Microsoft says its team tackled this by creating a synthetic data pipeline built on the company’s earlier Magentic-One framework. The pipeline generates tasks from real public webpages, then uses a multi-agent system to explore each page, plan actions, carry out those actions and record every observation and step. These recordings, known as trajectories, are passed through verifier agents that confirm the tasks were completed successfully. Only verified attempts are used to train the model.
In total, Fara-7B was trained on around 145,000 trajectories containing around one million individual steps. These tasks cover e-commerce, travel, job applications, restaurant bookings, information look-ups and many other common activities. The base model, Qwen2.5-VL-7B, was selected for its strong multimodal grounding abilities and its support for long context windows, which allows Fara-7B to consider multiple screenshots and previous actions at once.
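Based purely on Microsoft's description above, a hedged sketch of that filtering step might look like the following, with the task generator, executor and verifier as hypothetical stand-ins for the real multi-agent pipeline:

```python
# Hedged sketch of the trajectory filtering step described above; only
# verified trajectories are kept for training. All names are stand-ins.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    task: str
    steps: list = field(default_factory=list)  # (observation, action) pairs
    verified: bool = False

def propose_task(url: str) -> str:
    return f"Find the contact email on {url}"  # hypothetical task generator

def execute(task: str) -> Trajectory:
    # Stand-in for the multi-agent system that explores the page and
    # records every observation and action it takes along the way.
    return Trajectory(task=task, steps=[("screenshot_0", "click(search)")])

def verify(traj: Trajectory) -> bool:
    return len(traj.steps) > 0                 # stand-in verifier agent

training_set = []
for url in ["https://example.com", "https://example.org"]:
    traj = execute(propose_task(url))
    if verify(traj):                           # discard unverified attempts
        traj.verified = True
        training_set.append(traj)

print(f"{len(training_set)} verified trajectories kept for training")
```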
How Fara-7B Works In Practice
During use, Fara-7B receives screenshots of the browser window, the task description and a history of actions. It then predicts its next move, such as clicking on a button, typing text or visiting a new URL. The model outputs a short internal reasoning message and the exact action it intends to take.
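Based on that description, a hedged sketch of the perceive-reason-act loop might look like this. The `browser` and `model` objects, their method names and the action schema are hypothetical stand-ins for illustration, not Fara-7B's actual interface:

```python
# Hedged sketch of the agent loop described above (names are stand-ins).

def run_agent(task: str, browser, model, max_steps: int = 30) -> list:
    history = []  # prior (reasoning, action) pairs fed back to the model
    for _ in range(max_steps):
        screenshot = browser.capture()  # the model sees pixels only
        reasoning, action = model.predict(task, screenshot, history)
        history.append((reasoning, action))  # keeps an auditable trace
        if action["type"] == "done":
            break
        if action["type"] == "click":
            browser.click(action["x"], action["y"])
        elif action["type"] == "type":
            browser.type_text(action["text"])
        elif action["type"] == "goto":
            browser.goto(action["url"])
    return history
```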
Mirrors Human Behaviour By Just Looking At The Screen
This is all designed to mirror human behaviour. For example, the model sees only what is on the screen and must work out what to do based on that view. This avoids the need for extra data sources and ensures the model’s decisions can be inspected and audited.
Strong Results
Evaluations published by Microsoft appear to show strong results. For example, on well-known web automation benchmarks such as WebVoyager and Online-Mind2Web, Fara-7B outperforms other models in its size range and in some cases matches or exceeds the performance of larger systems. Independent testing by Browserbase also recorded a 62 per cent success rate on WebVoyager under human verification.
What Fara-7B Can Be Used For
The current release is aimed at developers, researchers and technical users who want to explore automated web tasks. Typical examples include:
– Filling out online forms.
– Searching for information.
– Making bookings.
– Managing online accounts.
– Navigating support pages.
– Comparing product prices.
– Extracting or summarising content from websites.
These tasks reflect everyday processes that take time in workplaces. Automating them could, therefore, reduce repetitive admin, speed up routine workflows and improve consistency when handling high-volume digital tasks.
Also, the fact that the model is open-weight means organisations can fine-tune it or build custom versions for internal use. For example, a business could adapt it to handle specialist web portals, internal booking systems or industry-specific interfaces, as the minimal sketch below suggests.
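The sketch assumes the weights are published with standard Hugging Face transformers support; the repository id shown is an assumption to verify rather than a confirmed path:

```python
# Minimal loading sketch, assuming standard Hugging Face transformers
# support for the checkpoint; the repository id is an assumption.
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "microsoft/Fara-7B"  # assumed repo id, check Hugging Face

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, device_map="auto")

# From here, an organisation could fine-tune on trajectories recorded
# against its own internal portals, for example with LoRA-style adapters.
```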
Who Can Use It And When?
Fara-7B is available now through Microsoft Foundry, Hugging Face and the Magentic-UI research environment. A quantised and silicon-optimised version is available for Copilot+ PCs running Windows 11, allowing early adopters to test the model directly on their devices.
However, it should be noted here that it’s not yet a consumer feature and should be used for controlled experimentation rather than in production environments. Microsoft recommends running it in a sandboxed environment where users can observe its actions and intervene if needed.
The Benefits For Business Users
Many organisations have been cautious about browser automation due to concerns about data privacy, vendor lock-in and cloud dependency. Fara-7B’s on-device design appears to directly address these issues by keeping data local. This is especially relevant for sectors where regulatory requirements restrict the movement of sensitive information.
Running the model locally also reduces latency. For example, an agent that is reading the screen and clicking through a webpage must respond quickly, and any delay can disrupt the experience. An on-device agent avoids these delays and provides more predictable performance.
Benefits For Microsoft
For Microsoft, Fara-7B essentially strengthens its position in agentic AI, supports its Windows and Copilot+ hardware strategy and provides a foundation for future systems that combine device-side reasoning with cloud-based intelligence.
Developers
For developers and researchers, the open-weight release lowers barriers to experimentation, allowing new techniques to be tested and new evaluation methods to be developed. This may accelerate progress in areas such as safe automation, grounding accuracy and long-horizon task completion.
Challenges And Criticisms
Microsoft is clear that Fara-7B remains an experimental model with limitations. It can misinterpret interfaces, struggle with unfamiliar layouts or fail partway through a complex task. Like other agents that control computers, it remains vulnerable to malicious webpages, prompt-based attacks and unpredictable site behaviour.
There are some notable governance and security questions too. For example, businesses will need to consider how to monitor and log agent actions, how credentials are managed and how to prevent incorrect or undesired operations.
That said, Microsoft has introduced several safety systems to address these risks. The model has been trained to stop at “Critical Points”, such as payment stages or permission prompts, and will refuse to proceed without confirmation. The company also notes that the model achieved an 82 per cent refusal rate on red-team tasks designed to solicit harmful behaviour.
Early commentary has also highlighted that benchmark success does not necessarily translate directly into strong real-world performance, since live websites can behave unpredictably. Developers will need to conduct extensive testing before deploying any form of autonomous web agent in operational settings.
What Does This Mean For Your Business?
Fara-7B brings the idea of practical, controllable computer-use agents much closer to everyday reality, and the implications reach far beyond its immediate research release. The model shows that meaningful on-device automation is now possible with compact architectures rather than sprawling cloud systems. That alone will interest UK businesses that want to streamline manual web-based tasks without handing sensitive data to external services. These organisations have long relied on browser-driven processes in areas such as procurement, HR, finance and customer administration, so a tool that can take on repeatable workflows locally could offer genuine operational value if it proves reliable enough.
The wider AI market is likely to view the launch as a clear signal that Microsoft intends to compete directly in the emerging space for agentic automation. Fara-7B gives the company a foothold that it controls end to end, from the hardware and operating system through to developer tools and safety frameworks. This matters in a landscape where other players have approached computer-use agents with more closed or cloud-first designs. The open-weight release also sets a tone for how Microsoft wants the community to interact with the model, and it encourages a level of scrutiny that could shape future iterations.
In Fara-7B, developers and researchers gain a flexible platform that they can adapt, test and benchmark in their own environments. The training methodology itself, built on large scale synthetic tasks, raises important questions about how best to model digital behaviour and how to ensure that agents can generalise beyond curated datasets. These questions will continue to surface as more organisations explore automation that depends on visual reasoning rather than structured APIs.
It’s likely that stakeholders across government, regulation and security will now be assessing the risks as closely as the opportunities. For example, a system capable of taking actions on a live machine introduces new oversight challenges, from governance and auditing to resilience against hostile prompts or malicious web content. Microsoft’s emphasis on safety, refusal behaviour and Critical Points is a start, although much will depend on how reliably these mechanisms perform once the model is exposed to diverse real-world environments.
The release ultimately gives the industry a clearer view of what agentic AI might look like when it is embedded directly into personal devices rather than controlled entirely in the cloud. If the technology matures, it could affect expectations about digital assistance in the workplace, reduce friction in routine operations and extend automation to tasks that currently have no clean API-based alternative. The coming months will show whether developers and early adopters can turn this experimental foundation into stable, responsible tools that benefit businesses, consumers and the wider ecosystem.