Company Check : Microsoft Exchange & Skype Servers Go Subscription-Only

Microsoft has officially launched subscription-only versions of its on-premises Exchange Server and Skype for Business Server, thereby ending the era of year-numbered releases and perpetual licences.

A Long-Anticipated Transition Becomes Reality

After months of preparation and close calls with support deadlines, Microsoft has made its Subscription Edition (SE) versions of Exchange Server and Skype for Business Server generally available. These editions replace the traditional 2016 and 2019 versions, which are set to reach the end of extended support on 14 October 2025.

Although Exchange Online and Microsoft Teams remain Microsoft’s strategic focus, the software giant has acknowledged that many organisations still require on-premises options. The Subscription Editions were first introduced to select enterprise customers earlier this year but are now widely available to all qualifying customers.

Microsoft says the SE releases reflect its “commitment to ongoing support for scenarios where on-premises solutions remain critical”, noting that these deployments are often driven by regulatory requirements, data residency needs, or cloud-sceptical policies in sectors such as government, finance, and defence.

What’s Actually Changing?

At a technical level, the initial releases of Exchange Server SE and Skype for Business Server SE are nearly identical to their predecessors. Exchange SE is based on Exchange Server 2019 CU15, while Skype for Business Server SE shares its codebase with Skype for Business Server 2019 CU8HF1. As such, there are no new features, removed components, or major structural changes at this stage.

However, the licensing and servicing models have changed fundamentally. Both servers are now governed by Microsoft’s Modern Lifecycle Policy, which removes fixed end-of-support dates as long as organisations keep systems updated. This transforms them into evergreen products, with two cumulative updates (CUs) planned per year and additional security patches as needed.

Crucially, Microsoft has dropped perpetual licensing in favour of a subscription-only model. Organisations must now pay regularly to continue using the software legally. Stop paying, and you’re effectively frozen at the last supported version, which is now outside Microsoft’s safety net for patches and support.

Why Now and Why Like This?

The timing of the general availability appears to be closely tied to looming deadlines. Both Exchange Server 2016 and 2019, as well as Skype for Business Server 2015 and 2019, are approaching end-of-support in October 2025. Microsoft had promised a transition plan well before this date, and the SE editions are the fulfilment of that promise, albeit cutting it close.

Another driving force is Microsoft’s long-term strategy to encourage cloud adoption. As Rob Helm, analyst at Directions on Microsoft, put it: “The licence price hikes, the cutoff of old versions, the weak link with new Outlook—they all point to a single message: If you care about Exchange email, get off Exchange Server.”

Yet despite the cloud push, Microsoft has also acknowledged the real-world barriers to migration for many organisations. In a blog post accompanying the release, the company said: “Exchange SE demonstrates our commitment to ongoing support for scenarios where on-premises solutions remain critical.”

This includes hybrid deployments, secure national infrastructures, and regions with inadequate cloud access or stringent legal obligations regarding data locality.

A Smooth but Inevitable Upgrade Path

It seems that Microsoft has gone to some lengths to present the upgrade path as low-risk. For those already running Exchange 2019 CU14 or CU15, moving to SE involves minimal disruption, i.e. no schema changes, no removed features, and no new installation prerequisites. Even licence keys remain unchanged (at least for now).

The same applies to Skype for Business Server, where the SE edition uses an identical build number to CU8HF1, minus a few cosmetic updates and the refreshed licence agreement.

However, organisations sticking with older versions will face a steeper climb. Future SE cumulative updates will introduce breaking changes. Exchange SE CU2, for instance, will block coexistence with legacy 2016 or 2019 servers, effectively forcing full migration. Skype for Business SE updates are expected to do the same.

Changing On-Prem Strategy

For Microsoft, this move is part of a broader shift in its on-prem strategy (the software that runs on a company’s own servers, rather than in the cloud), i.e. fewer fixed-version launches, more ongoing subscriptions, and tighter integration with cloud-based tools. Exchange SE and Skype SE will not see the same innovation curve as Microsoft 365 or Teams, but they offer a lifeline for organisations that cannot or will not go all-in on the cloud.

From a competitive standpoint, this opens up opportunities for rivals such as Zoho, Open-Xchange, and Proton, particularly in markets concerned about data sovereignty or vendor lock-in. Microsoft’s insistence on subscriptions may also play into the hands of open-source email and UC solutions, especially in price-sensitive or highly regulated environments.

For businesses, particularly UK-based organisations balancing compliance, cost, and control, the release of SE editions raises key strategic questions. For example, should they embrace the evergreen model and continue with Microsoft’s stack, or use the transition as an opportunity to diversify infrastructure or explore alternative platforms?

The Cost of Staying On-Prem

Perhaps the most controversial element of the announcement is pricing. Microsoft confirmed that all standalone on-prem server products, including Exchange SE and Skype SE, are subject to a 10 per cent price increase. Some licence types may even rise by up to 20 per cent, depending on the channel and configuration.

These hikes do not apply to cloud equivalents such as Exchange Online, Microsoft Teams, or SharePoint Online. The implication is that staying on-premises is becoming not just technically more demanding, but also financially more burdensome.

For organisations required to maintain on-prem email or voice systems, there’s little choice. Running unsupported software is not only a security risk but a compliance red flag, particularly under regulations such as the UK GDPR, ISO 27001, and sector-specific frameworks like NHS DSPT or FCA guidelines.

Operational and Cultural Implications

Beyond licensing and compliance, there are also broader operational implications. For example, teams responsible for managing Exchange or Skype for Business deployments will need to adapt to faster patch cycles, modernised update tooling, and shorter grace periods for non-compliance. There’s also a risk that core features might stagnate, with most new innovations funnelled to the cloud-only Microsoft 365 environment.

Microsoft has yet to confirm whether Exchange SE or Skype SE will receive any integration with future Copilot features, AI enhancements, or cross-platform sync improvements. As such, businesses relying on SE products may find themselves maintaining legacy tech in an ecosystem that’s moving on without them.

What Does This Mean For Your Business?

The switch to Subscription Editions may be framed as a practical continuity measure, but it also appears to signal a deeper change in how Microsoft intends to manage its remaining on-premises software. For many UK businesses, particularly those in regulated sectors or with hybrid infrastructure needs, SE offers a necessary bridge, but the subscription-only model means that bridge now comes with ongoing costs, tighter servicing rules, and less certainty about long-term feature investment. While Microsoft maintains that on-prem is still supported, the direction of travel is clearly towards the cloud.

This means that organisations that have built operations around Exchange or Skype on-prem will now have to budget not only for higher licence costs but also for the internal work needed to meet Microsoft’s evolving update requirements. That could mean more testing, faster deployment cycles, and additional pressure on IT teams already juggling hybrid or multi-cloud environments. At the same time, those exploring alternatives may face challenges in interoperability, skills, and vendor maturity, making a full departure from Microsoft’s stack a complex decision rather than an easy switch.

For Microsoft, this shift allows continued servicing of legacy platforms without anchoring itself to ageing support timelines or major version overhauls. For competitors, however, it could create space to target niche on-premises or privacy-first customers that may feel increasingly underserved. For the wider industry, including managed service providers and IT resellers, the move may prompt a reassessment of support models, procurement strategies, and cloud migration readiness. Subscription Editions may keep the lights on for on-prem customers, but they also make clear that Microsoft’s long-term bet is firmly on the cloud.

Security Stop Press : Blur Your Property on Google Maps for Better Security

Blurring your property on Google Maps is a simple, permanent step available to any homeowner or tenant that may help reduce the risk of targeted crime.

Street View images can expose details such as building layouts, CCTV cameras, and even the type of vehicles on-site. Security experts warn this information can be useful to burglars, stalkers, or fraudsters planning remote reconnaissance.

To blur your property, go to Street View, click ‘Report a problem’ in the corner of the screen, and follow the prompts to outline and justify your request. Once processed by Google, the blur cannot be undone.

For home-based businesses or firms with visible assets, this small action may help reduce exposure without affecting normal operations. It’s a straightforward way to improve physical security in an increasingly digital world.

Sustainability-In-Tech : EU Funding to Replace Microplastics in Cosmetics

Cellugy, a Danish industrial biotech company, has received €8.1 million in EU funding to scale up production of EcoFLEXY, a biodegradable cellulose-based material designed to replace microplastics in everyday cosmetics.

A Hidden Threat

Microplastics (i.e. tiny plastic particles under 5mm) are now found in everything from toothpaste and moisturiser to shower gels and makeup. These particles often go unnoticed by consumers but can persist in the environment for centuries, posing long-term risks to marine life and potentially human health.

Cosmetic companies have used fossil-derived polymers such as carbomers for decades because of their ability to provide smooth textures, stabilise emulsions, and extend shelf life. However, these ingredients are increasingly under scrutiny, both from regulators and from environmentally aware consumers. The European Chemicals Agency (ECHA) has estimated that more than 42,000 tonnes of intentionally added microplastics are used in EU products every year, with rinse-off cosmetics among the major contributors. That’s where Cellugy’s new products come in.

Who Is Cellugy?

Founded in Aarhus, Denmark, Cellugy is a synthetic biology startup developing sustainable, high-performance alternatives to petrochemical ingredients. The company is led by CEO and co-founder Dr Isabel Alvarez-Martos, who appears to have become quite an outspoken advocate for bio-based innovation as a means of catalysing systemic change in consumer goods.

Funding

Earlier this year, Cellugy secured €8.1 million from the EU LIFE Programme to support its BIOCARE4LIFE project, with the main aim of commercialising EcoFLEXY, the company’s flagship ingredient designed specifically for the personal care sector.

The Technology Behind EcoFLEXY

EcoFLEXY is a fermentation-derived, biofabricated cellulose, which is essentially a high-purity biopolymer produced without cutting down trees or using harsh extraction chemicals. Cellugy feeds sucrose to specially engineered bacteria in a controlled environment, allowing them to synthesise cellulose in ultra-pure, crystalline form.

The resulting material is a rheology modifier, i.e. a substance used to control the texture, viscosity, and flow of cosmetics. It performs a similar role to carbomers but offers what Cellugy describes as “enhanced stability, compatibility, and sensoriality”, industry terms referring to product consistency, chemical resilience, and feel on the skin.

Importantly, EcoFLEXY is biodegradable, bio-based, and scalable. Its structure is stable in the presence of salts and other charged compounds, making it suitable even for more complex product formulations like sunscreens and gels.

How Much Impact Could This Really Have?

Cellugy estimates that EcoFLEXY could prevent the release of 259 tonnes of microplastics into the environment each year, scaling to over 1,200 tonnes annually by 2034. That’s equivalent to removing millions of contaminated beauty products from the market.

This projection appears to be based on current usage patterns and is being validated by project partners including The Footprint Firm, a Danish circular economy consultancy, and Sci2sci, a Berlin-based AI company helping optimise Cellugy’s fermentation process.

“Our role is to optimise every layer of production so that EcoFLEXY can compete not just on environmental benefits, but on cost and performance metrics that matter to manufacturers,” said Angelina Lesnikova, CEO of Sci2sci.

Cellugy’s funding will cover four years of industrial scaling and validation, with the company aiming to generate “significant revenue within three to five years,” according to Dr Alvarez-Martos.

Why Microplastics in Cosmetics Are So Problematic

While most consumers are now aware of the dangers of plastic bottles and packaging, fewer realise that the products they apply to their skin may also contain plastic particles. Worryingly, these ingredients do not break down in wastewater treatment plants and often end up in rivers, lakes, and oceans.

Once in the environment, they are consumed by marine organisms such as plankton, worms, and fish, working their way up the food chain to humans. A 2018 study by WWF suggested the average person may ingest up to 5 grams of plastic per week, equivalent to a credit card’s worth!

Also, it’s not just the environment at risk. For example, some synthetic polymers are known or suspected to interfere with hormones, trigger allergies, or accumulate in tissues. While research into long-term effects is ongoing, consumer concerns are growing.

Implications for the Cosmetics Industry

EcoFLEXY enters a market already under pressure to clean up. For example, in 2023, the EU adopted legislation to restrict intentionally added microplastics in cosmetic and cleaning products. The new rules are expected to gradually phase out many current formulations, forcing brands to reformulate or risk non-compliance.

Yet it seems that not all “natural” alternatives perform well. For example, according to Cellugy, many plant-based thickeners lack the chemical stability needed for modern cosmetics. EcoFLEXY aims to fill this gap, offering brands a way to remain compliant without sacrificing product performance.

“An alternative material that simply aims to be more sustainable is not enough,” said Dr Alvarez-Martos. “The critical challenge is about delivering bio-based solutions that actually outperform petrochemicals.”

Cellugy Not the Only One

It should be noted here that Cellugy isn’t the only company exploring microplastic alternatives for cosmetics. Examples of other startups and multinationals exploring the same thing include:

– Geno (USA), a biotech firm backed by L’Oréal and Unilever, is working on bioengineered alternatives to fossil-derived surfactants and polymers.

– Lignopure (Germany), which has developed LignoBase, a lignin-based ingredient for personal care formulations.

– CarbonWave (USA), which is turning sargassum seaweed into emulsifiers and stabilisers for skin care products.

However, it seems that few have focused as specifically on the rheology modifier market, where carbomers still dominate due to their low cost, proven performance, and widespread availability.

By targeting this particular category, Cellugy appears to be carving out a commercially attractive and environmentally urgent niche.

Challenges Ahead

Despite the promising figures, there are some key challenges to take note of. For example, biotech production processes like fermentation can be difficult and expensive to scale, especially when consistency and purity are paramount. Manufacturers also need to be convinced not only of EcoFLEXY’s ecological merits, but of its price competitiveness and long-term supply reliability.

Some industry insiders caution that switching ingredients often requires lengthy reformulation cycles and new safety testing. And while regulatory pressure helps push adoption, it also creates risks if new rules change or enforcement is delayed.

Sceptics may also question whether bio-based equals low-impact. Although fermentation is generally cleaner than petrochemical processing, it still requires energy, water, and feedstock inputs, raising questions about lifecycle emissions and land use.

That said, at the moment, the momentum appears to be on Cellugy’s side. With regulatory deadlines looming and younger consumers demanding transparency and traceability, the pressure to eliminate microplastics from cosmetics is unlikely to subside.

As the personal care sector enters a new phase of sustainability-led innovation, Cellugy’s success (or failure) could set a precedent for how the industry balances performance with environmental responsibility.

What Does This Mean For Your Organisation?

If EcoFLEXY delivers on its promises, Cellugy could become a key driver in shifting the cosmetics industry away from petrochemical dependency. By offering a material that is not only biodegradable and biobased but also capable of meeting the technical demands of high-performance formulations, the company is addressing a gap that has long held back broader adoption of sustainable alternatives. The emphasis on performance parity matters here, particularly for manufacturers who are unwilling or unable to compromise on product quality to meet environmental goals.

For UK businesses, the potential benefits are clear. Brands looking to stay ahead of incoming regulations around microplastics could find in EcoFLEXY a ready-made solution that reduces risk and supports green innovation claims. At the same time, contract manufacturers, product developers, and retailers in the UK may see opportunities to differentiate themselves in an increasingly sustainability-conscious market. However, cost remains a likely sticking point. Unless fermentation-based materials like EcoFLEXY can be made competitively at scale, some firms may hesitate to switch without regulatory or market pressure.

For regulators and environmental advocates, Cellugy’s approach demonstrates how public funding can help bridge the gap between lab-scale promise and commercial viability. It also shows that innovation-led solutions don’t always come from inside the legacy cosmetics giants. In fact, small biotech firms like Cellugy may be better placed to build sustainability into the core of their business models rather than treating it as a retrofit.

Still, the industry’s next steps will be critical. If larger companies fail to follow through on public sustainability pledges, or if reformulation efforts stall, the microplastics problem in cosmetics may simply shift rather than shrink. Also, although firms like Geno, CarbonWave, and Lignopure are bringing complementary solutions to market, broader uptake will depend on how quickly the sector aligns behind credible standards for biodegradability, toxicity, and lifecycle impact.

What Cellugy has done here is to essentially raise the bar, but the coming years will reveal whether the rest of the industry is ready to meet it.

Tech Tip – Filter WhatsApp Chats with Custom Lists

Need to organise your chats more effectively? WhatsApp now lets you create custom filters like “Clients” or “Team” so your most important conversations are always easy to find.

How to:

– In WhatsApp, go to the ⋯ menu (Android) or Settings (iOS).
– Tap ‘Chats > Filter chats > Custom List’.
– Select the chats you want included and give the list a name.

What it’s for:

Keeps client or project conversations instantly accessible so there’s no scrolling through dozens of unrelated chats. Ideal for busy professionals managing multiple threads.

Pro‑Tip: You can update or rename lists as your work changes so your filters stay relevant and easy to use.

Featured Article : MPs’ Concerns Over ‘Predictive Policing’ in the UK

A cross-party group of MPs is calling for the UK government to outlaw predictive policing technologies through an amendment to the forthcoming Crime and Policing Bill, citing concerns over racial profiling, surveillance, and algorithmic bias.

Proposed Law Aims to Outlaw Future Crime Predictions

At the centre of the debate is New Clause 30 (NC30), an amendment tabled by Green MP Siân Berry and backed by at least eight others, including Labour’s Clive Lewis and Zarah Sultana. If passed, the clause would explicitly prohibit UK police from using artificial intelligence (AI), automated decision-making (ADM), or profiling techniques to predict whether an individual or group is likely to commit a future offence.

Berry told the House of Commons that such systems are “inherently flawed” and represent “a fundamental threat to basic rights,” including the presumption of innocence. “Predictive policing, however cleverly sold, always relies on historic police and public data that is itself biased,” she argued. “It reinforces patterns of over-policing and turns communities into suspects, not citizens.”

What Is Predictive Policing?

Predictive policing refers to the use of data analytics, AI and algorithms to identify patterns that suggest where crimes are likely to occur or which individuals may be at greater risk of offending. It takes two broad forms, i.e. place-based systems that forecast crime in particular geographic locations, and person-based systems that claim to assess the risk posed by individuals.

Already Piloted or Deployed

It’s worth noting that these systems have already been piloted or deployed in over 30 UK police forces. For example, according to a 2025 Amnesty International report, 32 forces were using location-focused tools, while 11 had tested or deployed systems to forecast individual behaviour. The aim, according to police, is to deploy resources more efficiently and prevent crime before it happens.

However, critics argue that the data used to train these systems, such as arrest records, stop-and-search data, and local crime statistics, is historically biased. This, they say, leads to feedback loops where marginalised and heavily policed communities are disproportionately targeted by future interventions.

Why MPs Are Taking a Stand Now

The renewed push for a legislative ban follows a string of revelations over the past 18 months about the growing use of algorithmic policing in the UK, often without public consultation or oversight. One of the most contentious examples, uncovered by Statewatch in 2025, is the Ministry of Justice’s so-called “Homicide Prediction Project”, a system under development to identify individuals at risk of committing murder using sensitive data, including health and domestic abuse records, even in cases where no criminal conviction exists.

Statewatch researcher Sofia Lyall called the initiative “chilling and dystopian,” warning that “using predictive tools built on data about addiction, mental health and disability amounts to highly intrusive profiling” and risks “coding bias directly into policing practice.”

The amendment to the Crime and Policing Bill comes as the government continues to expand data-driven law enforcement under new legislation. The Data Use and Access Act (passed earlier this year) permits certain forms of automated decision-making that were previously restricted under the Data Protection Act 2018. More than 30 civil society groups, including Big Brother Watch, Open Rights Group, Inquest and Amnesty, have signed a joint letter condemning the changes and calling for a ban on predictive policing to be included in the new bill.

Bias, Surveillance and Lack of Transparency

At the heart of the pushback is the view that predictive systems do not eliminate human bias, but instead replicate and scale it. As Open Rights Group’s Sara Chitseko explained in a May blog, “historical crime data reflects decades of discriminatory policing, particularly targeting poor neighbourhoods and racialised communities.”

The concern is not just over potential inaccuracies, but the broader impact on civil liberties. Campaigners warn that predictive tools undermine the right to privacy and fuel what they call a “pre-crime surveillance state,” in which individuals can be subjected to policing actions without having committed any crime.

This can include being flagged for increased surveillance, added to risk registers, or subjected to stop-and-search, all based on algorithmic assessments that may be impossible to scrutinise. Data from these tools is often shared across public bodies, meaning individuals can be affected in housing, education, or welfare decisions as a result of hidden profiling.

55 Automated Tools Identified

Researchers at the Public Law Project, which runs the Tracking Automated Government (TAG) register, have documented over 55 automated decision-making tools used across UK government departments, including policing. Many operate without publicly available data protection or equality assessments. Legal Director Ariane Adam said, “People deserve to know if a decision about their lives is being made by an opaque algorithm—and have a way to challenge it if it’s wrong.”

How the Crime and Policing Bill Fits In

The Crime and Policing Bill is part of a broader effort by the UK government to modernise policing powers and criminal justice processes. While not specifically focused on predictive technologies, the bill’s scope includes provisions for police data access, surveillance capabilities and crime prevention strategies.

Critics argue that without clear prohibitions, the bill risks giving predictive systems greater legitimacy. “Predictive policing isn’t just a technical tool—it’s a fundamental shift in the presumption of innocence,” said Berry. “We need the law to say clearly: you cannot be punished for something you haven’t done, just because a computer says you might.”

A second proposed amendment from Berry seeks to provide safeguards where automated decisions are used in policing. This would include a legal right to request human review, improved transparency over the use of algorithms, and meaningful routes for redress.

What the Police and Government Are Saying

Police forces and government departments have largely defended their use of predictive technologies, arguing that they allow for more proactive policing. For example, the Home Office has supported initiatives such as GRIP, a place-based prediction system used by 20 forces since 2021 to identify high-crime areas.

Proponents claim these tools help reduce violence and make best use of limited resources. However, recent assessments suggest the benefits may be overstated. Amnesty found “no conclusive evidence” that GRIP had reduced crime, while also warning it had “reinforced racial profiling” in the communities it targeted.

The government has not yet formally responded to the proposed amendments. However, officials have previously argued that AI and ADM can be used responsibly with the right oversight. The Department for Science, Innovation and Technology’s 2023 White Paper on AI governance promoted voluntary transparency standards but fell short of recommending statutory controls.

Businesses and Civil Society

If the amendment banning predictive policing passes, it could reshape how AI and automation are used across public services, not just policing. For civil society and legal groups, it would mark a significant win for rights-based governance of AI.

For businesses working in AI, data analytics and security tech, the implications are mixed. Suppliers of predictive systems to police forces may lose a key customer base, while developers of ethical or human-in-the-loop systems could find new demand for tools that meet stricter legal standards.

More broadly, companies operating in sectors such as insurance, HR tech, or public procurement may face growing scrutiny over how their algorithms are used to assess individuals, particularly if they supply services to the government. A legislative ban on predictive policing could signal the start of tighter controls on high-risk ADM across all sectors.

Cybersecurity professionals and data governance officers may also need to reassess compliance strategies, especially where their systems intersect with law enforcement or public sector clients.

The challenge, according to legal analysts, is ensuring any ban does not create ambiguity. “There’s a fine line between banning profiling-based prediction and stifling responsible innovation,” said one lawyer familiar with the TAG project. “Clear definitions and thresholds will be vital.”

Key Obstacles to Progress

Even with growing public and parliamentary concern, the road to banning predictive policing is unlikely to be smooth. One challenge is technical, i.e. there’s no consensus on what exactly counts as “predictive policing,” given the variety of tools and methods involved.

There’s also the legal complexity of drawing lines between fully automated systems and those that merely assist human decision-making. As with facial recognition and biometric surveillance, courts and regulators have struggled to keep pace with the technology.

Policymakers face a political challenge too: calls for stronger law and order measures remain popular with some voters, and banning high-tech crime-fighting tools may be portrayed as soft on crime. Opponents of the amendment are likely to argue that police need every available advantage to tackle modern threats, including gang violence, knife crime and terrorism.

However, it seems that the tide may be turning. As Berry put it in her Commons speech: “This is a moment to decide what kind of society we want to be—one where we protect rights and freedoms, or one where we criminalise people before they’ve done anything wrong.”

What Does This Mean For Your Business?

Whether or not the amendment to ban predictive policing is adopted, the pressure now facing the UK government reflects a growing public and parliamentary appetite for more robust oversight of AI and algorithmic decision-making. The evidence presented by civil rights groups, legal experts and academics points to a consistent pattern: where predictive systems are deployed without transparency or accountability, the result is often discrimination and deep mistrust in public institutions.

For police forces, this moment raises urgent questions about how data is collected, analysed and applied. Even if predictive systems are well-intentioned, their reliance on flawed historical datasets and opaque algorithms makes it difficult to separate operational efficiency from systemic bias. Without clear legal limits, the use of such technologies could further entrench inequalities and reduce trust in frontline policing.

The implications extend far beyond law enforcement. For example, businesses involved in AI development, analytics or public sector contracting will need to stay alert to changing expectations around transparency, fairness and accountability. A legal ban on predictive policing could signal broader regulatory moves against high-risk algorithmic profiling, especially where sensitive or personal data is involved. Companies that rely on such tools in recruitment, risk scoring or fraud detection may need to rethink how their systems operate and how they explain them to clients and users.

For civil society and campaigners, the bill presents a rare chance to press for hard legal safeguards rather than soft ethical guidelines. The current momentum suggests that arguments grounded in lived experience, statistical evidence and human rights law are starting to gain traction in parliamentary debates.

What happens next will shape the relationship between data, power and the public for years to come. Whether through this bill or a future AI-specific law, the UK faces a clear choice: allow automated prediction to quietly redefine policing, or legislate to ensure that new technologies serve justice without undermining it.

Tech Insight : Block Or Charge AI Bots Accessing Your Website

A new system from Cloudflare gives millions of websites the power to block AI bots from scraping their content without permission and could soon let them charge for access via a new pay-per-crawl model.

AI Crawlers A Problem for Publishers and Creators

In recent years, the rapid growth of AI tools has sparked a battle over ownership, access, and compensation. At the centre of the controversy are “AI crawlers”, i.e. automated bots developed by companies like OpenAI, Google, and Anthropic to trawl the internet, copying data from websites to train large language models (LLMs) or power AI assistants.

For creators and publishers, the issue is that this content is often scraped without permission or compensation. Unlike traditional web crawlers used by search engines, which drive traffic back to the original source and support advertising revenue, AI bots typically use the content to generate summaries, answers or outputs directly, without crediting or linking to the sites they pulled from. This bypasses publishers entirely, cutting them out of the value chain.
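To make the mechanics concrete, here is a minimal sketch of the kind of User-Agent check a site (or a service sitting in front of it) might run to spot AI crawlers. The tokens listed (GPTBot, ClaudeBot, CCBot, Google-Extended, PerplexityBot) are publicly documented crawler identifiers, but the list is illustrative rather than exhaustive, and production bot detection, such as Cloudflare's, relies on far more signals than the self-reported User-Agent string.

```python
# Illustrative only: classify a request's User-Agent against publicly
# documented AI-crawler tokens. Real bot detection also uses IP ranges,
# TLS fingerprints and behavioural signals, since User-Agents can be spoofed.
AI_CRAWLER_TOKENS = {
    "GPTBot",           # OpenAI training crawler
    "ClaudeBot",        # Anthropic
    "CCBot",            # Common Crawl
    "Google-Extended",  # Google's AI-training opt-out token
    "PerplexityBot",    # Perplexity
}

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent contains a known AI-crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

print(is_ai_crawler("Mozilla/5.0 (compatible; GPTBot/1.1)"))  # True
```

The same tokens can be used in a site's robots.txt to request (though not enforce) that these crawlers stay away, which is precisely the voluntary gap Cloudflare's enforced blocking aims to close.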

The BBC, for example, recently accused US-based AI firm Perplexity of using its content without consent and demanded compensation. Similar rows have erupted in the US, with lawsuits from the likes of The New York Times, and in the UK, where artists have criticised the government over weak protections.

As Matthew Prince, co-founder and CEO of Cloudflare, put it: “AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate.”

Who Is Cloudflare?

Cloudflare is one of the internet’s biggest behind-the-scenes players. The US-listed tech firm provides security, performance optimisation and content delivery services for around 20 per cent of all websites globally. That scale makes any system it deploys highly influential, and potentially industry-defining.

On 1 July, the company launched a sweeping new system that gives website owners direct control over AI crawlers. Crucially, this is now turned on by default for new Cloudflare users, meaning that unless permission is granted, AI bots will be blocked from accessing site content altogether.

The move significantly changes the rules of engagement between content owners and AI firms, and lays the groundwork for a new type of economic model.

How the New System Works

The technology uses Cloudflare’s bot detection infrastructure to identify which crawlers are trying to access a site and what purpose they’re being used for, such as AI training, inference, or chatbot search responses. It means that AI crawlers must now declare their identity and intent. This in turn gives website owners the power to choose to allow access, deny it entirely, or ask for payment via a new initiative called Pay per Crawl.
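The allow/deny/charge decision described above can be pictured as a simple per-crawler policy lookup. The sketch below is an assumption-laden illustration, not Cloudflare's actual implementation: the policy table and crawler names are hypothetical, and the mapping to HTTP status codes 403 (Forbidden) and 402 (Payment Required) is one natural way such a scheme could surface to a crawler.

```python
from enum import Enum

class Policy(Enum):
    ALLOW = "allow"    # let the crawler in
    DENY = "deny"      # block outright
    CHARGE = "charge"  # require payment per crawl

# Hypothetical policy table a site owner might configure.
POLICIES = {
    "googlebot": Policy.ALLOW,   # search crawler that sends traffic back
    "gptbot": Policy.CHARGE,     # AI crawler asked to pay per crawl
    "ccbot": Policy.DENY,        # AI crawler blocked entirely
}

def decide(crawler: str, paid: bool) -> int:
    """Map a declared crawler identity to an HTTP status code."""
    policy = POLICIES.get(crawler.lower(), Policy.DENY)  # default: block unknown bots
    if policy is Policy.ALLOW:
        return 200
    if policy is Policy.CHARGE:
        return 200 if paid else 402  # 402 Payment Required until the crawl is paid for
    return 403

print(decide("GPTBot", paid=False))  # 402
```

Defaulting unknown crawlers to DENY mirrors the new default-on blocking for fresh Cloudflare users: access is opt-in rather than assumed.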

Pay Per Crawl

Pay per Crawl is an experimental marketplace currently in private beta. It allows publishers to set a price (typically a micropayment) for each individual bot crawl. The AI companies must then agree to pay if they want continued access to the site’s content. The entire process is managed by Cloudflare as the intermediary.

The system also includes transparency tools such as dashboards showing how often bots visit a site and what they are collecting. This allows publishers to differentiate between helpful crawlers (e.g. those from Google Search) and AI bots that may be extracting content without driving any traffic back.

Big Names Already Backing the Block

Over one million sites are already using Cloudflare’s earlier one-click tool to block AI crawlers. With the new system, even more are expected to adopt it, especially as the default setting now blocks crawlers unless explicitly allowed.

For example, leading media companies including Sky News, The Associated Press, BuzzFeed, TIME, The Atlantic, Condé Nast, Gannett (USA Today), and Dotdash Meredith have signed on to use the technology. Many see it as a step towards restoring control over their intellectual property and creating fairer terms for their contributions to the web.

“This is a critical step toward creating a fair value exchange on the Internet that protects creators, supports quality journalism and holds AI companies accountable,” said Roger Lynch, CEO of Condé Nast.

Also, TIME’s COO, Mark Howard, described the initiative as “a meaningful step toward building a healthier AI ecosystem—one that respects the value of trusted content and supports the creators behind it.”

Crawling Costs and Content Control

The problem, publishers argue, is that AI firms are currently reaping huge rewards from models trained on content that they never paid for. For example, a recent analysis by Cloudflare suggests that OpenAI’s crawler, GPTBot, scraped websites 1,700 times for every referral it gave in return. In comparison, Google’s bot gave one referral for every 14 scrapes – still skewed, but not nearly as extreme.
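Taking Cloudflare's quoted figures at face value, a quick back-of-the-envelope comparison shows just how lopsided the two ratios are:

```python
# Scrapes-per-referral ratios from the figures quoted above.
gptbot_ratio = 1700 / 1   # GPTBot: ~1,700 scrapes per referral
googlebot_ratio = 14 / 1  # Googlebot: ~14 scrapes per referral

multiple = gptbot_ratio / googlebot_ratio
print(f"GPTBot takes roughly {multiple:.0f}x more scrapes per referral than Googlebot")
# Prints a multiple of roughly 121x
```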

This imbalance has prompted fears that the original economic model of the open internet, i.e. where traffic from search engines fuels revenue for content creators, is breaking down. For example, as AI assistants become more prevalent and answer users’ questions directly, fewer people click through to the source material. That threatens the sustainability of journalism, research, and creative industries.

Therefore, by introducing a payment mechanism and making bot access conditional, Cloudflare hopes to reshape the model. As the company wrote in its announcement: “If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”

Websites and AI Firms

For website owners, especially smaller publishers, creative professionals, and independent media, Cloudflare’s system could offer a much-needed line of defence. For example, many lack the technical resources to build their own bot detection or monetisation systems. With Cloudflare now providing this as a built-in service, it levels the playing field.

For AI companies, however, it creates a new layer of complexity and, potentially, cost. While some, like ProRata AI and Quora, have expressed support for fair compensation models, others may be forced to rethink how they access training data or structure deals with publishers.

At the same time, AI firms that continue to ignore bot exclusion rules may now find themselves more easily blocked, routed into traps (like Cloudflare’s AI “Labyrinth” of junk content), or publicly named and shamed.

The move also puts pressure on Cloudflare’s competitors, such as Amazon Web Services, Google Cloud, and Akamai, to offer similar tools or risk falling behind in the arms race over content protection and AI ethics.

A Bet on a New Internet Economy

By launching Pay per Crawl (still in beta), Cloudflare is positioning itself as both a gatekeeper and broker of a new AI-era content economy. In doing so, it’s hoping to gain influence over how value flows between creators and AI companies, and opening the door to becoming a central payments infrastructure provider in this emerging market.

CEO Matthew Prince has even floated the idea of creating Cloudflare’s own stablecoin to support seamless micropayments at scale.

Challenges

That said, challenges remain. For example, the system only protects content hosted through Cloudflare. Critics like Ed Newton-Rex, founder of Fairly Trained, argue this is a “sticking plaster” rather than a full solution. Legal frameworks, they say, are still essential to address copyright and enforce compliance across the wider web.

Baroness Beeban Kidron, a prominent campaigner for creative rights, nonetheless praised the move as “decisive action,” saying: “If we want a vibrant public sphere, we need AI companies to contribute to the communities in which they operate.”

More broadly, the battle now turns to whether Cloudflare’s system can actually become the foundation for a fairer digital ecosystem, or whether AI firms and others will try to find ways around it.

What Does This Mean For Your Business?

For publishers, a permission-based model for AI web scraping could be the first meaningful opportunity to assert control over how their work is accessed and monetised in an AI-driven world. It gives media groups, content creators, and smaller businesses a chance to protect their intellectual property without needing bespoke technical solutions, and could eventually create new revenue streams where previously there were none. If widely adopted, it also signals a move away from the unspoken assumption that public web content is free for AI companies to exploit.

What makes this development particularly relevant is Cloudflare’s scale. With its technology touching around one fifth of the internet, its default blocking of AI bots resets the baseline. AI companies can no longer rely on passive access to build their models and must now navigate a fragmented, consent-based landscape. While this raises operational challenges for developers of AI tools, it may also encourage more formal, sustainable commercial arrangements between content owners and AI firms.

For UK businesses, the implications are twofold. On the one hand, firms producing original content, e.g. publishers, consultancies, and creative agencies, stand to gain from greater control and potential compensation. On the other, companies that rely on AI systems to summarise, synthesise or build upon external content may face new hurdles or costs. It highlights the need for businesses to understand not just how AI tools function, but where their data comes from and under what terms.

However, the effectiveness of Cloudflare’s model will depend on broad adoption and robust enforcement. The Pay per Crawl system is still in beta and, for now, limited in reach. There is also the risk that aggressive scraping bots will continue to operate outside legitimate channels or spoof identities to bypass detection. In that sense, legal backing remains a missing piece. As critics point out, a voluntary system only protects those within its walls.

Even so, the shift represents a turning point. Whether or not Cloudflare’s marketplace becomes the standard, it has created a framework that others may follow or adapt. For publishers, platforms and AI companies alike, the message is that the free-for-all era of unregulated AI scraping appears to be over. The next chapter will be defined by consent, compensation and a more negotiated relationship between those who create content and those who use it.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a jargon-free style.