Sustainability-In-Tech : EU Funding to Replace Microplastics in Cosmetics

Cellugy, a Danish industrial biotech company, has received €8.1 million in EU funding to scale up production of EcoFLEXY, a biodegradable cellulose-based material designed to replace microplastics in everyday cosmetics.

A Hidden Threat

Microplastics (i.e. tiny plastic particles under 5mm) are now found in everything from toothpaste and moisturiser to shower gels and makeup. These particles often go unnoticed by consumers but can persist in the environment for centuries, posing long-term risks to marine life and, potentially, human health.

Cosmetic companies have used fossil-derived polymers such as carbomers for decades because of their ability to provide smooth textures, stabilise emulsions, and extend shelf life. However, these ingredients are increasingly under scrutiny, both from regulators and from environmentally aware consumers. The European Chemicals Agency (ECHA) has estimated that more than 42,000 tonnes of intentionally added microplastics are used in EU products every year, with rinse-off cosmetics among the major contributors. That’s where Cellugy’s new products come in.

Who Is Cellugy?

Founded in Aarhus, Denmark, Cellugy is a synthetic biology startup developing sustainable, high-performance alternatives to petrochemical ingredients. The company is led by CEO and co-founder Dr Isabel Alvarez-Martos, who has become an outspoken advocate for bio-based innovation as a means of catalysing systemic change in consumer goods.

Funding

Earlier this year, Cellugy secured €8.1 million from the EU LIFE Programme to support its BIOCARE4LIFE project, with the main aim of commercialising EcoFLEXY, the company’s flagship ingredient designed specifically for the personal care sector.

The Technology Behind EcoFLEXY

EcoFLEXY is a fermentation-derived, biofabricated cellulose, which is essentially a high-purity biopolymer produced without cutting down trees or using harsh extraction chemicals. Cellugy feeds sucrose to specially engineered bacteria in a controlled environment, allowing them to synthesise cellulose in ultra-pure, crystalline form.

The resulting material is a rheology modifier, i.e. a substance used to control the texture, viscosity, and flow of cosmetics. It performs a similar role to carbomers but offers what Cellugy describes as “enhanced stability, compatibility, and sensoriality”, industry terms referring to product consistency, chemical resilience, and feel on the skin.

Importantly, EcoFLEXY is biodegradable, bio-based, and scalable. Its structure is stable in the presence of salts and other charged compounds, making it suitable even for more complex product formulations like sunscreens and gels.

How Much Impact Could This Really Have?

Cellugy estimates that EcoFLEXY could prevent the release of 259 tonnes of microplastics into the environment each year, scaling to over 1,200 tonnes annually by 2034. That’s equivalent to removing millions of contaminated beauty products from the market.

This projection appears to be based on current usage patterns and is being validated by project partners including The Footprint Firm, a Danish circular economy consultancy, and Sci2sci, a Berlin-based AI company helping optimise Cellugy’s fermentation process.

“Our role is to optimise every layer of production so that EcoFLEXY can compete not just on environmental benefits, but on cost and performance metrics that matter to manufacturers,” said Angelina Lesnikova, CEO of Sci2sci.

Cellugy’s funding will cover four years of industrial scaling and validation, with the company aiming to generate “significant revenue within three to five years,” according to Dr Alvarez-Martos.

Why Microplastics in Cosmetics Are So Problematic

While most consumers are now aware of the dangers of plastic bottles and packaging, fewer realise that the products they apply to their skin may also contain plastic particles. Worryingly, these ingredients do not break down in wastewater treatment plants and often end up in rivers, lakes, and oceans.

Once in the environment, they are consumed by marine organisms such as plankton, worms, and fish, working their way up the food chain to humans. A 2019 study commissioned by WWF suggested the average person may ingest up to 5 grams of plastic per week, equivalent to a credit card’s worth!

Also, it’s not just the environment at risk. For example, some synthetic polymers are known or suspected to interfere with hormones, trigger allergies, or accumulate in tissues. While research into long-term effects is ongoing, consumer concerns are growing.

Implications for the Cosmetics Industry

EcoFLEXY enters a market already under pressure to clean up. For example, in 2023, the EU adopted legislation to restrict intentionally added microplastics in cosmetic and cleaning products. The new rules are expected to gradually phase out many current formulations, forcing brands to reformulate or risk non-compliance.

Yet it seems that not all “natural” alternatives perform well. For example, according to Cellugy, many plant-based thickeners lack the chemical stability needed for modern cosmetics. EcoFLEXY aims to fill this gap, offering brands a way to remain compliant without sacrificing product performance.

“An alternative material that simply aims to be more sustainable is not enough,” said Dr Alvarez-Martos. “The critical challenge is about delivering bio-based solutions that actually outperform petrochemicals.”

Cellugy Not the Only One

It should be noted here that Cellugy isn’t the only company exploring microplastic alternatives for cosmetics. Examples of other startups and multinationals exploring the same thing include:

– Geno (USA), a biotech firm backed by L’Oréal and Unilever, is working on bioengineered alternatives to fossil-derived surfactants and polymers.

– Lignopure (Germany), which has developed LignoBase, a lignin-based ingredient for personal care formulations.

– CarbonWave (USA), which is turning sargassum seaweed into emulsifiers and stabilisers for skin care products.

However, it seems that few have focused as specifically on the rheology modifier market, where carbomers still dominate due to their low cost, proven performance, and widespread availability.

By targeting this particular category, Cellugy appears to be carving out a commercially attractive and environmentally urgent niche.

Challenges Ahead

Despite the promising figures, there are some key challenges to take note of. For example, biotech production processes like fermentation can be difficult and expensive to scale, especially when consistency and purity are paramount. Manufacturers also need to be convinced not only of EcoFLEXY’s ecological merits, but of its price competitiveness and long-term supply reliability.

Some industry insiders caution that switching ingredients often requires lengthy reformulation cycles and new safety testing. And while regulatory pressure helps push adoption, it also creates risks if new rules change or enforcement is delayed.

Sceptics may also question whether bio-based equals low-impact. Although fermentation is generally cleaner than petrochemical processing, it still requires energy, water, and feedstock inputs, raising questions about lifecycle emissions and land use.

That said, at the moment, the momentum appears to be on Cellugy’s side. With regulatory deadlines looming and younger consumers demanding transparency and traceability, the pressure to eliminate microplastics from cosmetics is unlikely to subside.

As the personal care sector enters a new phase of sustainability-led innovation, Cellugy’s success (or failure) could set a precedent for how the industry balances performance with environmental responsibility.

What Does This Mean For Your Organisation?

If EcoFLEXY delivers on its promises, Cellugy could become a key driver in shifting the cosmetics industry away from petrochemical dependency. By offering a material that is not only biodegradable and biobased but also capable of meeting the technical demands of high-performance formulations, the company is addressing a gap that has long held back broader adoption of sustainable alternatives. The emphasis on performance parity matters here, particularly for manufacturers who are unwilling or unable to compromise on product quality to meet environmental goals.

For UK businesses, the potential benefits are clear. Brands looking to stay ahead of incoming regulations around microplastics could find in EcoFLEXY a ready-made solution that reduces risk and supports green innovation claims. At the same time, contract manufacturers, product developers, and retailers in the UK may see opportunities to differentiate themselves in an increasingly sustainability-conscious market. However, cost remains a likely sticking point. Unless fermentation-based materials like EcoFLEXY can be made competitively at scale, some firms may hesitate to switch without regulatory or market pressure.

For regulators and environmental advocates, Cellugy’s approach demonstrates how public funding can help bridge the gap between lab-scale promise and commercial viability. It also shows that innovation-led solutions don’t always come from inside the legacy cosmetics giants. In fact, small biotech firms like Cellugy may be better placed to build sustainability into the core of their business models rather than treating it as a retrofit.

Still, the industry’s next steps will be critical. If larger companies fail to follow through on public sustainability pledges, or if reformulation efforts stall, the microplastics problem in cosmetics may simply shift rather than shrink. Also, although firms like Geno, CarbonWave, and Lignopure are bringing complementary solutions to market, broader uptake will depend on how quickly the sector aligns behind credible standards for biodegradability, toxicity, and lifecycle impact.

What Cellugy has done here is to essentially raise the bar, but the coming years will reveal whether the rest of the industry is ready to meet it.

Tech Tip – Filter WhatsApp Chats with Custom Lists

Need to organise your chats more effectively? WhatsApp now lets you create custom filters like “Clients” or “Team” so your most important conversations are always easy to find.

How to:

– In WhatsApp, go to the ⋯ menu (Android) or Settings (iOS).
– Tap ‘Chats > Filter chats > Custom List’.
– Select the chats you want included and give the list a name.

What it’s for:

Keeps client or project conversations instantly accessible so there’s no scrolling through dozens of unrelated chats. Ideal for busy professionals managing multiple threads.

Pro‑Tip: You can update or rename lists as your work changes so your filters stay relevant and easy to use.

Featured Article : MPs’ Concerns Over ‘Predictive Policing’ in UK

A cross-party group of MPs is calling for the UK government to outlaw predictive policing technologies through an amendment to the forthcoming Crime and Policing Bill, citing concerns over racial profiling, surveillance, and algorithmic bias.

Proposed Law Aims to Outlaw Future Crime Predictions

At the centre of the debate is New Clause 30 (NC30), an amendment tabled by Green MP Siân Berry and backed by at least eight others, including Labour’s Clive Lewis and Zarah Sultana. If passed, the clause would explicitly prohibit UK police from using artificial intelligence (AI), automated decision-making (ADM), or profiling techniques to predict whether an individual or group is likely to commit a future offence.

Berry told the House of Commons that such systems are “inherently flawed” and represent “a fundamental threat to basic rights,” including the presumption of innocence. “Predictive policing, however cleverly sold, always relies on historic police and public data that is itself biased,” she argued. “It reinforces patterns of over-policing and turns communities into suspects, not citizens.”

What Is Predictive Policing?

Predictive policing refers to the use of data analytics, AI and algorithms to identify patterns that suggest where crimes are likely to occur or which individuals may be at greater risk of offending. It takes two broad forms, i.e. place-based systems that forecast crime in particular geographic locations, and person-based systems that claim to assess the risk posed by individuals.

Already Piloted or Deployed

It’s worth noting that these systems have already been piloted or deployed in over 30 UK police forces. For example, according to a 2025 Amnesty International report, 32 forces were using location-focused tools, while 11 had tested or deployed systems to forecast individual behaviour. The aim, according to police, is to deploy resources more efficiently and prevent crime before it happens.

However, critics argue that the data used to train these systems, such as arrest records, stop-and-search data, and local crime statistics, is historically biased. This, they say, leads to feedback loops where marginalised and heavily policed communities are disproportionately targeted by future interventions.

Why MPs Are Taking a Stand Now

The renewed push for a legislative ban follows a string of revelations over the past 18 months about the growing use of algorithmic policing in the UK, often without public consultation or oversight. One of the most contentious examples was uncovered by Statewatch in 2025: the Ministry of Justice’s so-called “Homicide Prediction Project”, a system under development to identify individuals at risk of committing murder using sensitive data, including health and domestic abuse records, even in cases where no criminal conviction exists.

Statewatch researcher Sofia Lyall called the initiative “chilling and dystopian,” warning that “using predictive tools built on data about addiction, mental health and disability amounts to highly intrusive profiling” and risks “coding bias directly into policing practice.”

The amendment to the Crime and Policing Bill comes as the government continues to expand data-driven law enforcement under new legislation. The Data Use and Access Act (passed earlier this year) permits certain forms of automated decision-making that were previously restricted under the Data Protection Act 2018. More than 30 civil society groups, including Big Brother Watch, Open Rights Group, Inquest and Amnesty, have signed a joint letter condemning the changes and calling for a ban on predictive policing to be included in the new bill.

Bias, Surveillance and Lack of Transparency

At the heart of the pushback is the view that predictive systems do not eliminate human bias, but instead replicate and scale it. As Open Rights Group’s Sara Chitseko explained in a May blog post, “historical crime data reflects decades of discriminatory policing, particularly targeting poor neighbourhoods and racialised communities.”

The concern is not just over potential inaccuracies, but the broader impact on civil liberties. Campaigners warn that predictive tools undermine the right to privacy and fuel what they call a “pre-crime surveillance state,” in which individuals can be subjected to policing actions without having committed any crime.

This can include being flagged for increased surveillance, added to risk registers, or subjected to stop-and-search, all based on algorithmic assessments that may be impossible to scrutinise. Data from these tools is often shared across public bodies, meaning individuals can be affected in housing, education, or welfare decisions as a result of hidden profiling.

55 Automated Tools Identified

Researchers at the Public Law Project, which runs the Tracking Automated Government (TAG) register, have documented over 55 automated decision-making tools used across UK government departments, including policing. Many operate without publicly available data protection or equality assessments. Legal Director Ariane Adam said, “People deserve to know if a decision about their lives is being made by an opaque algorithm—and have a way to challenge it if it’s wrong.”

How the Crime and Policing Bill Fits In

The Crime and Policing Bill is part of a broader effort by the UK government to modernise policing powers and criminal justice processes. While not specifically focused on predictive technologies, the bill’s scope includes provisions for police data access, surveillance capabilities and crime prevention strategies.

Critics argue that without clear prohibitions, the bill risks giving predictive systems greater legitimacy. “Predictive policing isn’t just a technical tool—it’s a fundamental shift in the presumption of innocence,” said Berry. “We need the law to say clearly: you cannot be punished for something you haven’t done, just because a computer says you might.”

A second proposed amendment from Berry seeks to provide safeguards where automated decisions are used in policing. This would include a legal right to request human review, improved transparency over the use of algorithms, and meaningful routes for redress.

What the Police and Government Are Saying

Police forces and government departments have largely defended their use of predictive technologies, arguing that they allow for more proactive policing. For example, the Home Office has supported initiatives such as GRIP, a place-based prediction system used by 20 forces since 2021 to identify high-crime areas.

Proponents claim these tools help reduce violence and make best use of limited resources. However, recent assessments suggest the benefits may be overstated. Amnesty found “no conclusive evidence” that GRIP had reduced crime, while also warning it had “reinforced racial profiling” in the communities it targeted.

The government has not yet formally responded to the proposed amendments. However, officials have previously argued that AI and ADM can be used responsibly with the right oversight. The Department for Science, Innovation and Technology’s 2023 White Paper on AI governance promoted voluntary transparency standards but fell short of recommending statutory controls.

Businesses and Civil Society

If the amendment banning predictive policing passes, it could reshape how AI and automation are used across public services, not just policing. For civil society and legal groups, it would mark a significant win for rights-based governance of AI.

For businesses working in AI, data analytics and security tech, the implications are mixed. Suppliers of predictive systems to police forces may lose a key customer base, while developers of ethical or human-in-the-loop systems could find new demand for tools that meet stricter legal standards.

More broadly, companies operating in sectors such as insurance, HR tech, or public procurement may face growing scrutiny over how their algorithms are used to assess individuals, particularly if they supply services to the government. A legislative ban on predictive policing could signal the start of tighter controls on high-risk ADM across all sectors.

Cybersecurity professionals and data governance officers may also need to reassess compliance strategies, especially where their systems intersect with law enforcement or public sector clients.

The challenge, according to legal analysts, is ensuring any ban does not create ambiguity. “There’s a fine line between banning profiling-based prediction and stifling responsible innovation,” said one lawyer familiar with the TAG project. “Clear definitions and thresholds will be vital.”

Key Obstacles to Progress

Even with growing public and parliamentary concern, the road to banning predictive policing is unlikely to be smooth. One challenge is technical, i.e. there’s no consensus on what exactly counts as “predictive policing,” given the variety of tools and methods involved.

There’s also the legal complexity of drawing lines between fully automated systems and those that merely assist human decision-making. As with facial recognition and biometric surveillance, courts and regulators have struggled to keep pace with the technology.

Policymakers face a political challenge too: calls for stronger law and order measures remain popular with some voters, and banning high-tech crime-fighting tools may be portrayed as soft on crime. Opponents of the amendment are likely to argue that police need every available advantage to tackle modern threats, including gang violence, knife crime and terrorism.

However, it seems that the tide may be turning. As Berry put it in her Commons speech: “This is a moment to decide what kind of society we want to be—one where we protect rights and freedoms, or one where we criminalise people before they’ve done anything wrong.”

What Does This Mean For Your Business?

Whether or not the amendment to ban predictive policing is adopted, the pressure now facing the UK government reflects a growing public and parliamentary appetite for more robust oversight of AI and algorithmic decision-making. The evidence presented by civil rights groups, legal experts and academics points to a consistent pattern: where predictive systems are deployed without transparency or accountability, the result is often discrimination and deep mistrust in public institutions.

For police forces, this moment raises urgent questions about how data is collected, analysed and applied. Even if predictive systems are well-intentioned, their reliance on flawed historical datasets and opaque algorithms makes it difficult to separate operational efficiency from systemic bias. Without clear legal limits, the use of such technologies could further entrench inequalities and reduce trust in frontline policing.

The implications extend far beyond law enforcement. For example, businesses involved in AI development, analytics or public sector contracting will need to stay alert to changing expectations around transparency, fairness and accountability. A legal ban on predictive policing could signal broader regulatory moves against high-risk algorithmic profiling, especially where sensitive or personal data is involved. Companies that rely on such tools in recruitment, risk scoring or fraud detection may need to rethink how their systems operate and how they explain them to clients and users.

For civil society and campaigners, the bill presents a rare chance to press for hard legal safeguards rather than soft ethical guidelines. The current momentum suggests that arguments grounded in lived experience, statistical evidence and human rights law are starting to gain traction in parliamentary debates.

What happens next will shape the relationship between data, power and the public for years to come. Whether through this bill or a future AI-specific law, the UK faces a clear choice: allow automated prediction to quietly redefine policing, or legislate to ensure that new technologies serve justice without undermining it.

Tech Insight : Block Or Charge AI Bots Accessing Your Website

A new system from Cloudflare gives millions of websites the power to block AI bots from scraping their content without permission and could soon let them charge for access via a new pay-per-crawl model.

AI Crawlers A Problem for Publishers and Creators

In recent years, the rapid growth of AI tools has sparked a battle over ownership, access, and compensation. At the centre of the controversy are “AI crawlers”, i.e. automated bots developed by companies like OpenAI, Google, and Anthropic to trawl the internet, copying data from websites to train large language models (LLMs) or power AI assistants.

For creators and publishers, the issue is that this content is often scraped without permission or compensation. Unlike traditional web crawlers used by search engines, which drive traffic back to the original source and support advertising revenue, AI bots typically use the content to generate summaries, answers or outputs directly, without crediting or linking to the sites they pulled from. This bypasses publishers entirely, cutting them out of the value chain.

The BBC, for example, recently accused US-based AI firm Perplexity of using its content without consent and demanded compensation. Similar rows have erupted in the US, with lawsuits from the likes of The New York Times, and in the UK, where artists have criticised the government over weak protections.

As Matthew Prince, co-founder and CEO of Cloudflare, put it: “AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate.”

Who Is Cloudflare?

Cloudflare is one of the internet’s biggest behind-the-scenes players. The US-listed tech firm provides security, performance optimisation and content delivery services for around 20 per cent of all websites globally. That scale makes any system it deploys highly influential, and potentially industry-defining.

On 1 July, the company launched a sweeping new system that gives website owners direct control over AI crawlers. Crucially, this is now turned on by default for new Cloudflare users, meaning that unless permission is granted, AI bots will be blocked from accessing site content altogether.

The move significantly changes the rules of engagement between content owners and AI firms, and lays the groundwork for a new type of economic model.

How the New System Works

The technology uses Cloudflare’s bot detection infrastructure to identify which crawlers are trying to access a site and what purpose they’re being used for, such as AI training, inference, or chatbot search responses. It means that AI crawlers must now declare their identity and intent. This in turn gives website owners the power to choose to allow access, deny it entirely, or ask for payment via a new initiative called Pay per Crawl.
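To make that allow/deny/charge model concrete, here is a minimal, hypothetical sketch of the per-request decision such a gateway makes. The policy table, purpose labels, and prices below are our own illustrative assumptions, not Cloudflare’s actual implementation:

    # Hypothetical sketch only: the policy table and declared-purpose labels are
    # illustrative assumptions, not Cloudflare's real data structures.
    POLICY = {
        # (crawler, declared purpose) -> "allow", "deny", or a per-crawl price (USD)
        ("googlebot", "search"): "allow",   # search crawlers return referral traffic
        ("gptbot", "ai-training"): 0.01,    # charge AI-training bots per crawl
    }

    def decide(user_agent: str, purpose: str):
        """Map a declared crawler request to an HTTP outcome:
        200 = allow, 403 = deny, 402 = payment required."""
        rule = POLICY.get((user_agent.lower(), purpose), "deny")  # default-deny
        if rule == "allow":
            return 200, None
        if rule == "deny":
            return 403, None
        return 402, {"price-per-crawl-usd": rule}  # ask the bot to pay

    print(decide("GPTBot", "ai-training"))      # (402, {'price-per-crawl-usd': 0.01})
    print(decide("Scraper9000", "ai-training")) # (403, None): unknown bots blocked

Note the default-deny behaviour, which mirrors Cloudflare’s new opt-in baseline: a crawler that has not declared itself, or has no agreed terms, simply gets no content.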

Pay Per Crawl

Pay per Crawl is an experimental marketplace currently in private beta. It allows publishers to set a price (typically a micropayment) for each individual bot crawl. The AI companies must then agree to pay if they want continued access to the site’s content. The entire process is managed by Cloudflare as the intermediary.
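Cloudflare has described the scheme as reviving the rarely used HTTP 402 (Payment Required) status code. Below is a hedged, crawler-side sketch of how that negotiation might look; the “crawler-price” and “crawler-max-price” header names are assumptions for illustration, and in the real beta billing is settled through Cloudflare accounts rather than raw headers:

    # Crawler-side sketch of a Pay per Crawl negotiation. The header names used
    # here ("crawler-price", "crawler-max-price") are illustrative assumptions.
    import requests

    MAX_PRICE_USD = 0.05  # the most this crawler is willing to pay per page

    def fetch(url: str):
        headers = {"User-Agent": "ExampleAIBot/1.0 (ai-training)"}
        resp = requests.get(url, headers=headers)
        if resp.status_code == 200:
            return resp.content                 # access granted for free
        if resp.status_code == 402:             # publisher wants payment
            price = float(resp.headers.get("crawler-price", "inf"))
            if price <= MAX_PRICE_USD:
                headers["crawler-max-price"] = str(MAX_PRICE_USD)
                retry = requests.get(url, headers=headers)
                if retry.status_code == 200:
                    return retry.content        # paid crawl, billed via intermediary
        return None                             # blocked (403) or price too high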

The system also includes transparency tools such as dashboards showing how often bots visit a site and what they are collecting. This allows publishers to differentiate between helpful crawlers (e.g. those from Google Search) and AI bots that may be extracting content without driving any traffic back.

Big Names Already Backing the Block

Over one million sites are already using Cloudflare’s earlier one-click tool to block AI crawlers. With the new system, even more are expected to adopt it, especially as the default setting now blocks crawlers unless explicitly allowed.

For example, leading media companies including Sky News, The Associated Press, BuzzFeed, TIME, The Atlantic, Condé Nast, Gannett (USA Today), and Dotdash Meredith have signed on to use the technology. Many see it as a step towards restoring control over their intellectual property and creating fairer terms for their contributions to the web.

“This is a critical step toward creating a fair value exchange on the Internet that protects creators, supports quality journalism and holds AI companies accountable,” said Roger Lynch, CEO of Condé Nast.

Also, TIME’s COO, Mark Howard, described the initiative as “a meaningful step toward building a healthier AI ecosystem—one that respects the value of trusted content and supports the creators behind it.”

Crawling Costs and Content Control

The problem, publishers argue, is that AI firms are currently reaping huge rewards from models trained on content that they never paid for. For example, a recent analysis by Cloudflare suggests that OpenAI’s crawler, GPTBot, scraped websites 1,700 times for every referral it gave in return. In comparison, Google’s bot gave one referral for every 14 scrapes – still skewed, but not nearly as extreme.

This imbalance has prompted fears that the original economic model of the open internet, i.e. where traffic from search engines fuels revenue for content creators, is breaking down. For example, as AI assistants become more prevalent and answer users’ questions directly, fewer people click through to the source material. That threatens the sustainability of journalism, research, and creative industries.

Therefore, by introducing a payment mechanism and making bot access conditional, Cloudflare hopes to reshape the model. As the company wrote in its announcement: “If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”

Websites and AI Firms

For website owners, especially smaller publishers, creative professionals, and independent media, Cloudflare’s system could offer a much-needed line of defence. For example, many lack the technical resources to build their own bot detection or monetisation systems. With Cloudflare now providing this as a built-in service, it levels the playing field.

For AI companies, however, it creates a new layer of complexity and potentially, cost. While some like ProRata AI and Quora have expressed support for fair compensation models, others may be forced to rethink how they access training data or structure deals with publishers.

At the same time, AI firms that continue to ignore bot exclusion rules may now find themselves more easily blocked, routed into traps (like Cloudflare’s AI “Labyrinth” of junk content), or publicly named and shamed.

The move also puts pressure on Cloudflare’s competitors, such as Amazon Web Services, Google Cloud, and Akamai, to offer similar tools or risk falling behind in the arms race over content protection and AI ethics.

A Bet on a New Internet Economy

By launching Pay per Crawl (still in beta), Cloudflare is positioning itself as both a gatekeeper and broker of a new AI-era content economy. In doing so, it’s hoping to gain influence over how value flows between creators and AI companies, and opening the door to becoming a central payments infrastructure provider in this emerging market.

CEO Matthew Prince has even floated the idea of creating Cloudflare’s own stablecoin to support seamless micropayments at scale.

Challenges

That said, challenges remain. For example, the system only protects content hosted through Cloudflare. Critics like Ed Newton-Rex, founder of Fairly Trained, argue this is a “sticking plaster” rather than a full solution. Legal frameworks, they say, are still essential to address copyright and enforce compliance across the wider web.

Baroness Beeban Kidron, a prominent campaigner for creative rights, nonetheless praised the move as “decisive action,” saying: “If we want a vibrant public sphere, we need AI companies to contribute to the communities in which they operate.”

More broadly, the battle now turns to whether Cloudflare’s system can actually become the foundation for a fairer digital ecosystem, or whether AI firms and others will try to find ways around it.

What Does This Mean For Your Business?

For publishers, a permission-based model for AI web scraping could be the first meaningful opportunity to assert control over how their work is accessed and monetised in an AI-driven world. It gives media groups, content creators, and smaller businesses a chance to protect their intellectual property without needing bespoke technical solutions, and could eventually create new revenue streams where previously there were none. If widely adopted, it also signals a move away from the unspoken assumption that public web content is free for AI companies to exploit.

What makes this development particularly relevant is Cloudflare’s scale. With its technology touching around one fifth of the internet, its default blocking of AI bots resets the baseline. AI companies can no longer rely on passive access to build their models and must now navigate a fragmented, consent-based landscape. While this raises operational challenges for developers of AI tools, it may also encourage more formal, sustainable commercial arrangements between content owners and AI firms.

For UK businesses, the implications are twofold. On the one hand, firms producing original content, e.g. publishers, consultancies, and creative agencies, stand to gain from greater control and potential compensation. On the other, companies that rely on AI systems to summarise, synthesise or build upon external content may face new hurdles or costs. It highlights the need for businesses to understand not just how AI tools function, but where their data comes from and under what terms.

However, the effectiveness of Cloudflare’s model will depend on broad adoption and robust enforcement. The Pay per Crawl system is still in beta and, for now, limited in reach. There is also the risk that aggressive scraping bots will continue to operate outside legitimate channels or spoof identities to bypass detection. In that sense, legal backing remains a missing piece. As critics point out, a voluntary system only protects those within its walls.

Even so, the shift represents a turning point. Whether or not Cloudflare’s marketplace becomes the standard, it has created a framework that others may follow or adapt. For publishers, platforms and AI companies alike, the message is that the free-for-all era of unregulated AI scraping appears to be over. The next chapter will be defined by consent, compensation and a more negotiated relationship between those who create content and those who use it.

Tech News : Microsoft’s Enterprise Agreement Shake-Up Hits Resellers

Microsoft’s decision to bypass long-standing partners in its Enterprise Agreement (EA) renewals is sending financial shockwaves through the global IT channel, with UK-based Bytes Technology Group among the first major casualties.

Reshaping the Channel

For years, Microsoft relied on a network of accredited Large Service Providers (LSPs) to handle the sale and renewal of its three-year Enterprise Agreements, i.e. the long-term software licensing contracts tailored to large organisations. These deals provided LSPs with steady commission income and a foothold in enterprise IT procurement. But that model is changing.

Microsoft has begun reclaiming control of these high-value contracts, handling renewals directly through its own sales force rather than via partners. The change, first noticed in 2023, is accelerating fast. For example, Microsoft reportedly took back control of around a third of EA renewals last year and is expected to reclaim almost all of them by January 2026.

It seems that the company is not just shifting processes but is cutting off financial incentives too. For example, global EA commission payments to LSPs stood at approximately $2.5 billion in 2023, according to US Cloud, a Microsoft support partner. That figure dropped to $1.67 billion in 2024 and is expected to fall to just $583 million in 2025. By 2026, payouts are projected to stop entirely.

Bytes Bitten

For Bytes Technology Group (BTG), one of the UK’s largest Microsoft resellers and a London Stock Exchange-listed firm, it seems the effects have been immediate and severe. For example, shares in BTG recently plummeted over 25 per cent after the company issued a profit warning, citing delayed buying decisions, a difficult macroeconomic environment, and lower commission income from Microsoft.

BTG had previously forecast double-digit gross profit growth for the 2025–26 financial year. But its latest update painted a far more cautious picture, with gross profit now expected to be flat and operating profit lower than anticipated. The company made £2.1 billion in gross invoiced income in the year ending February 2025, with Microsoft sales accounting for around 50 per cent of its gross profit.

“The impact of changes to Microsoft enterprise incentives is weighted more to the first half due to high levels of renewals in March and April around the public sector year end and June around Microsoft’s year end,” BTG noted in a statement ahead of its AGM.

Why is Microsoft Doing This?

From Microsoft’s perspective, the shift is strategic. For example, reclaiming direct control over renewals allows it to improve pricing discipline, deepen customer relationships, and retain more margin, particularly at a time when the company is investing heavily in generative AI, including its Copilot tools for Microsoft 365, which are priced at $30 per user per month.

According to US Cloud, the move could deliver a 0.39 per cent annual EBITDA increase for Microsoft, which may sound modest but still adds measurable value to a business currently worth around $3 trillion.

Microsoft’s direct sales in EA accounts are rising fast, growing from $833 million in 2024 to an estimated $1.92 billion in 2025, and expected to reach $2.5 billion by 2026. By cutting commission payouts and increasing its direct footprint, the company is effectively reshaping its entire enterprise sales model.

A Reseller Role Rewritten (or Removed)

LSPs like BTG have spent decades building their businesses on the back of EA renewals, not just processing transactions but also guiding clients through complex licensing environments. Their role has often been compared to that of a financial adviser, providing independent insight and advocacy in negotiations.

“The analogy is losing your trusted financial advisor and being told to work directly with Wall Street,” said Mike Jones, president of US Cloud. “Sure, you’re cutting out the middleman, but you’re also losing valuable guidance.”

This advisory role, critics argue, can’t easily be replaced by Microsoft’s in-house teams, particularly for organisations that lack in-house licensing expertise. There’s concern that some enterprise clients may end up over-buying, under-utilising, or mismanaging licences as a result.

Restructuring to Survive

Faced with declining revenues, BTG and others are now rethinking their go-to-market strategies. For example, BTG has announced it is transitioning from a generalist sales approach to specialised, customer-segment-focused teams, a change it says will help it deliver more tailored solutions and build long-term service-based income.

However, that transition is likely to take time and come with risks. BTG’s CEO Sam Mudd acknowledged as much, stating: “In recent weeks, we’ve navigated a more challenging macro environment, compounded by the near-term effect of transforming our corporate sales team. While this has affected trading, our value proposition remains strong.”

Analysts are less confident. Indraneel Arampatta of Megabuyte said he suspects the changes in Microsoft’s partner model “are starting to bite,” adding that investors may be growing wary of BTG’s exposure to Microsoft and its ability to diversify.

Wider Implications for the Market

It should be noted that the situation is not unique to BTG. For example, similar providers across the UK, Europe, and North America are likely to be affected, especially those heavily reliant on Microsoft’s EA commissions. While some are already shifting towards managed services, cybersecurity, or cloud consultancy, not all will move fast enough to offset the financial loss.

This also raises questions about the future of Microsoft’s partner ecosystem. By sidelining its LSPs, the company risks alienating partners who have long championed its products and helped drive adoption at scale. In more complex environments such as hybrid cloud, AI implementation, or public sector transformations, trusted partners often play an indispensable role.

Some observers also warn of regulatory scrutiny. For example, Microsoft has already faced antitrust pressure in Europe over cloud licensing practices, and a further consolidation of sales control could draw additional attention from competition authorities.

Not All Businesses Will Benefit

While some enterprise clients may welcome direct engagement with Microsoft, it’s likely that others may struggle without LSP support. Navigating EA licensing terms, ensuring compliance, and optimising cost-efficiency can be daunting without expert guidance.

Also, while large organisations with in-house procurement and IT legal teams might manage, mid-sized businesses and public sector organisations could find the transition more difficult, especially as licensing complexity continues to increase alongside Microsoft’s evolving AI offerings.

Meanwhile, rivals such as Amazon Web Services and Google Cloud Platform may seek to capitalise on the disruption. LSPs looking to diversify may find receptive partners elsewhere, potentially shifting allegiances and deepening competition in the enterprise IT space.

What Does This Mean For Your Business?

What this means, in practice, is that a long-established revenue model for service providers is being dismantled at pace, while Microsoft tightens its grip on the most profitable parts of the enterprise customer lifecycle. The financial and operational consequences are already being felt, and the transition is unlikely to be smooth for most. Companies like BTG, with deep exposure to Microsoft licensing, now face a period of structural change where business models built around commission income must be replaced with higher-value services that take longer to scale. That puts pressure not only on margins but also on investor confidence, staffing, and client retention.

For UK businesses, particularly those without large internal IT procurement teams, the loss of hands-on licensing support could create some real challenges. The promise of simplified, direct relationships with Microsoft may sound appealing on paper, but the practical reality of negotiating large-scale EA renewals without experienced intermediaries may introduce risk and additional overheads. While some may adapt successfully, others could find themselves over-licensed, under-supported, or locked into costly configurations that don’t fully align with their needs.

For Microsoft, the short-term gains are measurable and aligned with its strategic goals. Greater pricing control, improved account oversight, and reduced channel leakage all strengthen its position, particularly as it looks to monetise AI offerings like Copilot and Azure-based services more aggressively. However, there is a risk that weakening partner engagement will erode long-term channel goodwill, which has historically underpinned Microsoft’s global reach and sustained competitive advantage.

The broader enterprise IT ecosystem also has a stake in this outcome. For example, if LSPs lose their relevance, the value of multi-vendor, consultative support in complex deployments may decline, or shift towards rival platforms. That creates an opening for Amazon, Google, and others to attract not just customers, but former Microsoft partners seeking more favourable terms. For regulators, meanwhile, the growing dominance of Microsoft’s direct sales model and its impact on channel diversity may increasingly warrant scrutiny.

Ultimately, Microsoft’s move is a calculated reshaping of its enterprise engagement model, but the disruption it causes is real and immediate for those in the channel. As LSPs rush to reinvent themselves, the winners will likely be those who can pivot quickly to new value propositions. The losers, by contrast, may be left watching as a decades-old business model slips quietly out of reach.

Tech News : Google’s Veo 3 Now Generates AI Audio (For Its AI Videos)

Google has launched Veo 3 (its most advanced video-generation AI yet) and for the first time, it can also create synced sound effects, ambient noise, and even dialogue to accompany the visuals.

From Silent Clips to Fully-Sounded Scenes

Announced at Google I/O 2025, the company’s annual developer conference, Veo 3 marks a significant leap in AI video generation by breaking the sound barrier. Unlike earlier models that produced silent clips requiring manual audio dubbing, Veo 3 natively generates both video and sound in response to user prompts. That includes environmental ambience, footsteps, character dialogue, and background music, all tightly synced with the generated visuals.

“For the first time, we’re emerging from the silent era of video generation,” said Demis Hassabis, CEO of Google DeepMind. “You can give Veo 3 a prompt describing characters and an environment, and suggest dialogue with a description of how you want it to sound.”

This appears to mark a clear departure from the static video outputs of Veo 2, which could render realistic 1080p clips but had no inbuilt audio functionality. Veo 3’s ability to generate both media types simultaneously is underpinned by multimodal training, allowing it to understand and translate visual scenes into contextually accurate sound.

Who Can Use Veo 3, And Where?

Veo 3 is now available through Google’s Gemini app for users subscribed to the AI Ultra plan, priced at $249.99 per month. As of now (early July), it’s rolling out across all countries where Gemini is active, including the UK and India. Users can access it via desktop or mobile and prompt the system using text, images, or a combination of both.

Up to 8 Seconds of Video With Audio

At launch, Veo 3 can generate up to 8 seconds of video with audio. For example, users can describe entire scenes, suggest character speech with tonal guidance (e.g. “a soft, nervous voice”), or request specific environmental sounds like birdsong, waves, or city traffic. Google says it plans to extend clip length and creative controls over time.
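As an illustration (our own example, not one of Google’s), a Veo 3 prompt combining scene, dialogue, and audio direction might read:

    "A rainy Copenhagen street at dusk. A cyclist rings her bell twice and says,
    in a soft, nervous voice: 'Sorry, coming through!' Ambient sound: rain on
    umbrellas, distant traffic, tyres on wet cobblestones. Slow camera pan left."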

What’s New and Different?

The most notable change from Veo 2 is Veo 3’s seamless integration of audio with video, something no other major model currently achieves at this level of fidelity and control. While earlier experiments with audio-generating AI exist, such as Meta’s AudioCraft or Google’s own SoundStorm, these tools typically treat sound and visuals as separate processes.

Veo 3, however, is built to generate both in parallel. It can understand raw video pixels and adjust audio timing accordingly, such as syncing a character’s footsteps with the terrain they walk on, or matching mouth movements to speech.

It also boasts significant improvements in visual realism. Google says Veo 3 now supports 4K resolution, more accurate physics, and refined prompt adherence. This means it’s better at understanding and sticking to the details users provide, even over multi-shot sequences involving actions and camera movements like pans or zooms.

Creators and Businesses

For video creators, advertisers, educators, and independent filmmakers, Veo 3 could remove one of the biggest barriers in AI content generation, namely having to source or manually create matching audio. With sound now generated natively, users can produce short-form content much faster, with minimal editing or post-production work.

For example, a marketing team could prompt Veo 3 to produce a product demo with a voiceover, or a teacher might generate an animated science explanation complete with relevant sound effects and narration.

Move to “Generative Cinema”

Google sees this as part of a broader shift toward “generative cinema,” where AI can help prototype, storyboard or even produce short-form entertainment. However, its reach could extend to gaming, AR/VR environments, and accessibility use cases such as auto-generating descriptive audio.

Google’s Position in a Crowded Field

Veo 3 arrives in an increasingly competitive video-generation space. For example, over the past year, tools like Runway Gen-3 Alpha, Pika Labs, Luma Dream Machine, and Alibaba’s EMO model have raised the bar for visual quality and scene consistency. However, very few models currently offer audio, and none do so at Veo 3’s level of native integration.

OpenAI’s Sora, which impressed with its photorealistic clips earlier this year, still outputs silent videos. While Runway allows users to add music and basic sound effects, this remains a separate, manually applied process. That gives Veo 3 a unique value proposition, at least for now.

Still, Google’s dominance is not guaranteed. As of now (July 2025), Veo 3’s capabilities are only available to high-paying subscribers through Gemini and haven’t yet been integrated into tools like YouTube Shorts, Google Ads, or enterprise APIs, though the company has confirmed that Veo 2 features are heading to the Vertex AI API in the coming weeks.
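For developers preparing for that API access, the flow below is a minimal sketch based on the google-genai Python SDK and the published Veo 2 model identifier; the model ID and config fields are assumptions that may change as Veo 3 reaches Vertex AI, so treat this as an outline rather than confirmed Veo 3 usage:

    # Sketch of video generation via the google-genai SDK (Veo 2-era API).
    # Model ID and config fields are assumptions; check current Google docs.
    import time
    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GOOGLE_API_KEY from the environment

    operation = client.models.generate_videos(
        model="veo-2.0-generate-001",   # Veo 2 model ID; Veo 3's may differ
        prompt="A lighthouse at dawn, waves crashing, gulls calling overhead",
        config=types.GenerateVideosConfig(
            number_of_videos=1,
            duration_seconds=8,         # matches Veo's 8-second launch limit
            aspect_ratio="16:9",
        ),
    )

    while not operation.done:           # video generation is a long-running job
        time.sleep(10)
        operation = client.operations.get(operation)

    video = operation.response.generated_videos[0]
    client.files.download(file=video.video)
    video.video.save("lighthouse.mp4")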

How Veo 3 Works

Though Google has not published technical papers on Veo 3, it builds on DeepMind’s earlier work in video-to-audio AI. In 2024, DeepMind revealed it was training models using paired video clips, ambient audio, and transcripts to learn audio-visual correlations. That foundational research likely informed Veo 3’s ability to match visual motion with appropriate audio output.

The model was almost certainly trained on large-scale datasets including YouTube material, though Google has not confirmed this publicly. DeepMind has said only that its models “may” use some YouTube content, raising questions about copyright and consent.

To address misuse risks, Veo 3 uses SynthID, Google’s proprietary watermarking system, which embeds invisible markers into every generated frame. It also includes visible watermarks for user-generated content and is subject to policy enforcement for unsafe or misleading material.

Criticism

Despite the impressive technology, it seems that Veo 3 has drawn scrutiny from some corners of the creative industry. For example, a 2024 study commissioned by the Animation Guild projected that AI tools like Veo could disrupt over 100,000 creative jobs in the US by 2026. Voice actors, sound designers, editors, and animators are among the roles most at risk.

Many artists also remain concerned about the lack of clarity around training data. Without formal consent or opt-out tools for creators on platforms like YouTube, Veo’s capabilities could be seen as drawing from (and replacing) the work of the very communities that power it.

Google says it is committed to responsible AI use and continues to test Veo with red-teaming exercises to identify abuse cases. It also relies on user feedback tools and policy enforcement to detect violations, though details on enforcement mechanisms remain limited.

That said, Veo 3’s creative potential is undeniable, and for businesses, creators, and Google’s own AI ambitions, it appears to mark a significant step forward in the race to multimodal dominance.

What Does This Mean For Your Business?

The arrival of Veo 3 appears to place Google at a clear technological advantage, at least temporarily, by addressing one of the most limiting aspects of AI video creation so far (i.e. the lack of audio). By combining video and sound generation into a single, prompt-driven process, it gives users far more flexibility and reduces the need for specialist editing tools or additional production stages. This will likely appeal to a wide range of professionals, from marketing teams to educators and indie content creators who want fast, realistic results without high production overheads.

For UK businesses in particular, the ability to generate short, full-sound videos in seconds could transform workflows across advertising, training, communications, and social media. SME marketing teams with limited budgets could produce explainers or campaign content in-house, while creative agencies may be able to build new service models around generative assets. However, the high monthly cost of access via Gemini’s AI Ultra plan may still limit uptake to larger firms or early adopters in creative sectors for now.

Competitively, Veo 3 puts pressure on OpenAI, Meta, and other major players who are still struggling to synchronise visuals and sound in a meaningful way. However, it also raises expectations. The moment Google delivers this feature set, users and clients may begin to assume it as standard. And as competitors catch up or release open-access alternatives, Google may need to expand Veo’s availability beyond Gemini and into more accessible developer platforms like Vertex AI or YouTube integrations.

The ethical questions are not going away either. Artists and voice professionals continue to challenge the use of training data that may have been scraped without consent. Even with SynthID watermarking, the risk of misuse or deepfake production remains a concern for regulators and rights-holders. Unless Google can offer greater transparency and clearer opt-out mechanisms, it may face mounting legal and reputational risks as adoption grows.

For now, though, Veo 3 appears to set a new benchmark in what multimodal AI tools can achieve. Whether it remains a premium creative niche or signals a broader shift in how visual content is produced will depend on how Google chooses to scale and integrate its technology in the months ahead.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a jargon-free style.
