Tech Insight: Spotify AI ‘Band’ Sparks Labelling Debate
In this Tech Insight, we look at how the sudden rise of a (possibly AI-generated) band on Spotify has reignited the debate over whether music streaming platforms should clearly label artificially created songs.
An Indie Rock Hit, But Who Made It?
The Velvet Sundown appeared suddenly in June 2025 and has since attracted more than 850,000 monthly listeners on Spotify. To give some idea of their popularity, their most streamed track, Dust on the Wind, has been played over 380,000 times, with many listeners noting similarities to the 1977 Kansas ballad Dust in the Wind. However, it seems that no one knows for sure who the band actually is, or whether its members even exist.
Four Names and Not Much Else
The band’s Spotify profile lists four band members: Gabe Farrow, Lennie West, Milo Rains, and Orion “Rio” Del Mar. However, it’s been reported that none of these individuals has any online presence outside the band: no interviews, no live shows, no solo projects. In fact, their Instagram feed is populated with images that appear to be AI-generated, showing oddly rendered “band members” in stylised poses. Also, a supposed quote from Billboard magazine in their bio (“the memory of something you never lived, and somehow makes it feel real”) has no known source.
Speculation
It seems that the ambiguity has fuelled speculation across Reddit, X, and YouTube, with some users pointing out audio “artefacts” in the recordings (small glitches or distortions often associated with synthetic audio generation tools). Music producer and YouTuber Rick Beato has been reported as describing the band’s output as having the hallmarks of AI composition, especially after running one of the songs through a digital splitter in Logic Pro.
Suno, Deezer and the “AI Slop” Flood
Although The Velvet Sundown has not confirmed the use of AI tools, the music streaming service Deezer has flagged all of the band’s tracks as “100 per cent AI-generated” using its new detection algorithm. Deezer is currently the only major platform openly tagging AI music content. Its system identifies audio created with generative tools like Suno and Udio, which can compose full songs from basic text prompts.
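Deezer has not published how its detection algorithm works. As a rough illustration only, classifiers of synthetic audio often start from simple spectral statistics, such as how noise-like or tonal a signal's spectrum is. The sketch below computes one such statistic, spectral flatness, in Python; the signals and any threshold implied are invented for demonstration and are not Deezer's method.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Values near 1.0 indicate noise-like spectra; values near 0.0, tonal ones."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # epsilon avoids log(0)
    geometric_mean = np.exp(np.mean(np.log(spectrum)))
    arithmetic_mean = np.mean(spectrum)
    return float(geometric_mean / arithmetic_mean)

# A pure 440 Hz tone (highly tonal) versus white noise (noise-like),
# one second each at a 16 kHz sample rate.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16_000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
noise = rng.standard_normal(16_000)

assert spectral_flatness(sine) < 0.1 < spectral_flatness(noise)
```

Real detectors combine many such features (plus metadata patterns) in trained models; a single statistic like this can only hint at the approach.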
In April 2025, Deezer reported that 18 per cent of all content uploaded to the platform was AI-generated, a sharp rise from just 10 per cent in January. That translates to more than 180,000 bot-made tracks per week! Deezer CEO Alexis Lanternier has said that the goal is not to ban AI music, but to ensure transparency and protect the interests of human artists.
“AI is not inherently good or bad,” he said. “But we believe a responsible and transparent approach is key to building trust with our users and the music industry.”
Spotify, Apple and Amazon Stay Quiet
In contrast, it seems that streaming platform Spotify has no current system in place to flag AI-generated content and has made no official comment on The Velvet Sundown. CEO Daniel Ek has previously been reported as saying that Spotify would not ban AI-made tracks, but stated he was opposed to using the technology to impersonate real artists. Despite that, Spotify’s algorithm has promoted The Velvet Sundown on its personalised “Discover Weekly” playlists, pushing the band’s tracks to millions of users without any disclosure of their potential origin.
Apple Music and Amazon Music also host The Velvet Sundown’s music but have remained silent on whether they intend to introduce any kind of AI labelling or detection system.
A Lack of Labelling Sparks Concern
It seems that this lack of labelling has sparked concern from artists, producers and industry groups. For example, Ed Newton-Rex, founder of the advocacy group Fairly Trained, described the situation as “theft dressed up as competition.” He argued that AI firms are profiting from training data scraped from real musicians’ work, often without consent or compensation.
Identity Confusion and Media Hoaxes
Adding to the confusion, Rolling Stone US initially reported that a spokesperson for the band had confirmed the music was generated using Suno. That spokesperson later turned out to be fake, part of an elaborate “art hoax” designed to deceive journalists. The individual, known as Andrew Frelon, later admitted in a Substack post that he was unaffiliated with the band and fabricated the story.
In response, The Velvet Sundown issued a statement on its Spotify page denying any link to Frelon, describing his claims as “unauthorised.” The band has continued to post cryptic messages via its social media channels. One X post read: “They said we’re not real. Maybe you aren’t either.”
It seems that this ambiguity is part of the band’s strategy. Their upcoming album, Paper Sun Rebellion, is being promoted with AI-style visuals and poetic descriptions that blur the line between artificial and authentic. Whether this is genuine artistic expression or a marketing gimmick has become a key question for commentators.
A Threat to Emerging Artists?
For real-world musicians, the rise of AI-generated bands poses a serious challenge. Kristian Heironimus, of the indie group Velvet Meadow, was recently quoted in NBC News as saying that watching an alleged AI act soar to 500,000 listeners in just two weeks was “disheartening.” He highlighted the difficulty of competing with endless, instantly produced AI content that can mimic any genre and flood discovery algorithms.
Many in the industry appear to share his concern. For example, last year, hundreds of artists, including Sir Elton John and Dua Lipa, signed an open letter demanding legal protections against the unauthorised use of their work in training AI models. While the UK government declined to introduce new legislation at the time, it is currently holding a separate consultation on AI and copyright.
Meanwhile, a coalition of major US record labels (Universal Music Group, Sony Music and Warner Music) has filed lawsuits against AI music developers Suno and Udio, alleging large-scale copyright infringement. Both companies argue that using publicly available music to train models constitutes “fair use,” a defence commonly deployed by AI developers.
Who Decides What’s Real?
The lack of consensus on what constitutes “AI-generated” music adds further complexity. Some artists use AI tools like Suno or Udio to create backing tracks or generate lyrics, while others use them to compose full songs. Platforms like Spotify typically don’t require artists to disclose how their music was made. Nor do they differentiate between artists who use AI as a tool and those who rely on it entirely.
Most Platforms Treat AI Songs Like Human Ones (For Now)
Deezer’s approach is based on identifying audio characteristics and metadata patterns typical of AI tools. However, even that has limits, particularly as generative music becomes more sophisticated. For now, most streaming platforms continue to treat AI songs the same as human-made ones for purposes of recommendation, royalties, and categorisation.
People Are Interested In More Than Just The Sound of an Artist
Industry voices like Lanternier warn that this is unsustainable. “People are not only interested in the sound,” he said. “They are interested in the whole story of an artist… We believe it’s right to support the real artist, so that they continue to create music that people love.”
Listeners Left in the Dark
From a user perspective, it seems it’s increasingly difficult to tell what’s human and what’s machine-made. For example, generative AI can now mimic voices, create convincing instrumentals, and even generate promotional images, all with minimal input. This means that listeners may not even realise they’re streaming AI-made content unless a platform flags it.
This has broader implications for trust. For example, as Professor Gina Neff from the University of Cambridge noted in relation to this subject, “Our collective grip on reality seems shaky. The Velvet Sundown story plays into the fears we have of losing control of AI and shows how important protecting online information is.”
“AI Slop” Drowning Out Real Voices?
It also seems that some fear that AI-generated content (sometimes referred to as “AI slop”) will drown out real voices, especially in genres like indie rock or ambient electronica where lyrical ambiguity and minimalistic production make AI mimicry easier to hide.
However, others, like Suno CEO Mikey Shulman, believe the conversation is overblown. “I think people are forgetting the important question, which is: how did it make you feel?” he said. “There are Grammy winners who use Suno in their production every day.”
That said, the debate continues to sharpen and, with AI tools getting more advanced and streaming platforms dragging their feet on labelling, the question of who (or what) we’re actually listening to may only get more complicated.
What Does This Mean For Your Business?
Whether The Velvet Sundown is a genuine artistic experiment or a high-concept AI stunt, its rapid rise appears to have thrown the spotlight on the growing presence of synthetic music in mainstream discovery channels. The core issue is not simply the existence of AI-generated tracks, but the complete absence of transparency around them. For example, if listeners can’t tell whether a song is human-made or artificially composed, the entire trust framework underpinning streaming services begins to erode.
This matters not just for artists and platforms, but for a much wider set of stakeholders. For independent musicians and producers, there’s a real risk that generative AI will continue to crowd their work out of key algorithms and recommendation feeds. For record labels and rights holders, the legal and licensing landscape remains unclear, especially while AI companies assert fair use in the training of their models. Also, for UK businesses in the creative, tech, marketing or media sectors, the commercial implications are equally serious. For example, AI-generated content could be misused in ad campaigns, misrepresent music licensing in branded environments, or create reputational risks for companies trying to align with genuine cultural voices.
It’s also becoming harder for consumers and businesses to trust the authenticity of what they’re engaging with. When a band can be created, promoted and streamed globally without anyone knowing if it even exists, it raises questions about how influence, monetisation, and even identity are being managed online. For a platform like Spotify to push AI-generated tracks without disclosure, while human creators struggle to break through, could create long-term imbalances in both visibility and revenue distribution.
What’s clear is that this isn’t just a creative debate but is also a technological, ethical and commercial one. Without industry-wide standards or regulatory clarity, streaming platforms may soon find themselves forced to choose between innovation and integrity. Whether through clear labelling, artist verification, or changes to royalty structures, the choices made now will shape the relationship between AI and music for years to come.
Tech News: Copyrighting Danish People Against Deepfakes
The Danish government is planning a major legal shift to let people claim copyright over their own body, facial features, and voice, in what it says is the first European attempt to systematically tackle the threat posed by deepfakes.
A Legal Response to a Rapidly Growing Threat
Deepfakes, which are highly realistic synthetic media generated using artificial intelligence (AI), have become one of the most pressing digital threats of the past five years. By mimicking a person’s appearance, voice, and movements, these AI-generated videos, images or audio clips can convincingly impersonate individuals without their consent. Initially used for novelty and satire, they’re increasingly tied to malicious uses including fraud, harassment, and disinformation.
Massive Rise
According to a 2024 report from cybersecurity firm Sumsub, the number of detected deepfake videos worldwide rose by more than 700 per cent in a single year, with Europe seeing the sharpest spike. Consequently, the European Union’s law enforcement agency, Europol, has warned that deepfakes are “a significant threat to democracy and trust in institutions,” particularly around elections and public figures. However, individuals are also at risk, from revenge porn to financial scams where a cloned voice is used to impersonate a relative or company executive.
While many countries are beginning to introduce narrow legislation to deal with specific uses of deepfakes, Denmark is now attempting something broader.
What Denmark Is Proposing
Under the new proposals announced by Denmark’s Ministry of Culture in late June 2025, citizens would be granted copyright over their physical appearance, voice, and other personal traits. The hope is that this would allow them to demand the removal of AI-generated content that imitates them without permission (regardless of context) and seek compensation where harm has occurred.
Treated As A Creative Work
One important aspect of this new legal approach is that it would not rely on proving defamation or reputational damage, as is often required under existing European law. Instead, it would treat a person’s likeness as a creative work, similar to how a photograph or piece of music is protected. The law would apply to both private individuals and public figures, including artists and performers.
Culture Minister Jakob Engel-Schmidt described the legislation as a “bold step to protect personal identity in the age of AI,” noting that current legal protections lag behind technical capabilities. “Human beings can be run through the digital copy machine and misused for all sorts of purposes,” he said in a statement. “We are not willing to accept that.”
Timing, Process and Political Backing
The proposed changes will be submitted for public consultation before the Danish parliament breaks for summer recess, with formal legislation expected to be introduced in the autumn. Given the political climate, it’s highly likely to pass. For example, around 90 per cent of MPs reportedly support the reform, following widespread concern about the use of AI-generated content in political misinformation and online abuse.
Would Be A European First
The law would make Denmark the first European country to explicitly codify individual ownership of biometric traits for the purpose of combatting generative AI misuse. It is expected to take effect in early 2026 if passed.
What It Means in Practice
If enacted, the law would essentially give Danes the legal right to request takedowns of deepfake content from online platforms if it replicates their image, voice or body in a “realistic, digitally generated imitation.” The rule would apply whether or not the content was created with malicious intent.
Platforms that fail to comply with takedown requests could face “severe fines,” according to Engel-Schmidt. There’s also potential for EU-level action if enforcement proves challenging, particularly during Denmark’s upcoming EU presidency in 2026, when it plans to raise the issue with member states.
Includes Key Exceptions
Crucially, the proposal includes exceptions for parody and satire, which are protected under free expression rules. These carve-outs are intended to ensure that political cartoonists, satirical shows, and legitimate artistic works aren’t caught by the law.
Performances Too
The reform would also extend to artists’ performances. For example, musicians would have legal grounds to object if their voice or performance style is cloned by AI without consent, which has been a growing concern in the music industry as AI-generated songs imitate the voices of famous performers.
Why Businesses and Platforms Should Take Note
For technology companies, particularly those that operate online platforms or generate AI models, Denmark’s proposal could have far-reaching consequences.
In practical terms, businesses hosting user-generated content, such as social media platforms, image generators, or AI voice apps, may soon be legally obligated to implement mechanisms for recognising and responding to takedown requests based on biometric misuse. This could involve new detection systems, moderation processes, and audit trails to demonstrate compliance.
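Denmark's proposal does not prescribe how such mechanisms should be built. As a sketch only, a platform's takedown pipeline might log each biometric-misuse claim in an append-only record where every entry hashes the previous one, so an audit trail can later demonstrate compliance. All class names, field names, and identifiers below are invented for illustration.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TakedownRequest:
    claimant: str     # person asserting rights over their likeness or voice
    content_id: str   # platform identifier of the flagged media
    trait: str        # e.g. "voice", "face", "full likeness"
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditLog:
    """Append-only log; each entry's hash covers the previous entry's hash,
    making after-the-fact tampering with the record detectable."""

    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "0" * 64

    def record(self, event: str, request: TakedownRequest) -> str:
        payload = json.dumps(
            {"event": event, "request": asdict(request), "prev": self._last_hash},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"payload": payload, "hash": digest})
        self._last_hash = digest
        return digest

log = AuditLog()
req = TakedownRequest(claimant="claimant-001", content_id="vid-42", trait="voice")
log.record("received", req)
log.record("content_removed", req)
assert len(log.entries) == 2
```

A real deployment would also need identity verification of the claimant and the detection side (recognising that flagged media imitates a specific person), both of which are much harder problems than the record-keeping shown here.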
It also raises questions around liability. Under current EU law, platforms benefit from limited liability for illegal content they host, provided they act promptly when notified. Denmark’s new copyright-based approach might test the limits of that framework, especially if it leads to conflicts over enforcement or definitions of consent.
For creative industries, including advertising, film, and gaming, the law could restrict the use of AI tools trained on real individuals without licensing agreements. While this may increase costs and licensing complexity, supporters argue it could also encourage more ethical use of synthetic media.
From a business reputation standpoint, being seen to respect biometric rights could become a key trust signal for users and customers. A 2023 survey by the European Commission found that 79 per cent of EU citizens want stronger legal safeguards on the use of AI-generated likenesses.
How Other Countries Are Approaching the Issue
Globally, it seems, few countries have gone as far as Denmark is proposing, but some are moving in the same direction.
For example, in the United States, several states have passed deepfake-specific laws, mostly focused on election interference and non-consensual pornography. California, Texas, and New York, for instance, have made it illegal to create or distribute deepfakes that impersonate political candidates within 30 to 60 days of an election. However, there is no federal law yet, and a new budget proposal being debated in Congress could strip states of their authority to regulate AI for 10 years.
In China, deepfake creators must label synthetic media clearly and obtain consent from the people being replicated. Failure to comply can result in heavy fines. South Korea is also considering similar legislation, particularly to address deepfake abuse in online pornography, which has become a major social issue there.
Within Europe, the EU’s AI Act (adopted in 2024) includes provisions requiring deepfakes to be labelled as such, but it does not go as far as granting individuals copyright over their features. That’s why Denmark’s move is seen as a potential model for broader reforms.
What Challenges Remain?
Despite strong domestic support, Denmark’s proposal is not without critics. For example, some legal scholars have raised questions about how biometric copyright would be enforced across borders, especially on platforms based outside the EU. Others argue that tying personal identity to copyright, a system traditionally designed to protect creative works, may lead to unintended legal consequences.
There are also practical concerns, e.g. identifying a deepfake is not always straightforward, and takedown systems are often slow or ineffective. If enforcement relies heavily on users flagging violations, the burden may fall disproportionately on individuals without the resources or knowledge to pursue their rights.
For now, however, Denmark appears determined to lead the way by betting that stronger individual protections are the only way to restore trust in a digital landscape where seeing is no longer believing.
What Does This Mean For Your Business?
If Denmark succeeds in passing this reform, it could change how personal identity is treated under copyright law, not just nationally, but across Europe. By legally enshrining the right to control one’s own voice, face, and likeness, the country is effectively trying to redraw the boundary between creative freedom and personal protection in the age of synthetic media. For individuals, this could offer an unprecedented tool to fight back against misuse, without needing to prove reputational harm or navigate complex defamation law.
For UK businesses, particularly those in tech, media, and advertising, Denmark’s approach may offer a glimpse of what’s to come. If other EU countries follow suit, companies that operate across borders could face new compliance demands, from biometric consent processes to proactive takedown mechanisms. At the same time, businesses that adopt strong safeguards now, such as consent-driven AI use policies, may gain a competitive advantage by building trust with customers and clients. For those in the creative sector, for example, the move could also help clarify the grey area around training AI models on real human traits, especially in performance-heavy fields like music, voiceover, or influencer marketing.
However, enforcement remains a key challenge. For example, without international alignment, cross-border takedowns could prove difficult, and smaller platforms may struggle to implement the necessary safeguards. There’s also a risk that applying copyright principles to human identity could lead to unintended consequences, particularly if courts are left to interpret the balance between personal rights and creative expression.
Even so, Denmark’s proposed law appears to reflect a broader global reckoning with the risks of generative AI. It signals that governments are no longer willing to let platforms set the terms of engagement when it comes to biometric misuse. With deepfakes set to become more sophisticated and widespread, that signal may be just as important as the legal details that follow.
Tech News: Voice Calling Comes To WhatsApp Business Accounts
WhatsApp will soon let large businesses make and receive voice calls directly through the platform, as Meta expands its commercial communications offering with AI-driven tools and centralised marketing features.
Live Voice Calls Now Coming to the WhatsApp Business API
Until now, only small businesses on WhatsApp could speak to customers using voice messages or voice calls. Larger businesses, typically those using the WhatsApp Business Platform API, were limited to text-based messaging. However, Meta (WhatsApp’s owner) says that’s about to change. Meta has confirmed that over the coming weeks, voice calling will roll out for enterprise users, allowing companies to speak directly to customers and receive inbound calls within WhatsApp itself.
Receive Live Customer Voice Calls, and Call Them Back
Meta unveiled the new capability during its annual Conversations conference in Miami on 1 July, describing it as a response to increasing demand for more natural, flexible customer engagement options. The update means businesses using the API will soon be able to receive live voice calls from customers, as well as call them back, which is an option not previously available even in limited pilot tests.
For example, a telecoms provider could use WhatsApp to answer a customer’s technical query via chat, then seamlessly escalate to a live voice call when the situation requires real-time dialogue. Similarly, banks or travel agents could offer consultations and problem resolution through a channel many consumers already use daily.
A Step Towards AI Voice Agents
While Meta has framed this as a way to support human-to-human conversations, the addition of voice to the business platform also appears to be a way to lay the groundwork for AI-driven voice assistants. For example, companies can already integrate with third-party providers like Vapi, ElevenLabs or Phonic to create AI voice agents capable of handling simple customer service tasks. By enabling voice pipelines in WhatsApp, however, Meta is opening the door to broader automation opportunities, thereby potentially reducing call centre overheads and offering round-the-clock support in natural language.
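At the time of writing, Meta had not yet documented the new voice-calling endpoints, so only the established text-messaging shape of the WhatsApp Business Platform (Cloud API) can be shown. The sketch below builds the documented JSON payload for a text message; the recipient number and message body are invented placeholders.

```python
# Payload shape for the existing WhatsApp Cloud API, sent as
# POST https://graph.facebook.com/<version>/<phone_number_id>/messages
# with an "Authorization: Bearer <token>" header.

def build_text_message(recipient_waid: str, body: str) -> dict:
    """Construct the documented text-message payload for the Cloud API."""
    return {
        "messaging_product": "whatsapp",
        "recipient_type": "individual",
        "to": recipient_waid,
        "type": "text",
        "text": {"preview_url": False, "body": body},
    }

payload = build_text_message("15551234567", "An engineer will call you shortly.")
assert payload["messaging_product"] == "whatsapp"

# The actual request would be sent with an HTTP client, e.g.:
# requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=payload)
```

A voice-escalation flow like the telecoms example above would presumably pair messages of this kind with the forthcoming calling endpoints once Meta publishes them.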
This move aligns with Meta’s wider strategy of embedding AI capabilities across its business tools.
In a blog post published on 1 July, the company said: “There also might be times it’s helpful to provide additional support to customers beyond just a text. Bringing calling and voice updates to the WhatsApp Business Platform will help people communicate in a way that works best for them and paves the way for AI-enabled voice support in the future.”
The Scale of WhatsApp Business
Crucially, WhatsApp Business has quietly become a key revenue generator for Meta. For example, over 200 million monthly users now rely on the platform globally, with Meta monetising the service through click-to-WhatsApp advertising and per-message fees for businesses. Analysts estimate that WhatsApp and Messenger business messaging generated over $10 billion in run-rate revenue for Meta in 2024 alone.
By adding richer tools like voice and AI, Meta appears to be looking to move beyond basic customer service into full-stack sales and support. Voice, therefore, adds a missing layer to the communication stack and helps differentiate WhatsApp from rival platforms such as Apple Business Chat or Google Business Messages, which focus more on text-based interaction.
Video, Voice Messaging and AI Follow-Ups Also Coming
It seems that voice calling isn’t the only new feature. For example, businesses on the WhatsApp API will also be able to send and receive voice messages, while some sectors (e.g. remote healthcare) will gain access to video call functionality. This is expected to enable new use cases, from live consultations to virtual product demos.
Meanwhile, Meta is expanding its AI-powered product recommendation tool. The system, currently being piloted with merchants in Mexico, uses AI to suggest items on a business’s website and then follows up with customers directly in WhatsApp. For example, if a user browses for trainers on a brand’s website, WhatsApp could later prompt them with related offers or updates, using AI to manage the entire conversation.
Although AI features are free for now, Meta has hinted that monetisation may follow once adoption scales. This reflects the model already used with messaging and click-based advertising, where usage thresholds determine costs.
Centralised Marketing Across Meta Platforms
In addition to in-app improvements, Meta is rolling out a centralised campaign management system allowing businesses to run marketing campaigns across WhatsApp, Facebook and Instagram from a single place. This integration with Meta’s Ads Manager platform includes tools for uploading contact lists, targeting customers with personalised messages, and letting Meta’s Advantage+ AI automatically optimise ad spend across channels.
For businesses already using multiple Meta services, this consolidation could mean significant efficiency gains. Creative assets, budget controls, and campaign setup flows are unified across all placements, including WhatsApp’s Status (the app’s equivalent of Instagram Stories), which is now open for ad placements for the first time.
Scaling Its Service
These changes aim to make WhatsApp a more versatile platform for doing business, not just chatting. According to Meta, the updates will enable more seamless interactions and give customers greater flexibility in how they engage with brands. From the business side, voice and AI tools help scale service without scaling headcount, while new campaign features streamline cross-platform marketing.
For example, a retailer could run a sale campaign across Facebook and Instagram, then retarget interested users via WhatsApp with AI-powered follow-ups and even offer voice support to complete the purchase.
Also, it seems that customer expectations are shifting. For example, a Salesforce survey (2023) found that 61 per cent of consumers now expect real-time service from the brands they deal with. Meta’s WhatsApp enhancements reflect this demand for immediacy, particularly in regions where WhatsApp is the dominant form of digital communication.
What About Privacy, Cost and Competition?
Despite the benefits, some concerns remain. Privacy is a recurring issue when AI and voice come together, particularly in business contexts. Meta has said little about how voice data will be stored, processed or encrypted, and whether AI agents would have access to customer audio in real time. Critics argue that without clear guardrails, businesses risk unintentionally mishandling sensitive information.
Costs are another open question. For example, while many features are being introduced without additional charges, Meta has a history of monetising its business tools after initial rollouts. Once adoption grows, businesses could find themselves paying for AI features, voice usage or higher-tier access to campaign tools.
Competition is also heating up. Apple, Google and various regional players are investing heavily in conversational commerce and AI-driven service layers. WhatsApp’s popularity in markets like Brazil, India and Indonesia gives Meta a head start, but richer features alone won’t guarantee long-term dominance.
Industry observers also note that Meta’s voice strategy may not appeal to every business. For example, some companies prefer to deflect live voice conversations to lower-cost channels such as chatbots or email. Also, while voice may improve service quality, it also requires staff availability and scheduling, thereby making it less scalable for some use cases.
Where It Goes From Here
Meta’s latest WhatsApp update does appear to reflect a broader push to turn the world’s most popular messaging app into a full-scale business platform. With over 2 billion users and deep integration across Facebook and Instagram, the infrastructure is already in place. The question now is how businesses will adopt (and ultimately pay for) these new capabilities, and how customers will respond to a growing blend of AI, ads, and automation within their daily chat experiences.
What Does This Mean For Your Business?
What’s clear is that Meta is steadily transforming WhatsApp from a messaging app into a comprehensive, AI-enhanced business tool, and one that spans marketing, sales, and customer service. For UK businesses, particularly those already embedded in the Meta ecosystem, these updates offer new channels for connecting with customers on their terms, using real-time voice, video, and AI-led engagement. The ability to unify WhatsApp with Facebook and Instagram marketing under a single campaign manager could also simplify workflows and improve return on ad spend.
At the same time, the introduction of live voice support changes the dynamics of customer service, potentially raising expectations among consumers while creating new pressure points for businesses. However, not every organisation will have the staffing or operational models to support on-demand voice calls. For those that do, especially in regulated or service-heavy sectors like finance, healthcare, or utilities, the feature could improve trust and responsiveness, if used with care.
There are also unanswered questions about how data is handled behind the scenes, particularly where AI voice agents are concerned. UK firms will need to watch closely for clarity on data storage, GDPR compliance, and whether Meta’s approach will meet domestic privacy standards. Any missteps here could undermine confidence in the system, especially among privacy-conscious users or sectors bound by tighter regulatory requirements.
For competitors, the race is on to match or outmanoeuvre Meta’s rapid AI integration but, with WhatsApp already installed on millions of UK smartphones, Meta has a head start in terms of user reach and familiarity. The challenge now will be ensuring that the platform remains trusted and accessible, even as it becomes more commercially driven.
Company Check: Microsoft Exchange & Skype Servers Go Subscription-Only
Microsoft has officially launched subscription-only versions of its on-premises Exchange Server and Skype for Business Server, thereby ending the era of year-numbered releases and perpetual licences.
A Long-Anticipated Transition Becomes Reality
After months of preparation and close calls with support deadlines, Microsoft has made its Subscription Edition (SE) versions of Exchange Server and Skype for Business Server generally available. These editions replace the traditional 2016 and 2019 versions, which are set to reach the end of extended support on 14 October 2025.
Although Exchange Online and Microsoft Teams remain Microsoft’s strategic focus, the software giant has acknowledged that many organisations still require on-premises options. The Subscription Editions were first introduced to select enterprise customers earlier this year but are now widely available to all qualifying customers.
Microsoft says the SE releases reflect its “commitment to ongoing support for scenarios where on-premises solutions remain critical”, noting that these deployments are often driven by regulatory requirements, data residency needs, or cloud-sceptical policies in sectors such as government, finance, and defence.
What’s Actually Changing?
At a technical level, the initial releases of Exchange Server SE and Skype for Business Server SE are nearly identical to their predecessors. Exchange SE is based on Exchange Server 2019 CU15, while Skype for Business Server SE shares its codebase with Skype for Business Server 2019 CU8HF1. As such, there are no new features, removed components, or major structural changes at this stage.
However, the licensing and servicing models have changed fundamentally. For example, both servers are now governed by Microsoft’s Modern Lifecycle Policy, which removes fixed end-of-support dates as long as organisations keep systems updated. This turns them into evergreen products, with two cumulative updates (CUs) planned per year and additional security patches as needed.
Crucially, Microsoft has dropped perpetual licensing in favour of a subscription-only model. Organisations must now pay regularly to continue using the software legally. Stop paying, and you’re effectively frozen at the last supported version, which is now outside Microsoft’s safety net for patches and support.
Why Now and Why Like This?
The timing of the general availability appears to be closely tied to looming deadlines. Both Exchange Server 2016 and 2019, as well as Skype for Business Server 2015 and 2019, are approaching end-of-support in October 2025. Microsoft had promised a transition plan well before this date, and the SE editions are the fulfilment of that promise, albeit cutting it close.
Another driving force is Microsoft’s long-term strategy to encourage cloud adoption. As Rob Helm, analyst at Directions on Microsoft, put it: “The licence price hikes, the cutoff of old versions, the weak link with new Outlook—they all point to a single message: If you care about Exchange email, get off Exchange Server.”
Yet despite the cloud push, Microsoft has also acknowledged the real-world barriers to migration for many organisations. In a blog post accompanying the release, the company said: “Exchange SE demonstrates our commitment to ongoing support for scenarios where on-premises solutions remain critical.”
This includes hybrid deployments, secure national infrastructures, and regions with inadequate cloud access or stringent legal obligations regarding data locality.
A Smooth but Inevitable Upgrade Path
It seems that Microsoft has gone to some lengths to present the upgrade path as low-risk. For those already running Exchange 2019 CU14 or CU15, moving to SE involves minimal disruption, i.e. no schema changes, no removed features, and no new installation prerequisites. Even licence keys remain unchanged (at least for now).
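The upgrade rule described above can be expressed as a simple eligibility check. The sketch below is illustrative only (it is not Microsoft tooling), and the function name is hypothetical; the version and CU labels are taken from the rule as stated here, i.e. Exchange 2019 CU14/CU15 can move to SE in place, while older deployments need a full migration first:

```python
# Hypothetical helper reflecting the upgrade path described above.
SE_INPLACE_CUS = {"CU14", "CU15"}  # CUs stated as eligible for in-place upgrade

def se_upgrade_path(version: str, cu: str) -> str:
    """Return the expected route to Exchange SE for a given deployment."""
    if version == "2019" and cu in SE_INPLACE_CUS:
        return "in-place upgrade"
    return "full migration required"

print(se_upgrade_path("2019", "CU15"))  # in-place upgrade
print(se_upgrade_path("2016", "CU23"))  # full migration required
```

In practice, administrators would check their actual build before planning the move, but the decision logic reduces to this kind of test.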
The same applies to Skype for Business Server, where the SE edition uses an identical build number to CU8HF1, minus a few cosmetic updates and the refreshed licence agreement.
However, organisations sticking with older versions will face a steeper climb. Future SE cumulative updates will introduce breaking changes. Exchange SE CU2, for instance, will block coexistence with legacy 2016 or 2019 servers, effectively forcing full migration. Skype for Business SE updates are expected to do the same.
Changing On-Prem Strategy
For Microsoft, this move is part of a broader shift in its on-prem strategy (software that runs on a company’s own servers rather than in the cloud): fewer fixed-version launches, more ongoing subscriptions, and tighter integration with cloud-based tools. Exchange SE and Skype SE will not see the same innovation curve as Microsoft 365 or Teams, but they offer a lifeline for organisations that cannot or will not go all-in on the cloud.
From a competitive standpoint, this opens up opportunities for rivals such as Zoho, Open-Xchange, and Proton, particularly in markets concerned about data sovereignty or vendor lock-in. Microsoft’s insistence on subscriptions may also play into the hands of open-source email and UC solutions, especially in price-sensitive or highly regulated environments.
For businesses, particularly UK-based organisations balancing compliance, cost, and control, the release of SE editions raises key strategic questions. For example, should they embrace the evergreen model and continue with Microsoft’s stack, or use the transition as an opportunity to diversify infrastructure or explore alternative platforms?
The Cost of Staying On-Prem
Perhaps the most controversial element of the announcement is pricing. Microsoft confirmed that all standalone on-prem server products, including Exchange SE and Skype SE, are subject to a 10 per cent price increase. Some licence types may even rise by up to 20 per cent, depending on the channel and configuration.
These hikes do not apply to cloud equivalents such as Exchange Online, Microsoft Teams, or SharePoint Online. The implication is that staying on-premises is becoming not just technically more demanding, but also financially more burdensome.
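To put the uplifts above in concrete terms, the quick sketch below applies the stated 10 per cent and worst-case 20 per cent increases to a hypothetical annual licence spend (the £10,000 figure is purely illustrative, not from Microsoft’s pricing):

```python
def projected_cost(current_annual: float, uplift_pct: float) -> float:
    """Apply a percentage price uplift to an annual licence spend."""
    return round(current_annual * (1 + uplift_pct / 100), 2)

# Illustrative only: a hypothetical £10,000 annual on-prem licence spend.
print(projected_cost(10_000, 10))  # 11000.0  (standard 10% uplift)
print(projected_cost(10_000, 20))  # 12000.0  (worst-case 20% uplift)
```

Even on modest deployments, the difference between the two scenarios is worth budgeting for before renewal.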
For organisations required to maintain on-prem email or voice systems, there’s little choice. Running unsupported software is not only a security risk but a compliance red flag, particularly under regulations such as the UK GDPR, ISO 27001, and sector-specific frameworks like NHS DSPT or FCA guidelines.
Operational and Cultural Implications
Beyond licensing and compliance, there are also some broader operational implications. For example, teams responsible for managing Exchange or Skype for Business deployments will need to adapt to faster patch cycles, modernised update tooling, and shorter grace periods for non-compliance. There’s also a risk that core features might stagnate, with most new innovations funnelled to the cloud-only Microsoft 365 environment.
Microsoft has yet to confirm whether Exchange SE or Skype SE will receive any integration with future Copilot features, AI enhancements, or cross-platform sync improvements. As such, businesses relying on SE products may find themselves maintaining legacy tech in an ecosystem that’s moving on without them.
What Does This Mean For Your Business?
The switch to Subscription Editions may be framed as a practical continuity measure, but it also appears to signal a deeper change in how Microsoft intends to manage its remaining on-premises software. For many UK businesses, particularly those in regulated sectors or with hybrid infrastructure needs, SE offers a necessary bridge, but the subscription-only model means that bridge now comes with ongoing costs, tighter servicing rules, and less certainty about long-term feature investment. While Microsoft maintains that on-prem is still supported, the direction of travel is clearly towards the cloud.
This means that organisations that have built operations around Exchange or Skype on-prem will now have to budget not only for higher licence costs but also for the internal work needed to meet Microsoft’s evolving update requirements. That could mean more testing, faster deployment cycles, and additional pressure on IT teams already juggling hybrid or multi-cloud environments. At the same time, those exploring alternatives may face challenges in interoperability, skills, and vendor maturity, making a full departure from Microsoft’s stack a complex decision rather than an easy switch.
For Microsoft, this shift allows continued servicing of legacy platforms without anchoring itself to ageing support timelines or major version overhauls. For competitors, however, it could create space to target niche on-premise or privacy-first customers that may feel increasingly underserved. For the wider industry, including managed service providers and IT resellers, the move may prompt a reassessment of support models, procurement strategies, and cloud migration readiness. Subscription Editions may keep the lights on for on-prem customers, but they also make clear that Microsoft’s long-term bet is firmly on the cloud.
Security Stop Press : Blur Your Property on Google Maps for Better Security
Blurring your property on Google Maps is a simple, permanent step available to any homeowner or tenant, and one that may help reduce the risk of targeted crime.
Street View images can expose details such as building layouts, CCTV cameras, and even the type of vehicles on-site. Security experts warn this information can be useful to burglars, stalkers, or fraudsters planning remote reconnaissance.
To blur your property, go to Street View, click ‘Report a problem’ in the corner of the screen, and follow the prompts to outline and justify your request. Once processed by Google, the blur cannot be undone.
For home-based businesses or firms with visible assets, this small action may help reduce exposure without affecting normal operations. It’s a straightforward way to improve physical security in an increasingly digital world.
Sustainability-In-Tech : EU Funding to Replace Microplastics in Cosmetics
Cellugy, a Danish industrial biotech company, has received €8.1 million in EU funding to scale up production of EcoFLEXY, a biodegradable cellulose-based material designed to replace microplastics in everyday cosmetics.
A Hidden Threat
Microplastics (i.e. tiny plastic particles under 5mm) are now found in everything from toothpaste and moisturiser to shower gels and makeup. These particles often go unnoticed by consumers but can persist in the environment for centuries, posing long-term risks to marine life and potentially to human health.
Cosmetic companies have used fossil-derived polymers such as carbomers for decades because of their ability to provide smooth textures, stabilise emulsions, and extend shelf life. However, these ingredients are increasingly under scrutiny, both from regulators and from environmentally aware consumers. The European Chemicals Agency (ECHA) has estimated that more than 42,000 tonnes of intentionally added microplastics are used in EU products every year, with rinse-off cosmetics among the major contributors. That’s where Cellugy’s new products come in.
Who Is Cellugy?
Founded in Aarhus, Denmark, Cellugy is a synthetic biology startup developing sustainable, high-performance alternatives to petrochemical ingredients. The company is led by CEO and co-founder Dr Isabel Alvarez-Martos, who appears to have become quite an outspoken advocate for bio-based innovation as a means of catalysing systemic change in consumer goods.
Funding
Earlier this year, Cellugy secured €8.1 million from the EU LIFE Programme to support its BIOCARE4LIFE project, with the main aim of commercialising EcoFLEXY, the company’s flagship ingredient designed specifically for the personal care sector.
The Technology Behind EcoFLEXY
EcoFLEXY is a fermentation-derived, biofabricated cellulose, which is essentially a high-purity biopolymer produced without cutting down trees or using harsh extraction chemicals. Cellugy feeds sucrose to specially engineered bacteria in a controlled environment, allowing them to synthesise cellulose in ultra-pure, crystalline form.
The resulting material is a rheology modifier, i.e. a substance used to control the texture, viscosity, and flow of cosmetics. It performs a similar role to carbomers but offers what Cellugy describes as “enhanced stability, compatibility, and sensoriality”, industry terms referring to product consistency, chemical resilience, and feel on the skin.
Importantly, EcoFLEXY is biodegradable, bio-based, and scalable. Its structure is stable in the presence of salts and other charged compounds, making it suitable even for more complex product formulations like sunscreens and gels.
How Much Impact Could This Really Have?
Cellugy estimates that EcoFLEXY could prevent the release of 259 tonnes of microplastics into the environment each year, scaling to over 1,200 tonnes annually by 2034. That’s equivalent to removing millions of contaminated beauty products from the market.
This projection appears to be based on current usage patterns and is being validated by project partners including The Footprint Firm, a Danish circular economy consultancy, and Sci2sci, a Berlin-based AI company helping optimise Cellugy’s fermentation process.
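As a rough sanity check on those figures, the implied growth rate can be worked out from the two endpoints. Assuming a 2025 baseline and roughly nine years to the 2034 target (both assumptions, not stated by Cellugy), scaling from 259 tonnes to 1,200 tonnes per year implies compound growth of around 19 per cent annually:

```python
def implied_annual_growth(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end figure."""
    return (end / start) ** (1 / years) - 1

# 259 t/yr today scaling to ~1,200 t/yr by 2034 (assuming a 2025 baseline).
rate = implied_annual_growth(259, 1200, 9)
print(f"{rate:.1%}")  # ~18.6% compound growth per year
```

That is an ambitious but not implausible trajectory for a product still in industrial scale-up.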
“Our role is to optimise every layer of production so that EcoFLEXY can compete not just on environmental benefits, but on cost and performance metrics that matter to manufacturers,” said Angelina Lesnikova, CEO of Sci2sci.
Cellugy’s funding will cover four years of industrial scaling and validation, with the company aiming to generate “significant revenue within three to five years,” according to Dr Alvarez-Martos.
Why Microplastics in Cosmetics Are So Problematic
While most consumers are now aware of the dangers of plastic bottles and packaging, fewer realise that the products they apply to their skin may also contain plastic particles. Worryingly, these ingredients do not break down in wastewater treatment plants and often end up in rivers, lakes, and oceans.
Once in the environment, these particles are consumed by marine organisms such as plankton, worms, and fish, working their way up the food chain to humans. A 2018 study by WWF suggested the average person may ingest up to 5 grams of plastic per week, equivalent to a credit card’s worth!
Also, it’s not just the environment at risk. For example, some synthetic polymers are known or suspected to interfere with hormones, trigger allergies, or accumulate in tissues. While research into long-term effects is ongoing, consumer concerns are growing.
Implications for the Cosmetics Industry
EcoFLEXY enters a market already under pressure to clean up. For example, in 2023, the EU adopted legislation to restrict intentionally added microplastics in cosmetic and cleaning products. The new rules are expected to gradually phase out many current formulations, forcing brands to reformulate or risk non-compliance.
Yet it seems that not all “natural” alternatives perform well. For example, according to Cellugy, many plant-based thickeners lack the chemical stability needed for modern cosmetics. EcoFLEXY aims to fill this gap, offering brands a way to remain compliant without sacrificing product performance.
“An alternative material that simply aims to be more sustainable is not enough,” said Dr Alvarez-Martos. “The critical challenge is about delivering bio-based solutions that actually outperform petrochemicals.”
Cellugy Not the Only One
It should be noted here that Cellugy isn’t the only company exploring microplastic alternatives for cosmetics. Examples of other startups and multinationals exploring the same thing include:
– Geno (USA), a biotech firm backed by L’Oréal and Unilever, which is working on bioengineered alternatives to fossil-derived surfactants and polymers.
– Lignopure (Germany), which has developed LignoBase, a lignin-based ingredient for personal care formulations.
– CarbonWave (USA), which is turning sargassum seaweed into emulsifiers and stabilisers for skin care products.
However, it seems that few have focused as specifically on the rheology modifier market, where carbomers still dominate due to their low cost, proven performance, and widespread availability.
By targeting this particular category, Cellugy appears to be carving out a commercially attractive and environmentally urgent niche.
Challenges Ahead
Despite the promising figures, there are some key challenges to take note of. For example, biotech production processes like fermentation can be difficult and expensive to scale, especially when consistency and purity are paramount. Manufacturers also need to be convinced not only of EcoFLEXY’s ecological merits, but of its price competitiveness and long-term supply reliability.
Some industry insiders caution that switching ingredients often requires lengthy reformulation cycles and new safety testing. And while regulatory pressure helps push adoption, it also creates risks if new rules change or enforcement is delayed.
Sceptics may also question whether bio-based equals low-impact. Although fermentation is generally cleaner than petrochemical processing, it still requires energy, water, and feedstock inputs, raising questions about lifecycle emissions and land use.
That said, at the moment, the momentum appears to be on Cellugy’s side. With regulatory deadlines looming and younger consumers demanding transparency and traceability, the pressure to eliminate microplastics from cosmetics is unlikely to subside.
As the personal care sector enters a new phase of sustainability-led innovation, Cellugy’s success (or failure) could set a precedent for how the industry balances performance with environmental responsibility.
What Does This Mean For Your Organisation?
If EcoFLEXY delivers on its promises, Cellugy could become a key driver in shifting the cosmetics industry away from petrochemical dependency. By offering a material that is not only biodegradable and biobased but also capable of meeting the technical demands of high-performance formulations, the company is addressing a gap that has long held back broader adoption of sustainable alternatives. The emphasis on performance parity matters here, particularly for manufacturers who are unwilling or unable to compromise on product quality to meet environmental goals.
For UK businesses, the potential benefits are clear. Brands looking to stay ahead of incoming regulations around microplastics could find in EcoFLEXY a ready-made solution that reduces risk and supports green innovation claims. At the same time, contract manufacturers, product developers, and retailers in the UK may see opportunities to differentiate themselves in an increasingly sustainability-conscious market. However, cost remains a likely sticking point. Unless fermentation-based materials like EcoFLEXY can be made competitively at scale, some firms may hesitate to switch without regulatory or market pressure.
For regulators and environmental advocates, Cellugy’s approach demonstrates how public funding can help bridge the gap between lab-scale promise and commercial viability. It also shows that innovation-led solutions don’t always come from inside the legacy cosmetics giants. In fact, small biotech firms like Cellugy may be better placed to build sustainability into the core of their business models rather than treating it as a retrofit.
Still, the industry’s next steps will be critical. If larger companies fail to follow through on public sustainability pledges, or if reformulation efforts stall, the microplastics problem in cosmetics may simply shift rather than shrink. Also, although firms like Geno, CarbonWave, and Lignopure are bringing complementary solutions to market, broader uptake will depend on how quickly the sector aligns behind credible standards for biodegradability, toxicity, and lifecycle impact.
What Cellugy has done here is to essentially raise the bar, but the coming years will reveal whether the rest of the industry is ready to meet it.