Featured Article : AI Agents Failing (40% Cancellations Predicted)

New research has found that today’s leading AI agents fail to complete around 70 per cent of standard office tasks, while Gartner warns that over 40 per cent of current agentic AI projects will be scrapped by the end of 2027.

What Are ‘AI Agents’ And Why Are They Struggling?

AI agents are software systems that use large language models (LLMs), like ChatGPT or Claude, in combination with tools and applications to carry out goal-driven tasks without constant human input. Unlike chatbots or virtual assistants that only provide responses, agentic AI is designed to take actions, such as navigating software, interacting with web content, or managing emails, based on natural language instructions.
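To make the distinction concrete, here is a minimal sketch, in Python and purely for illustration, of the loop at the heart of most agent frameworks. The `call_llm` stub and the tool table are invented stand-ins rather than any vendor’s real API; production agents layer planning, memory and error recovery on top of this basic decide-act-observe cycle.

```python
# A minimal, illustrative agent loop: an LLM repeatedly chooses a tool
# until it decides the goal is met. Everything here is a stub, not a
# real vendor API.

def call_llm(goal: str, history: list[str]) -> dict:
    """Stand-in for a real model call; returns the next action to take."""
    # A real implementation would send `goal` and `history` to a model
    # and parse a structured response (e.g. a JSON tool call).
    if not history:
        return {"tool": "search_inbox", "args": {"query": goal}}
    return {"tool": "finish", "args": {"summary": history[-1]}}

TOOLS = {
    # Each tool wraps an application the agent is allowed to act on.
    "search_inbox": lambda query: f"3 emails matching '{query}'",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history: list[str] = []
    for _ in range(max_steps):          # cap steps so the agent can't loop forever
        action = call_llm(goal, history)
        if action["tool"] == "finish":  # the model decides the task is done
            return action["args"]["summary"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append(result)          # feed observations back into the next step
    return "gave up: step budget exhausted"

print(run_agent("quarterly report requests"))
```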

Examples include agents that can generate reports, schedule meetings, or execute multi-step operations such as processing CRM queries or managing code deployments. The idea behind them is that AI can behave like a semi-autonomous digital worker, thereby improving speed and efficiency while reducing costs. However, recent evidence suggests the reality falls far short of the promise.

For example, in a landmark study by researchers at Carnegie Mellon University (CMU), most of today’s leading AI agents were only able to complete around 30–35 per cent of assigned office tasks. That means they failed nearly 70 per cent of the time.

Testing Real-World Tasks

To evaluate how AI agents perform in realistic workplace scenarios, the CMU team created TheAgentCompany, a simulated IT company environment designed to mimic tasks that real employees might encounter. These included browsing the web, writing and editing code, interpreting spreadsheets, drafting performance reviews, and messaging colleagues on internal comms tools like RocketChat.

Results Not Good

Researchers tested agents based on how many tasks they could complete fully and accurately. Top-scoring models included Gemini 2.5 Pro, which managed a 30.3 per cent success rate, and Claude 3.7 Sonnet, which achieved 26.3 per cent. Other well-known models fared worse. GPT-4o completed just 8.6 per cent of tasks, while some large-scale models like Amazon-Nova-Pro and Qwen-2 scored under 2 per cent.

Variation and Serious Slip-Ups

“We find in experiments that the best-performing model…was able to autonomously perform 30.3 per cent of the provided tests to completion,” the CMU team noted. Even with extra credit for partial progress, most agents still fell short of reliable performance.

Also, it looks as though the failures weren’t just minor slip-ups. For example, in some cases, agents forgot to message colleagues, froze while interacting with pop-ups, or even faked task completion, such as renaming users to make it seem like they’d contacted the correct person.

Salesforce’s Findings Echo the Concerns

A separate study by Salesforce offered similarly sobering results. In their CRM-focused benchmark CRMArena-Pro, LLM agents completed about 58 per cent of simple, single-turn customer service tasks. However, in multi-step scenarios where context had to be maintained, success rates dropped sharply to around 35 per cent. None of the evaluated agents demonstrated any meaningful understanding of confidentiality, an essential requirement for deployment in enterprise settings.

The researchers concluded: “LLM agents are generally not well-equipped with many of the skills essential for complex work tasks.”

Over 40 Per Cent of Projects Will Be Cancelled by 2027

Industry analysts at Gartner believe this isn’t just a technical hiccup, but could be an indicator of wider strategic risk. For example, the firm predicts that more than 40 per cent of all agentic AI projects will be cancelled by the end of 2027. Their assessment rests on three key drivers: spiralling costs, unclear business value, and inadequate risk controls.

“Most agentic AI projects right now are early-stage experiments or proofs of concept that are mostly driven by hype and are often misapplied,” said Anushree Verma, Senior Director Analyst at Gartner. “This can blind organisations to the real cost and complexity of deploying AI agents at scale.”

A January 2025 Gartner poll of more than 3,400 business respondents revealed that while 19 per cent had already made significant investments in agentic AI, another 42 per cent were only dipping a toe in. Around a third were still waiting to see how the technology matures before committing.

What’s Going Wrong?

A key issue appears to be the fact that many supposed “AI agents” aren’t really agentic at all. For example, Gartner has criticised the growing trend of ‘agent washing’, where vendors rebrand chatbots, rule-based automation tools, or even basic assistants as ‘agents’ to ride the hype wave. Of the thousands of companies claiming to offer agentic AI products, Gartner estimates that only around 130 genuinely qualify.

Even for the legitimate players, it seems that technical challenges abound. For example, CMU’s team highlighted the following major limitations:

– Common-sense reasoning failures. AI agents often misinterpret basic instructions or misunderstand context. This limits their ability to carry out even straightforward workplace tasks.

– Poor tool integration. Many agents struggle to operate reliably within software interfaces. They may freeze, click the wrong buttons, or fail to retrieve the right data.

– Fabricated outputs. Hallucination remains a major problem. Agents sometimes invent plausible-sounding but incorrect responses. Among developers, 75 per cent report experiencing hallucinated functions or APIs.

– High cost and inefficiency. Despite being pitched as labour-saving, one study estimated that a typical AI agent task involved around 30 steps and cost over $6, often more than it would cost to have a person do the same work manually.

– Security and privacy risks. Because agents need wide-ranging system permissions, there’s a serious risk they could accidentally expose sensitive data, or act unpredictably in ways that breach confidentiality.

Complexity and Context

While some agent frameworks are improving, it seems that the wider problem is that many office tasks require not just automation, but judgement. For example, Graham Neubig, a co-author of the CMU paper, explained that while coding agents can be sandboxed to limit risk, office agents must interact with live systems, sensitive messages, and human colleagues.

“It’s very easy to sandbox code…whereas, if an agent is processing emails on your company email server…it could send the email to the wrong people,” Neubig warned.

There’s also the issue of persistence. Multi-step tasks require agents to keep track of state, adapt based on outcomes, and respond to dynamic inputs. Even advanced models struggle to maintain context and consistency across more than a handful of steps, particularly when unexpected events, such as a pop-up, error message, or missing file, intervene.
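A toy sketch, again purely illustrative, shows why that persistence is fragile: the agent has to carry explicit state between steps and amend its own plan when something unplanned (here, a simulated pop-up) appears.

```python
# Illustrative only: multi-step work means carrying state across steps
# and recovering when an unexpected event interrupts the plan.

from dataclasses import dataclass, field

@dataclass
class TaskState:
    goal: str
    completed_steps: list[str] = field(default_factory=list)
    pending_steps: list[str] = field(default_factory=list)

_popup_shown = False  # simulate a one-off interruption

def execute(step: str) -> str:
    """Stub executor: the first attempt to open the spreadsheet is
    interrupted by a pop-up, echoing the failures seen in testing."""
    global _popup_shown
    if step == "open spreadsheet" and not _popup_shown:
        _popup_shown = True
        return "popup_dialog"
    return "ok"

def run(state: TaskState) -> TaskState:
    while state.pending_steps:
        step = state.pending_steps.pop(0)
        if execute(step) == "popup_dialog":
            # A brittle agent stalls here; a robust one amends its own
            # plan: dismiss the pop-up, then retry the interrupted step.
            state.pending_steps = ["dismiss pop-up", step] + state.pending_steps
            continue
        state.completed_steps.append(step)
    return state

done = run(TaskState(
    goal="update Q3 figures",
    pending_steps=["open spreadsheet", "edit cells", "message colleague"],
))
print(done.completed_steps)
```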

Vendors and Enterprise Buyers

For AI companies, the research findings appear to cast doubt on the maturity of the agentic AI market. Those selling genuine solutions will need to demonstrate clear, auditable performance, while others may face a credibility backlash if their products are exposed as agent-washed rebrands.

For enterprise buyers, the message is to proceed with caution. Agentic AI holds promise, but only for very specific use cases where outputs can be clearly defined, risks are manageable, and success is measurable. Without that, projects risk becoming costly distractions that never reach production.

Gartner suggests that businesses focus agentic AI investments only where they can deliver proven ROI, e.g. by automating decisions, not just tasks, or by redesigning workflows to be agent-friendly from the ground up. “It’s about driving business value through cost, quality, speed and scale,” Verma explained.

Even so, Gartner remains optimistic that the agentic AI landscape will improve. By 2028, they predict that 15 per cent of all daily work decisions will be made autonomously by AI agents, up from none in 2024. They also expect 33 per cent of enterprise software applications to include agentic AI functionality by that time, suggesting that while short-term challenges are real, the long-term potential may still emerge.

What Does This Mean For Your Business?

The current hype around AI agents may be loud, but the reality behind the scenes appears to be proving far messier. Recent research shows that these systems still struggle with many of the core qualities needed for effective office automation, e.g., context awareness, reliability, consistency, and trust. While some agents show promise in structured environments like coding or CRM workflows, real-world office tasks often involve ambiguity, judgement, and unexpected challenges that most agents today simply can’t handle. This mismatch between marketing and capability is already fuelling disillusionment across the enterprise tech landscape.

For UK businesses, this could mean adopting a much more measured approach. For example, rather than rushing into large-scale AI rollouts, organisations may want to carefully assess whether agentic tools truly solve the problem at hand, and whether those benefits outweigh the risks and complexity. In industries where security, compliance, or client confidentiality are vital, agents that behave unpredictably or hallucinate outputs could introduce significant operational or reputational risk. Decision-makers will need to ask hard questions about vendor claims, demand transparency around performance, and avoid falling for superficial rebrands.

Also, for AI developers and solution providers, the pressure is now mounting to deliver genuine value and technical maturity. As Gartner’s forecast suggests, many agentic AI projects may be scrapped before they ever reach deployment. Rising costs, patchy results, and lack of clarity about return on investment are already stalling momentum. Yet amid this shakeout, there remains opportunity. Businesses still want tools that save time, reduce admin overhead, and support hybrid teams. If AI agents can evolve into reliable, well-integrated assistants that are grounded in workflows that make sense for users, they may yet become part of the fabric of enterprise software.

Until then, the safest path forward appears to be to treat agents as experimental copilots, not replacements. Hybrid approaches that combine AI capabilities with human oversight are likely to produce the most stable and trustworthy results. For now, it seems that the goal shouldn’t be full autonomy, but augmentation that helps people work smarter, and doesn’t automate them out of the loop.

Tech Insight : Spotify AI ‘Band’ Sparks Labelling Debate

In this Tech Insight, we look at how the sudden rise of a (possibly AI-generated) band on Spotify has reignited the debate over whether music streaming platforms should clearly label artificially created songs.

An Indie Rock Hit, But Who Made It?

The Velvet Sundown appeared suddenly in June 2025 and has since attracted more than 850,000 monthly listeners on Spotify. To give some idea of their popularity, their most streamed track, Dust on the Wind, has been played over 380,000 times, with many listeners noting similarities to the 1977 Kansas ballad Dust in the Wind. However, it seems that no one knows for sure who the band actually is, or whether its members even exist.

Four Names and Not Much Else

The band’s Spotify profile lists four band members: Gabe Farrow, Lennie West, Milo Rains, and Orion “Rio” Del Mar. However, it’s been reported that none of these individuals has any online presence outside the band, i.e., no interviews, no live shows, no solo projects. In fact, their Instagram feed is populated with images that appear to be AI-generated, showing oddly rendered “band members” in stylised poses. Also, a supposed quote from Billboard magazine in their bio (“the memory of something you never lived, and somehow makes it feel real”) has no known source.

Speculation

It seems that the ambiguity has fuelled speculation across Reddit, X, and YouTube, with some users pointing out audio “artefacts” in the recordings (small glitches or distortions often associated with synthetic audio generation tools). Music producer and YouTuber Rick Beato has been reported as describing the band’s output as having the hallmarks of AI composition, especially after running one of the songs through a digital splitter in Logic Pro.

Suno, Deezer and the “AI Slop” Flood

Although The Velvet Sundown has not confirmed the use of AI tools, the music streaming service Deezer has flagged all of the band’s tracks as “100 per cent AI-generated” using its new detection algorithm. Deezer is currently the only major platform openly tagging AI music content. Its system identifies audio created with generative tools like Suno and Udio, which can compose full songs from basic text prompts.

In April 2025, Deezer reported that 18 per cent of all content uploaded to the platform was AI-generated, a sharp rise from just 10 per cent in January. That translates to more than 180,000 bot-made tracks per week! Deezer CEO Alexis Lanternier has said that the goal is not to ban AI music, but to ensure transparency and protect the interests of human artists.

“AI is not inherently good or bad,” he said. “But we believe a responsible and transparent approach is key to building trust with our users and the music industry.”

Spotify, Apple and Amazon Stay Quiet

In contrast, it seems that streaming platform Spotify has no current system in place to flag AI-generated content and has made no official comment on The Velvet Sundown. CEO Daniel Ek has previously been reported as saying that Spotify would not ban AI-made tracks, but stated he was opposed to using the technology to impersonate real artists. Despite that, Spotify’s algorithm has promoted The Velvet Sundown on its personalised “Discover Weekly” playlists, pushing the track to millions of users without any disclosure of its potential origin.

Apple Music and Amazon Music also host The Velvet Sundown’s music but have remained silent on whether they intend to introduce any kind of AI labelling or detection system.

A Lack of Labelling Sparks Concern

It seems that this lack of labelling has sparked concern from artists, producers and industry groups. For example, Ed Newton-Rex, founder of the advocacy group Fairly Trained, described the situation as “theft dressed up as competition.” He argued that AI firms are profiting from training data scraped from real musicians’ work, often without consent or compensation.

Identity Confusion and Media Hoaxes

Adding to the confusion, Rolling Stone US initially reported that a spokesperson for the band had confirmed the music was generated using Suno. That spokesperson later turned out to be fake, part of an elaborate “art hoax” designed to deceive journalists. The individual, known as Andrew Frelon, later admitted in a Substack post that he was unaffiliated with the band and fabricated the story.

In response, The Velvet Sundown issued a statement on its Spotify page denying any link to Frelon, describing his claims as “unauthorised.” The band has continued to post cryptic messages via its social media channels. One X post read: “They said we’re not real. Maybe you aren’t either.”

It seems that this ambiguity is part of the brand’s strategy. Their upcoming album, Paper Sun Rebellion, is being promoted with AI-style visuals and poetic descriptions that blur the line between artificial and authentic. Whether this is genuine artistic expression or a marketing gimmick has become a key question for commentators.

A Threat to Emerging Artists?

For real-world musicians, the rise of AI-generated bands actually poses a serious challenge. Kristian Heironimus, of the indie group Velvet Meadow, was recently quoted in NBC News as saying that watching an alleged AI act soar to 500,000 listeners in just two weeks was “disheartening.” He highlighted the difficulty of competing with endless, instantly produced AI content that can mimic any genre and flood discovery algorithms.

Many in the industry appear to share his concern. For example, last year, hundreds of artists, including Sir Elton John and Dua Lipa, signed an open letter demanding legal protections against the unauthorised use of their work in training AI models. While the UK government declined to introduce new legislation at the time, it is currently holding a separate consultation on AI and copyright.

Meanwhile, a coalition of major US record labels (Universal Music Group, Sony Music and Warner Music) has filed lawsuits against AI music developers Suno and Udio, alleging large-scale copyright infringement. Both companies argue that using publicly available music to train models constitutes “fair use,” a defence commonly deployed by AI developers.

Who Decides What’s Real?

The lack of consensus on what constitutes “AI-generated” music adds further complexity. Some artists use AI tools like Suno or Udio to create backing tracks or generate lyrics, while others use them to compose full songs. Platforms like Spotify typically don’t require artists to disclose how their music was made. Nor do they differentiate between artists who use AI as a tool and those who rely on it entirely.

Most Platforms Treat AI Songs Like Human Ones (For Now)

Deezer’s approach is based on identifying audio characteristics and metadata patterns typical of AI tools. However, even that has limits, particularly as generative music becomes more sophisticated. For now, most streaming platforms continue to treat AI songs the same as human-made ones for purposes of recommendation, royalties, and categorisation.
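Deezer has not published how its detector works, so the sketch below should be read only as an illustration of the general approach: compute a measurable audio statistic and compare it against a threshold learned from labelled examples. Spectral flatness and the 0.3 cut-off here are arbitrary stand-ins chosen for the demo, not Deezer’s actual features.

```python
# Toy illustration of feature-plus-threshold audio classification.
# Deezer's real detector is unpublished; nothing here reflects it.

import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Near 1.0 for noise-like audio, near 0.0 for tonal audio."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # floor avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def looks_synthetic(signal: np.ndarray, threshold: float = 0.3) -> bool:
    # The threshold is invented for this demo; a real classifier would
    # combine many audio and metadata features and learn the boundary.
    return spectral_flatness(signal) > threshold

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)  # pure tone: tonal
noise = rng.standard_normal(48000)                          # white noise: flat
print(looks_synthetic(tone), looks_synthetic(noise))        # False True
```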

People Are Interested In More Than Just The Sound of an Artist

Industry voices like Lanternier warn that this is unsustainable. “People are not only interested in the sound,” he said. “They are interested in the whole story of an artist… We believe it’s right to support the real artist, so that they continue to create music that people love.”

Listeners Left in the Dark

From a user perspective, it seems it’s increasingly difficult to tell what’s human and what’s machine-made. For example, generative AI can now mimic voices, create convincing instrumentals, and even generate promotional images, all with minimal input. This means that listeners may not even realise they’re streaming AI-made content unless a platform flags it.

This has broader implications for trust. For example, as Professor Gina Neff from the University of Cambridge noted in relation to this subject, “Our collective grip on reality seems shaky. The Velvet Sundown story plays into the fears we have of losing control of AI and shows how important protecting online information is.”

“AI Slop” Drowning Out Real Voices?

It also seems that some fear that AI-generated content (sometimes referred to as “AI slop”) will drown out real voices, especially in genres like indie rock or ambient electronica where lyrical ambiguity and minimalistic production make AI mimicry easier to hide.

However, others, like Suno CEO Mikey Shulman, believe the conversation is overblown. “I think people are forgetting the important question, which is: how did it make you feel?” he said. “There are Grammy winners who use Suno in their production every day.”

That said, the debate continues to sharpen and, with AI tools getting more advanced and streaming platforms dragging their feet on labelling, the question of who (or what) we’re actually listening to may only get more complicated.

What Does This Mean For Your Business?

Whether The Velvet Sundown is a genuine artistic experiment or a high-concept AI stunt, its rapid rise appears to have thrown the spotlight on the growing presence of synthetic music in mainstream discovery channels. The core issue is not simply the existence of AI-generated tracks, but the complete absence of transparency around them. For example, if listeners can’t tell whether a song is human-made or artificially composed, the entire trust framework underpinning streaming services begins to erode.

This matters not just for artists and platforms, but for a much wider set of stakeholders. For independent musicians and producers, there’s a real risk that generative AI will continue to crowd their work out of key algorithms and recommendation feeds. For record labels and rights holders, the legal and licensing landscape remains unclear, especially while AI companies assert fair use in the training of their models. Also, for UK businesses in the creative, tech, marketing or media sectors, the commercial implications are equally serious. For example, AI-generated content could be misused in ad campaigns, misrepresent music licensing in branded environments, or create reputational risks for companies trying to align with genuine cultural voices.

It’s also becoming harder for consumers and businesses to trust the authenticity of what they’re engaging with. When a band can be created, promoted and streamed globally without anyone knowing if it even exists, it raises questions about how influence, monetisation, and even identity are being managed online. For a platform like Spotify to push AI-generated tracks without disclosure, while human creators struggle to break through, could create long-term imbalances in both visibility and revenue distribution.

What’s clear is that this isn’t just a creative debate but is also a technological, ethical and commercial one. Without industry-wide standards or regulatory clarity, streaming platforms may soon find themselves forced to choose between innovation and integrity. Whether through clear labelling, artist verification, or changes to royalty structures, the choices made now will shape the relationship between AI and music for years to come.

Tech News : Copyrighting Danish People Against Deepfakes

The Danish government is planning a major legal shift to let people claim copyright over their own body, facial features, and voice, in what it says is the first European attempt to systematically tackle the threat posed by deepfakes.

A Legal Response to a Rapidly Growing Threat

Deepfakes, which are highly realistic synthetic media generated using artificial intelligence (AI), have become one of the most pressing digital threats of the past five years. By mimicking a person’s appearance, voice, and movements, these AI-generated videos, images or audio clips can convincingly impersonate individuals without their consent. Initially used for novelty and satire, they’re increasingly tied to malicious uses including fraud, harassment, and disinformation.

Massive Rise

According to a 2024 report from cybersecurity firm Sumsub, the number of detected deepfake videos worldwide rose by over 700 per cent in a single year, with Europe seeing the sharpest spike. Consequently, the European Union’s law enforcement agency, Europol, has warned that deepfakes are “a significant threat to democracy and trust in institutions,” particularly around elections and public figures. However, individuals are also at risk, from revenge porn to financial scams where a cloned voice is used to impersonate a relative or company executive.

While many countries are beginning to introduce narrow legislation to deal with specific uses of deepfakes, Denmark is now attempting something broader.

What Denmark Is Proposing

Under the new proposals announced by Denmark’s Ministry of Culture in late June 2025, citizens would be granted copyright over their physical appearance, voice, and other personal traits. The hope is that this would allow them to demand the removal of AI-generated content that imitates them without permission (regardless of context) and seek compensation where harm has occurred.

Treated As A Creative Work

One important aspect of this new legal approach is that it would not rely on proving defamation or reputational damage, as is often required under existing European law. Instead, it would actually treat a person’s likeness as a creative work, similar to how a photograph or piece of music is protected. The law would apply to both private individuals and public figures, including artists and performers.

Culture Minister Jakob Engel-Schmidt described the legislation as a “bold step to protect personal identity in the age of AI,” noting that current legal protections lag behind technical capabilities. “Human beings can be run through the digital copy machine and misused for all sorts of purposes,” he said in a statement. “We are not willing to accept that.”

Timing, Process and Political Backing

The proposed changes will be submitted for public consultation before the Danish parliament breaks for summer recess, with formal legislation expected to be introduced in the autumn. Given the political climate, it’s highly likely to pass. For example, around 90 per cent of MPs reportedly support the reform, following widespread concern about the use of AI-generated content in political misinformation and online abuse.

Would Be A European First

The law would make Denmark the first European country to explicitly codify individual ownership of biometric traits for the purpose of combatting generative AI misuse. It is expected to take effect in early 2026 if passed.

What It Means in Practice

If enacted, the law would essentially give Danes the legal right to request takedowns of deepfake content from online platforms if it replicates their image, voice or body in a “realistic, digitally generated imitation.” The rule would apply whether or not the content was created with malicious intent.

Platforms that fail to comply with takedown requests could face “severe fines,” according to Engel-Schmidt. There’s also potential for EU-level action if enforcement proves challenging, particularly during Denmark’s upcoming EU presidency in 2026, when it plans to raise the issue with member states.

Includes Key Exceptions

Crucially, the proposal includes exceptions for parody and satire, which are protected under free expression rules. These carve-outs are intended to ensure that political cartoonists, satirical shows, and legitimate artistic works aren’t caught by the law.

Performances Too

The reform would also extend to artists’ performances. For example, musicians would have legal grounds to object if their voice or performance style is cloned by AI without consent, which has been a growing concern in the music industry as AI-generated songs imitate the voices of famous performers.

Why Businesses and Platforms Should Take Note

For technology companies, particularly those that operate online platforms or develop AI models, Denmark’s proposal could have far-reaching consequences.

In practical terms, businesses hosting user-generated content, such as social media platforms, image generators, or AI voice apps, may soon be legally obligated to implement mechanisms for recognising and responding to takedown requests based on biometric misuse. This could involve new detection systems, moderation processes, and audit trails to demonstrate compliance.
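What might such a mechanism look like in practice? The following Python sketch, with entirely invented field names and statuses, illustrates one plausible shape for a takedown record carrying the kind of audit trail regulators tend to expect.

```python
# Hypothetical sketch of a biometric-takedown audit trail, assuming the
# Danish proposal passes broadly as announced. All names are invented.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Status(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    CONTENT_REMOVED = "content_removed"
    REJECTED_PARODY = "rejected_parody"  # the proposal carves out satire

@dataclass
class TakedownRequest:
    content_url: str
    claimant_id: str     # the person asserting rights over their likeness
    claim_basis: str     # e.g. "voice", "face", "body"
    status: Status = Status.RECEIVED
    audit_log: list[tuple[str, str]] = field(default_factory=list)

    def transition(self, new_status: Status, note: str) -> None:
        # Record every state change with a timestamp so the platform can
        # demonstrate prompt action if its liability shield is tested.
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append((stamp, f"{self.status.value} -> {new_status.value}: {note}"))
        self.status = new_status

req = TakedownRequest("https://example.com/clip123", "claimant-42", "voice")
req.transition(Status.UNDER_REVIEW, "automated likeness match above threshold")
req.transition(Status.CONTENT_REMOVED, "manual review confirmed realistic imitation")
print(req.status.value, len(req.audit_log))
```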

It also raises questions around liability. Under current EU law, platforms benefit from limited liability for illegal content they host, provided they act promptly when notified. Denmark’s new copyright-based approach might test the limits of that framework, especially if it leads to conflicts over enforcement or definitions of consent.

For creative industries, including advertising, film, and gaming, the law could restrict the use of AI tools trained on real individuals without licensing agreements. While this may increase costs and licensing complexity, supporters argue it could also encourage more ethical use of synthetic media.

From a business reputation standpoint, being seen to respect biometric rights could become a key trust signal for users and customers. A 2023 survey by the European Commission found that 79 per cent of EU citizens want stronger legal safeguards on the use of AI-generated likenesses.

How Other Countries Are Approaching the Issue

Globally, it seems, few countries have gone as far as Denmark is proposing, but some are moving in the same direction.

For example, in the United States, several states have passed deepfake-specific laws, mostly focused on election interference and non-consensual pornography. California, Texas, and New York, for instance, have made it illegal to create or distribute deepfakes that impersonate political candidates within 30 to 60 days of an election. However, there is no federal law yet, and a new budget proposal being debated in Congress could strip states of their authority to regulate AI for 10 years.

In China, deepfake creators must label synthetic media clearly and obtain consent from the people being replicated. Failure to comply can result in heavy fines. South Korea is also considering similar legislation, particularly to address deepfake abuse in online pornography, which has become a major social issue there.

Within Europe, the EU’s AI Act (adopted in 2024) includes provisions requiring deepfakes to be labelled as such, but it does not go as far as granting individuals copyright over their features. That’s why Denmark’s move is seen as a potential model for broader reforms.

What Challenges Remain?

Despite strong domestic support, Denmark’s proposal is not without critics. For example, some legal scholars have raised questions about how biometric copyright would be enforced across borders, especially on platforms based outside the EU. Others argue that tying personal identity to copyright, a system traditionally designed to protect creative works, may lead to unintended legal consequences.

There are also practical concerns, e.g. identifying a deepfake is not always straightforward, and takedown systems are often slow or ineffective. If enforcement relies heavily on users flagging violations, the burden may fall disproportionately on individuals without the resources or knowledge to pursue their rights.

For now, however, Denmark appears determined to lead the way by betting that stronger individual protections are the only way to restore trust in a digital landscape where seeing is no longer believing.

What Does This Mean For Your Business?

If Denmark succeeds in passing this reform, it could change how personal identity is treated under copyright law, not just nationally, but across Europe. By legally enshrining the right to control one’s own voice, face, and likeness, the country is effectively trying to redraw the boundary between creative freedom and personal protection in the age of synthetic media. For individuals, this could offer an unprecedented tool to fight back against misuse, without needing to prove reputational harm or navigate complex defamation law.

For UK businesses, particularly those in tech, media, and advertising, Denmark’s approach may offer a glimpse of what’s to come. If other EU countries follow suit, companies that operate across borders could face new compliance demands, from biometric consent processes to proactive takedown mechanisms. At the same time, businesses that adopt strong safeguards now, such as consent-driven AI use policies, may gain a competitive advantage by building trust with customers and clients. For those in the creative sector, for example, the move could also help clarify the grey area around training AI models on real human traits, especially in performance-heavy fields like music, voiceover, or influencer marketing.

However, enforcement remains a key challenge. For example, without international alignment, cross-border takedowns could prove difficult, and smaller platforms may struggle to implement the necessary safeguards. There’s also a risk that applying copyright principles to human identity could lead to unintended consequences, particularly if courts are left to interpret the balance between personal rights and creative expression.

Even so, Denmark’s proposed law appears to reflect a broader global reckoning with the risks of generative AI. It signals that governments are no longer willing to let platforms set the terms of engagement when it comes to biometric misuse. With deepfakes set to become more sophisticated and widespread, that signal may be just as important as the legal details that follow.

Tech News : Voice Calling Comes To WhatsApp Business Accounts

WhatsApp will soon let large businesses make and receive voice calls directly through the platform, as Meta expands its commercial communications offering with AI-driven tools and centralised marketing features.

Live Voice Calls Now Coming to the WhatsApp Business API

Until now, only small businesses on WhatsApp could speak to customers using voice messages or voice calls, while larger businesses, typically those using the WhatsApp Business Platform API, were limited to text-based messaging. However, Meta (WhatsApp’s owner) says that’s about to change. Meta has confirmed that over the coming weeks, voice calling will roll out for enterprise users, allowing companies to speak directly to customers and receive inbound calls within WhatsApp itself.

Receive Live Customer Voice Calls, and Call Them Back

Meta unveiled the new capability during its annual Conversations conference in Miami on 1 July, describing it as a response to increasing demand for more natural, flexible customer engagement options. The update means businesses using the API will soon be able to receive live voice calls from customers, as well as call them back, which is an option not previously available even in limited pilot tests.

For example, a telecoms provider could use WhatsApp to answer a customer’s technical query via chat, then seamlessly escalate to a live voice call when the situation requires real-time dialogue. Similarly, banks or travel agents could offer consultations and problem resolution through a channel many consumers already use daily.
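Meta had not published the final call-event schema at the time of writing, so any integration code is necessarily speculative. The Python sketch below, with invented payload fields, simply illustrates the general shape such an integration might take: a webhook endpoint that routes an incoming-call notification either to a human queue or to an automated voice agent.

```python
# Hypothetical sketch: routing an inbound WhatsApp call notification.
# Every payload field below is invented; a real integration would
# follow Meta's published webhook documentation.

from flask import Flask, request

app = Flask(__name__)

def route_to_human(caller: str) -> str:
    return f"queued {caller} for the next available agent"

def route_to_voice_bot(caller: str) -> str:
    # e.g. hand off to a third-party voice-agent provider
    return f"connected {caller} to the automated assistant"

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json(force=True)
    if event.get("type") == "call_incoming":        # invented event name
        caller = event.get("from", "unknown")
        in_hours = event.get("business_hours", False)
        action = route_to_human(caller) if in_hours else route_to_voice_bot(caller)
        return {"status": "ok", "action": action}
    return {"status": "ignored"}

if __name__ == "__main__":
    app.run(port=8080)  # the endpoint a business would register with the platform
```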

A Step Towards AI Voice Agents

While Meta has framed this as a way to support human-to-human conversations, the addition of voice to the business platform also appears to be a way to lay the groundwork for AI-driven voice assistants. For example, companies can already integrate with third-party providers like Vapi, ElevenLabs or Phonic to create AI voice agents capable of handling simple customer service tasks. By enabling voice pipelines in WhatsApp, however, Meta is opening the door to broader automation opportunities, thereby potentially reducing call centre overheads and offering round-the-clock support in natural language.

This move aligns with Meta’s wider strategy of embedding AI capabilities across its business tools.

In a blog post published on 1 July, the company said: “There also might be times it’s helpful to provide additional support to customers beyond just a text. Bringing calling and voice updates to the WhatsApp Business Platform will help people communicate in a way that works best for them and paves the way for AI-enabled voice support in the future.”

The Scale of WhatsApp Business

Crucially, WhatsApp Business has quietly become a key revenue generator for Meta. For example, over 200 million monthly users now rely on the platform globally, with Meta monetising the service through click-to-WhatsApp advertising and per-message fees for businesses. Analysts estimate that WhatsApp and Messenger business messaging generated over $10 billion in run-rate revenue for Meta in 2024 alone.

By adding richer tools like voice and AI, Meta appears to be looking to move beyond basic customer service into full-stack sales and support. Voice, therefore, adds a missing layer to the communication stack and helps differentiate WhatsApp from rival platforms such as Apple Business Chat or Google Business Messages, which focus more on text-based interaction.

Video, Voice Messaging and AI Follow-Ups Also Coming

It seems that voice calling isn’t the only new feature. For example, businesses on the WhatsApp API will also be able to send and receive voice messages, while some sectors (e.g. remote healthcare) will gain access to video call functionality. This is expected to enable new use cases, from live consultations to virtual product demos.

Meanwhile, Meta is expanding its AI-powered product recommendation tool. The system, currently being piloted with merchants in Mexico, uses AI to suggest items on a business’s website and then follows up with customers directly in WhatsApp. For example, if a user browses for trainers on a brand’s website, WhatsApp could later prompt them with related offers or updates, using AI to manage the entire conversation.

Although AI features are free for now, Meta has hinted that monetisation may follow once adoption scales. This reflects the model already used with messaging and click-based advertising, where usage thresholds determine costs.

Centralised Marketing Across Meta Platforms

In addition to in-app improvements, Meta is rolling out a centralised campaign management system allowing businesses to run marketing campaigns across WhatsApp, Facebook and Instagram from a single place. This integration with Meta’s Ads Manager platform includes tools for uploading contact lists, targeting customers with personalised messages, and letting Meta’s Advantage+ AI automatically optimise ad spend across channels.

For businesses already using multiple Meta services, this consolidation could mean significant efficiency gains. Creative assets, budget controls, and campaign setup flows are unified across all placements, including WhatsApp’s Status (the app’s equivalent of Instagram Stories), which is now open for ad placements for the first time.

Scaling Its Service

These changes aim to make WhatsApp a more versatile platform for doing business, not just chatting. According to Meta, the updates will enable more seamless interactions and give customers greater flexibility in how they engage with brands. From the business side, voice and AI tools help scale service without scaling headcount, while new campaign features streamline cross-platform marketing.

For example, a retailer could run a sale campaign across Facebook and Instagram, then retarget interested users via WhatsApp with AI-powered follow-ups and even offer voice support to complete the purchase.

Also, it seems that customer expectations are shifting. For example, a Salesforce survey (2023) found that 61 per cent of consumers now expect real-time service from the brands they deal with. Meta’s WhatsApp enhancements reflect this demand for immediacy, particularly in regions where WhatsApp is the dominant form of digital communication.

What About Privacy, Cost and Competition?

Despite the benefits, some concerns remain. Privacy is a recurring issue when AI and voice come together, particularly in business contexts. Meta has said little about how voice data will be stored, processed or encrypted, and whether AI agents would have access to customer audio in real time. Critics argue that without clear guardrails, businesses risk unintentionally mishandling sensitive information.

Costs are another open question. For example, while many features are being introduced without additional charges, Meta has a history of monetising its business tools after initial rollouts. Once adoption grows, businesses could find themselves paying for AI features, voice usage or higher-tier access to campaign tools.

Competition is also heating up. Apple, Google and various regional players are investing heavily in conversational commerce and AI-driven service layers. WhatsApp’s popularity in markets like Brazil, India and Indonesia gives Meta a head start, but richer features alone won’t guarantee long-term dominance.

Industry observers also note that Meta’s voice strategy may not appeal to every business. For example, some companies prefer to deflect live voice conversations to lower-cost channels such as chatbots or email. Also, while voice may improve service quality, it also requires staff availability and scheduling, thereby making it less scalable for some use cases.

Where It Goes From Here

Meta’s latest WhatsApp update does appear to reflect a broader push to turn the world’s most popular messaging app into a full-scale business platform. With over 2 billion users and deep integration across Facebook and Instagram, the infrastructure is already in place. The question now is how businesses will adopt (and ultimately pay for) these new capabilities, and how customers will respond to a growing blend of AI, ads, and automation within their daily chat experiences.

What Does This Mean For Your Business?

What’s clear is that Meta is steadily transforming WhatsApp from a messaging app into a comprehensive, AI-enhanced business tool, and one that spans marketing, sales, and customer service. For UK businesses, particularly those already embedded in the Meta ecosystem, these updates offer new channels for connecting with customers on their terms, using real-time voice, video, and AI-led engagement. The ability to unify WhatsApp with Facebook and Instagram marketing under a single campaign manager could also simplify workflows and improve return on ad spend.

At the same time, the introduction of live voice support changes the dynamics of customer service, potentially raising expectations among consumers while creating new pressure points for businesses. However, not every organisation will have the staffing or operational models to support on-demand voice calls. For those that do, especially in regulated or service-heavy sectors like finance, healthcare, or utilities, the feature could improve trust and responsiveness, if used with care.

There are also unanswered questions about how data is handled behind the scenes, particularly where AI voice agents are concerned. UK firms will need to watch closely for clarity on data storage, GDPR compliance, and whether Meta’s approach will meet domestic privacy standards. Any missteps here could undermine confidence in the system, especially among privacy-conscious users or sectors bound by tighter regulatory requirements.

For competitors, the race is on to match or outmanoeuvre Meta’s rapid AI integration but, with WhatsApp already installed on millions of UK smartphones, Meta has a head start in terms of user reach and familiarity. The challenge now will be ensuring that the platform remains trusted and accessible, even as it becomes more commercially driven.

Company Check : Microsoft Exchange & Skype Servers Go Subscription-Only

Microsoft has officially launched subscription-only versions of its on-premises Exchange Server and Skype for Business Server, thereby ending the era of year-numbered releases and perpetual licences.

A Long-Anticipated Transition Becomes Reality

After months of preparation and close calls with support deadlines, Microsoft has made its Subscription Edition (SE) versions of Exchange Server and Skype for Business Server generally available. These editions replace the traditional 2016 and 2019 versions, which are set to reach the end of extended support on 14 October 2025.

Although Exchange Online and Microsoft Teams remain Microsoft’s strategic focus, the software giant has acknowledged that many organisations still require on-premises options. The Subscription Editions were first introduced to select enterprise customers earlier this year but are now widely available to all qualifying customers.

Microsoft says the SE releases reflect its “commitment to ongoing support for scenarios where on-premises solutions remain critical”, noting that these deployments are often driven by regulatory requirements, data residency needs, or cloud-sceptical policies in sectors such as government, finance, and defence.

What’s Actually Changing?

At a technical level, the initial releases of Exchange Server SE and Skype for Business Server SE are nearly identical to their predecessors. Exchange SE is based on Exchange Server 2019 CU15, while Skype for Business Server SE shares its codebase with Skype for Business Server 2019 CU8HF1. As such, there are no new features, removed components, or major structural changes at this stage.

However, the licensing and servicing models have changed fundamentally. For example, both servers are now governed by Microsoft’s Modern Lifecycle Policy, which removes fixed end-of-support dates as long as organisations keep systems updated. This turns them into evergreen products, with two cumulative updates (CUs) planned per year and additional security patches as needed.

Crucially, Microsoft has dropped perpetual licensing in favour of a subscription-only model. Organisations must now pay regularly to continue using the software legally. Stop paying, and you’re effectively frozen at the last supported version, which is now outside Microsoft’s safety net for patches and support.
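The practical difference between the two models can be reduced to a simple check, sketched below. The ‘current CU or the one before it’ servicing window is an assumption made for illustration, not a quote from Microsoft’s policy text.

```python
# Illustrative contrast between a fixed end-of-support date and an
# "evergreen" subscription model. The N-or-N-minus-1 CU rule is an
# assumption for this sketch, not Microsoft's published policy.

from datetime import date

def fixed_lifecycle_supported(today: date, end_of_support: date) -> bool:
    # Old model: support simply expires on a calendar date.
    return today <= end_of_support

def modern_lifecycle_supported(installed_cu: int, latest_cu: int,
                               subscription_active: bool) -> bool:
    # Evergreen model: no expiry date, but the subscription must be live
    # and the server must sit inside the assumed servicing window.
    return subscription_active and installed_cu >= latest_cu - 1

print(fixed_lifecycle_supported(date(2025, 11, 1), date(2025, 10, 14)))  # False
print(modern_lifecycle_supported(installed_cu=2, latest_cu=3,
                                 subscription_active=True))              # True
```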

Why Now and Why Like This?

The timing of the general availability appears to be closely tied to looming deadlines. Both Exchange Server 2016 and 2019, as well as Skype for Business Server 2015 and 2019, are approaching end-of-support in October 2025. Microsoft had promised a transition plan well before this date, and the SE editions are the fulfilment of that promise, albeit cutting it close.

Another driving force is Microsoft’s long-term strategy to encourage cloud adoption. As Rob Helm, analyst at Directions on Microsoft, put it: “The licence price hikes, the cutoff of old versions, the weak link with new Outlook—they all point to a single message: If you care about Exchange email, get off Exchange Server.”

Yet despite the cloud push, Microsoft has also acknowledged the real-world barriers to migration for many organisations. In a blog post accompanying the release, the company said: “Exchange SE demonstrates our commitment to ongoing support for scenarios where on-premises solutions remain critical.”

This includes hybrid deployments, secure national infrastructures, and regions with inadequate cloud access or stringent legal obligations regarding data locality.

A Smooth but Inevitable Upgrade Path

It seems that Microsoft has gone to some lengths to present the upgrade path as low-risk. For those already running Exchange 2019 CU14 or CU15, moving to SE involves minimal disruption, i.e. no schema changes, no removed features, and no new installation prerequisites. Even licence keys remain unchanged (at least for now).

The same applies to Skype for Business Server, where the SE edition uses an identical build number to CU8HF1, minus a few cosmetic updates and the refreshed licence agreement.

However, organisations sticking with older versions will face a steeper climb. Future SE cumulative updates will introduce breaking changes. Exchange SE CU2, for instance, will block coexistence with legacy 2016 or 2019 servers, effectively forcing full migration. Skype for Business SE updates are expected to do the same.

Changing On-Prem Strategy

For Microsoft, this move is part of a broader shift in its on-prem strategy (the software that runs on a company’s own servers, rather than in the cloud), i.e. fewer fixed-version launches, more ongoing subscriptions, and tighter integration with cloud-based tools. Exchange SE and Skype SE will not see the same innovation curve as Microsoft 365 or Teams, but they offer a lifeline for organisations that cannot or will not go all-in on the cloud.

From a competitive standpoint, this opens up opportunities for rivals such as Zoho, Open-Xchange, and Proton, particularly in markets concerned about data sovereignty or vendor lock-in. Microsoft’s insistence on subscriptions may also play into the hands of open-source email and UC solutions, especially in price-sensitive or highly regulated environments.

For businesses, particularly UK-based organisations balancing compliance, cost, and control, the release of SE editions raises key strategic questions. For example, should they embrace the evergreen model and continue with Microsoft’s stack, or use the transition as an opportunity to diversify infrastructure or explore alternative platforms?

The Cost of Staying On-Prem

Perhaps the most controversial element of the announcement is pricing. Microsoft confirmed that all standalone on-prem server products, including Exchange SE and Skype SE, are subject to a 10 per cent price increase. Some licence types may even rise by up to 20 per cent, depending on the channel and configuration.

These hikes do not apply to cloud equivalents such as Exchange Online, Microsoft Teams, or SharePoint Online. The implication is that staying on-premises is becoming not just technically more demanding, but also financially more burdensome.

For organisations required to maintain on-prem email or voice systems, there’s little choice. Running unsupported software is not only a security risk but a compliance red flag, particularly under regulations such as the UK GDPR, ISO 27001, and sector-specific frameworks like NHS DSPT or FCA guidelines.

Operational and Cultural Implications

Beyond licensing and compliance, there are also some broader operational implications. For example, teams responsible for managing Exchange or Skype for Business deployments will need to adapt to faster patch cycles, modernised update tooling, and shorter grace periods for non-compliance. There’s also a risk that core features might stagnate, with most new innovations funnelled to the cloud-only Microsoft 365 environment.

Microsoft has yet to confirm whether Exchange SE or Skype SE will receive any integration with future Copilot features, AI enhancements, or cross-platform sync improvements. As such, businesses relying on SE products may find themselves maintaining legacy tech in an ecosystem that’s moving on without them.

What Does This Mean For Your Business?

The switch to Subscription Editions may be framed as a practical continuity measure, but it also appears to signal a deeper change in how Microsoft intends to manage its remaining on-premises software. For many UK businesses, particularly those in regulated sectors or with hybrid infrastructure needs, SE offers a necessary bridge, but the subscription-only model means that bridge now comes with ongoing costs, tighter servicing rules, and less certainty about long-term feature investment. While Microsoft maintains that on-prem is still supported, the direction of travel is clearly towards the cloud.

This means that organisations that have built operations around Exchange or Skype on-prem will now have to budget not only for higher licence costs but also for the internal work needed to meet Microsoft’s evolving update requirements. That could mean more testing, faster deployment cycles, and additional pressure on IT teams already juggling hybrid or multi-cloud environments. At the same time, those exploring alternatives may face challenges in interoperability, skills, and vendor maturity, making a full departure from Microsoft’s stack a complex decision rather than an easy switch.

For Microsoft, this shift allows continued servicing of legacy platforms without anchoring itself to ageing support timelines or major version overhauls. For competitors, however, it could create space to target niche on-premise or privacy-first customers that may feel increasingly underserved. For the wider industry, including managed service providers and IT resellers, the move may prompt a reassessment of support models, procurement strategies, and cloud migration readiness. Subscription Editions may keep the lights on for on-prem customers, but they also make clear that Microsoft’s long-term bet is firmly on the cloud.

Security Stop Press : Blur Your Property on Google Maps for Better Security

Blurring your property on Google Maps is a simple, permanent step available to any homeowner or tenant, and one that may help reduce the risk of targeted crime.

Street View images can expose details such as building layouts, CCTV cameras, and even the type of vehicles on-site. Security experts warn this information can be useful to burglars, stalkers, or fraudsters planning remote reconnaissance.

To blur your property, go to Street View, click ‘Report a problem’ in the corner of the screen, and follow the prompts to outline and justify your request. Once processed by Google, the blur cannot be undone.

For home-based businesses or firms with visible assets, this small action may help reduce exposure without affecting normal operations. It’s a straightforward way to improve physical security in an increasingly digital world.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a plain, tech-jargon-free style.
