Tech Tip – Filter WhatsApp Chats with Custom Lists
Need to organise your chats more effectively? WhatsApp now lets you create custom filters like “Clients” or “Team” so your most important conversations are always easy to find.
How to:
– In WhatsApp, go to the ⋯ menu (Android) or Settings (iOS).
– Tap ‘Chats > Filter chats > Custom List’.
– Select the chats you want included and give the list a name.
What it’s for:
Keeps client or project conversations instantly accessible so there’s no scrolling through dozens of unrelated chats. Ideal for busy professionals managing multiple threads.
Pro‑Tip: You can update or rename lists as your work changes so your filters stay relevant and easy to use.
Featured Article : MPs’ Concerns Over ‘Predictive Policing’ In The UK
A cross-party group of MPs is calling for the UK government to outlaw predictive policing technologies through an amendment to the forthcoming Crime and Policing Bill, citing concerns over racial profiling, surveillance, and algorithmic bias.
Proposed Law Aims to Outlaw Future Crime Predictions
At the centre of the debate is New Clause 30 (NC30), an amendment tabled by Green MP Siân Berry and backed by at least eight others, including Labour’s Clive Lewis and Zarah Sultana. If passed, the clause would explicitly prohibit UK police from using artificial intelligence (AI), automated decision-making (ADM), or profiling techniques to predict whether an individual or group is likely to commit a future offence.
Berry told the House of Commons that such systems are “inherently flawed” and represent “a fundamental threat to basic rights,” including the presumption of innocence. “Predictive policing, however cleverly sold, always relies on historic police and public data that is itself biased,” she argued. “It reinforces patterns of over-policing and turns communities into suspects, not citizens.”
What Is Predictive Policing?
Predictive policing refers to the use of data analytics, AI and algorithms to identify patterns that suggest where crimes are likely to occur or which individuals may be at greater risk of offending. It takes two broad forms: place-based systems that forecast crime in particular geographic locations, and person-based systems that claim to assess the risk posed by individuals.
Already Piloted or Deployed
It’s worth noting that these systems have already been piloted or deployed in over 30 UK police forces. According to a 2025 Amnesty International report, 32 forces were using location-focused tools, while 11 had tested or deployed systems to forecast individual behaviour. The aim, according to police, is to deploy resources more efficiently and prevent crime before it happens.
However, critics argue that the data used to train these systems, such as arrest records, stop-and-search data, and local crime statistics, is historically biased. This, they say, leads to feedback loops where marginalised and heavily policed communities are disproportionately targeted by future interventions.
Why MPs Are Taking a Stand Now
The renewed push for a legislative ban follows a string of revelations over the past 18 months about the growing use of algorithmic policing in the UK, often without public consultation or oversight. One of the most contentious examples was uncovered by Statewatch in 2025: the Ministry of Justice’s so-called “Homicide Prediction Project”, a system under development to identify individuals at risk of committing murder using sensitive data, including health and domestic abuse records, even in cases where no criminal conviction exists.
Statewatch researcher Sofia Lyall called the initiative “chilling and dystopian,” warning that “using predictive tools built on data about addiction, mental health and disability amounts to highly intrusive profiling” and risks “coding bias directly into policing practice.”
The amendment to the Crime and Policing Bill comes as the government continues to expand data-driven law enforcement under new legislation. The Data Use and Access Act (passed earlier this year) permits certain forms of automated decision-making that were previously restricted under the Data Protection Act 2018. More than 30 civil society groups, including Big Brother Watch, Open Rights Group, Inquest and Amnesty, have signed a joint letter condemning the changes and calling for a ban on predictive policing to be included in the new bill.
Bias, Surveillance and Lack of Transparency
At the heart of the pushback is the view that predictive systems do not eliminate human bias, but instead replicate and scale it. As Open Rights Group’s Sara Chitseko explained in a May blog, “historical crime data reflects decades of discriminatory policing, particularly targeting poor neighbourhoods and racialised communities.”
The concern is not just over potential inaccuracies, but the broader impact on civil liberties. Campaigners warn that predictive tools undermine the right to privacy and fuel what they call a “pre-crime surveillance state,” in which individuals can be subjected to policing actions without having committed any crime.
This can include being flagged for increased surveillance, added to risk registers, or subjected to stop-and-search, all based on algorithmic assessments that may be impossible to scrutinise. Data from these tools is often shared across public bodies, meaning individuals can be affected in housing, education, or welfare decisions as a result of hidden profiling.
55 Automated Tools Identified
Researchers at the Public Law Project, which runs the Tracking Automated Government (TAG) register, have documented over 55 automated decision-making tools used across UK government departments, including policing. Many operate without publicly available data protection or equality assessments. Legal Director Ariane Adam said, “People deserve to know if a decision about their lives is being made by an opaque algorithm—and have a way to challenge it if it’s wrong.”
How the Crime and Policing Bill Fits In
The Crime and Policing Bill is part of a broader effort by the UK government to modernise policing powers and criminal justice processes. While not specifically focused on predictive technologies, the bill’s scope includes provisions for police data access, surveillance capabilities and crime prevention strategies.
Critics argue that without clear prohibitions, the bill risks giving predictive systems greater legitimacy. “Predictive policing isn’t just a technical tool—it’s a fundamental shift in the presumption of innocence,” said Berry. “We need the law to say clearly: you cannot be punished for something you haven’t done, just because a computer says you might.”
A second proposed amendment from Berry seeks to provide safeguards where automated decisions are used in policing. This would include a legal right to request human review, improved transparency over the use of algorithms, and meaningful routes for redress.
What the Police and Government Are Saying
Police forces and government departments have largely defended their use of predictive technologies, arguing that they allow for more proactive policing. For example, the Home Office has supported initiatives such as GRIP, a place-based prediction system used by 20 forces since 2021 to identify high-crime areas.
Proponents claim these tools help reduce violence and make best use of limited resources. However, recent assessments suggest the benefits may be overstated. Amnesty found “no conclusive evidence” that GRIP had reduced crime, while also warning it had “reinforced racial profiling” in the communities it targeted.
The government has not yet formally responded to the proposed amendments. However, officials have previously argued that AI and ADM can be used responsibly with the right oversight. The Department for Science, Innovation and Technology’s 2023 White Paper on AI governance promoted voluntary transparency standards but fell short of recommending statutory controls.
Businesses and Civil Society
If the amendment banning predictive policing passes, it could reshape how AI and automation are used across public services, not just policing. For civil society and legal groups, it would mark a significant win for rights-based governance of AI.
For businesses working in AI, data analytics and security tech, the implications are mixed. Suppliers of predictive systems to police forces may lose a key customer base, while developers of ethical or human-in-the-loop systems could find new demand for tools that meet stricter legal standards.
More broadly, companies operating in sectors such as insurance, HR tech, or public procurement may face growing scrutiny over how their algorithms are used to assess individuals, particularly if they supply services to the government. A legislative ban on predictive policing could signal the start of tighter controls on high-risk ADM across all sectors.
Cybersecurity professionals and data governance officers may also need to reassess compliance strategies, especially where their systems intersect with law enforcement or public sector clients.
The challenge, according to legal analysts, is ensuring any ban does not create ambiguity. “There’s a fine line between banning profiling-based prediction and stifling responsible innovation,” said one lawyer familiar with the TAG project. “Clear definitions and thresholds will be vital.”
Key Obstacles to Progress
Even with growing public and parliamentary concern, the road to banning predictive policing is unlikely to be smooth. One challenge is technical: there is no consensus on what exactly counts as “predictive policing,” given the variety of tools and methods involved.
There’s also the legal complexity of drawing lines between fully automated systems and those that merely assist human decision-making. As with facial recognition and biometric surveillance, courts and regulators have struggled to keep pace with the technology.
Policymakers face a political challenge too: calls for stronger law and order measures remain popular with some voters, and banning high-tech crime-fighting tools may be portrayed as soft on crime. Opponents of the amendment are likely to argue that police need every available advantage to tackle modern threats, including gang violence, knife crime and terrorism.
However, it seems that the tide may be turning. As Berry put it in her Commons speech: “This is a moment to decide what kind of society we want to be—one where we protect rights and freedoms, or one where we criminalise people before they’ve done anything wrong.”
What Does This Mean For Your Business?
Whether or not the amendment to ban predictive policing is adopted, the pressure now facing the UK government reflects a growing public and parliamentary appetite for more robust oversight of AI and algorithmic decision-making. The evidence presented by civil rights groups, legal experts and academics points to a consistent pattern: where predictive systems are deployed without transparency or accountability, the result is often discrimination and deep mistrust in public institutions.
For police forces, this moment raises urgent questions about how data is collected, analysed and applied. Even if predictive systems are well-intentioned, their reliance on flawed historical datasets and opaque algorithms makes it difficult to separate operational efficiency from systemic bias. Without clear legal limits, the use of such technologies could further entrench inequalities and reduce trust in frontline policing.
The implications extend far beyond law enforcement. For example, businesses involved in AI development, analytics or public sector contracting will need to stay alert to changing expectations around transparency, fairness and accountability. A legal ban on predictive policing could signal broader regulatory moves against high-risk algorithmic profiling, especially where sensitive or personal data is involved. Companies that rely on such tools in recruitment, risk scoring or fraud detection may need to rethink how their systems operate and how they explain them to clients and users.
For civil society and campaigners, the bill presents a rare chance to press for hard legal safeguards rather than soft ethical guidelines. The current momentum suggests that arguments grounded in lived experience, statistical evidence and human rights law are starting to gain traction in parliamentary debates.
What happens next will shape the relationship between data, power and the public for years to come. Whether through this bill or a future AI-specific law, the UK faces a clear choice: allow automated prediction to quietly redefine policing, or legislate to ensure that new technologies serve justice without undermining it.
Tech Insight : Block Or Charge AI Bots Accessing Your Website
A new system from Cloudflare gives millions of websites the power to block AI bots from scraping their content without permission and could soon let them charge for access via a new pay-per-crawl model.
AI Crawlers: A Problem for Publishers and Creators
In recent years, the rapid growth of AI tools has sparked a battle over ownership, access, and compensation. At the centre of the controversy are “AI crawlers”, i.e. automated bots developed by companies like OpenAI, Google, and Anthropic to trawl the internet, copying data from websites to train large language models (LLMs) or power AI assistants.
For creators and publishers, the issue is that this content is often scraped without permission or compensation. Unlike traditional web crawlers used by search engines, which drive traffic back to the original source and support advertising revenue, AI bots typically use the content to generate summaries, answers or outputs directly, without crediting or linking to the sites they pulled from. This bypasses publishers entirely, cutting them out of the value chain.
The BBC, for example, recently accused US-based AI firm Perplexity of using its content without consent and demanded compensation. Similar rows have erupted in the US, with lawsuits from the likes of The New York Times, and in the UK, where artists have criticised the government over weak protections.
As Matthew Prince, co-founder and CEO of Cloudflare, put it: “AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate.”
Who Is Cloudflare?
Cloudflare is one of the internet’s biggest behind-the-scenes players. The US-listed tech firm provides security, performance optimisation and content delivery services for around 20 per cent of all websites globally. That scale makes any system it deploys highly influential, and potentially industry-defining.
On 1 July, the company launched a sweeping new system that gives website owners direct control over AI crawlers. Crucially, this is now turned on by default for new Cloudflare users, meaning that unless permission is granted, AI bots will be blocked from accessing site content altogether.
The move significantly changes the rules of engagement between content owners and AI firms, and lays the groundwork for a new type of economic model.
How the New System Works
The technology uses Cloudflare’s bot detection infrastructure to identify which crawlers are trying to access a site and what purpose they’re being used for, such as AI training, inference, or chatbot search responses. It means that AI crawlers must now declare their identity and intent. This in turn gives website owners the power to choose to allow access, deny it entirely, or ask for payment via a new initiative called Pay per Crawl.
Pay Per Crawl
Pay per Crawl is an experimental marketplace currently in private beta. It allows publishers to set a price (typically a micropayment) for each individual bot crawl. The AI companies must then agree to pay if they want continued access to the site’s content. The entire process is managed by Cloudflare as the intermediary.
The system also includes transparency tools such as dashboards showing how often bots visit a site and what they are collecting. This allows publishers to differentiate between helpful crawlers (e.g. those from Google Search) and AI bots that may be extracting content without driving any traffic back.
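The allow/deny/charge logic described above can be pictured as a simple policy lookup on the crawler’s declared identity. Below is a minimal, illustrative Python sketch, not Cloudflare’s actual implementation: it matches an incoming request’s User-Agent against tokens that well-known AI crawlers publish (GPTBot, ClaudeBot, CCBot, PerplexityBot), and the policy assigned to each one is a hypothetical example chosen for illustration.

```python
# Illustrative sketch of per-crawler access policy, keyed on User-Agent tokens.
# The tokens are real published crawler names; the policies assigned to them
# here are hypothetical examples, not Cloudflare's actual rules.
AI_CRAWLER_POLICIES = {
    "GPTBot": "block",          # OpenAI's training crawler
    "ClaudeBot": "block",       # Anthropic's crawler
    "CCBot": "block",           # Common Crawl
    "PerplexityBot": "charge",  # Perplexity's crawler
}

def crawl_decision(user_agent: str) -> str:
    """Return 'allow', 'block', or 'charge' for a request's User-Agent."""
    for token, policy in AI_CRAWLER_POLICIES.items():
        if token.lower() in user_agent.lower():
            return policy
    return "allow"  # unrecognised agents (e.g. ordinary browsers) pass through
```

In a real deployment, a “charge” decision could be signalled by responding with HTTP status 402 (Payment Required) and settling terms through an intermediary such as Cloudflare’s marketplace.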
Big Names Already Backing the Block
Over one million sites are already using Cloudflare’s earlier one-click tool to block AI crawlers. With the new system, even more are expected to adopt it, especially as the default setting now blocks crawlers unless explicitly allowed.
For example, leading media companies including Sky News, The Associated Press, BuzzFeed, TIME, The Atlantic, Condé Nast, Gannett (USA Today), and Dotdash Meredith have signed on to use the technology. Many see it as a step towards restoring control over their intellectual property and creating fairer terms for their contributions to the web.
“This is a critical step toward creating a fair value exchange on the Internet that protects creators, supports quality journalism and holds AI companies accountable,” said Roger Lynch, CEO of Condé Nast.
Also, TIME’s COO, Mark Howard, described the initiative as “a meaningful step toward building a healthier AI ecosystem—one that respects the value of trusted content and supports the creators behind it.”
Crawling Costs and Content Control
The problem, publishers argue, is that AI firms are currently reaping huge rewards from models trained on content that they never paid for. For example, a recent analysis by Cloudflare suggests that OpenAI’s crawler, GPTBot, scraped websites 1,700 times for every referral it gave in return. In comparison, Google’s bot gave one referral for every 14 scrapes – still skewed, but not nearly as extreme.
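The imbalance quoted above is simply a ratio of two log counts, which any site operator can compute from their own analytics. A quick illustrative calculation using the figures from the Cloudflare analysis cited above:

```python
def scrapes_per_referral(scrapes: int, referrals: int) -> float:
    """Crawl-to-referral ratio: pages fetched per visit the bot sent back."""
    return scrapes / referrals

# Figures reported in Cloudflare's analysis (scrapes per single referral)
gptbot_ratio = scrapes_per_referral(1700, 1)   # OpenAI's GPTBot
googlebot_ratio = scrapes_per_referral(14, 1)  # Google's crawler

# GPTBot extracts roughly 121 times more content per referral than Googlebot
print(round(gptbot_ratio / googlebot_ratio))
```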
This imbalance has prompted fears that the original economic model of the open internet, i.e. where traffic from search engines fuels revenue for content creators, is breaking down. For example, as AI assistants become more prevalent and answer users’ questions directly, fewer people click through to the source material. That threatens the sustainability of journalism, research, and creative industries.
Therefore, by introducing a payment mechanism and making bot access conditional, Cloudflare hopes to reshape the model. As the company wrote in its announcement: “If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”
Websites and AI Firms
For website owners, especially smaller publishers, creative professionals, and independent media, Cloudflare’s system could offer a much-needed line of defence. For example, many lack the technical resources to build their own bot detection or monetisation systems. With Cloudflare now providing this as a built-in service, it levels the playing field.
For AI companies, however, it creates a new layer of complexity and, potentially, cost. While some like ProRata AI and Quora have expressed support for fair compensation models, others may be forced to rethink how they access training data or structure deals with publishers.
At the same time, AI firms that continue to ignore bot exclusion rules may now find themselves more easily blocked, routed into traps (like Cloudflare’s AI “Labyrinth” of junk content), or publicly named and shamed.
The move also puts pressure on Cloudflare’s competitors, such as Amazon Web Services, Google Cloud, and Akamai, to offer similar tools or risk falling behind in the arms race over content protection and AI ethics.
A Bet on a New Internet Economy
By launching Pay per Crawl (still in beta), Cloudflare is positioning itself as both a gatekeeper and broker of a new AI-era content economy. In doing so, it’s hoping to gain influence over how value flows between creators and AI companies, and opening the door to becoming a central payments infrastructure provider in this emerging market.
CEO Matthew Prince has even floated the idea of creating Cloudflare’s own stablecoin to support seamless micropayments at scale.
Challenges
That said, challenges remain. For example, the system only protects content hosted through Cloudflare. Critics like Ed Newton-Rex, founder of Fairly Trained, argue this is a “sticking plaster” rather than a full solution. Legal frameworks, they say, are still essential to address copyright and enforce compliance across the wider web.
Baroness Beeban Kidron, a prominent campaigner for creative rights, nonetheless praised the move as “decisive action,” saying: “If we want a vibrant public sphere, we need AI companies to contribute to the communities in which they operate.”
More broadly, the battle now turns to whether Cloudflare’s system can actually become the foundation for a fairer digital ecosystem, or whether AI firms and others will try to find ways around it.
What Does This Mean For Your Business?
For publishers, a permission-based model for AI web scraping could be the first meaningful opportunity to assert control over how their work is accessed and monetised in an AI-driven world. It gives media groups, content creators, and smaller businesses a chance to protect their intellectual property without needing bespoke technical solutions, and could eventually create new revenue streams where previously there were none. If widely adopted, it also signals a move away from the unspoken assumption that public web content is free for AI companies to exploit.
What makes this development particularly relevant is Cloudflare’s scale. With its technology touching around one fifth of the internet, its default blocking of AI bots resets the baseline. AI companies can no longer rely on passive access to build their models and must now navigate a fragmented, consent-based landscape. While this raises operational challenges for developers of AI tools, it may also encourage more formal, sustainable commercial arrangements between content owners and AI firms.
For UK businesses, the implications are twofold. On the one hand, firms producing original content, e.g. publishers, consultancies, and creative agencies, stand to gain from greater control and potential compensation. On the other, companies that rely on AI systems to summarise, synthesise or build upon external content may face new hurdles or costs. It highlights the need for businesses to understand not just how AI tools function, but where their data comes from and under what terms.
However, the effectiveness of Cloudflare’s model will depend on broad adoption and robust enforcement. The Pay per Crawl system is still in beta and, for now, limited in reach. There is also the risk that aggressive scraping bots will continue to operate outside legitimate channels or spoof identities to bypass detection. In that sense, legal backing remains a missing piece. As critics point out, a voluntary system only protects those within its walls.
Even so, the shift represents a turning point. Whether or not Cloudflare’s marketplace becomes the standard, it has created a framework that others may follow or adapt. For publishers, platforms and AI companies alike, the message is that the free-for-all era of unregulated AI scraping appears to be over. The next chapter will be defined by consent, compensation and a more negotiated relationship between those who create content and those who use it.
Tech News : Microsoft’s Enterprise Agreement Shake-Up Hits Resellers
Microsoft’s decision to bypass long-standing partners in its Enterprise Agreement (EA) renewals is sending financial shockwaves through the global IT channel, with UK-based Bytes Technology Group among the first major casualties.
Reshaping the Channel
For years, Microsoft relied on a network of accredited Large Service Providers (LSPs) to handle the sale and renewal of its three-year Enterprise Agreements, i.e. the long-term software licensing contracts tailored to large organisations. These deals provided LSPs with steady commission income and a foothold in enterprise IT procurement. But that model is changing.
Microsoft has begun reclaiming control of these high-value contracts, handling renewals directly through its own sales force rather than via partners. The change, first noticed in 2023, is accelerating fast. For example, Microsoft reportedly took back control of around a third of EA renewals last year and is expected to reclaim almost all of them by January 2026.
It seems that the company is not just shifting processes but is cutting off financial incentives too. For example, global EA commission payments to LSPs stood at approximately $2.5 billion in 2023, according to US Cloud, a Microsoft support partner. That figure dropped to $1.67 billion in 2024 and is expected to fall to just $583 million in 2025. By 2026, payouts are projected to stop entirely.
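The US Cloud estimates quoted above imply a steep year-on-year collapse in commission income. A quick check of the percentages, using those figures:

```python
# US Cloud's estimated global EA commission payouts to LSPs, in USD millions
payouts = {2023: 2500, 2024: 1670, 2025: 583, 2026: 0}

def yoy_decline(prev: float, curr: float) -> float:
    """Year-on-year percentage decline, rounded to one decimal place."""
    return round((prev - curr) / prev * 100, 1)

years = sorted(payouts)
declines = {y: yoy_decline(payouts[y - 1], payouts[y]) for y in years[1:]}
print(declines)  # {2024: 33.2, 2025: 65.1, 2026: 100.0}
```

In other words, payouts fall by roughly a third, then by around two thirds, before stopping entirely in 2026.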
Bytes Bitten
For Bytes Technology Group (BTG), one of the UK’s largest Microsoft resellers and a London Stock Exchange-listed firm, it seems the effects have been immediate and severe. For example, shares in BTG recently plummeted over 25 per cent after the company issued a profit warning, citing delayed buying decisions, a difficult macroeconomic environment, and lower commission income from Microsoft.
BTG had previously forecast double-digit gross profit growth for the 2025–26 financial year. But its latest update painted a far more cautious picture, with gross profit now expected to be flat and operating profit lower than anticipated. The company made £2.1 billion in gross invoiced income in the year ending February 2025, with Microsoft sales accounting for around 50 per cent of its gross profit.
“The impact of changes to Microsoft enterprise incentives is weighted more to the first half due to high levels of renewals in March and April around the public sector year end and June around Microsoft’s year end,” BTG noted in a statement ahead of its AGM.
Why is Microsoft Doing This?
From Microsoft’s perspective, the shift is strategic. Reclaiming direct control over renewals allows it to improve pricing discipline, deepen customer relationships, and retain more margin, particularly at a time when the company is investing heavily in generative AI, including its Copilot tools for Microsoft 365, which are priced at $30 per user per month.
According to US Cloud, the move could deliver a 0.39 per cent annual EBITDA increase for Microsoft, which may sound modest but still adds measurable value to a business currently worth around $3 trillion.
Microsoft’s direct sales in EA accounts are rising fast, growing from $833 million in 2024 to an estimated $1.92 billion in 2025, and expected to reach $2.5 billion by 2026. By cutting commission payouts and increasing its direct footprint, the company is effectively reshaping its entire enterprise sales model.
A Reseller Role Rewritten (or Removed)
LSPs like BTG have spent decades building their businesses on the back of EA renewals, not just processing transactions but also guiding clients through complex licensing environments. Their role has often been compared to that of a financial adviser, providing independent insight and advocacy in negotiations.
“The analogy is losing your trusted financial advisor and being told to work directly with Wall Street,” said Mike Jones, president of US Cloud. “Sure, you’re cutting out the middleman, but you’re also losing valuable guidance.”
This advisory role, critics argue, can’t easily be replaced by Microsoft’s in-house teams, particularly for organisations that lack in-house licensing expertise. There’s concern that some enterprise clients may end up over-buying, under-utilising, or mismanaging licences as a result.
Restructuring to Survive
Faced with declining revenues, BTG and others are now rethinking their go-to-market strategies. For example, BTG has announced it is transitioning from a generalist sales approach to specialised, customer-segment-focused teams, a change it says will help it deliver more tailored solutions and build long-term service-based income.
However, that transition is likely to take time and come with risks. BTG’s CEO Sam Rudd acknowledged as much, stating: “In recent weeks, we’ve navigated a more challenging macro environment, compounded by the near-term effect of transforming our corporate sales team. While this has affected trading, our value proposition remains strong.”
Analysts are less confident. Indraneel Arampatta of Megabuyte said he suspects the changes in Microsoft’s partner model “are starting to bite,” adding that investors may be growing wary of BTG’s exposure to Microsoft and its ability to diversify.
Wider Implications for the Market
It should be noted that the situation is not unique to BTG. Similar providers across the UK, Europe, and North America are likely to be affected, especially those heavily reliant on Microsoft’s EA commissions. While some are already shifting towards managed services, cybersecurity, or cloud consultancy, not all will move fast enough to offset the financial loss.
This also raises questions about the future of Microsoft’s partner ecosystem. By sidelining its LSPs, the company risks alienating partners who have long championed its products and helped drive adoption at scale. In more complex environments such as hybrid cloud, AI implementation, or public sector transformations, trusted partners often play an indispensable role.
Some observers also warn of regulatory scrutiny. For example, Microsoft has already faced antitrust pressure in Europe over cloud licensing practices, and a further consolidation of sales control could draw additional attention from competition authorities.
Not All Businesses Will Benefit
While some enterprise clients may welcome direct engagement with Microsoft, it’s likely that others may struggle without LSP support. Navigating EA licensing terms, ensuring compliance, and optimising cost-efficiency can be daunting without expert guidance.
Also, while large organisations with in-house procurement and IT legal teams might manage, mid-sized businesses and public sector organisations could find the transition more difficult, especially as licensing complexity continues to increase alongside Microsoft’s evolving AI offerings.
Meanwhile, rivals such as Amazon Web Services and Google Cloud Platform may seek to capitalise on the disruption. LSPs looking to diversify may find receptive partners elsewhere, potentially shifting allegiances and deepening competition in the enterprise IT space.
What Does This Mean For Your Business?
What this means, in practice, is that a long-established revenue model for service providers is being dismantled at pace, while Microsoft tightens its grip on the most profitable parts of the enterprise customer lifecycle. The financial and operational consequences are already being felt, and the transition is unlikely to be smooth for most. Companies like BTG, with deep exposure to Microsoft licensing, now face a period of structural change where business models built around commission income must be replaced with higher-value services that take longer to scale. That puts pressure not only on margins but also on investor confidence, staffing, and client retention.
For UK businesses, particularly those without large internal IT procurement teams, the loss of hands-on licensing support could create some real challenges. The promise of simplified, direct relationships with Microsoft may sound appealing on paper, but the practical reality of negotiating large-scale EA renewals without experienced intermediaries may introduce risk and additional overheads. While some may adapt successfully, others could find themselves over-licensed, under-supported, or locked into costly configurations that don’t fully align with their needs.
For Microsoft, the short-term gains are measurable and aligned with its strategic goals. Greater pricing control, improved account oversight, and reduced channel leakage all strengthen its position, particularly as it looks to monetise AI offerings like Copilot and Azure-based services more aggressively. However, there is a risk that weakening partner engagement will erode long-term channel goodwill, which has historically underpinned Microsoft’s global reach and sustained competitive advantage.
The broader enterprise IT ecosystem also has a stake in this outcome. For example, if LSPs lose their relevance, the value of multi-vendor, consultative support in complex deployments may decline, or shift towards rival platforms. That creates an opening for Amazon, Google, and others to attract not just customers, but former Microsoft partners seeking more favourable terms. For regulators, meanwhile, the growing dominance of Microsoft’s direct sales model and its impact on channel diversity may increasingly warrant scrutiny.
Ultimately, Microsoft’s move is a calculated reshaping of its enterprise engagement model, but the disruption it causes is real and immediate for those in the channel. As LSPs rush to reinvent themselves, the winners will likely be those who can pivot quickly to new value propositions. The losers, by contrast, may be left watching as a decades-old business model slips quietly out of reach.
Tech News : Google’s Veo 3 Now Generates AI Audio (For Its AI Videos)
Google has launched Veo 3 (its most advanced video-generation AI yet) and for the first time, it can also create synced sound effects, ambient noise, and even dialogue to accompany the visuals.
From Silent Clips to Fully-Sounded Scenes
Announced at Google I/O 2025, the company’s annual developer conference, Veo 3 marks a significant leap in AI video generation by breaking the sound barrier. Unlike earlier models that produced silent clips requiring manual audio dubbing, Veo 3 natively generates both video and sound in response to user prompts. That includes environmental ambience, footsteps, character dialogue, and background music, all tightly synced with the generated visuals.
“For the first time, we’re emerging from the silent era of video generation,” said Demis Hassabis, CEO of Google DeepMind. “You can give Veo 3 a prompt describing characters and an environment, and suggest dialogue with a description of how you want it to sound.”
This appears to mark a clear departure from the static video outputs of Veo 2, which could render realistic 1080p clips but had no inbuilt audio functionality. Veo 3’s ability to generate both media types simultaneously is underpinned by multimodal training, allowing it to understand and translate visual scenes into contextually accurate sound.
Who Can Use Veo 3, And Where?
Veo 3 is now available through Google’s Gemini app for users subscribed to the AI Ultra plan, priced at $249.99 per month. As of now (early July), it’s rolling out across all countries where Gemini is active, including the UK and India. Users can access it via desktop or mobile and prompt the system using text, images, or a combination of both.
Up to 8 Seconds of Video With Audio
At launch, Veo 3 can generate up to 8 seconds of video with audio. For example, users can describe entire scenes, suggest character speech with tonal guidance (e.g. “a soft, nervous voice”), or request specific environmental sounds like birdsong, waves, or city traffic. Google says it plans to extend clip length and creative controls over time.
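To make the idea of a structured prompt concrete, here is a small Python sketch that assembles the kind of request described above (a scene, suggested dialogue with tonal guidance, and environmental sounds) into a single text prompt. Veo 3 accepts free-form text, so the layout and the `build_veo_prompt` helper are purely illustrative assumptions, not a documented Veo 3 prompt schema or API.

```python
def build_veo_prompt(scene, dialogue=None, tone=None, ambience=None):
    """Assemble a free-form text prompt combining a scene description,
    suggested character speech with tonal guidance, and ambient-sound cues.

    The structure is illustrative only: Veo 3 takes natural-language
    prompts, and this is just one plausible way to organise one.
    """
    parts = [scene]
    if dialogue:
        line = f'A character says: "{dialogue}"'
        if tone:
            line += f" in {tone}"
        parts.append(line + ".")
    if ambience:
        parts.append(f"Ambient audio: {', '.join(ambience)}.")
    return " ".join(parts)


prompt = build_veo_prompt(
    scene="A foggy harbour at dawn, slow camera pan across moored boats.",
    dialogue="We should have left an hour ago",
    tone="a soft, nervous voice",
    ambience=["lapping waves", "distant gulls", "creaking rope"],
)
print(prompt)
```

Keeping the visual, dialogue, and audio cues in distinct sentences like this mirrors the way Google's own examples separate "what you see" from "how it should sound", which tends to give a model clearer signals than one long run-on description.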
What’s New and Different?
The most notable change from Veo 2 is Veo 3's seamless integration of audio with video, something no other major model currently achieves at this level of fidelity and control. While earlier experiments with audio-generating AI exist, such as Meta's AudioCraft or Google's own SoundStorm, these tools typically treat sound and visuals as separate processes.
Veo 3, however, is built to generate both in parallel. It can understand raw video pixels and adjust audio timing accordingly, such as syncing a character’s footsteps with the terrain they walk on, or matching mouth movements to speech.
It also boasts significant improvements in visual realism. Google says Veo 3 now supports 4K resolution, more accurate physics, and refined prompt adherence. This means it’s better at understanding and sticking to the details users provide, even over multi-shot sequences involving actions and camera movements like pans or zooms.
Creators and Businesses
For video creators, advertisers, educators, and independent filmmakers, Veo 3 could remove one of the biggest barriers in AI content generation, namely having to source or manually create matching audio. With sound now generated natively, users can produce short-form content much faster, with minimal editing or post-production work.
For example, a marketing team could prompt Veo 3 to produce a product demo with a voiceover, or a teacher might generate an animated science explanation complete with relevant sound effects and narration.
Move to “Generative Cinema”
Google sees this as part of a broader shift toward “generative cinema,” where AI can help prototype, storyboard or even produce short-form entertainment. However, its reach could extend to gaming, AR/VR environments, and accessibility use cases such as auto-generating descriptive audio.
Google’s Position in a Crowded Field
Veo 3 arrives in an increasingly competitive video-generation space. For example, over the past year, tools like Runway Gen-3 Alpha, Pika Labs, Luma Dream Machine, and Alibaba’s EMO model have raised the bar for visual quality and scene consistency. However, very few models currently offer audio, and none do so at Veo 3’s level of native integration.
OpenAI’s Sora, which impressed with its photorealistic clips earlier this year, still outputs silent videos. While Runway allows users to add music and basic sound effects, this remains a separate, manually applied process. That gives Veo 3 a unique value proposition, at least for now.
Still, Google’s dominance is not guaranteed. As of now (July 2025), Veo 3’s capabilities are only available to high-paying subscribers through Gemini and haven’t yet been integrated into tools like YouTube Shorts, Google Ads, or enterprise APIs, though the company has confirmed that Veo 2 features are heading to the Vertex AI API in the coming weeks.
How Veo 3 Works
Though Google has not published technical papers on Veo 3, it builds on DeepMind’s earlier work in video-to-audio AI. In 2024, DeepMind revealed it was training models using paired video clips, ambient audio, and transcripts to learn audio-visual correlations. That foundational research likely informed Veo 3’s ability to match visual motion with appropriate audio output.
The model was almost certainly trained on large-scale datasets including YouTube material, though Google has not confirmed this publicly. DeepMind has said only that its models “may” use some YouTube content, raising questions about copyright and consent.
To address misuse risks, Veo 3 uses SynthID, Google’s proprietary watermarking system, which embeds invisible markers into every generated frame. It also includes visible watermarks for user-generated content and is subject to policy enforcement for unsafe or misleading material.
Criticism
Despite the impressive technology, it seems that Veo 3 has drawn scrutiny from some corners of the creative industry. For example, a 2024 study commissioned by the Animation Guild projected that AI tools like Veo could disrupt over 100,000 creative jobs in the US by 2026. Voice actors, sound designers, editors, and animators are among the roles most at risk.
Many artists also remain concerned about the lack of clarity around training data. Without formal consent or opt-out tools for creators on platforms like YouTube, Veo’s capabilities could be seen as drawing from (and replacing) the work of the very communities that power it.
Google says it is committed to responsible AI use and continues to test Veo with red-teaming exercises to identify abuse cases. It also relies on user feedback tools and policy enforcement to detect violations, though details on enforcement mechanisms remain limited.
That said, Veo 3’s creative potential is undeniable, and for businesses, creators, and Google’s own AI ambitions, it appears to mark a significant step forward in the race to multimodal dominance.
What Does This Mean For Your Business?
The arrival of Veo 3 appears to place Google at a clear technological advantage, at least temporarily, by addressing one of the most limiting aspects of AI video creation so far (i.e. the lack of audio). By combining video and sound generation into a single, prompt-driven process, it gives users far more flexibility and reduces the need for specialist editing tools or additional production stages. This will likely appeal to a wide range of professionals, from marketing teams to educators and indie content creators who want fast, realistic results without high production overheads.
For UK businesses in particular, the ability to generate short, full-sound videos in seconds could transform workflows across advertising, training, communications, and social media. SME marketing teams with limited budgets could produce explainers or campaign content in-house, while creative agencies may be able to build new service models around generative assets. However, the high monthly cost of access via Gemini’s AI Ultra plan may still limit uptake to larger firms or early adopters in creative sectors for now.
Competitively, Veo 3 puts pressure on OpenAI, Meta, and other major players who are still struggling to synchronise visuals and sound in a meaningful way. However, it also raises expectations. The moment Google delivers this feature set, users and clients may begin to assume it as standard. And as competitors catch up or release open-access alternatives, Google may need to expand Veo’s availability beyond Gemini and into more accessible developer platforms like Vertex AI or YouTube integrations.
The ethical questions are not going away either. Artists and voice professionals continue to challenge the use of training data that may have been scraped without consent. Even with SynthID watermarking, the risk of misuse or deepfake production remains a concern for regulators and rights-holders. Unless Google can offer greater transparency and clearer opt-out mechanisms, it may face mounting legal and reputational risks as adoption grows.
For now, though, Veo 3 appears to set a new benchmark in what multimodal AI tools can achieve. Whether it remains a premium creative niche or signals a broader shift in how visual content is produced will depend on how Google chooses to scale and integrate its technology in the months ahead.
Company Check : Microsoft Cuts 9,000 Jobs As AI Soars
Microsoft is laying off nearly 4 per cent of its global workforce as it pours billions into artificial intelligence infrastructure, triggering fresh questions over priorities and pressure points at one of the world’s biggest tech firms.
A Costly AI Pivot Brings Organisational Shake-Up
The US tech giant confirmed this week that around 9,000 jobs (i.e. approximately 4 per cent of its 228,000-strong global workforce) will be cut in the latest round of restructuring. The layoffs, which follow a 6,000-person reduction announced in May, are part of Microsoft’s efforts to streamline operations and manage the spiralling costs associated with its aggressive push into artificial intelligence (AI).
Adjustments
A Microsoft spokesperson said the company was “implementing organisational and workforce adjustments” to ensure teams are “best positioned for the future.” These changes include reducing management layers, simplifying internal processes, and consolidating teams and roles. The company also stated it aims to empower employees to “focus on meaningful work by leveraging new technologies and capabilities.”
While the job losses span multiple business units, reports indicate that Microsoft’s gaming division, sales teams, and international operations are among the hardest hit.
Betting on AI
At the heart of the cuts lies Microsoft’s extraordinary $80 billion capital expenditure plan for its 2025 fiscal year, most of which is being funnelled into AI infrastructure. That includes building out massive data centres and purchasing high-end chips to power services like its Copilot AI assistant and the broader integration of generative AI into tools such as Microsoft 365, Azure, and GitHub.
These moves reflect the company’s ambition to remain a leader in the AI arms race. For example, Microsoft is already the largest backer of OpenAI, the developer behind ChatGPT, and earlier this year hired DeepMind co-founder Mustafa Suleyman to head up a new AI division. CEO Satya Nadella has previously said AI will define the next era of computing, and Microsoft is positioning itself to be central to that transformation.
However, such ambition comes at a cost. For example, Microsoft’s cloud division, which includes Azure, is expected to see its profit margins shrink this quarter due to the steep capital outlay required to scale up AI services. This has prompted Microsoft to rebalance its operating model, trimming staff even as it invests heavily elsewhere.
Gaming Division Hit as Projects Cancelled
Although Microsoft has not publicly broken down the affected departments, reports (e.g. by The Verge and Bloomberg) appear to reveal significant disruption in its gaming business. For example, the company is reportedly shutting down ‘The Initiative’, a first-party studio behind the reboot of Perfect Dark, and cancelling the game’s development entirely. Another project, Everwild, is also understood to be shelved.
Studios including ZeniMax Online (makers of Elder Scrolls Online) and Turn 10 (known for Forza Motorsport) have also lost staff, while Barcelona-based King, part of the wider Microsoft Gaming division, is said to be cutting around 200 jobs, or 10 per cent of its workforce.
The gaming layoffs have raised concerns within the industry, particularly given Microsoft’s recent $69 billion acquisition of Activision Blizzard, completed in late 2023. Analysts say that while the company remains committed to gaming, the restructuring suggests a renewed focus on cost discipline and fewer experimental or long-gestation titles.
Sales and International Offices Also Affected
Beyond gaming, Microsoft also appears to be trimming back its sales organisation, particularly within its international teams. According to Washington state filings, more than 800 jobs will go in Redmond and Bellevue, two key hubs near Microsoft’s Seattle headquarters.
Other earlier reports also suggested that thousands of sales and customer service roles were under review as Microsoft looks to simplify go-to-market strategies and reduce duplication across territories. While Microsoft has not disclosed the exact breakdown, it confirmed that job losses are not limited to any one division or region.
A Wider Industry Pattern
It’s worth noting, however, that Microsoft is far from alone in recalibrating its workforce. For example, Meta, Google, and Amazon have all announced job cuts over the past year, despite maintaining strong revenues and investing heavily in AI. Meta recently confirmed plans to trim its “lowest-performing” 5 per cent, while Amazon’s Andy Jassy suggested that AI would “reduce the need” for corporate staff over time.
Microsoft’s latest round, though, has sparked fresh debate, particularly given the company’s strong financial position. Its stock remains near record highs, and demand for Azure and AI-linked services is surging.
Critics argue that cutting thousands of jobs while investing billions in unproven technologies may be short-sighted. “It’s hard to reconcile the scale of these layoffs with Microsoft’s healthy profits and booming stock price,” one former employee wrote on LinkedIn. “The AI race shouldn’t come at the expense of people’s livelihoods.”
There also appear to be concerns that the pace of AI infrastructure growth may outstrip customer demand. While Microsoft has pushed its Copilot AI across its software suite, uptake has been mixed. Some enterprise clients have voiced a preference for standalone tools like ChatGPT, citing cost and ease of use.
Implications for Businesses and Users
For Microsoft’s business customers, the company’s intense focus on AI could accelerate the availability of new productivity tools and cloud capabilities. Its goal of embedding generative AI across software like Outlook, Excel, and Teams promises significant efficiency gains, if widely adopted.
However, job losses across sales and customer support teams may also create short-term disruption, especially for small and mid-sized businesses that rely on personalised assistance. It’s possible, too, that a leaner organisational structure may slow responsiveness or delay product support in key markets.
Gaming users may also feel the impact. Microsoft has spent years trying to differentiate Xbox from rivals through exclusive titles and studio acquisitions. The cancellation of projects like Perfect Dark raises questions about the company’s creative roadmap, and whether its gaming strategy is still evolving or being scaled back.
Balancing Growth and Responsibility
Microsoft insists that the layoffs are necessary to “align its resources with strategic priorities” and adapt to a dynamic technology landscape. It’s clear, however, that the company is walking a fine line by trying to lead the AI revolution while avoiding the perception that it’s sacrificing stable jobs in the process.
With expectations running high across both the enterprise and consumer markets, Microsoft’s next challenge will be to prove that its AI investments can deliver real-world value, while maintaining the trust of its employees, users, and investors.
What Does This Mean For Your Business?
The real test for Microsoft will be whether its AI-led strategy delivers enough tangible business value to justify the level of disruption it is now inflicting. While the company remains profitable and well-positioned at the forefront of the AI sector, cutting 9,000 jobs (many in customer-facing and creative roles) risks damaging internal morale and external confidence. For UK businesses, this could mean less personalised support, slower response times, and uncertainty about future service structures, especially for smaller firms that depend on Microsoft’s cloud and productivity tools for day-to-day operations.
There is also a reputational cost to consider. For all the talk of long-term alignment and streamlined processes, this is the fourth round of cuts in a single year. That creates unease not just within Microsoft’s workforce, but across the tech industry more broadly. Partners and clients may begin to question how stable support structures will remain as Microsoft retools itself around AI. Even investors could grow wary if infrastructure spending continues to outpace revenue returns from products like Copilot and Azure AI.
None of this means Microsoft’s strategy is necessarily wrong. The company is doing what many others are attempting to do, pivoting towards what it believes will be the next great computing platform. However, the scale and speed of that pivot means it now faces pressure to show results quickly. If Microsoft can prove that its vast AI investments lead to genuinely better tools, improved business outcomes, and sustained growth, it may yet justify the cuts. If not, it could find itself having sacrificed stability and goodwill for a vision that was never as widely shared as it assumed.