Video Update : Another Massive Upgrade To Copilot – Already!
Copilot’s brand-new “Researcher Agent” is a pretty major upgrade, so this week’s Video-of-the-Week puts it through its paces and looks at what it can do for your business.
[Note – to watch this video without glitches or interruptions, it may be best to download it first]
Tech Tip – Use Outlook’s “Report” Button to Flag Suspicious Emails
Spot something ‘phishy’ in your inbox? Outlook’s built-in “Report” tool lets you quickly flag dodgy messages, and helps Microsoft improve detection.
How to:
– In the Outlook desktop app or web version, click on the email in your inbox to preview it in the Reading Pane — no need to open it fully.
– Click the Report button in the toolbar (sometimes labelled Junk or Phishing).
– Choose Phishing or Junk, depending on the content.
– The email will be flagged and moved out of your inbox.
Pro-Tip: Reporting dodgy messages helps train Microsoft’s filters and protects others in your organisation too.
Featured Article : Grok Blocked! Quarter Of EU Firms Ban Access
New research shows that one in four European organisations have banned Elon Musk’s Grok AI chatbot due to concerns over misinformation, data privacy and reputational risk, making it far more widely rejected than rival tools like ChatGPT or Gemini.
A Trust Gap Is Emerging in the AI Race
The findings from cybersecurity firm Netskope point to a growing shift in how European businesses are evaluating generative AI tools. While platforms like ChatGPT and Gemini continue to gain traction, Grok’s higher rate of rejection suggests that organisations are becoming more selective and are prioritising transparency, reliability and alignment with company values over novelty or brand recognition.
What Is Grok?
Grok is a generative AI chatbot developed by Elon Musk’s company xAI and built into X, the social media platform formerly known as Twitter. Marketed as a bold, “truth-seeking” alternative to mainstream AI tools, Grok is designed to answer user prompts in real time with internet-connected responses. However, a series of controversial and misleading outputs (along with a lack of transparency about how it handles user data and trains its model) have made many organisations wary of its use.
Grok’s Risk Profile Raises Red Flags
While most generative AI tools are being rapidly adopted in European workplaces, Grok appears to be the exception. For example, Netskope’s latest threat report reveals that 25 per cent of European organisations have now blocked the app at network level. In contrast, only 9.8 per cent have blocked OpenAI’s ChatGPT, and just 9.2 per cent have done the same with Google Gemini.
Content Moderation Issue
Part of the issue appears to lie in Grok’s content moderation, or lack thereof. For example, the chatbot has made headlines for spreading inflammatory and false claims, including the promotion of a “white genocide” conspiracy theory in South Africa and casting doubt on key facts about the Holocaust. These incidents appear to have deeply shaken confidence in the platform’s ethical safeguards and prompted scrutiny around how the model handles prompts, training data and user inputs.
Companies More Selective About AI Tools
Gianpietro Cutolo, a cloud threat researcher at Netskope, said the bans on Grok highlight a growing awareness of the risks linked to generative AI. As he explained, organisations are starting to draw clearer lines between different platforms based on how they handle security and compliance. “They’re becoming more savvy that not all AI is equal when it comes to data security,” he said, noting that concerns around reputation, regulation and data protection are now shaping AI adoption decisions.
Privacy and Transparency
Neil Thacker, Netskope’s Global Privacy and Data Protection Officer, believes the trend is indicative of a broader shift in how European firms assess digital tools. “Businesses are becoming aware that not all apps are the same in the way they handle data privacy, ownership of data that is shared with the app, or in how much detail they reveal about the way they train the model with any data that is shared within prompts,” he said.
This appears to be particularly relevant in Europe, where GDPR sets strict requirements on how personal and sensitive data can be used. Grok’s relative lack of clarity over what it does with user input, especially in enterprise contexts, appears to have tipped the scales for many firms.
It also doesn’t help that Grok is closely tied to X, a platform currently under EU investigation for failing to tackle disinformation under the Digital Services Act. The crossover has raised uncomfortable questions about how data might be shared or leveraged across Musk’s various companies.
Not The Only One Blocked
Despite its controversial reputation, it seems that Grok is far from alone in being blocked. The most blacklisted generative AI app in Europe is Stable Diffusion, an image generator from UK-based Stability AI, which is blocked by 41 per cent of organisations due to privacy and licensing concerns.
However, Grok’s fall from grace stands out because of how stark the contrast is with its peers. ChatGPT, for instance, remains by far the most widely used generative AI chatbot in Europe. Netskope’s report found that 91 per cent of European firms now use some form of cloud-based GenAI tool in their operations, suggesting that the appetite for AI is strong, but users are choosing carefully.
The relative trust in OpenAI and Google reflects the degree to which those platforms have invested in transparency, compliance documentation, and enterprise safeguards. Features such as business-specific data privacy settings, clearer disclosures on training practices, and regulated API access have helped cement their position as ‘safe bets’ in regulated industries.
Musk’s Reputation
There’s also a reputational issue at play, i.e. Elon Musk has become a polarising figure in both tech and politics, particularly in Europe. For example, Tesla’s EU sales dropped by more than 50 per cent year-on-year last month, with some industry analysts attributing the decline to Musk’s increasingly vocal support of far-right politicians and his role in the Trump administration.
It seems that the backlash may now be spilling over into his other ventures. Grok’s public branding as an unfiltered “truth-seeking” AI has been praised by some users, but in a European context, it risks triggering compliance concerns around hate speech, misinformation, and AI safety.
‘DOGE’ Link
Also, a recent Reuters investigation found that Grok is being quietly promoted within the US federal government through Musk’s (somewhat unpopular) Department of Government Efficiency (DOGE), thereby raising concerns over potential conflicts of interest and handling of sensitive data.
What Are Businesses Doing Instead?
With Grok now off-limits in one in four European organisations, it appears that most companies are leaning into AI platforms with clearer data control options and dedicated enterprise tools. For example, ChatGPT Enterprise and Microsoft’s Copilot (powered by OpenAI’s models) are increasingly popular among large firms for their security features, audit trails, and compatibility with existing workplace platforms like Microsoft 365.
Meanwhile, companies with highly sensitive data are now exploring private GenAI solutions, such as running open-source models like Llama or Mistral on internal infrastructure, or through secured cloud environments provided by AWS, Azure or Google Cloud.
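To make the ‘private GenAI’ option more concrete, here is a minimal sketch of running an open-source model entirely on an organisation’s own hardware using the Hugging Face transformers library, so that prompts and outputs never leave internal infrastructure. The model name, prompt and hardware settings are illustrative assumptions rather than a recommendation, and a real deployment would add access controls, logging and capacity planning.

```python
# Minimal sketch: running an open-source chat model locally with Hugging Face
# transformers. The model name is a placeholder; weights are downloaded once
# and all inference then happens on local hardware, with no external API calls.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative open-source model
    device_map="auto",                            # use local GPU(s) if available
)

prompt = "List three risks of pasting confidential client data into a public AI chatbot."
result = generator(prompt, max_new_tokens=200, do_sample=False)

print(result[0]["generated_text"])
```

The same pattern scales up to secured cloud environments, where the model runs inside the firm’s own AWS, Azure or Google Cloud tenancy rather than on a vendor’s shared service.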
Others are looking at AI governance platforms to sit between employees and GenAI tools, offering monitoring, usage tracking and guardrails. Tools like DataRobot, Writer, or even Salesforce’s Einstein Copilot are positioning themselves not just as generative AI providers, but as risk-managed AI partners.
At the same time, the backlash against Grok shows how quickly sentiment can shift. Musk’s original pitch for Grok as an edgy, tell-it-like-it-is alternative to Silicon Valley’s AI offerings found some traction among individual users. But in a business setting, particularly in Europe, compliance, reliability, and reputational alignment seem to matter more than iconoclasm.
Regulation Reshaping the Playing Field
The surge in bans against Grok also reflects a change in how generative AI is being governed and evaluated at the institutional level. Across Europe, regulators are moving to tighten rules on artificial intelligence, with the EU’s landmark AI Act expected to set a global precedent. This new framework categorises AI systems by risk level and could impose strict obligations on tools used in high-stakes environments like recruitment, finance, and public services.
That means tools like Grok, which are perceived to lack sufficient transparency or safety mechanisms, could face even greater scrutiny in the future. European firms are clearly starting to anticipate these regulatory pressures, and adjusting their AI strategies accordingly.
Grok’s Market Position May Be Out of Step
At the same time, the pattern of bans has implications for the competitive dynamics of the GenAI sector. For example, while OpenAI, Google and Microsoft have invested heavily in enterprise-ready versions of their chatbots, with controls for data retention, content filtering and auditability, Grok appears less geared towards business use. Its integration into a consumer social media platform and emphasis on uncensored responses make it an outlier in an increasingly risk-aware market.
Security and Deployment Strategies Are Evolving
There’s also a growing role for cloud providers and IT security teams in shaping how AI tools are deployed across organisations. Many companies are now turning to secure gateways, policy enforcement tools, or in some cases, completely air-gapped deployments of open-source models to ensure data stays within strict compliance boundaries. These developments suggest the AI market is maturing quickly, with an emphasis not only on innovation, but on operational control.
What Does This Mean For Your Business?
For UK businesses, the growing rejection of Grok highlights the importance of due diligence when selecting generative AI tools. With data privacy laws such as the UK GDPR still closely aligned with EU regulations, similar concerns around transparency, content reliability and compliance are just as relevant domestically. Organisations operating across borders, particularly those in regulated sectors like finance, healthcare or legal services, are likely to favour tools that not only perform well but also come with clear safeguards, documentation and support for enterprise-grade governance.
More broadly, the story of Grok is a reminder that in today’s AI landscape, branding and ambition are no longer enough. The success of generative AI tools increasingly depends on trust, i.e. trust in how data is handled, how outputs are generated, and how tools behave under pressure. For developers and vendors, that means security, transparency and adaptability must be built into the product from day one. For businesses, it means asking tougher questions before deploying any new tool into day-to-day operations.
While Elon Musk’s approach may continue to resonate with individual users who value unfiltered output or alignment with particular ideologies, enterprise buyers are clearly playing by a different rulebook. They’re looking for stability, accountability and risk management, not provocation. As regulation tightens, that divide is likely to widen.
Tech Insight : Why Google’s New ‘Fingerprint’ Policy Matters
In this Tech Insight, we look at Google’s controversial decision to allow advertisers to use device fingerprinting, exploring what the technology involves, why it has sparked concern, and what it means for users, businesses, and regulators.
A Policy Reversal
In February 2025, Google quietly updated its advertising platform rules, allowing companies that use its services to deploy a tracking method known as ‘device fingerprinting’. The change came with little fanfare but has quickly become one of the most debated privacy developments of the year.
Until now, fingerprinting was explicitly prohibited under Google’s policies. The company had long argued it undermined user control and transparency. In a 2019 blog post, Google described it as a technique that “subverts user choice and is wrong”. But five years later, the same practice is being positioned as a legitimate tool for reaching audiences on platforms where cookies no longer work effectively.
According to Google, the decision reflects changes in how people use the internet. For example, with more users accessing content via smart TVs, consoles and streaming devices, environments where cookies and consent banners are limited or irrelevant, fingerprinting offers advertisers a new way to track and measure campaign effectiveness. The company says it is also investing in “privacy-enhancing technologies” that reduce risks while still allowing ads to be targeted and measured.
However, the reaction from regulators, privacy campaigners and some in the tech community has been far from supportive.
What Is Fingerprinting?
Fingerprinting is a method of identifying users based on the technical details of their device and browsing setup. Unlike cookies, which store data on a user’s device, fingerprinting collects data that’s already being transmitted as part of normal web use.
This includes information such as:
– Browser version and type.
– Operating system and installed fonts.
– Screen size and resolution.
– Language settings and time zone.
– Battery level and available plugins.
– IP address and network information.
Individually, none of these data points reveals much, but when combined they can create a unique “fingerprint” that allows advertisers or third parties to recognise a user each time they go online, often without them knowing, and without a way to opt out.
Also, because it happens passively in the background, fingerprinting is hard to block. Even clearing cookies or browsing in private mode won’t prevent it. For privacy advocates, that’s a key part of the problem.
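To illustrate why fingerprinting is so hard to escape, the simplified sketch below combines a handful of the signals listed above and hashes them into a single identifier. It is a conceptual illustration only: the attribute values are invented, and real fingerprinting scripts run in the browser and draw on many more signals.

```python
# Conceptual sketch of device fingerprinting: individually common signals are
# combined and hashed into an identifier that survives cookie clearing.
import hashlib
import json

signals = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",  # invented values
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "language": "en-GB",
    "fonts": ["Arial", "Calibri", "Segoe UI"],
    "plugins": ["pdf-viewer"],
}

# Serialise deterministically, then hash. Nothing is stored on the device,
# which is why clearing cookies or using private browsing makes no difference.
canonical = json.dumps(signals, sort_keys=True)
fingerprint = hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(fingerprint[:16])
```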
How Fingerprinting Is Being Used
With third-party cookies disappearing and users browsing through everything from laptops to smart TVs, fingerprinting essentially offers a way for advertisers to maintain continuity, even when cookies and consent banners can’t keep up.
Advertisers use it to build persistent profiles that help with targeting, measurement, and fraud detection. In technical terms, it’s a highly efficient way to link impressions and conversions without relying on traditional identifiers.
Why Critics Are Alarmed
Almost immediately after Google’s announcement, a wave of criticism followed. For example, the UK’s independent data protection regulator, the Information Commissioner’s Office (ICO), called the move “irresponsible” and said it risks undermining the principle of informed consent.
In a December blog post, Stephen Almond, Executive Director of Regulatory Risk at the ICO, warned: “Fingerprinting is not a fair means of tracking users online because it is likely to reduce people’s choice and control over how their information is collected.”
The ICO has published draft guidance explaining that fingerprinting, like cookies, must comply with existing UK data laws. These include the UK GDPR and the Privacy and Electronic Communications Regulations (PECR). That means advertisers need to demonstrate transparency, secure user consent where required, and ensure users understand how their data is being processed.
The problem, critics say, is that fingerprinting makes this nearly impossible. The Electronic Frontier Foundation’s Lena Cohen described it as a “workaround to offering and honouring informed choice”. Mozilla’s Martin Thomson went further, saying: “By allowing fingerprinting, Google has given itself — and the advertising industry it dominates — permission to use a form of tracking that people can’t do much to stop.”
Google’s Justification
Google insists that fingerprinting is already widely used across the industry and that its updated policy simply reflects this reality. The company has argued that IP addresses and device signals are essential for preventing fraud, measuring ad performance, and reaching users on platforms where traditional tracking methods fall short.
In a statement, a Google spokesperson said: “We continue to give users choice whether to receive personalised ads, and will work across the industry to encourage responsible data use.”
Criticism From Privacy Campaigners
However, privacy campaigners argue that the decision puts business interests above users. They point out that fingerprinting isn’t just harder to detect, but it’s also harder to control. For example, unlike cookies, there’s no pop-up, no ‘accept’ or ‘reject’ button, and no straightforward way for users to opt out.
Pete Wallace, from advertising technology company GumGum, said the change represents a backwards step: “Fingerprinting feels like it’s taking a much more business-centric approach to the use of consumer data rather than a consumer-centric approach.”
Advertisers Welcome the Change
Unsurprisingly perhaps, many within the advertising industry welcomed Google’s decision. As the usefulness of cookies declines, brands are looking for alternative ways to reach users, especially across multiple devices.
For example, Jon Halvorson, Global VP at Mondelez International, said: “This update opens up more opportunities for the ecosystem in a fragmented and growing space while respecting user privacy.”
Trade bodies such as the IAB Tech Lab and Network Advertising Initiative echoed the sentiment, saying the update enables responsible targeting and better cross-device measurement.
That said, even among advertisers, there’s an awareness that the use of fingerprinting must be handled carefully. Some fear that if it is abused or poorly implemented, it could invite regulatory action, or worse, further erode user trust in the online ad industry.
Legal Responsibilities Under UK Law
For UK companies using Google’s advertising tools, the policy change doesn’t mean fingerprinting is suddenly risk-free. While Google’s own platform rules now allow the practice, UK data protection law still applies, and it’s strict.
For example, organisations planning to use fingerprinting must ensure their tracking methods are:
– Clearly explained to users, with full transparency.
– Proportionate to their purpose, and not excessive.
– Based on freely given, informed consent where applicable.
– Open to user control, including rights to opt out or request erasure.
The ICO has warned that fingerprinting, by its very nature, makes it harder to meet these standards. Because it often operates behind the scenes and without user awareness, it may not provide the level of transparency required under the UK GDPR and PECR, which is a significant challenge.
Therefore, any business using fingerprinting for advertising will need to demonstrate that it is not only aware of these rules, but fully compliant with them. Regulators have already signalled their willingness to act where necessary, and given Google’s influence, this policy change is likely to come under particular scrutiny.
The Reputational Risks Are Real
It should be noted, however, that while it’s effective, fingerprinting comes with serious downsides, especially for businesses operating in sensitive or highly regulated sectors. For example, since users often don’t know it’s happening, fingerprinting can undermine trust, even when it’s being used within legal boundaries.
For industries like healthcare, finance, or public services, silent tracking could prove more damaging than the data is worth. If customers feel they’ve been tracked without consent, the backlash, whether legal, reputational or both, can be swift.
Fragmentation Across the Ecosystem
Another practical challenge is that fingerprinting isn’t supported equally across platforms. While Google has now allowed it within its ad systems, others have gone in the opposite direction.
For example, browsers like Safari, Firefox and Brave actively block or limit fingerprinting. Apple in particular has built its privacy credentials around restricting such practices. This means advertisers relying heavily on fingerprinting could see patchy results or data gaps depending on the devices or browsers their audiences are using.
Part of a Broader Toolkit
It’s worth remembering here that fingerprinting isn’t the only tool on the table. Many ad tech providers are combining it with alternatives such as:
– Contextual targeting: Showing ads based on the content you’re looking at (e.g. showing travel ads on a travel blog).
– First-party data: Information a company collects directly from you, like your purchase history or website activity, not from third parties.
– On-device processing: Data is analysed on your phone or computer, never sent to a central server.
– Federated learning: Your device trains a model (like for ad targeting or recommendations), and only anonymised updates are shared, not your personal data.
Therefore, rather than replacing cookies outright, fingerprinting may end up as just one option in a mixed strategy, used selectively where consent is hard to obtain or where traditional identifiers are unavailable.
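As a rough sketch of the federated learning idea mentioned in the list above, the toy example below has each ‘device’ compute a model update from its own local data, with only the updates (never the data itself) sent back and averaged. It illustrates the general federated averaging pattern, not any vendor’s actual implementation.

```python
# Toy federated averaging: each "device" trains on data that never leaves it,
# and a central server only ever sees the averaged model updates.
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step on a simple least-squares objective."""
    X, y = local_data
    gradient = X.T @ (X @ weights - y) / len(y)
    return weights - lr * gradient

rng = np.random.default_rng(0)
global_weights = np.zeros(3)

# Each tuple stands in for private data held on a separate device.
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

for _ in range(10):
    updates = [local_update(global_weights, data) for data in devices]
    global_weights = np.mean(updates, axis=0)  # aggregate updates, not raw data

print(global_weights)
```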
What Does This Mean for Your Business?
For UK businesses, Google’s reintroduction of fingerprinting within its advertising ecosystem may offer more stable tracking across devices and platforms, especially as third-party cookies continue to decline. However, the use of such techniques also brings legal and reputational risks that cannot be delegated to Google or any external platform.
Organisations that advertise online, whether directly or through agencies, should now assess how fingerprinting fits within their broader compliance obligations under UK data protection law. The Information Commissioner’s Office has made it clear that fingerprinting is subject to the same principles of transparency, consent, and fairness as other tracking methods. Simply using a tool because it is technically available does not make its use lawful.
Beyond legal considerations, there’s also a growing risk to customer trust. For example, if users discover that they are being tracked through methods they cannot see, manage or decline, the damage to a brand’s credibility could be significant, particularly in sectors where data sensitivity is high. For many organisations, the question may not just be whether fingerprinting can improve ad performance, but whether it aligns with the expectations of their audience and the values they wish to uphold.
This change also places pressure on advertisers, platforms, and regulators to clarify the boundaries of responsible data use. For some, fingerprinting may form part of a wider privacy-aware strategy that includes contextual targeting or consent-based identifiers. For others, it may prove too opaque or contentious to justify. Either way, businesses will need to make informed decisions, and be ready to explain them.
Tech News : Fastest Change In Tech History
The pace and scale of artificial intelligence (AI) development is now outstripping every previous tech revolution, according to new landmark reports.
Faster Than Anything We’ve Seen Before
Some of the latest data confirms that AI really is moving faster than anything that’s come before it. That’s the key message from recent high-profile reports including Mary Meeker’s new Trends – Artificial Intelligence report and Stanford’s latest AI Index, both released in spring 2025. Together, the data they present highlights an industry surging ahead at a speed that’s catching even seasoned technologists off guard.
Meeker, the influential venture capitalist once dubbed “Queen of the Internet”, hasn’t published a trends report since 2019 but it seems that the extraordinary pace of AI progress has lured her back, and her new 340-page analysis uses the word “unprecedented” more than 50 times (with good reason).
“Adoption of artificial intelligence technology is unlike anything seen before in the history of computing,” Meeker writes. “The speed, scale, and competitive intensity are fundamentally reshaping the tech landscape.”
Stanford’s findings echo this. For example, its 2025 AI Index Report outlines how generative AI in particular has catalysed a rapid transformation, with advances in model size, performance, use cases, and user uptake occurring faster than academic and policy communities can track.
The Numbers That Prove the Surge
In terms of users, OpenAI’s ChatGPT generative AI chatbot hit 100 million users in two months and it’s now approaching 800 million monthly users just 17 months after launch. No platform in history has scaled that quickly – not Google, not Facebook, not TikTok.
Business adoption of AI is rising rapidly. For example, according to Stanford’s AI Index 2025, more than 70 per cent of surveyed global companies are now either actively deploying or exploring the use of generative AI. This represents a significant increase from fewer than 10 per cent just two years earlier. At the same time, worldwide investment in AI reached $189 billion in 2023, with technology firms allocating record levels of funding to infrastructure, research, and product development.
Cost of Accessing AI Falling
It seems that the cost of accessing AI services is also falling sharply. For example, Meeker’s Trends – Artificial Intelligence report notes that inference costs, i.e. the operational cost of running AI models, have declined by a massive 99.7 per cent over the past two years. Based on Stanford’s calculations, this means that businesses are now able to access advanced AI capabilities at a fraction of the price paid in 2022.
What’s Driving This Acceleration?
Several factors are converging at once to drive this acceleration. These are:
– Hardware efficiency leaps. Nvidia’s 2024 Blackwell GPU reportedly uses 105,000x less energy per token than its 2014 Kepler chip! At the same time, custom AI chips from Google (TPU), Amazon (Trainium), and Microsoft (Athena) are rapidly improving performance and slashing energy use.
– Cloud hyperscale investment. The world’s biggest tech firms are betting big on AI infrastructure. Microsoft, Amazon, and Google are all racing to expand their cloud platforms with AI-specific hardware and software. As Meeker puts it, “These aren’t side projects — they’re foundational bets.”
– Open-source momentum. Hugging Face, Mistral, Meta’s LLaMA, and a host of Chinese labs are releasing increasingly powerful open-source models. This is democratising access, increasing competition, and reducing costs — all of which accelerate adoption.
– Government and sovereign AI initiatives. National efforts, particularly in China and the EU, are helping to fund AI infrastructure and drive localisation. These projects are pushing innovation outside Silicon Valley at a rapid pace.
– Developer ecosystem growth. Millions of developers are now building on top of generative AI APIs. Google’s Gemini, OpenAI’s GPT, Anthropic’s Claude, and others have created platforms where innovation compounds rapidly. As Stanford notes, “Industry now outperforms academia on nearly every AI benchmark.”
AI Agents – From Chat to Task Execution
One major change in the past year has been the move beyond simple chatbot interfaces. For example, so-called “AI agents”, i.e. systems that can plan and carry out multi-step tasks, are emerging quickly. This includes tools that can search the web, book travel, summarise documents, or even write and run code autonomously.
Companies like OpenAI, Google DeepMind, and Adept are racing to build these agentic systems. The goal is to create AI that can do, not just respond. This could fundamentally change knowledge work, and is already being trialled in areas like customer service, legal research, and software testing.
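To show what ‘agentic’ behaviour means in practice, here is a deliberately simplified plan-and-execute loop: a planner decides which tool to call next, the program runs it, and the result is fed back in until the task is complete. The tools and the stubbed planner are invented for illustration; in the real systems named above, the planning step is performed by a large language model and the toolset is far richer.

```python
# A deliberately simplified "AI agent" loop: plan a step, execute a tool,
# feed the observation back, and repeat until the goal is judged complete.
# The planner is a stub; a real agent would ask an LLM to choose the next tool.

def search_web(query: str) -> str:
    return f"(pretend search results for: {query})"

def write_summary(text: str) -> str:
    return f"Summary based on {len(text)} characters of gathered material."

TOOLS = {"search_web": search_web, "write_summary": write_summary}

def plan_next_step(goal: str, history: list) -> dict:
    """Stub planner: search first, then summarise, then stop."""
    if not history:
        return {"tool": "search_web", "input": goal}
    if len(history) == 1:
        return {"tool": "write_summary", "input": history[-1]}
    return {"tool": None, "input": None}  # task finished

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step["tool"] is None:
            break
        observation = TOOLS[step["tool"]](step["input"])
        history.append(observation)
    return history

print(run_agent("Find recent coverage of the EU AI Act"))
```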
The Message
For businesses, the message appears to be that there is a need to adapt quickly, or risk falling behind.
Meeker’s report emphasises that AI is already “redefining productivity”, with tools delivering step changes in output for tasks like drafting, data analysis, code generation, and document processing. Many enterprise users report 20–40 per cent efficiency gains when integrating AI into daily workflows.
However, it’s not just about performance. Falling costs and rising model capabilities mean that AI is becoming accessible to even small businesses, not just tech giants. Whether it’s automating customer support or generating marketing copy, SMEs now have access to tools that rival those of major players.
From a market perspective, however, things are less clear-cut. While revenue is rising – OpenAI is projected to hit $3.4 billion in 2025, up from around $1.6 billion last year – most AI firms are still burning through capital at unsustainable rates.
Also, training large models is very expensive. GPT-4, for example, reportedly cost $78 million just to train, and newer models will likely exceed that. As Meeker cautions: “Only time will tell which side of the money-making equation the current AI aspirants will land.”
Challenges, Criticism, and Growing Pains
Despite the enthusiasm, not everything is rosy. The pace of AI’s rise has sparked a host of issues, such as:
– Energy use and environmental impact. Training and running AI models consumes vast amounts of electricity. Even with hardware improvements, Stanford warns of “significant sustainability challenges” as model sizes increase.
– AI misuse and disinformation. The Stanford report logs a steep rise in reported AI misuse incidents, particularly involving deepfakes, scams, and electoral disinformation. Regulatory frameworks remain patchy and reactive.
– Labour market upheaval. Stanford data shows a clear impact on job structures, particularly in content-heavy and administrative roles. While AI augments some jobs, it also displaces others, and workers, employers, and policymakers are struggling to keep up.
– Profitability concerns. While AI infrastructure is growing rapidly, it’s not yet clear which companies will convert hype into long-term revenue. Even the most well-funded players face stiff competition, regulatory scrutiny, and the risk of market saturation.
What Does This Mean For Your Business?
It seems that the combination of surging adoption, falling costs, and rising capability is placing AI at the centre of digital transformation efforts across nearly every sector. For global businesses, the incentives to engage with AI tools are growing rapidly, with productivity benefits now being demonstrated at scale. At the same time, the pace of change is creating new risks that still lack clear long-term responses, particularly around workforce disruption, misuse, and unsustainable infrastructure demands.
For UK businesses, the implications are becoming increasingly difficult to ignore. As global competitors embed AI into operations, decision-making, and service delivery, organisations that delay may struggle to keep pace. At the same time, the availability of open-source models and accessible APIs means that smaller firms and startups are also in a position to benefit, if they can navigate the complexity and choose the right tools. Key sectors such as financial services, legal, healthcare, and logistics are already seeing early AI-driven efficiencies, and pressure is mounting on others to follow suit.
Policy makers, regulators, and infrastructure providers also have critical roles to play. Whether it is through ensuring fair access to computing resources, investing in AI literacy and skills, or designing governance frameworks that can evolve with the technology, stakeholders across the economy will need to respond quickly and collaboratively. While the financial picture remains uncertain, what is now clear is that AI is no longer a frontier science, but is a core driver of technological change, and one that is advancing at a pace few expected.
Tech News : Gmail Now Summarises Emails Automatically
Gmail users will now see AI-generated summary cards appear by default at the top of long emails, thanks to an automatic update to Google’s Gemini assistant.
Google Doubles Down on Inbox AI
Google has announced that as of 29 May 2025, its Gemini artificial intelligence (AI) assistant will automatically summarise long email threads in Gmail, without waiting for a prompt or tap from the user. The update, initially rolling out to mobile users on Android and iOS, is part of a move towards integrating AI more seamlessly (and visibly) into everyday productivity tools.
Until now, users could choose to trigger a summary by tapping a button labelled “Summarise this email.” However, with this change, Gemini summary cards will start appearing by default on eligible emails, unless the user has opted out of smart features or is in a region where they are disabled by default.
The move by Google could be seen as less of a visual tweak, and more of a subtle but significant change in the relationship between users, their inboxes, and Google’s AI.
What Is Gemini, and Why Does It Matter?
Gemini is Google’s suite of generative AI tools, positioned as a direct competitor to Microsoft Copilot and other AI assistants. It spans across multiple Google Workspace apps including Docs, Sheets, and Gmail, offering assistance with drafting content, summarising information, and generating replies.
Originally introduced under the “Duet AI” brand in 2023, Gemini was rebranded and expanded in early 2024 as part of Google’s wider AI push. Its integration into Gmail’s side panel was one of the first widely adopted use cases, giving users access to email-specific tools such as summarising lengthy threads and drafting replies using natural language.
Up to now, Gemini’s role in Gmail has largely been opt-in, with users having to initiate actions themselves.
From Passive Tool to Active Assistant
With the new update, Gemini becomes more assertive. For example, long or complex emails, especially those that form part of back-and-forth threads, will now automatically display a summary card at the top of the message. The card outlines the key points of the conversation so far and will update dynamically as new replies come in.
Google says this move is intended to save users time and reduce email fatigue, a problem that has long plagued busy professionals. For example, according to a 2024 McKinsey report, workers still spend around 28 per cent of their workweek reading and responding to emails. Google is, therefore, betting that AI summaries can streamline this process, especially on mobile, where skimming a long message chain is often more tedious.
In an announcement on its Workspace Updates blog, Google said the feature “will synthesise all the key points from the email thread, and any replies thereafter will also be a part of the synopsis, keeping all summaries up to date.”
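Gmail’s summary cards are generated inside Google’s own infrastructure, so there is nothing for businesses to integrate, but the same kind of thread summarisation can be sketched against Google’s public Gemini API. The snippet below is an illustration only, using the google-generativeai Python SDK; the model name, prompt and example thread are assumptions, and this is not how the Gmail feature itself is implemented.

```python
# Illustration only: summarising an email thread with Google's public Gemini
# API via the google-generativeai SDK. This is separate from Gmail's built-in
# summary cards; the model name and prompt are assumptions.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model choice

thread = """
Alice: Can we move Friday's supplier review to 2pm?
Bob: 2pm works, but we still need the revised pricing sheet.
Alice: I'll chase procurement and share it before the meeting.
"""

response = model.generate_content(
    "Summarise the key points and outstanding actions in this email thread:\n" + thread
)
print(response.text)
```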
Who Gets It and When?
The feature began rolling out on 29 May 2025 to Rapid Release domains and is now gradually being deployed across Scheduled Release domains over a 15-day window. It’s available to the following Google Workspace editions:
– Business Starter, Standard, and Plus.
– Enterprise Starter, Standard, and Plus.
– Google One AI Premium.
– Gemini Business and Enterprise customers (existing add-ons).
– Gemini Education and Education Premium add-on users.
The feature is currently limited to English-language emails, and Google has not yet announced support for other languages.
Smart Features
Importantly, Gemini summary cards are only visible to users who have smart features and personalisation turned on in Gmail. These settings control whether Google can use AI to offer tailored features based on content in a user’s inbox.
In some regions, including the UK, EU, Switzerland, and Japan, smart features are turned off by default due to local data protection laws. Users in these areas would need to manually enable the feature in Gmail’s settings to start seeing the summary cards.
How to Opt Out or Take Back Control
For users who’d rather not have Gemini skimming their emails on their behalf, there are ways to disable the feature. For example, users can:
– Go to Gmail Settings > See all settings > Smart features and personalisation.
– Toggle off “Smart features” to prevent summary cards and other AI-based tools from appearing.
– Disable “Smart features in Gmail, Chat and Meet” for more comprehensive opt-out control.
Admins of Google Workspace domains can also manage these settings at a policy level from the Admin Console, giving organisations central control over the feature’s rollout.
It’s worth noting here that, even with the automatic summaries in place, the manual “Summarise this email” chip remains available, both at the top of eligible emails and in the Gemini side panel. This means that users who want to selectively invoke AI help can still do so.
Automation or Overreach?
While Google pitches the change as a productivity boost, not everyone is celebrating the move. For example, one key concern is accuracy. AI summaries, particularly those generated in real time from nuanced human conversations, are notoriously hit-and-miss. Even Google’s own AI Overviews in Search have come under fire for offering incorrect or misleading answers, as recently highlighted in a series of viral screenshots on social media.
Google’s not alone in being criticised for this. For example, it’s also been reported that Apple’s push-notification summaries, based on similar AI technology, repeatedly misinterpreted news headlines. Apple has since paused that feature for news apps, pending a fix.
It seems that a similar level of scepticism now surrounds Gmail’s automatic summaries. Critics argue that important context can easily be lost or misrepresented by an AI synopsis, especially in complex or emotionally nuanced threads.
Dr Jenna McCarthy, a digital communications researcher at the University of Manchester, highlighted the risk: “This kind of automation risks giving people a false sense of understanding,” she said, adding that “Summaries might look slick, but in business or legal emails, the devil is often in the detail.”
It’s worth noting here that Google itself appears to acknowledge this limitation. For example, in its support documentation, the company stresses that the summaries are meant to complement human reading, not replace it.
Privacy and Trust Still Under Scrutiny
Alongside concerns about accuracy, privacy remains a hot topic. Although Google insists that all AI interactions respect user data protection rules and don’t expose personal content to human reviewers, the idea of automated scanning, even for benign purposes like summarising, may raise some eyebrows among privacy-conscious users.
Google directs users to its Privacy Hub for more information, but as with other AI features, transparency is key. Users are likely to expect more clarity around how data is used, stored, and processed when features like this are switched on by default.
Part of a Move Towards Embedded AI
Google’s update also reflects a broader industry direction, i.e. AI tools are increasingly moving from optional add-ons to proactive, built-in features. Rather than waiting for user prompts, systems like Gemini are starting to anticipate needs and take action automatically.
In Google’s case, the aim appears to be to create a more seamless experience across Workspace, where AI quietly handles repetitive or time-consuming tasks like summarising threads, without disrupting the user’s workflow. This aligns with recent updates across other Workspace apps, where Gemini is being positioned as a default productivity layer rather than a separate tool.
However, the effectiveness of this approach will depend heavily on how much trust users place in the AI’s accuracy and judgement—and how much control they feel they still have over their own inbox.
What Does This Mean For Your Business?
While the arrival of automatic Gemini summaries may seem like a small design tweak, the implications actually go much deeper. By removing the need for users to actively request a summary, Google is signalling a shift towards AI that no longer waits in the wings, but steps forward by default. For some, that may be welcome, especially for those managing high volumes of email who are eager to shave precious minutes off their working day. However, for others, the change may raise fresh concerns around trust, data processing, and the growing opacity of algorithmic decision-making in everyday tools.
For UK businesses, the move could offer real productivity gains, particularly in fast-paced environments where clarity and speed of communication are key. Admins can tailor how the feature is used across teams, allowing for top-down management of when and where AI steps in. But the benefits must be weighed carefully against the risks, especially when dealing with sensitive conversations, contractual details, or any context where nuance really matters. There is a clear responsibility on organisations to communicate how these features work, and to ensure staff feel confident in knowing when to rely on AI and when to override it.
It’s also likely to prompt fresh conversations among regulators, particularly in the UK and across Europe where smart features are already turned off by default. The tension between helpful automation and meaningful consent is growing sharper as more tools cross that line from optional to ambient. For users, the key will be staying informed, knowing not just what AI is doing, but how to retain agency and control in the process.
Ultimately, Gemini’s automatic summaries are part of a broader evolution in how AI is being woven into our daily workflows. The question now is not just whether the technology works, but whether people trust it enough to let it work for them.