Tech Insight : Why Google’s New ‘Fingerprint’ Policy Matters
In this Tech Insight, we look at Google’s controversial decision to allow advertisers to use device fingerprinting, exploring what the technology involves, why it has sparked concern, and what it means for users, businesses, and regulators.
A Policy Reversal
In February 2025, Google quietly updated its advertising platform rules, allowing companies that use its services to deploy a tracking method known as ‘device fingerprinting’. The change came with little fanfare but has quickly become one of the most debated privacy developments of the year.
Until now, fingerprinting was explicitly prohibited under Google’s policies. The company had long argued it undermined user control and transparency. In a 2019 blog post, Google described it as a technique that “subverts user choice and is wrong”. But five years later, the same practice is being positioned as a legitimate tool for reaching audiences on platforms where cookies no longer work effectively.
According to Google, the decision reflects changes in how people use the internet. For example, with more users accessing content via smart TVs, consoles and streaming devices (environments where cookies and consent banners are limited or irrelevant), fingerprinting offers advertisers a new way to track users and measure campaign effectiveness. The company says it is also investing in “privacy-enhancing technologies” that reduce risks while still allowing ads to be targeted and measured.
However, the reaction from regulators, privacy campaigners and some in the tech community has been far from supportive.
What Is Fingerprinting?
Fingerprinting is a method of identifying users based on the technical details of their device and browsing setup. Unlike cookies, which store data on a user’s device, fingerprinting collects data that’s already being transmitted as part of normal web use.
This includes information such as:
– Browser version and type.
– Operating system and installed fonts.
– Screen size and resolution.
– Language settings and time zone.
– Battery level and available plugins.
– IP address and network information.
Individually, none of these data points reveals much, but when combined they can create a unique “fingerprint” that allows advertisers or third parties to recognise a user each time they go online, often without their knowledge and without a way to opt out.
Also, because it happens passively in the background, fingerprinting is hard to block. Even clearing cookies or browsing in private mode won’t prevent it. For privacy advocates, that’s a key part of the problem.
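To make the mechanics more concrete, here is a minimal Python sketch of the general idea, combining a handful of invented device signals into a single identifier. The attribute names and values are illustrative assumptions, not any vendor’s actual implementation, and real fingerprinting scripts gather these signals from browser APIs or HTTP headers.

```python
import hashlib

def device_fingerprint(signals: dict) -> str:
    """Combine individually weak device signals into one stable identifier.

    Each value on its own reveals little, but the combination is often
    distinctive enough to recognise the same device on return visits,
    with nothing stored on the device itself.
    """
    # Sort the keys so the same device always yields the same string
    canonical = "|".join(f"{key}={signals[key]}" for key in sorted(signals))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative values only; a real script would read these from the browser
# (navigator, screen and timezone APIs) or from HTTP request headers.
example_signals = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "language": "en-GB",
    "fonts": "Arial,Calibri,Segoe UI",
    "ip_prefix": "203.0.113",
}

print(device_fingerprint(example_signals))  # same inputs, same ID - no cookie required
```

Clearing cookies changes nothing here, because nothing was stored in the first place; only changing the underlying signals (browser, device, network) would alter the result.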
How Fingerprinting’s Being Used
With third-party cookies disappearing and users browsing through everything from laptops to smart TVs, fingerprinting essentially offers a way for advertisers to maintain continuity, even when cookies and consent banners can’t keep up.
Advertisers use it to build persistent profiles that help with targeting, measurement, and fraud detection. In technical terms, it’s a highly efficient way to link impressions and conversions without relying on traditional identifiers.
Why Critics Are Alarmed
Almost immediately after Google’s announcement, a wave of criticism followed. For example, the UK’s independent data protection regulator, the Information Commissioner’s Office (ICO), called the move “irresponsible” and said it risks undermining the principle of informed consent.
In a December blog post, Stephen Almond, Executive Director of Regulatory Risk at the ICO, warned: “Fingerprinting is not a fair means of tracking users online because it is likely to reduce people’s choice and control over how their information is collected.”
The ICO has published draft guidance explaining that fingerprinting, like cookies, must comply with existing UK data laws. These include the UK GDPR and the Privacy and Electronic Communications Regulations (PECR). That means advertisers need to demonstrate transparency, secure user consent where required, and ensure users understand how their data is being processed.
The problem, critics say, is that fingerprinting makes this nearly impossible. The Electronic Frontier Foundation’s Lena Cohen described it as a “workaround to offering and honouring informed choice”. Mozilla’s Martin Thomson went further, saying: “By allowing fingerprinting, Google has given itself — and the advertising industry it dominates — permission to use a form of tracking that people can’t do much to stop.”
Google’s Justification
Google insists that fingerprinting is already widely used across the industry and that its updated policy simply reflects this reality. The company has argued that IP addresses and device signals are essential for preventing fraud, measuring ad performance, and reaching users on platforms where traditional tracking methods fall short.
In a statement, a Google spokesperson said: “We continue to give users choice whether to receive personalised ads, and will work across the industry to encourage responsible data use.”
Criticism From Privacy Campaigners
However, privacy campaigners argue that the decision puts business interests above users. They point out that fingerprinting isn’t just harder to detect than cookies; it’s also harder to control. For example, unlike cookies, there’s no pop-up, no ‘accept’ or ‘reject’ button, and no straightforward way for users to opt out.
Pete Wallace, from advertising technology company GumGum, said the change represents a backwards step: “Fingerprinting feels like it’s taking a much more business-centric approach to the use of consumer data rather than a consumer-centric approach.”
Advertisers Welcome the Change
Perhaps unsurprisingly, many within the advertising industry welcomed Google’s decision: as the usefulness of cookies declines, brands are looking for alternative ways to reach users, especially across multiple devices.
For example, Jon Halvorson, Global VP at Mondelez International, said: “This update opens up more opportunities for the ecosystem in a fragmented and growing space while respecting user privacy.”
Trade bodies such as the IAB Tech Lab and Network Advertising Initiative echoed the sentiment, saying the update enables responsible targeting and better cross-device measurement.
That said, even among advertisers, there’s an awareness that the use of fingerprinting must be handled carefully. Some fear that if it is abused or poorly implemented, it could invite regulatory action, or worse, further erode user trust in the online ad industry.
Legal Responsibilities Under UK Law
For UK companies using Google’s advertising tools, the policy change doesn’t mean fingerprinting is suddenly risk-free. While Google’s own platform rules now allow the practice, UK data protection law still applies, and it’s strict.
For example, organisations planning to use fingerprinting must ensure their tracking methods are:
– Clearly explained to users, with full transparency.
– Proportionate to their purpose, and not excessive.
– Based on freely given, informed consent where applicable.
– Open to user control, including rights to opt out or request erasure.
The ICO has warned that fingerprinting, by its very nature, makes it harder to meet these standards. Because it often operates behind the scenes and without user awareness, providing the level of transparency required under the UK GDPR and PECR is a significant challenge.
Therefore, any business using fingerprinting for advertising will need to demonstrate that it is not only aware of these rules, but fully compliant with them. Regulators have already signalled their willingness to act where necessary, and given Google’s influence, this policy change is likely to come under particular scrutiny.
The Reputational Risks Are Real
It should be noted, however, that while it’s effective, fingerprinting comes with serious downsides, especially for businesses operating in sensitive or highly regulated sectors. For example, since users often don’t know it’s happening, fingerprinting can undermine trust, even when it’s being used within legal boundaries.
For industries like healthcare, finance, or public services, silent tracking could prove more damaging than the data is worth. If customers feel they’ve been tracked without consent, the backlash, whether legal, reputational or both, can be swift.
Fragmentation Across the Ecosystem
Another practical challenge is that fingerprinting isn’t supported equally across platforms. While Google has now allowed it within its ad systems, others have gone in the opposite direction.
For example, browsers like Safari, Firefox and Brave actively block or limit fingerprinting. Apple in particular has built its privacy credentials around restricting such practices. This means advertisers relying heavily on fingerprinting could see patchy results or data gaps depending on the devices or browsers their audiences are using.
Part of a Broader Toolkit
It’s worth remembering here that fingerprinting isn’t the only tool on the table. Many ad tech providers are combining it with alternatives such as:
– Contextual targeting: Showing ads based on the content you’re looking at (e.g. showing travel ads on a travel blog) – see the short sketch after this list.
– First-party data: Information a company collects directly from you, like your purchase history or website activity, not from third parties.
– On-device processing: Data is analysed on your phone or computer, never sent to a central server.
– Federated learning: Your device trains a model (like for ad targeting or recommendations), and only anonymised updates are shared, not your personal data.
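As a rough illustration of the first of those alternatives, the toy Python sketch below picks an ad category purely from the words on a page, with no user identifier involved. The keyword lists and scoring are invented for the example and are far cruder than production content classifiers.

```python
# Toy contextual targeting: choose an ad category from the words on the page,
# using no profile, cookie or fingerprint. Keyword lists are invented for
# illustration; real systems use far richer content classification.
CATEGORY_KEYWORDS = {
    "travel":  {"flight", "hotel", "beach", "itinerary", "passport"},
    "finance": {"savings", "mortgage", "interest", "pension", "isa"},
    "tech":    {"laptop", "software", "cloud", "browser", "gadget"},
}

def pick_ad_category(page_text: str) -> str:
    words = set(page_text.lower().split())
    scores = {cat: len(words & keywords) for cat, keywords in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"

print(pick_ad_category("Compare hotel and flight deals for your next beach holiday"))
# -> "travel": the ad is matched to the content, not to the reader
```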
Therefore, rather than replacing cookies outright, fingerprinting may end up as just one option in a mixed strategy, and used selectively where consent is hard to obtain, or where traditional identifiers are unavailable.
What Does This Mean for Your Business?
For UK businesses, the reintroduction of fingerprinting within Google’s advertising ecosystem may offer more stable tracking across devices and platforms, especially as third-party cookies continue to decline. However, the use of such techniques also brings legal and reputational risks that cannot be delegated to Google or any external platform.
Organisations that advertise online, whether directly or through agencies, should now assess how fingerprinting fits within their broader compliance obligations under UK data protection law. The Information Commissioner’s Office has made it clear that fingerprinting is subject to the same principles of transparency, consent, and fairness as other tracking methods. Simply using a tool because it is technically available does not make its use lawful.
Beyond legal considerations, there’s also a growing risk to customer trust. For example, if users discover that they are being tracked through methods they cannot see, manage or decline, the damage to a brand’s credibility could be significant, particularly in sectors where data sensitivity is high. For many organisations, the question may not just be whether fingerprinting can improve ad performance, but whether it aligns with the expectations of their audience and the values they wish to uphold.
This change also places pressure on advertisers, platforms, and regulators to clarify the boundaries of responsible data use. For some, fingerprinting may form part of a wider privacy-aware strategy that includes contextual targeting or consent-based identifiers. For others, it may prove too opaque or contentious to justify. Either way, businesses will need to make informed decisions, and be ready to explain them.
Tech News : Fastest Change In Tech History
The pace and scale of artificial intelligence (AI) development is now outstripping every previous tech revolution, according to new landmark reports.
Faster Than Anything We’ve Seen Before
Some of the latest data confirms that AI really is moving faster than anything that’s come before it. That’s the key message from recent high-profile reports including Mary Meeker’s new Trends – Artificial Intelligence report and Stanford’s latest AI Index, both released in spring 2025. Together, the data they present highlights an industry surging ahead at a speed that’s catching even seasoned technologists off guard.
Meeker, the influential venture capitalist once dubbed “Queen of the Internet”, hasn’t published a trends report since 2019 but it seems that the extraordinary pace of AI progress has lured her back, and her new 340-page analysis uses the word “unprecedented” more than 50 times (with good reason).
“Adoption of artificial intelligence technology is unlike anything seen before in the history of computing,” Meeker writes. “The speed, scale, and competitive intensity are fundamentally reshaping the tech landscape.”
Stanford’s findings echo this. For example, its 2025 AI Index Report outlines how generative AI in particular has catalysed a rapid transformation, with advances in model size, performance, use cases, and user uptake occurring faster than academic and policy communities can track.
The Numbers That Prove the Surge
In terms of users, OpenAI’s ChatGPT generative AI chatbot hit 100 million users in two months and it’s now approaching 800 million monthly users just 17 months after launch. No platform in history has scaled that quickly – not Google, not Facebook, not TikTok.
Business adoption of AI is rising rapidly. For example, according to Stanford’s AI Index 2025, more than 70 per cent of surveyed global companies are now either actively deploying or exploring the use of generative AI. This represents a significant increase from fewer than 10 per cent just two years earlier. At the same time, worldwide investment in AI reached $189 billion in 2023, with technology firms allocating record levels of funding to infrastructure, research, and product development.
Cost of Accessing AI Falling
It seems that the cost of accessing AI services is also falling sharply. For example, Meeker’s Trends – Artificial Intelligence report notes that inference costs, i.e. the operational cost of running AI models, have declined by a massive 99.7 per cent over the past two years. Based on Stanford’s calculations, this means that businesses are now able to access advanced AI capabilities at a fraction of the price paid in 2022.
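To put that decline in perspective (using illustrative numbers rather than figures from either report), a task that cost $10 in inference fees in 2022 would, after a 99.7 per cent reduction, cost about $0.03 today, since only 0.3 per cent of the original price remains.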
What’s Driving This Acceleration?
Several factors are converging at once to drive this acceleration. These are:
– Hardware efficiency leaps. Nvidia’s 2024 Blackwell GPU reportedly uses 105,000x less energy per token than its 2014 Kepler chip! At the same time, custom AI chips from Google (TPU), Amazon (Trainium), and Microsoft (Athena) are rapidly improving performance and slashing energy use.
– Cloud hyperscale investment. The world’s biggest tech firms are betting big on AI infrastructure. Microsoft, Amazon, and Google are all racing to expand their cloud platforms with AI-specific hardware and software. As Meeker puts it, “These aren’t side projects — they’re foundational bets.”
– Open-source momentum. Hugging Face, Mistral, Meta’s LLaMA, and a host of Chinese labs are releasing increasingly powerful open-source models. This is democratising access, increasing competition, and reducing costs — all of which accelerate adoption.
– Government and sovereign AI initiatives. National efforts, particularly in China and the EU, are helping to fund AI infrastructure and drive localisation. These projects are pushing innovation outside Silicon Valley at a rapid pace.
– Developer ecosystem growth. Millions of developers are now building on top of generative AI APIs. Google’s Gemini, OpenAI’s GPT, Anthropic’s Claude, and others have created platforms where innovation compounds rapidly. As Stanford notes, “Industry now outperforms academia on nearly every AI benchmark.”
AI Agents – From Chat to Task Execution
One major change in the past year has been the move beyond simple chatbot interfaces. For example, so-called “AI agents”, i.e. systems that can plan and carry out multi-step tasks, are emerging quickly. This includes tools that can search the web, book travel, summarise documents, or even write and run code autonomously.
Companies like OpenAI, Google DeepMind, and Adept are racing to build these agentic systems. The goal is to create AI that can do, not just respond. This could fundamentally change knowledge work, and is already being trialled in areas like customer service, legal research, and software testing.
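To make the “plan, then act” idea concrete, below is a deliberately minimal agent loop in Python. The plan() stub and the toy tools are invented placeholders rather than any vendor’s API; real agentic systems delegate planning to a large language model and call authenticated tools such as search APIs, booking systems or code runners.

```python
# Minimal sketch of an agent loop: break a goal into steps, then execute each
# step with a tool. plan() and the tools are stand-ins for illustration only;
# a real agent would have an LLM produce (and revise) the plan.
TOOLS = {
    "search": lambda query: f"[top web results for '{query}']",
    "summarise": lambda text: f"[three-line summary of: {text[:40]}...]",
}

def plan(goal):
    # Placeholder planner: a real agent asks an LLM to decompose the goal.
    return [("search", goal), ("summarise", f"search results for '{goal}'")]

def run_agent(goal):
    results = []
    for tool_name, tool_input in plan(goal):          # plan first...
        results.append(TOOLS[tool_name](tool_input))  # ...then act, step by step
    return results

for step_output in run_agent("compare train and flight options to Edinburgh"):
    print(step_output)
```

The difference from a chatbot is the loop itself: the system chooses and executes actions rather than only returning text.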
The Message
For businesses, the message appears to be that there is a need to adapt quickly, or risk falling behind.
Meeker’s report emphasises that AI is already “redefining productivity”, with tools delivering step changes in output for tasks like drafting, data analysis, code generation, and document processing. Many enterprise users report 20–40 per cent efficiency gains when integrating AI into daily workflows.
However, it’s not just about performance. Falling costs and rising model capabilities mean that AI is becoming accessible to even small businesses, not just tech giants. Whether it’s automating customer support or generating marketing copy, SMEs now have access to tools that rival those of major players.
From a market perspective, however, things are less clear-cut. While revenue is rising – OpenAI is projected to hit $3.4 billion in 2025, up from around $1.6 billion last year – most AI firms are still burning through capital at unsustainable rates.
Also, training large models is very expensive. GPT-4, for example, reportedly cost $78 million just to train, and newer models will likely exceed that. As Meeker cautions: “Only time will tell which side of the money-making equation the current AI aspirants will land.”
Challenges, Criticism, and Growing Pains
Despite the enthusiasm, not everything is rosy. The pace of AI’s rise has sparked a host of issues, such as:
– Energy use and environmental impact. Training and running AI models consumes vast amounts of electricity. Even with hardware improvements, Stanford warns of “significant sustainability challenges” as model sizes increase.
– AI misuse and disinformation. The Stanford report logs a steep rise in reported AI misuse incidents, particularly involving deepfakes, scams, and electoral disinformation. Regulatory frameworks remain patchy and reactive.
– Labour market upheaval. Stanford data shows a clear impact on job structures, particularly in content-heavy and administrative roles. While AI augments some jobs, it also displaces others, and workers, employers, and policymakers are struggling to keep up.
– Profitability concerns. While AI infrastructure is growing rapidly, it’s not yet clear which companies will convert hype into long-term revenue. Even the most well-funded players face stiff competition, regulatory scrutiny, and the risk of market saturation.
What Does This Mean For Your Business?
It seems that the combination of surging adoption, falling costs, and rising capability is placing AI at the centre of digital transformation efforts across nearly every sector. For global businesses, the incentives to engage with AI tools are growing rapidly, with productivity benefits now being demonstrated at scale. At the same time, the pace of change is creating new risks that still lack clear long-term responses, particularly workforce disruption, misuse, and unsustainable infrastructure demands.
For UK businesses, the implications are becoming increasingly difficult to ignore. As global competitors embed AI into operations, decision-making, and service delivery, organisations that delay may struggle to keep pace. At the same time, the availability of open-source models and accessible APIs means that smaller firms and startups are also in a position to benefit, if they can navigate the complexity and choose the right tools. Key sectors such as financial services, legal, healthcare, and logistics are already seeing early AI-driven efficiencies, and pressure is mounting on others to follow suit.
Policy makers, regulators, and infrastructure providers also have critical roles to play. Whether it is through ensuring fair access to computing resources, investing in AI literacy and skills, or designing governance frameworks that can evolve with the technology, stakeholders across the economy will need to respond quickly and collaboratively. While the financial picture remains uncertain, what is now clear is that AI is no longer a frontier science, but is a core driver of technological change, and one that is advancing at a pace few expected.
Tech News : Gmail Now Summarises Emails Automatically
Gmail users will now see AI-generated summary cards appear by default at the top of long emails, thanks to an automatic update to Google’s Gemini assistant.
Google Doubles Down on Inbox AI
Google has announced that as of 29 May 2025, its Gemini artificial intelligence (AI) assistant will automatically summarise long email threads in Gmail, without waiting for a prompt or tap from the user. The update, initially rolling out to mobile users on Android and iOS, is part of a move towards integrating AI more seamlessly (and visibly) into everyday productivity tools.
Until now, users could choose to trigger a summary by tapping a button labelled “Summarise this email.” However, with this change, Gemini summary cards will start appearing by default on eligible emails, unless the user has opted out of smart features or is in a region where they are disabled by default.
The move by Google could be seen as less of a visual tweak, and more of a subtle but significant change in the relationship between users, their inboxes, and Google’s AI.
What Is Gemini, and Why Does It Matter?
Gemini is Google’s suite of generative AI tools, positioned as a direct competitor to Microsoft Copilot and other AI assistants. It spans multiple Google Workspace apps including Docs, Sheets, and Gmail, offering assistance with drafting content, summarising information, and generating replies.
Originally introduced under the “Duet AI” brand in 2023, Gemini was rebranded and expanded in early 2024 as part of Google’s wider AI push. Its integration into Gmail’s side panel was one of the first widely adopted use cases, giving users access to email-specific tools like drafting responses, summarising lengthy threads, and generating replies using natural language.
Up to now, Gemini’s role in Gmail has largely been opt-in, with users having to initiate actions themselves.
From Passive Tool to Active Assistant
With the new update, Gemini becomes more assertive. For example, long or complex emails, especially those that form part of back-and-forth threads, will now automatically display a summary card at the top of the message. The card outlines the key points of the conversation so far and will update dynamically as new replies come in.
Google says this move is intended to save users time and reduce email fatigue, a problem that has long plagued busy professionals. For example, according to a 2024 McKinsey report, workers still spend around 28 per cent of their workweek reading and responding to emails. Google is, therefore, betting that AI summaries can streamline this process, especially on mobile, where skimming a long message chain is often more tedious.
In an announcement on its Workspace Updates blog, Google said the feature “will synthesise all the key points from the email thread, and any replies thereafter will also be a part of the synopsis, keeping all summaries up to date.”
Who Gets It and When?
The feature began rolling out on 29 May 2025 to Rapid Release domains and is now gradually being deployed across Scheduled Release domains over a 15-day window. It’s available to the following Google Workspace editions:
– Business Starter, Standard, and Plus.
– Enterprise Starter, Standard, and Plus.
– Google One AI Premium.
– Gemini Business and Enterprise customers (existing add-ons).
– Gemini Education and Education Premium add-on users.
The feature is currently limited to English-language emails, and Google has not yet announced support for other languages.
Smart Features
Importantly, Gemini summary cards are only visible to users who have smart features and personalisation turned on in Gmail. These settings control whether Google can use AI to offer tailored features based on content in a user’s inbox.
In some regions, including the UK, EU, Switzerland, and Japan, smart features are turned off by default due to local data protection laws. Users in these areas would need to manually enable the feature in Gmail’s settings to start seeing the summary cards.
How to Opt Out or Take Back Control
For users who’d rather not have Gemini skimming their emails on their behalf, there are ways to disable the feature. For example, users can:
– Go to Gmail Settings > See all settings > Smart features and personalisation.
– Toggle off “Smart features” to prevent summary cards and other AI-based tools from appearing.
– Disable “Smart features in Gmail, Chat and Meet” for more comprehensive opt-out control.
Admins of Google Workspace domains can also manage these settings at a policy level from the Admin Console, giving organisations central control over the feature’s rollout.
It’s worth noting here that, even with the automatic summaries in place, the manual “Summarise this email” chip still remains, both at the top of eligible emails and in the Gemini side panel. This means that users who want to selectively invoke AI help can still do so.
Automation or Overreach?
While Google pitches the change as a productivity boost, not everyone is celebrating the move. For example, one key concern is accuracy. AI summaries, particularly those generated in real time from nuanced human conversations, are notoriously hit-and-miss. Even Google’s own AI Overviews in Search have come under fire for offering incorrect or misleading answers, as recently highlighted in a series of viral screenshots on social media.
Google’s not alone in being criticised for this. For example, it’s also been reported that Apple’s push-notification summaries, based on similar AI technology, repeatedly misinterpreted news headlines. Apple has since paused that feature for news apps, pending a fix.
It seems that a similar level of scepticism now surrounds Gmail’s automatic summaries. Critics argue that important context can easily be lost or misrepresented by an AI synopsis, especially in complex or emotionally nuanced threads.
As highlighted by Dr Jenna McCarthy, a digital communications researcher at the University of Manchester: “This kind of automation risks giving people a false sense of understanding,” adding that “Summaries might look slick, but in business or legal emails, the devil is often in the detail.”
It’s worth noting here that Google itself actually appears to acknowledge this limitation. For example, in its support documentation, the company stresses that the summaries are meant to complement human reading, not replace it.
Privacy and Trust Still Under Scrutiny
Alongside concerns about accuracy, privacy remains a hot topic. Although Google insists that all AI interactions respect user data protection rules and don’t expose personal content to human reviewers, the idea of automated scanning, even for benign purposes like summarising, may raise some eyebrows among privacy-conscious users.
Google directs users to its Privacy Hub for more information, but as with other AI features, transparency is key. Users are likely to expect more clarity around how data is used, stored, and processed when features like this are switched on by default.
Part of a Move Towards Embedded AI
Google’s update also reflects a broader industry direction, i.e. AI tools are increasingly moving from optional add-ons to proactive, built-in features. Rather than waiting for user prompts, systems like Gemini are starting to anticipate needs and take action automatically.
In Google’s case, the aim appears to be to create a more seamless experience across Workspace, where AI quietly handles repetitive or time-consuming tasks like summarising threads, without disrupting the user’s workflow. This aligns with recent updates across other Workspace apps, where Gemini is being positioned as a default productivity layer rather than a separate tool.
However, the effectiveness of this approach will depend heavily on how much trust users place in the AI’s accuracy and judgement—and how much control they feel they still have over their own inbox.
What Does This Mean For Your Business?
While the arrival of automatic Gemini summaries may seem like a small design tweak, the implications actually go much deeper. By removing the need for users to actively request a summary, Google is signalling a shift towards AI that no longer waits in the wings, but steps forward by default. For some, that may be welcome, especially for those managing high volumes of email who are eager to shave precious minutes off their working day. However, for others, the change may raise fresh concerns around trust, data processing, and the growing opacity of algorithmic decision-making in everyday tools.
For UK businesses, the move could offer real productivity gains, particularly in fast-paced environments where clarity and speed of communication are key. Admins can tailor how the feature is used across teams, allowing for top-down management of when and where AI steps in. But the benefits must be weighed carefully against the risks, especially when dealing with sensitive conversations, contractual details, or any context where nuance really matters. There is a clear responsibility on organisations to communicate how these features work, and to ensure staff feel confident in knowing when to rely on AI and when to override it.
It’s also likely to prompt fresh conversations among regulators, particularly in the UK and across Europe where smart features are already turned off by default. The tension between helpful automation and meaningful consent is growing sharper as more tools cross that line from optional to ambient. For users, the key will be staying informed, knowing not just what AI is doing, but how to retain agency and control in the process.
Ultimately, Gemini’s automatic summaries are part of a broader evolution in how AI is being woven into our daily workflows. The question now is not just whether the technology works, but whether people trust it enough to let it work for them.
Company Check : Meta : Merchandising & Military
Meta is stepping up its push into physical retail, open-source AI, and military-grade AR and VR technology, but each move is attracting scrutiny and raising new questions about ethics, transparency, and the company’s strategic direction.
Physical Stores Selling Smart Glasses
Meta is reportedly preparing to open a new wave of physical retail stores in a strategic bid to push sales of its Ray-Ban smart glasses and other wearable devices. The plan (first revealed by Business Insider) signals a shift from virtual ambitions into tangible retail expansion as the company looks to solidify its position in the emerging face-computing market.
It’s worth noting here that this won’t be Meta’s first foray into bricks-and-mortar. For example, the company launched its debut physical store in Burlingame, California in 2022, followed by a pop-up in Los Angeles. But this latest round of hiring and planning suggests a much broader rollout is on the cards.
The logic behind Meta’s planned move appears to be that smart glasses, especially those blending AR features with fashion, are inherently tactile. Trying them on in person can make or break a sale. Although Meta reportedly sold over 1 million Ray-Ban Meta smart glasses in 2024 alone, CEO Mark Zuckerberg is said to have challenged staff to raise that figure to 5 million units, prompting the search for a more immersive retail strategy.
In-Store Demos
In-store demos will also likely help Meta showcase its Meta Quest VR headsets, especially as rivals like Apple raise the stakes with more premium offerings like the Vision Pro, priced at $3,499 but struggling to attract mass-market adoption.
By creating dedicated physical spaces, Meta seems to believe it could address two problems at once, i.e. differentiating its devices from commodity tech, and humanising a brand that’s often criticised for being too virtual and too data-hungry.
For business users, this matters. For example, the rise of face-worn computing is set to impact fields from healthcare to logistics. With AI-assisted smart glasses already capable of real-time transcription, photo capture, and even livestreaming, the line between personal wearables and professional tools is blurring fast.
However, it seems that retail has proven a tricky terrain for Big Tech. For example, Microsoft famously shuttered its 83 stores in 2020, and Amazon has scaled back its ambitions after mixed success with physical shops. Meta’s challenge, therefore, may be to offer something more experiential than transactional, and something that convinces users and developers alike that these devices are more than gadgets.
“Open Washing” Accusations Cloud Meta’s AI Open Source Push
At the same time that Meta is championing openness in AI, the company is facing renewed criticism for allegedly misrepresenting the nature of its flagship Llama models, with critics accusing it of “open washing.”
This latest controversy stems from Meta’s role in sponsoring a Linux Foundation research paper, The Economic and Workforce Impacts of Open Source AI, published in May. The report highlights the cost savings and innovation benefits of open source AI (OSAI), noting that 89 per cent of AI-adopting organisations use some form of open source infrastructure, and that open models are significantly cheaper to deploy than proprietary ones, findings that are hard to ignore for small businesses and tech start-ups.
Meta’s involvement in the report, however, has triggered backlash. For example, critics, including OpenUK CEO Amanda Brock, argue that Llama does not meet the widely accepted Open Source Definition (OSD), largely due to commercial use restrictions embedded in its licence.
“Llama isn’t ‘open source’, whatever definition you choose to use for open source,” Brock stated. “We rely on open source being usable by anyone for any purpose, and Llama is not.”
The nuance here is key. Meta’s Llama models (including Llama 2 and 3) are open access, meaning researchers and developers can use them freely in many cases. However, the restrictions on high-scale commercial use mean they fall short of being truly “open source” under OSI standards.
For Meta, the implications are twofold. First, its marketing message around openness risks losing credibility, especially as regulators in the EU and US begin using “open source” as a basis for liability exceptions in AI laws. Second, it could jeopardise Meta’s appeal to developers who value transparency, forkability, and independence from Big Tech.
The Linux Foundation report itself finds that open models are being adopted more heavily by small businesses than large enterprises, with smaller firms citing lower costs and greater flexibility as primary drivers. If Llama isn’t genuinely open, these businesses could end up relying on what they believe is community-driven infrastructure, only to face legal grey areas or cost barriers later.
While Meta has made real contributions to the open AI ecosystem, including the release of PyTorch and participation in Hugging Face, it’s likely that the wider industry is watching closely to see whether the company’s vision of openness is consistent, or just convenient.
From Metaverse to Military
In yet another Meta development, and this time one that caught many observers by surprise, Meta has signed a deal with Anduril Industries, a fast-growing US defence contractor, to build AR and VR devices for military use.
The irony hasn’t gone unnoticed. For example, Anduril’s founder, Palmer Luckey, was famously ousted from Meta’s predecessor, Facebook, back in 2017, reportedly over political donations and internal disagreements. Now, Luckey is working with his former employer on a project aimed at turning soldiers into what he describes as “technomancers.”
“I am glad to be working with Meta once again,” said Luckey in a statement. “The products we are building with Meta do just that.”
Battlefield-Ready Technology
According to Meta, the partnership will leverage its expertise in AI and extended reality (XR) to deliver battlefield-ready technology that enhances real-time situational awareness. The systems are expected to integrate with Anduril’s Lattice platform, a command-and-control interface powered by AI that overlays live battlefield intelligence into soldiers’ fields of view.
Could Actually Make Money
Meta’s AR and VR ambitions have so far been expensive and, arguably, unproven. Its Reality Labs division lost $4.2 billion in Q1 2025, part of a long-term investment strategy that has seen Meta burn through over $80 billion on immersive tech since acquiring Oculus in 2014. However, with the US military’s proposed $1 trillion budget, the defence market could finally offer a return.
Progress and Pitfalls
For business users and the wider XR market, the defence partnership signals both progress and potential pitfalls. For example, military involvement could fast-track innovation, enhance hardware capabilities, and make advanced technologies more accessible for commercial use. However, it also raises serious questions about Meta’s role in surveillance, data handling, and the broader ethical implications of merging consumer tech with military objectives.
Notably, Microsoft handed off its own US Army IVAS contract to Anduril earlier this year, having struggled to deliver effective headsets with its now-discontinued HoloLens. That leaves Meta and Anduril in what appears to be a strong position to lead the next wave of military-grade XR, and potentially commercial spin-offs.
What Does This Mean For Your Business?
Taken together, Meta’s latest moves suggest a company trying to reposition itself at the centre of three major battlegrounds: consumer hardware, AI ethics and extended reality. Each initiative carries its own rationale. Retail stores aim to offer hands-on experience, the push for AI openness suggests leadership in cost-efficient tools, and the defence partnership seeks a path to long-term commercial viability. However, the connections between these efforts reveal a deeper tension. Is Meta attempting to cater to every audience at once, or is it spreading its efforts too widely in a fast-changing technological landscape?
The physical retail expansion is perhaps the most straightforward. It gives Meta a tangible way to demonstrate products that have, until now, lived mostly in speculative hype cycles. Smart glasses, in particular, are on the verge of moving from novelty to utility, and a physical showroom model could help convert curiosity into confidence. For UK businesses operating in sectors like healthcare, manufacturing, logistics or retail, that’s potentially game-changing. If the technology works and is easy to trial and adopt, it could speed up the mainstreaming of AR in workplace settings.
However, it’s the open-source AI row that cuts to the heart of Meta’s credibility. The company is trying to paint itself as a champion of openness, cost savings, and accessibility, which is a narrative that appeals to developers and small firms alike, but the reality of Llama’s licensing restrictions muddies that message. If Meta is seen to be overstating its openness, or using community narratives to mask corporate control, it could backfire with the very audiences it’s hoping to win over. For UK tech start-ups and SMEs, who often rely on open source to compete with bigger players, the difference between “open” and “open enough” represents a business risk.
The Anduril partnership adds another layer of complexity. On paper, it could finally make Meta’s multibillion-dollar investment in XR technologies pay off. But aligning with military objectives also risks alienating consumers, employees, and partners who are wary of how immersive tech might be used in surveillance or combat. In a world increasingly conscious of tech’s societal impacts, even commercial buyers may start asking harder questions about the provenance and purpose of the tools they deploy.
From businesses and developers to policymakers, the message appears to be that Meta is doubling down on its hardware and AI bets. That said, how it navigates trust, ethics and transparency will shape not only its own future, but the broader acceptance of emerging tech across the board. What comes next may, therefore, depend less on product specs and more on public perception, legal scrutiny, and whether Meta can balance innovation with genuine accountability.
Security Stop Press : Asus Routers Hit by Stealth Backdoor Attack
Thousands of Asus routers have been compromised in a silent, persistent attack that gives hackers remote access, even after firmware updates.
Cybersecurity firm GreyNoise uncovered the campaign, which targets internet-facing Asus models like the RT-AC3100 and RT-AX55. Attackers use brute-force logins or old vulnerabilities to gain admin access, then exploit a flaw (CVE-2023-39780) to enable hidden logging features and install a stealthy backdoor.
SSH access is then enabled through official settings, with an attacker-controlled key added. GreyNoise warns this “persists across firmware upgrades” and may be part of a long-term botnet operation, with over 4,800 affected devices already detected.
Businesses using Asus routers should check for SSH exposed on port 53282, inspect the authorized_keys file for unfamiliar entries, and block known malicious IPs. If compromise is suspected, only a full factory reset can remove the backdoor.
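As a quick first check (assuming, as GreyNoise reports, that the backdoored SSH service listens on TCP port 53282), a short script like the hedged Python sketch below can test whether that port is reachable on a router. An open port is a prompt to investigate, not proof of compromise, and a closed port is not proof of safety; the authorized_keys review and factory reset advice above still applies.

```python
# Quick exposure check: is TCP port 53282 (the SSH port reported in this
# campaign) reachable on a given router address? This only tests reachability;
# it does not inspect keys or confirm or rule out compromise.
import socket

def port_open(host: str, port: int = 53282, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

router_ip = "192.168.1.1"  # replace with your router's address
if port_open(router_ip):
    print(f"Port 53282 is open on {router_ip} - investigate immediately")
else:
    print(f"Port 53282 appears closed on {router_ip}")
```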
Sustainability-in-Tech : Ancient Bacteria Powers New Green Chemical Facility
A startup with roots in Denmark and Germany is now using ancient bacteria and Texan emissions to make low-carbon chemicals, thereby offering a novel alternative to fossil-fuel-based manufacturing.
A Biotech Startup With Climate Ambitions
Founded in 2021, ‘Again’ is the brainchild of Danish researchers and German entrepreneur Max Kufner. It positions itself as the world’s first scalable, carbon-negative chemical manufacturer, one aiming to overhaul how industrial chemicals are made.
How Again’s Process Works
Rather than capturing CO₂ just to store it underground (as with carbon capture and storage, or CCS), Again’s process feeds waste CO₂ straight into its custom-designed bioreactors. There, it’s fermented with hydrogen and processed by ancient, oxygen-hating bacteria, some of the oldest life forms on Earth. These hardy microbes, once dominant in Earth’s CO₂-rich primordial soup, now have a new purpose, i.e. transforming industrial emissions into chemicals like acetate, used in everything from paints and adhesives to cosmetics and plastics.
According to Again, this approach can reduce emissions associated with chemical production by up to 80 per cent, thereby making it a potential game-changer for one of the planet’s most polluting sectors.
Why Texas? Why Now?
Again’s new plant, dubbed TXS-1, is being built in Texas City, an industrial hub on the Gulf Coast and home to major petrochemical facilities. It’s a strategic location for several reasons:
– Abundant CO₂ supply. Again will capture waste CO₂ directly from a refinery on-site, avoiding costly transport emissions.
– Hydrogen availability. The region is rapidly scaling up hydrogen production, another essential input for Again’s process.
– Industrial partnerships. The facility is hosted at a site operated by Diamond Infrastructure Solutions, a joint venture between Dow and Macquarie Asset Management. Chemicals giant HELM AG is also on board to distribute Again’s products.
Ancient Bacteria Meet AI
At the heart of Again’s process is a mix of ancient biology and modern computation.
For example, the bacteria involved are strict anaerobes, organisms that evolved billions of years ago, long before oxygen was present in Earth’s atmosphere. Back in those early conditions, CO₂ dominated, and these microbes adapted to use it as a food source. Today, Again has harnessed these same organisms, placing them in oxygen-free bioreactors alongside green hydrogen. As they metabolise the mixture, they produce valuable chemicals like acetate, a key building block used across multiple industries.
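In outline (and simplifying the biology considerably), the underlying chemistry is the classic acetogenesis reaction, in which the microbes combine carbon dioxide and hydrogen to produce acetic acid (acetate) and water: 2 CO₂ + 4 H₂ → CH₃COOH + 2 H₂O.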
The process has been optimised using AI-powered bioengineering and chemical modelling, allowing Again to tweak conditions for maximum output and efficiency. The company describes it as similar to brewing, only instead of beer, the end product is a clean, commercially viable chemical, ready for use in adhesives, textiles, paints or even packaging.
From Copenhagen to the Gulf Coast
Again’s journey started in Denmark. In 2023, the company launched its first operational pilot plant on the industrial outskirts of Copenhagen. That facility now captures up to one tonne of CO₂ per day and converts it into acetate using the same bacterial fermentation process.
That successful trial laid the groundwork for a rapid international expansion. Again has raised more than $150 million in funding to date, including a €39.4 million Series A round co-led by GV and HV Capital, and a €47 million grant from the EU’s Horizon Europe initiative. Alongside the new US site, the company is also building a second European facility in Norway as part of the PyroCO₂ project—a multi-partner initiative exploring large-scale carbon capture and utilisation.
The company says the US is an especially attractive market for its technology due to strong industrial demand, federal support for low-carbon manufacturing, and the sheer volume of CO₂ emissions in the petrochemical sector. TXS-1 will be co-located with existing industrial infrastructure, allowing Again to capture emissions directly at the source and avoid costly transport logistics.
Why Green Chemicals Matter
The global chemical industry contributes around 4 per cent of total greenhouse gas emissions, roughly twice the amount produced by aviation. However, unlike power generation or transport, where decarbonisation efforts are more mature, the chemical sector remains particularly tough to tackle. That’s because carbon isn’t just an energy source in this context; it’s a core ingredient.
Uses Captured CO₂
Traditional chemical production relies on fossil-based feedstocks such as oil, gas and coal. That means the process remains carbon-intensive, even if the energy powering the plants becomes renewable. Again’s approach flips this equation, i.e., using captured CO₂ as a feedstock turns waste into value, effectively recycling emissions back into the supply chain.
The resulting chemicals are functionally identical to their fossil-derived counterparts, meaning customers don’t need to compromise on performance to choose a lower-carbon option. Again’s scientific co-founder, Dr Torbjørn Jensen, is keen to point out that the potential climate benefits are substantial, saying: “We have the means to not only capture waste CO₂ but turn it into useful products to fully decarbonise the supply chain.”
No Premiums, No Excuses
Cost is another area where Again is clearly aiming to stand apart. For example, while many climate tech firms rely on subsidies or carbon credits to stay competitive, Again claims its green chemicals are price-aligned with fossil-based alternatives. That makes them a viable swap-in for major industrial buyers.
Also, because the company co-locates its facilities with industrial emitters, it avoids the need to build entirely new infrastructure or transport captured CO₂ across long distances. This keeps operational costs lower and simplifies logistics (both key concerns for heavy industry).
According to Again, its model not only reduces emissions but helps build supply chain resilience. By producing chemicals locally using waste inputs, companies can reduce their reliance on volatile global fossil markets and mitigate geopolitical risk.
A Growing Ecosystem of Carbon Utilisers
It’s worth noting here that Again isn’t the only player reimagining how carbon can be reused rather than emitted. Several other startups and innovators are working on similar problems, though often using very different technologies. These include, for example:
– LanzaTech, based in the US and New Zealand, uses microbial gas fermentation to turn industrial emissions into fuels, chemicals and even fabrics. Its tech is already operating at commercial scale in China and Belgium.
– Twelve, based in California, uses electrochemical reactors to transform captured CO₂ into syngas, plastics and even jet fuel. It has partnered with major brands like Mercedes-Benz and Shopify.
– Carbon Clean, headquartered in the UK, develops compact carbon capture systems designed for smaller industrial sites. Some of its partners are exploring reuse pathways for the captured emissions.
– Climeworks, based in Switzerland, focuses mainly on direct air capture and storage, but has also collaborated on utilisation pilots for synthetic fuels and fertilisers.
What makes Again’s model distinctive is its biological foundation and its emphasis on full commercial scalability. The company believes its AI-enhanced, plug-and-play bioreactors could be deployed in a wide range of industrial settings, bringing emissions down while making useful products at the same time.
Challenges and Open Questions
While the potential is clear, the path to industrial-scale success is far from straightforward. For example, some of the issues to be tackled include:
– Scaling up. Even with TXS-1 and other plants online, the amount of CO₂ processed will remain a fraction of global chemical-sector emissions. Expanding from thousands to millions of tonnes per year will require vast investment and infrastructure alignment.
– Hydrogen dependency. Again’s process depends on green hydrogen, which remains costly and in limited supply. If the hydrogen used isn’t produced from renewable sources, the overall emissions savings could be undermined.
– Regulatory support. The success of projects like Again’s often hinges on supportive climate policies, especially in high-emitting regions. Carbon pricing, clean energy incentives and emissions regulations will all play a role in shaping demand.
– Industry buy-in. Despite the environmental benefits, industrial clients will need assurance that the supply, quality and pricing of green chemicals can match fossil-based equivalents at scale. Long-term contracts and offtake agreements will be key to proving commercial viability.
Some critics may also question whether these technologies risk entrenching the petrochemical status quo, making it easier for fossil-heavy industries to continue operating, rather than shifting toward fundamentally different models of production and consumption.
For now, however, Again’s approach seems to offer something rarely seen in the climate tech space, i.e. a scalable, biologically driven process that recycles carbon, reduces emissions and produces critical products without asking customers to pay more or change how they operate. That may prove to be a winning formula in the urgent race to decarbonise industry.
What Does This Mean For Your Organisation?
What Again is building in Texas appears to reflect a growing confidence in the potential of carbon utilisation technologies to deliver real-world impact. By rethinking carbon not as waste, but as a resource, companies like Again are beginning to close the loop on emissions-heavy sectors that have traditionally been among the hardest to clean up. For the global petrochemicals industry, long viewed as a decarbonisation dead end, this marks a meaningful shift from theory to scalable practice.
For businesses, especially those in manufacturing, construction, and fast-moving consumer goods, the implications may be significant because the ability to source carbon-negative chemicals without a cost penalty is a powerful proposition. It suggests that environmental responsibility no longer has to come with financial compromise. In a world where supply chain resilience is under constant strain, Again’s co-located model also offers a localised, low-risk alternative to long-haul chemical imports. This could have strategic value not just in the US, but in Europe too.
UK businesses, in particular, may want to watch this space closely. For example, with increasing pressure from regulators, investors and customers to lower emissions, a viable route to greener inputs could open up new paths to compliance and competitive advantage. Although Again’s current facilities are outside the UK, its presence in Denmark and Norway, and the plug-and-play nature of its tech, means it could easily become part of Britain’s low-carbon supply chain in the near future, especially if domestic hydrogen capacity scales up.
At the same time, the challenges highlighted remain very real. Cost, scale, and energy inputs will all determine whether this approach can transition from promising to mainstream. That said, the early signs are encouraging. By blending millennia-old biology with modern science and smart commercial thinking, Again has shown what’s possible when sustainability is treated not as a side project but as a core business model. Whether it succeeds or not, it’s helping to rewrite the rulebook on what a cleaner, circular industrial future could look like.