Sustainability-in-Tech : New ‘Meat’ From Fermented Fungi & Oats
Swedish startup Millow is using dry fermentation to create scalable, low-impact meat substitutes that could reshape the future of food production.
A New Approach to Protein Production
Millow, a foodtech company based in Gothenburg, has launched its first commercial-scale factory to produce a new kind of meat alternative. Its method combines oats and mycelium (the root-like fibres of fungi), using a patented dry fermentation technique to create a solid, sliceable protein block that can be turned into familiar foods such as burgers, meatballs and kebabs.
Addresses Two Core Challenges
Millow’s production model is designed to address two core challenges in the alternative protein sector, i.e. sustainability and scalability. For example, unlike many existing meat substitutes, which rely on liquid fermentation, imported ingredients or complex multi-stage processing, Millow’s product is created from just two inputs. The result is a minimally processed food that avoids the use of binders, flavourings or additives and can be produced at scale in just 24 hours.
New Factory
Millow’s 2,500 square metre facility was built in a repurposed LEGO factory and will eventually produce up to 500 kilograms of protein each day. The site also houses advanced fermentation labs to support future research and development.
What Makes Millow Different?
Although mycelium-based products are not entirely new, Millow’s approach appears to be significantly different from earlier efforts. Most notably, it avoids the need to extract protein strands or recombine them with synthetic binders, as is done in products like Quorn. Instead, the dry fermentation process grows a whole block of protein directly from the grain and fungus mixture.
The company also uses a proprietary texturing method, known as MUTE (Mycelium Utilised Texture Engineering), which gives the final product a structure similar to muscle tissue. This allows it to behave more like meat when cooked or handled, with a firmer texture and the ability to hold up in stews and other wet dishes.
Gluten Free and More
Millow says its product is fully plant-based, gluten free and contains no genetically modified ingredients. It also says the product is rich in nutrition, offering up to 27 grams of complete protein per 100 grams, along with fibre, vitamins and essential minerals.
A Response to Sector Shortcomings
Millow’s entry into the market comes at a time when the plant-based meat sector is facing growing criticism. Millow’s founders say their aim is to move the sector on from the shortcomings of first-generation plant-based meat, which often struggled with over-processing, long ingredient lists and inconsistent consumer appeal. By focusing on transparency, wholefood ingredients and production efficiency, the company appears to be trying to position its product as a more scalable and environmentally responsible alternative.
As the company puts it on its website: “Not all meat substitutes are actually better for the planet. Most alternatives are ultra-processed, which means they’ve gone through many different manufacturing stages right across the globe. And at every stage, a lot of energy and water are consumed. Millow is entirely different.”
Environmental Benefit
Millow is also aiming to produce food with a clearer environmental benefit. For example, a life cycle assessment of the product found it can cut greenhouse gas emissions by up to 97 percent compared to beef. Compared to soy-based substitutes, emissions are reduced by around 80 percent.
Water use is also significantly lower. For example, producing one kilogram of Millow requires only 3 to 4 litres of water. By contrast, producing a kilogram of beef can require over 15,000 litres, while soy protein typically uses more than 1,800 litres.
A Shift Toward Fermentation-Based Foods
While total investment in alternative proteins fell to 1.1 billion dollars globally in 2024, funding for fermentation startups rose by 43 percent, according to the Good Food Institute. This reflects a shift away from mimicry-focused meat alternatives towards more efficient, adaptable systems that can deliver clean-label products at scale.
Millow is, therefore, part of a growing European cluster exploring the potential of fungal fermentation. Hamburg-based Infinite Roots, for example, raised 58 million dollars in early 2025 to develop protein from brewery byproducts. Berlin’s Formo Foods has also attracted major investment to create cheese analogues using microbial fermentation.
These companies all seem to share a focus on improving environmental outcomes through more efficient resource use. By using locally sourced inputs, reducing energy consumption and avoiding long supply chains, they aim to provide credible alternatives to meat without the compromises seen in earlier products.
Business and Industry Implications
Millow’s model appears to offer a number of practical advantages for different stakeholders. For food brands and retailers, the simplicity of the ingredient list and clean processing could align with growing consumer demand for wholefood, high-protein alternatives that are not highly processed.
Foodservice providers may also benefit from the flexibility of the product, which can be barbecued, roasted, baked or fried without losing its structure. The mycelium-oat base also offers a neutral flavour profile that can be adapted to regional tastes.
For the wider food and farming sectors, the technology presents opportunities as well as challenges. The ability to swap grain types in the fermentation process opens the door for localised protein production using existing crops. This could reduce reliance on imported soy or pea protein, while creating new demand for Nordic oats or other regionally grown grains.
Not All Good News
However, it should be noted that there are, of course, some limitations. For example, Millow is currently only available in Sweden, and commercial rollout will depend on regulatory approval, consumer uptake and the ability to scale consistently. The company is working on distribution agreements and expects to launch products in retail and foodservice by the end of 2025.
Questions and Criticisms Remain
Although the sustainability credentials appear to be strong, some experts have urged caution. It seems that mycelium-based foods remain unfamiliar to many consumers, and overcoming cultural and psychological barriers may take time. Also, there are broader questions about production scale, cost competitiveness and long-term safety assessments, particularly in new markets.
There is also the issue of transparency. While Millow claims to be the most sustainable meat alternative currently available, much of the supporting data comes from internal research. Independent validation will be important if the company wants to win the confidence of regulators and buyers in new territories.
That said, Millow represents a departure from the first wave of alternative proteins. By using biotechnology and fungal fermentation to reduce complexity, cost and environmental impact, the company is helping to set a new direction for the sector. Whether others will follow remains to be seen.
What Does This Mean For Your Organisation?
What Millow’s approach demonstrates is that alternative protein production is entering a new phase, one shaped more by operational efficiency and wholefood principles than by novelty or marketing claims. Rather than mimicking meat through increasingly complex formulations, this new category focuses on simplicity, transparency and functional performance. For investors and researchers, this signals a change in the priorities of the sector. For food producers, it could offer a chance to streamline supply chains and reduce reliance on global commodity crops.
For UK businesses, particularly those in retail, foodservice and manufacturing, the emergence of scalable dry fermentation methods presents both opportunity and disruption. For example, if adapted successfully to local grain inputs and regional production models, similar technology could help strengthen domestic protein resilience while supporting decarbonisation goals. It could also create new partnerships between biotech firms and UK arable growers, with potential to reinvigorate oat and cereal markets in a lower-emissions food system. For buyers and procurement teams, the appeal is likely to lie in a product that promises nutrition, versatility and sustainability without the drawbacks associated with ultra-processing or long ingredient lists.
However, it seems that the model will need to prove itself beyond the Swedish market. Regulatory navigation, consumer education and price competitiveness will all play a role in determining its commercial viability. Various stakeholders, including environmental groups, health regulators and farming unions, will most likely be watching closely to see whether these claims of efficiency and low impact can be consistently validated at scale. As more companies experiment with mycelium and dry fermentation, a clearer picture will emerge of how these innovations fit into the wider protein economy. Millow is not the only player, but it offers a compelling case study in how targeted science and regional focus can create new routes to sustainable food production.
Video Update : Another Massive Upgrade To Copilot – Already!
Copilot's brand-new "Researcher Agent" is a pretty major upgrade, so this week's Video-of-the-Week puts it through its paces and looks at what it can do for your business.
[Note: to watch this video without glitches or interruptions, it may be best to download it first.]
Tech Tip – Use Outlook’s “Report” Button to Flag Suspicious Emails
Spot something ‘phishy’ in your inbox? Outlook’s built-in “Report” tool lets you quickly flag dodgy messages, and helps Microsoft improve detection.
How to:
– In the Outlook desktop app or web version, click on the email in your inbox to preview it in the Reading Pane — no need to open it fully.
– Click the Report button in the toolbar (sometimes labelled Junk or Phishing).
– Choose Phishing or Junk, depending on the content.
– The email will be flagged and moved out of your inbox.
Pro-Tip: Reporting dodgy messages helps train Microsoft’s filters and protects others in your organisation too.
Featured Article : Grok Blocked! Quarter Of EU Firms Ban Access
New research shows that one in four European organisations have banned Elon Musk’s Grok AI chatbot due to concerns over misinformation, data privacy and reputational risk, making it far more widely rejected than rival tools like ChatGPT or Gemini.
A Trust Gap Is Emerging in the AI Race
The findings from cybersecurity firm Netskope point to a growing shift in how European businesses are evaluating generative AI tools. While platforms like ChatGPT and Gemini continue to gain traction, Grok’s higher rate of rejection suggests that organisations are becoming more selective and are prioritising transparency, reliability and alignment with company values over novelty or brand recognition.
What Is Grok?
Grok is a generative AI chatbot developed by Elon Musk’s company xAI and built into X, the social media platform formerly known as Twitter. Marketed as a bold, “truth-seeking” alternative to mainstream AI tools, Grok is designed to answer user prompts in real time with internet-connected responses. However, a series of controversial and misleading outputs (along with a lack of transparency about how it handles user data and trains its model) have made many organisations wary of its use.
Grok’s Risk Profile Raises Red Flags
While most generative AI tools are being rapidly adopted in European workplaces, Grok appears to be the exception. For example, Netskope’s latest threat report reveals that 25 per cent of European organisations have now blocked the app at network level. In contrast, only 9.8 per cent have blocked OpenAI’s ChatGPT, and just 9.2 per cent have done the same with Google Gemini.
Content Moderation Issue
Part of the issue appears to lie in Grok’s content moderation, or lack thereof. For example, the chatbot has made headlines for spreading inflammatory and false claims, including the promotion of a “white genocide” conspiracy theory in South Africa and casting doubt on key facts about the Holocaust. These incidents appear to have deeply shaken confidence in the platform’s ethical safeguards and prompted scrutiny around how the model handles prompts, training data and user inputs.
Companies More Selective About AI Tools
Gianpietro Cutolo, a cloud threat researcher at Netskope, said the bans on Grok highlight a growing awareness of the risks linked to generative AI. As he explained, organisations are starting to draw clearer lines between different platforms based on how they handle security and compliance. “They’re becoming more savvy that not all AI is equal when it comes to data security,” he said, noting that concerns around reputation, regulation and data protection are now shaping AI adoption decisions.
Privacy and Transparency
Neil Thacker, Netskope’s Global Privacy and Data Protection Officer, believes the trend is indicative of a broader shift in how European firms assess digital tools. “Businesses are becoming aware that not all apps are the same in the way they handle data privacy, ownership of data that is shared with the app, or in how much detail they reveal about the way they train the model with any data that is shared within prompts,” he said.
This appears to be particularly relevant in Europe, where GDPR sets strict requirements on how personal and sensitive data can be used. Grok’s relative lack of clarity over what it does with user input, especially in enterprise contexts, appears to have tipped the scales for many firms.
It also doesn’t help that Grok is closely tied to X, a platform currently under EU investigation for failing to tackle disinformation under the Digital Services Act. The crossover has raised uncomfortable questions about how data might be shared or leveraged across Musk’s various companies.
Not The Only One Blocked
Despite its controversial reputation, it seems that Grok is far from alone in being blocked. The most blacklisted generative AI app in Europe is Stable Diffusion, an image generator from UK-based Stability AI, which is blocked by 41 per cent of organisations due to privacy and licensing concerns.
However, Grok’s fall from grace stands out because of how stark the contrast is with its peers. ChatGPT, for instance, remains by far the most widely used generative AI chatbot in Europe. Netskope’s report found that 91 per cent of European firms now use some form of cloud-based GenAI tool in their operations, suggesting that the appetite for AI is strong, but users are choosing carefully.
The relative trust in OpenAI and Google reflects the degree to which those platforms have invested in transparency, compliance documentation, and enterprise safeguards. Features such as business-specific data privacy settings, clearer disclosures on training practices, and regulated API access have helped cement their position as ‘safe bets’ in regulated industries.
Musk’s Reputation
There’s also a reputational issue at play, i.e. Elon Musk has become a polarising figure in both tech and politics, particularly in Europe. For example, Tesla’s EU sales dropped by more than 50 per cent year-on-year last month, with some industry analysts attributing the decline to Musk’s increasingly vocal support of far-right politicians and his role in the Trump administration.
It seems that the backlash may now be spilling over into his other ventures. Grok’s public branding as an unfiltered “truth-seeking” AI has been praised by some users, but in a European context, it risks triggering compliance concerns around hate speech, misinformation, and AI safety.
‘DOGE’ Link
Also, a recent Reuters investigation found that Grok is being quietly promoted within the US federal government through Musk’s (somewhat unpopular) Department of Government Efficiency (DOGE), thereby raising concerns over potential conflicts of interest and handling of sensitive data.
What Are Businesses Doing Instead?
With Grok now off-limits in one in four European organisations, it appears that most companies are leaning into AI platforms with clearer data control options and dedicated enterprise tools. For example, ChatGPT Enterprise and Microsoft’s Copilot (powered by OpenAI’s models) are increasingly popular among large firms for their security features, audit trails, and compatibility with existing workplace platforms like Microsoft 365.
Meanwhile, companies with highly sensitive data are now exploring private GenAI solutions, such as running open-source models like Llama or Mistral on internal infrastructure, or through secured cloud environments provided by AWS, Azure or Google Cloud.
Others are looking at AI governance platforms to sit between employees and GenAI tools, offering monitoring, usage tracking and guardrails. Tools like DataRobot, Writer, or even Salesforce’s Einstein Copilot are positioning themselves not just as generative AI providers, but as risk-managed AI partners.
At the same time, the backlash against Grok shows how quickly sentiment can shift. Musk's original pitch for Grok as an edgy, tell-it-like-it-is alternative to Silicon Valley's AI offerings found some traction among individual users. But in a business setting, particularly in Europe, compliance, reliability and reputational alignment seem to matter more than iconoclasm.
Regulation Reshaping the Playing Field
The surge in bans against Grok also reflects a change in how generative AI is being governed and evaluated at the institutional level. Across Europe, regulators are moving to tighten rules on artificial intelligence, with the EU’s landmark AI Act expected to set a global precedent. This new framework categorises AI systems by risk level and could impose strict obligations on tools used in high-stakes environments like recruitment, finance, and public services.
That means tools like Grok, which are perceived to lack sufficient transparency or safety mechanisms, could face even greater scrutiny in the future. European firms are clearly starting to anticipate these regulatory pressures, and adjusting their AI strategies accordingly.
Grok’s Market Position May Be Out of Step
At the same time, the pattern of bans has implications for the competitive dynamics of the GenAI sector. For example, while OpenAI, Google and Microsoft have invested heavily in enterprise-ready versions of their chatbots, with controls for data retention, content filtering and auditability, Grok appears less geared towards business use. Its integration into a consumer social media platform and emphasis on uncensored responses make it an outlier in an increasingly risk-aware market.
Security and Deployment Strategies Are Evolving
There’s also a growing role for cloud providers and IT security teams in shaping how AI tools are deployed across organisations. Many companies are now turning to secure gateways, policy enforcement tools, or in some cases, completely air-gapped deployments of open-source models to ensure data stays within strict compliance boundaries. These developments suggest the AI market is maturing quickly, with an emphasis not only on innovation, but on operational control.
What Does This Mean For Your Business?
For UK businesses, the growing rejection of Grok highlights the importance of due diligence when selecting generative AI tools. With data privacy laws such as the UK GDPR still closely aligned with EU regulations, similar concerns around transparency, content reliability and compliance are just as relevant domestically. Organisations operating across borders, particularly those in regulated sectors like finance, healthcare or legal services, are likely to favour tools that not only perform well but also come with clear safeguards, documentation and support for enterprise-grade governance.
More broadly, the story of Grok is a reminder that in today’s AI landscape, branding and ambition are no longer enough. The success of generative AI tools increasingly depends on trust, i.e. trust in how data is handled, how outputs are generated, and how tools behave under pressure. For developers and vendors, that means security, transparency and adaptability must be built into the product from day one. For businesses, it means asking tougher questions before deploying any new tool into day-to-day operations.
While Elon Musk’s approach may continue to resonate with individual users who value unfiltered output or alignment with particular ideologies, enterprise buyers are clearly playing by a different rulebook. They’re looking for stability, accountability and risk management, not provocation. As regulation tightens, that divide is likely to widen.
Tech Insight : Why Google’s New ‘Fingerprint’ Policy Matters
In this Tech Insight, we look at Google’s controversial decision to allow advertisers to use device fingerprinting, exploring what the technology involves, why it has sparked concern, and what it means for users, businesses, and regulators.
A Policy Reversal
In February 2025, Google quietly updated its advertising platform rules, allowing companies that use its services to deploy a tracking method known as ‘device fingerprinting’. The change came with little fanfare but has quickly become one of the most debated privacy developments of the year.
Until now, fingerprinting was explicitly prohibited under Google’s policies. The company had long argued it undermined user control and transparency. In a 2019 blog post, Google described it as a technique that “subverts user choice and is wrong”. But five years later, the same practice is being positioned as a legitimate tool for reaching audiences on platforms where cookies no longer work effectively.
According to Google, the decision reflects changes in how people use the internet. For example, with more users accessing content via smart TVs, consoles and streaming devices (environments where cookies and consent banners are limited or irrelevant), fingerprinting offers advertisers a new way to track users and measure campaign effectiveness. The company says it is also investing in "privacy-enhancing technologies" that reduce risks while still allowing ads to be targeted and measured.
However, the reaction from regulators, privacy campaigners and some in the tech community has been far from supportive.
What Is Fingerprinting?
Fingerprinting is a method of identifying users based on the technical details of their device and browsing setup. Unlike cookies, which store data on a user’s device, fingerprinting collects data that’s already being transmitted as part of normal web use.
This includes information such as:
– Browser version and type.
– Operating system and installed fonts.
– Screen size and resolution.
– Language settings and time zone.
– Battery level and available plugins.
– IP address and network information.
Individually, none of these data points reveals much, but combined they can create a unique "fingerprint" that allows advertisers or third parties to recognise a user each time they go online, often without their knowledge and with no way to opt out.
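To make the mechanism concrete, here is a minimal, hypothetical sketch of how such a fingerprint can be derived: each attribute on its own is common, but hashing the sorted combination yields a stable identifier that changes if even one attribute differs. The attribute names and values below are illustrative, not those used by any real tracking system.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine individually unremarkable device attributes into a
    single stable identifier by hashing their canonical form."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two hypothetical visitors whose attributes are all common on their own,
# but whose combinations differ in a single field (installed fonts).
visitor_a = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "language": "en-GB",
    "fonts": "Arial,Calibri,Segoe UI",
}
visitor_b = dict(visitor_a, fonts="Arial,Calibri,Segoe UI,Helvetica")

print(fingerprint(visitor_a))
print(fingerprint(visitor_b))
print(fingerprint(visitor_a) == fingerprint(visitor_b))  # False
```

Because the identifier is derived from data the browser transmits anyway, nothing is stored on the device, which is why clearing cookies or using private browsing has no effect on it.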
Also, because it happens passively in the background, fingerprinting is hard to block. Even clearing cookies or browsing in private mode won’t prevent it. For privacy advocates, that’s a key part of the problem.
How Fingerprinting’s Being Used
With third-party cookies disappearing and users browsing through everything from laptops to smart TVs, fingerprinting essentially offers a way for advertisers to maintain continuity, even when cookies and consent banners can’t keep up.
Advertisers use it to build persistent profiles that help with targeting, measurement, and fraud detection. In technical terms, it’s a highly efficient way to link impressions and conversions without relying on traditional identifiers.
Why Critics Are Alarmed
Almost immediately after Google’s announcement, a wave of criticism followed. For example, the UK’s independent data protection regulator, the Information Commissioner’s Office (ICO), called the move “irresponsible” and said it risks undermining the principle of informed consent.
In a December blog post, Stephen Almond, Executive Director of Regulatory Risk at the ICO, warned: “Fingerprinting is not a fair means of tracking users online because it is likely to reduce people’s choice and control over how their information is collected.”
The ICO has published draft guidance explaining that fingerprinting, like cookies, must comply with existing UK data laws. These include the UK GDPR and the Privacy and Electronic Communications Regulations (PECR). That means advertisers need to demonstrate transparency, secure user consent where required, and ensure users understand how their data is being processed.
The problem, critics say, is that fingerprinting makes this nearly impossible. The Electronic Frontier Foundation’s Lena Cohen described it as a “workaround to offering and honouring informed choice”. Mozilla’s Martin Thomson went further, saying: “By allowing fingerprinting, Google has given itself — and the advertising industry it dominates — permission to use a form of tracking that people can’t do much to stop.”
Google’s Justification
Google insists that fingerprinting is already widely used across the industry and that its updated policy simply reflects this reality. The company has argued that IP addresses and device signals are essential for preventing fraud, measuring ad performance, and reaching users on platforms where traditional tracking methods fall short.
In a statement, a Google spokesperson said: “We continue to give users choice whether to receive personalised ads, and will work across the industry to encourage responsible data use.”
Criticism From Privacy Campaigners
However, privacy campaigners argue that the decision puts business interests above users. They point out that fingerprinting isn’t just harder to detect, but it’s also harder to control. For example, unlike cookies, there’s no pop-up, no ‘accept’ or ‘reject’ button, and no straightforward way for users to opt out.
Pete Wallace, from advertising technology company GumGum, said the change represents a backwards step: “Fingerprinting feels like it’s taking a much more business-centric approach to the use of consumer data rather than a consumer-centric approach.”
Advertisers Welcome the Change
Unsurprisingly perhaps, many within the advertising industry welcomed Google's decision: as the usefulness of cookies declines, brands are looking for alternative ways to reach users, especially across multiple devices.
For example, Jon Halvorson, Global VP at Mondelez International, said: “This update opens up more opportunities for the ecosystem in a fragmented and growing space while respecting user privacy.”
Trade bodies such as the IAB Tech Lab and Network Advertising Initiative echoed the sentiment, saying the update enables responsible targeting and better cross-device measurement.
That said, even among advertisers, there’s an awareness that the use of fingerprinting must be handled carefully. Some fear that if it is abused or poorly implemented, it could invite regulatory action, or worse, further erode user trust in the online ad industry.
Legal Responsibilities Under UK Law
For UK companies using Google’s advertising tools, the policy change doesn’t mean fingerprinting is suddenly risk-free. While Google’s own platform rules now allow the practice, UK data protection law still applies, and it’s strict.
For example, organisations planning to use fingerprinting must ensure their tracking methods are:
– Clearly explained to users, with full transparency.
– Proportionate to their purpose, and not excessive.
– Based on freely given, informed consent where applicable.
– Open to user control, including rights to opt out or request erasure.
The ICO has warned that fingerprinting, by its very nature, makes it harder to meet these standards. Because it often operates behind the scenes and without user awareness, providing the level of transparency required under the UK GDPR and PECR is a significant challenge.
Therefore, any business using fingerprinting for advertising will need to demonstrate that it is not only aware of these rules, but fully compliant with them. Regulators have already signalled their willingness to act where necessary, and given Google’s influence, this policy change is likely to come under particular scrutiny.
The Reputational Risks Are Real
It should be noted, however, that while it’s effective, fingerprinting comes with serious downsides, especially for businesses operating in sensitive or highly regulated sectors. For example, since users often don’t know it’s happening, fingerprinting can undermine trust, even when it’s being used within legal boundaries.
For industries like healthcare, finance, or public services, silent tracking could prove more damaging than the data is worth. If customers feel they’ve been tracked without consent, the backlash, whether legal, reputational or both, can be swift.
Fragmentation Across the Ecosystem
Another practical challenge is that fingerprinting isn’t supported equally across platforms. While Google has now allowed it within its ad systems, others have gone in the opposite direction.
For example, browsers like Safari, Firefox and Brave actively block or limit fingerprinting. Apple in particular has built its privacy credentials around restricting such practices. This means advertisers relying heavily on fingerprinting could see patchy results or data gaps depending on the devices or browsers their audiences are using.
Part of a Broader Toolkit
It’s worth remembering here that fingerprinting isn’t the only tool on the table. Many ad tech providers are combining it with alternatives such as:
– Contextual targeting: showing ads based on the content being viewed (e.g. travel ads on a travel blog).
– First-party data: information a company collects directly from its customers, such as purchase history or website activity, rather than from third parties.
– On-device processing: data is analysed on the user’s phone or computer, never sent to a central server.
– Federated learning: the user’s device trains a model (e.g. for ad targeting or recommendations) and shares only anonymised updates, not personal data.
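Of these alternatives, contextual targeting is the simplest to illustrate. The sketch below shows the basic idea under stated assumptions: an ad category is chosen purely from the words on the page, with no user identifier involved. The category names and keyword lists are invented for illustration and do not come from any real ad platform.

```python
# Hypothetical keyword lists per ad category, for illustration only.
AD_CATEGORIES = {
    "travel": {"flight", "hotel", "itinerary", "beach"},
    "finance": {"mortgage", "savings", "pension", "isa"},
    "food": {"recipe", "restaurant", "ingredients"},
}

def pick_ad_category(page_text: str) -> str:
    """Pick the ad category whose keywords best overlap the page content.
    No cookie, fingerprint or user profile is consulted."""
    words = set(page_text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in AD_CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "generic"

print(pick_ad_category("Compare flight and hotel deals for your beach itinerary"))
# travel-related keywords dominate, so a travel ad is selected
```

Because the decision depends only on the page, not the person, this approach sidesteps the consent problems that fingerprinting raises, at the cost of losing cross-site continuity.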
Therefore, rather than replacing cookies outright, fingerprinting may end up as just one option in a mixed strategy, and used selectively where consent is hard to obtain, or where traditional identifiers are unavailable.
What Does This Mean for Your Business?
For UK businesses, the reintroduction of fingerprinting within Google’s advertising ecosystem may offer more stable tracking across devices and platforms, especially as third-party cookies continue to decline. However, the use of such techniques also brings legal and reputational risks that cannot be delegated to Google or any external platform.
Organisations that advertise online, whether directly or through agencies, should now assess how fingerprinting fits within their broader compliance obligations under UK data protection law. The Information Commissioner’s Office has made it clear that fingerprinting is subject to the same principles of transparency, consent, and fairness as other tracking methods. Simply using a tool because it is technically available does not make its use lawful.
Beyond legal considerations, there’s also a growing risk to customer trust. For example, if users discover that they are being tracked through methods they cannot see, manage or decline, the damage to a brand’s credibility could be significant, particularly in sectors where data sensitivity is high. For many organisations, the question may not just be whether fingerprinting can improve ad performance, but whether it aligns with the expectations of their audience and the values they wish to uphold.
This change also places pressure on advertisers, platforms, and regulators to clarify the boundaries of responsible data use. For some, fingerprinting may form part of a wider privacy-aware strategy that includes contextual targeting or consent-based identifiers. For others, it may prove too opaque or contentious to justify. Either way, businesses will need to make informed decisions, and be ready to explain them.
Tech News : Fastest Change In Tech History
The pace and scale of artificial intelligence (AI) development is now outstripping every previous tech revolution, according to new landmark reports.
Faster Than Anything We’ve Seen Before
Some of the latest data confirms that AI really is moving faster than anything that’s come before it. That’s the key message from recent high-profile reports including Mary Meeker’s new Trends – Artificial Intelligence report and Stanford’s latest AI Index, both released in spring 2025. Together, the data they present highlights an industry surging ahead at a speed that’s catching even seasoned technologists off guard.
Meeker, the influential venture capitalist once dubbed “Queen of the Internet”, hadn’t published a trends report since 2019, but it seems that the extraordinary pace of AI progress has lured her back, and her new 340-page analysis uses the word “unprecedented” more than 50 times (with good reason).
“Adoption of artificial intelligence technology is unlike anything seen before in the history of computing,” Meeker writes. “The speed, scale, and competitive intensity are fundamentally reshaping the tech landscape.”
Stanford’s findings echo this. For example, its 2025 AI Index Report outlines how generative AI in particular has catalysed a rapid transformation, with advances in model size, performance, use cases, and user uptake occurring faster than academic and policy communities can track.
The Numbers That Prove the Surge
In terms of users, OpenAI’s ChatGPT generative AI chatbot hit 100 million users within two months of launch, and it is now approaching 800 million monthly users just 17 months on. No platform in history has scaled that quickly – not Google, not Facebook, not TikTok.
Business adoption of AI is rising rapidly. For example, according to Stanford’s AI Index 2025, more than 70 per cent of surveyed global companies are now either actively deploying or exploring the use of generative AI. This represents a significant increase from fewer than 10 per cent just two years earlier. At the same time, worldwide investment in AI reached $189 billion in 2023, with technology firms allocating record levels of funding to infrastructure, research, and product development.
Cost of Accessing AI Falling
It seems that the cost of accessing AI services is also falling sharply. For example, Meeker’s Trends – Artificial Intelligence report notes that inference costs, i.e. the operational cost of running AI models, have declined by a massive 99.7 per cent over the past two years. Based on Stanford’s calculations, this means that businesses are now able to access advanced AI capabilities at a fraction of the price paid in 2022.
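To put that decline in concrete terms, a 99.7 per cent drop leaves just 0.3 per cent of the original price, i.e. a reduction of roughly 330-fold. The quick sketch below shows the arithmetic; the $10-per-million-token starting price is purely illustrative and not a figure from either report.

```python
# Illustrative arithmetic only: the starting price is a made-up figure,
# used to show what a 99.7% decline in inference cost means in practice.
start_price = 10.00                     # hypothetical 2022 cost per million tokens ($)
decline = 0.997                         # the 99.7% decline cited in Meeker's report
end_price = start_price * (1 - decline)
reduction_factor = start_price / end_price

print(f"2022: ${start_price:.2f} per million tokens")
print(f"Now:  ${end_price:.4f} per million tokens (~{reduction_factor:.0f}x cheaper)")
```

Whatever the absolute starting price, the ratio is the same: a workload that cost $1,000 to run in 2022 would cost about $3 today.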
What’s Driving This Acceleration?
Several factors are converging to drive this acceleration. These are:
– Hardware efficiency leaps. Nvidia’s 2024 Blackwell GPU reportedly uses 105,000x less energy per token than its 2014 Kepler chip! At the same time, custom AI chips from Google (TPU), Amazon (Trainium), and Microsoft (Athena) are rapidly improving performance and slashing energy use.
– Cloud hyperscale investment. The world’s biggest tech firms are betting big on AI infrastructure. Microsoft, Amazon, and Google are all racing to expand their cloud platforms with AI-specific hardware and software. As Meeker puts it, “These aren’t side projects — they’re foundational bets.”
– Open-source momentum. Hugging Face, Mistral, Meta’s LLaMA, and a host of Chinese labs are releasing increasingly powerful open-source models. This is democratising access, increasing competition, and reducing costs — all of which accelerate adoption.
– Government and sovereign AI initiatives. National efforts, particularly in China and the EU, are helping to fund AI infrastructure and drive localisation. These projects are pushing innovation outside Silicon Valley at a rapid pace.
– Developer ecosystem growth. Millions of developers are now building on top of generative AI APIs. Google’s Gemini, OpenAI’s GPT, Anthropic’s Claude, and others have created platforms where innovation compounds rapidly. As Stanford notes, “Industry now outperforms academia on nearly every AI benchmark.”
AI Agents – From Chat to Task Execution
One major change in the past year has been the move beyond simple chatbot interfaces. For example, so-called “AI agents”, i.e. systems that can plan and carry out multi-step tasks, are emerging quickly. This includes tools that can search the web, book travel, summarise documents, or even write and run code autonomously.
Companies like OpenAI, Google DeepMind, and Adept are racing to build these agentic systems. The goal is to create AI that can do, not just respond. This could fundamentally change knowledge work, and is already being trialled in areas like customer service, legal research, and software testing.
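The “plan, then act” loop behind such agentic systems can be sketched generically. Everything below is a simplified stand-in rather than any vendor’s API: the tools are dummy functions, and the planner is hard-coded where a real agent would call an LLM to decide the next step.

```python
# Minimal agent loop: pick a tool, run it, record the result,
# and repeat until the planner judges the task complete.

def search_web(query):
    # Stand-in tool; a real agent would call a live search service.
    return f"results for '{query}'"

def summarise(text):
    # Stand-in tool; a real agent would call a model to summarise.
    return f"summary of {text}"

TOOLS = {"search": search_web, "summarise": summarise}

def plan_next_step(task, history):
    # Hard-coded planner for illustration. A real agent would ask an LLM:
    # "given the task and the results so far, which tool should run next?"
    if not history:
        return ("search", task)          # step 1: gather information
    if len(history) == 1:
        return ("summarise", history[-1])  # step 2: condense it
    return None                          # task complete

def run_agent(task):
    history = []
    while (step := plan_next_step(task, history)) is not None:
        tool, arg = step
        history.append(TOOLS[tool](arg))  # execute the tool, keep the result
    return history[-1]

print(run_agent("cheap flights to Lisbon"))
```

The structural point is the loop itself: the system decides, acts, observes the result, and decides again, which is what separates an agent that can *do* from a chatbot that can only *respond*.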
The Message
For businesses, the message appears to be simple: adapt quickly, or risk falling behind.
Meeker’s report emphasises that AI is already “redefining productivity”, with tools delivering step changes in output for tasks like drafting, data analysis, code generation, and document processing. Many enterprise users report 20–40 per cent efficiency gains when integrating AI into daily workflows.
However, it’s not just about performance. Falling costs and rising model capabilities mean that AI is becoming accessible to even small businesses, not just tech giants. Whether it’s automating customer support or generating marketing copy, SMEs now have access to tools that rival those of major players.
From a market perspective, however, things are less clear-cut. While revenue is rising – OpenAI is projected to hit $3.4 billion in 2025, up from around $1.6 billion last year – most AI firms are still burning through capital at unsustainable rates.
Also, training large models is very expensive. GPT-4, for example, reportedly cost $78 million just to train, and newer models will likely exceed that. As Meeker cautions: “Only time will tell which side of the money-making equation the current AI aspirants will land.”
Challenges, Criticism, and Growing Pains
Despite the enthusiasm, not everything is rosy. The pace of AI’s rise has sparked a host of issues, such as:
– Energy use and environmental impact. Training and running AI models consumes vast amounts of electricity. Even with hardware improvements, Stanford warns of “significant sustainability challenges” as model sizes increase.
– AI misuse and disinformation. The Stanford report logs a steep rise in reported AI misuse incidents, particularly involving deepfakes, scams, and electoral disinformation. Regulatory frameworks remain patchy and reactive.
– Labour market upheaval. Stanford data shows a clear impact on job structures, particularly in content-heavy and administrative roles. While AI augments some jobs, it displaces others, and workers, employers, and policymakers are struggling to keep up.
– Profitability concerns. While AI infrastructure is growing rapidly, it’s not yet clear which companies will convert hype into long-term revenue. Even the most well-funded players face stiff competition, regulatory scrutiny, and the risk of market saturation.
What Does This Mean For Your Business?
It seems that the combination of surging adoption, falling costs, and rising capability is placing AI at the centre of digital transformation efforts across nearly every sector. For global businesses, the incentives to engage with AI tools are growing rapidly, with productivity benefits now being demonstrated at scale. At the same time, the pace of change is creating new risks that still lack clear long-term responses, particularly workforce disruption, misuse, and unsustainable infrastructure demands.
For UK businesses, the implications are becoming increasingly difficult to ignore. As global competitors embed AI into operations, decision-making, and service delivery, organisations that delay may struggle to keep pace. At the same time, the availability of open-source models and accessible APIs means that smaller firms and startups are also in a position to benefit, if they can navigate the complexity and choose the right tools. Key sectors such as financial services, legal, healthcare, and logistics are already seeing early AI-driven efficiencies, and pressure is mounting on others to follow suit.
Policy makers, regulators, and infrastructure providers also have critical roles to play. Whether it is through ensuring fair access to computing resources, investing in AI literacy and skills, or designing governance frameworks that can evolve with the technology, stakeholders across the economy will need to respond quickly and collaboratively. While the financial picture remains uncertain, what is now clear is that AI is no longer a frontier science but a core driver of technological change, and one that is advancing at a pace few expected.