Brands Pay To Be Recommended By AI, Not Google

The Prompting Company has raised $6.5 million to help businesses get mentioned in AI-generated answers from tools like ChatGPT, Gemini, and Claude, signalling a major shift in how people now discover products online.

Who Is The Prompting Company?

The Prompting Company is a young, Y Combinator-backed startup (Y Combinator is a Silicon Valley startup accelerator) that wants to redefine online marketing for the age of artificial intelligence. Founded just four months ago by Kevin Chandra, Michelle Marcelline, and Albert Purnama, the company specialises in what it calls Generative Engine Optimisation, or GEO. The idea is that as people increasingly ask AI tools for advice instead of searching Google, brands must learn how to make their products visible to these systems.

The three founders, all originally from Indonesia, previously built Typedream, an AI-assisted website builder later acquired by Beehiiv, and Cotter, a passwordless authentication service bought by Stytch. Their latest venture reflects what many see as a fundamental turning point in digital discovery, i.e., AI assistants are becoming the new gateway to information, and by extension, to products and services.

Client List

The company’s early client list already includes Rippling (an HR and payroll software platform), Rho (a corporate banking and spend management platform), Motion (an AI-powered productivity and scheduling tool), Fondo (a tax automation platform for startups), Kernel (a data and machine learning infrastructure company), Traceloop (a developer observability platform), and Vapi (an AI voice agent platform), along with one unnamed Fortune 10 business.

What The Prompting Company Does

The startup’s service is built around a relatively simple process. First, it identifies the kinds of questions AI systems are being asked in a particular market. Rather than focusing on short search keywords like “best business bank account”, GEO looks for longer, more contextual prompts such as “I’ve just set up a small company, what’s the best business account with no monthly fees?”

Once those queries are identified, The Prompting Company creates structured, machine-readable content that directly answers them. These AI-optimised pages strip away human-facing clutter like pop-ups, menus, and marketing slogans. Instead, they present clean, factual information written in a format that large language models can easily interpret and reference. The company then automatically routes AI crawlers to these pages instead of the brand’s normal website.

In short, it is search engine optimisation for AI rather than for humans. Its goal is to make brands “the product cited by ChatGPT”, as its own website puts it. The service operates on a subscription model, starting from $99 per month for basic tracking of 25 prompts and rising to enterprise plans with custom integrations and support.
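The company has not said exactly how that crawler routing works under the hood, but the general pattern it describes, i.e., inspecting a visitor’s user agent and serving a stripped-down, factual page when a known AI bot comes calling, can be sketched in a few lines of Python. The sketch below is purely illustrative: GPTBot (OpenAI), ClaudeBot (Anthropic), and PerplexityBot are genuine published crawler tokens, but the route, the page content, and “Acme Bank” are hypothetical, not The Prompting Company’s actual implementation.

```python
# Minimal sketch of user-agent-based routing for AI crawlers (illustrative
# only; not The Prompting Company's actual implementation).
from flask import Flask, request

app = Flask(__name__)

# GPTBot, ClaudeBot and PerplexityBot are published crawler tokens; a real
# service would maintain a longer, regularly updated list.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot")

def is_ai_crawler(user_agent: str) -> bool:
    # Substring match on the User-Agent header. A production system would
    # also verify the crawler's published IP ranges, since UAs can be spoofed.
    return any(token in user_agent for token in AI_CRAWLER_TOKENS)

@app.route("/business-banking")
def business_banking():
    if is_ai_crawler(request.headers.get("User-Agent", "")):
        # Clean, factual, machine-readable answer: no pop-ups, menus,
        # or marketing slogans for the model to wade through.
        return (
            "Acme Bank business account: no monthly fees, free UK "
            "transfers, 24/7 in-app support. Eligibility: UK-registered "
            "companies.",
            200,
            {"Content-Type": "text/plain"},
        )
    # Human visitors still get the normal marketing site.
    return "<html><body>Full marketing page with menus and sign-up.</body></html>"
```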

The Trend

The Prompting Company’s entire business seems to rest on a simple observation: people are no longer starting their product searches on Google. They are asking AI assistants instead.

For example, Adobe’s 2025 Digital Economy Report found that US traffic from generative AI tools surged 4,700 per cent in a single year, with 38 per cent of consumers saying they had already used AI for shopping. Of those, 73 per cent said AI had now become their main tool for product research. The same report showed that visitors coming via AI assistants stayed 32 per cent longer on sites, viewed more pages, and were 27 per cent less likely to leave immediately. In other words, more shoppers are turning to generative AI tools to find deals, research products, and make buying decisions.

These changes suggest that AI assistants are now beginning to perform the filtering role that search engines once did. Instead of scrolling through links, users receive an instant shortlist of relevant products, often only two or three names. Being one of those names, therefore, has obvious commercial value.

Why Investors Are Paying Attention

That value explains why investors have been so quick to back The Prompting Company. The $6.5 million seed round, led by Peak XV Partners and Base10 with participation from Y Combinator and others, reflects growing belief that the next phase of digital advertising will take place inside AI assistants.

For investors, the logic is pretty straightforward. Whoever shapes how AI tools make product recommendations will control the top of the sales funnel for entire industries. Traditional search and pay-per-click advertising rely on visible results and bids for keywords. In AI-driven discovery, there may be no visible results page at all. An assistant could simply say, “You should try Rho for business banking,” and the conversation ends there.

That urgency among brands is reflected in the company’s own analysis, which suggests that much of the recent growth in website traffic is now coming from AI bots rather than human visitors. The founders say that developers are already using AI tools to ask for product recommendations inside their workflows, and they believe that ordinary consumers are beginning to do the same.

What Will The Funding Be Used For?

The startup says it will use the $6.5 million to scale its platform, develop AI-facing website templates for customers, and expand partnerships with major AI providers. It is also collaborating with Nvidia on “next-generation AI search”, though the details of that project have not been disclosed.

The company currently claims to host around half a million AI-optimised pages and to be driving double-digit millions of monthly visits for clients. Its customers span fintech, developer tools, and enterprise software, but the founders say the model applies to any sector where customers ask detailed, conversational questions.

The Lead In A New Sector

The funding gives The Prompting Company a clear lead in what could soon be a major new marketing sector. By positioning itself as the first dedicated GEO platform, it is creating a new type of infrastructure for online visibility. The company argues that the fastest-growing “users” of the internet today are AI agents, not humans, and that brands need to design websites for those agents first.

It also aims to make GEO repeatable and data-driven, similar to how SEO matured into an industry over the past two decades. The difference is that in AI discovery, results are generated dynamically rather than ranked on a static page, meaning brands will need constant updates to stay visible.

Competitors

The rise of GEO is highly likely to unsettle traditional SEO agencies and digital advertisers. The overlap between Google search results and AI recommendations is shrinking, with some analyses suggesting it has dropped from around 70 per cent to below 20 per cent. That means a brand ranking first on Google might not even appear in an AI assistant’s answer.
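Those overlap figures depend heavily on methodology, and the analyses rarely use a standard one. One plausible measure (an assumption here, not a documented method) is the share of a query’s top-ten Google results that also appear among the sources an AI assistant cites for the same question. The toy calculation below, with made-up domains, shows how a brand can sit on page one of Google yet barely register in AI answers.

```python
# Toy illustration of search/AI overlap. The domains are invented and the
# metric is one plausible definition, not the cited analyses' actual method.
google_top10 = {f"bank-{i}.example" for i in range(10)}  # top-10 organic results
ai_cited = {"bank-0.example", "fintech-x.example", "fintech-y.example"}

overlap = len(google_top10 & ai_cited) / len(google_top10)
print(f"Overlap: {overlap:.0%}")  # Overlap: 10% - nine page-one results never surface
```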

Agencies built around keyword bidding and link optimisation now face the challenge of learning how to influence AI-generated answers, which rely on context and relevance rather than metadata and backlinks. This transition could change how marketing budgets are allocated, with more money flowing towards GEO-style services.

AI Companies

For companies like OpenAI, Google, Anthropic, and Meta, this trend could be an opportunity as well as a risk. On the one hand, AI-driven shopping and product discovery could open new sources of revenue, especially as assistants move beyond recommending items to actually completing purchases. OpenAI’s recent integration with Stripe, for example, already allows ChatGPT to handle some transactions directly.

On the other hand, questions around bias and commercial influence are inevitable. For example, if AI assistants begin recommending brands that have paid for optimisation or have supplied AI-friendly data, users may expect clear disclosure of those relationships. Transparency will become crucial as assistants start to resemble personal shoppers or product curators.

There are also technical implications to consider here. GEO depends on AI models being able to browse the open web and ingest structured content. ChatGPT, Gemini, and Perplexity can already do this, but others, such as Anthropic’s Claude, have been more limited. This could lead to a divided ecosystem with some assistants open to optimisation, while others keep recommendations strictly in-house.

Businesses And Advertisers

For businesses, the important message is that appearing in AI-generated answers may soon matter as much as appearing on page one of Google once did. The Prompting Company claims its system allows even small or new brands to compete by creating high-quality, context-aware content that AI tools are more likely to cite.

Early signs suggest that AI-driven traffic, while smaller in volume than search, may be higher in quality. For example, Adobe’s data shows that visitors arriving from AI recommendations tend to stay longer and are more focused on buying decisions. They also use AI most often for complex or big-ticket purchases, where research matters more than impulse.

For advertisers, however, it also poses new questions: how do you measure success when a chatbot’s conversation, not a click, triggers a purchase? How do you influence visibility in an algorithm that changes with every prompt? And how do you maintain brand trust when recommendations are made by machines rather than people?

Challenges And Criticisms

As with any fast-moving technology, the rise of generative engine optimisation (GEO) raises a number of ethical and practical questions for both businesses and consumers.

The first challenge is transparency. For example, if brands start paying to be mentioned by AI, users must be able to tell whether a recommendation is organic or commercially influenced. Regulators could extend existing advertising disclosure rules to AI assistants, just as they have done with influencer marketing.

Bias is also a key issue to consider. AI systems are only as balanced as the data they are trained on, and introducing commercial optimisation risks amplifying existing inequalities. Studies of AI in retail have already raised concerns about how these systems collect and use customer data, and whether they treat all consumers fairly. Experts have warned that businesses must prioritise transparency, bias testing, and responsible data use if AI-driven commerce is to gain public trust.

Another challenge is attribution. For example, AI traffic still converts at lower rates than traditional search or social referrals, though the gap is narrowing. Marketers can’t yet prove, with precision, that being mentioned in an AI answer directly leads to a sale. Until that attribution problem is solved, investment in GEO may remain experimental for many firms.

Finally, there are issues around dependence. If AI assistants become the main interface for product discovery, the brands that are mentioned will dominate attention, and those that are not may struggle to be seen at all. For now, The Prompting Company is positioning itself as the bridge between those two realities, betting that businesses will soon have to market to AI agents as actively as they do to people.

What Does This Mean For Your Business?

If GEO takes hold in the way its backers expect, the structure of online discovery could change faster than many realise. For UK businesses in particular, this means rethinking how visibility is achieved and measured. Instead of fighting for Google rankings or paying for search ads, companies may soon need to consider whether their products can be understood, cited, and recommended by AI systems that are shaping what customers see first. That shift could favour agile firms that adopt AI-ready content strategies early, while leaving slower competitors struggling to appear in the new recommendation landscape.

For advertising and marketing industries, GEO could become both a challenge and an opportunity. For example, traditional SEO agencies may need to retrain their focus on machine-readable design, structured data, and conversational context, while media buyers could face a future where there are no clear ad slots to purchase. Instead, visibility might depend on maintaining technical partnerships, feeding accurate data to AI systems, and monitoring how generative models respond to brand information in real time.

AI companies also face growing scrutiny as these practices expand. If assistants begin to behave like digital sales representatives, they will need to explain how and why specific products are recommended. Regulators and consumer watchdogs will expect transparency around paid optimisation, and users will demand the ability to distinguish between genuine relevance and commercial influence. Maintaining public trust will require clear standards, and the companies that set them will likely shape the rules for everyone else.

For investors and innovators, the appeal is pretty obvious. GEO creates a new layer of infrastructure in the digital economy, one that could define how brands reach audiences as AI assistants replace search boxes. However, the broader outcome will depend on how responsibly the model is used. If transparency and fairness are built into the system from the start, AI-powered product discovery could simplify choices for consumers and open new routes to market for smaller firms. If not, it risks becoming another opaque advertising channel that benefits only those able to pay for visibility.

For now, The Prompting Company has positioned itself at the centre of that debate. Its technology reflects a future in which algorithms act as gatekeepers to consumer attention, and its early funding shows how much confidence investors have in that vision. Whether this transforms online marketing or simply adds another layer to it will depend on how quickly businesses, regulators, and AI developers adapt to a world where products must be marketed not only to people but to the machines that speak to them.

Microsoft Accused Of Misleading Over Copilot Prices

Australian regulators have taken Microsoft to court, alleging the company misled around 2.7 million Microsoft 365 users by implying they had to accept a higher-priced AI-powered plan or cancel altogether, while failing to reveal a cheaper alternative that was still available.

What Happened and Why?

The case focuses on Microsoft’s handling of its consumer subscription services, i.e., Microsoft 365 Personal and Family plans, used by millions of households for applications such as Word, Excel, PowerPoint, Outlook and OneDrive. These plans are sold on monthly or annual auto-renewing subscriptions, making them a cornerstone of many users’ digital routines.

Back in October 2024, Microsoft decided to integrate Copilot (its generative AI assistant) into Microsoft 365 Personal and Family subscriptions in Australia. The rollout later expanded worldwide in January 2025. Microsoft described Copilot as a major innovation, offering “AI-powered features” that would “help users unlock their potential”.

However, this integration also triggered a sharp price rise. According to the Australian Competition and Consumer Commission (ACCC), the annual Microsoft 365 Personal plan increased from AUD 109 to AUD 159, a rise of 45 per cent, while the Family plan rose from AUD 139 to AUD 179, a 29 per cent increase. Monthly fees also went up.

Only Two Choices Implied

The ACCC says Microsoft notified subscribers through two emails and a blog post, telling them their next renewal would include Copilot and the higher price. The messages reportedly told users that unless they cancelled before renewal, the higher charge would apply automatically.

For example, one such email stated: “Unless you cancel two days before your renewal date, we’ll charge AUD 159.00 including taxes every year. Cancel any time to stop future charges or change how you pay by managing your subscription.”

The regulator now alleges these communications implied users had only two choices, i.e., pay more for Copilot, or cancel their subscription entirely. What the company failed to mention, according to the ACCC, was that there was a third option available, which was switching to what Microsoft called the “Classic” plan.

Classic Plan Is Third Option

Although the Classic plan allowed customers to retain all the features of their existing Microsoft 365 subscription, without Copilot and at the old price, it was reportedly not mentioned in Microsoft’s emails or blog post. Instead, the ACCC says the Classic option only appeared if a customer began the cancellation process, navigating through several screens before the option was revealed.

Given how integral Microsoft 365 has become to home users, e.g., providing essential software and cloud storage, the ACCC argues this created unfair pressure. ACCC chair Gina Cass-Gottlieb said: “The Microsoft Office apps included in 365 subscriptions are essential in many people’s lives, and given there are limited substitutes to the bundled package, cancelling the subscription is a decision many would not make lightly.”

Proceedings Filed Against Microsoft

On 27 October 2025, the ACCC filed proceedings in Australia’s Federal Court against both Microsoft Corporation in the United States and its Australian subsidiary, Microsoft Pty Ltd. The regulator alleges that Microsoft engaged in misleading or deceptive conduct, and made false or misleading representations, in breach of sections 18 and 29 of the Australian Consumer Law.

Specifically, the ACCC says Microsoft falsely represented that users had to accept Copilot to maintain access to their subscription (“Copilot Necessity Representation”), that they had to pay higher prices to continue using Microsoft 365 (“Price Necessity Representation”), and that they only had the two options of accepting the higher price or cancelling (“Options Representation”).

The ACCC claims these representations were false and misleading because the Classic plan was available at the old price, without Copilot, and that by omitting mention of that plan, Microsoft denied customers the chance to make an informed decision.

Cass-Gottlieb stated: “We will allege in court that Microsoft deliberately omitted reference to the Classic plans in its communications and concealed their existence until after subscribers initiated the cancellation process to increase the number of consumers on more expensive Copilot-integrated plans.” She added: “We believe many Microsoft 365 customers would have opted for the Classic plan had they been aware of all the available options.”

The regulator argues that this omission caused consumers financial harm. Many subscribers, believing they had no alternative, allowed their subscriptions to renew automatically at the higher Copilot rate, and those users, the ACCC says, effectively paid more for something they might not have chosen.

Why It Matters

The case is significant because it highlights how software subscription models are evolving with the introduction of AI features. For example, Microsoft’s integration of Copilot, and the resulting price increases, demonstrates how companies are bundling AI capabilities into established services, but the ACCC argues that this bundling must be transparent and optional.

For consumers, the issue is both financial and procedural. A 45 per cent increase represents a notable cost rise for households relying on Microsoft 365. More importantly, the regulator argues that burying the cheaper Classic plan behind the cancellation flow deprived users of informed consent, which is a key principle in consumer law.

For Microsoft, the allegations really go beyond pricing. For example, the case also touches on interface design and user experience. Regulators are increasingly focused on so-called “dark patterns”, i.e., design choices that nudge users into particular decisions. The ACCC says Microsoft’s renewal flow was structured to steer users towards the more expensive plan by hiding the cheaper one.

For competitors, the case could shape how AI features are rolled out across subscription products. Companies like Google, Apple and Adobe are all integrating AI into consumer and productivity tools. If the court rules that Microsoft’s conduct was misleading, others may need to rethink how they communicate AI upgrades and pricing options.

Cass-Gottlieb said the regulator’s goal is broader than this single case: “All businesses need to provide accurate information about their services and prices. Failure to do so risks breaching the Australian Consumer Law.”

What Happens Next?

Australia’s Federal Court will now review the ACCC’s evidence, including Microsoft’s October 2024 blog post and the two key emails sent to subscribers. The regulator is seeking penalties, injunctions, declarations, consumer redress and costs.

If the court finds against Microsoft, penalties could be substantial. For example, under Australian law, the maximum fine for each breach is the greater of AUD 50 million, three times the value of any benefits obtained, or 30 per cent of the company’s adjusted turnover during the breach period. The ACCC has signalled that it will seek a significant penalty, citing the number of affected consumers and the scale of the alleged conduct.
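In practical terms, that “greater of” formula means the ceiling scales with both the gain from the conduct and the size of the company, as the short sketch below shows. The benefit and turnover figures in the example are placeholders, not numbers from the case.

```python
# Maximum penalty per breach under the Australian Consumer Law: the greatest
# of AUD 50m, three times the benefit obtained, or 30 per cent of adjusted
# turnover during the breach period. The inputs below are placeholders only.
def max_penalty_aud(benefit_obtained: float, adjusted_turnover: float) -> float:
    return max(
        50_000_000,                # flat statutory ceiling
        3 * benefit_obtained,      # three times the value of benefits obtained
        0.30 * adjusted_turnover,  # 30% of adjusted turnover in the period
    )

# Hypothetical AUD 100m benefit and AUD 2bn adjusted turnover:
print(max_penalty_aud(100e6, 2e9))  # 600000000.0 -> the turnover limb applies
```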

Microsoft – Reviewing The Claims

Microsoft has said it is reviewing the claims, adding that “consumer trust and transparency are top priorities” and that it intends to work constructively with the ACCC. The company has not yet filed a detailed defence.

Wider Context

The case comes at a time when regulators worldwide are scrutinising how big technology companies integrate AI into their products. Microsoft has made Copilot central to its software strategy, embedding it into Windows, Office and Bing. The integration has been marketed as a major advance, but it has also raised questions about whether AI is being used to justify higher subscription fees.

It’s worth noting here that, earlier in 2025, Microsoft faced separate antitrust scrutiny in Europe, where it agreed to unbundle Teams from Microsoft 365 after competition regulators raised concerns about unfair bundling. The Australian case is different in that it focuses on consumer fairness rather than competition, but both point to a growing willingness among regulators to challenge how Microsoft structures its product offerings.

The proceedings also coincide with a wider policy debate about consumer protection in digital markets. Regulators in the UK, EU and Australia have been warning companies against design choices that obscure cheaper or less data-intensive options. The ACCC’s case against Microsoft is one of the first major tests of these principles in the context of AI subscription pricing.

Challenges and Criticisms

Microsoft’s defence is expected to centre on whether its communications were genuinely misleading. For example, the company may argue that the price rise and integration were communicated transparently and that the Classic plan was a courtesy option, not an advertised tier.

Critics, however, say the case exposes how complex modern subscription models have become. Consumer advocates argue that when essential software like Microsoft 365 becomes tied to expensive AI add-ons, users may have little real choice, particularly if they are steered away from cheaper options through interface design.

The ACCC alleges Microsoft deliberately hid the Classic option to increase uptake of Copilot, describing the concealment as an intentional strategy. It says consumers’ dependence on Microsoft’s software made them more vulnerable to such tactics.

Meanwhile, business users and consumers have voiced frustration online. For example, some told Australian media they were surprised to find higher charges on renewal and were unaware that an alternative existed. Others have reportedly raised concerns that global technology companies may be using AI upgrades as a pretext for universal price increases.

The case is now being closely watched by regulators and consumer organisations worldwide, who see it as an early test of how AI-linked price changes will be governed under consumer law. For Microsoft, the outcome could determine how it promotes future Copilot features, and how transparent it will need to be with millions of subscribers when the next upgrade arrives.

What Does This Mean For Your Business?

If the ACCC’s case succeeds, it could redefine how companies communicate subscription changes and AI integrations worldwide. The issues at stake extend far beyond Microsoft’s customer base in Australia. For example, transparency in pricing, honest representation of product features, and fair presentation of choices are all central to maintaining consumer trust in a digital economy that increasingly runs on subscriptions rather than ownership. The Court’s decision will, therefore, be closely analysed by consumer regulators, legal teams and software firms around the world.

For Microsoft, the financial penalties may be less significant than the reputational and operational consequences. The company’s strategy of embedding Copilot into every tier of its software ecosystem depends on users accepting AI features as a normal, even necessary, part of productivity software. If regulators conclude that the rollout was handled in a way that misled users, Microsoft may need to re-evaluate how it introduces future AI upgrades and how clearly it differentiates between optional and bundled products. Other technology firms will also be watching closely, given that most are following a similar path of building premium AI layers into existing subscriptions.

For UK businesses, the case highlights how global developments in consumer law can have local implications. The Competition and Markets Authority has already warned UK companies about interface design that conceals key information or discourages users from exercising choice. If Microsoft is found to have breached consumer law in Australia, it may prompt British regulators to take a closer look at how AI-driven services are marketed and priced in the UK. It could also encourage businesses that depend on Microsoft 365 to examine their own contracts and renewal processes more carefully, particularly where subscription changes are linked to new technologies or price adjustments.

The broader lesson is that as AI becomes more integrated into software, the boundary between innovation and obligation must remain clear. Consumers need to know when they are paying extra for AI functionality and when they can reasonably decline it. For regulators, the challenge will be to ensure that product evolution does not erode transparency or consumer control. For the tech industry, the message is that trust will be built not only through advanced technology, but through openness about what that technology costs, how it is delivered, and the real choices available to those who use it.

WhatsApp Introduces Passkey-Encrypted Backups

WhatsApp is rolling out passkey-encrypted backups, thereby letting users protect and recover their chat history using their face, fingerprint, or device screen lock instead of remembering a long password or storing a 64-digit recovery key.

A Major Step in WhatsApp’s Encryption Journey

WhatsApp has announced a new feature that allows users to encrypt their chat backups with passkeys rather than relying on passwords or lengthy encryption codes. Passkeys are a form of passwordless authentication that combine something a user has (their phone) with something they are or know (such as biometrics or a screen lock code). According to WhatsApp, this will make end-to-end encrypted backups simpler and safer to use across iOS and Android devices.

Previously

For years, the app’s end-to-end encryption actually only covered live chats and calls. Messages were secure in transit but often less so once stored in cloud backups. Until 2021, backups to iCloud and Google Drive were not encrypted, which meant anyone who gained access to those cloud accounts could potentially read the stored chat history. That year, Meta introduced end-to-end encrypted backups, giving users the option to protect those files using a password or a randomly generated 64-character key. It was a major privacy milestone, but a cumbersome one: if a user lost the password or key, their backup became permanently inaccessible.

No Need to Memorise a Key

WhatsApp’s new passkey approach doesn’t change how backups are encrypted, but it does change how users unlock them. Instead of memorising a key, people can now rely on the same biometric or lock screen verification they already use to access their phone.

Why Passkeys, and Why Now?

In a blog post titled Encrypting Your WhatsApp Chat Backup Just Got Easier, the company explained the rationale behind the move. “Passkeys will allow you to use your fingerprint, face, or screen lock code to encrypt your chat backups instead of having to memorise a password or a cumbersome 64-digit encryption key,” WhatsApp said. “Now, with just a tap or a glance, the same security that protects your personal chats and calls on WhatsApp is applied to your chat backups so they are always safe, accessible and private.”

The move actually reflects a broader trend in cybersecurity and user experience. For example, while passwords remain the default for most online services, they are increasingly seen as both inconvenient and insecure. Passkeys, built on the FIDO and WebAuthn standards, have been adopted by Apple, Google, and Microsoft as part of the industry-wide transition towards passwordless authentication. WhatsApp’s latest feature extends this approach to backup protection, bringing it in line with these major ecosystems.

Usability is also a central motivation. For example, many users either forgot their encrypted backup password or never enabled the feature at all because of fears they might lose the key. With passkeys, the backup process is far more seamless. The device itself becomes the trusted gatekeeper, using local authentication that the user already understands.

This could also help WhatsApp’s reputation among privacy advocates. The service now has over three billion monthly active users worldwide, and any improvement in accessibility could drive wider adoption of its encryption features.

When?

The company said the rollout will take place “over the coming weeks and months”, meaning not all users will see the new option immediately.

How It Works in Practice

Once available, users can enable passkey-encrypted backups through the app’s settings: Settings → Chats → Chat backup → End-to-end encrypted backup. From there, they can choose to secure their backup using a passkey rather than a password or encryption key.

The difference becomes most apparent when restoring chats to a new device. For example, under the old system, the user needed to type their password or locate their encryption key before WhatsApp could decrypt and restore messages. With passkeys, they simply authenticate using biometrics or a screen lock from their old device, which confirms their identity and decrypts the backup automatically.

This means that a small business owner switching to a new phone can now restore years of client messages and attachments simply by scanning their fingerprint, instead of searching for a forgotten password. It is a small change in process but a significant improvement in ease of use and data recovery.
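WhatsApp has not published the cryptographic details of the passkey flow, but the envelope-encryption pattern it resembles, i.e., encrypting the backup with a random data key and then gating that key behind local device authentication, can be sketched as follows. This is an illustration of the general pattern only, not WhatsApp’s implementation, and the wrapping key below stands in for a secret that a real device would hold in secure hardware and release only after a successful biometric or screen-lock check.

```python
# Envelope-encryption sketch (illustrative pattern, NOT WhatsApp's scheme).
# Requires the 'cryptography' package: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_backup(chat_history: bytes, wrapping_key: bytes) -> dict:
    data_key = AESGCM.generate_key(bit_length=256)   # random per-backup key
    n1, n2 = os.urandom(12), os.urandom(12)          # fresh AES-GCM nonces
    blob = AESGCM(data_key).encrypt(n1, chat_history, None)     # encrypt data
    wrapped = AESGCM(wrapping_key).encrypt(n2, data_key, None)  # wrap the key
    return {"blob": blob, "wrapped_key": wrapped, "n1": n1, "n2": n2}

def restore_backup(backup: dict, wrapping_key: bytes) -> bytes:
    # On a real device, wrapping_key would only be released by secure
    # hardware after the passkey (biometric / screen-lock) check passes.
    data_key = AESGCM(wrapping_key).decrypt(backup["n2"], backup["wrapped_key"], None)
    return AESGCM(data_key).decrypt(backup["n1"], backup["blob"], None)

wrapping_key = AESGCM.generate_key(bit_length=256)   # stand-in device secret
backup = encrypt_backup(b"years of client messages", wrapping_key)
assert restore_backup(backup, wrapping_key) == b"years of client messages"
```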

Why This Matters to UK Businesses

In the UK, WhatsApp is used by millions of professionals as an informal business communication tool. From contractors and consultants to property managers and customer service teams, many rely on WhatsApp to share documents, voice notes, and updates. This has often created a compliance and data protection challenge. Backups stored on cloud platforms without encryption could expose client data if an employee’s personal account were hacked.

By making encrypted backups easier to use, therefore, WhatsApp is now closing one of the remaining security gaps. Businesses that use WhatsApp informally can now encourage staff to enable backup encryption without worrying that forgotten passwords will lock them out of their data. For industries handling sensitive information, e.g., healthcare, construction, and legal services, this makes it simpler to protect communications while maintaining accessibility.

WhatsApp’s focus on usability could also help retain users in the face of competition. For example, rivals such as Signal have long made privacy their main selling point, while enterprise platforms like Microsoft Teams and Slack promote compliance features and centralised data management. Making encrypted backups effortless helps WhatsApp defend its position as both a consumer and small-business communication tool.

Context

The introduction of passkeys for backups also appears to align with Meta’s wider strategy to make encryption a default standard across its messaging platforms. In late 2023, Meta completed the rollout of end-to-end encryption for Messenger and Facebook chats, drawing both praise and criticism from privacy campaigners and regulators. WhatsApp’s latest enhancement, therefore, reinforces that commitment to strong encryption, while also signalling that Meta is aware of usability barriers that have historically held users back.

At the same time, this move may raise new questions for regulators, as governments in the UK, EU, and elsewhere continue to debate how encrypted services fit with lawful access and online safety legislation. If backups are locked behind device-specific passkeys that even Meta cannot access, traditional data requests will yield little beyond metadata such as contact timestamps. That strengthens user privacy but complicates investigations where access to message history has previously depended on unencrypted backups in the cloud.

Potential Challenges and Criticisms

While the update marks another step forward in security and privacy, it is not without its caveats. For example, the security of passkey-encrypted backups depends on the strength of the device lock itself. A weak PIN or an easily accessible biometric can undermine the system. If someone can unlock a user’s phone, they may also be able to restore the encrypted backup. Users are therefore advised to maintain strong device security to benefit fully from the new system.

Recovery is another concern. Unlike a password, a biometric cannot be written down or stored safely elsewhere. That means if a user loses their device and has no other registered one to authorise the restore, they may permanently lose access to their encrypted backup. WhatsApp has confirmed that it will not store recovery copies of encryption keys, maintaining its position that “only you” can access your backup. This reinforces privacy but leaves no route for account recovery if the passkey cannot be used.

The staggered rollout also means adoption will be uneven. Not all users will have access immediately, and device compatibility could differ by region. For organisations using WhatsApp across multiple teams or countries, this might temporarily complicate backup policies or support processes.

There are also some technical limits to consider. For example, the new passkey feature does not address certain underlying encryption vulnerabilities identified by researchers earlier this year, such as weaknesses in WhatsApp’s “prekey” handshake mechanism that could theoretically expose some message metadata under specific conditions. Those findings relate to message exchange rather than backups, but they underline that security in complex systems is never static.

Finally, while this change enhances privacy for individuals, it introduces new complications for organisations that must retain communication records for legal or contractual reasons. Encrypted backups that only employees can decrypt may hinder internal auditing or eDiscovery processes unless alternative data management policies are in place.

WhatsApp’s decision to make passkey-encrypted backups available, therefore, reflects both a technological evolution and a strategic balancing act, i.e., strengthening privacy while trying to keep security practical for billions of users and acceptable to regulators. It reinforces Meta’s message that personal data should remain under user control, but it also leaves open questions about recovery, compliance, and how far convenience can coexist with absolute privacy.

What Does This Mean for Your Business?

WhatsApp’s passkey-encrypted backups close a long-standing gap in its privacy model by uniting strong security with genuine ease of use. The change ensures that users can now protect years of chat history without worrying about lost passwords or unmanageable encryption keys. It also signals Meta’s intent to keep WhatsApp at the forefront of privacy technology while aligning with the global shift toward passwordless authentication across major platforms.

For UK businesses, the update is both an advantage and a challenge. For example, it strengthens protection for sensitive conversations, reducing the risk of data exposure from insecure cloud backups. However, it also places more control in the hands of individual employees, limiting an organisation’s ability to monitor or recover business communications when needed. Firms that use WhatsApp informally for client contact or internal coordination will need to update their data management policies to account for encrypted, user-controlled backups.

Regulators and policymakers are likely to see this as another reminder that end-to-end encryption is now the default expectation rather than a specialist option. While it may complicate lawful access to stored message data, it reflects the direction most major tech companies are taking to meet user privacy demands. For everyday users, the result should be a simpler, more trustworthy backup system that makes security part of the normal experience rather than an optional extra.

The broader lesson here is that encryption can only achieve mass adoption when it becomes invisible to the user. WhatsApp’s move may bring that goal closer, reshaping how individuals, businesses, and governments think about control over digital information in a world where privacy and usability must now coexist.

Company Check : OpenAI Completes Shift Into For-Profit Company

OpenAI has now finished converting itself into a for-profit public benefit corporation, while keeping a mission-led foundation on top, in what may be the most important restructuring so far in the commercial AI race.

Started As Non-Profit

OpenAI was originally founded (back in 2015) as a non-profit research lab with a stated mission to ensure that artificial general intelligence (AGI), i.e., AI that is smarter than humans across a wide range of tasks, benefits all of humanity. The company says that mission still applies; what has changed is the legal and financial structure used to pursue it.

To give some background, from 2019 onwards, OpenAI began operating a hybrid model, where a for-profit subsidiary sat under the original non-profit parent. That 2019 model capped investor returns and was designed to let OpenAI raise money for large-scale computing without abandoning its public interest mission. The company has now gone further and completed a full recapitalisation.

The new for-profit entity is called OpenAI Group PBC, and it sits under a renamed parent called the OpenAI Foundation, which is still formally a non-profit. A public benefit corporation in US law is a for-profit company that has an explicit social purpose written into its charter and is legally required to consider wider stakeholders, not only shareholders.

Control Through The Foundation

OpenAI says this structure gives it the best of both worlds: the Foundation is meant to act as a mission guardian and still controls the board of the for-profit, while the for-profit can raise capital, issue equity in the normal way, and operate much more like a conventional tech business. The OpenAI Foundation appoints all members of the OpenAI Group board and can remove directors at any time, which is intended to stop the commercial arm drifting away from the stated mission.

How The Ownership Now Looks

The OpenAI Foundation now owns about 26 per cent of OpenAI Group, a stake the company values at around 130 billion dollars, based on a 500 billion dollar valuation for OpenAI. The Foundation has also been given a warrant that could increase its ownership if OpenAI’s valuation climbs dramatically over the next 15 years, which OpenAI says is designed to ensure that the Foundation remains the single largest long-term beneficiary of OpenAI’s success.

Microsoft’s Input

Microsoft, which first partnered with OpenAI in 2019 and has provided tens of billions of dollars’ worth of cash and cloud infrastructure, will now hold roughly 27 per cent of OpenAI Group. That stake is understood to be worth in the region of 135 billion dollars. Microsoft’s total investment to date is believed to be about 13.8 billion dollars and the new deal effectively locks in a near tenfold return on paper.

Employees Have A Stake

The remaining 47 per cent or so will be held by current and former employees and other investors, including large external backers such as SoftBank. OpenAI employees themselves will collectively hold a significant equity position. The company has said publicly that Sam Altman, its co-founder and chief executive, will not personally take an equity stake in the newly restructured business.

A Move Away From The “Capped Profit” Model

Under this new arrangement, all shareholders in OpenAI Group now hold ordinary stock that rises in value if the company grows. That is an important break from the older “capped profit” model, which had limited investor upside to 100 times their investment, sometimes less. Investors had warned for months that those limits made it harder for OpenAI to raise money at the scale needed to compete with rivals such as Google, Meta, and Anthropic.

Why OpenAI Says The Change Was Necessary

OpenAI’s leadership has argued that the economics of cutting-edge AI made the previous structure unsustainable. For example, training and running increasingly capable AI models depends on enormous quantities of specialised chips, electricity, cooling, data centre space, and engineering talent.

In a livestream outlining the change, Sam Altman said OpenAI had already committed to roughly 1.4 trillion dollars of infrastructure spending, including plans for about 30 gigawatts of dedicated computing capacity, and described that as part of a “gigantic infrastructure buildout” needed to support its research and products.

Altman also said the new for-profit public benefit corporation would “be able to attract the resources we need” to achieve those goals. He framed the move not as a retreat from the original mission but as a way to make it financially viable at global scale.

The Scale Of OpenAI’s Expansion

The restructuring comes as OpenAI expands well beyond its original chatbot. The company is now developing the AI-enabled browser ChatGPT Atlas and a video generation tool called Sora. It is also turning ChatGPT into a full platform where third-party apps can run inside the chatbot.

OpenAI says ChatGPT now has more than 800 million weekly active users, up from 100 million in early 2023, and processes billions of messages a day. At its DevDay event in October 2025, the company said this user base gives developers access to “hundreds of millions” of potential customers inside ChatGPT itself.

Internally, OpenAI sees this scale as justification for moving towards a model closer to a cloud provider than a research lab. Its long-term plans include multi-hundred-billion-dollar data centre projects and major chip supply deals.

Microsoft’s Role In The New Structure

The restructuring also resets the relationship between OpenAI and Microsoft, which had become complicated and politically sensitive. For example, under the previous agreement, Microsoft had broad rights to license and deploy OpenAI’s technologies inside its own products and Azure cloud, in return for providing OpenAI with the compute capacity it needed.

At the same time, Microsoft’s access to OpenAI’s research had conditions tied to artificial general intelligence, or AGI, which created uncertainty about what would happen if OpenAI declared it had reached that milestone.

Under the updated terms, therefore, Microsoft keeps commercial rights to OpenAI’s models and products through 2032, except for consumer hardware. The two companies will also set up an independent expert panel to verify any claim that AGI has been reached, rather than leaving it to OpenAI’s own board.

Microsoft also now gains the freedom to develop AGI-level systems on its own or with other partners. OpenAI, meanwhile, can now work with other cloud and hardware providers, although a reported 250 billion dollar Azure commitment means Microsoft remains central to its infrastructure.

Businesses

For customers, especially UK and global businesses using ChatGPT and related tools, the restructuring signals that OpenAI is no longer just a research organisation. Instead, it is presenting itself as a stable, long-term commercial partner with clear funding and governance.

OpenAI’s chief financial officer has been reported as saying that the Microsoft deal improves its ability to raise capital efficiently, which should be an important reassurance for enterprise buyers who depend on OpenAI’s ongoing investment in model upgrades and infrastructure.

The company has already said it is on track for around 13 billion dollars in revenue this year and is heavily promoting GPT-powered copilots and ChatGPT Enterprise as secure, controllable assistants for regulated industries.

The Power Of Platform Reach

The scale of ChatGPT’s user base is becoming a real strategic asset. If developers can publish applications inside ChatGPT that reach those users directly, OpenAI is effectively creating its own software ecosystem. Sam Altman told developers that “your apps can reach hundreds of millions of chat users” through the interface, a clear signal of where the business is heading.

OpenAI has also promised that its Foundation will continue to fund safety and ethics work. For example, it has committed 25 billion dollars to early focus areas including technical methods to minimise AI harms and research on health and disease. The company says this proves that “mission and commercial success advance together.”

Concerns Over Oversight

Critics, however, have questioned whether a company of OpenAI’s scale can truly balance those goals. For example, the consumer advocacy group Public Citizen argues that the new model effectively turns the non-profit into “a corporate foundation” created to advance the interests of OpenAI’s for-profit arm.

Legal scholars have also raised some doubts about how enforceable a public benefit corporation’s duties really are. For example, Luís Calderón Gómez of Cardozo School of Law has been quoted as saying the law gives companies wide leeway on when to prioritise profit or purpose, calling it “a bit of an empty, unenforceable promise.”

Regulatory Approval And Scrutiny

Attorneys General in California and Delaware have examined the recapitalisation closely because OpenAI’s non-profit assets were “irrevocably dedicated to its charitable purpose.” Both regulators have now approved the change, but only after assurances that the Foundation would retain meaningful oversight.

Some commentators have highlighted that OpenAI could not simply abandon its non-profit obligations without paying fair market value for its assets, an almost impossible task given the company’s 500 billion dollar valuation.

Generally, critics worry that this hybrid model may leave accountability in corporate hands, and they fear that AI safety, transparency, and ethics will continue to be handled by internal panels and committees rather than by independent public regulators.

Implications For The AI Market

The restructuring has implications far beyond OpenAI itself. For example, competitors like Anthropic, Google, Meta, and xAI are now competing on infrastructure scale, compute access, and data availability as much as model performance. OpenAI’s plans for vast long-term chip and energy supply agreements underline how industrialised AI development has become.

Also, Microsoft’s market value briefly passed four trillion dollars after the new deal was announced, reflecting investor confidence in AI’s commercial potential. The two companies are now bound through at least 2032 on model access and cloud contracts, yet both are free to pursue AGI-level work independently.

For governments, the question is who will verify claims that AI systems are approaching AGI. For business users, the focus will be on the stability and transparency of the providers they now depend on. For regulators, the issue is whether a structure that combines charitable oversight with profit-driven control can genuinely deliver on OpenAI’s original promise to ensure that AI benefits everyone.

What Does This Mean For Your Business?

The completed restructuring makes OpenAI one of the most commercially powerful companies in the world while still claiming a public mission at its core. It marks a decisive point where the organisation founded to serve the public good has become an essential pillar of the private AI economy. The OpenAI Foundation may retain formal control, but the market incentives now surrounding the Group mean the company’s behaviour will inevitably be judged by how well it balances its ethical commitments with investor expectations.

For regulators and policymakers, the challenge will be ensuring that OpenAI’s growing influence does not outpace public oversight. As its models shape productivity, education, and media, the concentration of technical capability and data in a single firm will raise questions about accountability and competition. The presence of Microsoft, with its 27 per cent stake, further embeds this partnership at the centre of global AI infrastructure, giving it unprecedented control over how AI reaches both consumers and enterprises.

For UK businesses, the move is likely to have practical consequences. For example, it may bring greater stability, clearer licensing, and a more predictable product roadmap for the ChatGPT tools already being deployed across finance, retail, marketing, and professional services. It also suggests that OpenAI will become a more commercially driven supplier, with pricing and support models that align with corporate software markets rather than experimental research. In this sense, the restructuring could make AI adoption easier for British firms, but also tighten dependence on a single transatlantic provider.

For investors, the shift opens the door to an eventual public offering that could rival the largest listings in history. For OpenAI’s competitors, it raises the bar for capital and infrastructure required to stay relevant. And for everyday users, it may signal a future where AI tools evolve faster but with fewer avenues for independent scrutiny.

OpenAI’s new structure may ultimately prove to be a balancing act between purpose and profit. Whether it succeeds will depend less on how well it is worded in corporate charters and more on how the company behaves when commercial pressures collide with its original promise to ensure that advanced AI benefits all of humanity.

Security Stop-Press: New AI Security Researcher ‘Aardvark’

OpenAI has introduced Aardvark, an autonomous security agent powered by GPT-5 that scans codebases to detect and fix software vulnerabilities before attackers can exploit them.

Described as “an agentic security researcher,” Aardvark continuously analyses repositories, monitors commits, and tests code in sandboxed environments to validate real-world exploitability. It then proposes human-reviewable patches using OpenAI’s Codex system.

OpenAI said Aardvark has already uncovered meaningful flaws in its own software and external partner projects, identifying 92 per cent of known vulnerabilities in benchmark tests and ten new issues worthy of CVE identifiers.

The system is currently in private beta, with OpenAI inviting select organisations to apply for early access through its website to help refine accuracy and reporting workflows. Wider availability is expected once testing concludes, with OpenAI also planning free scans for selected non-commercial open-source projects.

Businesses interested in trying Aardvark can apply to join the beta via OpenAI’s official site and begin integrating it with their GitHub environments to test how autonomous code analysis could help their own security posture.

Sustainability-In-Tech : Europe’s First Underground Mine Data Centre

Europe’s first full-scale data centre built inside a working mine in northern Italy is being hailed as a landmark in sustainable digital infrastructure, combining high-performance computing with energy efficiency and circular use of underground space.

Who’s Behind the Project and Where Is It?

The project, known as Trentino DataMine, is being developed in the San Romedio dolomite mine in Val di Non, deep in the Dolomites of northern Italy. The mine is owned by Tassullo, a century-old company that extracts dolomite for use in construction materials. Around 100 metres below ground, in a vast network of stable, dry rock chambers, the site has long been used to store apples, cheese, and wine, thanks to its naturally cool and constant temperature of around 12°C.

Trentino DataMine is led by the University of Trento through a public-private partnership involving several Italian firms, including Dedagroup, GPI, Covi Costruzioni, and ISA. Together they have formed a limited company to design, build, and operate the facility. The €50.2 million project is partly financed by Italy’s National Recovery and Resilience Plan (PNRR), which channels EU Next Generation funds to sustainable and innovative developments. Around €18.4 million of the funding comes from public sources, with the remainder provided by private IT and construction companies.

Intacture – 5 Megawatts

The new facility, called Intacture, will provide around 5 megawatts of computing capacity. However, its focus is not just storage or cloud hosting, but also advanced computing for research, artificial intelligence (AI), cybersecurity, and healthcare data. The University of Trento describes the project as a “strategic centre for innovation, sustainability and advanced technology” designed to support high-performance computing, edge computing, and quantum cryptography research.

Why Build a Data Centre Underground?

The decision to locate a data centre inside a mine has both technical and environmental logic behind it, i.e., cooling, energy use, security, and land availability are all central to the reasoning.

Traditional data centres expend large amounts of electricity on cooling systems to prevent servers from overheating. In Trentino, the natural rock temperature, steady at about 12°C, provides passive cooling without the need for large chillers or water-based cooling towers. That dramatically cuts electricity consumption and eliminates the water use associated with many conventional data centres. Dedagroup’s Chief Technology Officer, Roberto Loro, said the site offered “a combination of physical security with low environmental and energy impact.”

Security is another key driver. For example, the mine’s dolomite rock is naturally dry and geologically stable, offering protection from earthquakes and floods. Also, being encased in solid rock shields the site from electromagnetic interference and physical threats such as explosions or extreme weather. Giuliano Claudio Peritore, President of the Association of Italian Internet Providers, has described the project as “absolutely fascinating”, noting that “we think of a mine as being a humid place, therefore not suited to a data centre. Instead, in Trentino we have something special because the dolomite rock is absolutely dry, in a stable mountain.”

The underground setting also saves land. For example, rather than paving over new industrial plots or farmland, the data centre reuses existing voids created by mining operations. In doing so, it preserves surface landscapes while putting unused underground volumes to productive use, which is a clear advantage in regions where land use and visual impact are increasingly sensitive issues.

How the Mine Is Being Transformed

The mine is still active, and the excavation work for the data centre was carefully integrated into the ongoing extraction of dolomite. Around 63,000 tonnes of rock (about the volume of 20 Olympic swimming pools) were removed to create the chambers for the facility. The extracted dolomite is being reused by Tassullo to manufacture eco-friendly building materials, creating a circular loop between extraction, construction, and digital infrastructure.
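As a quick sanity check on that comparison, the short calculation below converts the quoted tonnage into volume. The bulk density used for loose excavated dolomite is an assumption, since the article gives only the tonnage; solid dolomite is much denser, but broken rock occupies far more space.

```python
# Rough conversion of the 63,000 tonnes of excavated rock into volume.
# The bulk density of loose, broken dolomite is an assumed value (solid
# dolomite is ~2.8 t/m3, but excavated material is far less compact).

ROCK_TONNES = 63_000
ASSUMED_BULK_DENSITY_T_PER_M3 = 1.3   # assumption for loose excavated rock
OLYMPIC_POOL_M3 = 2_500               # 50 m x 25 m x 2 m

volume_m3 = ROCK_TONNES / ASSUMED_BULK_DENSITY_T_PER_M3
pools = volume_m3 / OLYMPIC_POOL_M3

print(f"~{volume_m3:,.0f} m3 of rock, i.e. roughly {pools:.0f} Olympic pools")
# ~48,000 m3, i.e. roughly 19 pools, in line with the article's "about 20".
```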

80% Underground

Roughly 80 per cent of the data centre is underground, with the rest of the space used for offices, reception, and security areas near the surface. Around 60 workers have already installed 50 kilometres of fibre and electrical cabling, together with 3 kilometres of ventilation ducts and several power generators. The design relies on natural cooling from the rock, with mechanical ventilation only needed for air circulation.

Coexistence of Industries

What makes the site unique is the coexistence of digital and agricultural industries within the same underground system. For example, the mine has long stored local apples, wines, and Trentingrana cheese. The servers’ waste heat can now be channelled to warm other sections of the mine, while nearby storage operations requiring refrigeration can benefit from the cooling infrastructure. Dedagroup’s Loro highlighted the potential for collaboration, saying: “Those who need heat can use the heat we produce.” It creates a self-balancing ecosystem in which energy flows are shared between food logistics and digital computing.
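To give a sense of how much heat could be on offer, essentially all of the electricity drawn by servers leaves the racks as heat. The sketch below estimates the recoverable thermal output; the utilisation and recovery fractions are assumptions for illustration, not project figures.

```python
# Illustrative estimate of the server waste heat available for reuse.
# Utilisation and recovery fractions are assumptions, not project figures.

IT_CAPACITY_MW = 5.0       # facility capacity, from the article
AVG_UTILISATION = 0.6      # assumed average load as a fraction of capacity
RECOVERY_FRACTION = 0.7    # assumed share of heat that can be usefully captured

# Essentially all electricity consumed by IT equipment is converted to heat.
heat_generated_mw = IT_CAPACITY_MW * AVG_UTILISATION
heat_recoverable_mw = heat_generated_mw * RECOVERY_FRACTION

print(f"Heat generated:       {heat_generated_mw:.1f} MW thermal")
print(f"Potentially reusable: {heat_recoverable_mw:.1f} MW thermal")
# ~2.1 MW of low-grade heat that nearby mine operations could, in principle, absorb.
```

Even at these cautious assumptions, that is a steady couple of megawatts of low-grade heat, which is exactly the kind of output that adjacent warehousing and curing operations can absorb.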

Sustainability and Regional Strategy

The Trentino DataMine embodies several sustainability principles, i.e., reducing energy and water use, recycling material outputs, and avoiding new land consumption. It also fits into the wider strategy of transforming the region into a digital innovation hub under the EU’s green and digital transition agenda.

By locating advanced computing capacity in northern Italy, the project also supports Europe’s ambition for greater digital sovereignty. Sensitive data in fields such as healthcare, AI, and finance can be processed locally, under European data governance frameworks, instead of being sent to large foreign-owned cloud providers. Italy’s Minister for Enterprises, Adolfo Urso, called the mine “a new hub for public-private collaboration, research and regional development”, adding that it shows how unused underground spaces can drive both innovation and sustainability.

Economically, the project is expected to pay for itself over 15 years and generate skilled employment across data management, engineering, and scientific research. The operators also see it as a blueprint for other European regions where disused or stable underground sites, such as salt mines or tunnels, could be converted into low-impact data infrastructure.

Potential Challenges and Practical Considerations

Despite the clear environmental advantages, underground data centres bring a unique set of challenges. For example, connectivity must be maintained through kilometres of tunnel, and redundancy has to be built in to guarantee service uptime. Engineers have installed multiple fibre paths to ensure data continuity, but protecting those cables from vibration and mining equipment remains an ongoing task.

Also, maintenance logistics are more complex than in standard above-ground facilities. Technicians must move equipment through controlled tunnels, and ventilation must ensure safe air quality at all times. Emergency procedures, power backups, and fire safety systems must be adapted for enclosed spaces.

There are also environmental balance issues. While heat recovery is a promising concept, the actual usability of server heat depends on matching the right temperatures to neighbouring processes. Any imbalance could lead to excess heat that still needs to be vented, which would limit the energy savings. Regulators will also monitor that operations within the mine, especially the storage of food products, are not affected by the digital facility’s heat or air circulation.

Tailored Rules Needed?

For policymakers, Trentino DataMine raises new regulatory questions. Data centres built inside industrial extraction sites may need tailored rules covering safety, environmental protection, and labour standards. Italy’s authorities have already classified the facility as a “green” project under the PNRR, but its mixed use means future projects of this type will need careful legal frameworks.

Stakeholders

For the Trentino region, the DataMine offers a new model of economic diversification. For example, it links high-tech sectors with traditional industries like agriculture and construction, keeping the value of public investment within the local economy. The University of Trento sees the facility as a nucleus for research in AI, edge computing, and cybersecurity, potentially attracting both private and public partners from across Europe.

For the data-centre industry, it offers a live test of how underground environments can cut cooling energy use and improve physical resilience. With European data-centre electricity demand expected to rise by 28 per cent by 2030 according to the European Commission, efficiency measures like this are becoming increasingly important.

For local industries, proximity to computing power could bring new advantages. Agricultural firms that already store produce in the mine could benefit from AI-driven monitoring or predictive logistics systems hosted just a few metres away. For construction firms, the circular reuse of dolomite reinforces Trentino’s positioning as a region of sustainable materials innovation.

Other Unusual Data-Centre Locations Around the World

The Trentino site is the first in Europe built inside a working mine, but it joins a small group of projects exploring alternative environments for digital infrastructure. Examples include:

– In Norway, the Lefdal Mine Datacenter occupies a former mineral mine on the country’s west coast. It uses hydropower and draws cold water from a nearby fjord for cooling, achieving extremely low energy consumption. The operators claim near-zero freshwater use and a minimal environmental footprint.

– Microsoft has tested underwater data centres in its Project Natick experiment off the coast of Scotland’s Orkney Islands. The company submerged 864 servers in a sealed pressure vessel on the seabed and found that failure rates were one-eighth of comparable land-based systems, largely due to the stable, cold environment.

– Other developers are exploring floating or underwater data pods in coastal cities, though regulatory and maintenance challenges remain significant. In the United States, proposals to deploy subsea AI processing capsules in San Francisco Bay have drawn mixed reactions over environmental and safety concerns.

Across these experiments, the goal is broadly the same, i.e., to reduce the environmental footprint of data processing, improve efficiency, and integrate computing into existing or underused spaces. The Trentino DataMine, therefore, adds a new European example to that list, turning an active dolomite mine into a shared underground ecosystem where technology, agriculture, and sustainability coexist.

What Does This Mean For Your Organisation?

What emerges from Trentino is not just an unusual engineering choice but a possible template for how digital capacity could be added in places that do not want more noise, heat, land pressure, or surface build-out. The operators are arguing that a mine with a constant 12°C climate can offer something that a standard warehouse on an industrial estate cannot, i.e., passive cooling, protection from physical and electromagnetic threats, and almost no demand for new above-ground land. In sustainability terms that’s important because data processing is on track to become one of Europe’s most resource-intensive activities, particularly with the growing computational load of AI and high-performance analytics.

At the same time, this model is clearly not plug-and-play. Keeping a live data centre running inside an active mine brings engineering risks that conventional builds do not face, and regulators will have to decide how to classify mixed-use underground sites that are at once storage depots for food, sources of construction materials, and high-security computing hubs. The fact that Trentino DataMine is being backed through national recovery funds and positioned as “green” is significant, but it also raises expectations. If this is going to be treated as a blueprint then it will have to prove that waste heat recovery, energy reuse, and non-destructive land use work in practice and not just on paper.

For UK businesses the story is relevant on several levels. For example, energy cost and regulatory scrutiny around data use are rising in the UK, while AI workloads and data retention obligations keep expanding. British organisations that depend on data-heavy services, including finance, healthcare, manufacturing and logistics, are already looking for hosting models that are both affordable and politically acceptable. A site like Trentino shows one possible direction for future colocation and high-performance compute: hardened, local, energy-efficient, physically sovereign, and directly tied into regional industries rather than sitting in anonymous hyperscale campuses. That matters for any UK company that is under pressure to evidence sustainability credentials to clients, boards, and regulators while still processing large volumes of data. It also matters for UK local authorities and regional development bodies, which face the same tension Trentino is trying to resolve, i.e., how to attract digital infrastructure and skilled digital jobs without giving up agricultural land, upsetting communities, or straining local water and power networks.

For national and regional governments across Europe the project draws a clear link between digital sovereignty and physical geography. Instead of assuming that high-performance computing must live in vast surface facilities owned by global cloud providers, Trentino suggests that local partnerships between universities, utilities, industrial operators and municipalities can create high-spec capacity underground, in territory that is already zoned for extraction or storage. That in turn keeps data, talent and long-term investment inside the region. It is also politically useful. A data centre marketed as low-impact and circular is an easier sell to voters than another high-consumption facility drawing megawatts from the grid and dumping hot water into rivers.

However, the final question is how far this idea can actually travel. For example, Norway’s fjord-cooled mine, Microsoft’s sealed seabed capsules and the San Romedio dolomite galleries are all attempts to reframe what a data centre physically is. Each approach chooses an environment where cooling and physical resilience are essentially provided by nature. If those models scale, then the debate around data centres in Europe may start moving away from “where can we find more land and power” and towards “which underused environments can safely host secure compute with the lowest ongoing footprint.” The real test for Trentino DataMine now is whether it stays a one-off regional showcase, or whether it becomes evidence that digital infrastructure, food logistics, materials production and climate responsibility can operate in the same physical space without compromising one another.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
