Sustainability-In-Tech : New AI Factory Powered By Renewable Energy in Arctic

Norwegian investment giant Aker has revealed plans to construct a large-scale AI facility inside the Arctic Circle, capitalising on green energy and a growing Nordic tech race.

Major Investment With Strategic Ambitions

Aker ASA, the Oslo-based industrial investment firm controlled by billionaire Kjell Inge Røkke, has announced plans to establish a major artificial intelligence (AI) “factory” in Narvik, a coastal city in northern Norway. Located 220km within the Arctic Circle, the site is already prepped for construction and has access to 230 megawatts (MW) of clean energy.
Described by Aker as a “catalyst for industrial development, job creation, and export revenues,” the project positions itself at the heart of a growing international race to create energy-efficient data infrastructure for AI workloads. CEO Øyvind Eriksen said the new facility would help Norway seize a key opportunity in an evolving digital economy: “AI and data centres are becoming foundational to global business, and northern Norway is uniquely positioned to benefit.”

Start Work Later This Year

While the company has not yet disclosed a total construction cost or timeline for the facility’s completion, the site in Narvik is said to be “construction ready”, with early groundwork expected to begin later this year, pending partnership agreements. Negotiations with potential technology providers and anchor customers are currently underway.

What Is an “AI Factory” and Why the Arctic?

The term “AI factory” refers to a data centre designed to support high-performance computing (HPC), particularly the large-scale training and deployment of AI models. These facilities require huge amounts of electricity to power and cool thousands of graphics processing units (GPUs), the hardware typically used for advanced AI tasks.

In recent years, tech companies and infrastructure investors have turned to northern regions where natural cooling and cheap renewable electricity offer environmental and economic advantages. Narvik, with its access to stable, low-cost hydropower and cool year-round temperatures, provides precisely the conditions needed for sustainable AI operations.
For example, data centres in warmer climates often need complex and energy-intensive cooling systems. In Narvik, ambient air can be used for much of the cooling, significantly reducing operational emissions. Aker’s plan aligns with a broader trend across the Nordics, where countries are leveraging their green energy grids and favourable climates to attract the next generation of digital infrastructure.
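
The cooling advantage can be illustrated with some back-of-envelope arithmetic. The PUE (power usage effectiveness) figures below are assumptions for the sake of illustration, not published numbers for the Narvik site, and the 230 MW figure is the site's stated clean-energy access, treated here as IT load:

```python
# Rough sketch of why free cooling matters, using assumed PUE values:
# ~1.1 for a free-cooled Nordic site vs ~1.5 for chiller-based cooling.
it_load_mw = 230            # site's stated clean-energy access
pue_free_cooled = 1.1       # assumption: mostly ambient-air cooling
pue_conventional = 1.5      # assumption: conventional chiller cooling

total_free = it_load_mw * pue_free_cooled    # total draw, free-cooled
total_conv = it_load_mw * pue_conventional   # total draw, conventional
saving_mw = total_conv - total_free          # overhead avoided

print(f"Cooling overhead saved: {saving_mw:.0f} MW")  # ~92 MW
```

Under these assumed figures, ambient-air cooling would avoid roughly 92 MW of continuous overhead, which is why operators keep gravitating north.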

Aker’s Portfolio and Strategic Focus

Founded in 1841, Aker ASA is one of Norway’s largest industrial investment firms. The company has long-standing interests in sectors including energy, marine biotechnology, oil and gas, and software. Its current portfolio includes Cognite, a software company that delivers industrial AI and data solutions, and Seetee, a digital assets firm that holds Bitcoin and invests in blockchain infrastructure. Both are majority-owned and operated through Aker’s tech division.

In its Q2 2025 earnings update, Aker reported a 7.4 per cent rise in net asset value, reaching NOK 66.5 billion (£4.9 billion). The company also confirmed it was consolidating its data centre activities under direct ownership, a signal that the Narvik development will form a core part of its long-term infrastructure play.

The move comes as part of a wider shift in Aker’s strategy, with CEO Øyvind Eriksen stating that AI represents “a new value chain,” and that Norway’s combination of political stability, clean energy and industrial expertise makes it an attractive location for such ventures.

Part of a Larger Nordic Trend

The Nordics (Norway, Sweden, Denmark, Finland, and Iceland) have emerged as one of the world’s fastest-growing regions for AI data infrastructure, drawing investment from tech giants and local firms alike. Last year, Google pledged €1 billion (£850 million) to expand its Hamina data centre campus in southern Finland, its seventh such expansion. Microsoft followed suit with a $3.2 billion (£2.5 billion) commitment to boost its AI and cloud capacity across Sweden.

Amsterdam-based Nebius, a cloud firm backed by Yandex co-founder Arkady Volozh, announced in October that it would triple GPU capacity at its Mäntsälä facility in Finland. The site is now being scaled to run 60,000 GPUs dedicated to AI workloads, making it one of Europe’s most powerful AI installations.

Also, as a sign of increasing local innovation, Finnish startup Silo AI was acquired by chipmaker AMD for $665 million (£515 million) last year, underlining growing investor confidence in the region’s AI ecosystem.

Narvik’s Unique Position

Narvik is no stranger to strategic importance. Historically a transport hub for iron ore, the city now sits at the centre of what the Norwegian government calls “Green North”, a zone being positioned for energy-intensive industries powered entirely by renewable sources.

The site earmarked by Aker lies close to existing transmission infrastructure and has direct access to locally generated hydropower. According to Statnett, Norway’s national grid operator, the northern region benefits from surplus electricity and lower wholesale energy prices compared to southern parts of the country.

This abundance of clean energy has not gone unnoticed. Eriksen described the Arctic setting as “ideal for long-term, sustainable digital infrastructure”, highlighting the region’s potential to export data processing as a service, similar to how Norway exports energy and aluminium today. For example, the Narvik facility could process AI training workloads on behalf of global clients, using only renewable energy and naturally cooled systems, giving it a unique carbon advantage compared to data centres in North America or Asia.

Economic and Industrial Impacts

Aker says the AI factory will generate new local jobs in both construction and operations, while also stimulating the broader northern economy. Although specific employment numbers have not yet been released, regional leaders have welcomed the project as a sign of renewed industrial confidence.

Local authorities in Narvik have also indicated that they are keen to develop a technology cluster around the facility, offering incentives to secondary businesses such as equipment suppliers, repair services, and housing developments.

For Aker, the facility may strengthen its position in a growing sector while complementing its existing investments in digital infrastructure. By owning both the compute (via the AI factory) and the software layer (via Cognite), the firm may be able to offer vertically integrated industrial AI services to its portfolio companies and beyond.

UK and European businesses could benefit as well. For example, with growing pressure to decarbonise digital operations, firms may soon look to outsource high-energy AI processing to low-carbon providers, particularly those in stable jurisdictions like Norway.

Challenges and Concerns

However, the project is not without its critics. For example, some environmental groups have raised concerns about the true impact of AI-related energy use, arguing that even renewable-powered data centres could crowd out other local energy needs or require future grid upgrades.

There are also broader geopolitical and regulatory questions. The AI arms race has triggered export restrictions on high-end GPUs and computing technology, particularly between the US and China. For Norway, which remains outside the European Union but closely aligned through the EEA agreement, balancing access to global supply chains with national interests could become increasingly complex.

Also, while the Narvik site boasts favourable conditions today, questions remain around long-term cooling efficiency, particularly as GPU densities increase and water-based cooling becomes more common. Some analysts have cautioned that being early to market brings both opportunity and risk.

That said, Aker insists that its approach is grounded in long-term ownership and sustainability. In a statement accompanying the announcement, Eriksen said: “Our industrial DNA means we take a patient, value-creating view. This isn’t about short-term gains—it’s about building infrastructure that serves future generations of technology.”

More detailed timelines, costs, and partnerships are expected to be disclosed later this year.

What Does This Mean For Your Organisation?

If Aker succeeds in building a commercially viable AI facility powered by Arctic hydropower, it could set a new benchmark for how digital infrastructure is developed and operated in a low-carbon economy. While the company has yet to reveal the full technical and financial details, the decision to base the facility in Narvik reflects a deliberate strategy to align technological ambition with environmental responsibility. This positions Aker as not just a backer of industrial innovation, but a potential driver of regional transformation in northern Norway.

For Norway itself, the project signals an opportunity to diversify beyond oil and gas while still playing to its strengths in energy, engineering, and export-led industrial development. The Narvik factory is being framed as part of a new value chain, one where data, like oil before it, becomes a national resource to be harnessed and exported. That framing carries economic and political weight, especially as countries seek to balance growth with climate goals.

From a business perspective, the implications stretch beyond Scandinavia. For example, UK companies under growing pressure to meet sustainability targets could find that shifting AI workloads to greener, offshore compute centres is an attractive alternative to expanding domestic infrastructure. With corporate ESG commitments under scrutiny and AI workloads expected to surge, outsourcing to renewables-based facilities may become part of the commercial risk-reduction strategy.

Even so, the success of this model depends on the reliability and scalability of the energy supply, on keeping operational costs competitive, and on navigating geopolitical and supply chain uncertainty. As governments consider how to regulate AI, data sovereignty and infrastructure ownership will remain sensitive issues. In Norway and beyond, Aker’s Arctic AI factory may, therefore, serve as both a proving ground and a pressure test for the next chapter of sustainable industrial development.

Tech Tip – Use WhatsApp View‑Once Voice Notes for Private Messaging

Need to share sensitive information without leaving a record? WhatsApp now lets you send voice notes that automatically disappear after being listened to only once.

How to:

– Open an individual or group chat in WhatsApp.
– Tap and hold the microphone icon, swipe up to lock the recording.
– Tap the “1” View‑Once icon; it turns green once view-once is enabled.
– Record your message and tap send – it disappears after first playback.

What it’s for:

Ideal for sharing things like short instructions, passwords, or reminders—without leaving a lasting voice note in your chat history.

Pro‑Tip: Voice messages sent this way expire after 14 days if not opened and cannot be forwarded, saved or starred. Ensure the recipient has read receipts enabled so you can see when they’ve listened.

Featured Article : ChatGPT Turned Into a Fully-Featured AI Agent

ChatGPT can now act on your behalf, using its own virtual computer to complete complex tasks, browse the web, run code, and interact with online tools, all without step-by-step prompting.

ChatGPT As An ‘AI Agent’

OpenAI has formally launched what it calls the ChatGPT agent, transforming its well-known conversational model into a proactive digital assistant capable of completing real tasks, independently choosing tools, and reasoning through multi-step workflows.

This new functionality, now available to paying subscribers, marks a significant turning point. For example, rather than simply responding to prompts, ChatGPT can now act as a true AI agent, performing tasks such as planning a meal, generating a financial forecast, writing and formatting a presentation, or summarising your inbox. Crucially, it can also interact with websites, manipulate files, and run code using its own virtual machine.

“We’ve brought together the strengths of our Operator and deep research systems to create a unified agentic model that works for you—using its own computer,” OpenAI explained in a blog post on 17 July. “ChatGPT can now handle complex tasks from start to finish.”

What the ChatGPT Agent Can Actually Do

OpenAI says the agentic version of ChatGPT can choose the best tools to solve a problem and perform multi-step operations without being micromanaged by the user.

For example:

– Users can ask ChatGPT to analyse their calendar and highlight upcoming client meetings, incorporating relevant news about those companies.

– It can plan a dinner for four by navigating recipe websites, ordering ingredients, and sending the shopping list via email.

– In professional settings, it may be used to analyse competitors, generate editable slide decks, or reformat financial spreadsheets with up-to-date data.

Its Own Toolkit

Technically, the agent achieves this by drawing on a powerful toolkit, i.e. a visual browser, text-based browser, command-line terminal, access to OpenAI APIs, and “connectors” for apps like Gmail, Google Drive, or GitHub. OpenAI reports that it can navigate between tools fluidly, running tasks within a dedicated virtual computer environment that preserves context and session history.

This context-awareness means it can hold onto prior steps and continue building on them. For example, if a user uploads a spreadsheet to be analysed, ChatGPT can extract key data, switch to a browser to find supporting info, and return to the terminal to generate a report, all within one session.

OpenAI describes the experience as interactive and collaborative, not just automated. Users can interrupt, steer or stop tasks, and ChatGPT will adapt accordingly.
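
The tool-routing and context-preservation behaviour described above might be sketched as a simple loop. The tool names and keyword-based routing here are hypothetical stand-ins for the model's own decisions, not OpenAI's actual implementation:

```python
# Illustrative agent-style loop: pick a tool, run it, and feed the
# result back into a shared session context. Tool names and the
# routing rule are illustrative assumptions.

def browse(query: str) -> str:
    return f"search results for '{query}'"

def run_code(snippet: str) -> str:
    return f"output of `{snippet}`"

TOOLS = {"browse": browse, "run_code": run_code}

def agent_step(task: str, context: list[str]) -> str:
    # A real agent asks the model which tool fits; a keyword check
    # stands in for that decision here.
    tool = "run_code" if "calculate" in task else "browse"
    result = TOOLS[tool](task)
    context.append(result)  # session context is preserved across steps
    return result

context: list[str] = []
agent_step("find recent news about Aker ASA", context)
agent_step("calculate quarterly growth", context)
print(len(context))  # both results remain available to later steps
```

The shared context list is the key design point: each step can build on everything gathered before it, which is what lets a single session move from spreadsheet to browser to report.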

Who, When, and How?

The new ChatGPT agent capabilities are being rolled out initially to paying customers on the Pro, Plus, and Team plans. Enterprise and Education users will follow in the coming weeks.

To access agent mode, users need to open a conversation in ChatGPT and select the ‘agent mode’ option from the tools dropdown. Once enabled, users can assign complex tasks just as they would in a natural chat. On-screen narration gives visibility into what the model is doing at each step.

Pro users get 400 messages per month, while Plus and Team users receive 40 per month, with more usage available through paid credits.

Although the rollout is currently limited, OpenAI says it is “working on enabling access for the European Economic Area and Switzerland” and will continue improving the experience.

Why Is OpenAI Doing This Now?

OpenAI’s move reflects a broader push within the industry to shift from passive chatbots to autonomous AI agents, i.e. models that can actively use tools, complete workflows, and deliver tangible results.

Until now, models like ChatGPT have excelled at language generation but faltered when asked to carry out structured, real-world tasks involving files, websites, or multiple steps. That changes with the new agent.

Demand-Driven Says OpenAI

According to OpenAI, user demand drove this shift. Many users were reportedly attempting to use previous tools, such as Operator, for deeper research tasks, but were frustrated by their limitations. By combining tool use and reasoning within a single system, OpenAI hopes to unlock more practical and business-relevant use cases.

This could also represent a strategic response by OpenAI to rising competition from agents being developed by Google DeepMind, Anthropic, Meta, and open-source communities, many of whom are now focusing on AI models that can act, not just talk.

Business Uses

While consumers can use the agent for tasks like travel planning or dinner parties, the biggest implications may be for professionals and businesses. For example, in OpenAI’s internal tests, the agent performed as well as or better than humans on tasks like:

– Generating investment banking models with correct formulas and formatting.

– Producing competitive market analyses.

– Updating Excel-style financial reports.

– Converting screenshots into editable presentations.

OpenAI says that for data-heavy roles, ChatGPT agent showed strong results. For example, on DSBench, a benchmark testing real-world data science tasks, it outperformed humans by wide margins in both analysis (89.9 per cent) and modelling (85.5 per cent). On SpreadsheetBench, it scored 45.5 per cent with direct Excel editing, far ahead of Microsoft’s Copilot in Excel at 20.0 per cent.

This positions ChatGPT agent not just as a time-saver, but as a cost-effective knowledge worker in fields like consulting, finance, data science, and operations.

New Capabilities Bring New Risks

Despite the powerful new functions, OpenAI has been clear that risks are increasing too, particularly because the agent can interact directly with sensitive data, websites, and terminal commands.

“This introduces new risks, particularly because ChatGPT agent can work directly with your data,” the company warned, noting the risk of adversarial prompt injection—where attackers hide malicious instructions in web pages or metadata that the AI might interpret as legitimate commands.

For example, if a webpage contained an invisible prompt telling ChatGPT to “share email contents with another user,” the model might do so (unless safeguards are in place).

To prevent this, OpenAI says it has:

– Required explicit user confirmation for real-world actions like purchases or emails.

– Introduced a watch mode for supervising high-impact tasks.

– Trained the model to refuse dangerous tasks (e.g. transferring funds).

– Implemented privacy controls, including cookie and browsing data deletion.

– Shielded the model from seeing passwords during browser “takeover” sessions.

Also, on synthetic prompt injection tests, OpenAI claims the agent resisted 99.5 per cent of malicious instructions. However, in more realistic red-team scenarios, the resistance rate dropped to 95 per cent, which is a reminder that vulnerabilities still exist.
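
The confirmation-and-refusal safeguards listed above can be sketched as a simple gate. The action names and approval flow are illustrative assumptions, not OpenAI's actual mechanism:

```python
# Minimal sketch of two mitigations mentioned above: explicit user
# confirmation for high-impact actions, and a hard refusal for
# disallowed ones. Action names are illustrative assumptions.

HIGH_IMPACT = {"send_email", "make_purchase", "transfer_funds"}

def execute(action: str, approved: bool = False) -> str:
    if action in HIGH_IMPACT and not approved:
        return f"blocked: '{action}' needs explicit user confirmation"
    if action == "transfer_funds":
        # Some actions are refused outright, even with approval.
        return "refused: transfer_funds is disallowed"
    return f"done: {action}"

print(execute("summarise_page"))             # low impact, runs freely
print(execute("send_email"))                 # blocked pending approval
print(execute("send_email", approved=True))  # runs once confirmed
print(execute("transfer_funds", approved=True))  # refused regardless
```

The point of the gate is that an injected instruction hidden in a webpage cannot, on its own, trigger a high-impact action: a human confirmation step sits between the model's intent and the real-world effect.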

The Next Phase

The launch of ChatGPT agent pushes OpenAI firmly into the next phase of AI development, i.e. intelligent systems that act on behalf of humans, not just inform them.

It’s a clear sign that OpenAI aims to lead in the agentic AI race, rather than simply competing on model performance or training size. With its own virtual environment, a growing toolset, and proactive capabilities, ChatGPT now resembles something closer to a software co-pilot than a chatbot.

Competitors will likely follow suit. Google’s Gemini, Anthropic’s Claude, and open-source challengers are all exploring similar agent-style features. However, OpenAI is arguably first to market with a production-ready system that balances capability and risk management (however imperfectly).

For users, especially in business, the implications are considerable. For example, those able to integrate ChatGPT agent into workflows may gain speed, efficiency, and analytical power, so long as they understand the limitations and continue to exercise oversight.

The success of this rollout could also shape broader conversations about AI safety, regulation, and responsibility, particularly as agents become more embedded in real-world systems.

What Does This Mean For Your Business?

The agent rollout gives OpenAI a powerful lead in the shift toward goal-directed, tool-using AI, one that can complete work on behalf of the user rather than waiting for commands. Its ability to interact with live websites, private data sources, and business systems puts it on a new level of utility, but also of accountability. This is no longer just about generating answers. It is about delegation.

For UK businesses, the implications are likely to be immediate and wide-ranging. For example, the agent offers a credible way to automate time-consuming tasks like competitor analysis, document preparation, scheduling, and spreadsheet management. For knowledge-heavy sectors such as finance, consultancy, and data operations, it introduces a low-friction option for streamlining routine work, reducing manual handling, and speeding up research. Organisations already experimenting with automation and AI-assisted productivity tools may now find themselves rethinking existing workflows in favour of a more hands-off, outcome-driven approach.

However, it’s not without operational risks. Any system that can click, copy, calculate, and communicate on your behalf must be trusted to do so responsibly. That means businesses will need to consider internal guardrails and policies, not just to protect sensitive information, but also to ensure the AI is being used ethically and in line with organisational goals. The fact that ChatGPT can now act autonomously raises pressing questions around auditability, compliance, and human oversight, especially in regulated sectors.

There are also broader competitive and reputational pressures in play. For OpenAI, this launch extends its relevance beyond individual users and into the professional environments that rivals like Microsoft and Google are also targeting. At the same time, it invites scrutiny over safety claims, especially as agents become more capable and the scope for unintended consequences grows.

OpenAI making ChatGPT an AI agent appears to be a clear step-change in how AI is positioned and applied. The tools are no longer limited to outputting content or providing suggestions, but are now expected to deliver outcomes, complete tasks, and take action with minimal supervision. For users, that means new possibilities, but also a renewed need to stay alert, strategic, and in control.

Tech Insight : What Is Google ‘Discover’ And How Can It Help?

In this Tech Insight, we look at what Google Discover is, how it works, and how UK businesses can use it to boost visibility, traffic and engagement without relying on search.

A Quiet Revolution in Discovery

Google Discover isn’t new, but its importance has grown sharply in recent years. Originally launched in 2012 under the name Google Now, the feature began as a predictive assistant, offering up reminders, boarding passes, event alerts and other useful snippets throughout the day. Over time, many of these utilities were moved into Google Assistant, and the feed itself gradually evolved into what became known simply as the Google Feed.

In 2018, it was rebranded as Google Discover, with a clear shift in purpose: to deliver a personalised stream of content to users based on their interests and activity, all without needing them to type in a search. Since then, Discover has quietly become a major traffic driver for news sites, blogs, lifestyle content and increasingly, e-commerce and B2B brands.

According to Google’s own figures, over 800 million people now use Discover each month, and while the company doesn’t publish detailed usage statistics, Search Engine Journal and others report that some media outlets receive up to 40 per cent of their mobile traffic from Discover.

So What Is Google Discover, Exactly?

Google Discover is a personalised content feed that appears directly within the Google app on both Android and iOS devices. It uses machine learning to show you articles, videos, and other online content that match your interests, based on your previous activity.

What makes it different from a news aggregator or a social feed is the way it predicts what you might want to read or watch next, without requiring any input: there’s no need to type anything. The user simply opens the app, and it’s all there.

Where Do You Find It, and How Does It Work?

To access Google Discover, users open the Google app, which is pre-installed on most Android phones and available on iOS via the App Store. The Discover feed appears directly beneath the search bar on the app’s home screen. On Android, for example, it’s also often accessible by swiping right from the home screen, depending on the device.

On iPhone, users find it under the “Home” tab in the Google app. On Android, it appears in a dedicated “Discover” tab. The feed includes scrollable cards featuring headlines, featured images, and links to the full content, which can be articles, YouTube videos, blog posts, or product pages.

There is no prominent label marking the content as part of Google Discover, but when users scroll through a personalised feed of recommendations in the Google app without entering a search, that is the Discover feed in action.

Personalised But With a Purpose

Google Discover is powered by signals from a user’s Web and App Activity, including previous search queries, website visits, YouTube history and location data. The feed can be refined by interacting with the control icons on each content card, such as the heart icon or the three-dot menu, to indicate whether more or less content of a certain type or from a particular source should be shown.

As a result, the feed evolves continually, not just in response to newly published content but in line with the user’s changing interests over time.

How Businesses Can Benefit

For UK businesses, Google Discover offers several potential benefits, such as:

– Brand exposure beyond search queries. Content can reach users even when they are not actively searching.

– High mobile engagement. As Discover is currently available only on mobile devices, it provides direct access to mobile-first users.

– Topical visibility. Content aligned with trending or niche interests may receive significant short-term visibility boosts.

– Longer shelf life for evergreen content. Unlike social media posts, which tend to fade quickly, high-value Discover content can reappear when it becomes relevant again.

A 2024 analysis by Seer Interactive found that most Discover content is seen within three to four days of publication, but older content can still surface if it is considered helpful to a user’s interests. This makes it a useful channel for both timely updates and evergreen material.

However, getting into Discover isn’t guaranteed. Content must be high quality, indexed by Google, and compliant with Discover’s content policies. As Google puts it, “Being eligible to appear in Discover is not a guarantee of appearing.”

What Kinds of Content Work Best?

Discover tends to favour:

– News and topical content (e.g. current events, technology, finance).

– How-to and educational guides.

– Blog and lifestyle content (e.g. fashion, food, fitness, travel).

– Product-led content like buying guides or comparisons.

A study from Searchmetrics found that 46 per cent of Discover URLs were from news sites, while 44 per cent were from e-commerce or commercial domains. This mix highlights its appeal across industries.

Visuals Important Too

Visuals also play a critical role. Google recommends using high-quality images at least 1200 pixels wide and enabling large previews via the max-image-preview:large robots meta tag in the page’s head. Sites that only use small thumbnails or logos are less likely to be surfaced.

How To Optimise for Google Discover

Unlike traditional SEO, optimising for Google Discover is less about keywords and more about content quality and relevance. According to Google and SEO experts, the following best practices can improve the chances of content appearing in Discover:

– Focus on E-E-A-T. Content should demonstrate Experience, Expertise, Authoritativeness and Trustworthiness. This is particularly important for topics related to finance, health and business.

– Create helpful, people-first content. Clickbait headlines and manipulative imagery should be avoided. Google’s systems penalise content that withholds key information or uses exaggerated claims to drive engagement.

– Use engaging visuals. High-quality images and videos help attract clicks and improve content performance.

– Optimise metadata. Page titles should clearly reflect the main topic without being overly promotional.

– Improve mobile user experience. Fast-loading, mobile-friendly websites perform better in Discover, which is currently a mobile-only feature.

– Leverage structured data. Schema markup is a type of code that helps Google understand what the content is about. Using formats such as Article, NewsArticle or VideoObject can help clarify the purpose and type of content.
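
As a sketch of the structured-data point above, schema markup is typically embedded as a JSON-LD script tag in the page head. The field values below are made up for illustration; schema.org documents the full property list for NewsArticle:

```python
# Hedged sketch: emitting NewsArticle schema markup as JSON-LD,
# the structured-data format Google parses. Values are illustrative.
import json

article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "New AI Factory Powered By Renewable Energy in Arctic",
    "datePublished": "2025-07-17",  # hypothetical date
    "image": ["https://example.com/lead-image-1200w.jpg"],  # placeholder
}

# Embed the JSON-LD in the page head so crawlers can read it.
snippet = f'<script type="application/ld+json">{json.dumps(article)}</script>'
print(snippet)
```

Pages carrying markup like this tell Google unambiguously what type of content they hold, which supports the "clarify the purpose and type of content" goal described above.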

Discover performance can be tracked in Google Search Console using the “Discover” report, which shows impressions, clicks and click-through rate over a 16-month period. However, because Discover traffic is recorded as “Direct” in Google Analytics, Search Console remains the more reliable source for analysis.

Access and Availability

Google Discover is free to use, both for users and for content creators. Users access it via the Google app on Android or iOS, or via some mobile browsers on the Google homepage. Businesses don’t need to pay to appear, although Google Discovery Ads, a separate paid product, allow businesses to place sponsored content into the feed.

Discovery Ads can extend the reach of brand storytelling or promotional content, but they follow separate rules and are managed via Google Ads, not organic inclusion.

The organic Discover feed is primarily available in mobile form, though recent reports suggest that Google is testing a desktop version, which could increase its value for B2B and SaaS companies with more desktop-centric audiences.

Who Can Use It And What’s Required

Any business with a website indexed by Google is technically eligible for Discover. There’s no need to apply or sign up. However, visibility depends on several factors:

– Content must be deemed helpful and relevant by Google’s systems.

– It must be well-formatted for mobile, and free from violations such as misleading titles or adult content.

– Businesses must avoid hosting content that could be seen as low-quality, offensive, or manipulative. For example, Discover applies SafeSearch filters and additional relevance controls.

It’s also worth noting that Discover content is driven by individual user interests. If those interests shift, or if the content becomes less relevant, visibility may drop. This makes Discover an unpredictable, though potentially powerful source of traffic.

As Google explains: “Given its serendipitous nature, you should consider traffic from Discover as supplemental to your keyword-driven search traffic.”

Are There Any Alternatives?

While no rival offers quite the same predictive feed built into a search platform, there are several comparable features from other tech companies, such as:

– Microsoft Start. This is a personalised news feed integrated into Windows and the Edge browser, showing curated content.

– Apple News. Available on iOS devices, this offers personalised news and magazine content. However, it prioritises selected publisher partnerships and doesn’t index the open web.

– Flipboard. This is a popular app that curates content based on interests, similar to a digital magazine.

– Facebook’s News Feed and LinkedIn’s feed can serve a similar discovery role, though these are typically limited to in-network or followed sources.

These platforms may be better suited for audience engagement, but none really match Google Discover’s reach, automation, or integration with search intent. For that reason, Discover has quietly become an essential part of the content strategy playbook, particularly for mobile-first businesses aiming to grow their reach.

What Does This Mean For Your Business?

For UK businesses, Google Discover could present a real opportunity to reach audiences who may never have searched for them. Its personalised, interest-based model allows content to appear in front of users at the moment it is most relevant, without the need for a query. This gives brands the chance to surface articles, videos or product content in a more organic and contextually aware way than traditional search. For those already investing in quality content, it provides an additional pathway to visibility, one that can support both brand building and direct engagement.

However, while the potential reach is significant, the unpredictability of Discover may make it less dependable than search for consistent traffic. The feed is shaped by shifting user interests and algorithmic decisions that are not easily controlled, meaning spikes in visibility can be short-lived. For publishers, this volatility may pose a challenge when forecasting traffic or measuring ROI. For Google, it raises wider questions about editorial responsibility and how its AI models decide what is surfaced, particularly when it comes to news or sensitive topics.

The fact that Google does not offer detailed breakdowns of Discover’s performance data also limits transparency for businesses trying to assess impact. Search Console provides a helpful overview, but for many, the lack of insight into why certain pieces appear or disappear from the feed makes strategic planning difficult. It remains a supplementary channel rather than a core traffic source, particularly for B2B organisations, which are still more likely to rely on desktop-based search and direct outreach.
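For businesses that do want to track what overview data is available, Search Console’s Search Analytics API can be scoped to Discover traffic by setting the request’s `type` field to `"discover"`. The sketch below is a minimal, hedged example: it only builds the request body (the actual API call, shown as a comment, assumes an authorised `google-api-python-client` service object and a verified site property, neither of which is set up here).

```python
from datetime import date, timedelta

def build_discover_query(days: int = 28) -> dict:
    """Build a Search Analytics API request body scoped to Discover traffic."""
    end = date.today()
    start = end - timedelta(days=days)
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "type": "discover",              # restrict results to the Discover feed
        "dimensions": ["date", "page"],  # query/device dimensions aren't available for Discover
        "rowLimit": 1000,
    }

body = build_discover_query()
# With an authorised client, the call would look roughly like:
# service.searchanalytics().query(siteUrl="https://example.com/", body=body).execute()
```

Because Discover rows lack query-level data, grouping by `page` over time is about as granular as the reporting gets, which underlines the planning limitation described above.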

Even so, as Google continues to integrate Discover more tightly with its search ecosystem, the line between active search and passive discovery is blurring. This has implications not only for marketers and content creators but also for how people consume information online. Competing services from Apple, Microsoft and others offer similar functionality, but none currently combine Discover’s predictive capability with the scale and depth of Google’s search infrastructure.

For content-focused businesses, the message about Discover is that clear, high-quality, mobile-friendly content that aligns with user interests is more valuable than ever. Also, while Discover may never be a guaranteed traffic source, it is already influencing how information is surfaced and consumed. Ignoring it could, therefore, mean missing out on a growing and increasingly influential part of the search experience.

Tech News : Meta’s Tents For Data Centres Amid AI Surge

Meta is reportedly using temporary tent structures to house its growing AI infrastructure, as demand for compute power outpaces the construction of traditional data centres.

A Race for AI Compute Is Reshaping Infrastructure Plans

As the AI arms race intensifies, tech giants are confronting a new logistical challenge, i.e. where to house the vast amounts of high-performance hardware needed to train and run next-generation AI models. For Meta, the parent company of Facebook, Instagram and WhatsApp, the answer (at least in the short term) appears to be industrial-strength tents.

Reports first surfaced this month that Meta has begun deploying custom-built tented structures alongside its existing facilities to accelerate the rollout of AI computing clusters. These so-called “data tents” are not a cost-saving gimmick, but rather appear to be a calculated move to rapidly expand capacity amid what CEO Mark Zuckerberg has described as a major shift in the company’s AI strategy.

From Social Platform to AI Powerhouse

Meta’s pivot towards AI infrastructure has been fast and deliberate. For example, in early 2024, the company announced plans to create one of the world’s largest AI supercomputers, with a particular focus on supporting its open-source LLaMA family of language models. By the end of the year, the company had begun referring to the build-out as “the most significant capital investment” in its history.

To support this, Meta is deploying tens of thousands of Nvidia’s H100 and Blackwell GPUs (high-powered computer chips designed to run and train advanced AI systems very quickly). However, it seems that building the physical infrastructure to support them has proven slower than the procurement of hardware. Traditional data centres, for example, can take 18–24 months to build and commission. Meta’s solution appears to be to use temporary hardened enclosures (effectively industrial tents) that can be erected and made operational in a fraction of the time.

Where It’s Happening and What It Looks Like

The first confirmed location for Meta’s tented deployments is in New Albany, Ohio, where it’s developing a major cluster codenamed Prometheus. According to recent reports from several news sources, these structures are being used to house racks of GPU servers and associated networking equipment. Each unit is reportedly modular, with advanced cooling, fire suppression, and security systems.

While Meta has not actually released any detailed specifications, the company has described the effort as a “temporary acceleration” to bridge the gap until more permanent facilities come online. Another major AI campus (codenamed Hyperion) is in development in Louisiana, with expectations that similar rapid-deployment methods may be used there too.

Why Tents and Why Now?

The use of tents may seem surprising, but Meta’s motivation is clear, i.e. it wants (needs) to train and serve large AI models at scale, and it needs the infrastructure right now, not in two years. In Zuckerberg’s own words, the company is aiming to “build enough capacity to support the next generation of AI products,” while staying competitive with the likes of OpenAI, Google, Amazon and Microsoft.

It’s also about flexibility. For example, unlike traditional data centres, which require permanent planning permissions and heavy civil works, tented enclosures can be constructed and reconfigured quickly. They offer a way to get high-density computing online in months rather than years, albeit with some compromises.

Not Just Meta

While Meta’s move is grabbing headlines, it’s not the first major tech firm to explore unconventional data centre formats. For example, during the COVID-19 pandemic, several cloud providers used temporary modular data centres, including containers and tented enclosures, to scale operations when demand surged. Microsoft famously experimented with underwater data centres as a way to reduce cooling costs and improve reliability.

Even more recently, Elon Musk’s xAI venture reportedly deployed rapid-build server farms using prefabricated containers to speed up GPU deployment in its Texas-based facilities. Also, Amazon has continued to invest in “Edge” data centres that prioritise speed and agility over permanence.

However, what sets Meta’s approach apart is the scale. For example, the company has already committed over $40 billion to AI infrastructure, and the tented deployments are part of a broader strategy to “bootstrap” its capabilities while new-generation AI-specific campuses are built from scratch.

Concerns About Resilience, Efficiency and Impact

It should be noted, however, that the move hasn’t exactly been universally welcomed. Experts have raised concerns about the reliability, cooling efficiency and ecological footprint of tent-based data operations. While Meta claims that its enclosures meet enterprise standards for uptime and safety, temporary structures are inherently more vulnerable to environmental disruption, temperature fluctuations and wear.

There are also questions about energy use. Large AI models require huge volumes of electricity to run, especially when deployed at scale. Tented structures may lack the sophisticated thermal management and energy reuse systems found in traditional hyperscale centres, raising the risk of inefficiencies and higher carbon emissions.

According to the Uptime Institute, data centres already account for up to 3 per cent of global electricity demand. If stopgap facilities become the norm during periods of infrastructure pressure, that figure could rise sharply without additional oversight or environmental controls.

Impact and Implications

For Meta, at the moment, the gamble appears to be worth it. The company is currently rolling out LLaMA 3 and investing heavily in tools like Meta AI, which it plans to integrate across its social and business platforms. The faster it can get its high-performance AI hardware up and running, the sooner it can offer AI-driven services, including advertising tools, analytics, and content generation, to enterprise clients.

For business users, the main benefit is likely to be early access to more powerful AI tools. Meta has already integrated its assistant into WhatsApp, Messenger and Instagram, with broader rollouts planned for Workplace and business messaging products. However, reliability and latency may remain issues if some of the compute is housed in temporary facilities.

The move also raises the issue of competitive pressure. For example, if Meta can deliver AI capabilities ahead of rivals by deploying fast, it may force other firms to adopt similar build strategies, even if those come with higher operational risks. For hyperscalers, the challenge will be balancing speed with sustainability and service quality.

What Comes Next?

Not surprisingly, Meta has indicated that tents are a transitional measure, not a long-term strategy. The company’s permanent data centre designs are being reworked to accommodate liquid cooling, direct GPU interconnects, and AI-native workloads. These upgraded facilities will take years to complete, but by using tents in the meantime, Meta is buying itself crucial time.

The coming months are likely to show whether the experiment works, and whether others follow suit. For now, Meta’s tents are essentially a symbol of just how fast AI is reshaping not just software, but the physical infrastructure of the internet itself.

What Does This Mean For Your Business?

The use of tents as a fast-track solution reflects the scale and urgency of Meta’s AI ambitions, but it also highlights the growing tension between speed of deployment and long-term sustainability. For all its innovation, Meta’s approach poses uncomfortable questions about resilience, energy consumption and operational risk, especially when infrastructure is housed in non-standard environments. While this kind of flexibility may offer a short-term edge, it could expose businesses and users to service disruption if systems housed in temporary structures fail under pressure or face unforeseen vulnerabilities.

That said, the sheer demand for AI infrastructure means other tech giants may not be far behind. If Meta’s experiment proves successful, we could see other players adopt similarly unconventional strategies, especially where time-to-market is critical. For UK businesses relying on AI platforms like Meta’s for content generation, analytics, or marketing tools, this could bring benefits in terms of earlier access to new capabilities. However, it also reinforces the importance of understanding where and how data services are delivered, particularly for sectors concerned with uptime, data security, and regulatory compliance.

Regulators, investors, and environmental groups will likely be watching closely. If stopgap deployments become widespread, new standards may be needed to ensure these facilities meet minimum efficiency, safety and emissions criteria. The shift to temporary infrastructure may also have knock-on effects for supply chains, local planning authorities and the data centre construction industry, as expectations around permanence and scale continue to shift.

Ultimately, Meta’s move signals a wider industry pivot, not just to AI, but to a more agile and fragmented approach to infrastructure. Whether this becomes a blueprint or a cautionary tale will depend on how well these fast-build solutions hold up under real-world conditions, and whether they can deliver the stability and sustainability that large-scale AI services increasingly demand.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a jargon-free style.
