Meta is reportedly using temporary tent structures to house its growing AI infrastructure, as demand for compute power outpaces the construction of traditional data centres.
A Race for AI Compute Is Reshaping Infrastructure Plans
As the AI arms race intensifies, tech giants are confronting a new logistical challenge: where to house the vast amounts of high-performance hardware needed to train and run next-generation AI models. For Meta, the parent company of Facebook, Instagram and WhatsApp, the answer (at least in the short term) appears to be industrial-strength tents.
Reports first surfaced this month that Meta has begun deploying custom-built tented structures alongside its existing facilities to accelerate the rollout of AI computing clusters. These so-called “data tents” are not a cost-saving gimmick, but rather appear to be a calculated move to rapidly expand capacity amid what CEO Mark Zuckerberg has described as a major shift in the company’s AI strategy.
From Social Platform to AI Powerhouse
Meta’s pivot towards AI infrastructure has been fast and deliberate. In early 2024, the company announced plans to build one of the world’s largest AI supercomputers, with a particular focus on supporting its open-source Llama family of language models. By the end of the year, it was describing the buildout as “the most significant capital investment” in its history.
To support this, Meta is deploying tens of thousands of Nvidia’s H100 and Blackwell GPUs (high-powered chips designed to train and run advanced AI systems at speed). However, building the physical infrastructure to house them has proven slower than procuring the hardware itself. Traditional data centres, for example, can take 18–24 months to build and commission. Meta’s solution appears to be temporary hardened enclosures (effectively industrial tents) that can be erected and made operational in a fraction of the time.
Where It’s Happening and What It Looks Like
The first confirmed location for Meta’s tented deployments is in New Albany, Ohio, where it’s developing a major cluster codenamed Prometheus. According to recent reports from several news sources, these structures are being used to house racks of GPU servers and associated networking equipment. Each unit is reportedly modular, with advanced cooling, fire suppression, and security systems.
While Meta has not released detailed specifications, the company has described the effort as a “temporary acceleration” to bridge the gap until more permanent facilities come online. Another major AI campus (codenamed Hyperion) is in development in Louisiana, with expectations that similar rapid-deployment methods may be used there too.
Why Tents and Why Now?
The use of tents may seem surprising, but Meta’s motivation is clear: it needs to train and serve large AI models at scale, and it needs the infrastructure now, not in two years. In Zuckerberg’s own words, the company is aiming to “build enough capacity to support the next generation of AI products,” while staying competitive with the likes of OpenAI, Google, Amazon and Microsoft.
It’s also about flexibility. Unlike traditional data centres, which require planning permission and heavy civil works, tented enclosures can be constructed and reconfigured quickly. They offer a way to get high-density computing online in months rather than years, albeit with some compromises.
Not Just Meta
While Meta’s move is grabbing headlines, it’s not the first major tech firm to explore unconventional data centre formats. For example, during the COVID-19 pandemic, several cloud providers used temporary modular data centres, including containers and tented enclosures, to scale operations when demand surged. Microsoft famously experimented with underwater data centres as a way to reduce cooling costs and improve reliability.
More recently, Elon Musk’s xAI venture reportedly deployed rapid-build server farms using prefabricated containers to speed up GPU deployment at its Memphis, Tennessee site. Amazon, meanwhile, has continued to invest in edge data centres that prioritise speed and agility over permanence.
However, what sets Meta’s approach apart is the scale. For example, the company has already committed over $40 billion to AI infrastructure, and the tented deployments are part of a broader strategy to “bootstrap” its capabilities while new-generation AI-specific campuses are built from scratch.
Concerns About Resilience, Efficiency and Impact
The move hasn’t been universally welcomed, however. Experts have raised concerns about the reliability, cooling efficiency and ecological footprint of tent-based data operations. While Meta claims that its enclosures meet enterprise standards for uptime and safety, temporary structures are inherently more vulnerable to environmental disruption, temperature fluctuations and wear.
There are also questions about energy use. Large AI models require huge amounts of electricity to train and run, especially when deployed at scale. Tented structures may lack the sophisticated thermal management and energy reuse systems found in traditional hyperscale centres, raising the risk of inefficiencies and higher carbon emissions.
According to the Uptime Institute, data centres already account for up to 3 per cent of global electricity demand. If stopgap facilities become the norm during periods of infrastructure pressure, that figure could rise sharply without additional oversight or environmental controls.
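To put the scale of that demand in perspective, a rough back-of-envelope calculation is enough. The short sketch below is purely illustrative: the GPU count, per-chip power figure and overhead multiplier are assumptions for the sake of arithmetic, not numbers Meta has disclosed about its tented sites.

```python
# Illustrative back-of-envelope estimate of a large GPU cluster's electricity draw.
# All figures below are assumptions, not published Meta specifications.

GPU_COUNT = 100_000        # assumed cluster size ("tens of thousands" of accelerators)
GPU_POWER_KW = 0.7         # roughly 700 W per H100-class GPU (vendor-rated TDP)
OVERHEAD_FACTOR = 1.4      # assumed multiplier for cooling, networking and power losses
HOURS_PER_YEAR = 8_760

# Continuous facility draw in megawatts.
facility_mw = GPU_COUNT * GPU_POWER_KW * OVERHEAD_FACTOR / 1_000

# Annual consumption in gigawatt-hours, assuming round-the-clock utilisation.
annual_gwh = facility_mw * HOURS_PER_YEAR / 1_000

print(f"Estimated facility draw: {facility_mw:,.0f} MW")
print(f"Estimated annual consumption: {annual_gwh:,.0f} GWh")
```

On those assumptions, a single 100,000-GPU cluster draws on the order of 100 MW continuously (roughly 860 GWh a year), which is why the efficiency of whatever building, or tent, houses the hardware matters so much.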
Impact and Implications
For Meta, at the moment, the gamble appears to be worth it. The company is rolling out Llama 3 and investing heavily in tools like Meta AI, which it plans to integrate across its social and business platforms. The faster it can get its high-performance AI hardware up and running, the sooner it can offer AI-driven services, including advertising tools, analytics, and content generation, to enterprise clients.
For business users, the main benefit is likely to be early access to more powerful AI tools. Meta has already integrated its assistant into WhatsApp, Messenger and Instagram, with broader rollouts planned for Workplace and business messaging products. However, reliability and latency may remain issues if some of the compute is housed in temporary facilities.
The move also raises the issue of competitive pressure. If Meta can deliver AI capabilities ahead of rivals by deploying quickly, it may force other firms to adopt similar build strategies, even if those come with higher operational risks. For hyperscalers, the challenge will be balancing speed with sustainability and service quality.
What Comes Next?
Not surprisingly, Meta has indicated that tents are a transitional measure, not a long-term strategy. The company’s permanent data centre designs are being reworked to accommodate liquid cooling, direct GPU interconnects, and AI-native workloads. These upgraded facilities will take years to complete, but by using tents in the meantime, Meta is buying itself crucial time.
The coming months are likely to show whether the experiment works, and whether others follow suit. For now, Meta’s tents are essentially a symbol of just how fast AI is reshaping not just software, but the physical infrastructure of the internet itself.
What Does This Mean For Your Business?
The use of tents as a fast-track solution reflects the scale and urgency of Meta’s AI ambitions, but it also highlights the growing tension between speed of deployment and long-term sustainability. For all its innovation, Meta’s approach poses uncomfortable questions about resilience, energy consumption and operational risk, especially when infrastructure is housed in non-standard environments. While this kind of flexibility may offer a short-term edge, it could expose businesses and users to service disruption if systems housed in temporary structures fail under pressure or face unforeseen vulnerabilities.
That said, the sheer demand for AI infrastructure means other tech giants may not be far behind. If Meta’s experiment proves successful, we could see other players adopt similarly unconventional strategies, especially where time-to-market is critical. For UK businesses relying on AI platforms like Meta’s for content generation, analytics, or marketing tools, this could bring benefits in terms of earlier access to new capabilities. However, it also reinforces the importance of understanding where and how data services are delivered, particularly for sectors concerned with uptime, data security, and regulatory compliance.
Regulators, investors, and environmental groups will likely be watching closely. If stopgap deployments become widespread, new standards may be needed to ensure these facilities meet minimum efficiency, safety and emissions criteria. The shift to temporary infrastructure may also have knock-on effects for supply chains, local planning authorities and the data centre construction industry, as expectations around permanence and scale continue to shift.
Ultimately, Meta’s move signals a wider industry pivot, not just to AI, but to a more agile and fragmented approach to infrastructure. Whether this becomes a blueprint or a cautionary tale will depend on how well these fast-build solutions hold up under real-world conditions, and whether they can deliver the stability and sustainability that large-scale AI services increasingly demand.