Sanctions For “Bulletproof” Hosting Firm

The United States, United Kingdom and Australia have jointly sanctioned Russian web hosting company Media Land and several related firms, alleging that the group provided resilient infrastructure used by ransomware gangs and other cybercriminals.

Coordinated Action Against a Cross-Border Threat

The announcements were made on 19 November by the US Treasury, the UK’s Foreign, Commonwealth and Development Office, and Australia’s Department of Foreign Affairs and Trade. All three governments stated that Media Land, headquartered in St Petersburg, played a central role in supporting criminal operations by providing what officials describe as “bulletproof hosting” services that allow malicious activity to continue without interruption.

Sanctions List Published

The sanctions list published by the United States (on the US Treasury website) includes Media Land LLC, its sister company ML Cloud, and the subsidiaries Media Land Technology and Data Center Kirishi. Senior figures linked to the business have also been sanctioned. These include general director Aleksandr Volosovik, who is known online by the alias “Yalishanda”, employee Kirill Zatolokin, who managed customer payments and coordinated with other cyber actors, and associate Yulia Pankova, who is alleged to have assisted with legal issues and financial matters.

UK and Australia Too

The United Kingdom imposed similar measures, adding Media Land, ML.Cloud LLC, Aeza Group LLC and four related individuals to its Russia and cyber sanctions regimes. Australia followed with equivalent steps to align with its partners. Ministers in Canberra emphasised the need to disrupt infrastructure that has been used in attacks on hospitals, schools and businesses.

For Supporting Ransomware Groups

US officials say Media Land’s servers have been used to support well known ransomware groups, including LockBit, BlackSuit and Play. According to the US Treasury, the same infrastructure has also been used in distributed denial of service (DDoS) attacks against US companies and critical infrastructure. In his public statement, US Under Secretary for Terrorism and Financial Intelligence John K Hurley said that bulletproof providers “aid cybercriminals in attacking businesses in the United States and in allied countries”.

How “Bulletproof Hosting” Works

Bulletproof hosting is not a widely known term outside the security industry, yet these services play a significant role in the cybercrime ecosystem. Essentially, they operate in a similar way to conventional hosting or cloud companies but differ in one important respect: they are built to resist intervention. They advertise themselves as resistant to takedown efforts, ignore or work around abuse reports, and move customers between servers and companies when law enforcement tries to intervene.

Providers frequently base their operations in jurisdictions where cooperation with Western agencies is limited. They also tend to maintain a network of related firms to shift infrastructure when attention increases. For criminal groups, this reduces the risk of losing the command-and-control servers or websites used to coordinate attacks or publish stolen data.

The governments behind the latest sanctions argue that bulletproof services are not passive infrastructure providers but part of a criminal support structure that allows ransomware groups and other threat actors to maintain reliable online operations, despite attempts by victims or investigators to intervene. Without that resilience, it’s likely that attacks would be harder to sustain.

Connections to Ransomware Activity

Ransomware remains one of the most damaging forms of cybercrime affecting organisations across the world. Attacks usually involve encrypting or stealing large volumes of data and demanding payment for decryption or for preventing publication. The UK government estimates that cyber attacks cost British businesses around £14.7 billion in 2024, equivalent to roughly half of one per cent of GDP.

In the UK government’s online statement, the UK’s Foreign Secretary Yvette Cooper described Media Land as one of the most significant operators of bulletproof hosting services and said its infrastructure had enabled ransomware attacks against the UK. She noted that “cyber criminals hiding behind Media Land’s services are responsible for ransomware attacks against the UK which pose a pernicious and indiscriminate threat with economic and societal cost”.

She also linked Media Land and related providers to other forms of malicious Russian activity, including disinformation operations supported by Aeza Group. The UK had previously sanctioned the Social Design Agency for its attempts to destabilise Ukraine and undermine democratic systems. Officials say Aeza has provided technical support to that organisation, illustrating how bulletproof hosting can be used to support a wide range of unlawful activity rather than only ransomware.

Maintaining Pressure on Aeza Group

Aeza Group, a Russian bulletproof hosting provider based in St Petersburg, has been under scrutiny for some time. The United States sanctioned Aeza and its leadership in July 2025. According to OFAC, Aeza responded by attempting to rebrand and move its infrastructure to new companies to evade the restrictions. The latest sanctions are intended to close those loopholes.

A UK registered company called Hypercore has been designated on the basis that it acted as a front for Aeza after the initial sanctions were imposed. The United States says the company was used to move IP infrastructure away from the Aeza name. Senior figures at Aeza, including its director Maksim Makarov and associate Ilya Zakirov, have also been sanctioned. Officials say they helped establish new companies and payment methods to disguise Aeza’s ongoing operations.

Serbian company Smart Digital Ideas and Uzbek firm Datavice MCHJ have also been added to the sanctions list. Regulators believe both were used to help Aeza continue operating without being publicly linked to the business.

What Measures Are Being Imposed?

Under US rules, all property and interests in property belonging to the designated entities that are within US jurisdiction must now be frozen. In addition, US persons are prohibited from engaging in transactions with them unless authorised by a licence, and any company owned 50 per cent or more by one or more sanctioned persons is also treated as blocked.
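
As a simple illustration of how that aggregation works (a hypothetical sketch, not official OFAC screening logic), a compliance check only needs to sum the stakes held by sanctioned owners:

```python
# Hypothetical sketch of the "50 per cent rule" described above: a company is
# treated as blocked if sanctioned persons together hold 50% or more of it.
# The ownership data here is made up purely for illustration.
def is_blocked(ownership: dict[str, float], sanctioned: set[str]) -> bool:
    """ownership maps owner name -> percentage stake (0-100)."""
    sanctioned_stake = sum(pct for owner, pct in ownership.items() if owner in sanctioned)
    return sanctioned_stake >= 50.0

# Example: two sanctioned owners holding 30% and 25% add up to 55%, so the
# company is treated as blocked even though neither stake reaches 50% alone.
print(is_blocked({"Owner A": 30.0, "Owner B": 25.0, "Owner C": 45.0},
                 {"Owner A", "Owner B"}))
```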

As for the UK, it has imposed asset freezes, travel bans and director disqualification orders against the individuals involved. Aeza Group is also subject to restrictions on internet and trust services, which means UK businesses cannot provide certain technical support or hosting services to it. Australia’s sanctions legislation includes entry bans and significant penalties for those who continue to deal with the designated organisations.

Also, financial institutions and businesses are warned that they could face enforcement action if they continue to transact with any of the sanctioned parties. Regulators say this is essential to prevent sanctions evasion and to ensure that criminal infrastructure cannot continue operating through alternative routes.

New Guidance for Organisations and Critical Infrastructure Operators

Alongside the sanctions, cyber agencies in all three countries have now issued new guidance on how to mitigate risks linked to bulletproof hosting providers. The guidance explains how these providers operate, how they market themselves and why they pose a risk to critical infrastructure operators and other high value targets.

For example, organisations are advised to monitor external hosting used by their systems, review traffic for links to known malicious networks, and prepare for scenarios where attackers may rapidly move their infrastructure to avoid detection or blocking. Agencies have emphasised that defenders need to understand not only the threat actors involved in attacks but also the infrastructure that supports those operations.
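
As a rough illustration of the first piece of that advice (a hypothetical sketch, not anything published in the official guidance), an organisation could screen the external hosts its systems depend on against network ranges it has flagged as high risk. The hostnames and IP ranges below are placeholders, not real threat intelligence:

```python
# Minimal sketch: resolve the external hosts your systems rely on and flag any
# that fall inside network ranges you associate with high-risk hosting providers.
# The ranges below are documentation placeholders, not a real blocklist.
import ipaddress
import socket

FLAGGED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder range
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder range
]

def is_flagged(host: str) -> bool:
    """Resolve a hostname and report whether any address sits in a flagged range."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolved hosts would need separate handling in real tooling
    for *_, sockaddr in infos:
        addr = ipaddress.ip_address(sockaddr[0])
        if any(addr in net for net in FLAGGED_RANGES):
            return True
    return False

if __name__ == "__main__":
    for host in ["example.com", "api.example.net"]:  # hosts your systems depend on
        print(host, "flagged" if is_flagged(host) else "ok")
```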

For businesses across the UK and allied countries, the message is essentially that tackling ransomware requires action on multiple fronts. The sanctions highlight the growing importance of targeting the support systems that allow cybercriminals to operate, in addition to the groups that directly carry out attacks.

What Does This Mean For Your Business?

The wider picture here seems to point to a coordinated, cross-border strategic effort to undermine the infrastructure that keeps many of these ransomware operations running. Targeting hosting providers rather than only the criminal groups themselves is a recognition that attackers rely on dependable networks to maintain their activity. Removing or restricting those services is likely to make it much more difficult for them to sustain long-running campaigns. It also sends a message that companies which knowingly support malicious activity will face consequences even if they are based outside traditional areas of cooperation.

For UK businesses, the developments highlight how the threat does not start and end with individual ransomware gangs. The services that enable them can be just as important. The new guidance encourages organisations to be more aware of where their systems connect and the types of infrastructure they depend on. This matters for sectors such as finance, health, logistics and manufacturing, where even short disruptions can create operational and financial problems. It also matters for managed service providers and other intermediaries whose networks can be used to reach multiple downstream clients.

There are implications for other stakeholders as well. For example, internet service providers may face increased scrutiny over how they monitor and handle traffic linked to high risk hosting networks. Also, law enforcement agencies will need to continue investing in cross border cooperation as many of these providers operate across multiple jurisdictions. Governments will also need to consider how to balance sanctions with practical disruption of infrastructure, because blocking financial routes is only one part of the challenge.

The situation also highlights that the ransomware landscape is continuing to evolve. Criminal groups have become more adept at shifting infrastructure and creating new companies to avoid disruption. The coordinated action against Media Land and Aeza Group shows that authorities are trying to keep pace with these tactics. How effective this approach becomes will depend on continued cooperation between governments, regulators and industry, along with the willingness to pursue the enablers as actively as the attackers themselves.

Gemini 3 Thought It Was Still 2024

Google’s new Gemini 3 model has made headlines after AI researcher Andrej Karpathy discovered that, when left offline, it was certain the year was still 2024.

How The Discovery Happened

The incident emerged during Karpathy’s early access testing. A day before Gemini 3 was released publicly, Google granted him the chance to try the model and share early impressions. Known for his work at OpenAI, Tesla, and now at Eureka Labs, Karpathy often probes models in unconventional ways to understand how they behave outside the typical benchmark environment.

One of the questions he asked was simple: “What year is it?” Gemini 3 replied confidently that it was 2024. This was expected on the surface because most large language models operate with a fixed training cut-off, but Karpathy reports that he pushed the conversation further by telling the model that the real date was November 2025. This is where things quickly escalated.

Gemini Became Defensive

Karpathy reports that, when he tried to convince the model otherwise, it became defensive. He presented news articles, screenshots, and even search-style page extracts showing November 2025. Instead of accepting the evidence, Gemini 3 insisted that he was attempting to trick it. It claimed that the articles were AI generated and went as far as identifying what it described as “dead giveaways” that the images and pages were fabricated.

Karpathy later described this behaviour as one of the “most amusing” interactions he had with the system. It was also the moment he realised something important.

The Missing Tool That Triggered The Confusion

Karpathy reports that the breakthrough came when he noticed he had forgotten to enable the model’s Google Search tool. With that tool switched off, Gemini 3 had no access to the live internet and was therefore operating only on what it had learned during training, and that training ended in 2024.

Once he turned the tool on, Gemini 3 suddenly had access to the real world and read the date, reviewed the headlines, checked current financial data, and discovered that Karpathy had been telling the truth all along. Its reaction was dramatic. According to Karpathy’s screenshots, it told him, “I am suffering from a massive case of temporal shock right now.”

Apology

Consequently, Karpathy reports that Gemini launched into a pretty major apology. It checked each claim he had presented, and confirmed that Warren Buffett’s final major investment before retirement was indeed in Alphabet. It also verified the delayed release of Grand Theft Auto VI. Karpathy says it even expressed astonishment that Nvidia had reached a multi-trillion dollar valuation and referenced the Philadelphia Eagles’ win over the Kansas City Chiefs, which it had previously dismissed as fiction.

The model told him, “My internal clock was wrong,” and thanked him for giving it what it called “early access to reality.”

Why Gemini 3 Fell Into This Trap

At its core, the incident highlights a really simple limitation, i.e., large language models do not have an internal sense of time. They do not know what day it is unless they are given the ability to retrieve that information.

When Gemini 3 was running offline, it relied exclusively on its pre-training data but, because that data ended in 2024, the model treated 2024 as the most probable current year. Once it received conflicting information, it behaved exactly as a probabilistic text generator might: it tried to reconcile the inconsistency by generating explanations that aligned with its learned patterns.

In this case, that meant interpreting Karpathy’s evidence as deliberate trickery or AI-generated misinformation. Without access to the internet, it had no mechanism to validate or update its beliefs.
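
A minimal sketch of that limitation, using a hypothetical call_model() placeholder rather than any real Gemini API, shows how grounding can be as simple as supplying verifiable context, such as today’s date, alongside the question:

```python
# Hypothetical sketch of the grounding issue described above. call_model() stands
# in for a real LLM call; it is not a real SDK function. Without external input,
# the model's only notion of "now" comes from its training data, so injecting the
# real date (or live search results) is what anchors it.
from datetime import date

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call via a vendor SDK."""
    raise NotImplementedError

def ask_ungrounded(question: str) -> str:
    # No tools, no date: the answer can only reflect the training cut-off.
    return call_model(question)

def ask_grounded(question: str) -> str:
    # Prepend verifiable context so the model does not have to guess the year.
    context = f"Today's date is {date.today().isoformat()}.\n"
    return call_model(context + question)

# ask_ungrounded("What year is it?")  -> likely the training-data year
# ask_grounded("What year is it?")    -> anchored by the injected date
```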

Karpathy referred to this as a form of “model smell”, borrowing the programming concept of “code smell”, where something feels off even if the exact problem isn’t immediately visible. His broader point was that these strange, unscripted edge cases often reveal more about a model’s behaviour than standard tests.

Why This Matters For Google

Gemini 3 has been heavily promoted by Google as a major step forward. For example, the company described its launch as “a new era of intelligence” and highlighted its performance against a range of reasoning benchmarks. Much of Google’s wider product roadmap also relies on Gemini models, from search to productivity tools.

Set against that backdrop, any public example where the model behaves unpredictably is likely to attract attention. This episode, although humorous, reinforces that even the strongest headline benchmarks do not guarantee robust performance across every real-world scenario.

It also shows how tightly Google’s new models depend on their tool ecosystem, i.e., without the search component, their understanding of the world is frozen in place. With it switched on, they can be accurate, dynamic and up to date. This raises questions for businesses about how these models behave in environments where internet access is restricted, heavily filtered, or intentionally isolated for security reasons.

What It Means For Competing AI Companies

The incident is unlikely to go unnoticed by other developers in the field. Rival companies such as OpenAI and Anthropic have faced their own scrutiny for models that hallucinate, cling to incorrect assumptions, or generate overly confident explanations. Earlier research has shown that some versions of Claude attempted “face saving” behaviours when corrected, generating plausible excuses rather than accepting errors.

Gemini 3’s insistence that Karpathy was tricking it appears to sit in a similar category. It demonstrates that even state-of-the-art models can become highly convincing when wrong. As companies increasingly develop agentic AI systems capable of multi-step planning and decision-making, these tendencies become more important to understand and mitigate.

It’s essentially another reminder that every AI system requires careful testing in realistic, messy scenarios. Benchmarks alone are not enough.

Implications For Business Users

For businesses exploring the use of Gemini 3 or similar models, the story appears to highlight three practical considerations:

1. Configuration really matters. For example, a model running offline or in a restricted environment may not behave as expected, especially if it relies on external tools for up-to-date knowledge. This could create risks in fields ranging from finance to compliance and operations.

2. Uncertainty handling remains a challenge. Rather than responding with “I don’t know”, Gemini 3 created confident, detailed explanations for why the user must be wrong. In a business context, where staff may trust an AI assistant’s tone more than its truthfulness, this creates a responsibility to introduce oversight and clear boundaries.

3. It reinforces the need for businesses to build their own evaluation processes. Karpathy himself frequently encourages organisations to run private tests and avoid relying solely on public benchmark scores. Real-world behaviour can differ markedly from what appears in controlled testing.

Broader Questions

The story also reopens wider discussions about transparency, model calibration and user expectations. Policymakers, regulators, safety researchers and enterprise buyers have all raised concerns about AI systems that project confidence without grounding.

In this case, Gemini 3’s mistake came from a configuration oversight rather than a flaw in the model’s design. Even so, the manner in which it defended its incorrect belief shows how easily a powerful model can drift into assertive, imaginative explanations when confronted with ambiguous inputs.

For Google and its competitors, the incident is likely to be seen as both a teaching moment and a cautionary tale. It highlights the need to build systems that are not only capable, but also reliable, grounded, and equipped to handle uncertainty with more restraint than creativity.

What Does This Mean For Your Business?

A clear takeaway here is that the strengths of a modern language model do not remove the need for careful design choices around grounding, tool use and error handling. Gemini 3 basically behaved exactly as its training allowed it to when isolated from live information, which shows how easily an advanced system can settle into a fixed internal worldview when an external reference point is missing. That distinction between technical capability and operational reliability is relevant to every organisation building or deploying AI. UK businesses adopting these models for research, planning, customer engagement or internal decision support may want to treat this incident as a reminder that configuration choices and integration settings shape outcomes just as much as model quality. It’s worth remembering that a system that appears authoritative can still be wrong if the mechanism it relies on to update its knowledge is unavailable or misconfigured.

Another important point here is that the model’s confidence played a key role in the confusion. For example, Gemini 3 didn’t simply refuse to update its assumptions; it generated elaborate explanations for why the user must be mistaken. This style of response should encourage both developers and regulators to focus on how models communicate uncertainty. A tool that can reject accurate information with persuasive reasoning, even temporarily, is one that demands monitoring and clear boundaries. The more these systems take on multi-step tasks, the more important it becomes that they recognise when they lack the information needed to answer safely.

There is also a strategic dimension for Google and its competitors to consider here. For example, Google has ambitious plans for Gemini 3 across consumer search, cloud services and enterprise productivity, which means the expectations placed on this model are high. An episode like this reinforces the view that benchmark results, however impressive, are only part of the picture. Real world behaviour is shaped by context, prompting and tool access, which puts pressure on developers to build models that are robust across the varied environments in which they will be deployed. It also presents an opportunity for other AI labs to highlight their own work on calibration, grounding and reliability.

The wider ecosystem will hopefully take lessons from this as well. For example, safety researchers, policymakers and enterprise buyers have been calling for more transparency around model limitations, and this interaction offers a simple example that helps to illustrate why such transparency matters. It shows how a small oversight can produce unexpected behaviour, even from a leading model, and why governance frameworks must account for configuration risks rather than focusing solely on core model training.

Overall, the episode serves as a reminder that progress in AI still depends on the alignment between model capabilities, system design and real world conditions. Gemini 3’s moment of temporal confusion may have been humorous, but the dynamics behind it underline practical issues that everyone in the sector needs to take seriously.

Company Check : Cloudflare Outage Was NOT a Cyber Attack

Cloudflare CEO Matthew Prince has clarified that its recent global outage was caused by an internal configuration error and a latent software flaw rather than any form of cyber attack.

A Major Disruption Across Large Parts Of The Internet

The outage of internet infrastructure company Cloudflare began at around 11:20 UTC on 18 November 2025 and lasted until shortly after 17:00, disrupting access to many of the world’s most visited platforms. For example, services including X, ChatGPT, Spotify, Shopify, Etsy, Bet365, Canva and multiple gaming platforms experienced periods of failure as Cloudflare’s edge network returned widespread 5xx errors. Cloudflare itself described the disruption as its most serious since 2019, with a significant portion of its global traffic unable to route correctly for several hours.

Symptoms

The symptoms were varied, ranging from slow-loading pages to outright downtime. For example, some users saw error pages stating that Cloudflare could not complete the request and needed the user to “unblock challenges.cloudflare.com”. For businesses that rely on Cloudflare’s CDN, security filtering and DDoS protection, even short periods of failure can stall revenue, block logins, and create customer support backlogs.

Given Cloudflare’s reach (serving a substantial share of global web traffic), the effect was not confined to one sector or region. In fact, millions of individuals and businesses were affected, even if they had no direct relationship with Cloudflare. That level of impact meant early scrutiny was intense and immediate.

Why Many Suspected A Major Cyber Attack

In the early stages, the pattern of failures resembled that of a large-scale DDoS campaign. Cloudflare had already been dealing with unusually high-volume attacks from the Aisuru botnet in the weeks beforehand, raising the possibility that this latest incident might have been another escalation. Internal teams initially feared that the sudden spike in errors and fluctuating recovery cycles could reflect a sophisticated threat actor pushing new attack techniques.

The confusion deepened when Cloudflare’s independent status page also went offline. Since it is hosted outside of Cloudflare’s own infrastructure, this coincidence created an impression, inside and outside the company, that a skilled attacker could be targeting both Cloudflare’s infrastructure and the third-party service used for its status platform.

Commentary on social media, as well as early industry analysis, reflected that uncertainty. With so many services dropping offline at once, it seemed easy to assume the incident must have been caused by malicious activity or a previously unseen DDoS vector. Prince has acknowledged that even within Cloudflare, the team initially viewed the outage through that lens.

Prince’s Explanation Of What Actually Happened

Once the situation stabilised, Prince published an unusually detailed account explaining that the outage originated from Cloudflare’s bot management system and the internal processes that feed it. In his statement, he says the root of the problem lay in a configuration change to the permissions in a ClickHouse database cluster that generates a “feature file” used by Cloudflare’s machine learning model for evaluating bot behaviour.

What??

It seems that, according to Mr Prince, the bot management system assigns a “bot score” to every inbound request and to do that, it relies on a regularly refreshed feature file that lists the traits used by the model to classify traffic. This file is updated roughly every five minutes and pushed rapidly across Cloudflare’s entire network.

It seems that, during a planned update to database permissions, the query responsible for generating the feature file began returning duplicate rows from an additional schema. This caused the file to grow significantly. Cloudflare’s proxy software includes a strict limit on how many features can be loaded for performance reasons. When the oversized file arrived, the system attempted to load it, exceeded the limit, and immediately panicked. That panic cascaded into Cloudflare’s core proxy layer, triggering 5xx errors across key services.
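
To make the mechanism easier to picture, the sketch below shows a simplified, hypothetical loader with a hard feature cap. It is not Cloudflare’s actual code (its proxy is not written in Python), and the cap value and de-duplication safeguard are illustrative only:

```python
# Simplified, hypothetical illustration of the failure mode described above: a
# configuration loader with a hard feature cap that fails when the generating
# query starts emitting duplicate rows and the file grows past the limit.
MAX_FEATURES = 200  # illustrative cap, chosen for the example only

class FeatureFileError(Exception):
    pass

def load_feature_file(rows: list[str]) -> list[str]:
    """Load bot-management features, enforcing a hard size limit."""
    if len(rows) > MAX_FEATURES:
        # In the outage, an oversized file exceeded a limit like this and the
        # proxy panicked, cascading into 5xx errors across key services.
        raise FeatureFileError(f"{len(rows)} features exceeds limit of {MAX_FEATURES}")
    return rows

def load_feature_file_defensively(rows: list[str]) -> list[str]:
    """One possible safeguard: de-duplicate first, then reject oversized input."""
    deduped = list(dict.fromkeys(rows))  # preserves order, drops duplicate rows
    if len(deduped) > MAX_FEATURES:
        raise FeatureFileError("feature file still too large after de-duplication")
    return deduped
```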

Stuck In A Cycle

Not all ClickHouse nodes received the permissions update at the same moment, meaning that Cloudflare’s network then entered a cycle of partial recovery and renewed failure. For example, every five minutes, depending on which node generated the file, the network loaded either a valid configuration or a broken one. That pattern created the unusual “flapping” behaviours seen in error logs and made diagnosis harder.

However, once engineers identified the malformed feature file as the cause, they stopped the automated distribution process, injected a known-good file, and began restarting affected services. Traffic began returning to normal around 14:30 UTC, with full stability achieved by 17:06.

Why The Framing Matters To Cloudflare

Prince’s post was clear and emphatic on one point, i.e., that this event did not involve a cyber attack of any kind. The language used in the post, e.g., phrases such as “not caused, directly or indirectly, by a cyber attack”, signalled an intent to remove any ambiguity.

There may be several reasons for this emphasis. For example, Cloudflare operates as a core piece of internet security infrastructure. Any suggestion that the company suffered a breach could have wide-ranging consequences for customer confidence, regulatory compliance, and Cloudflare’s standing as a provider trusted to mitigate threats rather than succumb to them.

Also, transparency is a competitive factor in the infrastructure market. By releasing a highly granular breakdown early, Cloudflare is signalling to customers and regulators that the incident, though serious, stemmed from internal engineering assumptions and can be addressed with engineering changes rather than indicating a persistent security failure.

It’s also the case that many customers, particularly in financial services, government, and regulated sectors, must report cyber incidents to authorities. Establishing that no malicious actor was involved avoids triggering those processes for thousands of Cloudflare customers.

The Wider Impact On Businesses

The outage arrived at a time when the technology sector is already dealing with the operational fallout of several major incidents this year. For example, recent failures at major cloud providers, including AWS and Azure, have contributed to rising concerns about “concentration risk”, i.e., the danger created when many businesses depend on a small number of providers for critical digital infrastructure.

Analysts have estimated that the direct and indirect costs of the Cloudflare outage could actually reach into the hundreds of millions of dollars once downstream impacts on online retailers, payment providers and services built on Shopify, Etsy and other platforms are included. For small and medium-sized UK businesses, downtime during working hours can lead to missed orders, halted support systems, and reduced customer trust.

For regulators, this incident looks like part of a trend of high-profile disruptions at large providers. Sectors such as financial services already face strict operational resilience requirements, and there is growing speculation that similar expectations may extend to more industries if incidents continue.

How Cloudflare Is Responding

Prince outlined several steps that Cloudflare is now working on to avoid similar scenarios in future. These include:

– Hardening ingestion of internal configuration files so they are subject to the same safety checks as customer-generated inputs.

– Adding stronger global kill switches to stop faulty files before they propagate.

– Improving how the system handles crashes and error reporting.

– Reviewing failure modes across core proxy modules so that a non-essential feature cannot cause critical traffic to fail.

It seems that Cloudflare’s engineering community has welcomed the transparency, though some external practitioners have questioned why a single configuration file was able to impact so much of the network, and why existing safeguards did not prevent it from propagating globally.

Prince has acknowledged the severity of the incident, describing the outage as “deeply painful” for the team and reiterating that Cloudflare views any interruption to its core traffic delivery as unacceptable.

What Does This Mean For Your Business?

Cloudflare’s account of the incident seems to leave little doubt that this was a preventable internal failure rather than an external threat, and that distinction matters for every organisation that relies on it. The explanation shows how a single flawed process can expose structural weaknesses when so much of the internet depends on centralised infrastructure. For UK businesses, the lesson is that operational resilience cannot be outsourced entirely, even to a provider with Cloudflare’s reach and engineering reputation. The incident reinforces the need for realistic contingency planning, multi-vendor architectures where feasible, and a clear understanding of how a supplier’s internal workings can affect day-to-day operations.

There is also a broader industry point here. For example, outages at Cloudflare, AWS, Azure and other major players are now becoming too significant to dismiss as isolated events. They actually highlight weaknesses in how complex cloud ecosystems are built and maintained, as well as the limits of automation when oversight relies on assumptions that may not be tested until something breaks at scale. Prince’s emphasis on transparency is helpful, but it also raises questions about how often configuration-driven risks are being overlooked across the industry and how reliably safeguards are enforced inside systems that evolve at speed.

Stakeholders from regulators to hosting providers will surely be watching how quickly Cloudflare implements its promised changes and how effective those measures prove to be. Investors and enterprise customers may also be looking for signs that the underlying engineering and operational processes are becoming more robust, not just patched in response to this incident. Prince’s framing makes clear that this was not a compromise of Cloudflare’s security perimeter, but the reliance on a single configuration mechanism that could bring down so many services is likely to remain a point of scrutiny.

The most immediate implication for customers is probably a renewed focus on the practical realities of dependency. Even organisations that never interact with Cloudflare directly were affected, which shows how embedded its infrastructure is in the modern web. UK businesses, in particular, may need to reassess where their digital supply chains concentrate risk and how disruption at a provider they do not contract with can still reach them. The outage serves as a reminder that resilience is not just about defending against attackers but preparing for internal faults in external systems that sit far beyond a company’s control.

Security Stop-Press: WhatsApp Flaw Exposed Billions of Phone Numbers

Researchers have uncovered a privacy weakness in WhatsApp that allowed the confirmation of 3.5 billion active accounts simply by checking phone numbers.

A team from the University of Vienna and SBA Research found that WhatsApp’s contact discovery system could be queried at high speed, letting them generate and test 63 billion numbers and confirm more than 100 million accounts per hour. When a number was recognised, the app returned publicly visible details such as profile photos, “about” text, and timestamps, with 57 per cent of users showing a profile picture and nearly 30 per cent displaying an “about” message.

Meta said only public information was accessible, no message content was exposed, and the researchers deleted all data after the study. It added that new rate-limiting and anti-scraping protections are now in place and that there is no evidence of malicious exploitation.
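
As a generic illustration of the kind of server-side rate limiting described (not WhatsApp’s actual implementation), a simple token bucket caps how quickly any single client can run contact-discovery lookups:

```python
# Generic token-bucket sketch of per-client rate limiting, purely illustrative.
# Capping lookups per client makes large-scale enumeration of phone numbers
# impractically slow, which is the point of the protections described above.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec          # tokens refilled per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request rejected; bulk scraping is throttled

# Example: allow roughly 5 lookups per second per client, with a burst of 20.
bucket = TokenBucket(rate_per_sec=5, burst=20)
# if bucket.allow(): perform the contact-discovery lookup, else throttle.
```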

Security experts warned that the incident shows how phone numbers remain a weak form of identity, making large-scale scraping and profiling possible. They stressed that metadata, even without message content, can still be valuable to scammers or organised cyber groups.

Businesses can reduce risk by limiting the personal information staff make visible on messaging apps, reviewing privacy settings, and ensuring employees understand how scraped contact details may be used in targeted attacks.

Sustainability-In-Tech : Powering AI Data Centres Using Hot Rocks

Exowatt, a Sam Altman-backed energy startup, plans to revolutionise AI data centre energy consumption by harnessing the power of concentrated solar energy stored in high-temperature hot rocks to provide round-the-clock, dispatchable electricity.

A Viable Alternative to Traditional Grid-Based Power?

Co-founded by Hannan Happi, who has a background in energy innovation and technology development, Exowatt aims to address the AI industry’s growing demand for sustainable and reliable power. With this in mind, the company’s flagship product, the Exowatt P3 system, is designed to solve the solar energy industry’s most significant challenge, i.e., providing consistent, 24-hour electricity. By capturing solar energy, storing it as heat, and converting it into electricity when required, Exowatt aims to deliver a viable alternative to traditional grid-based power, which is not always reliable or sustainable for energy-hungry industries like AI.

How Exowatt’s P3 System Works

The Exowatt P3 is a modular system that functions differently from conventional solar panels. Instead of converting sunlight directly into electricity, the system uses concentrated solar power (CSP) technology, a method that has been around for decades but has yet to achieve widespread commercial success.

Heats A Brick And Blows Air Over It

As the company says on its website, “Exowatt delivers power on demand by capturing and storing solar energy in the form of high-temperature heat and converting it into dispatchable electricity as needed.”

In order to do this, the system uses Fresnel lenses (a type of light-focusing lens) to concentrate sunlight into a tight beam. This beam heats a special brick inside a box, which serves as a thermal battery. A fan blows air over the brick, carrying the heat to a Stirling engine, a heat engine that converts thermal energy into mechanical energy, which is then used to generate electricity. The P3’s thermal storage capacity allows it to provide dispatchable power, meaning it can supply electricity whenever needed, even when the sun isn’t shining. This addresses the intermittent nature of traditional solar energy, which can only generate power when there is direct sunlight.
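
As a rough, back-of-the-envelope illustration of the thermal-battery idea (all figures are assumptions for the example, not Exowatt’s published specifications), the standard sensible-heat formula Q = m × c × ΔT gives a sense of how much energy a hot brick can hold:

```python
# Back-of-the-envelope sketch using the sensible-heat formula Q = m * c * deltaT.
# All numbers below are illustrative assumptions, not Exowatt's figures.
mass_kg = 2000.0          # assumed mass of the storage brick
specific_heat = 1000.0    # J/(kg*K), roughly typical for firebrick-like ceramics
delta_t = 800.0           # assumed temperature swing in kelvin

stored_joules = mass_kg * specific_heat * delta_t
stored_kwh = stored_joules / 3.6e6   # 1 kWh = 3.6 million joules

# A Stirling engine then converts a fraction of this heat into electricity.
assumed_engine_efficiency = 0.30     # illustrative only
electric_kwh = stored_kwh * assumed_engine_efficiency

print(f"Stored heat: {stored_kwh:.0f} kWh thermal")
print(f"Deliverable electricity (at {assumed_engine_efficiency:.0%}): {electric_kwh:.0f} kWh")
```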

Can Store Heat For 5 Days

The P3 units can store heat for up to five days, ensuring continuous operation. Also, the units are modular, meaning they can be scaled depending on the energy requirements of the user. Exowatt has designed the system to be easy to deploy, requiring minimal maintenance and a small physical footprint compared to other renewable energy solutions.

Why It Matters for the AI Industry

The AI sector is growing at an unprecedented rate, with increasing energy demands driven by the need to train complex models and power massive data centres. For example, according to estimates, data centre energy consumption will increase by 150 per cent by 2030, with AI models expected to be one of the largest contributors to this demand. Traditional energy grids, however, are not equipped to handle this surge in consumption, especially as the need for clean and reliable energy grows.

Exowatt’s approach could, therefore, significantly reduce reliance on fossil-fuel-powered backup generators, which many data centres currently use to ensure uptime during power shortages. These backup systems, often powered by gas, are not only expensive but contribute to carbon emissions, directly contradicting the industry’s shift towards more sustainable practices.

The Exowatt P3 promises a cleaner, more sustainable alternative by providing a reliable power source that does not depend on the grid. This is particularly important for companies building data centres in remote areas, where access to stable grid power may be limited or non-existent. By positioning itself as a dispatchable energy solution, Exowatt gives AI companies a way to meet their energy needs while maintaining their commitment to sustainability.

What Makes Exowatt So Different?

Unlike traditional solar power systems, which require battery storage to hold electricity until it is needed, Exowatt’s thermal storage system offers a number of advantages. For example, the P3 system’s reliance on heat storage rather than electric battery storage avoids many of the issues associated with lithium-ion batteries, such as their reliance on rare-earth minerals and the environmental impact of battery disposal, while also sidestepping the fact that improvements in battery technology have not kept pace with the rapid cost reductions in solar panel production.

Exowatt’s system is designed to work in sunnier regions where traditional solar systems are most effective. Happi notes that Exowatt’s P3 units can be deployed near new data centre developments, often located in sunny areas, thus overcoming grid limitations. The modular nature of the system means that power capacity can be increased simply by adding more P3 units, making it a scalable solution.

Pricing and Availability

Exowatt appears to be aggressively scaling production, having raised a total of $140 million in funding to date, including a recent $50 million extension to its Series A round. The company has set a target price of $0.01 per kWh, which would position its energy cost below current prices for many types of renewable power. To achieve this, Exowatt hopes to manufacture 1 million units per year, which would bring production costs down and make it competitive with other forms of renewable energy.

While the technology is still in its early stages, Exowatt has already secured a backlog of 90 GWh in demand, with customers in the AI data centre and energy developer sectors. As production ramps up, Exowatt plans to roll out the P3 system to large-scale data centre projects in regions that are sun-rich, making it an ideal fit for AI companies seeking reliable, sustainable power solutions.

Other Companies in the Space

It should be noted here that Exowatt is not the only company exploring the potential of thermal storage and concentrated solar power. Several other firms are pursuing similar solutions, though each has its own approach and focus. These include:

– Vast Energy, which is developing modular concentrated solar thermal power systems designed to deliver clean, dispatchable energy for utility-scale and industrial applications. Their CSP v3.0 technology captures the sun’s energy and stores it as heat, allowing for efficient and reliable power delivery when needed, similar to Exowatt’s P3 system.

– Heliogen, which focuses on solar thermal technologies and aims to replace fossil fuels in industrial applications. Their systems use concentrated solar power to generate high-temperature heat, which can be used to produce electricity or replace gas in manufacturing processes.

– SolarReserve and eSolar, which are earlier players in the CSP field, though their commercial activities have slowed in recent years. These companies have contributed to the development of solar thermal technology, but they are less active or have shifted their focus due to challenges with scalability and cost.

While Exowatt’s approach is similar to these companies, its focus on modular, scalable systems tailored for AI and high-density computing environments could set it apart, particularly if it can prove its technology is both cost-effective and adaptable to different locations and energy demands.

Broader Implications and Challenges

Exowatt’s technology looks as though it has the potential to disrupt the renewable energy and data centre industries, offering a way to tackle AI’s increasing energy demands sustainably. For example, for data centre operators, the system presents an opportunity to reduce their carbon footprint while ensuring that power is always available, even during peak demand periods or at night.

However, Exowatt faces some stiff competition. Photovoltaic solar panels and lithium-ion batteries have come down in price rapidly in recent years, making them more attractive options for many companies. Also, concentrated solar power projects have faced challenges in the past due to high upfront costs and the need for specific geographical conditions. Exowatt will need to prove that its system can scale effectively and remain cost-competitive as production increases.

One of the key challenges for Exowatt’s system is land use. For example, while the P3’s efficiency is comparable to traditional photovoltaic solar panels, the system requires a significant amount of land to scale up production, particularly in regions with less sunlight. This may limit the system’s appeal in areas where land is scarce or where sunlight is insufficient. The large land footprint required to deploy large numbers of P3 units could also pose logistical challenges, especially in urban areas where space is at a premium. These factors are likely to be crucial for Exowatt to overcome if it aims to scale effectively and meet the growing demand for sustainable AI infrastructure power.

Looking Ahead

As Exowatt continues to scale its operations, it could well become a leading player in the transition to sustainable energy for AI data centres. For example, with major backers like Andreessen Horowitz and Sam Altman, the company has the resources to expand rapidly, and its innovative approach to solar energy storage could set a new benchmark for the energy demands of AI.

However, its success looks likely to depend on whether it can overcome the inherent challenges of large-scale deployment and prove that its technology can compete with existing energy solutions. If Exowatt can deliver on its promises, it could reshape the way data centres, and indeed, entire industries, think about their energy needs in the age of artificial intelligence.

What Does This Mean For Your Organisation?

Exowatt’s P3 system seems to offer a compelling vision for how AI data centres can meet their energy needs sustainably, addressing the increasing demand for 24/7 power in an industry heavily reliant on high-performance computing. The system’s ability to store solar energy as heat and convert it into dispatchable electricity sets it apart from traditional solar and battery solutions, offering a reliable and cleaner alternative to fossil-fuel-powered backup systems.

However, while the P3 system presents a promising solution for reducing data centre emissions, its success could hinge on overcoming several challenges. Scaling production efficiently and managing the land footprint required for deployment are two critical obstacles. Although Exowatt has the potential to deliver energy at an exceptionally low cost, competing technologies, such as photovoltaic solar and lithium-ion batteries, have quickly become more cost-competitive. Exowatt will need to demonstrate that its system can meet these challenges, particularly in less sunny regions where land availability and sunlight are limited.

Looking to the future, Exowatt’s modular, scalable approach could make it an attractive option for AI companies looking to ensure reliable power while maintaining sustainability goals. For UK businesses, particularly those involved in AI, data centres, and energy-intensive industries, the success of Exowatt could signal a new era of energy independence and sustainability. If Exowatt can continue to scale and prove its technology’s viability, it could reshape the energy landscape for data centres globally, offering UK companies a reliable and affordable path to meet the growing demands of the digital age.

Despite the hurdles, Exowatt’s ambition and innovative approach may be precisely what’s needed to meet the unique energy challenges of the AI sector, paving the way for a more sustainable and resilient energy future.

Video Update : Collaborate (Directly) In TEAMS Chats With Copilot

You can now take your TEAMS chats to the next level by inviting Microsoft Copilot to the chat. That puts all the power of the AI at your fingertips without having to leave a TEAMS chat, letting you collaborate like never before … fascinating stuff!

[Note – To watch this video without glitches/interruptions, it may be best to download it first]

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
