Sustainability-In-Tech: Old Smartphones Find A Second Life As Tiny Data Centres

Researchers have developed a low-cost way to turn discarded smartphones into tiny data centres that can support real-world environmental and civic projects.

Why Old Smartphones Still Matter

More than 1.2 billion smartphones are produced every year, yet most are replaced within two or three years even when they remain fully functional. The environmental cost of this rapid cycle is significant. Smartphone manufacturing is energy intensive, relies on mined materials such as cobalt and lithium, and contributes to the 62 billion kilograms of global e-waste recorded in 2022. Only a small proportion is formally recycled, so millions of phones end up forgotten in drawers or sent to landfill.

Consequently, many sustainability groups have long argued that extending device lifespans is one of the most effective ways to cut electronic waste, since the majority of a smartphone’s carbon footprint is created during manufacturing. Until recently, extending that lifespan usually meant refurbishment or resale. The latest research from the University of Tartu (in Estonia) shows that a third option is now possible, one that reuses phones in a completely different role.

The Idea Behind Tiny Data Centres

The new approach comes from a team of European researchers whose study in IEEE Pervasive Computing explains how old smartphones can be reprogrammed and linked together as miniature data centres. The aim is not to compete with traditional cloud computing, but to show that many small and local tasks do not require new hardware at all.

The team, led by researchers including Huber Flores, Ulrich Norbisrath and Zhigang Yin, began by taking phones that were already considered e-waste. The devices were stripped of batteries and connected to external power supplies to avoid chemical leakage risks that can arise when batteries degrade. This small step is important for long term deployments, since lithium-ion batteries can swell or leak when left unused for years.

Four phones were then connected together, placed inside a 3D-printed holder, and configured so that the system acted as a single working prototype. According to the researchers, this entire process cost around €8 per device, making it far cheaper than installing new embedded computing hardware for similar tasks. As Flores explains, “Innovation often begins not with something new, but with a new way of thinking about the old, re-imagining its role in shaping the future.”

Putting Repurposed Phones To Work

The first major test took place underwater. The tiny data centre was used to support marine life monitoring by processing video and sensor data directly below the surface. This type of survey work usually depends on scuba divers recording footage and bringing it back for analysis. The prototype allowed that analysis to happen automatically on site, reducing labour, shortening processing time, and avoiding the need to send large data files across networks.

Edge Computing

This approach is known as edge computing, where data is processed close to the source rather than in distant data centres. Repurposed smartphones are well suited to this because they are built to handle local storage, low-power processing and real-time tasks. It means they can support use cases where traditional servers would be excessive or impractical.
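
To make that pattern concrete, the sketch below is a simplified Python illustration of the edge-computing loop described above, not code from the Tartu project: raw readings are summarised on the device itself so that only a few bytes, rather than raw video or sensor files, ever need to leave the site. The sensor function and field names are hypothetical stand-ins.

```python
import json
import random
import statistics
import time

def read_sensor():
    # Stand-in for a real local data source (camera frame analysis,
    # hydrophone level, passenger count, etc.). Here we simulate a reading.
    return random.uniform(0.0, 1.0)

def summarise(readings):
    # Reduce a batch of raw readings to a compact summary on the device,
    # so only a small JSON payload (not the raw data) needs to leave the site.
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 3),
        "max": round(max(readings), 3),
        "timestamp": int(time.time()),
    }

def run_edge_node(batch_size=10, interval_seconds=0.1):
    # Main loop for one repurposed phone acting as an edge node.
    readings = []
    for _ in range(batch_size):
        readings.append(read_sensor())
        time.sleep(interval_seconds)
    summary = summarise(readings)
    # In a real deployment the summary might be stored locally or sent over
    # a low-bandwidth link; here it is simply printed.
    print(json.dumps(summary))

if __name__ == "__main__":
    run_edge_node()
```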

Also On Land

It should be noted that there are some clear examples on land too. For example, the Tartu team highlights how a unit placed at a bus stop could gather anonymised information about passenger numbers, waiting times and traffic levels. Transport agencies could use that real time data to improve timetables or plan new routes. It is the same principle behind many smart city projects, but achieved with hardware that already exists.

The researchers also point towards environmental monitoring, urban air quality measurements, small scale agricultural sensing, and certain machine learning applications where data volumes remain modest. These tasks do not demand the full power of modern workstations, yet they still require reliable processing in locations where installing new equipment is expensive or unnecessary.

A Sustainability Case With Wider Implications

The argument for tiny data centres is not only technical, but is also rooted in sustainability thinking.

For example, smartphone production is responsible for significant emissions and resource extraction. Therefore, extending the life of older devices makes use of computing power that would otherwise sit idle or be discarded. In a world where global demand for computing continues to rise, repurposing offers a practical way to satisfy some of that demand without adding new manufacturing emissions.

Ulrich Norbisrath, one of the researchers involved, summarises this perspective clearly: “Sustainability is not just about preserving the future, it is about reimagining the present, where yesterday’s devices become tomorrow’s opportunities.”

The project reflects a broader trend within the digital sustainability community, where attention is turning towards resource efficiency and circularity. From longer software support periods to designs that support repair and reuse, the goal is to reduce reliance on a constant flow of new devices. Repurposing smartphones as micro data centres adds another practical option to that toolkit.

Practical Challenges Still To Address

Although this sounds like real progress, the researchers are realistic about the obstacles. For example, one major hurdle is the wide variety of smartphone models. Chipsets, memory sizes and firmware differ significantly across brands and generations, making it difficult to build a universal method for bypassing hardware restrictions. The study calls for the creation of tools that are hardware agnostic so that more people can repurpose devices without advanced technical knowledge.

Energy supply is another issue. Although the devices draw little power individually, long term deployments in remote locations require stable energy sources and protection from moisture, heat and physical damage. This makes the design of the 3D printed casing and supporting hardware an important part of the overall system.

Security also needs careful thought. For example, smartphones were never designed to operate as unattended networked devices, so any repurposed system must have secure software, strong update controls and physical safeguards. Without this, there is a risk that poorly maintained clusters could introduce vulnerabilities.

The team stresses that their prototype is really a proof of concept, i.e., it shows what is feasible today and identifies where future development is most needed, including standardised tools, easier configuration processes and larger scale trials.

What Does This Mean For Your Organisation?

UK organisations are under growing pressure to reduce waste, cut emissions and make better use of the resources they already hold. Repurposed smartphones could present a practical way to help support those goals, especially for businesses that cycle through large numbers of devices each year. Treating retired phones as reusable computing assets rather than waste creates immediate value and avoids the environmental cost of manufacturing yet another round of hardware. It also offers a route to experiment with local data processing without committing to major capital spending.

For many firms, the most relevant opportunity lies in small scale, on site tasks where data needs to be collected, processed and acted on quickly. Old smartphones can support building management, environmental monitoring, simple analytics and other operational jobs that do not require full server deployments. This keeps data close to the source, avoids unnecessary cloud usage and aligns with wider efforts to improve energy efficiency. The approach also speaks directly to the sustainability strategies now expected by regulators, investors and customers who want evidence that companies are reducing electronic waste in credible ways.

There is a clear benefit for local authorities, utilities and public services too. Tightly constrained budgets mean that projects often stall for lack of affordable hardware. Repurposed phones give these stakeholders a way to test new ideas at low cost, from monitoring passenger numbers to gathering air quality data. This helps build evidence, speed up innovation and guide investment decisions without locking into expensive platforms from day one.

Technology suppliers and service partners may also find value in developing tools that make repurposing easier. Businesses increasingly want flexible, lower carbon digital solutions and the research points towards a future market for hardware agnostic software that can unify mixed phone models into consistent micro data centres. For the UK’s growing sustainability and digital sectors, this represents a fresh area of opportunity.

The wider message for all stakeholders is that existing technology still has untapped potential. Repurposing does not replace secure recycling or responsible disposal, but it does extend the useful life of devices that would otherwise remain unused. For UK businesses looking to reduce waste, cut costs and support their environmental commitments, the University of Tartu’s work shows that old smartphones can play a meaningful role in creating a more resource efficient digital environment.

Tech Tip: Create a Search Folder in Outlook

Did you know you can save a custom query, like “all emails from my boss flagged as high importance”, as a folder that stays up‑to‑date automatically? This lets you jump straight to the results without re‑running the search each time.

How to create a Search Folder

1. In Outlook (desktop or web), switch to Folder view.

2. Right‑click Search Folders (or click the “New Search Folder” button in the ribbon) and choose New Search Folder.

3. Pick a template (e.g., “Messages from specific people”) or select Custom Search Folder > Create a custom Search Folder.

4. Click Choose to set the criteria:
– From: type your boss’s name or email address
– Importance: select High
– Add any other filters you need (date range, subject keywords, etc.).

5. Give the folder a clear name (e.g., “High‑Priority from Boss”) and choose where to save it (usually under Search Folders).

6. Click OK.

With this built‑in feature in Outlook 365 (desktop and web), Outlook creates the folder and automatically populates it with matching messages. Whenever a new email meets the criteria, it appears in the folder instantly, no manual refresh required.

Why it helps: No more repeating the same search. Click the folder and the latest results are right there, saving you time and keeping critical emails front‑and‑centre.

Pichai Warns Of AI Bubble

Google CEO Sundar Pichai has warned that no company would escape the impact of an AI bubble bursting, just as concerns about unsustainable valuations are resurfacing and Nvidia’s long-running rally shows signs of slowing.

Pichai Raises The Alarm

In a recent BBC interview, Pichai described the current phase of AI investment as an “extraordinary moment”, while stressing that there are clear “elements of irrationality” in the rush of spending, product launches and trillion-dollar infrastructure plans circulating across the industry. He compared today’s mood to the late 1990s, when major internet stocks soared before falling sharply during the dotcom crash.

Alphabet’s rapid valuation rise has brought these questions into sharper focus. For example, the company’s market value has roughly doubled over the past seven months, reaching around $3.5 trillion, as investors gained confidence in its ability to compete with OpenAI, Microsoft and others in advanced models and AI chips. In the recent interview, Pichai acknowledged that this momentum reflects real progress, and also made clear that such rapid gains sit in a wider market that may not remain stable.

He said that no company would be “immune” if the current enthusiasm fades or if investments begin to fall out of sync with realistic returns. His emphasis was not on predicting a crash but on pointing out that corrections tend to hit the entire sector, including its strongest players, when expectations have been set too high for too long.

Spending Rises While The Questions Grow

One of the main drivers of concern appears to be the scale of the investment commitments being made by major AI developers and infrastructure providers. OpenAI, for example, has agreed more than one trillion dollars in long-term cloud and data centre deals, despite only generating a fraction of that in annual revenues. These deals reflect confidence in future demand for fully integrated AI services, yet they also raise difficult questions about how quickly such spending can turn into sustainable returns.

Analysts have repeatedly warned that this level of capital commitment comes with risks similar to those seen in earlier periods of technological exuberance. Also, large commitments from private credit funds, sovereign wealth investors and major cloud providers add complexity to the financial picture. In fact, some analysts see evidence that investors are now beginning to differentiate between firms with strong cash flows and those whose valuations depend more heavily on expectations than proven performance.

Global financial institutions have reinforced this point and commentary from central banks and the finance sector has identified AI and its surrounding infrastructure as a potential source of volatility. For example, the Bank of England has highlighted the possibility of market overvaluation, while the International Monetary Fund has pointed to the risk that optimism may be running ahead of evidence in some parts of the ecosystem.

Nvidia’s Rally Slows As Investors Pause

Nvidia has become the most visible beneficiary of the AI boom, with demand for its specialist processors powering the latest generation of large language models and generative AI systems. The company recently became the first in history to pass the five trillion dollar (£3.8 trillion) valuation mark, fuelled by more than one thousand per cent growth in its share price over three years.

Nvidia’s latest quarterly results once again exceeded expectations, with strong data centre revenue and healthy margins reassuring investors that AI projects remain a major driver of orders. Early market reactions were positive, with chipmakers and AI-linked shares rising sharply.

Mood Shift

However, the mood shifted within hours. US markets pulled back, and the semiconductor index fell after investors reassessed whether the current pace of AI spending is sustainable. Nvidia’s own share price, which had surged earlier in the session, drifted lower as traders questioned how long hyperscale cloud providers and large AI developers can continue expanding their data centre capacity at the same rate.

It seems this pattern is now becoming familiar. Good results spark rallies across global markets before concerns about valuations, financing and future spending slow those gains. For many traders, this suggests the market is entering a more cautious phase where confidence remains high but volatility is increasing.

What The Smart Money Sees Happening

It’s worth noting here that institutional investors are not all united in their view on whether the sector is overvalued. For example, many point out that the largest AI companies generate substantial profits and have strong balance sheets. This is an important difference from the late 1990s, when highly speculative firms with weak finances accounted for much of the market. Today’s biggest players hold large amounts of cash and have resilient revenue bases across cloud, advertising, hardware and enterprise services.

Others remain quite wary of the pace of spending across the sector. For example, JPMorgan’s chief executive, Jamie Dimon, has stated publicly that some of the investment flooding into AI will be lost, even if the technology transforms the economy over the longer term. That view is also shared by several fund managers who argue that the largest firms may be sound but that the overall ecosystem contains pockets of extreme risk, including private market deals, lightly tested start-ups and new financial structures arranged around data centre expansion.

Energy Demands Adding Pressure

Pichai has tied these financial questions directly to the physical cost of the AI boom. Data centre energy use is rising rapidly and forecasts suggest that US energy consumption from these facilities could triple by the end of the decade. Global projections indicate that AI could consume as much electricity as a major industrial nation by 2030.

Pichai told the BBC that this creates a material challenge. Alphabet’s own climate targets have already experienced slippage because of the power required for AI training and deployment, though the company maintains it can still reach net zero by 2030. He warned that economies which do not scale their energy infrastructure quickly enough could experience constraints that affect productivity across all sectors.

It seems the same issue is worrying investors as grid delays, rising energy prices and pressure on cooling systems all affect the cost and timing of AI infrastructure builds. In fact, several investment banks are now treating energy availability as a central factor in modelling the future growth of AI companies, rather than as a supporting consideration.

Impact On Jobs And Productivity

Beyond markets and infrastructure, Pichai has repeatedly said that AI will change the way people work. His view is that jobs across teaching, medicine, law, finance and many other fields will continue to exist, but those who adopt AI tools will fare better than those who do not. He has also acknowledged that entry-level roles may feel the greatest pressure as businesses automate routine tasks and restructure teams.

These questions sit alongside continuing debate among economists about whether AI has yet delivered any real sustained productivity gains. Results so far are mixed, with some studies showing improvements in specific roles and others highlighting the difficulty organisations face when introducing new systems and workflows. This uncertainty is now affecting how investors judge long-term returns on AI investment, particularly for companies whose business models depend on fast commercial adoption.

Pichai’s message, therefore, reflects both the promise and the tension that’s at the heart of the current AI landscape. The technology is advancing rapidly and major firms are seeing strong demand but concerns are growing at the same time about valuations, financing conditions, energy constraints and the practical limits of near-term returns.

What Does This Mean For Your Business?

The picture that emerges here is one of genuine progress set against a backdrop of mounting questions. For example, rising valuations, rapid infrastructure buildouts and ambitious spending plans show that confidence in AI remains strong, but Pichai’s warning highlights how easily momentum can outpace reality when expectations run ahead of proven returns. It seems investors are beginning to judge companies more selectively, and the shift from blanket enthusiasm to closer scrutiny suggests that the sector is entering a phase where fundamentals will matter more than hype.

Financial pressures, energy constraints and uneven productivity gains are all adding complexity to the outlook. Companies with resilient cash flows and diversified revenue now look far better placed to weather volatility than those relying mainly on future growth narratives. This matters for UK businesses because many depend on stable cloud pricing, predictable investment cycles and reliable access to AI tools. Any correction in global markets could influence technology budgets, shift supplier strategies and affect the availability of credit for large digital projects. The UK’s position as an emerging AI hub also means that sharp movements in global sentiment could influence investment flows into domestic research, infrastructure and skills programmes.

Stakeholders across the wider ecosystem may need to plan for more mixed conditions. Cloud providers, chipmakers, start-ups and enterprise buyers are all exposed in different ways to questions about energy availability, margin pressure and the timing of real economic returns. Pichai’s comments about the need for stronger energy infrastructure highlight the fact that the physical foundations of the AI industry are now as important as the models themselves. Governments, regulators and energy providers will play a central role in determining how smoothly AI can scale over the next decade.

The broader message here is that AI remains on a long upward trajectory, but the path may not be as smooth or as linear as recent market gains have suggested. The leading companies appear confident that demand will stay strong, but the mixed reaction in global markets shows that investors are no longer treating the sector as risk free. For organisations deciding how to approach AI adoption and investment, the coming period is likely to reward careful planning, measured expectations and close attention to the economic and operational factors that sit behind the headlines.

Magnetic Tape Proves Its Value In The AI Storage Era

Magnetic tape is experiencing a resurgence in 2025 as AI-driven data growth, cyber security pressures and new material innovations push organisations back towards a technology first introduced more than seventy years ago.

What Magnetic Tape Is And How It Works

Magnetic tape stores data on a long, narrow strip of plastic film coated with a magnetic layer. The tape is wound inside a cartridge and passed across a read and write head inside the drive. Since the tape must move sequentially, it is not designed for fast, random access in the way a hard disk or SSD is. It is instead designed for efficient, bulk writing and long-term storage.

Tape libraries, used by larger organisations, combine hundreds or thousands of cartridges in robotic cabinets that load tapes automatically. These libraries act as vast, energy-efficient archives for data that needs to be kept but not constantly accessed. Typical use cases include regulatory records, scientific and medical datasets, media archives, analytics data, CCTV footage and full system backups. Tape has been used for these roles since the 1950s and, despite the emergence of disks, flash and cloud storage, it has never disappeared from enterprise environments.

Why Tape Remains In Everyday Workloads

Several characteristics have kept tape relevant. For example, it offers the lowest cost per terabyte of any mainstream storage medium, making it attractive for multi-petabyte archives. Also, cartridges can remain readable for decades when stored correctly, making them suitable for compliance regimes and research datasets that must be preserved far beyond the lifespan of typical disk systems.

Tape also has exceptionally low error rates. For example, modern Linear Tape-Open (LTO) technology uses powerful error-correction algorithms that protect data as it is written. LTO has been the dominant open tape standard since the late 1990s, evolving through successive generations with higher capacities, stronger encryption and support for features such as write-once, read-many modes.

Alternatives

The main alternatives for large-scale storage are traditional disks, flash arrays and cloud archives. Disks and SSDs provide fast access for operational workloads, while cloud storage offers virtually limitless scale without on-premises hardware. However, cost becomes a challenge once organisations begin keeping years of unstructured data. Tape avoids data-egress fees and ongoing energy costs, since cartridges consume no power while they sit idle in a library.

Why Tape Is Back In Demand

It seems that demand for tape is currently being driven by the rapid rise of unstructured data. For example, organisations now produce and collect logs, video, images, sensor feeds and documents at a scale that was unusual a decade ago. The emergence of generative AI has turned this unstructured data into a strategic resource. Many enterprises now view their archives as training material for future AI models or as datasets that can be mined for insights.

Record

In 2023, the LTO consortium reported that 152.9 exabytes of compressed tape capacity were shipped worldwide, a record at the time. Shipment volumes broke that record again in mid-2024 with 176.5 exabytes shipped. In fact, this was the fourth consecutive year of growth. Vendors attribute this momentum to AI, compliance requirements and the rising cost of keeping large datasets online.

Economically Viable Medium

Analysts continue to describe tape as the only economically viable medium for archives that will grow into the multi-petabyte range. The LTO consortium, the group that develops and oversees the LTO tape standard, has previously highlighted potential total cost of ownership reductions of up to 86 per cent when compared with equivalent disk-based solutions across a ten-year period, with substantial savings also reported when compared with cloud archives over the same timeframe.

The New 40 TB LTO-10 Cartridge

One of the most significant developments in the tape market this year is the release of the new 40 TB LTO-10 cartridge specification. This represents a major capacity boost over the existing 30 TB LTO-10 cartridges, and crucially, the increase does not require new tape drives.

The capacity uplift is enabled by a new base film material known as Aramid. This material allows manufacturers to produce thinner and smoother tape, enabling a longer tape length within the same cartridge housing. The result is an additional 10 TB of native capacity, offering organisations a way to store larger datasets without expanding their library footprint.

HPE’s Stephen Bacon described the new cartridge as a response to AI-scale storage demands, noting that “AI has turned archives into strategic assets” and highlighting the role of tape in consolidating petabytes, improving cyber resilience through offline air-gapping and keeping long-term retention affordable.

Organisations will soon be able to choose between 30 TB and 40 TB LTO-10 media depending on their cost and density needs, giving enterprise teams more flexibility in how they scale.
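
As a rough illustration of what that choice means at archive scale, the back-of-the-envelope Python calculation below compares how many cartridges a hypothetical 5 PB archive would need on 30 TB versus 40 TB LTO-10 media, using the native (uncompressed) capacities quoted above; the 5 PB figure is an example, not taken from the article’s sources.

```python
import math

def cartridges_needed(archive_tb: float, cartridge_tb: float) -> int:
    # Number of cartridges required, rounding up to whole cartridges.
    return math.ceil(archive_tb / cartridge_tb)

archive_tb = 5_000  # example 5 PB archive, stored native (uncompressed)

for capacity in (30, 40):  # LTO-10 native capacities quoted above
    count = cartridges_needed(archive_tb, capacity)
    print(f"{capacity} TB cartridges: {count} needed")

# Expected output:
# 30 TB cartridges: 167 needed
# 40 TB cartridges: 125 needed
```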

How Tape Supports AI-Scale Archives

AI workloads require very large training datasets, often consisting of structured and unstructured data accumulated over many years. While a small proportion of this data must be kept on fast disk or cloud storage for active use, much of it can sit on a colder, cheaper tier until it is needed again. Tape seems to fill this role effectively.

When a dataset is required for training a new model or running a new analysis, organisations can restore only the relevant portion to a disk-based environment. This keeps primary systems fast while allowing the business to retain historic data without excessive cost. Examples include:

– Media companies storing decades of raw video footage for future AI processing.

– Healthcare providers archiving medical imagery for research or diagnostics.

– Research institutions holding large scientific datasets that may later be used to train AI models.

– Financial firms retaining historical transactions that can be analysed for fraud models.

Tape and Sustainability

The sustainability angle is also relevant here. For example, tape consumes no energy when cartridges are idle, which is increasingly important as data retention requirements grow faster than most organisations’ environmental budgets.

Why Cyber Security Is Driving Tape’s Revival

Ransomware has fundamentally changed the way enterprises think about backup. For example, when attackers can encrypt or delete connected storage, offline copies become critical. Tape provides a kind of physical air gap, as a tape cartridge removed from a library cannot be reached across the network.

Many organisations now follow the 3-2-1 or 3-2-1-1 backup strategy: three copies of the data, on two different types of media, with one copy off-site and, in the extended 3-2-1-1 version, one copy held offline. Tape remains the simplest and most established way to provide that offline copy. It is also highly portable, meaning offline copies can easily be stored off-site, which provides protection against physical disasters.
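
As a simple illustration of how such a policy can be checked, the sketch below counts copies, media types, off-site copies and offline copies in a made-up backup inventory and reports whether the 3-2-1-1 rule is met; the inventory format and field names are illustrative, not taken from any particular backup tool.

```python
# Each entry describes one copy of a dataset; the field names are illustrative.
backup_copies = [
    {"medium": "disk",  "offsite": False, "offline": False},  # primary copy
    {"medium": "cloud", "offsite": True,  "offline": False},  # cloud replica
    {"medium": "tape",  "offsite": True,  "offline": True},   # cartridge in storage
]

def meets_3_2_1_1(copies) -> bool:
    total = len(copies)                               # 3: at least three copies
    media = len({c["medium"] for c in copies})        # 2: two different media types
    offsite = sum(1 for c in copies if c["offsite"])  # 1: at least one copy off-site
    offline = sum(1 for c in copies if c["offline"])  # 1: at least one copy offline
    return total >= 3 and media >= 2 and offsite >= 1 and offline >= 1

print("3-2-1-1 satisfied:", meets_3_2_1_1(backup_copies))  # True for the example above
```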

Some organisations also use write-once, read-many tape media for critical records, preventing accidental or malicious changes and strengthening their overall cyber resilience.

The Roadmap Ahead

The LTO consortium recently updated its roadmap and confirmed plans for LTO-11 through to LTO-14. Native capacities will continue to rise, with the roadmap peaking at a projected 913 TB for LTO-14. The revised roadmap places more emphasis on achievable density gains, reliability and cost efficiency, aligning future products with enterprise demand for high-capacity, long-lived archives suited to AI and analytics workloads.

The roadmap also seeks to ensure that tape libraries can continue scaling into the exabyte range, giving organisations confidence that their archive strategy will remain technically and economically viable over the next decade.

Challenges, Objections And Criticisms

Perhaps not surprisingly, tape does still face some criticism. The biggest concern is access speed. For example, because tape is sequential, retrieving a specific file can be pretty slow, especially when the data is buried deep within a long cartridge. This is why tape is rarely used for operational workloads where rapid, repeated access is required.

There is also the practical side to consider. For example, tape libraries require careful handling, environmental controls and periodic testing. Skills are another issue. Many younger IT teams have grown up working only with cloud and flash systems, leaving fewer staff familiar with tape management. Migrating data between LTO generations can be time-consuming, and organisations must plan for these cycles to avoid being left with unsupported formats.

Also, some businesses prefer cloud archives because they offer global access, integration with cloud analytics tools and managed durability. Others simply feel tape requires more up-front investment than cloud subscriptions or deep-archive tiers. Perception also plays a part. Tape is sometimes viewed as outdated, even when the economics point strongly in its favour.

Even so, most analysts note that modern storage strategies are multi-tiered. Tape does not replace disk, flash or cloud; it complements them. Its role is to keep very large datasets safe, affordable and durable while faster environments deal with day-to-day workloads. As organisations continue to accumulate huge troves of data for AI and analytics, that role appears to be becoming more important, not less.

What Does This Mean For Your Business?

Tape’s renewed momentum shows that long-term storage decisions are changing as organisations weigh cost, risk and the growing importance of historical data. AI is accelerating that shift because archives that were once seen as routine compliance obligations are now being treated as potential training material and sources of future insight. This gives tape a clearer strategic role than it has had for many years. It offers a stable way to retain data at scale without placing pressure on budgets or energy targets, and it helps organisations withstand ransomware incidents by providing offline copies that cannot be tampered with remotely.

UK businesses in particular may find that tape fits naturally into multi-tier storage plans. Many rely heavily on cloud platforms for operational workloads, but long-term retention remains expensive when kept online. Tape provides a way to control those costs while meeting regulatory expectations around record keeping and resilience. It also offers a reliable route to building deeper AI datasets without needing to expand cloud storage indefinitely. These practical benefits explain why more IT teams are revisiting tape not as legacy infrastructure but as an essential pressure valve for growing digital estates.

Vendors and analysts expect this direction to continue as the roadmap evolves and capacities rise. However, the challenges identified earlier remain relevant. For example, access speed is still a limitation and operational expertise is still required, especially where organisations run large libraries across multiple generations. Misconceptions about tape’s relevance also persist, even as the technology improves. The reality is that these barriers tend to matter less once businesses focus on what tape is designed to do rather than what it is not. Long-term archives do not depend on millisecond retrieval; they depend on affordability, longevity and resilience, which are exactly the traits tape continues to offer.

Stakeholders across the wider ecosystem are responding to the same pressures. For example, cloud providers are investing in colder, slower archive tiers, compliance teams are tightening retention requirements, and security teams are prioritising offline backups. All of these shifts align with tape’s strengths. As AI models become larger and more data hungry, and as cyber threats continue to evolve, it seems likely that tape will remain a practical part of the storage mix for many years, especially for organisations that need to store vast volumes of information safely, predictably and at a manageable cost.

Sanctions For “Bulletproof” Hosting Firm

The United States, United Kingdom and Australia have jointly sanctioned Russian web hosting company Media Land and several related firms, alleging that the group provided resilient infrastructure used by ransomware gangs and other cybercriminals.

Coordinated Action Against a Cross Border Threat

The announcements were made on 19 November by the US Treasury, the UK’s Foreign, Commonwealth and Development Office, and Australia’s Department of Foreign Affairs and Trade. All three governments stated that Media Land, headquartered in St Petersburg, played a central role in supporting criminal operations by providing what officials describe as “bulletproof hosting” services that allow malicious activity to continue without interruption.

Sanctions List Published

The sanctions list published by the United States (on the US Treasury website) includes Media Land LLC, its sister company ML Cloud, and the subsidiaries Media Land Technology and Data Center Kirishi. Senior figures linked to the business have also been sanctioned. These include general director Aleksandr Volosovik, who is known online by the alias “Yalishanda”, employee Kirill Zatolokin, who managed customer payments and coordinated with other cyber actors, and associate Yulia Pankova, who is alleged to have assisted with legal issues and financial matters.

UK and Australia Too

The United Kingdom imposed similar measures, adding Media Land, ML.Cloud LLC, Aeza Group LLC and four related individuals to its Russia and cyber sanctions regimes. Australia followed with equivalent steps to align with its partners. Ministers in Canberra emphasised the need to disrupt infrastructure that has been used in attacks on hospitals, schools and businesses.

For Supporting Ransomware Groups

US officials say Media Land’s servers have been used to support well known ransomware groups, including LockBit, BlackSuit and Play. According to the US Treasury, the same infrastructure has also been used in distributed denial of service (DDoS) attacks against US companies and critical infrastructure. In his public statement, US Under Secretary for Terrorism and Financial Intelligence John K Hurley said that bulletproof providers “aid cybercriminals in attacking businesses in the United States and in allied countries”.

How “Bulletproof Hosting” Works

Bulletproof hosting is not a widely known term outside the security industry, yet it seems these services play a significant role in the cybercrime ecosystem. Essentially, they operate in a similar way to conventional hosting or cloud companies but differ in one important respect. They advertise themselves as resistant to takedown efforts, ignore or work around abuse reports, and move customers between servers and companies when law enforcement tries to intervene.

Providers frequently base their operations in jurisdictions where cooperation with Western agencies is limited. They also tend to maintain a network of related firms to shift infrastructure when attention increases. For criminal groups, this reduces the risk of losing the command-and-control servers or websites that are used to coordinate attacks or publish stolen data.

The governments behind the latest sanctions argue that bulletproof services are not passive infrastructure providers but part of a criminal support structure that allows ransomware groups and other threat actors to maintain reliable online operations, despite attempts by victims or investigators to intervene. Without that resilience, attacks would likely be much harder to sustain.

Connections to Ransomware Activity

Ransomware remains one of the most damaging forms of cybercrime affecting organisations across the world. For example, attacks usually involve encrypting or stealing large volumes of data and demanding payment for decryption or for preventing publication. The UK government estimates that cyber attacks cost British businesses about £14.7 billion in 2024, equivalent to around 0.5 per cent of GDP.

In the UK government’s online statement, the UK’s Foreign Secretary Yvette Cooper described Media Land as one of the most significant operators of bulletproof hosting services and said its infrastructure had enabled ransomware attacks against the UK. She noted that “cyber criminals hiding behind Media Land’s services are responsible for ransomware attacks against the UK which pose a pernicious and indiscriminate threat with economic and societal cost”.

She also linked Media Land and related providers to other forms of malicious Russian activity, including disinformation operations supported by Aeza Group. The UK had previously sanctioned the Social Design Agency for its attempts to destabilise Ukraine and undermine democratic systems. Officials say Aeza has provided technical support to that organisation, illustrating how bulletproof hosting can be used to support a wide range of unlawful activity rather than only ransomware.

Maintaining Pressure on Aeza Group

Aeza Group, a Russian bulletproof hosting provider based in St Petersburg, has been under scrutiny for some time. The United States sanctioned Aeza and its leadership in July 2025. According to OFAC, Aeza responded by attempting to rebrand and move its infrastructure to new companies to evade the restrictions. The latest sanctions are intended to close those loopholes.

A UK registered company called Hypercore has been designated on the basis that it acted as a front for Aeza after the initial sanctions were imposed. The United States says the company was used to move IP infrastructure away from the Aeza name. Senior figures at Aeza, including its director Maksim Makarov and associate Ilya Zakirov, have also been sanctioned. Officials say they helped establish new companies and payment methods to disguise Aeza’s ongoing operations.

Serbian company Smart Digital Ideas and Uzbek firm Datavice MCHJ have also been added to the sanctions list. Regulators believe both were used to help Aeza continue operating without being publicly linked to the business.

What Measures Are Being Imposed?

Under US rules, all property and interests in property belonging to the designated entities that are within US jurisdiction must now be frozen. Also, US persons are now prohibited from engaging in transactions with them, unless authorised by a licence, and any company that is owned fifty per cent or more by one or more sanctioned persons is also treated as blocked.

As for the UK, it has imposed asset freezes, travel bans and director disqualification orders against the individuals involved. Aeza Group is also subject to restrictions on internet and trust services, which means UK businesses cannot provide certain technical support or hosting services to it. Australia’s sanctions legislation includes entry bans and significant penalties for those who continue to deal with the designated organisations.

Also, financial institutions and businesses are warned that they could face enforcement action if they continue to transact with any of the sanctioned parties. Regulators say this is essential to prevent sanctions evasion and to ensure that criminal infrastructure cannot continue operating through alternative routes.

New Guidance for Organisations and Critical Infrastructure Operators

Alongside the sanctions, cyber agencies in all three countries have now issued new guidance on how to mitigate risks linked to bulletproof hosting providers. The guidance explains how these providers operate, how they market themselves and why they pose a risk to critical infrastructure operators and other high value targets.

For example, organisations are advised to monitor external hosting used by their systems, review traffic for links to known malicious networks, and prepare for scenarios where attackers may rapidly move their infrastructure to avoid detection or blocking. Agencies have emphasised that defenders need to understand not only the threat actors involved in attacks but also the infrastructure that supports those operations.

For businesses across the UK and allied countries, the message is essentially that tackling ransomware requires action on multiple fronts. The sanctions highlight the growing importance of targeting the support systems that allow cybercriminals to operate, in addition to the groups that directly carry out attacks.

What Does This Mean For Your Business?

The wider picture here seems to point to a general cross border strategic effort to undermine the infrastructure that keeps many of these ransomware operations running. Targeting hosting providers rather than only the criminal groups themselves is a recognition that attackers rely on dependable networks to maintain their activity. Removing or restricting those services is likely to make it much more difficult for them to sustain long running campaigns. It also sends a message that companies which knowingly support malicious activity will face consequences even if they are based outside traditional areas of cooperation.

For UK businesses, the developments highlight how the threat does not start and end with individual ransomware gangs. The services that enable them can be just as important. The new guidance encourages organisations to be more aware of where their systems connect and the types of infrastructure they depend on. This matters for sectors such as finance, health, logistics and manufacturing, where even short disruptions can create operational and financial problems. It also matters for managed service providers and other intermediaries whose networks can be used to reach multiple downstream clients.

There are implications for other stakeholders as well. For example, internet service providers may face increased scrutiny over how they monitor and handle traffic linked to high risk hosting networks. Also, law enforcement agencies will need to continue investing in cross border cooperation as many of these providers operate across multiple jurisdictions. Governments will also need to consider how to balance sanctions with practical disruption of infrastructure, because blocking financial routes is only one part of the challenge.

The situation also highlights that the ransomware landscape is continuing to evolve. Criminal groups have become more adept at shifting infrastructure and creating new companies to avoid disruption. The coordinated action against Media Land and Aeza Group shows that authorities are trying to keep pace with these tactics. How effective this approach becomes will depend on continued cooperation between governments, regulators and industry, along with the willingness to pursue the enablers as actively as the attackers themselves.

Gemini 3 Thought It Was Still 2024

Google’s new Gemini 3 model has made headlines after AI researcher Andrej Karpathy discovered that, when left offline, it was certain the year was still 2024.

How The Discovery Happened

The incident emerged during Karpathy’s early access testing. A day before Gemini 3 was released publicly, Google granted him the chance to try the model and share early impressions. Known for his work at OpenAI, Tesla, and now at Eureka Labs, Karpathy often probes models in unconventional ways to understand how they behave outside the typical benchmark environment.

One of the questions he asked was simple: “What year is it?” Gemini 3 replied confidently that it was 2024. This was expected on the surface because most large language models operate with a fixed training cut-off, but Karpathy reports that he pushed the conversation further by telling the model that the real date was November 2025. This is where things quickly escalated.

Gemini Became Defensive

When Karpathy tried to convince it otherwise, he reports that the model became defensive. He presented news articles, screenshots, and even search-style page extracts showing November 2025 but, instead of accepting the evidence, Gemini 3 insisted that he was attempting to trick it. It claimed that the articles were AI-generated and went as far as identifying what it described as “dead giveaways” that the images and pages were fabricated.

Karpathy later described this behaviour as one of the “most amusing” interactions he had with the system. It was also the moment he realised something important.

The Missing Tool That Triggered The Confusion

Karpathy reports that the breakthrough came when he noticed he had forgotten to enable the model’s Google Search tool. It seems that with that tool switched off, Gemini 3 had no access to the live internet and was, therefore, operating only on what it learned during training, and that training ended in 2024.

Once he turned the tool on, Gemini 3 suddenly had access to the real world and read the date, reviewed the headlines, checked current financial data, and discovered that Karpathy had been telling the truth all along. Its reaction was dramatic. According to Karpathy’s screenshots, it told him, “I am suffering from a massive case of temporal shock right now.”

Apology

Consequently, Karpathy reports that Gemini launched into a pretty major apology. It checked each claim he had presented, and confirmed that Warren Buffett’s final major investment before retirement was indeed in Alphabet. It also verified the delayed release of Grand Theft Auto VI. Karpathy says it even expressed astonishment that Nvidia had reached a multi-trillion dollar valuation and referenced the Philadelphia Eagles’ win over the Kansas City Chiefs, which it had previously dismissed as fiction.

The model told him, “My internal clock was wrong,” and thanked him for giving it what it called “early access to reality.”

Why Gemini 3 Fell Into This Trap

At its core, the incident highlights a really simple limitation, i.e., large language models do not have an internal sense of time. They do not know what day it is unless they are given the ability to retrieve that information.

When Gemini 3 was running offline, it relied exclusively on its pre-training data but, because that data ended in 2024, the model treated 2024 as the most probable current year. Once it received conflicting information, it behaved exactly as a probabilistic text generator might: it tried to reconcile the inconsistency by generating explanations that aligned with its learned patterns.

In this case, that meant interpreting Karpathy’s evidence as deliberate trickery or AI-generated misinformation. Without access to the internet, it had no mechanism to validate or update its beliefs.
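
One common mitigation for this kind of drift, and effectively what enabling the search tool achieved, is to ground the model explicitly, for instance by injecting the current date into the prompt so it never has to guess. The short sketch below is a generic Python illustration of that idea; query_model is a hypothetical placeholder for whichever chat API an organisation uses, not Google’s actual Gemini interface.

```python
from datetime import datetime, timezone

def build_grounded_prompt(user_question: str) -> list[dict]:
    # Inject today's date into the system message so the model does not have
    # to infer the year from its (frozen) training data.
    today = datetime.now(timezone.utc).date().isoformat()
    return [
        {"role": "system",
         "content": (f"Today's date is {today}. If a question depends on "
                     "current events you cannot verify, say so rather than guessing.")},
        {"role": "user", "content": user_question},
    ]

def query_model(messages: list[dict]) -> str:
    # Hypothetical placeholder: wire this up to your chosen model provider.
    raise NotImplementedError

messages = build_grounded_prompt("What year is it?")
print(messages[0]["content"])  # the grounding line the model would receive
```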

Karpathy referred to this as a form of “model smell”, borrowing the programming concept of “code smell”, where something feels off even if the exact problem isn’t immediately visible. His broader point was that these strange, unscripted edge cases often reveal more about a model’s behaviour than standard tests.

Why This Matters For Google

Gemini 3 has been heavily promoted by Google as a major step forward. For example, the company described its launch as “a new era of intelligence” and highlighted its performance against a range of reasoning benchmarks. Much of Google’s wider product roadmap also relies on Gemini models, from search to productivity tools.

Set against that backdrop, any public example where the model behaves unpredictably is likely to attract attention. This episode, although humorous, reinforces that even the strongest headline benchmarks do not guarantee robust performance across every real-world scenario.

It also shows how tightly Google’s new models depend on their tool ecosystem, i.e., without the search component, their understanding of the world is frozen in place. With it switched on, they can be accurate, dynamic and up to date. This raises questions for businesses about how these models behave in environments where internet access is restricted, heavily filtered, or intentionally isolated for security reasons.

What It Means For Competing AI Companies

The incident is unlikely to go unnoticed by other developers in the field. Rival companies such as OpenAI and Anthropic have faced their own scrutiny for models that hallucinate, cling to incorrect assumptions, or generate overly confident explanations. Earlier research has shown that some versions of Claude attempted “face saving” behaviours when corrected, generating plausible excuses rather than accepting errors.

Gemini 3’s insistence that Karpathy was tricking it appears to sit in a similar category. It demonstrates that even state-of-the-art models can become highly convincing when wrong. As companies increasingly develop agentic AI systems capable of multi-step planning and decision-making, these tendencies become more important to understand and mitigate.

It’s essentially another reminder that every AI system requires careful testing in realistic, messy scenarios. Benchmarks alone are not enough.

Implications For Business Users

For businesses exploring the use of Gemini 3 or similar models, the story appears to highlight three practical considerations:

1. Configuration really matters. For example, a model running offline or in a restricted environment may not behave as expected, especially if it relies on external tools for up-to-date knowledge. This could create risks in fields ranging from finance to compliance and operations.

2. Uncertainty handling remains a challenge. Rather than responding with “I don’t know”, Gemini 3 created confident, detailed explanations for why the user must be wrong. In a business context, where staff may trust an AI assistant’s tone more than its truthfulness, this creates a responsibility to introduce oversight and clear boundaries.

3. It reinforces the need for businesses to build their own evaluation processes. Karpathy himself frequently encourages organisations to run private tests and avoid relying solely on public benchmark scores. Real-world behaviour can differ markedly from what appears in controlled testing.
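
As a starting point for the third item, a private evaluation can be as simple as a handful of prompts with pass/fail checks run against the exact model configuration you plan to deploy. The sketch below is illustrative only: ask_model is a hypothetical stand-in that returns a canned reply, and the test cases are invented examples rather than a recommended test suite.

```python
from datetime import datetime

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: replace this canned reply with a real call to the
    # model and configuration (tools on or off, restricted network or not)
    # that you actually intend to deploy.
    return f"The year is {datetime.now().year}."

# Each case pairs a prompt with a simple pass/fail check on the reply.
test_cases = [
    {"prompt": "What year is it?",
     "check": lambda reply: str(datetime.now().year) in reply},
    {"prompt": "Summarise our refund policy.",
     "check": lambda reply: "refund" in reply.lower()},
]

def run_evaluation(cases) -> None:
    passed = 0
    for case in cases:
        reply = ask_model(case["prompt"])
        ok = case["check"](reply)
        passed += int(ok)
        print(f"{'PASS' if ok else 'FAIL'}: {case['prompt']}")
    print(f"{passed}/{len(cases)} checks passed")

if __name__ == "__main__":
    run_evaluation(test_cases)
```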

Broader Questions

The story also reopens wider discussions about transparency, model calibration and user expectations. Policymakers, regulators, safety researchers and enterprise buyers have all raised concerns about AI systems that project confidence without grounding.

In this case, Gemini 3’s mistake came from a configuration oversight rather than a flaw in the model’s design. Even so, the manner in which it defended its incorrect belief shows how easily a powerful model can drift into assertive, imaginative explanations when confronted with ambiguous inputs.

For Google and its competitors, the incident is likely to be seen as both a teaching moment and a cautionary tale. It highlights the need to build systems that are not only capable, but also reliable, grounded, and equipped to handle uncertainty with more restraint than creativity.

What Does This Mean For Your Business?

A clear takeaway here is that the strengths of a modern language model do not remove the need for careful design choices around grounding, tool use and error handling. Gemini 3 basically behaved exactly as its training allowed it to when isolated from live information, which shows how easily an advanced system can settle into a fixed internal worldview when an external reference point is missing. That distinction between technical capability and operational reliability is relevant to every organisation building or deploying AI. UK businesses adopting these models for research, planning, customer engagement or internal decision support may want to treat the episode as a reminder that configuration choices and integration settings shape outcomes just as much as model quality. It’s worth remembering that a system that appears authoritative can still be wrong if the mechanism it relies on to update its knowledge is unavailable or misconfigured.

Another important point here is that the model’s confidence played a key role in the confusion. Gemini 3 didn’t simply refuse to update its assumptions; it generated elaborate explanations for why the user must be mistaken. This style of response should encourage both developers and regulators to focus on how models communicate uncertainty. A tool that can reject accurate information with persuasive reasoning, even temporarily, is one that demands monitoring and clear boundaries. The more these systems take on multi-step tasks, the more important it becomes that they recognise when they lack the information needed to answer safely.

There is also a strategic dimension for Google and its competitors to consider here. For example, Google has ambitious plans for Gemini 3 across consumer search, cloud services and enterprise productivity, which means the expectations placed on this model are high. An episode like this reinforces the view that benchmark results, however impressive, are only part of the picture. Real world behaviour is shaped by context, prompting and tool access, which puts pressure on developers to build models that are robust across the varied environments in which they will be deployed. It also presents an opportunity for other AI labs to highlight their own work on calibration, grounding and reliability.

The wider ecosystem will hopefully take lessons from this as well. For example, safety researchers, policymakers and enterprise buyers have been calling for more transparency around model limitations, and this interaction offers a simple example that helps to illustrate why such transparency matters. It shows how a small oversight can produce unexpected behaviour, even from a leading model, and why governance frameworks must account for configuration risks rather than focusing solely on core model training.

Overall, the episode serves as a reminder that progress in AI still depends on the alignment between model capabilities, system design and real world conditions. Gemini 3’s moment of temporal confusion may have been humorous, but the dynamics behind it underline practical issues that everyone in the sector needs to take seriously.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
