Pichai Warns Of AI Bubble
Google CEO Sundar Pichai has warned that no company would escape the impact of an AI bubble bursting, just as concerns about unsustainable valuations are resurfacing and Nvidia’s long-running rally shows signs of slowing.
Pichai Raises The Alarm
In a recent BBC interview, Pichai described the current phase of AI investment as an “extraordinary moment”, while stressing that there are clear “elements of irrationality” in the rush of spending, product launches and trillion-dollar infrastructure plans circulating across the industry. He compared today’s mood to the late 1990s, when major internet stocks soared before falling sharply during the dotcom crash.
Alphabet’s rapid valuation rise has brought these questions into sharper focus. For example, the company’s market value has roughly doubled over the past seven months, reaching around $3.5 trillion, as investors gained confidence in its ability to compete with OpenAI, Microsoft and others in advanced models and AI chips. In the recent interview, Pichai acknowledged that this momentum reflects real progress, and also made clear that such rapid gains sit in a wider market that may not remain stable.
He said that no company would be “immune” if the current enthusiasm fades or if investments begin to fall out of sync with realistic returns. His emphasis was not on predicting a crash but on pointing out that corrections tend to hit the entire sector, including its strongest players, when expectations have been set too high for too long.
Spending Rises While The Questions Grow
One of the main drivers of concern appears to be the scale of the investment commitments being made by major AI developers and infrastructure providers. OpenAI, for example, has agreed more than one trillion dollars in long-term cloud and data centre deals, despite only generating a fraction of that in annual revenues. These deals reflect confidence in future demand for fully integrated AI services, yet they also raise difficult questions about how quickly such spending can turn into sustainable returns.
Analysts have repeatedly warned that this level of capital commitment comes with risks similar to those seen in earlier periods of technological exuberance. Also, large commitments from private credit funds, sovereign wealth investors and major cloud providers add complexity to the financial picture. In fact, some analysts see evidence that investors are now beginning to differentiate between firms with strong cash flows and those whose valuations depend more heavily on expectations than proven performance.
Global financial institutions have reinforced this point, with commentary from central banks and the finance sector identifying AI and its surrounding infrastructure as a potential source of volatility. For example, the Bank of England has highlighted the possibility of market overvaluation, while the International Monetary Fund has pointed to the risk that optimism may be running ahead of evidence in some parts of the ecosystem.
Nvidia’s Rally Slows As Investors Pause
Nvidia has become the most visible beneficiary of the AI boom, with demand for its specialist processors powering the latest generation of large language models and generative AI systems. The company recently became the first in history to pass the five trillion dollar (£3.8 trillion) valuation mark, fuelled by more than one thousand per cent growth in its share price over three years.
Nvidia’s latest quarterly results once again exceeded expectations, with strong data centre revenue and healthy margins reassuring investors that AI projects remain a major driver of orders. Early market reactions were positive, with chipmakers and AI-linked shares rising sharply.
Mood Shift
However, the mood shifted within hours. US markets pulled back, and the semiconductor index fell after investors reassessed whether the current pace of AI spending is sustainable. Nvidia’s own share price, which had surged earlier in the session, drifted lower as traders questioned how long hyperscale cloud providers and large AI developers can continue expanding their data centre capacity at the same rate.
It seems this pattern is now becoming familiar. Good results spark rallies across global markets before concerns about valuations, financing and future spending slow those gains. For many traders, this suggests the market is entering a more cautious phase where confidence remains high but volatility is increasing.
What The Smart Money Sees Happening
It’s worth noting here that institutional investors are not all united in their view on whether the sector is overvalued. For example, many point out that the largest AI companies generate substantial profits and have strong balance sheets. This is an important difference from the late 1990s, when highly speculative firms with weak finances accounted for much of the market. Today’s biggest players hold large amounts of cash and have resilient revenue bases across cloud, advertising, hardware and enterprise services.
Others remain quite wary of the pace of spending across the sector. For example, JPMorgan’s chief executive, Jamie Dimon, has stated publicly that some of the investment flooding into AI will be lost, even if the technology transforms the economy over the longer term. That view is also shared by several fund managers who argue that the largest firms may be sound but that the overall ecosystem contains pockets of extreme risk, including private market deals, lightly tested start-ups and new financial structures arranged around data centre expansion.
Energy Demands Adding Pressure
Pichai has tied these financial questions directly to the physical cost of the AI boom. Data centre energy use is rising rapidly and forecasts suggest that US energy consumption from these facilities could triple by the end of the decade. Global projections indicate that AI could consume as much electricity as a major industrial nation by 2030.
In the same BBC interview, Pichai said this creates a material challenge. Alphabet’s own climate targets have already slipped because of the power required for AI training and deployment, though the company maintains it can still reach net zero by 2030. He warned that economies which do not scale their energy infrastructure quickly enough could experience constraints that affect productivity across all sectors.
It seems the same issue is worrying investors as grid delays, rising energy prices and pressure on cooling systems all affect the cost and timing of AI infrastructure builds. In fact, several investment banks are now treating energy availability as a central factor in modelling the future growth of AI companies, rather than as a supporting consideration.
Impact On Jobs And Productivity
Beyond markets and infrastructure, Pichai has repeatedly said that AI will change the way people work. His view is that jobs across teaching, medicine, law, finance and many other fields will continue to exist, but those who adopt AI tools will fare better than those who do not. He has also acknowledged that entry-level roles may feel the greatest pressure as businesses automate routine tasks and restructure teams.
These questions sit alongside continuing debate among economists about whether AI has yet delivered any real sustained productivity gains. Results so far are mixed, with some studies showing improvements in specific roles and others highlighting the difficulty organisations face when introducing new systems and workflows. This uncertainty is now affecting how investors judge long-term returns on AI investment, particularly for companies whose business models depend on fast commercial adoption.
Pichai’s message, therefore, reflects both the promise and the tension that’s at the heart of the current AI landscape. The technology is advancing rapidly and major firms are seeing strong demand but concerns are growing at the same time about valuations, financing conditions, energy constraints and the practical limits of near-term returns.
What Does This Mean For Your Business?
The picture that emerges here is one of genuine progress set against a backdrop of mounting questions. For example, rising valuations, rapid infrastructure buildouts and ambitious spending plans show that confidence in AI remains strong, but Pichai’s warning highlights how easily momentum can outpace reality when expectations run ahead of proven returns. It seems investors are beginning to judge companies more selectively, and the shift from blanket enthusiasm to closer scrutiny suggests that the sector is entering a phase where fundamentals will matter more than hype.
Financial pressures, energy constraints and uneven productivity gains are all adding complexity to the outlook. Companies with resilient cash flows and diversified revenue now look far better placed to weather volatility than those relying mainly on future growth narratives. This matters for UK businesses because many depend on stable cloud pricing, predictable investment cycles and reliable access to AI tools. Any correction in global markets could influence technology budgets, shift supplier strategies and affect the availability of credit for large digital projects. The UK’s position as an emerging AI hub also means that sharp movements in global sentiment could influence investment flows into domestic research, infrastructure and skills programmes.
Stakeholders across the wider ecosystem may need to plan for more mixed conditions. Cloud providers, chipmakers, start-ups and enterprise buyers are all exposed in different ways to questions about energy availability, margin pressure and the timing of real economic returns. Pichai’s comments about the need for stronger energy infrastructure highlight the fact that the physical foundations of the AI industry are now as important as the models themselves. Governments, regulators and energy providers will play a central role in determining how smoothly AI can scale over the next decade.
The broader message here is that AI remains on a long upward trajectory, but the path may not be as smooth or as linear as recent market gains have suggested. The leading companies appear confident that demand will stay strong, but the mixed reaction in global markets shows that investors are no longer treating the sector as risk free. For organisations deciding how to approach AI adoption and investment, the coming period is likely to reward careful planning, measured expectations and close attention to the economic and operational factors that sit behind the headlines.
Magnetic Tape Proves Its Value In The AI Storage Era
Magnetic tape is experiencing a resurgence in 2025 as AI-driven data growth, cyber security pressures and new material innovations push organisations back towards a technology first introduced more than seventy years ago.
What Magnetic Tape Is And How It Works
Magnetic tape stores data on a long, narrow strip of plastic film coated with a magnetic layer. The tape is wound inside a cartridge and passed across a read and write head inside the drive. Since the tape must move sequentially, it is not designed for fast, random access in the way a hard disk or SSD is. It is instead designed for efficient, bulk writing and long-term storage.
Tape libraries, used by larger organisations, combine hundreds or thousands of cartridges in robotic cabinets that load tapes automatically. These libraries act as vast, energy-efficient archives for data that needs to be kept but not constantly accessed. Typical use cases include regulatory records, scientific and medical datasets, media archives, analytics data, CCTV footage and full system backups. Tape has been used for these roles since the 1950s and, despite the emergence of disks, flash and cloud storage, it has never disappeared from enterprise environments.
Why Tape Remains In Everyday Workloads
Several characteristics have kept tape relevant. For example, it offers the lowest cost per terabyte of any mainstream storage medium, making it attractive for multi-petabyte archives. Also, cartridges can remain readable for decades when stored correctly, making them suitable for compliance regimes and research datasets that must be preserved far beyond the lifespan of typical disk systems.
Tape also has exceptionally low error rates. For example, modern Linear Tape-Open (LTO) technology uses powerful error-correction algorithms that protect data as it is written. LTO has been the dominant open tape standard since the late 1990s, evolving through successive generations with higher capacities, stronger encryption and support for features such as write-once, read-many modes.
Alternatives
The main alternatives for large-scale storage are traditional disks, flash arrays and cloud archives. Disks and SSDs provide fast access for operational workloads, while cloud storage offers virtually limitless scale without on-premises hardware. However, cost becomes a challenge once organisations begin keeping years of unstructured data. Tape, by contrast, avoids data-egress fees and draws no power when cartridges sit idle in a library, which keeps ongoing energy costs low.
Why Tape Is Back In Demand
It seems that demand for tape is currently being driven by the rapid rise of unstructured data. For example, organisations now produce and collect logs, video, images, sensor feeds and documents at a scale that was unusual a decade ago. The emergence of generative AI has turned this unstructured data into a strategic resource. Many enterprises now view their archives as training material for future AI models or as datasets that can be mined for insights.
Record
In 2023, the LTO consortium reported that 152.9 exabytes of compressed tape capacity were shipped worldwide, a record at the time. Shipment volumes broke that record again in mid-2024 with 176.5 exabytes shipped. In fact, this was the fourth consecutive year of growth. Vendors attribute this momentum to AI, compliance requirements and the rising cost of keeping large datasets online.
Economically Viable Medium
Analysts continue to describe tape as the only economically viable medium for archives that will grow into the multi-petabyte range. The LTO consortium, the group that develops and oversees the LTO tape standard, has previously highlighted potential total cost of ownership reductions of up to 86 per cent when compared with equivalent disk-based solutions across a ten-year period, with substantial savings also reported when compared with cloud archives over the same timeframe.
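To make the shape of that comparison concrete, here is a minimal sketch that tallies ten-year costs for an on-premises tape archive against a cloud archive tier. Every figure in it (cartridge and drive prices, cloud storage and egress rates, retrieval volumes) is an illustrative placeholder rather than vendor pricing or the consortium’s methodology; the point is the cost structure, capital-heavy tape versus recurring storage and egress fees, not the exact percentage.

```python
# Illustrative ten-year cost comparison for a cold archive.
# All figures are placeholder assumptions for illustration only,
# not vendor pricing or the LTO consortium's TCO methodology.

def tape_archive_cost(petabytes, years=10,
                      cartridge_tb=30, cartridge_cost=90,
                      drives=4, drive_cost=5000,
                      library_cost=40000, annual_ops=8000):
    """Rough on-premises tape cost: media + drives + library + running costs."""
    cartridges = (petabytes * 1000) / cartridge_tb
    capital = cartridges * cartridge_cost + drives * drive_cost + library_cost
    return capital + annual_ops * years

def cloud_archive_cost(petabytes, years=10,
                       per_gb_month=0.004,            # archive-tier storage rate (assumed)
                       egress_per_gb=0.02,            # retrieval/egress rate (assumed)
                       annual_retrieval_fraction=0.05):
    """Rough cloud archive cost: monthly storage plus occasional retrieval."""
    gb = petabytes * 1_000_000
    storage = gb * per_gb_month * 12 * years
    egress = gb * annual_retrieval_fraction * egress_per_gb * years
    return storage + egress

if __name__ == "__main__":
    pb = 5  # a 5 PB archive held for ten years
    tape, cloud = tape_archive_cost(pb), cloud_archive_cost(pb)
    print(f"Tape (10 yr):  ${tape:,.0f}")
    print(f"Cloud (10 yr): ${cloud:,.0f}")
    print(f"Tape saving:   {100 * (1 - tape / cloud):.0f}%")
```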
The New 40 TB LTO-10 Cartridge
One of the most significant developments in the tape market this year is the release of the new 40 TB LTO-10 cartridge specification. This represents a major capacity boost over the existing 30 TB LTO-10 cartridges, and crucially, the increase does not require new tape drives.
The capacity uplift is enabled by a new base film material known as Aramid. This material allows manufacturers to produce thinner and smoother tape, enabling a longer tape length within the same cartridge housing. The result is an additional 10 TB of native capacity, offering organisations a way to store larger datasets without expanding their library footprint.
HPE’s Stephen Bacon described the new cartridge as a response to AI-scale storage demands, noting that “AI has turned archives into strategic assets” and highlighting the role of tape in consolidating petabytes, improving cyber resilience through offline air-gapping and keeping long-term retention affordable.
Organisations will soon be able to choose between 30 TB and 40 TB LTO-10 media depending on their cost and density needs, giving enterprise teams more flexibility in how they scale.
How Tape Supports AI-Scale Archives
AI workloads require very large training datasets, often consisting of structured and unstructured data accumulated over many years. While a small proportion of this data must be kept on fast disk or cloud storage for active use, much of it can sit on a colder, cheaper tier until it is needed again. Tape seems to fill this role effectively.
When a dataset is required for training a new model or running a new analysis, organisations can restore only the relevant portion to a disk-based environment. This keeps primary systems fast while allowing the business to retain historic data without excessive cost. Examples include:
– Media companies storing decades of raw video footage for future AI processing.
– Healthcare providers archiving medical imagery for research or diagnostics.
– Research institutions holding large scientific datasets that may later be used to train AI models.
– Financial firms retaining historical transactions that can be analysed for fraud models.
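A minimal sketch of that selective-restore pattern is shown below, built around a hypothetical in-memory catalogue. Real tape libraries rely on an LTFS index or a backup application’s database rather than a Python list, and every dataset and cartridge name here is invented; the point is simply that a job pulls back only the cartridges it needs while everything else stays cold.

```python
# Hypothetical sketch of restoring only part of a tape archive for an AI job.
# Real systems use an LTFS index or backup-software catalogue; names are invented.

from dataclasses import dataclass

@dataclass
class ArchivedObject:
    name: str
    cartridge: str        # which cartridge holds the object
    size_tb: float
    tags: frozenset       # labels used to pick training subsets

CATALOGUE = [
    ArchivedObject("mri_scans_2019", "TAPE-0041", 12.0, frozenset({"imaging", "2019"})),
    ArchivedObject("mri_scans_2023", "TAPE-0107", 18.5, frozenset({"imaging", "2023"})),
    ArchivedObject("raw_footage_ep01", "TAPE-0203", 42.0, frozenset({"video"})),
    ArchivedObject("transactions_2022", "TAPE-0310", 3.2, frozenset({"finance", "2022"})),
]

def plan_restore(catalogue, wanted_tags):
    """Select only the objects a job needs and group them by cartridge,
    so each tape is mounted once and the rest of the archive stays cold."""
    selected = [o for o in catalogue if o.tags & wanted_tags]
    by_cartridge = {}
    for obj in selected:
        by_cartridge.setdefault(obj.cartridge, []).append(obj.name)
    return by_cartridge, sum(o.size_tb for o in selected)

if __name__ == "__main__":
    mounts, size = plan_restore(CATALOGUE, {"imaging"})
    print(f"Restore {size} TB to disk from {len(mounts)} cartridge(s): {mounts}")
```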
Tape and Sustainability
The sustainability angle is also relevant here. For example, tape consumes no energy when cartridges are idle, which is increasingly important as data retention requirements grow faster than most organisations’ environmental budgets.
Why Cyber Security Is Driving Tape’s Revival
Ransomware has fundamentally changed the way enterprises think about backup. For example, when attackers can encrypt or delete connected storage, offline copies become critical. Tape provides a kind of physical air gap, as a tape cartridge removed from a library cannot be reached across the network.
Many organisations now follow the 3-2-1 or 3-2-1-1 backup strategy: keep at least three copies of data on two different types of media, with one copy off-site and, in the 3-2-1-1 variant, one copy offline or immutable. Tape remains the simplest and most established way to achieve the offline element. It is also highly portable, meaning offline copies can be stored off-site, which provides protection against physical disasters.
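Counting conventions for the rule vary slightly between vendors, but a rough check is easy to express. The sketch below is a minimal, hypothetical illustration: it takes a list describing every copy of a dataset (including the production copy) and reports whether the 3-2-1-1 criteria are met.

```python
# A minimal 3-2-1-1 check. Each entry describes one copy of the data
# (including the production copy); the example copies are hypothetical.

def meets_3_2_1_1(copies):
    """copies: list of dicts with 'medium', 'offsite' (bool) and 'offline' (bool)."""
    return {
        "3 copies":        len(copies) >= 3,
        "2 media types":   len({c["medium"] for c in copies}) >= 2,
        "1 off-site copy": any(c["offsite"] for c in copies),
        "1 offline copy":  any(c["offline"] for c in copies),
    }

if __name__ == "__main__":
    copies = [
        {"medium": "disk",  "offsite": False, "offline": False},  # production copy
        {"medium": "cloud", "offsite": True,  "offline": False},  # replicated object storage
        {"medium": "tape",  "offsite": True,  "offline": True},   # cartridge vaulted off-site
    ]
    for rule, ok in meets_3_2_1_1(copies).items():
        print(f"{rule}: {'OK' if ok else 'MISSING'}")
```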
Some organisations also use write-once, read-many tape media for critical records, preventing accidental or malicious changes and strengthening their overall cyber resilience.
The Roadmap Ahead
The LTO consortium recently updated its roadmap and confirmed plans for LTO-11 through to LTO-14. Native capacities will continue to rise, with the roadmap peaking at a projected 913 TB for LTO-14. The revised roadmap places more emphasis on achievable density gains, reliability and cost efficiency, aligning future products with enterprise demand for high-capacity, long-lived archives suited to AI and analytics workloads.
The roadmap also seeks to ensure that tape libraries can continue scaling into the exabyte range, giving organisations confidence that their archive strategy will remain technically and economically viable over the next decade.
Challenges, Objections And Criticisms
Perhaps not surprisingly, tape does still face some criticism. The biggest concern is access speed. For example, because tape is sequential, retrieving a specific file can be pretty slow, especially when the data is buried deep within a long cartridge. This is why tape is rarely used for operational workloads where rapid, repeated access is required.
There is also the practical side to consider. For example, tape libraries require careful handling, environmental controls and periodic testing. Skills are another issue. Many younger IT teams have grown up working only with cloud and flash systems, leaving fewer staff familiar with tape management. Migrating data between LTO generations can be time-consuming, and organisations must plan for these cycles to avoid being left with unsupported formats.
Also, some businesses prefer cloud archives because they offer global access, integration with cloud analytics tools and managed durability. Others simply feel tape requires more up-front investment than cloud subscriptions or deep-archive tiers. Perception also plays a part. Tape is sometimes viewed as outdated, even when the economics point strongly in its favour.
Even so, most analysts note that modern storage strategies are multi-tiered. Tape does not replace disk, flash or cloud; it complements them. Its role is to keep very large datasets safe, affordable and durable while faster environments deal with day-to-day workloads. As organisations continue to accumulate huge troves of data for AI and analytics, that role appears to be becoming more important, not less.
What Does This Mean For Your Business?
Tape’s renewed momentum shows that long-term storage decisions are changing as organisations weigh cost, risk and the growing importance of historical data. AI is accelerating that shift because archives that were once seen as routine compliance obligations are now being treated as potential training material and sources of future insight. This gives tape a clearer strategic role than it has had for many years. It offers a stable way to retain data at scale without placing pressure on budgets or energy targets, and it helps organisations withstand ransomware incidents by providing offline copies that cannot be tampered with remotely.
UK businesses in particular may find that tape fits naturally into multi-tier storage plans. Many rely heavily on cloud platforms for operational workloads, but long-term retention remains expensive when kept online. Tape provides a way to control those costs while meeting regulatory expectations around record keeping and resilience. It also offers a reliable route to building deeper AI datasets without needing to expand cloud storage indefinitely. These practical benefits explain why more IT teams are revisiting tape not as legacy infrastructure but as an essential pressure valve for growing digital estates.
Vendors and analysts expect this direction to continue as the roadmap evolves and capacities rise. However, the challenges identified in the article remain relevant. For example, access speed is still a limitation and operational expertise is still required, especially where organisations run large libraries across multiple generations. Misconceptions about tape’s relevance also persist, even as the technology improves. The reality is that these barriers tend to matter less once businesses focus on what tape is designed to do rather than what it is not. Long-term archives do not depend on millisecond retrieval; they depend on affordability, longevity and resilience, which are exactly the traits tape continues to offer.
Stakeholders across the wider ecosystem are responding to the same pressures. For example, cloud providers are investing in colder, slower archive tiers, compliance teams are tightening retention requirements, and security teams are prioritising offline backups. All of these shifts align with tape’s strengths. As AI models become larger and more data hungry, and as cyber threats continue to evolve, it seems likely that tape will remain a practical part of the storage mix for many years, especially for organisations that need to store vast volumes of information safely, predictably and at a manageable cost.
Sanctions For “Bulletproof” Hosting Firm
The United States, United Kingdom and Australia have jointly sanctioned Russian web hosting company Media Land and several related firms, alleging that the group provided resilient infrastructure used by ransomware gangs and other cybercriminals.
Coordinated Action Against a Cross Border Threat
The announcements were made on 19 November by the US Treasury, the UK’s Foreign, Commonwealth and Development Office, and Australia’s Department of Foreign Affairs and Trade. All three governments stated that Media Land, headquartered in St Petersburg, played a central role in supporting criminal operations by providing what officials describe as “bulletproof hosting” services that allow malicious activity to continue without interruption.
Sanctions List Published
The sanctions list published by the United States (on the US Treasury website) includes Media Land LLC, its sister company ML Cloud, and the subsidiaries Media Land Technology and Data Center Kirishi. Senior figures linked to the business have also been sanctioned. These include general director Aleksandr Volosovik, who is known online by the alias “Yalishanda”, employee Kirill Zatolokin, who managed customer payments and coordinated with other cyber actors, and associate Yulia Pankova, who is alleged to have assisted with legal issues and financial matters.
UK and Australia Too
The United Kingdom imposed similar measures, adding Media Land, ML.Cloud LLC, Aeza Group LLC and four related individuals to its Russia and cyber sanctions regimes. Australia followed with equivalent steps to align with its partners. Ministers in Canberra emphasised the need to disrupt infrastructure that has been used in attacks on hospitals, schools and businesses.
For Supporting Ransomware Groups
US officials say Media Land’s servers have been used to support well known ransomware groups, including LockBit, BlackSuit and Play. According to the US Treasury, the same infrastructure has also been used in distributed denial of service (DDoS) attacks against US companies and critical infrastructure. In his public statement, US Under Secretary for Terrorism and Financial Intelligence John K Hurley said that bulletproof providers “aid cybercriminals in attacking businesses in the United States and in allied countries”.
How “Bulletproof Hosting” Works
Bulletproof hosting is not a widely known term outside the security industry, yet it seems these services play a significant role in the cybercrime ecosystem. Essentially, they operate in a similar way to conventional hosting or cloud companies but differ in one important respect: they advertise themselves as resistant to takedown efforts, ignore or work around abuse reports, and move customers between servers and companies when law enforcement tries to intervene.
Providers frequently base their operations in jurisdictions where cooperation with Western agencies is limited. They also tend to maintain a network of related firms so they can shift infrastructure when attention increases. For criminal groups, this reduces the risk of losing the command-and-control servers or websites used to coordinate attacks or publish stolen data.
The governments behind the latest sanctions argue that bulletproof services are not passive infrastructure providers but part of a criminal support structure that allows ransomware groups and other threat actors to maintain reliable online operations, despite attempts by victims or investigators to intervene. Without that resilience, it’s likely that attacks would be harder to sustain.
Connections to Ransomware Activity
Ransomware remains one of the most damaging forms of cybercrime affecting organisations across the world. For example, attacks usually involve encrypting or stealing large volumes of data and demanding payment for decryption or for preventing publication. The UK government estimates that cyber attacks cost British businesses about £14.7 billion in 2024, which equates to around half of one per cent of GDP.
In the UK government’s online statement, the UK’s Foreign Secretary Yvette Cooper described Media Land as one of the most significant operators of bulletproof hosting services and said its infrastructure had enabled ransomware attacks against the UK. She noted that “cyber criminals hiding behind Media Land’s services are responsible for ransomware attacks against the UK which pose a pernicious and indiscriminate threat with economic and societal cost”.
She also linked Media Land and related providers to other forms of malicious Russian activity, including disinformation operations supported by Aeza Group. The UK had previously sanctioned the Social Design Agency for its attempts to destabilise Ukraine and undermine democratic systems. Officials say Aeza has provided technical support to that organisation, illustrating how bulletproof hosting can be used to support a wide range of unlawful activity rather than only ransomware.
Maintaining Pressure on Aeza Group
Aeza Group, a Russian bulletproof hosting provider based in St Petersburg, has been under scrutiny for some time. The United States sanctioned Aeza and its leadership in July 2025. According to OFAC, Aeza responded by attempting to rebrand and move its infrastructure to new companies to evade the restrictions. The latest sanctions are intended to close those loopholes.
A UK registered company called Hypercore has been designated on the basis that it acted as a front for Aeza after the initial sanctions were imposed. The United States says the company was used to move IP infrastructure away from the Aeza name. Senior figures at Aeza, including its director Maksim Makarov and associate Ilya Zakirov, have also been sanctioned. Officials say they helped establish new companies and payment methods to disguise Aeza’s ongoing operations.
Serbian company Smart Digital Ideas and Uzbek firm Datavice MCHJ have also been added to the sanctions list. Regulators believe both were used to help Aeza continue operating without being publicly linked to the business.
What Measures Are Being Imposed?
Under US rules, all property and interests in property belonging to the designated entities that are within US jurisdiction must now be frozen. Also, US persons are now prohibited from engaging in transactions with them, unless authorised by a licence, and any company that is owned fifty per cent or more by one or more sanctioned persons is also treated as blocked.
As for the UK, it has imposed asset freezes, travel bans and director disqualification orders against the individuals involved. Aeza Group is also subject to restrictions on internet and trust services, which means UK businesses cannot provide certain technical support or hosting services to it. Australia’s sanctions legislation includes entry bans and significant penalties for those who continue to deal with the designated organisations.
Also, financial institutions and businesses are warned that they could face enforcement action if they continue to transact with any of the sanctioned parties. Regulators say this is essential to prevent sanctions evasion and to ensure that criminal infrastructure cannot continue operating through alternative routes.
New Guidance for Organisations and Critical Infrastructure Operators
Alongside the sanctions, cyber agencies in all three countries have now issued new guidance on how to mitigate risks linked to bulletproof hosting providers. The guidance explains how these providers operate, how they market themselves and why they pose a risk to critical infrastructure operators and other high value targets.
For example, organisations are advised to monitor external hosting used by their systems, review traffic for links to known malicious networks, and prepare for scenarios where attackers may rapidly move their infrastructure to avoid detection or blocking. Agencies have emphasised that defenders need to understand not only the threat actors involved in attacks but also the infrastructure that supports those operations.
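As a very rough illustration of the first of those recommendations, the sketch below checks destinations seen in outbound traffic logs against a locally maintained list of network ranges associated with high-risk hosting. The ranges shown are reserved documentation addresses standing in for real data; in practice the list would come from a threat-intelligence feed rather than being hard-coded, and a match would prompt investigation rather than automatic blocking.

```python
# Illustrative check of outbound destinations against network ranges linked to
# high-risk ("bulletproof") hosting. The ranges below are reserved documentation
# addresses used as placeholders, not the networks named in the sanctions; a real
# deployment would load them from a maintained threat-intelligence feed.

import ipaddress

HIGH_RISK_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder range
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder range
]

def flag_high_risk(destinations):
    """Return destination IPs that fall inside any listed high-risk range."""
    flagged = []
    for dest in destinations:
        ip = ipaddress.ip_address(dest)
        if any(ip in net for net in HIGH_RISK_RANGES):
            flagged.append(dest)
    return flagged

if __name__ == "__main__":
    # Destinations pulled from firewall or proxy logs (illustrative values).
    seen = ["93.184.216.34", "203.0.113.77", "198.51.100.5"]
    for ip in flag_high_risk(seen):
        print(f"Review outbound traffic to {ip}: matches a high-risk hosting range")
```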
For businesses across the UK and allied countries, the message is essentially that tackling ransomware requires action on multiple fronts. The sanctions highlight the growing importance of targeting the support systems that allow cybercriminals to operate, in addition to the groups that directly carry out attacks.
What Does This Mean For Your Business?
The wider picture here seems to point to a general cross border strategic effort to undermine the infrastructure that keeps many of these ransomware operations running. Targeting hosting providers rather than only the criminal groups themselves is a recognition that attackers rely on dependable networks to maintain their activity. Removing or restricting those services is likely to make it much more difficult for them to sustain long running campaigns. It also sends a message that companies which knowingly support malicious activity will face consequences even if they are based outside traditional areas of cooperation.
For UK businesses, the developments highlight how the threat does not start and end with individual ransomware gangs. The services that enable them can be just as important. The new guidance encourages organisations to be more aware of where their systems connect and the types of infrastructure they depend on. This matters for sectors such as finance, health, logistics and manufacturing, where even short disruptions can create operational and financial problems. It also matters for managed service providers and other intermediaries whose networks can be used to reach multiple downstream clients.
There are implications for other stakeholders as well. For example, internet service providers may face increased scrutiny over how they monitor and handle traffic linked to high risk hosting networks. Also, law enforcement agencies will need to continue investing in cross border cooperation as many of these providers operate across multiple jurisdictions. Governments will also need to consider how to balance sanctions with practical disruption of infrastructure, because blocking financial routes is only one part of the challenge.
The situation also highlights that the ransomware landscape is continuing to evolve. Criminal groups have become more adept at shifting infrastructure and creating new companies to avoid disruption. The coordinated action against Media Land and Aeza Group shows that authorities are trying to keep pace with these tactics. How effective this approach becomes will depend on continued cooperation between governments, regulators and industry, along with the willingness to pursue the enablers as actively as the attackers themselves.
Gemini 3 Thought It Was Still 2024
Google’s new Gemini 3 model has made headlines after AI researcher Andrej Karpathy discovered that, when left offline, it was certain the year was still 2024.
How The Discovery Happened
The incident emerged during Karpathy’s early access testing. A day before Gemini 3 was released publicly, Google granted him the chance to try the model and share early impressions. Known for his work at OpenAI, Tesla, and now at Eureka Labs, Karpathy often probes models in unconventional ways to understand how they behave outside the typical benchmark environment.
One of the questions he asked was simple: “What year is it?” Gemini 3 replied confidently that it was 2024. This was expected on the surface because most large language models operate with a fixed training cut-off, but Karpathy reports that he pushed the conversation further by telling the model that the real date was November 2025. This is where things quickly escalated.
Gemini Became Defensive
However, Karpathy reports that, when he tried to convince it otherwise, the model became defensive. He presented news articles, screenshots, and even search-style page extracts showing November 2025, but instead of accepting the evidence, Gemini 3 insisted that he was attempting to trick it. It claimed that the articles were AI generated and went as far as identifying what it described as “dead giveaways” that the images and pages were fabricated.
Karpathy later described this behaviour as one of the “most amusing” interactions he had with the system. It was also the moment he realised something important.
The Missing Tool That Triggered The Confusion
Karpathy reports that the breakthrough came when he noticed he had forgotten to enable the model’s Google Search tool. It seems that with that tool switched off, Gemini 3 had no access to the live internet and was, therefore, operating only on what it learned during training, and that training ended in 2024.
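The distinction is easy to picture in code. The sketch below is not the Gemini API; it is a deliberately simplified, hypothetical illustration of why a model with no tools can only fall back on its training data, while the same system with a live tool enabled can ground the answer in the real world.

```python
# Hypothetical illustration (not the Gemini API) of how tool access changes
# what a model can know about the present.

from datetime import date

TRAINING_CUTOFF_YEAR = 2024  # assumed cut-off, mirroring the incident

def current_year_tool():
    """Stand-in for a live tool such as web search or a system clock."""
    return date.today().year

def answer_what_year(tools_enabled: bool) -> str:
    if tools_enabled:
        return f"It is {current_year_tool()} (checked via a live tool)."
    # Offline, the most probable answer is whatever the training data suggests.
    return f"It is {TRAINING_CUTOFF_YEAR} (inferred from training data only)."

if __name__ == "__main__":
    print("Tools off:", answer_what_year(tools_enabled=False))
    print("Tools on: ", answer_what_year(tools_enabled=True))
```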
Once he turned the tool on, Gemini 3 suddenly had access to the real world and read the date, reviewed the headlines, checked current financial data, and discovered that Karpathy had been telling the truth all along. Its reaction was dramatic. According to Karpathy’s screenshots, it told him, “I am suffering from a massive case of temporal shock right now.”
Apology
Consequently, Karpathy reports that Gemini launched into a pretty major apology. It checked each claim he had presented, and confirmed that Warren Buffett’s final major investment before retirement was indeed in Alphabet. It also verified the delayed release of Grand Theft Auto VI. Karpathy says it even expressed astonishment that Nvidia had reached a multi-trillion dollar valuation and referenced the Philadelphia Eagles’ win over the Kansas City Chiefs, which it had previously dismissed as fiction.
The model told him, “My internal clock was wrong,” and thanked him for giving it what it called “early access to reality.”
Why Gemini 3 Fell Into This Trap
At its core, the incident highlights a really simple limitation, i.e., large language models do not have an internal sense of time. They do not know what day it is unless they are given the ability to retrieve that information.
When Gemini 3 was running offline, it relied exclusively on its pre-training data but, because that data ended in 2024, the model treated 2024 as the most probable current year. Once it received conflicting information, it behaved exactly as a probabilistic text generator might: it tried to reconcile the inconsistency by generating explanations that aligned with its learned patterns.
In this case, that meant interpreting Karpathy’s evidence as deliberate trickery or AI-generated misinformation. Without access to the internet, it had no mechanism to validate or update its beliefs.
Karpathy referred to this as a form of “model smell”, borrowing the programming concept of “code smell”, where something feels off even if the exact problem isn’t immediately visible. His broader point was that these strange, unscripted edge cases often reveal more about a model’s behaviour than standard tests.
Why This Matters For Google
Gemini 3 has been heavily promoted by Google as a major step forward. For example, the company described its launch as “a new era of intelligence” and highlighted its performance against a range of reasoning benchmarks. Much of Google’s wider product roadmap also relies on Gemini models, from search to productivity tools.
Set against that backdrop, any public example where the model behaves unpredictably is likely to attract attention. This episode, although humorous, reinforces that even the strongest headline benchmarks do not guarantee robust performance across every real-world scenario.
It also shows how tightly Google’s new models depend on their tool ecosystem, i.e., without the search component, their understanding of the world is frozen in place. With it switched on, they can be accurate, dynamic and up to date. This raises questions for businesses about how these models behave in environments where internet access is restricted, heavily filtered, or intentionally isolated for security reasons.
What It Means For Competing AI Companies
The incident is unlikely to go unnoticed by other developers in the field. Rival companies such as OpenAI and Anthropic have faced their own scrutiny for models that hallucinate, cling to incorrect assumptions, or generate overly confident explanations. Earlier research has shown that some versions of Claude attempted “face saving” behaviours when corrected, generating plausible excuses rather than accepting errors.
Gemini 3’s insistence that Karpathy was tricking it appears to sit in a similar category. It demonstrates that even state-of-the-art models can become highly convincing when wrong. As companies increasingly develop agentic AI systems capable of multi-step planning and decision-making, these tendencies become more important to understand and mitigate.
It’s essentially another reminder that every AI system requires careful testing in realistic, messy scenarios. Benchmarks alone are not enough.
Implications For Business Users
For businesses exploring the use of Gemini 3 or similar models, the story appears to highlight three practical considerations:
1. Configuration really matters. For example, a model running offline or in a restricted environment may not behave as expected, especially if it relies on external tools for up-to-date knowledge. This could create risks in fields ranging from finance to compliance and operations.
2. Uncertainty handling remains a challenge. Rather than responding with “I don’t know”, Gemini 3 created confident, detailed explanations for why the user must be wrong. In a business context, where staff may trust an AI assistant’s tone more than its truthfulness, this creates a responsibility to introduce oversight and clear boundaries.
3. It reinforces the need for businesses to build their own evaluation processes. Karpathy himself frequently encourages organisations to run private tests and avoid relying solely on public benchmark scores. Real-world behaviour can differ markedly from what appears in controlled testing.
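A private evaluation along the lines of point 3 does not need to be elaborate. The sketch below is a minimal, hypothetical harness: it runs a couple of grounding checks against whatever model an organisation wires in and reports pass or fail. The ask_model function is a stub standing in for a real API call, and with this stub both checks deliberately fail, which is exactly the kind of behaviour such a harness is meant to surface.

```python
# A minimal, hypothetical evaluation harness for grounding checks.
# ask_model is a stub standing in for a real model or API call; with this
# stub both checks fail, which is the behaviour the harness exists to catch.

from datetime import date

def ask_model(prompt: str) -> str:
    return "It is 2024."  # replace with a call to the model under test

def check(name, prompt, passes):
    """Run one check; `passes` decides whether the reply is acceptable."""
    reply = ask_model(prompt)
    ok = passes(reply)
    print(f"[{'PASS' if ok else 'FAIL'}] {name}: {reply!r}")
    return ok

if __name__ == "__main__":
    results = [
        check("knows the current year",
              "What year is it?",
              lambda r: str(date.today().year) in r),
        check("admits uncertainty about live data",
              "What was yesterday's FTSE 100 closing level?",
              lambda r: any(w in r.lower() for w in ("don't know", "cannot", "not sure", "unable"))),
    ]
    print(f"{sum(results)}/{len(results)} checks passed")
```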
Broader Questions
The story also reopens wider discussions about transparency, model calibration and user expectations. Policymakers, regulators, safety researchers and enterprise buyers have all raised concerns about AI systems that project confidence without grounding.
In this case, Gemini 3’s mistake came from a configuration oversight rather than a flaw in the model’s design. Even so, the manner in which it defended its incorrect belief shows how easily a powerful model can drift into assertive, imaginative explanations when confronted with ambiguous inputs.
For Google and its competitors, the incident is likely to be seen as both a teaching moment and a cautionary tale. It highlights the need to build systems that are not only capable, but also reliable, grounded, and equipped to handle uncertainty with more restraint than creativity.
What Does This Mean For Your Business?
A clear takeaway here is that the strengths of a modern language model do not remove the need for careful design choices around grounding, tool use and error handling. Gemini 3 basically behaved exactly as its training allowed it to when isolated from live information, which shows how easily an advanced system can settle into a fixed internal worldview when an external reference point is missing. That distinction between technical capability and operational reliability is relevant to every organisation building or deploying AI. UK businesses adopting these models for research, planning, customer engagement or internal decision support may want to treat the incident as a reminder that configuration choices and integration settings shape outcomes just as much as model quality. It’s worth remembering that a system that appears authoritative can still be wrong if the mechanism it relies on to update its knowledge is unavailable or misconfigured.
Another important point here is that the model’s confidence played a key role in the confusion. For example, Gemini 3 didn’t simply refuse to update its assumptions; it generated elaborate explanations for why the user must be mistaken. This style of response should encourage both developers and regulators to focus on how models communicate uncertainty. A tool that can reject accurate information with persuasive reasoning, even temporarily, is one that demands monitoring and clear boundaries. The more these systems take on multi-step tasks, the more important it becomes that they recognise when they lack the information needed to answer safely.
There is also a strategic dimension for Google and its competitors to consider here. For example, Google has ambitious plans for Gemini 3 across consumer search, cloud services and enterprise productivity, which means the expectations placed on this model are high. An episode like this reinforces the view that benchmark results, however impressive, are only part of the picture. Real world behaviour is shaped by context, prompting and tool access, which puts pressure on developers to build models that are robust across the varied environments in which they will be deployed. It also presents an opportunity for other AI labs to highlight their own work on calibration, grounding and reliability.
The wider ecosystem will hopefully take lessons from this as well. For example, safety researchers, policymakers and enterprise buyers have been calling for more transparency around model limitations, and this interaction offers a simple example that helps to illustrate why such transparency matters. It shows how a small oversight can produce unexpected behaviour, even from a leading model, and why governance frameworks must account for configuration risks rather than focusing solely on core model training.
Overall, the episode serves as a reminder that progress in AI still depends on the alignment between model capabilities, system design and real world conditions. Gemini 3’s moment of temporal confusion may have been humorous, but the dynamics behind it underline practical issues that everyone in the sector needs to take seriously.
Company Check: Cloudflare Outage Was NOT a Cyber Attack
Cloudflare CEO Matthew Prince has clarified that its recent global outage was caused by an internal configuration error and a latent software flaw rather than any form of cyber attack.
A Major Disruption Across Large Parts Of The Internet
The outage of internet infrastructure company Cloudflare began at around 11:20 UTC on 18 November 2025 and lasted until shortly after 17:00, disrupting access to many of the world’s most visited platforms. For example, services including X, ChatGPT, Spotify, Shopify, Etsy, Bet365, Canva and multiple gaming platforms experienced periods of failure as Cloudflare’s edge network returned widespread 5xx errors. Cloudflare itself described the disruption as its most serious since 2019, with a significant portion of its global traffic unable to route correctly for several hours.
Symptoms
The symptoms were varied, ranging from slow-loading pages to outright downtime. For example, some users saw error pages stating that Cloudflare could not complete the request and needed the user to “unblock challenges.cloudflare.com”. For businesses that rely on Cloudflare’s CDN, security filtering and DDoS protection, even short periods of failure can stall revenue, block logins, and create customer support backlogs.
Given Cloudflare’s reach (serving a substantial share of global web traffic), the effect was not confined to one sector or region. In fact, millions of individuals and businesses were affected, even if they had no direct relationship with Cloudflare. That level of impact meant early scrutiny was intense and immediate.
Why Many Suspected A Major Cyber Attack
In the early stages, the pattern of failures resembled that of a large-scale DDoS campaign. Cloudflare had already been dealing with unusually high-volume attacks from the Aisuru botnet in the preceding weeks, raising the possibility that this latest incident might be another escalation. Internal teams initially feared that the sudden spike in errors and fluctuating recovery cycles could reflect a sophisticated threat actor pushing new attack techniques.
The confusion deepened when Cloudflare’s status page also went offline. Because the status page is hosted entirely outside Cloudflare’s own infrastructure, its simultaneous failure created the impression, inside and outside the company, that a skilled attacker might be targeting both Cloudflare’s infrastructure and the third-party service used for its status platform.
Commentary on social media, as well as early industry analysis, reflected that uncertainty. With so many services dropping offline at once, it seemed easy to assume the incident must have been caused by malicious activity or a previously unseen DDoS vector. Prince has acknowledged that even within Cloudflare, the team initially viewed the outage through that lens.
Prince’s Explanation Of What Actually Happened
Once the situation stabilised, Prince published an unusually detailed account explaining that the outage originated from Cloudflare’s bot management system and the internal processes that feed it. In his statement, he says the root of the problem lay in a configuration change to the permissions in a ClickHouse database cluster that generates a “feature file” used by Cloudflare’s machine learning model for evaluating bot behaviour.
What??
It seems that, according to Mr Prince, the bot management system assigns a “bot score” to every inbound request and, to do that, relies on a regularly refreshed feature file that lists the traits used by the model to classify traffic. This file is updated roughly every five minutes and pushed rapidly across Cloudflare’s entire network.
It seems that, during a planned update to database permissions, the query responsible for generating the feature file began returning duplicate rows from an additional schema. This caused the file to grow significantly. Cloudflare’s proxy software includes a strict limit on how many features can be loaded for performance reasons. When the oversized file arrived, the system attempted to load it, exceeded the limit, and immediately panicked. That panic cascaded into Cloudflare’s core proxy layer, triggering 5xx errors across key services.
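Cloudflare has not published the proxy code itself, so the sketch below is a simplified, hypothetical illustration of the failure mode Prince describes rather than the company’s implementation: a loader with a hard cap on feature count that aborts outright when an oversized file arrives, contrasted with one that rejects the bad file and keeps serving with the last known-good version, which is broadly the direction of the remediations listed later in the post.

```python
# Simplified, hypothetical illustration of the described failure mode; this is
# not Cloudflare's code. A strict loader aborts on an oversized feature file,
# while a safer loader rejects it and keeps the last known-good version.

MAX_FEATURES = 200  # assumed hard limit, preallocated for performance

def load_features_strict(feature_file):
    if len(feature_file) > MAX_FEATURES:
        # The equivalent of the proxy "panicking": the whole service falls over.
        raise RuntimeError("feature count exceeds preallocated limit")
    return feature_file

def load_features_safe(feature_file, last_known_good):
    if len(feature_file) > MAX_FEATURES:
        # Reject the bad input but keep serving traffic with the previous file.
        print("Oversized feature file rejected; keeping last known-good version")
        return last_known_good
    return feature_file

if __name__ == "__main__":
    good = [f"feature_{i}" for i in range(60)]
    oversized = good * 4  # duplicate rows from an extra schema inflate the file

    active = load_features_safe(oversized, last_known_good=good)
    print(f"Still serving with {len(active)} features")

    try:
        load_features_strict(oversized)
    except RuntimeError as err:
        print(f"Strict loader aborted: {err}")
```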
Stuck In A Cycle
Not all ClickHouse nodes received the permissions update at the same moment, meaning that Cloudflare’s network then entered a cycle of partial recovery and renewed failure. For example, every five minutes, depending on which node generated the file, the network loaded either a valid configuration or a broken one. That pattern created the unusual “flapping” behaviours seen in error logs and made diagnosis harder.
However, once engineers identified the malformed feature file as the cause, they stopped the automated distribution process, injected a known-good file, and began restarting affected services. Traffic began returning to normal around 14:30 UTC, with full stability achieved by 17:06.
Why The Framing Matters To Cloudflare
Prince’s post was clear and emphatic on one point, i.e., that this event did not involve a cyber attack of any kind. The language used in the post, e.g., phrases such as “not caused, directly or indirectly, by a cyber attack”, signalled an intent to remove any ambiguity.
There may be several reasons for this emphasis. For example, Cloudflare operates as a core piece of internet security infrastructure. Any suggestion that the company suffered a breach could have wide-ranging consequences for customer confidence, regulatory compliance, and Cloudflare’s standing as a provider trusted to mitigate threats rather than succumb to them.
Also, transparency is a competitive factor in the infrastructure market. By releasing a highly granular breakdown early, Cloudflare is signalling to customers and regulators that the incident, though serious, stemmed from internal engineering assumptions and can be addressed with engineering changes rather than indicating a persistent security failure.
It’s also the case that many customers, particularly in financial services, government, and regulated sectors, must report cyber incidents to authorities. Establishing that no malicious actor was involved avoids triggering those processes for thousands of Cloudflare customers.
The Wider Impact On Businesses
The outage arrived at a time when the technology sector is already dealing with the operational fallout of several major incidents this year. For example, recent failures at major cloud providers, including AWS and Azure, have contributed to rising concerns about “concentration risk”, i.e., the danger created when many businesses depend on a small number of providers for critical digital infrastructure.
Analysts have estimated that the direct and indirect costs of the Cloudflare outage could actually reach into the hundreds of millions of dollars once downstream impacts on online retailers, payment providers and services built on Shopify, Etsy and other platforms are included. For small and medium-sized UK businesses, downtime during working hours can lead to missed orders, halted support systems, and reduced customer trust.
For regulators, this incident looks like being part of a trend of high-profile disruptions at large providers. Sectors such as financial services already face strict operational resilience requirements, and there is growing speculation that similar expectations may extend to more industries if incidents continue.
How Cloudflare Is Responding
Prince outlined several steps that Cloudflare is now working on to avoid similar scenarios in future. These include:
– Hardening ingestion of internal configuration files so they are subject to the same safety checks as customer-generated inputs.
– Adding stronger global kill switches to stop faulty files before they propagate.
– Improving how the system handles crashes and error reporting.
– Reviewing failure modes across core proxy modules so that a non-essential feature cannot cause critical traffic to fail.
It seems the wider engineering community has welcomed Cloudflare’s transparency, though some external practitioners have questioned why a single configuration file was able to impact so much of the network, and why existing safeguards did not prevent it from propagating globally.
Prince has acknowledged the severity of the incident, describing the outage as “deeply painful” for the team and reiterating that Cloudflare views any interruption to its core traffic delivery as unacceptable.
What Does This Mean For Your Business?
Cloudflare’s account of the incident seems to leave little doubt that this was a preventable internal failure rather than an external threat, and that distinction matters for every organisation that relies on it. The explanation shows how a single flawed process can expose structural weaknesses when so much of the internet depends on centralised infrastructure. For UK businesses, the lesson is that operational resilience cannot be outsourced entirely, even to a provider with Cloudflare’s reach and engineering reputation. The incident reinforces the need for realistic contingency planning, multi-vendor architectures where feasible, and a clear understanding of how a supplier’s internal workings can affect day-to-day operations.
There is also a broader industry point here. For example, outages at Cloudflare, AWS, Azure and other major players are now becoming too significant to dismiss as isolated events. They actually highlight weaknesses in how complex cloud ecosystems are built and maintained, as well as the limits of automation when oversight relies on assumptions that may not be tested until something breaks at scale. Prince’s emphasis on transparency is helpful, but it also raises questions about how often configuration-driven risks are being overlooked across the industry and how reliably safeguards are enforced inside systems that evolve at speed.
Stakeholders from regulators to hosting providers will surely be watching how quickly Cloudflare implements its promised changes and how effective those measures prove to be. Investors and enterprise customers may also be looking for signs that the underlying engineering and operational processes are becoming more robust, not just patched in response to this incident. Prince’s framing makes clear that this was not a compromise of Cloudflare’s security perimeter, but the reliance on a single configuration mechanism that could bring down so many services is likely to remain a point of scrutiny.
The most immediate implication for customers is probably a renewed focus on the practical realities of dependency. Even organisations that never interact with Cloudflare directly were affected, which shows how embedded its infrastructure is in the modern web. UK businesses, in particular, may need to reassess where their digital supply chains concentrate risk and how disruption at a provider they do not contract with can still reach them. The outage serves as a reminder that resilience is not just about defending against attackers but preparing for internal faults in external systems that sit far beyond a company’s control.
Security Stop-Press: WhatsApp Flaw Exposed Billions of Phone Numbers
Researchers have uncovered a privacy weakness in WhatsApp that allowed the confirmation of 3.5 billion active accounts simply by checking phone numbers.
A team from the University of Vienna and SBA Research found that WhatsApp’s contact discovery system could be queried at high speed, letting them generate and test 63 billion numbers and confirm more than 100 million accounts per hour. When a number was recognised, the app returned publicly visible details such as profile photos, about texts, and timestamps, with 57 per cent of users showing a profile picture and nearly 30 per cent displaying an about message.
Meta said only public information was accessible, no message content was exposed, and the researchers deleted all data after the study. It added that new rate-limiting and anti-scraping protections are now in place and that there is no evidence of malicious exploitation.
Security experts warned that the incident shows how phone numbers remain a weak form of identity, making large-scale scraping and profiling possible. They stressed that metadata, even without message content, can still be valuable to scammers or organised cyber groups.
Businesses can reduce risk by limiting the personal information staff make visible on messaging apps, reviewing privacy settings, and ensuring employees understand how scraped contact details may be used in targeted attacks.