Sustainability-in-Tech : AI Bots Overtaking Human Web Traffic
AI-driven bots are rapidly overtaking humans as the primary consumers of online content, creating growing sustainability concerns around energy use, digital efficiency, and the future structure of the open web.
Report
The latest State of the Bots report from AI bot traffic measurement company TollBit shows a marked acceleration in automated web traffic during the second half of 2025, alongside a measurable decline in human visits. It seems that what was once framed primarily as a debate about AI training data has evolved into a broader structural change, with AI systems now reading the live internet at scale to support search, chat, and information retrieval tools.
Rising Bot Traffic And Declining Human Visits
TollBit’s analysis shows that the ratio of AI bot traffic to human traffic has changed rapidly over a short period. For example, in the first quarter of 2025, the average site monitored by TollBit saw one AI bot visit for every 200 human visits. By the end of the year, that ratio had increased to one AI bot visit for every 31 human visits.
Over the same period, human web traffic declined. Between the third and fourth quarters of 2025 alone, TollBit recorded a 5 per cent fall in human visits across its partner sites. The report stresses that these figures likely understate the true scale of automated activity, as many modern bots are designed to closely mimic human browsing behaviour.
In its findings, TollBit says “from the tests we ran, many of these web scrapers are indistinguishable from human visitors on sites”, adding that the data should be treated as conservative. This increasing difficulty in separating human and automated traffic complicates both measurement and mitigation efforts.
From Training Crawlers To Live Web Retrieval
Earlier concerns around AI and the web focused largely on large-scale scraping for model training. While training-related crawling continues, TollBit’s data shows it is no longer the dominant driver of AI bot activity.
In fact, it seems that training crawler traffic fell by around 15 per cent between the second and fourth quarters of 2025. However, over the same period, traffic from retrieval augmented generation bots increased by 33 per cent.
RAG systems, which fetch live web content to answer user prompts, allow AI tools to provide current answers rather than relying solely on static training data.
This distinction has some important implications. For example, training crawlers typically access content once and store it for offline use. RAG bots, by contrast, return to the same pages repeatedly. TollBit found that in the fourth quarter of 2025, RAG bots made roughly ten page requests for every single page request made by training bots. This repeated access reflects the growing role of AI tools as substitutes for traditional search engines and direct browsing.
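To make the idea of live retrieval a little more concrete, the short Python sketch below shows the basic shape of a RAG-style fetch step: pull a page, strip it back to text, and fold it into a prompt. It is purely illustrative (the URL, user-agent string and helper names are invented for the example), and real systems from the providers mentioned add caching, ranking, rate limiting and the actual model call on top.

```python
# Illustrative sketch only: the URL, user-agent string and helper names are
# invented for this example, not taken from any real RAG system.
import re
import urllib.request

def fetch_page_text(url: str, user_agent: str = "ExampleRAGBot/1.0") -> str:
    """Fetch a live web page and crudely strip HTML tags to plain text."""
    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(request, timeout=10) as response:
        html = response.read().decode("utf-8", errors="ignore")
    return re.sub(r"<[^>]+>", " ", html)  # naive tag removal, for illustration

def build_prompt(question: str, page_text: str) -> str:
    """Combine the user's question with freshly retrieved page content."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{page_text[:4000]}\n\n"
        f"Question: {question}\n"
    )

# Every user prompt can trigger a fresh fetch of the same page, which is why
# RAG bots return to popular URLs far more often than training crawlers do.
prompt = build_prompt("What is on this page?", fetch_page_text("https://example.com/"))
print(prompt[:500])
```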
The Role Of AI Search Indexing
Alongside RAG bots, AI search indexing activity is expanding rapidly. Indexing crawlers systematically map the web so that RAG systems can locate relevant pages when responding to prompts. TollBit recorded a 59 per cent increase in AI search indexer traffic between the second and fourth quarters of 2025.
This growth seems to show that AI driven search is building out its own parallel infrastructure to support real time information retrieval. While indexing has long been a feature of traditional search engines, the combination of indexing and repeated live retrieval increases the volume of automated traffic moving across the web.
Concentration Of Scraping Activity
TollBit’s data also shows that AI scraping activity is unevenly distributed across providers. For example, OpenAI’s ChatGPT-User agent was identified as the most active RAG bot across monitored sites. In the fourth quarter of 2025, it averaged around five times as many scrapes per page as the second most active scraper, attributed to Meta.
Other major contributors include bots operated by Google, Perplexity, Anthropic, and Amazon, each running multiple user agents for training, indexing, and user triggered retrieval. The combined effect is a background layer of automated traffic that now rivals human browsing in scale on many sites.
Which Parts Of The Web Are Most Affected?
It should be noted here that not all content categories seem to be affected equally. For example, TollBit reports that B2B and professional sites, national news outlets, and lifestyle content are among the most heavily scraped. Technology and consumer electronics content experienced the fastest growth in scraping activity, increasing by 107 per cent since the second quarter of 2025.
According to TollBit, the most frequently scraped pages tend to relate to time sensitive topics. In the third quarter of 2025, heavily scraped URLs included political controversies and live sports coverage. By the fourth quarter, entertainment releases and shopping related content, such as streaming series and seasonal buying guides, featured more prominently.
This pattern could be said to reflect how users are increasingly turning to AI tools for up-to-date information, prompting RAG bots to revisit high demand pages repeatedly throughout the day.
The Sustainability Cost Of Repeated Access
From a sustainability perspective, the rise of RAG driven browsing introduces a less visible but growing cost. For example, each automated page request consumes energy across data centres, networks, and supporting infrastructure. When the same content is retrieved repeatedly to support similar prompts, overall energy demand increases significantly.
TollBit, therefore, describes the current environment as inefficient for both publishers and AI developers. AI companies invest heavily in scraping infrastructure, proxy services, and evasion techniques, while publishers spend increasing sums on defensive technologies. This duplication of effort results in higher processing and energy use, alongside increased indirect emissions.
In fact, the report notes that advanced scraping services can charge more than $22 per 1,000 pages retrieved. At the scale required to support popular consumer AI applications, data acquisition costs alone can reach tens of millions of dollars per year (at $22 per 1,000 pages, for example, a billion page retrievals a year would come to around $22 million). These financial costs sit alongside rising electricity demand in data centres, which sustainability researchers already identify as a growing contributor to global emissions.
Robots.txt And Escalating Inefficiency
Existing mechanisms for controlling automated access seem to have proven ineffective. For example, in the fourth quarter of 2025, around 30 per cent of AI bot scrapes recorded by TollBit did not comply with robots.txt permissions. In categories such as deals and shopping, non-permitted scrapes exceeded permitted ones by a factor of four.
OpenAI’s ChatGPT-User bot showed the highest rate of non-compliance among major bots, accessing blocked content in 42 per cent of cases. TollBit argues that this environment encourages increasingly sophisticated evasion strategies, including IP rotation, user-agent spoofing, and cloud-based headless browsers.
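For context, checking robots.txt permissions is not technically difficult. The short Python sketch below (using the standard library, with a placeholder site and user agent) shows the test a well-behaved crawler would run before fetching a page; the non-compliance TollBit describes amounts to skipping this check, or ignoring its result.

```python
# Minimal sketch of a robots.txt permission check using Python's standard
# library. The site URL and user-agent string are placeholders for illustration.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # download and parse the site's robots.txt rules

user_agent = "ExampleRAGBot"        # a well-behaved bot identifies itself honestly
url = "https://example.com/deals/"  # the page the bot would like to scrape

if parser.can_fetch(user_agent, url):
    print("robots.txt permits this fetch")
else:
    print("robots.txt disallows this fetch; a compliant bot stops here")
```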
Each layer of evasion and detection adds computational overhead. Bots expend more resources to appear human, while websites consume more resources attempting to identify and block them. From an environmental standpoint, this escalation increases energy use without delivering proportional value to end users.
Low Referral Traffic And Structural Implications
The sustainability issue is closely tied to the economics of online publishing. TollBit reports that referral traffic from AI applications remains extremely low and continues to decline. Average click-through rates from AI tools fell from 0.8 per cent in the second quarter of 2025 to 0.27 per cent by the end of the year.
Even websites with direct licensing agreements saw sharp declines. For example, click-through rates for sites with one-to-one AI deals fell from 8.8 per cent early in 2025 to 1.33 per cent in the fourth quarter. This indicates that licensing arrangements alone are not insulating publishers from reduced human traffic.
The result, therefore, appears to be a system in which machines read and reuse content at scale, while fewer people visit the original sources. For example, TollBit’s report states that “AI traffic will continue to surge and replace direct human visitors to sites”, pointing to a future in which automated systems become the primary readers of the internet.
The data suggests that this transition is already underway, with some significant implications for sustainability, digital infrastructure, and the long-term viability of the content ecosystem that AI systems depend on.
What Does This Mean For Your Organisation?
The picture emerging from TollBit’s data seems to be one of structural change rather than a short-term disruption, where AI systems are no longer just indexing the web or training on it in the background. In fact, it seems they are now repeatedly consuming live content at scale, with clear consequences for energy use, infrastructure efficiency, and the sustainability of the wider digital ecosystem. Without changes to how AI systems access content, the current pattern risks locking in higher energy demand and escalating inefficiencies across both AI development and online publishing.
For UK businesses, this trend has practical implications on several fronts. For example, organisations increasingly relying on AI tools for research, search, and decision support are indirectly contributing to rising digital energy use and associated emissions. At the same time, UK publishers, professional services firms, and content driven businesses face growing operational costs from defending their websites against automated access, while seeing diminishing human engagement in return. These pressures sit alongside wider regulatory and sustainability expectations, particularly as UK businesses are required to demonstrate progress on energy efficiency, emissions reporting, and responsible technology use.
For AI developers, publishers, regulators, and end users, the data shows that the current scrape and block dynamic appears inefficient, costly, and environmentally counterproductive. If AI systems are to become permanent fixtures in how information is accessed, it looks as though the underlying mechanics of content access will need to evolve in a way that supports sustainability, fair value exchange, and long-term viability. Without that recalibration, the growth of AI driven web consumption risks undermining both the digital economy it depends on and the sustainability goals many organisations are now expected to meet.
Video Update : Pinned Chats
Well, it might only be a small (new) feature, yet it’s a handy one! Being able to pin your ChatGPT chats is surprisingly helpful and once you’ve started to use this feature, you’ll wonder why it wasn’t introduced before …
[Note – To Watch This Video without glitches/interruptions, It may be best to download it first]
Tech Tip: Insert Screenshots Directly Into Outlook Emails
Outlook includes a built-in Screenshot tool that lets you quickly capture a window or part of your screen and insert it straight into an email, saving time and avoiding the need to use separate screenshot tools.
How to do it
Outlook:
– Open a new email message.
– Place your cursor where you want the image to appear.
– Select the Insert tab in the ribbon.
– Click Screenshot.
– Choose one of the available open windows to insert it in full, or select Screen Clipping to capture a specific area of your screen.
The screenshot is then inserted directly into your email, making it easier to explain issues clearly and keep conversations moving without extra attachments or back and forth.
Note: This feature is only available in the desktop version of Outlook on Windows.
Government Offers Free AI Training for All UK Adults
UK adults are being offered free, government-benchmarked AI training for work as part of a national programme to upskill 10 million people by 2030 and address low confidence and adoption of artificial intelligence across the economy.
UK Government Expands Free AI Training Programme
The UK government has announced a major expansion of its national AI skills programme, making free AI training available to every adult in the country through the AI Skills Boost initiative. Led by the Department for Science, Innovation and Technology in partnership with Skills England, the programme is being positioned as a response to growing concerns about workforce readiness as artificial intelligence becomes more widely embedded across workplaces.
10 Million People By 2030
The expansion builds on a commitment made in June 2025, when government and industry partners first set out plans to train 7.5 million workers in AI-related skills. The latest announcement increases that ambition to 10 million people by the end of the decade, equivalent to nearly a third of the UK workforce, and frames the initiative as the largest targeted training programme since the creation of the Open University.
Who Can Access The Training And How?
The training is open to all UK adults and is delivered online through the government’s AI Skills Hub, a free platform where users can create a learning profile and follow a structured learning journey. No prior technical knowledge is required, and the courses are designed to be accessible alongside existing work or caring commitments.
Courses vary in length, with some taking under 20 minutes to complete, while others run for several hours. Participation is voluntary, and learners can choose which courses to take based on their role, interests or level of confidence with digital tools. The government has said that NHS staff and local government employees will be among the first groups actively encouraged to take part, supported by their employers and representative bodies.
What Do The Courses Teach?
The focus of the training is on practical workplace use rather than technical development of AI systems. For example, courses concentrate on helping workers use commonly available AI tools safely and effectively as part of everyday tasks.
This includes learning how to write and refine prompts for generative AI tools, use AI to draft text and create content, automate routine administrative processes, and interpret simple AI dashboards to identify trends. The training also covers responsible use, including understanding the risks, limitations and potential consequences of using AI at work.
All approved courses have been assessed against Skills England’s AI foundation skills for work benchmark, which sets out a nationally defined baseline for AI literacy in the workplace. Anyone who completes a course that meets the benchmark receives a government-backed virtual AI foundations badge, which can be used on CVs and professional profiles to demonstrate recognised skills.
Why The Government Is Prioritising AI Skills
The expansion of AI training reflects evidence that AI adoption in the UK remains uneven and that confidence among workers is low. For example, research published alongside the announcement found that only 21 per cent of UK workers currently feel confident using AI in their jobs. Business adoption data suggests that as of mid-2025 only around one in six UK businesses were using AI at all, with much lower uptake among small and micro businesses.
Government analysis suggests that improving adoption and confidence could deliver significant productivity gains. Ministers estimate that wider use of AI could unlock up to £140 billion in additional annual economic output by reducing time spent on routine tasks and enabling workers to focus on higher value activity.
Technology Secretary Liz Kendall highlighted how the training is intended to ensure the benefits of AI are widely shared, saying, “We want AI to work for Britain, and that means ensuring Britons can work with AI,” adding that, “Change is inevitable, but the consequences of change are not. We will protect people from the risks of AI while ensuring everyone can share in its benefits.”
The Role Of Industry And Public Sector Partners
Delivery of the programme relies on a large partnership between government, industry and public sector organisations. For example, founding partners including Accenture, Amazon, Google, IBM, Microsoft, Salesforce, Sage and SAS have been joined by a wider group that now includes the NHS, British Chambers of Commerce, Federation of Small Businesses, Institute of Directors, Local Government Association, Cisco, Cognizant, Multiverse, Pax8 and techUK.
Industry partners are responsible for developing many of the courses hosted on the AI Skills Hub, while representative organisations are expected to promote the training to their members and workforces. The involvement of the NHS, the UK’s largest employer, is intended to support large scale uptake in the public sector and reinforce the relevance of AI skills beyond technology focused roles.
Phil Smith, Chair of Skills England, has said the benchmark was designed to provide clarity for both learners and employers about what AI skills are needed for work. He said the digital badges awarded on completion would provide clear recognition of learning and help set consistent standards for AI upskilling across the economy.
Funding And Wider Skills Measures
The training offer forms part of a broader package of measures aimed at preparing the UK workforce for AI-driven change. For example, the government has announced £27 million in funding for a new TechLocal scheme, part of the wider £187 million TechFirst programme, which will support local employers and education providers to develop AI-related jobs, professional practice courses, graduate traineeships and work experience opportunities.
Alongside this, the government has launched applications for the Spärck AI Scholarship, which will fund up to 100 master’s students in AI and STEM subjects at nine UK universities. The scholarships will cover tuition and living costs while providing access to industry placements and mentoring.
A new AI and the Future of Work Unit has also been established to monitor the economic and labour market impact of AI. Supported by an expert panel drawn from business, academia and trade unions, the unit is intended to provide evidence-based advice on when policy interventions may be needed to support workers and communities as roles and skills evolve.
The Implications For Employers And Businesses
For employers, particularly small and medium-sized enterprises, the programme offers a low-cost route to building basic AI capability across teams. Business groups including the Federation of Small Businesses and the British Chambers of Commerce have welcomed the initiative, citing uncertainty among employers about what AI skills staff need and how to support responsible adoption.
Large employers involved in the programme have pointed to their own experience of rolling out AI tools internally, noting that productivity gains depend heavily on shared understanding and confidence rather than access to technology alone. The government argues that a nationally recognised benchmark will help employers set clearer expectations and reduce the risk of misuse or unrealistic assumptions about AI.
Criticisms And Questions
Despite broad support, the initiative has attracted criticism from some policy groups and professional bodies. For example, the Institute for Public Policy Research has warned that short, tool-focused courses risk oversimplifying what it means to be prepared for AI-enabled work. Critics argue that effective adaptation also requires judgement, critical thinking, leadership and organisational change, which cannot be delivered through brief online modules alone.
There are also questions about how impact will be measured over time. For example, while the government has committed to reaching 10 million workers by 2030, it has not yet set out detailed plans for tracking completion rates, long-term skills retention or productivity outcomes across different sectors. Concerns have also been raised about the mix of free and subsidised courses on the AI Skills Hub and whether this could cause confusion about access.
The government has said the AI Skills Boost programme will continue to evolve, with new courses, partners and benchmarks added as workplace use of AI develops and expectations around skills mature.
What Does This Mean For Your Business?
The expansion of free AI training marks a clear attempt by government to address one of the most persistent barriers to AI adoption in the UK, which is a lack of confidence and shared understanding rather than access to technology itself. By setting a national benchmark and backing it with widely accessible courses, the programme establishes a common baseline for what it means to use AI responsibly at work, something many employers and workers have so far lacked.
For UK businesses, particularly small and medium-sized firms, the initiative could lower the practical and financial threshold for experimenting with AI tools in everyday operations. A clearer definition of core skills may help employers move beyond uncertainty and begin integrating AI in measured, realistic ways, while also supporting better internal governance and expectations around use. Larger organisations and public sector bodies may benefit from a more consistent skills foundation across teams, reducing fragmentation and uneven uptake.
For workers, the availability of short, recognised courses offers a route to building confidence without committing to formal retraining or specialist qualifications. The emphasis on practical use, risk awareness and responsible adoption reflects an acknowledgement that AI will increasingly sit alongside existing roles rather than replace them outright in the near term.
At a national level, the programme aligns skills policy more closely with the government’s wider ambitions on productivity, economic growth and technological adoption. Whether it delivers lasting impact will depend on uptake, the quality of training, and how effectively it connects to broader workforce development and organisational change. The creation of the AI and the Future of Work Unit suggests an awareness that skills alone will not resolve all challenges, but it also places responsibility on government, employers and industry partners to ensure the transition is managed in a way that supports workers and delivers tangible economic benefit.
Google Integrates Gemini Into Chrome To Enable Agentic Browsing
Google has announced that it has begun integrating its Gemini artificial intelligence system directly into the Chrome browser as part of a wider effort to turn everyday web browsing into a more automated and assistant-led experience.
Why?
Chrome remains the world’s most widely used web browser, accounting for over 70 per cent of global desktop usage (StatCounter), and Google’s latest changes seem to reflect growing pressure from AI-focused rivals offering built-in assistants and automated task handling. For example, over the past year, browsers and browser features from Microsoft, OpenAI-backed projects, Perplexity and Opera have increasingly promoted AI agents as a way to reduce manual searching, form filling and comparison across multiple websites.
Therefore, rather than replacing Chrome or launching a separate AI browser, Google is embedding Gemini directly into the existing product. The aim is to reshape how users interact with websites while preserving Chrome’s central role in daily computing and maintaining continuity for its vast installed user base.
Moving Gemini From A Floating Tool To A Built-In Side Panel
Google first added Gemini to Chrome in 2024, but its early implementation was limited. The assistant appeared in a floating window that sat apart from the main browsing experience and offered only limited contextual awareness. This latest update replaces that approach with a side panel that sits alongside web pages and can be opened across tabs.
According to Parisa Tabriz, Vice President of Chrome, the intention is to allow users to work across the web without losing context. In a Google blog post announcing the changes, she wrote that the new side panel “can help you save time and multitask without interruption” by letting users “keep your primary work open on one tab while using the side panel to handle a different task”.
This design allows Gemini to analyse the page currently being viewed, reference other open tabs, and respond to questions without forcing users to break their workflow. When several tabs originate from the same site or topic, such as product listings or reviews, Gemini can treat them as a related group, making it possible to summarise information or compare options across pages.
When and Where?
The update is rolling out now to Chrome users in the US on Windows, macOS and Chromebook Plus devices, extending availability beyond the platforms supported during earlier testing.
Built On Gemini 3 And Multimodal Capabilities
The new Chrome features are built on Gemini 3, which Google describes as its most capable AI model so far. Gemini is a multimodal system, meaning it can work with text, images and other structured inputs rather than relying solely on written prompts.
Google says this capability supports its aim of making Chrome more useful during complex tasks. In its announcement, the company described Gemini in Chrome as “an assistant that helps you find information and get things done on the web easier than ever before”, particularly when tasks involve multiple steps or different forms of content.
Multimodal understanding also enables Gemini to work directly with images viewed in the browser. For example, through integration with Google’s Nano Banana tool, users can modify images without downloading files or opening separate applications. Google appears to be positioning this as a practical feature for tasks such as visual planning or transforming information into graphics while remaining within the same browsing session.
Tighter Integration With Google Services
A key element of Google’s approach is deeper integration between Chrome and its wider ecosystem of services. Gemini in Chrome supports Connected Apps, including Gmail, Calendar, YouTube, Maps, Google Shopping and Google Flights.
With user permission, Gemini can reference information from these services to help complete tasks. In its announcement, Google highlighted examples such as travel planning, where Gemini can locate event details from an email, check flight options, and draft a message to colleagues about arrival times without requiring the user to move between applications.
Google has also confirmed that its Personal Intelligence feature will be brought to Chrome in the coming months. This feature allows Gemini to retain context from previous interactions and tailor responses over time. Tabriz stated that users remain in control, writing that people can opt in and choose whether to connect apps, with the ability to disconnect them at any time.
From Autofill To Agentic Browsing
The most substantial development is probably Google’s move towards agentic browsing, which refers to software systems capable of carrying out tasks across websites on a user’s behalf. For subscribers to Google AI Pro and AI Ultra in the United States, Chrome now includes a feature called auto browse.
Google is presenting auto browse as an extension of existing automation rather than a replacement for user involvement. In the blog post, Tabriz wrote, “For years, Chrome autofill has handled the small stuff, like automatically entering your address or credit card, to help you finish tasks faster.” She added that Chrome is now moving “beyond simple tasks to helping with agentic action”.
Auto browse is designed to handle multi-step workflows such as researching travel options, collecting documents, filling in online forms, requesting quotes, or managing subscriptions. Google says early testers have used it to schedule appointments, assemble tax documents, file expense reports and renew driving licences.
More advanced scenarios combine multimodal input and commerce. For example, Google describes cases where Gemini can identify items shown in an image, search for similar products online, add them to a shopping basket, apply discount codes and remain within a set budget. When sensitive actions are involved, such as signing in or completing purchases, auto browse pauses and asks the user to take control.
Google has stated that its AI models are not exposed to saved passwords or payment details, even when Chrome’s password manager is used to support these actions. The company says auto browse is designed to request explicit confirmation before completing actions such as purchases or social media posts.
Commercial Context And Industry Resistance
Google’s decision to deepen Gemini’s role in Chrome comes amid intensifying competition around AI-driven browsing and automation. For example, Microsoft has integrated similar capabilities into Edge, while newer browsers have been designed from the outset around the use of AI agents.
There is also increasing interest in agent-led online commerce. For example, management consultancy McKinsey has projected that agentic commerce for business-to-consumer retail in the United States could reach $1 trillion by 2030. Google has indicated that Chrome will support its Universal Commerce Protocol, an open standard developed with companies including Shopify, Etsy, Wayfair and Target, which is intended to allow AI agents to carry out transactions in a structured and authorised way.
At the same time, some websites and platforms have begun limiting automated access or requiring explicit human review for transactions. Google appears to be positioning auto browse as a more controlled approach, with human confirmation built into sensitive steps, as it explores how agentic browsing can operate within existing legal and commercial frameworks.
What Does This Mean For Your Business?
Google’s decision to embed Gemini directly into Chrome seems to point to a future where the browser becomes an active participant in work rather than a passive gateway to information. For users, this could concentrate research, comparison and administrative tasks inside a single interface that already sits at the centre of daily digital activity. The immediate impact is likely to be incremental rather than transformational, with benefits most visible in time saved on repetitive or fragmented tasks, balanced against ongoing limits around accuracy, intent recognition and website compatibility.
For UK businesses, the changes could have practical implications across productivity, procurement and digital workflows. For example, tools such as auto browse could reduce the time staff spend on routine administration, travel planning, expense management and supplier research, particularly for small and medium-sized organisations without dedicated support teams. At the same time, businesses that rely on web traffic, online forms or e-commerce will need to consider how agent-led browsing interacts with existing processes, security controls and customer journeys, especially as automated interactions become more common.
Website operators, retailers and platforms face a more complex picture, weighing potential efficiency gains against concerns over loss of control, while regulators and standards bodies are paying closer attention to how automated agents access data and complete transactions. Google’s emphasis on user confirmation, permissions and open standards reflects these pressures, while also highlighting that agentic browsing remains an evolving area. Chrome’s scale gives Google a strong position in shaping how this develops, although wider adoption and trust are likely to depend on how reliably these tools perform in real-world conditions rather than on their technical ambition alone.
AI-Written Virus Marks a New Step Towards Lab-Designed Life
A research team in California has demonstrated that artificial intelligence can design a fully synthetic virus from scratch, raising questions about how life itself may be engineered in the future.
What Has Been Developed?
The work centres on a new synthetic virus known as Evo-Φ2147, created using a generative AI model developed by researchers at Stanford University (in Silicon Valley, California) in collaboration with the Arc Institute and the University of California, Berkeley. The research was led by Brian Hie, who runs Stanford’s Laboratory of Evolutionary Design and works at the intersection of machine learning and biology.
Evo-Φ2147 was generated using Evo 2, an advanced version of Evo, a large language model for DNA. Instead of predicting words or images, Evo analyses genetic sequences and learns the underlying patterns that govern how DNA, RNA and proteins function together inside living organisms. Once trained, it can generate entirely new genetic sequences that have never existed in nature.
Asked The Evo 2 Model To Make The Virus
As a proof of concept, the researchers asked Evo 2 to design new bacteriophages, viruses that infect bacteria rather than humans. The model then generated 285 complete viral genomes, all designed computationally rather than derived from natural viruses. Sixteen of those synthetic viruses were then tested in the lab and shown to successfully infect and kill Escherichia coli, a bacterium responsible for serious infections and a growing problem due to antibiotic resistance.
Evo-Φ2147 emerged as one of the most effective designs and, while it remains simple by biological standards, containing just 11 genes, it demonstrated that an AI-designed genome could function inside a living cell exactly as intended.
Why Evo-Φ2147 Matters Scientifically
What makes Evo-Φ2147 scientifically so significant is not the virus itself, but the method used to create it. For the first time, researchers have shown that an AI system can design a complete, functional genome at once, rather than tweaking or modifying existing biological sequences.
In the paper published in Science, the authors describe Evo as “a genomic foundation model that enables prediction and generation tasks from the molecular to the genome scale.” Put simply, this means that Evo is AI that can understand DNA, predict what genetic changes will do, and design new genetic code, from tiny DNA parts right up to almost whole genomes (the complete set of genetic instructions inside an organism).
Trained
The model was trained on 2.7 million prokaryotic and phage genomes, representing around 300 billion DNA nucleotides. This scale allowed Evo to learn how tiny changes at the level of individual DNA bases can affect the fitness and behaviour of an entire organism.
The researchers emphasised that Evo operates at single-nucleotide resolution (at the level of individual DNA letters) and across very long sequences, up to 131,000 DNA bases at once. This matters because even the simplest microbes contain millions of base pairs, and previous AI tools struggled to capture long-range genetic interactions.
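To make “single-nucleotide resolution” a little more concrete, the toy Python sketch below tokenises a DNA string one letter at a time, which is roughly how a character-level language model sees a genome. It is an illustration only and is not Evo’s actual tokeniser or model code.

```python
# Toy illustration of single-nucleotide (character-level) tokenisation.
# This is NOT Evo's actual tokeniser; it simply shows how a DNA sequence can be
# presented to a language model one base at a time.
VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3}

def tokenise(dna: str) -> list[int]:
    """Map each nucleotide to an integer ID, one token per base."""
    return [VOCAB[base] for base in dna.upper() if base in VOCAB]

sequence = "ATGCGTACGTTAG"  # a short, made-up fragment
print(tokenise(sequence))   # [0, 3, 2, 1, 2, 3, 0, 1, 2, 3, 3, 0, 2]

# A model working at this resolution predicts the next base given all the
# previous ones, over contexts of up to roughly 131,000 bases in Evo's case.
```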
In the study, Evo was able to generate DNA sequences exceeding one million base pairs that showed realistic genome-like structure, including gene clusters and regulatory patterns seen in natural organisms. The researchers said that Evo “learns both the multimodality of the central dogma and the multiscale nature of evolution,” meaning it understands how DNA, RNA and proteins interact across molecular, cellular and organism-wide levels.
Evo-Φ2147 demonstrates that this understanding is not purely theoretical. It translated into a working biological system that could infect bacteria and replicate within them.
Is It Really “Life”?
Describing Evo-Φ2147 as life is somewhat controversial. It does behave like a virus, which sits in a grey area between living and non-living systems. It contains genetic information, interacts with a host, and replicates using cellular machinery. However, it cannot reproduce independently and lacks the complexity associated with autonomous life.
The researchers themselves are being quite cautious, perhaps because Evo-Φ2147 does not meet most biological definitions of life, and its genome is vastly simpler than even the smallest free-living organisms. To put that in context, the smallest known bacterial genomes contain around 580,000 DNA base pairs, while the human genome contains roughly 20,000 genes, compared with Evo-Φ2147’s 11.
What makes this situation a bit different is that the genome was not discovered or evolved through natural selection, but was written intentionally by an AI system. British molecular biologist Adrian Woolfson described this as a turning point, arguing that evolution has historically been blind, while genome-scale AI introduces foresight and design into biology for the first time.
This is why some researchers view Evo-Φ2147 as an early step towards lab-grown life, even if it does not yet qualify as life in a strict sense.
How This Fits Into The World of Synthetic Biology
Synthetic biology has long aimed to redesign living systems, but progress has typically relied on modifying existing organisms. Evo represents a move from editing life to generating it computationally.
Earlier advances such as CRISPR gene editing allowed scientists to cut and paste DNA with precision. Evo goes further by designing entire genetic systems at once. In the Science paper, the authors reported that Evo successfully generated novel CRISPR-Cas systems and transposable elements that were validated experimentally, marking “the first examples of protein-RNA and protein-DNA codesign with a language model.”
This essentially places Evo within a growing movement to treat biology as an information science. DNA becomes a form of code, evolution a dataset, and AI a design engine capable of exploring biological possibilities far faster than natural processes or traditional lab work.
The researchers have explicitly framed Evo as a foundation model, comparable to large language models in AI, designed to underpin many downstream applications rather than a single use case.
Ethical, Security and Governance Questions
The ability to design and generate complete genetic systems using AI also raises legitimate concerns about misuse, because the same tools that can create beneficial biological systems could, in theory, be applied in harmful ways. The Evo team addressed this issue directly by excluding viruses that infect humans and other eukaryotes from the training data.
For example, in the published Science paper, the authors warned that genome-scale AI “simultaneously raises biosafety and ethical considerations” and called for “clear, comprehensive guidelines that delineate ethical practices for the field.”
They pointed to frameworks such as those developed by the Global Alliance for Genomics and Health as a starting point, stressing the need for transparency, international cooperation and shared responsibility.
Importantly, in documenting their discovery, the researchers seem to have avoided too much sensationalism. For example, Evo does not enable the creation of dangerous organisms overnight, and its outputs remain constrained by biological reality, laboratory validation, and existing safety controls. The risks are real, but incremental rather than immediate.
Its Value to Humanity (and Business)
The most immediate promise of this discovery lies in medicine and biotechnology. For example, AI-designed bacteriophages could offer new ways to fight antibiotic-resistant infections, a growing global health threat. The researchers also noted that, had similar tools been available during the COVID-19 pandemic, vaccine development timelines could have been dramatically reduced.
Beyond healthcare, genome-scale design could also influence agriculture, materials science, and environmental remediation. The Stanford team highlighted potential applications such as reprogramming microbes to improve photosynthesis, capture carbon, or break down microplastics.
For businesses, this could signal a future where biological design cycles become faster, more predictable, and more software-driven. Companies working in pharmaceuticals, bio-manufacturing, and sustainable materials are likely to be among the earliest beneficiaries, while regulators and insurers will face new questions about oversight and risk.
Challenges and Questions
Despite the technical breakthrough, some significant challenges remain. Evo’s generated genomes still lack many features found in natural organisms, including full sets of essential genes and robust regulatory systems, which is why the researchers describe current genome-scale outputs as “blurry images” of life that capture high-level structure while missing fine-grained detail.
Critics also argue that calling such systems a step towards creating life risks overstating what has actually been achieved. It is worth noting here that Evo accelerates design, but it does not eliminate the complexity, uncertainty, and failure rates inherent in biology.
Other critics have pointed to possible governance gaps, particularly around who decides what kinds of genomes should or should not be designed. As Woolfson put it, society will need to decide “who is going to define the guard rails” as these tools become more capable.
What Evo-Φ2147 ultimately represents is not the arrival of artificial life, but a clear signal that the boundary between computation and biology is rapidly dissolving, with consequences that science, industry, and society are only beginning to understand.
What Does This Mean For Your Business?
This research shows that AI is no longer just analysing biology but beginning to shape it, turning genome design into something closer to a computational process that is then tested in the lab. Evo-Φ2147 does not redefine life, but it does change how genetic systems can be created and refined, replacing slow trial-and-error approaches with AI-driven design followed by targeted validation.
The wider impact of this capability lies in what it could unlock, because faster genome design has the potential to accelerate medical research, support the development of new treatments, and shorten response times during future health crises, while also increasing the importance of clear ethical oversight and realistic safety governance. For UK businesses operating in life sciences, pharmaceuticals, and sustainable manufacturing, this development points towards shorter development cycles and a growing reliance on advanced computing and biological expertise working together.
Taken together, Evo-Φ2147 highlights how quickly the boundary between computation and biology is fading, placing responsibility for how these tools are used not just with researchers, but with regulators, businesses, and wider society that will ultimately shape where genome-scale AI is allowed to go next.