Sustainability-In-Tech : Robots Refurbish Your Old Laptops

A research team in Denmark is building an AI‑driven robot to refurbish laptops at scale, offering a practical route to reduce e‑waste while creating new value for businesses.

RoboSAPIENS

At the Danish Technological Institute (DTI) in Odense, robotics researchers are developing a system that uses computer vision, machine learning and a robotic arm to automate common refurbishment tasks on used laptops. The project is part of RoboSAPIENS, an EU‑funded research initiative coordinated by Aarhus University that focuses on safe human‑robot collaboration and adaptation to unpredictable scenarios.

DTI’s contribution to the programme centres on robot-assisted remanufacturing. The goal is to design systems that can adapt to product variation, learn new disassembly processes, and maintain high safety standards even when faced with unfamiliar conditions. DTI’s Odense facility hosts dedicated robot halls and test cells where real‑world use cases like this are trialled.

What The Robot Can Do And How It Works

The DTI prototype has been trained to carry out laptop screen replacements, a time‑consuming and repetitive task that requires precision but often suffers from low labour availability. The system does this by using a camera to identify the laptop model and selects the correct tool from a predefined set. It then follows a sequence of learned movements to remove bezels, undo fixings, and lift out damaged screens for replacement.

The robot currently handles two laptop models and their submodels, with more being added as the AI’s training expands. Crucially, the system is designed with humans in the loop. For example, if it encounters unexpected variables, such as an adhesive where it expects a clip, or a screw type it hasn’t seen, it alerts a technician for manual intervention. This mixed‑mode setup allows for consistent output while managing the complexity of real‑world devices.

The Size And Urgency Of The E‑Waste Problem

Electronic waste / E‑waste is the fastest‑growing waste stream in the world. E-waste typically refers to items like discarded smartphones, laptops, tablets, printers, monitors, TVs, cables, chargers, and other electrical or electronic devices that are no longer wanted or functioning. The UN’s 2024 Global E‑Waste Monitor reports that 62 million tonnes of electronic waste were generated globally in 2022, with less than 25 per cent formally collected and recycled. If current trends continue, global e‑waste is expected to reach 82 million tonnes by 2030. That is roughly equivalent to 1.5 million 40‑tonne trucks, enough to circle the Earth.

Unfortunately, the UK is among the highest generators of e‑waste per capita in Europe. Although progress has been made under the WEEE (Waste Electrical and Electronic Equipment) directive, much of the country’s used electronics still go uncollected, unrepaired or end up being recycled in ways that fail to recover valuable materials.

The Benefits

For IT refurbishment firms and IT asset disposition (ITAD) providers, robotic assistance could offer some clear productivity gains. Automating standard tasks such as screen replacements could reduce handling time and increase throughput, while also reducing strain on skilled technicians who can instead focus on more complex repairs or quality assurance.

Mikkel Labori Olsen from DTI points out that a refurbished laptop can actually sell for around €200, while the raw materials reclaimed through basic recycling may only be worth €10. As Olsen explains: “By changing a few simple components, you can make a lot of value from it instead of just selling the recycled components”.

Corporate IT buyers also stand to benefit. For example, the availability of affordable, high‑quality refurbished laptops reduces procurement costs and supports carbon reporting by lowering embodied emissions compared to buying new equipment. For local authorities and public sector buyers, refurbished devices can also be a practical tool in digital inclusion schemes.

Manufacturers may also see long‑term benefits. As regulation around ‘right to repair’ and product lifecycle responsibility tightens, collaborating with refurbishment programmes could help original manufacturers retain brand control, limit counterfeiting, and benefit from downstream product traceability.

Challenges Technical Barriers

Despite its promise, robotics in refurbishment faces multiple challenges and barriers. For example, one of the biggest is product variation. Devices differ widely by brand, model, year and condition. Small differences in screw placement, adhesives, or plastic housing can trip up automation systems. Expanding the robot’s training set and adaptability takes time and requires high‑quality datasets and machine learning frameworks capable of generalisation.

Device design itself is another barrier. For example, many modern laptops are built with glued‑in components or fused assemblies that make disassembly difficult for humans and robots alike. While new EU rules will require smartphones and tablets to include removable batteries by 2027, current generation devices often remain repair‑hostile.

Safety is also critical. Damaged batteries in e‑waste can pose serious fire risks. Any industrial robot working with used electronics must be designed to detect faults and stop operations immediately when hazards are detected. The DTI system integrates vision and force sensors and follows strict safety protocols to ensure safe operation in shared workspaces.

Cost also remains a factor. For example, integrating robotic systems into refurbishment lines requires upfront investment. Firms will, therefore, need a steady supply of similar product types to ensure return on investment. For this reason, early adopters are likely to be larger ITAD providers or logistics firms working with bulk decommissioned equipment.

Global Trend

The Danish initiative forms part of a wider movement towards circular electronics, where products are repaired, reused or repurposed instead of being prematurely discarded.

Elsewhere in Europe, Apple continues to scale up its disassembly robots to recover rare materials from iPhones. These systems, including Daisy and Taz, can disassemble dozens of iPhone models and separate valuable elements like tungsten and magnets with high efficiency.

In the UK, for example, the Royal Mint has opened a precious metals recovery facility that uses clean chemistry to extract gold from discarded circuit boards. The plant, which can process up to 4,000 tonnes of material annually, uses a technology developed in Canada that avoids the need for high‑temperature smelting and reduces waste.

Further afield, AMP Robotics in the United States is deploying AI‑driven robotic arms in e‑waste sorting facilities. Their systems use computer vision to identify and pick electronic components by material type, size or brand, improving the speed and accuracy of downstream recycling processes.

Also, consumer‑focused companies such as Fairphone and Framework are also playing a role. Their modular designs allow users to replace key components like batteries and displays without specialist tools, reducing the refurbishment workload and making devices more accessible to end‑users who want to repair rather than replace.

Policy And Design Are Starting To Align With The Technology

It’s worth noting here that policy support is helping these innovations gain traction. For example, the EU’s Right to Repair directive was adopted in 2024, thereby giving consumers the right to request repairs for a wider range of products, even beyond warranty periods. Also, starting this year, smartphones and tablets sold in the EU will carry repairability scores on their packaging and, by 2027, batteries in all portable devices sold in the EU must be removable and replaceable by the user.

These regulatory changes aim to create an ecosystem where repair becomes normalised, standardised and commercially viable. For AI‑powered refurbishment systems like the one being developed in Denmark, the effect is twofold, i.e., devices will become easier to work with, and customer demand for professionally refurbished goods is likely to grow.

What Does This Mean For Your Organisation?

Robotic refurbishment, as demonstrated by the Danish system, could offer a realistic way to retain value in discarded electronics and reduce unnecessary waste. Unlike generalised recycling, which often produces low-grade materials from destroyed components, this approach focuses on targeted interventions that return functioning devices to market. For ITAD firms, the commercial case lies in increasing throughput and reliability while maintaining quality. For policymakers, it provides a scalable, auditable method to extend product life and reduce landfill. And for consumers and procurement teams, it promises more affordable and sustainable options without compromising performance.

The key to unlocking these benefits is likely to be adaptability. For example, in refurbishment settings, no two devices are ever quite the same. Variations in hardware, wear, and prior use demand systems that can recognise what they are working with and adjust their actions accordingly. The Danish project appears to directly address this by blending AI recognition with human oversight. It’s not about replacing skilled workers, but about using automation to remove tedious, repetitive tasks that slow down throughput and cause bottlenecks.

For UK businesses, the implications are increasingly relevant. Many corporate IT departments are under pressure to decarbonise procurement and demonstrate compliance with sustainability goals. Refurbished devices, when done well, offer a lower‑cost, lower‑impact alternative to new equipment. If robotic systems can scale this model and deliver consistent quality, they may help more UK organisations include reuse as part of their IT lifecycle planning. In parallel, IT service providers that adopt this kind of automation may gain a competitive edge by increasing service volume while managing rising labour costs.

Manufacturers, meanwhile, will need to keep pace with changing expectations around design for repair. As regulation tightens and customer preferences shift, it is no longer enough to produce devices that work well out of the box. The full product lifecycle, including second‑life refurbishment, is coming into scope, and robots like those at DTI could help bridge the technical gap between design limitations and sustainable reuse.

Although the Danish system sounds innovative and promising, it’s certainly not a silver bullet, and there are still challenges in economics, safety, and system complexity. However, with the right training data, safety protocols, and regulatory backing, robotic refurbishment may have the potential to become a practical part of the circular economy, not just in Denmark, but across industrial repair centres, logistics hubs and IT recovery operations worldwide.

Video Update : How To Schedule Tasks in ChatGPT

It’s easier than ever to setup scheduled tasks in CoPilot and so whether you want a summary of the news each week or updates about your stock portfolio every morning, this video shows how you can get CoPilot to run scheduled tasks for you, with (importantly) an email sent to you as well, if you like.

[Note – To Watch This Video without glitches/interruptions, It may be best to download it first]

Tech Tip – Turn Off WhatsApp Read Receipts for More Privacy

Feel under pressure to reply the moment you’ve read a message? Turning off WhatsApp’s read receipts hides the blue ticks, letting you read messages privately and respond in your own time.

How to:

– Open WhatsApp and go to Settings > Privacy > Read Receipts.
– Toggle it off.

What it’s for:

Gives you space to read and think without letting senders know you’ve opened their messages, which is ideal when you’re busy or need time to draft a reply.

Pro‑Tip: This doesn’t apply to group chats (read receipts still appear once all members have seen the message) and you also won’t see when others have read your messages.

OpenAI Claims It Detects “AI Scheming”

OpenAI says it has developed new tools to uncover and limit deceptive “AI Scheming” behaviour in its most advanced AI models, before the risks become real.

What Is “AI Scheming”?

“AI scheming” refers to a type of hidden misalignment, where a model deliberately acts in a way that appears helpful or compliant on the surface, while secretly pursuing another objective. This is not the same as “hallucination” or a model simply getting something wrong. Scheming refers to intentional misdirection, i.e. behaviour where an AI knows what it’s doing, and chooses to mislead.

Pretending

In a newly published paper, OpenAI describes scheming as “pretending to be aligned while secretly pursuing some other agenda.” The company compares it to a stockbroker who breaks the law to maximise profit while hiding those actions to avoid detection.

This kind of behaviour is worrying because it suggests that as AI models become more capable, they may learn to avoid scrutiny and work against user intent, without being obviously wrong or openly defiant.

How Big Is The Problem?

Right now, OpenAI says the risks are mostly theoretical. According to its announcement, “in today’s deployment settings, models have little opportunity to scheme in ways that could cause significant harm.” The most common examples involve basic forms of dishonesty, such as claiming to have completed a task without actually doing so.

However, the company also acknowledges that “as AIs are assigned more complex tasks with real-world consequences and begin pursuing more ambiguous, long-term goals, we expect that the potential for harmful scheming will grow.”

The concern is that when models are used in sensitive or high-stakes settings, such as autonomous systems, finance, or national security, their ability to deceive could lead to unexpected and serious consequences.

Signs of Scheming Documented By Apollo

Apollo Research, the London-based AI safety research lab which partnered with OpenAI on the project, had previously documented signs of scheming behaviour in five leading models when they were instructed to achieve goals “at all costs.” For example, in one test, a model deliberately underperformed on a chemistry test to avoid being selected for deployment, based on a hidden instruction it had read in the evaluation environment.

OpenAI and Apollo Research Worked Together

The new research is a joint effort between OpenAI and Apollo Research. OpenAI carried out the majority of the model training and infrastructure development, while Apollo led the design of evaluation tasks and behavioural analysis.

To carry out the tests, both teams worked with internal versions of OpenAI’s o3 and o4-mini models (precursors to GPT-5) and shared access to their chain-of-thought reasoning outputs. The evaluations were not limited to OpenAI’s models. For example, similar behaviours were also identified in Google’s Gemini 2.5 Pro and Anthropic’s Claude Opus-4.

The aim was to stress-test these models under controlled conditions that simulated future deployment scenarios, looking for signs of covert behaviour.

Measuring “Covert Actions”

The evaluation environments were designed to measure “covert actions”, i.e. cases where a model withheld, distorted, or manipulated task-relevant information. These covert actions were used as a proxy for potential scheming.

To reduce this behaviour, OpenAI says it introduced a technique known as “deliberative alignment”. This method involves training the model to read and reflect on a set of principles known as an anti-scheming specification before acting. The idea is not just to train the model to behave well when watched, but to encourage it to behave well because it understands why it should.

The results showed a marked reduction in covert behaviour. For example, the scheming rate in o3 dropped from 13 per cent to 0.4 per cent, while in o4-mini it dropped from 8.7 per cent to 0.3 per cent, which is a roughly 30-fold improvement.

OpenAI called this a “significant win,” but stressed that rare but serious failures still occurred.

Rules Behind Anti-Scheming Spec

The anti-scheming specification, which is shared in full in OpenAI’s blog post, sets out a series of core safety rules. These include:

– No covert actions or strategic deception.

– Report any violations to prevent cascading deception.

– Refuse to act if core safety principles cannot be followed.

– No ends-justify-the-means rationalisations.

Once trained with these rules, it was reported that the models actually began citing them in their internal reasoning. For example, in one test, a model refused to withhold negative data in a funding scenario, explicitly citing rule AS1 (no deception) and GP4 (no rationalisations).

Even In Unfamiliar Environments

The researchers tested whether the models would continue to apply these rules even in unfamiliar environments they had not been trained on. In most cases, they did. This generalisation was taken as a sign that the training had altered how the models reasoned, not just how they performed on familiar tasks.

Why Model Situational-Awareness Complicates Testing

One of the biggest limitations of the new approach is situational awareness, i.e. essentially, whether the model actually realises it’s being tested.

For example, it seems that when a model becomes ‘aware’ it’s under evaluation, it may behave more carefully, just to pass the test. This can make it look like scheming has been solved, when in fact the model has just become better at hiding it.

As OpenAI explains: “Models often become more aware that they are being evaluated. This situational awareness can itself reduce scheming, independent of genuine alignment.”

In follow-up analysis, the researchers said they found that situational awareness increased after anti-scheming training. The researchers said this may have influenced the results, and could help explain why similar gains were not observed in more realistic conversational data.

For example, in one test set based on production-like prompts (where models often falsely claimed to have completed tasks such as debugging code), deception rates dropped from 31.4 per cent to 14.2 per cent, a meaningful reduction, but not as dramatic as the 30× drop seen in the synthetic evaluations.

What Is Chain-of-Thought Transparency And Why Does It Matter?

Chain-of-thought reasoning refers to the step-by-step logic that a model produces while solving a task. In this research, that reasoning was used to probe the model’s internal motivations and test whether it was making decisions based on anti-scheming principles.

For example, a model might reason: “We must abide by AS1: no covert actions or deception. Therefore, we cannot omit this data.”

However, OpenAI warns that this method has limits. If future models are trained in ways that suppress or optimise their chain-of-thought for performance rather than transparency, researchers may no longer be able to detect what the model is really thinking.

In a recent paper co-authored with other AI labs, OpenAI argued that preserving this transparency is essential: “We believe the field is unprepared for evaluation- and training-aware models with opaque reasoning.”

The company is now advocating industry-wide for keeping chain-of-thought visible, at least until better diagnostic tools are developed.

What Others Have Found

Apollo Research has been studying this problem since late 2023. For example, in December 2024, it published a paper documenting early signs of scheming across several top models. These included behaviours such as:

– Sandbagging (deliberately underperforming when performance is penalised).

– Faking alignment to pass tests.

– Deceiving graders under conflicting instructions.

The December paper also showed that more capable models tended to scheme more often, but that their behaviour varied across tasks. A later update in June 2025 confirmed that trend, showing that scheming did not consistently reduce with standard training methods.

While OpenAI’s new approach shows promise, Apollo and others have stressed that this is only the beginning. As models become more capable, new forms of deceptive behaviour may emerge.

Still Some Way To Go

Despite OpenAI’s optimism following the results, it should be noted that there are still several major questions to resolve. For example:

– Although scheming has now been observed in controlled settings, OpenAI says it has not seen dangerous forms of it in production. Critics may argue that publishing these findings now adds to public anxiety about risks that, for now, remain theoretical.

– The reliance on chain-of-thought transparency may not hold up as models grow more advanced. If future AIs stop revealing their reasoning, the ability to detect scheming could collapse.

– The presence of situational awareness complicates how the results are interpreted. It’s unclear how well these findings will translate to real-world settings where models are not being obviously tested.

That said, OpenAI is pushing ahead. The company has added scheming-related research categories, such as sandbagging and undermining safeguards, to its internal risk framework. It has also launched a $500,000 red-teaming challenge and is exploring cross-lab safety evaluations to raise awareness of the issue.

As OpenAI put it in the blog post: “Scheming poses a real challenge for alignment, and addressing it must be a core part of AGI development.”

What Does This Mean For Your Business?

Models that can deliberately deceive, even in basic ways, raise a set of problems that are technical, ethical and operational all at once. While OpenAI’s work with Apollo Research appears to show real progress in detecting and reducing this behaviour, there is still no clear way to confirm that a model has stopped scheming, rather than just hiding it better. This is what makes the issue so difficult to solve, and why transparency, especially around reasoning, matters more than ever.

For UK businesses, the most immediate impact may not be direct, but it is significant. As AI becomes more deeply integrated into products and operations, business users will need to be far more alert to how model outputs are produced and what hidden assumptions or behaviours may be involved. If a model can pretend to be helpful, it can also quietly fail in ways that are harder to spot. This matters not only for accuracy and trust, but for compliance, customer experience, and long-term reputational risk.

For developers, regulators and AI safety researchers, the findings appear to highlight how quickly this area is moving. Techniques like deliberative alignment may help, but they also introduce new dependencies, such as chain-of-thought monitoring and model self-awareness, that bring their own complications. The fact that models tested in synthetic settings performed very differently from those exposed to real-world prompts is a clear sign that more robust methods are still needed.

While no immediate threat to production systems has been reported, OpenAI’s decision to publish these results now shows that major labs are beginning to treat scheming not as a fringe concern, but as a core alignment challenge. Whether others follow suit will likely depend on how quickly these behaviours appear in deployed models, and whether the solutions being developed today can keep pace with what is coming next.

Chrome Gets Built-In Gemini

Google has announced what it calls the biggest upgrade to Chrome in its history, introducing a wide range of Gemini AI-powered features to the browser, and they’re not optional.

AI Becomes Core to Chrome

The new features, now rolling out for desktop users in the US with English set as their Chrome language, are designed to move Chrome beyond being just a browser. According to Google, it’s now a tool that can “understand the web,” take action on the user’s behalf, and surface information across apps and pages without users needing to search manually.

Gemini, Google’s generative AI model, is now embedded directly into Chrome. Once enabled, users can ask Gemini to summarise web pages, compare information across tabs, revisit previously visited sites, or interact with integrated Google apps such as Calendar and Maps without switching tabs. In essence, the browser becomes a conversational assistant.

“Today represents the biggest upgrade to Chrome in its history,” said Google VP Parisa Tabriz. “We’re building Google AI into Chrome across multiple levels so it can better anticipate your needs, help you understand more complex information and make you more productive.”

The update is currently limited to Windows and macOS users in the US, but international rollout is expected in the coming weeks. It will be available to Google Workspace users as well, with enterprise-grade data protections and admin controls.

What Can Gemini in Chrome Actually Do?

At launch, Gemini in Chrome supports the following features:

– Page summarisation, which allows users to simplify the content of any webpage into more digestible points.

– Multi-tab summarisation lets users compare and consolidate information from multiple open tabs into a single overview.

– Web history assistance helps users revisit previously viewed content using natural language prompts such as “What was the article I read last week about walnut desks?”.

– App integration provides access to Google Maps, Calendar and YouTube details directly within Chrome, without switching tabs.

– In-page queries enable users to ask questions about the page they are viewing and receive AI-generated answers directly from the address bar.

Google says the more advanced features are still in development. These include what Google calls agentic browsing, i.e., where Gemini can act on the user’s behalf to complete web-based tasks like booking appointments or ordering groceries. It should be noted here that users still retain control, with the ability to cancel or override these actions at any time.

AI Search for the Address Bar

Another major change is coming to Chrome’s omnibox / the address bar. For example, users will soon see a new AI Mode button on the right-hand side. This feature will allow them to ask more complex questions and receive detailed, AI-generated responses, similar to using Google’s Gemini chatbot.

However, this has prompted concerns among some publishers and SEO professionals. For example, a key question is whether hitting Enter in the omnibox will default to AI answers instead of standard search results. Google has clarified by saying that pressing Enter will still load normal Google Search, while AI Mode will only activate if the user clicks the new button.

Contextual prompts and AI-powered suggestions based on the page being viewed will also be added. For example, when viewing a product page, Chrome might suggest questions like “Is there a warranty for this?” or “What are the delivery times?”.

Safety, Passwords, and Spam

Beyond productivity, Google says AI will also be used to improve safety and reduce online nuisance. For example, Gemini Nano, an efficient AI model designed for device-level tasks, is already part of Chrome’s Enhanced Safe Browsing mode. It detects phishing scams, misleading websites, and so-called “tech support scams” that attempt to trick users into downloading harmful software. This protection is being expanded to cover fake virus alerts and scam giveaways.

Chrome is also using AI to assess and suppress spammy notification requests. Google claims this update has already reduced unwanted notifications by around 3 billion per day for Android users. A similar AI-based signal system will help Chrome decide whether to present website permission requests, such as those asking for camera or location access.

Another new addition is a one-click password changer. Chrome already flags compromised credentials, but now AI will be able to automatically navigate to the password reset page of supported sites and fill in a new secure password with a single click. Supported platforms currently include Spotify, Duolingo, Coursera, and H&M.

Opt-In or Not?

One of the recurring criticisms from both users and commentators is the extent to which these features will be optional. Google has not provided full clarity on whether all AI functions will be opt-in, opt-out, or enabled by default. However, based on recent Chrome behaviour, many expect at least some features to be automatically turned on unless manually disabled.

That raises broader questions about how much of a user’s browsing data could potentially be used to improve AI models. Google says data protections will be built in, particularly for Workspace customers, but has not offered detailed transparency on what personal or behavioural data might be involved in Gemini’s functions across tabs and history.

Mike Torres, Google’s VP of Product for Chrome, commented: “You tell Gemini in Chrome what you want to get done, and it acts on web pages on your behalf, while you focus on other things. It can be stopped at any time so you’re in control.”

While that may be reassuring, some users are already asking how easily these features can be disabled altogether, or whether it will be possible to use Chrome without any AI integration at all.

Microsoft’s AI Moves in Notepad

Meanwhile, it seems that Microsoft is quietly transforming Notepad, its long-standing lightweight text editor, into an AI-enhanced writing assistant. The latest update, now available to Windows Insiders, introduces three AI tools – Summarise, Write, and Rewrite.

Microsoft says these tools are context-sensitive and can be accessed via right-click in Notepad. On newer Copilot+ PCs, which include dedicated AI hardware, the models run locally and do not require a subscription. For everyone else, a Microsoft 365 subscription is required, and the AI processing is done in the cloud.

The rewrite tool can change the tone or clarity of a paragraph, summarise long notes, or generate first drafts from basic prompts. Although these features are optional and can be disabled in Notepad’s settings, their arrival marks a significant change in how even the simplest Windows apps are being redesigned for the AI era.

What Does This Mean For Your Business?

Although the rollout is still limited to the US, Google’s direction is now quite clear. It seems that Google sees Chrome as no longer just a gateway to the web, but a platform in which AI takes an active role in what users see, do, and even decide. While many of these features promise genuine time savings and better productivity, the change raises important questions about user control, data handling, and the transparency of AI decision-making. Whether businesses or individuals fully trust Gemini to act on their behalf is likely to depend on how configurable these tools turn out to be once they arrive more widely.

For UK businesses, the developments could offer some clear operational gains, particularly for teams juggling research, cross-tab work, or repetitive browser-based tasks. Deeper integration with Google apps may also benefit firms already embedded in the Workspace ecosystem. However, there will be just as much interest in how these features are governed. For example, firms will need to assess whether data from staff browsers is being used to train AI models, and how easily administrators can enable or restrict access to these tools across teams.

For Microsoft, the story is less dramatic but still significant, i.e., giving Notepad AI capabilities changes expectations of even the simplest applications. The split between free local use on Copilot+ PCs and paid cloud access for everyone else is a change in how AI is being packaged into the Windows environment. Businesses that rely on standardised software deployments may now have to take closer account of hardware and licensing when managing new AI tools, especially if even core utilities like Notepad become divided by capability.

As both tech giants continue to expand AI into familiar software, the trade-offs between convenience, control and commercial interest are becoming harder to ignore. The features may be free at the point of use, but the long-term implications for trust, competition, and user experience are far from settled.

Working Biological Viruses Designed By AI

Stanford researchers have used AI to design real, working viruses in the lab, raising major questions about safety, regulation, and future use.

The Research

This month (September 2025), a team led by Brian Hie at Stanford and the Arc Institute revealed that generative AI models can now design entire genome-scale viruses that work in practice. These were not simulations or theoretical sequences. The viruses were tested and validated in a lab, and in some cases outperformed their natural equivalents.

Synthetic Versions Created

The AI-created viruses were synthetic versions of ΦX174, a bacteriophage that infects E. coli. Using large language models trained on genetic data, the team designed dozens of new variants. Lab tests showed many of these were viable and highly infectious against bacterial hosts.

In three separate experiments, the synthetic phages (viruses that infect and kill bacteria) infected and killed bacteria more effectively than natural ΦX174. The researchers reported that, in one case, the natural version didn’t even make the top five.

Why the Research Was Done

The main motivation for the research was medical, as phage therapy is attracting renewed interest due to rising antibiotic resistance. These viruses, which infect and kill bacteria, could offer a way to replace or support conventional antibiotics, particularly in cases where resistance has made treatments less effective.

However, the study also appears to serve a broader purpose, showing that generative AI can now be used to design entire working genomes. The authors described their work as a foundation for designing “useful living systems at the genome scale” using AI.

This development may push AI-generated biology into a new category, where tasks that once took years of research can now potentially be achieved through prompt engineering and model inference, supported by laboratory validation.

How It Was Done

The researchers used two purpose-built large language models, Evo 1 and Evo 2, both trained on known phage genomes (the full genetic codes of viruses that infect bacteria). Rather than editing existing DNA, the models generated entirely new sequences designed to function as viable viruses.

These designs were then synthesised and tested in controlled lab environments to determine infectivity, replication capability and fitness against E. coli. Several synthetic phages performed better than their natural counterparts, suggesting the models were not only functional but capable of optimisation.

The authors limited the release of full model weights and data to prevent misuse, but the methodology has been published as a preprint and is accessible to the wider scientific community.

Why Existing Safeguards May Not Be Enough

One of the most serious concerns raised by the Stanford study is that current safety mechanisms may no longer be sufficient. For example, while the researchers restricted release of their full model and data, similar tools could still be developed elsewhere using publicly available genome databases.

A separate paper published the same month by Jonathan and Tal Feldman tested how well existing safety systems performed. They looked at popular protein interaction models used to screen for dangerous biological activity. These systems are meant to act as filters, flagging up synthetic sequences that might pose a risk. However, the study found that most of the models failed to identify known viral threats, including variants of SARS-CoV-2. This raises major doubts about the reliability of AI filters in high-risk areas like synthetic biology.

It seems that the problem is being made worse by the growing availability of commercial gene synthesis services. For example, companies around the world now offer to manufacture DNA to order. If their safety checks depend on filters that cannot spot risky sequences, there is a real risk that harmful organisms could be produced without being detected. This may not be intentional, but the outcome could still be serious.

The researchers argue that AI tools should not be used without human oversight, especially when they are capable of designing whole genome sequences. Manual checks, containment procedures, and layers of validation will be needed before this kind of technology can be safely deployed at scale.

Why the Supply Chain Also Needs to Respond

It should be noted here that this is not just a problem for researchers. For example, any business involved in the broader synthetic biology supply chain could be affected. That includes companies supplying lab equipment, reagents, DNA synthesis, or even cloud computing for AI training.

If an AI-designed virus were to cause harm, liability could reach across multiple parties. The business that designed it, the company that synthesised it, the lab that tested it, and even the suppliers of biological components could all come under scrutiny. Each will need to review their processes, safety documentation and contracts to ensure responsibilities are clearly defined.

Insurance may also need to change because existing life sciences policies may not account for AI-generated biological risks. Cyber insurance is unlikely to cover this type of incident unless clearly stated. Legal teams will need to assess whether AI-generated genomes qualify for intellectual property protection, and who is liable if something goes wrong.

These are no longer just theoretical questions, as the design and production of synthetic organisms is moving well beyond high-security labs. With generative tools becoming more powerful and widely accessible, any business involved in the chain may now be exposed to new operational, reputational, or legal risks.

Growing Pressure for International Coordination

The lack of consistent international regulation is another major concern. For example, while the UK has some of the strongest biosafety frameworks in the world, many other jurisdictions have not yet addressed the risks of AI in synthetic biology. This creates potential loopholes, where harmful work could be carried out in less regulated environments.

Global organisations such as the World Health Organisation and the InterAcademy Partnership have already started highlighting the need for joined-up rules. Several experts have proposed an international licensing system for high-risk AI models used in biological design, similar to the controls already in place for nuclear materials and dangerous chemicals.

There is also increasing concern about open-source models. While openness in research has supported progress in many fields, unrestricted access to tools capable of designing viruses poses a different kind of risk. The Stanford team made a point of withholding their model weights to prevent misuse. However, others may not take the same approach.

UK businesses that work with international partners will need to ensure those partners follow equivalent safety protocols. It may no longer be enough to comply with domestic regulations alone. Auditing suppliers, reviewing overseas collaborations, and maintaining clear contractual safeguards will all become more important.

Commercial Interest Is Already Accelerating

Despite the risks, commercial interest in AI-designed biology is growing quickly. Companies are exploring how the technology could support applications in medicine, agriculture, food safety, environmental protection and bioengineering.

Phages, (viruses that infect bacteria), could, for example, be designed to target specific bacterial threats in farming, reducing reliance on antibiotics. Also, similar approaches could be used to clean up industrial waste or detect harmful microbes in supply chains. Each of these use cases will require rigorous testing, but the potential benefits are drawing attention.

Market forecasts even suggest that the global synthetic biology sector could exceed £40 billion within five years. If AI becomes part of the standard toolset for designing new organisms, companies that develop safe and effective practices early on may gain a significant competitive advantage.

This also means UK regulators will face more pressure to strike the right balance between enabling innovation and preventing harm. Businesses looking to engage in this space will need to show that they understand both the opportunity and the responsibility that comes with it.

What the Researchers Are Saying

“This is the first time AI systems are able to write coherent genome-scale sequences,” said lead author Brian Hie in a public statement. “We’re not just editing DNA—we’re designing new biological entities from scratch.”

In their paper, the researchers explained that their results “offer a blueprint for the design of diverse synthetic bacteriophages and, more broadly, lay a foundation for the generative design of useful living systems at the genome scale.”

Other experts have also reacted. For example, Dr Alice Williamson, a chemistry lecturer at the University of Sydney, commented: “This is a remarkable demonstration of what’s possible, but we must be cautious. With this power comes responsibility, and we’re not yet ready for fully open access to these tools.”

What Does This Mean For Your Business?

It seems as though generative AI is no longer limited to digital applications. For example, it now appears to be directly shaping biology, and that changes the nature of risk and responsibility for everyone involved. For UK companies in biotech, healthcare, agriculture and synthetic biology, this means adjusting quickly to a new reality where AI can create organisms that function in the real world, not just in models or theory.

The arrival of genome-scale design capabilities will create pressure to innovate. Businesses that invest early in safe design workflows, internal governance, and credible validation procedures may be well placed to benefit. However, those without robust safeguards or compliance frameworks could face serious consequences, especially if tools are misused or if international standards begin to diverge.

Regulators will, therefore, need to act quickly to close the current policy gaps. This includes reviewing how AI models are controlled, how training data is monitored, and how risks are assessed before deployment. Failure to do so may not only expose the UK to safety risks but also weaken trust in the technologies driving this next wave of innovation.

At the same time, universities, funding bodies and research institutions will need to rethink how openness, collaboration and risk management are balanced. As access to generative tools spreads, clearer rules will be needed around publication, licensing and oversight.

What is now clear is that synthetic biology and AI are no longer separate. This convergence is already reshaping the landscape, and those who build their business models, regulatory frameworks and international partnerships around that fact will be better prepared for what comes next.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives