Tech Insight : UK’s New Cyber Severity Scale
The UK’s Cyber Monitoring Centre (CMC) has now started categorising cyber events using a scale designed to assess the impact and severity of attacks (similar to the Richter scale for earthquakes).
What is the Cyber Monitoring Centre?
The Cyber Monitoring Centre (CMC) is an independent, non-profit organisation founded by the UK’s insurance industry to enhance trust in cyber insurance markets and improve national understanding of digital threats. Officially unveiled at a Royal United Services Institute (RUSI) event on 6 February 2025, the CMC has been operating behind the scenes for a year, refining its methodology before making its system publicly available.
How Does the Cyber Event Severity Scale Work?
The CMC has introduced a five-level categorisation system to rank cyber events based on their severity and financial impact. The scale ranges from one (least severe) to five (most severe), considering two key factors:
1. The proportion of UK-based organisations affected.
2. The overall financial impact of the event.
Only incidents with a potential financial impact exceeding £100 million, affecting multiple organisations, and with sufficient available data will be classified. The CMC will collect insights from polling, technical indicators, and other incident data, all reviewed by a Technical Committee of cyber security experts.
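The CMC has not published the precise boundaries between its five levels, but the two-factor logic can be illustrated with a short, purely hypothetical sketch in Python. Only the £100 million threshold and the one-to-five range below come from the CMC's stated criteria; the impact bands, the reach bands, and the way the two factors are combined are invented here for illustration.

```python
# Hypothetical sketch only: the CMC has not published its scoring
# algorithm, so the bands below are invented to illustrate how a
# two-factor, five-level scheme might work.

def categorise_event(financial_impact_gbp: float,
                     share_of_uk_orgs_affected: float):
    """Return a severity level 1-5, or None if the event is out of scope."""
    # Only events with a potential impact above £100m are classified.
    if financial_impact_gbp < 100_000_000:
        return None

    # Assumed financial-impact bands (illustrative, not official).
    impact_score = 1
    for threshold in (500e6, 1e9, 5e9, 10e9):
        if financial_impact_gbp >= threshold:
            impact_score += 1

    # Assumed reach bands: fraction of UK organisations affected (0-1).
    reach_score = 1
    for threshold in (0.01, 0.05, 0.10, 0.25):
        if share_of_uk_orgs_affected >= threshold:
            reach_score += 1

    # One plausible (assumed) way to combine the two factors is to let
    # the more severe of the two drive the overall level.
    return max(impact_score, reach_score)


# Example: a £750m event affecting roughly 3% of UK organisations.
print(categorise_event(750_000_000, 0.03))  # -> 2 under these assumed bands
```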
Once categorised, cyber events will be published along with detailed reports that outline the impact, methodology, and response strategies. This information will be freely available to businesses and individuals worldwide.
CMC CEO Will Mayes emphasised the importance of this classification system, stating: “The risk of major cyber events is greater now than at any time in the past as UK organisations have become increasingly reliant on technology. The CMC has the potential to help businesses and individuals better understand the implications of cyber events, mitigate their impact on people’s lives, and improve cyber resilience and response plans.”
The rating system initiative is being spearheaded by a team of cyber security experts and industry leaders, with former National Cyber Security Centre (NCSC) chief Ciaran Martin serving as Chair. Explaining the importance of the CMC’s work, Martin says: “Measuring the severity of incidents has proved very challenging. This could be a huge leap forward. I have no doubt the CMC will improve the way we tackle, learn from, and recover from cyber incidents. If we crack this, and I’m confident that we will, ultimately it could be a huge boost to cyber security efforts, not just here but internationally too.”
Why Is the UK Introducing a Cyber Severity Scale?
The initiative has been launched to help measure the severity of cyber threats and, it is hoped, to bring much-needed clarity to an ever-evolving digital battleground.
Cyber attacks have become increasingly frequent and damaging. In 2023 alone, the UK suffered over seven million cyber attacks, costing the economy an estimated £27 billion per year. From ransomware crippling hospitals to large-scale data breaches exposing personal and financial information, the need for an organised, systematic approach to assessing cyber threats has never been greater.
Martin has stressed that a standardised metric for cyber event severity has been long overdue, and has highlighted how: “If you get a major incident in a large organisation, the results can be absolutely devastating. Hospitals can be brought to their knees.”
Martin has also noted that, with international threat actors, including state-backed groups from Russia and China, constantly evolving their tactics, the UK must be better prepared.
How Will This Benefit UK Businesses?
For UK businesses, the introduction of the CMC’s cyber severity scale could be an important step in cyber risk management and its benefits could include:
– Clarity and consistency. Businesses will have an easily understood, objective framework to gauge the severity of cyber incidents and make informed decisions.
– Better risk assessment. Insurers, regulators, and industry leaders will be able to assess cyber risks more effectively, leading to better cyber insurance policies and risk management strategies.
– Faster response times. With categorised reports on cyber incidents, organisations can respond more quickly and appropriately to emerging threats.
– Improved cyber resilience. Detailed incident reports will help organisations refine their cyber security measures and prepare for future attacks.
CMC CEO Will Mayes has also highlighted how the CMC’s work will be supported by a broad range of global cyber security experts, saying: “I would also like to acknowledge the support from a wide range of world-leading experts who have contributed so much time and expertise to help establish the CMC, and continue to provide data and insights during events. Their ongoing support will be vital, and we look forward to adding further expertise to our growing cohort of partners in the months and years ahead.”
Potential Challenges and Drawbacks
Despite its promise, and although it is still early days, the CMC’s classification system is not without potential challenges. These include:
– Accuracy and data availability. Since categorisation relies on accurate data collection, incomplete or delayed reporting could affect the reliability of classifications.
– Speed (or lack of it) of assessment. The CMC aims to classify events within 30 days, but in its first year of operation this may take longer. Delays in categorisation could limit the system’s value for real-time response.
– The threshold for categorisation. By focusing on incidents causing over £100 million in damage, smaller but still significant attacks may not be classified, potentially leaving some businesses without crucial insights.
– The potential for misinterpretation. While the scale is designed to simplify communication, businesses and the public may misinterpret severity rankings, leading to unnecessary alarm or complacency.
UK Not The First Country To Try It
The UK is not the first nation to attempt a structured approach to cyber threat classification, but the CMC’s initiative represents a more comprehensive framework than many existing models. The US, for instance, has the Cyber Incident Severity Schema, a classification system used by federal agencies, but it does not currently have the public-facing clarity or structured ranking system that the CMC intends to implement.
Other European nations have also been watching the CMC’s developments closely, with cyber security experts suggesting that if successful, this model could be replicated in the EU or even standardised internationally. According to industry insiders, discussions are already taking place regarding cross-border data sharing agreements to strengthen global cyber response strategies.
Some cyber security experts have noted that a universal classification system, adopted by all countries, would be better still. As the CMC begins classifying real-world incidents, the UK has the potential to take a leading role in shaping a globally recognised cyber threat severity scale, one that would give both businesses and governments the data needed to make informed, strategic decisions in the fight against digital threats.
What Does This Mean For Your Business?
The introduction of the CMC’s severity scale could offer a clearer, more structured approach to understanding and responding to cyber threats. As cyber attacks grow in frequency and complexity, businesses, insurers, and policymakers require reliable data to assess risk and improve resilience. The CMC’s initiative looks like it could provide just that, i.e. a structured, transparent framework that could transform how the UK, and potentially the wider world, categorises and responds to major cyber incidents.
However, while the system has some clear benefits, it’s not without its limitations. The reliance on accurate and timely data presents an ongoing challenge, particularly given the complex and often opaque nature of cyber incidents. The CMC’s approach of only classifying large-scale events, while logical for identifying major risks, may also leave some significant but smaller-scale attacks unaccounted for. Also, the speed at which classifications are made will determine how effective the system is in providing real-time insights for businesses and policymakers.
Despite these concerns, the CMC’s work has already garnered some strong backing from cyber security experts and industry leaders, who recognise its potential to standardise risk assessment in a sector where clear benchmarks have long been lacking. The fact that other nations are closely monitoring the UK’s efforts also suggests that this initiative could, in time, help shape a globally recognised classification system, which is something that could prove invaluable in the fight against international cyber threats.
The success of the CMC’s cyber event severity scale will depend on its ability to consistently deliver accurate, timely, and actionable insights. If it achieves this, it has the potential to improve cyber resilience not just for UK businesses but for organisations worldwide. With cyber threats showing no signs of slowing, initiatives like this are going to be increasingly necessary.
Tech News : Google Lifts AI Ban on Weapons and Surveillance
Google has revised its AI principles, lifting its ban on using artificial intelligence (AI) for the development of weapons and surveillance tools.
What Did the Previous Principles State?
In 2018, Google established its Responsible AI Principles to guide the ethical use of artificial intelligence in its products and services. Among these was a clear commitment not to develop AI applications intended for use in weapons or where the primary purpose was surveillance. The company also pledged not to design or deploy AI that would cause overall harm or contravene widely accepted principles of international law and human rights.
These principles emerged in response to employee protests and backlash over Google’s involvement in Project Maven, a Pentagon initiative using AI to analyse drone footage. Thousands of employees signed a petition, and some resigned, fearing their work could be used for military purposes.
What Has Changed and Why?
Google’s new AI principles, as outlined in a blog on its website by senior executives James Manyika and Sir Demis Hassabis, remove the explicit ban on military and surveillance uses of AI. Instead, the principles emphasise a broader commitment to developing AI in alignment with human rights and international law but do not rule out national security applications.
The update comes amidst what Google describes as a “global competition for AI leadership.”
The company argues that democratic nations and private organisations need to work together on AI development to safeguard security and uphold values like freedom, equality, and human rights.
“We believe democracies should lead in AI development, guided by core values,” Google stated, highlighting its role in advancing AI responsibly while supporting national security efforts.
The strategic importance of AI to Google’s business was underlined when its parent company, Alphabet, committed to spending $75 billion on AI projects in 2025, around 29 per cent more than analysts had expected. The latest budget allocations indicate a strong push towards AI infrastructure, research, and applications across various sectors, including national security.
Criticism from Human Rights Organisations
Google’s decision to change its AI policy in this way has sparked debate, with Human Rights Watch (HRW) and other advocacy groups expressing grave concerns and warning of serious consequences.
For example, HRW says in a blog post on its website: “For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever.” The organisation also warns that AI-powered military tools complicate accountability for battlefield decisions, which can have life-or-death consequences.
HRW’s blog post also makes the point that voluntary corporate guidelines are insufficient to protect human rights and that enforceable regulations are necessary, saying: “Existing international human rights law and standards do apply in the use of AI, and regulation can be crucial in translating norms into practice.”
Doomsday Clock
The Doomsday Clock, an assessment of existential threats facing humanity, recently cited the growing use of AI in military targeting systems as a factor in its latest assessment. The report highlighted that AI-powered military systems have already been used in conflicts in Ukraine and the Middle East, raising concerns about machines making lethal decisions.
The Militarisation of AI
The potential for AI to transform warfare has been a topic of intense debate for some time now. For example, AI can automate complex military operations, assist in intelligence gathering, and enhance logistics. However, concerns about autonomous weapons, sometimes called “killer robots”, have led to calls for stricter regulation.
In the UK, a recent parliamentary report emphasised the strategic advantages AI offers on the battlefield. Emma Lewell-Buck, the MP who chaired the report, noted that AI would “change the way defence works, from the back office to the frontline.”
In the United States, the Department of Defense is investing heavily in AI as part of its $500 billion modernisation plan. This competitive pressure is likely one reason Google has shifted its stance on military AI applications. Analysts believe that Alphabet is positioning itself to compete with tech rivals such as Microsoft and Amazon, which have maintained partnerships with military agencies.
Implications for Google and the World
The decision to lift the ban on AI for weapons and surveillance could have significant implications for Google, its users, and the global AI market. For example:
– Reputation and trust. It may put Google’s reputation as a socially responsible company at risk. The company’s historic “Don’t be evil” mantra, which was later replaced by “Do the right thing,” had helped it maintain a positive image. Critics argue that compromising on its AI principles undermines this legacy.
– Employee dissent. Internal protests were instrumental in Google walking away from Project Maven back in 2018, and that dissent could resurface. While the company has emphasised transparency and responsible AI governance, it remains to be seen whether employees and users will accept these assurances.
– Human rights and security risks. Human rights organisations warn that AI’s deployment in military and surveillance contexts poses significant risks. Autonomous weapons, for example, could reduce accountability for lethal actions, while AI-driven surveillance could be misused to suppress dissent and violate privacy.
The United Nations has called for greater regulation of AI in military contexts. A 2023 report by the UN’s High Commissioner for Human Rights described the lack of oversight of AI technologies as a “serious threat to global stability.”
– Impact on AI regulation. Google’s policy shift highlights what many see as a need for stronger regulations. As HRW points out, voluntary principles are not a substitute for enforceable laws. Governments around the world are already grappling with how to regulate AI effectively, with the European Union advancing its AI Act and the United States updating its National Institute of Standards and Technology (NIST) framework.
If democratic nations fail to establish clear rules, there is a risk of a global “race to the bottom” in AI development, where companies and countries prioritise technological dominance over ethical considerations.
– AI Industry Competition. Google’s decision is likely to intensify competition within the AI industry. The company’s increased investment in AI aligns with its strategic priorities, particularly in areas such as AI-powered search, healthcare, and cybersecurity.
Competitors such as OpenAI, Microsoft, and Amazon Web Services have also prioritised national security partnerships. As AI becomes a key element of economic and geopolitical power, companies may feel compelled to follow Google’s lead to remain competitive.
The Road Ahead
Google insists that its revised principles will still prioritise responsible AI development and that it will assess projects based on whether the benefits outweigh the risks. However, critics remain sceptical.
“As AI development progresses, new capabilities may present new risks,” Google wrote in its 2024 Responsible AI Progress Report. The report outlines measures to mitigate these risks, including the implementation of a Frontier Safety Framework designed to prevent misuse of critical capabilities.
Despite these reassurances, concerns about AI’s potential to disrupt global stability remain. As Google moves forward, the world will be watching closely to see whether its actions match its rhetoric on responsibility and human rights.
What Does This Mean For Your Business?
Google’s decision to revise its AI principles could be seen as a pivotal moment not only for the company but for the broader debate on the ethical use of AI. While Google argues that democratic nations must lead AI development to ensure security and uphold core values, the removal of explicit restrictions on military and surveillance applications raises serious ethical and practical concerns.
On the one hand, AI’s role in national security matters is undeniably growing, with governments around the world investing heavily in AI-driven defence and intelligence. Google, like its competitors, faces immense commercial and strategic pressure to remain at the forefront of this race. By lifting its self-imposed restrictions, the company is positioning itself as a major player in AI applications for national security, an area where rivals such as Microsoft and Amazon have already established strong partnerships. Given the increasing intersection between technology and global power dynamics, Google’s shift could be seen as a pragmatic business decision.
However, this pragmatic approach comes with some risks. The concerns raised by human rights organisations, ethicists, and AI watchdogs highlight the potential consequences of allowing AI to shape military and surveillance operations.
Tech News : Banning Mobiles : Impact On School Children
A recent study by the University of Birmingham has revealed that banning smartphones during school hours does not necessarily lead to improved mental health or academic performance among students.
The SMART Schools Study
The SMART Schools study, conducted by the University of Birmingham, set out to evaluate whether banning phone use throughout the school day leads to better mental health and wellbeing among adolescents. Given growing concerns over the potential negative effects of excessive smartphone use, such as increased anxiety and depression, disrupted sleep, reduced physical activity, lower academic performance, and greater classroom distractions, many schools have introduced restrictive phone policies. However, despite these widespread bans, there has been little empirical evidence assessing their actual effectiveness.
The study compared outcomes among students in schools with restrictive policies (where recreational phone use was not permitted) and those in schools with more permissive policies (where phones could be used during breaks or in designated areas).
The findings (published in The Lancet) suggest that simply prohibiting phone use during school hours is not enough to address the broader issues associated with excessive smartphone use, highlighting the need for a more comprehensive approach to managing adolescent phone habits.
The Methodology
Conducted over a 12-month period ending in November 2023, the study involved 1,227 students aged 12 to 15 from 30 secondary schools across England. Among these schools, 20 had restrictive phone policies, prohibiting recreational phone use during school hours, while 10 had permissive policies, allowing phone use during breaks or in designated areas. The researchers collected data on various health and educational outcomes, including mental wellbeing (assessed using the Warwick–Edinburgh Mental Well-Being Scale), anxiety and depression levels, physical activity, sleep patterns, academic attainment in English and Maths, and instances of disruptive classroom behaviour. Also, participants reported their smartphone and social media usage.
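For readers who want to picture what such a two-group comparison involves, the minimal sketch below (in Python) simulates one. This is not the study’s actual analysis or data; the score distributions are invented, and the group sizes are assumptions chosen only so that they sum to the reported 1,227 participants.

```python
# Illustrative sketch only: simulated data standing in for the study's
# survey responses. The Warwick-Edinburgh Mental Well-Being Scale is
# scored 14-70; all numbers below are invented, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical wellbeing scores for the two policy groups
# (sizes are assumptions that sum to the reported 1,227 students).
restrictive = rng.normal(loc=47, scale=9, size=820).clip(14, 70)
permissive = rng.normal(loc=47, scale=9, size=407).clip(14, 70)

t_stat, p_value = stats.ttest_ind(restrictive, permissive)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Because both groups are simulated from the same distribution, there
# is no true difference for the test to detect, mirroring the study's
# headline finding of no significant difference between policy types.
```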
Key Findings
The study found no significant differences between students in restrictive and permissive schools in terms of mental wellbeing, anxiety, depression, physical activity, sleep, academic performance, or classroom behaviour. While students in schools with phone bans reported slightly less phone use (approximately 40 minutes less) and social media use (about 30 minutes less) during school hours, there was no meaningful reduction in overall daily usage. On average, students across both types of schools used their smartphones for between four and six hours daily.
Link Found, But Need To Do More
In comments that appear to be somewhat contrary to the published findings, Dr. Victoria Goodyear, Associate Professor at the University of Birmingham and lead author of the study, says, “We did find a link between more time spent on phones and social media and worse outcomes, with worse mental wellbeing and mental health outcomes, less physical activity and poorer sleep, lower educational attainment and a greater level of disruptive classroom behaviour. This suggests that reducing this time spent on phones is an important focus. But we need to do more than focus on schools alone, and consider phone use within and outside of school, across a whole day and the whole week.”
Implications of the Findings
The results indicate that while excessive smartphone and social media use is associated with negative health and educational outcomes, banning phones during school hours alone is insufficient to address these issues. The study also seems to suggest that interventions should extend beyond the school environment, encompassing strategies that encourage responsible phone use throughout the entire day and week. A holistic approach of this kind could involve educating students on digital wellness, promoting alternative activities that do not involve screen time, and engaging parents in monitoring and guiding their children’s phone use.
Challenges and Criticisms
One challenge highlighted by the study is the pervasive nature of smartphone use among adolescents, making it difficult for school policies alone to effect significant change. Critics may also argue that the study’s cross-sectional design limits the ability to establish causation between phone policies and student outcomes. Also, the reliance on self-reported data for smartphone usage could introduce reporting biases. Further longitudinal research may, therefore, be needed to explore the long-term effects of phone use and the efficacy of comprehensive intervention strategies.
Comparative Studies
While the University of Birmingham’s study is being hailed as a ‘landmark’, several other studies and campaigns over the past decade have also examined the impact of smartphone use on adolescents’ mental health, academic performance, and overall wellbeing. For example:
– In 2015, the London School of Economics conducted a study examining the effects of mobile phone bans in schools. The research found that students’ exam performance improved when phone use was banned, with the bans also reducing the temptation to use phones for non-academic purposes.
– In 2024, social psychologist Jonathan Haidt spearheaded a global campaign to reduce smartphone dependency among children. His book, “The Anxious Generation”, explores the impact of smartphones on child development and the mental health crisis he argues has been building since 2012. Haidt contends that excessive screen time displaces traditional childhood activities, contributing to widespread anxiety and depression, and emphasises the importance of creating smartphone-free environments in schools and encouraging more real-world play and social interaction. Despite some criticism that he oversimplifies the connection between smartphones and mental health issues, Haidt’s efforts sparked significant discussion and inspired many to advocate for a more balanced approach to technology in childhood.
The collective takeaway from these studies appears to be that while reducing smartphone usage is beneficial, focusing solely on school policies may not be sufficient. Many commentators therefore now believe that a broader, more comprehensive approach involving educators, parents, and policymakers is essential to effectively address the challenges associated with adolescent smartphone use.
What Does This Mean For Your Business?
While the SMART Schools study primarily focuses on adolescent wellbeing and education, its findings have broader implications for businesses, particularly those operating in the technology, education, and workplace wellbeing sectors. The key takeaway from the study (i.e. that simply banning smartphone usage is not enough to mitigate its negative effects) raises important questions about digital policies in professional and commercial settings.
For businesses in the technology and social media industries, the study highlights the growing scrutiny over excessive smartphone use and its potential negative impact on mental health. With increasing evidence suggesting that overuse of digital platforms can contribute to anxiety, depression, and sleep disruption, there is a mounting expectation for tech companies to take more responsibility. This could mean a greater push for ethical design, such as introducing more effective screen time management tools, promoting digital wellbeing features, and even redesigning platforms to encourage healthier usage habits. Companies that fail to acknowledge these concerns risk facing regulatory scrutiny and reputational damage, as governments and consumers alike demand action.
The findings also have implications for businesses operating in the education and training sectors. Schools are not the only places struggling to balance technology use with productivity. Employers, for example, also face challenges in managing digital distractions in the workplace. The study suggests that outright bans on devices may not be the most effective solution, prompting organisations to rethink their approach to workplace technology policies. Rather than restricting access to phones entirely, businesses may benefit from fostering a culture of responsible use, similar to the approach recommended for schools. Encouraging employees to set boundaries around phone use, providing digital wellbeing workshops, and even implementing workplace policies that promote focused, distraction-free time could improve productivity and overall job satisfaction.
Also, companies in the health and wellbeing sector may see increased demand for services that help individuals manage their screen time. From mindfulness apps and digital detox retreats to workplace wellbeing programmes that promote better work-life balance, businesses that provide solutions for managing technology overuse could find new opportunities for growth. As more research emerges on the effects of smartphone use, there may also be a stronger market for advisory services that help organisations develop balanced digital policies.
Businesses that rely on digital engagement, such as marketers, advertisers, and online content creators, should take note of shifting attitudes toward screen time. If consumers (particularly younger demographics) begin to adopt more mindful technology habits, engagement strategies may need to adapt. Brands that prioritise ethical marketing, promote digital wellbeing, or offer tools to help users moderate their time online may find themselves better positioned in a marketplace where excessive smartphone use is increasingly seen as a problem rather than a convenience.
In essence, the study’s findings appear to serve as a reminder that technology policies, whether in schools, workplaces, or broader society, need to be a bit more nuanced than simple bans. Businesses that proactively address these challenges, whether by promoting digital wellbeing, rethinking workplace policies, or innovating new ways to foster healthier technology habits, are likely to be best placed to navigate the evolving digital landscape.
Company Check : Microsoft 365 Users Must Opt Out to Avoid Price Hike for Copilot
Microsoft 365 subscribers are facing a price increase unless they actively opt out of Microsoft’s Copilot AI.
The tech giant has announced that its AI assistant will now be bundled into Microsoft 365 Personal and Family plans, leading to higher subscription fees for users who do not take action. While Microsoft claims this reflects added value, critics argue that the company is effectively forcing AI adoption by making the opt-out process cumbersome.
The price of Microsoft 365 Personal is rising from £5.99 to £8.99 per month, or from £59.99 to £89.99 per year. Microsoft 365 Family is increasing from £7.99 to £10.99 per month, or from £79.99 to £109.99 annually. This marks the first price hike for these plans since their introduction in 2020. Microsoft says the changes reflect over a decade of added benefits and investment in innovation. However, many subscribers are frustrated, as these increases primarily result from the inclusion of Copilot, rather than general improvements to the service.
Microsoft Copilot, the company’s AI-powered assistant, integrates directly into Word, Excel, PowerPoint, Outlook, and OneNote, offering AI-generated text, data insights, and automation features. Microsoft argues that Copilot will improve productivity and is worth the additional cost. However, many users feel they are being forced into an AI subscription they did not ask for, with no clear option to decline at the outset. Those who do not want Copilot must actively opt out to avoid paying extra.
Reports indicate that the opt-out process itself can be frustratingly difficult. For example, instead of offering a simple option to remove Copilot, Microsoft requires users to go to their account settings and select “Cancel subscription” before they are presented with alternative plans. These include “Personal Classic” and “Family Classic”, which retain the original pricing but exclude Copilot. Some critics have described this as a ‘dark pattern’, i.e. a tactic designed to push users towards more expensive options by making the alternative harder to find.
With over 84 million Microsoft 365 subscribers, this move could generate an estimated £2.5 billion in additional annual revenue for Microsoft. The company has made significant investments in AI and cloud infrastructure, and this pricing shift suggests a push to monetise those developments. This mirrors similar moves by other tech firms, which are integrating AI into existing products while charging a premium for access.
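Those figures are easy to sanity-check. The short sketch below is illustrative arithmetic only, using the prices and subscriber count quoted above, and shows both the size of the percentage rises and how an estimate of around £2.5 billion is reached if every subscriber absorbed the roughly £30-a-year increase.

```python
# Sanity-checking the reported figures (arithmetic illustration only,
# using the annual GBP prices and subscriber count quoted above).
personal_old, personal_new = 59.99, 89.99    # Microsoft 365 Personal
family_old, family_new = 79.99, 109.99       # Microsoft 365 Family

for name, old, new in [("Personal", personal_old, personal_new),
                       ("Family", family_old, family_new)]:
    rise = new - old
    print(f"{name}: +£{rise:.2f}/year ({rise / old * 100:.0f}% increase)")

# If all ~84 million subscribers paid roughly £30 more per year, the
# uplift would be in the region of the £2.5bn cited above.
subscribers = 84_000_000
print(f"Estimated annual uplift: £{subscribers * 30 / 1e9:.2f}bn")
```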
For users who want to retain their current pricing, time is limited. Microsoft has stated that the ability to switch to Classic plans will only be available for a “limited time,” though it has not specified an exact deadline. Subscribers who do not act will see their costs rise automatically, making it essential for those who do not want Copilot to opt out as soon as possible.
What Does This Mean For Your Business?
For many Microsoft 365 subscribers, the issue here is not just the price increase, but the way it has been introduced. While Microsoft frames this as an enhancement to its service, the reality is that Copilot is an optional feature being added by default, with users expected to take action to avoid paying for it. The decision to make this an opt-out rather than opt-in change has left many feeling that they are being steered towards higher costs without a clear and upfront choice.
That said, some users may find Copilot a valuable addition, particularly those who regularly use Microsoft 365 applications for work or study. The AI-powered assistant has the potential to improve productivity, automate repetitive tasks, and generate useful insights. However, whether these benefits justify the increased cost is a decision that should ultimately be left to each user, rather than being imposed by default.
Microsoft’s approach highlights a growing trend in the tech industry, where companies are seeking to monetise AI by embedding it into existing services. While innovation inevitably comes with a price, the key concern here is transparency and user choice. By making the opt-out process more difficult than necessary, Microsoft risks alienating long-term subscribers who may feel that they are being pushed into paying for something they neither need nor want.
The message here is this : for those who do not wish to pay extra for Copilot, time is of the essence. Microsoft has confirmed that opting out is possible, but with no clear deadline on how long the Classic plans will remain available, delaying could lead to unnecessary costs. Users must therefore weigh up whether Copilot is worth the additional outlay and, if not, take steps to opt out before the price increase takes effect.
Security Stop Press : Australia Bans DeepSeek From Government Devices
Australia has banned DeepSeek from all government devices, citing national security concerns.
The directive mandates the removal of all DeepSeek products from government systems. Home Affairs Minister Tony Burke called it an “unacceptable risk.”
DeepSeek, a Chinese AI start-up, recently launched a chatbot rivalling Western models at a lower cost, but scrutiny over its data handling practices has grown. The platform stores user data on Chinese servers, raising concerns about potential government access. Italy and Taiwan have also restricted its use, while the US and several European nations are investigating its security implications.
The ban follows similar actions against Chinese tech firms, including Huawei and TikTok, reflecting wider geopolitical tensions. DeepSeek’s launch has also disrupted global AI investments, leading to a decline in AI-related stocks, including Australian chipmaker BrainChip.
For businesses, the move highlights the need for strict AI security policies. Organisations should vet AI applications, ensure compliance with data regulations, and restrict sensitive data interactions to mitigate risks.
Sustainability-in-Tech : New Class Of Sustainable Bacteria-Made Textiles
London-based biomaterials company Modern Synthesis has unveiled a new class of nonwoven materials derived from ‘bacterial nanocellulose’.
Sustainable Alternative To Other Materials
These innovative textiles can be made to replace everything from plastic films to leathers, thereby offering a sustainable alternative to some of the fashion and automotive industries’ most environmentally damaging materials.
Who Is Modern Synthesis?
Founded by former Adidas designer Jen Keane and biomaterials specialist Ben Reeve, Modern Synthesis is an emerging leader in the development of next-generation textiles that move beyond petrochemical and animal-derived materials. The company, headquartered in London, is focused on harnessing bacterial nanocellulose to create a versatile range of fabrics that are not only high-performance but also fully biodegradable.
Keane, now CEO, came into the spotlight in 2018 when she successfully ‘grew’ a shoe using bacteria, demonstrating the potential of biofabrication. However, she believes the true potential lies not in shaping materials as they grow, but in using bacterial cellulose as a foundational fibre that can be manipulated and scaled like traditional textiles.
What is this New Material and Why is it Significant?
Modern Synthesis’ material is primarily made from bacterial nanocellulose, a natural fibre that, at the nanoscale (dimensions below about 100 nanometres), is around eight times stronger than steel. Unlike plant-based cellulose (which requires intensive farming, land, and water), bacterial nanocellulose is cultivated through fermentation, making it a highly efficient and sustainable alternative.
What sets Modern Synthesis apart is its proprietary process, which integrates bacterial nanocellulose with a woven or knitted textile scaffold. This method allows the final material to be fine-tuned for different textures and mechanical properties, making it possible to replace synthetic leathers, coated fabrics, and even high-performance technical textiles.
Unlike most bio-based leather alternatives (which often rely on synthetic binders to achieve durability), Modern Synthesis’ process is entirely free from petrochemicals. The result is a fully biodegradable material that behaves much like conventional textiles but with a significantly reduced environmental footprint.
How is the Material Made?
The production process is an advanced form of microbial fermentation. The company uses a strain of bacteria known as Komagataeibacter rhaeticus, which naturally produces nanocellulose when fed with agricultural sugars. As the bacteria grow, they deposit nanocellulose fibres around a specially designed yarn scaffold, resulting in a unique, nonwoven textile structure.
This controlled approach allows for the fine-tuning of material properties, such as flexibility, strength, and texture, by adjusting the bacterial growth conditions and the composition of the scaffold. Unlike synthetic textiles that require chemical treatments to achieve similar properties, Modern Synthesis’ materials develop these characteristics organically.
Environmental and Industry Implications
The implications for both the fashion industry and the wider materials market could be vast. For example, leather and synthetic textiles, such as polyurethane-based vegan leathers, are major contributors to greenhouse gas emissions, plastic pollution, and deforestation. The carbon footprint of Modern Synthesis’ bacterial nanocellulose-based textiles is expected to be significantly lower than that of both traditional leather and synthetic alternatives.
Water usage is another key area of impact. Traditional leather production requires thousands of litres of water per square metre, whereas bacterial nanocellulose fermentation uses a fraction of that amount. Also, Modern Synthesis’ material does not involve toxic tanning chemicals, further reducing its environmental impact.
For businesses, this innovation could offer a way to meet growing consumer demand for sustainability without sacrificing quality or performance. Luxury brands and sportswear companies have already shown interest, with Danish fashion house Ganni collaborating with Modern Synthesis in 2023 to create a handbag made entirely without petrochemicals.
Potential Applications
Beyond fashion, Modern Synthesis’ materials have potential applications in:
– Footwear. As a lightweight, durable replacement for leather and synthetic uppers.
– Automotive interiors. The material’s high-temperature resistance and durability make it an attractive option for dashboards and upholstery.
– Smart textiles. The company is exploring how nanocellulose can be integrated with electronics for wearable technology.
Keane has highlighted the versatility of the material, stating, “Cellulosic materials don’t melt like synthetics do. If you think about car dashboards, how they start to warp when left in the sun too long—our materials won’t do that.”
Challenges and Limitations
While Modern Synthesis’ technology seems promising, there are still many hurdles to overcome before widespread adoption. For example, the company recognises that scaling production to meet industrial demand remains a major challenge. It is currently working to increase output at its pilot facility fivefold, but larger-scale manufacturing will require further investment and infrastructure.
Another challenge is recyclability. While the material is biodegradable, ensuring it is also recyclable without compromising its durability remains a key focus. Many bio-based materials require additional treatments that can hinder their ability to be repurposed at the end of their life cycle. Modern Synthesis is understood to be actively working with “green chemistries” to develop formulations that balance performance with circularity.
Who Else is Developing Similar Materials?
Modern Synthesis is part of a growing movement towards microbial and other bio-based textiles. Other players in this space include:
– MycoWorks, which specialises in mushroom-derived mycelium leather.
– Bolt Threads, which developed Mylo, another mycelium-based leather alternative.
– Ananas Anam, the creators of Piñatex, a plant-based leather alternative derived from pineapple leaves.
However, most of these alternatives still require synthetic binders, whereas Modern Synthesis’ approach stands out for being entirely bio-based and customisable at the nanoscale.
What Does This Mean For Your Organisation?
Modern Synthesis’ bacterial nanocellulose-based textiles could be an important advancement in the quest for sustainable materials. By leveraging microbial fermentation to create high-performance, biodegradable fabrics, the company is offering an alternative that challenges both traditional leather and synthetic textiles on environmental grounds. Unlike many bio-based alternatives that still rely on petrochemical additives, this new class of material is not only renewable but also fully biodegradable, making it a compelling solution for industries seeking to reduce their ecological footprint.
However, while the potential is undeniable, challenges remain. Scaling up production to meet commercial demand is a critical hurdle, as is ensuring the material can be seamlessly integrated into existing supply chains. Also, achieving true circularity (i.e. where the material is not just biodegradable but also efficiently recyclable) will be essential in determining its long-term impact. Modern Synthesis appears to be actively addressing these concerns, but success will likely depend on continued innovation and investment.
What appears to set Modern Synthesis apart is not just its scientific approach but its vision for redefining how materials are made. By collaborating with major fashion brands and exploring applications beyond apparel, the company is positioning its technology as a viable replacement for some of the most environmentally damaging materials in use today. If production challenges can be overcome, bacterial nanocellulose textiles could play a key role in reducing reliance on fossil fuels, lowering carbon emissions, and transforming industries that have long been dependent on resource-intensive materials.
This innovation could, therefore, offer a promising glimpse into the future of sustainable manufacturing. While it may take time to achieve widespread adoption, the foundations are being laid for a possible material revolution, one that moves beyond extraction and towards biofabrication. If Modern Synthesis and others in the field can bridge the gap between laboratory breakthroughs and large-scale industrial use, bacterial nanocellulose textiles could become a defining material of the sustainable era.