Tech Insight : Do Noise-Cancelling Headphones Damage Hearing?
Noise-cancelling headphones are becoming increasingly popular, yet experts are raising concerns that prolonged use may be contributing to a rise in auditory processing issues, particularly among young people.
Do They Re-Train Your Brain?
The soothing silence offered by noise-cancelling headphones has made them indispensable for many, particularly younger users navigating busy cities or working in noisy environments. However, some recent findings suggest that this constant isolation from environmental sounds may leave the brain under-practised at filtering background noise, potentially contributing to auditory processing disorder (APD).
A Surge in Hearing Issues Among Young Adults
Audiologists across several UK NHS departments have reported a noticeable increase in referrals for young people experiencing hearing-related issues. Surprisingly, standard hearing tests often reveal no physical damage to the ear. Instead, patients struggle to process sounds effectively, a hallmark of APD, a neurological condition in which the brain fails to interpret auditory information correctly.
Sophie, a 25-year-old featured in a recent BBC story about this emerging problem, illustrates how a person with no measurable hearing loss can still have difficulty distinguishing voices in noisy environments and struggle to locate where sounds originate. According to the BBC, following a private consultation, Sophie was diagnosed with APD, and her audiologist suspected that her extensive use of noise-cancelling headphones (up to five hours a day) may be a contributing factor.
The Science Behind the Concern
Auditory processing is a complex function where the brain filters, prioritises, and interprets sounds. Experts, including Renee Almeida from Imperial College Healthcare NHS Trust, have warned that overuse of noise-cancelling features might deprive the brain of its natural ability to filter background noise. As Renee Almeida explains: “There is a difference between hearing and listening. We can see that listening skills are suffering.”
Claire Benton, vice-president of the British Academy of Audiology, has offered a possible explanation of how this phenomenon could come about, suggesting that prolonged isolation from environmental sounds may result in the brain “forgetting” how to manage auditory input effectively. Benton has also highlighted that these high-level listening skills continue to develop into the late teens, making adolescents particularly vulnerable to over-reliance on noise-cancelling technology.
A Call for Further Research
Despite the growing number of anecdotal cases, concrete scientific evidence remains limited. Audiologists and healthcare professionals are therefore calling for comprehensive research to investigate whether a causal link exists between noise-cancelling headphone use and the onset of APD.
Dr Angela Alexander of APD Support has voiced concerns over the potential long-term impacts, especially for children and teenagers, asking “What does the future look like if we don’t investigate this link?” and emphasising the urgency of understanding how constant auditory isolation might be affecting young people’s development.
Dr Amjad Mahmood from Great Ormond Street Hospital has also noted a sharp rise in demand for APD assessments among under-16s, particularly those struggling with concentration and communication in noisy classrooms.
The Implications for Users and Manufacturers
Should future research confirm a definitive link, the implications could be far-reaching. For example, users might need to reconsider their reliance on noise-cancelling technology, especially during critical developmental years. Awareness campaigns could be essential in promoting safe usage habits.
For manufacturers, the challenge will be to innovate without compromising user health. This might involve designing headphones that allow for controlled exposure to background noise or integrating intelligent transparency features that adjust sound isolation levels dynamically.
A Variation Between Brands
Lisa Barber, technology editor at Which?, has pointed out that while some models already offer adjustable transparency modes, there is significant variation between brands and models. A standardised approach to balancing noise cancellation with ambient sound exposure could become an industry priority.
Negative Effects and Symptoms of Overuse
Prolonged use of noise-cancelling headphones has been linked to a range of potential negative effects, particularly in individuals who rely on them for extended periods. While the direct impact on hearing remains under investigation, several symptoms and associated issues have been observed. These include:
– Auditory processing difficulties. Users may experience difficulty distinguishing between similar sounds or following conversations in noisy environments due to reduced exposure to natural background sounds.
– Tinnitus. Persistent use at high volumes can contribute to the development of tinnitus, a condition characterised by a constant ringing or buzzing sensation in the ears.
– Sound localisation issues. Over-reliance on noise-cancelling technology may impair the brain’s ability to determine where sounds are coming from, which can affect spatial awareness and safety in certain situations.
– Ear discomfort and pressure. For example, some users report a sensation of pressure in the ears, particularly when active noise cancellation is enabled, which can lead to headaches or mild discomfort.
– Increased sensitivity to noise (hyperacusis). Some individuals may find that their tolerance for everyday sounds decreases after prolonged periods of isolation from ambient noise.
Recognising these symptoms early and adjusting listening habits accordingly may help mitigate potential risks associated with prolonged headphone use.
Practical Advice for Headphone Users
Until more definitive research emerges, some experts recommend adopting cautious usage habits, which could include:
– Limiting the duration. Restrict the use of noise-cancelling headphones to essential periods, ideally in safe, quiet environments where awareness of surrounding sound is less critical.
– Taking regular breaks from the headphones. Allow your ears and brain to engage with natural environmental sounds periodically.
– Monitoring volume levels. Ensure audio is kept at a safe level to prevent potential hearing damage (a rule-of-thumb benchmark follows this list).
– Using transparency features. Opt for models that offer adjustable ambient sound modes when possible.
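As a concrete benchmark for what counts as a “safe level”, occupational guidance such as NIOSH’s recommends limiting exposure to 85 dBA over 8 hours, with every 3 dB increase halving the safe listening time:

$$T_{\text{safe}} = \frac{480\ \text{minutes}}{2^{(L-85)/3}}$$

So, as a worked example, at 94 dBA the safe daily dose is $480 / 2^{(94-85)/3} = 480/8 = 60$ minutes. (This is an occupational noise limit applied here as a rule of thumb for headphone listening, not a clinical recommendation for any individual.)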
A Silent Risk?
Noise-cancelling headphones may have improved the quality of life for many, offering respite from the chaos of modern life, but as the popularity of these devices grows, so too does the need for awareness of their potential downsides. The challenge ahead is to strike a balance, i.e. enjoying the benefits of silence without compromising our ability to process the sounds that matter most.
What Does This Mean For Your Business?
As the conversation around noise-cancelling headphones and their potential impact on auditory processing deepens, a clearer picture emerges, one that calls for a balanced and informed approach. While these devices offer undeniable benefits, especially in our increasingly noisy environments, the concerns raised by healthcare professionals highlight a crucial need for caution and moderation. The growing body of anecdotal evidence suggesting a link between prolonged use of noise-cancelling technology and auditory processing issues, particularly among young people, cannot be ignored.
For users, particularly younger individuals and their caregivers, this means cultivating healthier listening habits. This isn’t about vilifying technology, but rather understanding its proper place in daily life. Integrating periods of natural sound exposure, making use of transparency modes, and limiting headphone usage during critical developmental years could help mitigate potential long-term effects. The key lies in moderation—using these devices as tools for comfort and focus, without allowing them to become a crutch that inadvertently hampers auditory development.
The implications stretch beyond personal use and into the broader responsibilities of manufacturers and businesses. For headphone makers, the challenge now is to innovate responsibly. This might involve developing smarter features such as adaptive noise control, which allows for the dynamic integration of environmental sounds, or software that encourages breaks after extended use. A standardised approach across brands to offer adjustable noise cancellation could not only help preserve auditory health but also set new benchmarks for responsible technology design.
For workplaces where noise-cancelling headphones are commonly used to aid concentration, particularly in open-plan offices or customer service environments, businesses must also reconsider their policies. Encouraging staff to take listening breaks, offering education on safe usage practices, and ensuring that headphone use complements (rather than replaces) effective sound management strategies could help protect employees’ long-term hearing health while maintaining productivity.
Further research will be vital in confirming whether a direct link exists between noise-cancelling headphone use and auditory processing disorders. Until then, fostering awareness and encouraging responsible usage can help users enjoy the benefits of these devices without compromising their ability to engage with the world around them.
Tech News : Google’s Fingerprinting Policy Shift Sparks Privacy Concerns
Google’s recent decision to allow device fingerprinting for advertising purposes has triggered alarm among privacy advocates, regulators, and businesses alike.
What Is Device Fingerprinting?
Device fingerprinting is a sophisticated tracking method that collects various data points from a user’s device (e.g. screen size, browser type, language settings, battery level, and time zone) to create a unique digital profile. Unlike cookies, which users can clear or block, fingerprinting operates behind the scenes, offering far fewer opportunities for user control.
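To make the mechanism concrete, the TypeScript sketch below shows how a handful of the attributes listed above can be gathered with standard browser APIs and hashed into a single stable identifier. It is a minimal illustration of the general technique only, not Google’s or any vendor’s actual implementation, and the particular attribute set and SHA-256 hashing scheme are our own choices for demonstration.

```typescript
// Minimal illustrative browser fingerprint: combine device attributes
// (no cookies involved) and hash them into one compact identifier.
async function buildFingerprint(): Promise<string> {
  const attributes = [
    navigator.userAgent,                                      // browser type/version
    navigator.language,                                       // language settings
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // screen characteristics
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // time zone
    String(navigator.hardwareConcurrency),                    // CPU core count
  ];
  // Hash the joined attributes so the profile becomes a single stable ID.
  const bytes = new TextEncoder().encode(attributes.join("|"));
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// The same device keeps producing the same ID across visits.
buildFingerprint().then((id) => console.log("fingerprint:", id));
```

Because none of these values are stored on the device the way cookies are, clearing browser data does not change the resulting ID, which is precisely why fingerprinting is so much harder for users to control.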
When combined with IP address data, fingerprinting allows advertisers to track users across multiple platforms and devices without explicit consent. This makes it a particularly powerful tool for targeted advertising but raises serious questions about transparency and user choice.
Google’s Policy Change
Effective from 16 February 2025, Google’s new policy will allow advertisers using its platform to deploy fingerprinting techniques. This marks a stark reversal from the company’s previous stance. For example, in a 2019 blog post, Google had unequivocally stated that fingerprinting “subverts user choice and is wrong.”
Critics argue that this change in Google’s policy threatens user privacy. However, Google claims it reflects necessary adaptations to changing technology trends and shifts in user behaviour.
As more people access content through devices like smart TVs and gaming consoles, traditional data collection methods (e.g. third-party cookies) are becoming less effective. Google argues that fingerprinting will enable advertisers to reach users across a fragmented digital landscape while maintaining privacy safeguards through privacy-enhancing technologies (PETs) like on-device processing and secure multi-party computation.
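Google has not published the implementation details of these PETs, but a toy example of one classic building block of secure multi-party computation, additive secret sharing, illustrates the general idea: multiple servers can compute an aggregate without any single one of them ever seeing an individual’s raw value. Everything below (the two-server setup, the counts, the simplistic random-share generation) is illustrative only.

```typescript
// Toy additive secret sharing: each user's private value is split into
// random shares that sum to the value. No single share reveals anything,
// yet the shares can be combined to compute an aggregate.
const MOD = 2n ** 61n - 1n; // all arithmetic is modulo a large prime

function share(secret: bigint, parties: number): bigint[] {
  const shares: bigint[] = [];
  let sum = 0n;
  for (let i = 0; i < parties - 1; i++) {
    const r = BigInt(Math.floor(Math.random() * 1e9)) % MOD; // random share (toy RNG)
    shares.push(r);
    sum = (sum + r) % MOD;
  }
  shares.push(((secret - sum) % MOD + MOD) % MOD); // final share makes the sum work out
  return shares;
}

// Two users' private ad-interaction counts, never revealed directly:
const aliceShares = share(3n, 2);
const bobShares = share(5n, 2);

// Each server sees only one meaningless-looking share per user...
const server1 = (aliceShares[0] + bobShares[0]) % MOD;
const server2 = (aliceShares[1] + bobShares[1]) % MOD;

// ...yet combining the servers' partial sums yields the true total: 8
console.log("aggregate:", (server1 + server2) % MOD);
```

Whether guarantees of this kind amount to meaningful privacy in practice is, of course, exactly what regulators and privacy campaigners dispute.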
Google maintains that these technologies will offer new ways for advertisers to operate on emerging platforms without compromising user privacy.
The ICO and Privacy Campaigners Respond
The UK’s Information Commissioner’s Office (ICO) has criticised the decision, calling it a “threat to user control and transparency.” According to Stephen Almond, the ICO’s Executive Director of Regulatory Risk, fingerprinting reduces users’ ability to control how their data is collected and processed. In a December 2024 blog post, Almond labelled Google’s policy shift as “irresponsible,” stating: “Fingerprinting is not a fair means of tracking users online because it is likely to reduce people’s choice and control over how their information is collected.”
The ICO also warned businesses that deploying fingerprinting techniques would not exempt them from adhering to the UK’s stringent data protection laws, including the requirement to obtain clear user consent and offer transparent information on data usage.
Privacy organisations have echoed these concerns. For example, the Electronic Frontier Foundation has argued that Google’s new policy highlights a shift in focus from prioritising user privacy to maximising business profits. They’ve also raised concerns about how fingerprinting could expose users’ sensitive information to data brokers and surveillance entities.
A Business-Centric Shift?
The advertising technology sector appears divided on Google’s decision. For example, Pete Wallace of GumGum, an ad tech company specialising in contextual advertising, has been quoted as describing the policy shift as a “business-centric approach to the use of consumer data.”
“Fingerprinting sits in a grey area,” Wallace said. “While it offers advertisers powerful targeting capabilities, it simultaneously erodes consumer privacy. This inconsistency is detrimental to the industry’s previous attempts to put user privacy at the forefront.”
Some businesses, however, see fingerprinting as a necessary evolution to replace third-party cookies, which are being phased out by most major browsers. As traditional tracking methods diminish, fingerprinting could become the go-to strategy for advertisers seeking to maintain high levels of ad personalisation and effectiveness.
What About Businesses, Advertisers, and the Public?
For businesses and advertisers, Google’s policy shift could offer new opportunities to refine audience targeting and improve ad performance across platforms. The ability to collect detailed user profiles without relying on cookies could be a game-changer for marketers struggling with the limitations imposed by recent privacy regulations.
However, this comes at a potential cost. Organisations using fingerprinting must still comply with data protection laws such as the UK General Data Protection Regulation (GDPR) and the Privacy and Electronic Communications Regulations (PECR). For example, companies will need to demonstrate that they have obtained meaningful user consent and are transparent about their data practices—standards that many privacy experts believe will be difficult to meet given the covert nature of fingerprinting.
For the public, the implications are more concerning. Unlike cookies, fingerprinting is harder to detect and nearly impossible to block using conventional browser settings. This reduces individuals’ ability to manage their digital footprints, potentially exposing them to more invasive tracking by advertisers, data brokers, and even surveillance agencies.
According to the ICO’s draft guidance, businesses will need to provide clear information about fingerprinting and ensure that users can exercise their data rights, including the right to erasure. However, privacy campaigners argue that even with these measures, true user control is unlikely to be restored.
Google’s Defence
In response to the backlash, Google insists that its use of fingerprinting will adhere to strict privacy standards, leveraging PETs to anonymise data and prevent user re-identification. The company highlights its use of techniques like on-device processing to ensure that sensitive information never leaves the user’s device unless necessary.
Google claims that these measures will allow advertisers to reach their audiences effectively while safeguarding user privacy.
The tech giant also argues that fingerprinting is already widely used across the digital advertising ecosystem and that its new policy merely formalises existing practices while setting a higher bar for privacy.
Ongoing Developments and Industry Implications
As the February 2025 implementation date approaches, the debate around Google’s policy change is likely to intensify. The ICO has pledged to engage further with Google and provide updated guidance for businesses on how to lawfully implement fingerprinting techniques.
The advertising industry, privacy advocates, and regulators will, no doubt, be monitoring the effects of this shift closely, with the broader question remaining: will the industry’s drive for better ad targeting ultimately undermine the fundamental right of internet users to control their personal information?
What Does This Mean For Your Business?
Google’s shift in policy towards enabling device fingerprinting for advertising presents a complex dilemma at the intersection of technological innovation, business interests, and individual privacy rights. While the company is defending its decision as an evolution of tracking methods, the concerns raised by regulators, privacy advocates, and sections of the advertising industry can’t be dismissed lightly.
On one hand, fingerprinting offers advertisers a powerful tool to maintain personalisation and relevance in a post-cookie world. For businesses, this represents an opportunity to sustain revenue streams, particularly as users increasingly consume content across diverse devices and platforms. Google’s assurances about deploying privacy-enhancing technologies (PETs) such as on-device processing and secure multi-party computation offer some degree of comfort that sensitive data will be handled with greater care than before.
However, these reassurances do little to address the core issue, i.e. the lack of meaningful user control. Unlike cookies, which users can manage or block, fingerprinting operates invisibly, making it nearly impossible for individuals to opt out without significant technical expertise. This shift risks undermining the principles of transparency and consent that underpin data protection laws such as the GDPR. The criticisms voiced by the ICO and privacy organisations are valid and highlight the tension between commercial interests and the fundamental right of users to control their personal information.
The challenge ahead for regulators, therefore, will be ensuring that the use of fingerprinting remains within the bounds of legal and ethical standards. While Google’s policy formalises practices already in use, it simultaneously sets a precedent that could normalise more intrusive forms of tracking under the guise of innovation.
Tech News : AI Solves a Decade-Old Superbug Mystery in Just Two Days
A complex scientific problem that took microbiologists a decade to unravel has been cracked in just 48 hours by an advanced artificial intelligence (AI) system developed by Google.
A Decade of Research Solved in 48 Hours
In what many are calling a revolutionary moment for science, researchers at Imperial College London were stunned when Google’s AI tool, aptly named ‘co-scientist’, managed to solve a mystery that had challenged microbiologists for ten years. The team, led by Professor José R. Penadés, had dedicated years to investigating how superbugs (bacteria resistant to multiple antibiotics) developed their dangerous immunity.
Tails
The crux of their research focused on understanding how some bacteria could acquire ‘tails’ from viruses, enabling them to transfer resistance between different species. This process is akin to bacteria acquiring ‘keys’ that allow them to move between hosts, posing a severe risk to global health.
However, when Prof Penadés submitted a simple prompt to the AI system, without feeding it unpublished data, the tool not only replicated the team’s hypothesis but did so in less than two days.
Four Extra Hypotheses
Incredibly, the AI went further than simply replicating the team’s conclusions and generated four additional hypotheses, all of which, according to the researchers, were scientifically plausible. One of these entirely new insights is now actively being explored by the team, potentially opening up uncharted avenues in the fight against antibiotic resistance.
How Did the AI Crack the Code?
The AI tool behind this breakthrough, developed by Google DeepMind, was designed as a collaborative assistant rather than a full replacement for human researchers. Branded as a “co-scientist”, the system is purpose-built to aid scientists in hypothesis generation, experimental design, and data analysis.
Rather than simply trawling publicly available data, the AI can synthesise information from a range of inputs, including academic papers, scientific databases, specialised AI feedback loops, and manually submitted private documents.
AI Can Navigate Through Scientific ‘Dead Ends’
According to Dr Tiago Dias da Costa, who co-led the experimental validation work, the true power of the AI lies in its ability to navigate through scientific “dead ends”. These are common in research, with scientists often spending months or even years testing hypotheses that ultimately yield no fruitful results. As Dr Costa points out: “AI has the potential to synthesise all the available evidence and direct us to the most important questions and experimental designs.”
The AI’s ability to eliminate unlikely paths and highlight the most promising ones could dramatically shorten research timelines, potentially bringing life-saving treatments to market much faster than current processes allow.
What Makes This Breakthrough Special?
Perhaps the most astonishing aspect of the discovery is that the AI system managed to reach a complex scientific conclusion without prior access to unpublished research. Prof Penadés initially suspected foul play, jokingly emailing Google to ask if it had somehow accessed his computer. The company confirmed that the AI had only used publicly available information.
This suggests that the AI was able to draw novel conclusions independently, which is something even seasoned scientists can struggle with, especially in fields as intricate as microbiology.
Supporting Scientific Discovery
Professor Mary Ryan, Vice Provost for Research and Enterprise at Imperial, has highlighted the broader implications of this breakthrough, saying: “The world is facing multiple complex challenges—from pandemics to environmental sustainability and food security. To address these urgent needs means accelerating traditional R&D processes, and AI will increasingly support scientific discovery and pioneering developments.”
What Are the Wider Implications?
The research team believes that if they had access to such AI capabilities from the outset, it could have saved them years of work. This has sparked a broader conversation about the role of AI in research, for example in:
– Accelerating discoveries. AI can help researchers rapidly test and refine hypotheses, cutting down on lengthy trial-and-error processes.
– Reducing costs. Speeding up research timelines could dramatically cut the financial costs associated with long-term scientific projects.
– Democratising research. AI could also help level the playing field, giving smaller research teams access to powerful analytical tools once reserved for larger institutions.
Concerns
However, the rise of AI in science isn’t without controversy. There are concerns over the potential loss of jobs and the diminishing role of human intuition in scientific discovery. That said, Prof Penadés offers a different perspective, saying: “It’s not about replacing scientists. It’s about having an extremely powerful tool to help us work smarter and faster. This will change science, definitely.”
A Glimpse into the Future of Scientific Research?
The implications of this breakthrough extend beyond the immediate challenge of antibiotic resistance. As the technology matures, AI systems like Google’s co-scientist could actually redefine how research is conducted across multiple fields, from climate science to drug discovery.
Google researchers suggest that AI could be used to accelerate the literature review process, one of the most time-consuming aspects of scientific research. By quickly analysing vast amounts of information, AI could help scientists identify gaps in existing knowledge and generate novel hypotheses at a rate previously unimaginable.
Also, partnerships like the one between Imperial College London and Google could become a model for future collaborations between academia and the tech industry. The Fleming Initiative, which focuses on combating antimicrobial resistance, aims to expand this model to other pressing global challenges, including:
– Developing rapid diagnostic tools for early detection of infections.
– Leading drug discovery efforts using AI-driven analysis.
– Building international networks of research experts to tackle global health crises.
Cautious Steps
While the technology is still in its early stages, this breakthrough has shown what’s possible when human expertise and AI capabilities work together. For now, researchers remain cautiously optimistic about what’s to come. As Prof Penadés put it: “It’s like playing a Champions League match with the best tools possible—we’re finally competing at the highest level, and the possibilities are spectacular.”
What Does This Mean For Your Business?
This apparently remarkable breakthrough, where Google’s AI ‘co-scientist’ managed to solve a decade-old scientific mystery in just two days, could signal more than just a milestone for microbiology; it could offer a glimpse into the future of scientific discovery and technological collaboration. By demonstrating the capacity to generate not only accurate hypotheses but also entirely new, scientifically plausible insights, AI has proven itself, in this case, an invaluable asset in pushing the boundaries of human knowledge.
For researchers, the ability to bypass years of trial-and-error, sidestep scientific dead ends, and fast-track promising avenues of investigation could redefine research timelines across countless fields. For example, no longer will progress be bound by human limitations in data processing and analysis. Instead, AI will enable researchers to focus their expertise on refining experiments and validating results with unprecedented efficiency.
The significance of this breakthrough may also stretch far beyond the scientific realm. For businesses, particularly those looking to harness AI to drive growth and innovation, this development offers a clear lesson: AI’s greatest strength lies not in replacing human insight but in amplifying it. Companies hoping to leverage AI for commercial gain, whether in pharmaceuticals, retail, finance, or any other sector, can take inspiration from how this technology accelerates discovery and sharpens strategic focus. The same capabilities that help researchers avoid dead ends could help businesses streamline decision-making, predict market trends, and personalise offerings with remarkable precision.
However, as with any transformative technology, there is a need for cautious optimism. Ethical considerations, potential job displacement, and the risks of over-reliance on AI should not be overlooked. The key will be fostering a collaborative relationship between human expertise and machine intelligence, much like the partnership between Imperial College London’s researchers and Google’s AI tool.
Looking ahead, the real triumph will come from how effectively industries and institutions integrate AI into their workflows, not as a replacement for human creativity but as a co-pilot that enhances our ability to solve problems. For both science and business, this breakthrough could represent not just a faster path to solutions, but an entirely new way of thinking about what’s possible when human ingenuity meets machine precision.
Company Check : New Chip Means Quantum Computing In Years, Not Decades
Microsoft has unveiled Majorana 1, the world’s first quantum chip powered by a ‘Topological Core architecture’, which it claims could enable quantum computers to solve complex, industrial-scale problems within years rather than decades.
The Issue
The Majorana 1 chip could signify a pivotal shift in quantum computing development. Unlike conventional processors, which rely on classical bits (the familiar ones and zeroes of modern computing), quantum computers use qubits, i.e. quantum bits that can represent both states simultaneously. While this promises an exponential increase in processing power, qubits are notoriously difficult to stabilise and control due to environmental interference.
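For readers who want the standard textbook formulation (which applies to qubits in general, not just Majorana 1), a qubit’s state is written as a superposition of the two classical values:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

where a measurement returns 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$. A register of $n$ qubits can occupy a superposition of $2^n$ such states at once, which is where the exponential increase in processing power comes from, and also why tiny environmental disturbances can so easily corrupt a computation before it finishes.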
Microsoft’s Revolutionary Approach to Quantum Architecture
In the case of Microsoft’s Majorana 1, instead of relying on traditional qubit designs, the company has taken a more ambitious route by developing a new material called a topoconductor. This breakthrough enables the manipulation of elusive Majorana particles, which were once purely theoretical and only recently demonstrated in laboratory conditions.
The creation of this topological state of matter, a new form distinct from solids, liquids, or gases, has therefore allowed Microsoft to produce topological qubits. The advantage is that these are expected to be more stable, less prone to error, and capable of being controlled digitally rather than through complex analogue mechanisms.
Years Rather Than Decades
Chetan Nayak, a technical fellow at Microsoft, has explained the significance of the innovative technology used in the new chip, saying: “Many people have said that useful quantum computers are decades away. I think that this brings us into years rather than decades.” This optimism appears to be built on the company’s ability to scale its technology, aiming for an unprecedented one million qubits on a single chip.
Industrial-Scale Problems Within Reach
The potential impact of this innovation could be transformative across industries. Quantum computers have the capacity to simulate molecular interactions, design new materials, and solve optimisation problems that would take today’s most powerful supercomputers millions of years to process. Microsoft believes these capabilities could unlock advancements in:
– Pharmaceuticals. Accelerating drug discovery by simulating molecular structures with unprecedented precision.
– Energy storage. Designing better, more efficient batteries for electric vehicles and renewable energy.
– Environmental solutions. Developing catalysts to break down microplastics or reduce carbon emissions.
– Advanced manufacturing. Creating self-healing materials for infrastructure, reducing maintenance costs and enhancing safety.
A New Front in the Quantum Computing Race
Microsoft’s announcement about its new chip will, no doubt, have sent ripples across the already competitive quantum technology landscape. Rivals such as Google and IBM have made significant strides with quantum processors using alternative qubit designs. Google’s “Sycamore” processor, for example, made headlines in 2019 for achieving quantum supremacy by solving a problem in 200 seconds that would take classical computers 10,000 years. However, Microsoft’s strategy, though slower in producing short-term results, may prove more scalable in the long run.
While Microsoft’s prototype currently houses eight topological qubits, far fewer than the hundreds achieved by competitors, the company’s promise of a clear path to a million qubits sets it apart. Experts believe that if Microsoft’s technology can indeed scale as planned, it could leapfrog its rivals in the race to build commercially viable quantum machines.
Business and Industry
For businesses and industries poised to embrace quantum computing, this development could radically shift the landscape. For example, being able to solve industrial-scale problems within years rather than decades could lead to:
– Faster innovation cycles. Products designed and tested virtually with quantum precision could dramatically reduce time-to-market.
– Cost reductions. More efficient materials and manufacturing processes could slash production costs.
– Sustainability breakthroughs. Quantum modelling could enable the development of eco-friendly materials and more efficient energy solutions.
Accessing Quantum Capabilities Through The Cloud
Microsoft’s integration of the Majorana 1 chip into its Azure Quantum platform means that businesses will eventually be able to harness these capabilities through cloud services, thereby democratising access to quantum power without the need for prohibitively expensive infrastructure.
A High-Risk, High-Reward Strategy
Microsoft’s focus on topological qubits appears to have been quite a high-risk strategy, given the scientific and engineering challenges involved. For example, until recently, Majorana particles had never been observed in nature and had to be coaxed into existence through precise manipulation of materials at the atomic level.
However, as Krysta Svore (another Microsoft technical fellow) pointed out, the architecture’s simplicity could allow for rapid scalability. Svore said: “It’s complex in that we had to show a new state of matter to get there, but after that, it’s fairly simple. You have a much simpler architecture that promises a faster path to scale.”
The Next Steps for Quantum Computing
Microsoft’s inclusion in the US Defence Advanced Research Projects Agency’s (DARPA) Underexplored Systems for Utility-Scale Quantum Computing (US2QC) programme signals the strategic importance of this technology. If successful, the company could deliver the world’s first utility-scale, fault-tolerant quantum computer, a machine whose computational value exceeds its operational costs.
For now, though, the road ahead remains fraught with technical challenges. Scaling from eight qubits to a million will require solving issues of coherence, error correction, and manufacturing precision on an unprecedented scale.
That said, if Microsoft’s bet pays off, the promise of solving industrial-scale problems within a matter of years could mark the beginning of a new technological era, one where quantum computing transforms everything from materials science to global sustainability efforts.
What Does This Mean For Your Business?
Microsoft’s unveiling of the Majorana 1 chip represents a potential shift in the trajectory of quantum computing itself. The company’s bold move to pursue topological qubits through the manipulation of Majorana particles looks to be both an audacious scientific gamble and a forward-thinking strategy aimed at overcoming some of the most persistent obstacles in the field.
While rivals like Google and IBM have made headlines with short-term achievements using more traditional qubit designs, Microsoft’s approach seeks to tackle the longer-term challenge of scalability and stability. By leveraging a fundamentally different quantum architecture, the company may ultimately sidestep the fragility that plagues conventional quantum systems. If successful, this could place Microsoft at the forefront of a technological race that has, until now, seemed more theoretical than practical.
It should be noted that, although the signs are good, caution is needed because technical hurdles like maintaining coherence and error correction are not trivial and will take sustained effort to overcome. That said, Microsoft’s confidence, underpinned by integration with its Azure Quantum platform, suggests a readiness to bring quantum capabilities to businesses and researchers sooner than previously imagined.
The implications for industry and society at large could be transformative. From revolutionising drug discovery to enabling breakthroughs in clean energy and sustainable manufacturing, the possibilities of scalable quantum computing extend far beyond academic curiosity. The prospect of solving industrial-scale problems in years rather than decades could accelerate innovation cycles, reduce costs, and unlock sustainable solutions previously out of reach.
Security Stop Press : Cybercriminals Bypassing MFA With Device Code Phishing
Microsoft has reported uncovering a cyberattack campaign by Storm-2372, a group linked to Russian interests, using a technique called device code phishing to bypass multi-factor authentication (MFA) and steal access tokens.
Active since August 2024, the group targets governments, NGOs, and industries including defence, telecoms, energy, and healthcare across Europe, North America, Africa, and the Middle East. In device code phishing, attackers generate a legitimate authentication code, send it to targets via fake meeting invites on platforms like Microsoft Teams and WhatsApp, and trick them into entering it on a genuine sign-in page. Completing the sign-in hands valid access tokens to the attackers, granting them unauthorised access.
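For context, the mechanism being abused is the standard OAuth 2.0 device authorisation grant (RFC 8628), the same flow legitimate apps on input-constrained devices use to sign in. The TypeScript sketch below, written against Microsoft’s documented endpoints, shows why the attack works: whoever initiates the flow receives the tokens once the code is entered, and nothing the victim sees looks fake. The client ID and scope here are placeholders for illustration, not values from any real campaign.

```typescript
// Sketch of the OAuth 2.0 device authorisation grant (RFC 8628) that
// device code phishing abuses. CLIENT_ID and scope are placeholders.
const CLIENT_ID = "<app-client-id>";
const BASE = "https://login.microsoftonline.com/common/oauth2/v2.0";

async function deviceCodeFlow() {
  // Step 1: the initiating party (in an attack, the phisher) requests a code.
  const dc = await fetch(`${BASE}/devicecode`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ client_id: CLIENT_ID, scope: "user.read" }),
  }).then((r) => r.json());

  // Step 2: the user is sent to a GENUINE Microsoft sign-in page to enter
  // the code. In phishing, this instruction arrives dressed up as a meeting
  // invite, but the page and the code are both real.
  console.log(`Go to ${dc.verification_uri} and enter code ${dc.user_code}`);

  // Step 3: the initiator polls the token endpoint. Once the victim signs
  // in (completing MFA), valid tokens are issued to whoever started the flow.
  while (true) {
    await new Promise((resolve) => setTimeout(resolve, dc.interval * 1000));
    const tok = await fetch(`${BASE}/token`, {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "urn:ietf:params:oauth:grant-type:device_code",
        client_id: CLIENT_ID,
        device_code: dc.device_code,
      }),
    }).then((r) => r.json());
    if (tok.access_token) return tok; // a fully valid, MFA-satisfied session
    if (tok.error !== "authorization_pending" && tok.error !== "slow_down") {
      throw new Error(tok.error); // code expired or sign-in declined
    }
  }
}
```

Because every step except the invite involves genuine Microsoft infrastructure, URL-checking habits offer little protection here, which is why the guidance below focuses on disabling the flow where it is not needed.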
Recent activity shows a shift towards using Microsoft Authentication Broker’s client ID to gain persistent access by registering rogue devices inside compromised networks. Microsoft warns these attacks are especially effective because they mimic legitimate login workflows.
To defend against device code phishing, businesses should block unnecessary device code flows, strengthen Conditional Access policies, educate users about phishing risks, and use phishing-resistant MFA methods such as FIDO tokens.
Sustainability-in-Tech : Recyclable Plastics Using Light and Solvent
Scientists from a Swiss university have discovered a way to break down Plexiglass into its original building blocks using violet light and a common solvent, thereby making recycling plastics far more efficient and potentially helping to tackle global plastic waste.
Cracking the Plastic Code: The Science Behind the Discovery
The process, developed by lead researcher Dr Hyun Suk Wang at ETH Zurich, works by exposing Plexiglass, a type of polymethacrylate, to violet light while it is submerged in dichlorobenzene solvent. The scientists discovered that this exposure releases chlorine radicals from the solvent, which then break apart the strong carbon-carbon bonds in the plastic. The result is the recovery of methyl methacrylate (MMA), the original monomer building blocks from which Plexiglass is made.
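Schematically (a simplification of the chemistry described above, not the paper’s full mechanism), the process can be written as:

$$\underbrace{[\mathrm{MMA}]_n}_{\text{Plexiglass (PMMA)}} \;\xrightarrow{\;h\nu\ (\text{violet light}),\ \mathrm{Cl}^{\bullet}\;}\; n\,\mathrm{MMA}$$

with the chlorine radicals ($\mathrm{Cl}^{\bullet}$) released from the dichlorobenzene solvent initiating the “unzipping” of the polymer chain back into its monomer.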
This recovered monomer can be purified and repolymerised without losing any material quality, unlike traditional recycling methods that involve shredding, cleaning, and remelting. Those older methods degrade the properties of plastic with each cycle, whereas this new chemical process allows the material to be fully restored to its original state.
The Scale of the Plastic Waste Problem
The scale of plastic pollution globally remains a significant challenge. For example, over 400 million metric tonnes of plastic waste are produced worldwide each year and yet, only around 9 per cent of this waste is successfully recycled. Also, rather than being recycled, half ends up in landfills, while another 19 per cent is incinerated. One particularly damaging effect of our plastic use is that around 11 million metric tonnes of plastic enter the ocean annually, harming ecosystems and marine life.
Plexiglass Particularly Problematic
Polymethacrylates like Plexiglass are particularly problematic due to their durability and widespread use in industries ranging from construction to electronics. This resilience, while useful in manufacturing, makes them resistant to breaking down in traditional recycling systems.
Closing the Loop
Lead researcher Dr Hyun Suk Wang and his team believe their light-based method could transform how Plexiglass and similar plastics are recycled. Dr Wang says: “By recovering monomers in near-pristine condition, we can effectively close the loop on Plexiglass production.”
The Implications
If adopted at scale, the implications of this breakthrough could include:
– Reduced use of fossil fuels. Since virgin plastic production depends on fossil resources, recycling monomers could significantly cut demand for petrochemical feedstocks.
– Lower energy consumption. The process requires less energy than current methods, which often involve high temperatures and extensive mechanical processing.
– Industrial adaptability. Preliminary tests suggest that the process can be applied on a larger scale with precision and control, making it a candidate for industrial recycling operations.
Is It Scalable?
It should be noted, however, that for this discovery to be commercially viable, several key challenges need to be addressed, which include:
– Being able to generate violet light at scale. The process depends on specific wavelengths of light, meaning industrial-level violet light sources would be necessary.
– Handling dichlorobenzene safely. The solvent used in the process is hazardous and would require strict handling protocols to ensure worker and environmental safety.
– Economic feasibility. Any new technology must be cost-competitive with the low expense of producing virgin plastics from petrochemicals.
Despite these hurdles, the researchers remain optimistic. As co-author Professor Athina Anastasaki points out, “What makes this process so promising is its ability to work on a wide range of polymethacrylates, regardless of how they were originally manufactured.”
What Next?
The research team is now working on refining the technique to handle mixed plastic waste streams, a major obstacle in current recycling systems. They are also exploring alternative, less toxic solvents to improve the process’s environmental impact.
At the same time, discussions are taking place with industrial partners to assess how this technology might be integrated into existing recycling facilities.
What Does This Mean For Your Organisation?
This breakthrough in recycling Plexiglass using violet light and a common solvent could mark a promising step forward in addressing the global plastic waste crisis. The discovery by Dr Hyun Suk Wang and his team at ETH Zurich presents a genuinely innovative approach – one that allows plastics to be broken down into their original building blocks without degrading their quality. By recovering monomers in a near-pristine state, this method could redefine what it means to “recycle” plastics, moving beyond the traditional processes that weaken materials with each cycle.
The potential environmental benefits are clear. If this technology can be successfully scaled, it could significantly reduce the dependence on fossil fuels required for producing virgin plastics, cutting both carbon emissions and petrochemical consumption. Furthermore, the process’s lower energy demands compared to conventional recycling could provide a more sustainable and economically viable solution, particularly for industries with high energy consumption rates.
For businesses, especially those in manufacturing, construction, and consumer goods, this development could offer both economic and strategic advantages. Companies that rely heavily on plastics might see reduced costs in sourcing high-quality recycled materials, avoiding the need to purchase more expensive virgin plastics. Also, integrating this technology into supply chains could help businesses meet increasingly stringent sustainability targets and regulatory demands around recycling and carbon emissions.
Beyond compliance, there is also the potential for businesses to strengthen their brand reputation by aligning with environmentally responsible practices. Early adopters of such groundbreaking recycling methods could position themselves as leaders in sustainability, attracting eco-conscious consumers and investors alike. However, industries will need to assess the commercial feasibility carefully, considering factors such as the cost of installing violet light technology and handling hazardous solvents like dichlorobenzene.
That said, significant obstacles remain. The need for scalable violet light sources and safe handling of potentially hazardous solvents are non-trivial challenges that could slow widespread adoption. Also, the economic viability of this method will need to be thoroughly tested against the low costs associated with producing virgin plastics, a factor that has historically undermined efforts to expand plastic recycling.
The optimism shown by researchers like Professor Athina Anastasaki highlights the broader potential of this technology. If successful refinements are made, particularly in handling mixed plastic waste streams and identifying safer solvents, the process could become adaptable enough for industrial-scale use.
While the innovation is not without its hurdles, the research looks as though it could open an exciting new chapter in the fight against plastic pollution. If industry stakeholders, policymakers, and scientists can work together to overcome the technical and economic barriers, this light-driven recycling method could play a pivotal role in creating a truly circular economy for plastics.