Sustainability-In-Tech : UCLA Polymer Device Cools Without Fans or Refrigerants
A small, flexible cooling device developed by UCLA scientists can continuously reduce surrounding temperatures by up to 16°F (around 9°C), offering a sustainable alternative to traditional air conditioning.
A Compact, Solid-State Breakthrough in Cooling
Researchers at UCLA have unveiled a new cooling technology that operates without refrigerants, fans or compressors. Instead, it uses layers of flexible polymer films that expand and contract in response to an electric field, thereby actively removing heat. The tiny device, just under an inch wide and a quarter of an inch thick, offers a lightweight, energy-efficient alternative to conventional systems, and has already demonstrated the ability to lower ambient temperatures by nearly 9°C (16°F) continuously in lab tests.
Uses The ‘Electrocaloric Effect’
The prototype uses the electrocaloric effect, which is a property found in certain materials that causes them to change temperature when exposed to an electric field. However, this project has gone further than earlier experiments by pairing this effect with electrostrictive motion, i.e. the polymer also physically moves when charged, allowing the researchers to create a dynamic pumping action that shifts heat away from the source.
Designed With Wearables and Portables in Mind
The lead developer, Professor Qibing Pei of the UCLA Samueli School of Engineering, described the innovation as “a self-regenerative heat pump” and believes it could be ideal for wearable cooling systems. “Coping with heat is becoming a critical health issue,” he said, citing the growing dangers of heat stress in both industrial and consumer contexts. “We need multiple strategies to address it.”
The UCLA team sees wide potential for the design in personal cooling accessories, flexible electronics, and mobile systems used in hot environments. The films are flexible, lightweight, and made without liquid coolant or moving parts, which means they could be incorporated into garments, safety gear, or on-the-go electronic equipment where heat management is essential.
For example, warehouse and outdoor logistics workers in hot climates could benefit from clothing-integrated cooling components, while remote field technicians or engineers working on battery-heavy devices in poorly ventilated spaces could deploy portable cooling pads to protect both personnel and electronics.
A Re-think of How Cooling Systems Are Built
Traditional cooling systems rely on vapour compression, a process that typically uses refrigerants such as hydrofluorocarbons (HFCs). These are powerful greenhouse gases, and while the Kigali Amendment and other measures have helped phase them down, their use remains widespread. Vapour-compression cooling is also relatively mechanically complex, energy-intensive, and bulky.
By contrast, UCLA’s design eliminates the need for refrigerants entirely. Each layer in the stack is coated with carbon nanotubes and acts both as a charge carrier and a heat exchanger. As an electric field is applied, alternating pairs of layers compress and expand in sequence, creating a kind of mechanical ‘accordion’ that actively moves heat from the source through the material and out into the environment.
Hanxiang Wu, one of the paper’s co-lead authors and a postdoctoral scholar in Pei’s lab, explained that the device’s core advantage is its simplicity. “The polymer films use a circuit to shuttle charges between pairs of stacked layers,” he said. “This makes the flexible cooling device more efficient than air conditioners and removes the need for bulky heat sinks or refrigerants.”
Sustainability Advantages for the Built Environment
For commercial and industrial sectors, the implications of this development could be significant. While the current model is small-scale, the underlying principle could enable more energy-efficient climate control in buildings and vehicles if adapted into broader system designs.
For example, smaller commercial premises, off-grid cabins, or remote infrastructure hubs could use scaled-up polymer-based systems to remove heat without heavy energy use. Similarly, businesses looking to reduce their cooling-related carbon footprint could integrate such systems into server racks, battery storage units, or sensitive workspaces where localised heat management is critical.
Unlike passive radiative cooling materials, which typically require exposure to the open sky and only work under certain conditions, this system functions independently of ambient humidity, weather, or sunlight. Its electricity-only operation means that when powered by renewables, the cooling process can be entirely emissions-free.
Markets and Use Cases with the Most to Gain
While mainstream residential HVAC systems are unlikely to be replaced overnight, sectors requiring portable, distributed, or wearable cooling solutions may see faster uptake. This includes defence, first responders, sports performance, outdoor event staffing, and high-temperature industrial roles such as glass or steel manufacturing.
The research team has already filed a patent and is exploring future product development. Pei confirmed the device could also be adapted to cool flexible electronics and embedded sensors. In particular, industries working on wearable tech, soft robotics, and thermal regulation in electric vehicles may find these materials offer a compact and scalable solution.
The innovation also opens the door to new kinds of thermal design for electronics. For example, temperature-sensitive components such as lithium batteries, processors, or optical sensors could benefit from localised solid-state cooling that does not compromise device flexibility or mobility.
Still in the Early Stages
Despite the promise, this technology is still in its early stages and, as with many materials science innovations, scaling up from lab to market presents challenges. So far, the 8.8°C drop below ambient has been achieved only under carefully controlled test conditions and over small surface areas.
However, maintaining this level of performance over larger spaces, longer durations, or in real-world outdoor environments will require further development, particularly around durability, power consumption, and integration with fabrics or casings.
Another limitation is cost. While the polymers and carbon nanotubes used are relatively accessible, mass-manufacturing precision-layered ferroelectric film stacks could prove complex and expensive without production breakthroughs. Reliability under repeated use and extreme conditions is another consideration, especially for use in wearables or industrial settings.
Energy consumption also matters. While the device itself uses low-voltage electricity, constant operation across large areas would still draw power, meaning the overall carbon footprint depends on the source of that electricity.
Concerns have also been raised in the wider field about the longevity of electrocaloric materials under stress. For example, ferroelectric polymers can degrade over time, especially under high cycling rates, and the cumulative effects of charge and discharge cycling on mechanical integrity are not yet fully known.
What Does This Mean For Your Organisation?
For now, the most immediate value for this innovation appears to lie in small-scale, high-impact use cases. Businesses operating in hot environments, whether in logistics, manufacturing, or field services, may be among the first to benefit from wearable or portable versions of this cooling technology. If the materials can be manufactured at scale and integrated into clothing or equipment affordably, it could improve productivity, reduce health risks, and lower demand for energy-hungry air conditioning. UK companies involved in the design of smart workwear, industrial safety gear, or modular electronics may also find opportunities in applying or adapting this technology into their own products.
Beyond wearables, the principle behind this cooling system offers a fresh approach to thermal management that could influence future designs in everything from data centres to electric vehicles. For UK firms in clean tech, energy-efficient infrastructure, or defence systems, this could represent a new avenue for collaboration or licensing. It also sits comfortably alongside national net zero goals, particularly in cutting energy consumption and phasing out refrigerant-based systems. However, progress will depend on whether UCLA’s lab success can translate into real-world resilience, cost efficiency, and ease of integration.
The wider lesson is that cooling does not have to mean compressors, gas, or fans. By embedding thermal functionality directly into the material structure, this research challenges long-held assumptions and opens up routes to smarter, lighter, and greener alternatives. For now, the technology is experimental and best seen as part of a wider portfolio of next-generation cooling methods. However, as climate challenges grow and energy costs rise, pressure is mounting on both researchers and businesses to bring practical alternatives like this to market sooner rather than later.
Video Update : Using Different Personalities in ChatGPT
ChatGPT offers four distinct pre-made ‘personalities’, namely: cynic, robot, nerd, and listener. You can ask for your content to be output through any (or all) of these personality types, giving you the ability to get different responses according to each one. Depending on your audiences and/or research, getting different perspectives could be very useful indeed.
[Note – To watch this video without glitches/interruptions, it may be best to download it first]
Tech Tip – How To Remove Document Metadata for Security
Use Word’s Document Inspector to remove potentially sensitive metadata, such as author names and comments, to protect your information when sharing documents.
– Go to File > Info > Check for Issues > Inspect Document.
– Select the types of metadata to inspect and remove.
– Run the inspection and review the results.
– Remove the metadata as needed.
This helps ensure your document doesn’t inadvertently expose sensitive information.
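If you need to sanitise many files at once, the same clean-up can be scripted. The sketch below uses the open-source python-docx library to blank the core properties of a .docx file; the file names are placeholders, and note that it covers only core properties (author, title, and so on), not review comments or tracked changes in the body, so Document Inspector remains the more thorough option.

```python
# A minimal sketch using python-docx (pip install python-docx).
# File names are hypothetical placeholders; adapt to your own documents.
from docx import Document

def strip_core_properties(path: str, out_path: str) -> None:
    """Blank the most commonly leaked core properties of a .docx file."""
    doc = Document(path)
    props = doc.core_properties
    props.author = ""            # creator name
    props.last_modified_by = ""  # last editor name
    props.comments = ""          # the 'comments' description property
    props.title = ""
    props.subject = ""
    props.keywords = ""
    doc.save(out_path)           # save a sanitised copy, keep the original

if __name__ == "__main__":
    strip_core_properties("report.docx", "report_clean.docx")
```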
Featured Article : 300,000+ Grok Chats Exposed Online
Hundreds of thousands of conversations with Elon Musk’s Grok chatbot have been discovered in Google Search results, while China has released DeepSeek V3.1 as a direct rival to GPT-5, together raising urgent questions about privacy, competition and security in the AI market.
How the Grok Leak Happened
The exposure of Grok transcripts was first reported by Forbes, which identified more than 370,000 indexed conversations, and later confirmed by the BBC, which counted nearly 300,000 visible through Google Search. The numbers differ slightly depending on how the search engine indexes the material, but both point to a vast volume of data becoming public without users’ awareness.
It appears that the cause lies in how Grok’s “share” feature works. For example, each time a user chose to share a chat, Grok generated a unique webpage containing the transcript. Since those pages were not blocked from being crawled, search engines such as Google indexed them automatically. What many users may have assumed would be a private or semi-private link was, in fact, publicly available to anyone searching the web.
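For context, keeping a public page out of search results is normally a one-line fix. The sketch below, a minimal Python standard-library handler rather than anything xAI actually runs, shows the two standard signals: an X-Robots-Tag response header and a robots meta tag, either of which tells compliant crawlers not to index the page.

```python
# Illustrative only (not xAI's code): how a "shared chat" page could opt
# out of search indexing. Run it and fetch http://localhost:8000/ to see
# both the header-level and the page-level noindex signal.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<html>
  <head><meta name="robots" content="noindex, nofollow"></head>
  <body>Shared transcript would be rendered here.</body>
</html>"""

class SharePageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Header-level signal: compliant crawlers will not index this page.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SharePageHandler).serve_forever()
```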
What Was Revealed?
Reports indicate that the published material varied widely in content. Some transcripts showed harmless exchanges, such as meal plans or password suggestions. Others contained much more sensitive prompts, including questions about medical issues, details about mental health, and even confidential business information.
More troubling still, some indexed conversations reportedly included Grok’s responses to attempts at “red teaming” the system, essentially testing its limits. These produced instructions for making illicit drugs, coding malware and building explosives. In at least one case, the chatbot provided a detailed description of an assassination plot.
This mixture of personal data, sensitive queries and dangerous instructions underlines the scale of the problem. Once indexed, such material can be copied, cached and shared indefinitely, making it difficult (if not impossible) to remove entirely.
Risks for Users and Businesses
For individual users, the risk is obvious. Even if names and account details are obscured, prompts often contain information that could identify a person, their health status, or their location. Privacy experts also warn that conversations about relationships, finances or mental wellbeing could resurface years later.
For businesses, the implications may be more severe still. Many companies now experiment with AI tools for drafting documents, brainstorming ideas or even testing security scenarios. If such exchanges end up publicly indexed, they could inadvertently reveal trade secrets, security weaknesses or sensitive commercial plans. For regulated sectors like healthcare and finance, this creates potential compliance issues.
A Setback for Musk’s AI Venture
For xAI, the start-up behind Grok, this discovery is particularly awkward. Grok has been marketed as a distinctive alternative to established players like OpenAI’s ChatGPT or Google’s Gemini, with direct integration into Musk’s social platform X. However, the exposure of hundreds of thousands of conversations clearly undermines that positioning, fuelling questions over whether xAI has adequate safeguards in place.
The incident is also notable because Musk had previously criticised rivals over similar missteps. Earlier this year, OpenAI briefly allowed shared ChatGPT conversations to be indexed before reversing course after user complaints. At the time, Musk mocked the issue and celebrated Grok as safer. The latest revelations make that claim harder to sustain.
Not the First Time
This is not the first case of chatbot transcripts spreading more widely than users expected. OpenAI’s trial of a shareable ChatGPT link caused uproar earlier in the year, and Meta’s AI tool has faced similar criticism for publishing shared chats in a public feed.
What sets the Grok case apart, however, is the apparent scale and duration. The indexing appears to have been ongoing for months, creating a large reservoir of material in search engines. The mix of personal information with instructions for harmful activity adds another layer of controversy.
China’s DeepSeek Raises the Stakes
Just as Grok is making the news for the wrong reasons, China’s AI sector has added a new dimension to the debate with the release of DeepSeek V3.1, an open-weight model that experts say matches GPT-5 on some benchmarks while being priced to undercut it. The model was quietly launched via WeChat and posted on the Hugging Face platform, and has been optimised to perform well on Chinese-made chips. This reflects Beijing’s determination to build advanced AI systems without relying on Western hardware, particularly Nvidia GPUs, which are increasingly subject to U.S. export controls.
Technically, DeepSeek V3.1 is striking because of its architecture. With 685 billion parameters, it sits at the level of many so-called “frontier” models. However, its mixture-of-experts design means only a fraction of the model activates when answering a query, cutting costs and energy use while combining fast recall with step-by-step reasoning in a single system. This hybrid approach is something only the very top commercial models have offered until now.
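To make the “only a fraction activates” point concrete, here is a toy numpy sketch of mixture-of-experts routing: a router scores all experts for each input, but only the top-k are actually run. The sizes and the softmax router are illustrative assumptions and bear no relation to DeepSeek’s real implementation.

```python
# Toy mixture-of-experts routing in numpy: a router picks the top-k experts
# per input, so most expert weights sit idle on any one query. Purely
# illustrative; real MoE models (DeepSeek included) are far more involved.
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each "expert" here is just a small linear map.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
router_w = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w                     # score every expert
    top = np.argsort(logits)[-TOP_K:]         # keep only the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the chosen few
    # Only TOP_K of NUM_EXPERTS expert matrices are touched per input.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=DIM))
print(y.shape)  # (16,) — same output size, but only 2 of 8 experts used
```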
The release clearly has some significant competitive implications. OpenAI chief executive Sam Altman admitted that Chinese open-source models such as DeepSeek influenced his company’s decision to publish its own open-weight models this summer. If developers can access a powerful model for a fraction of the cost, the balance of adoption may tilt quickly, especially outside the United States.
Security and Governance Concerns Around DeepSeek
While DeepSeek’s technical capabilities are impressive, it seems that some serious security concerns remain. For example, Cisco researchers previously flagged critical flaws in the DeepSeek R1 model that made it vulnerable to prompt attacks and misuse, with tests showing a 100 per cent success rate in bypassing safeguards. Researchers have also observed the model internally reasoning through restricted queries but censoring its outputs, raising questions about hidden biases and control.
For businesses, the central issue here is data governance. UK security experts warn that using DeepSeek for workplace tasks is effectively the same as sending confidential information directly into mainland China, where it may be subject to state access and outside the reach of UK and EU data protection laws. Surveys of UK security leaders show widespread concern, with six in ten believing tools like DeepSeek will increase cyber attacks on their organisations, and many calling for government guidance or outright restrictions.
It should be noted that some countries have already acted. For example, South Korea has suspended new downloads of DeepSeek over privacy compliance, while Germany and Australia have imposed limits on its use in government and critical sectors. In the U.S., senators have urged a formal investigation into Chinese open models, citing risks around data security and intellectual property.
What This Means for the AI Market
The Grok exposure shows how a simple design oversight can turn into a mass privacy failure, while DeepSeek’s release highlights how quickly competition can shift when cost and accessibility align with national industrial strategy. Together they underscore a market where the pace of innovation is outstripping safeguards, leaving both users and regulators struggling to keep up.
Expert and Privacy Group Reactions
Researchers and privacy advocates have warned that the Grok incident highlights a growing structural risk. Dr Luc Rocher of the Oxford Internet Institute has called AI chatbots a “privacy disaster in progress,” noting that sensitive human information is being exposed in ways that current regulation has not kept pace with.
Carissa Véliz, an ethicist at Oxford University, has similarly argued that technologies which fail to clearly inform users about how their data is handled are eroding public trust. She stresses that users deserve transparency and choice over whether their data is made public.
These warnings are consistent with long-standing concerns that AI providers are moving faster than regulators, with weak or inconsistent controls around data sharing.
Who Is Responsible?
Google has confirmed that website owners, not search engines, determine whether content is indexed. In practice, this means responsibility lies with xAI, which could have prevented the problem by blocking the shared pages from being indexed. So far, xAI has not issued a detailed statement, leaving open questions about when the feature was introduced, why there were no clear warnings for users, and what steps will now be taken.
What Can Be Done Now?
The most immediate step would be for xAI to change how its share feature works, either by making links private by default or adding technical restrictions to stop indexing. Privacy experts also stress the need for clearer disclaimers so users understand the risks before sharing.
For users, the advice is to avoid using the share button altogether until changes are made. Screenshots or secure document sharing may be safer alternatives for distributing chatbot outputs. However, those whose conversations are already exposed face a harder challenge because, even if pages are taken down, cached versions may persist online.
DeepSeek’s rise shows that these issues are not limited to one provider. With its open-weight release, concerns focus less on accidental exposure and more on where data is processed and how it may be governed under Chinese law. Security specialists warn, therefore, that uploading business information into DeepSeek could mean that sensitive material is stored in mainland China, beyond the reach of UK or European compliance frameworks. For companies, this means risk management must cover both inadvertent leaks, as with Grok, and structural governance gaps, as with DeepSeek.
For regulators and policymakers, the combined picture will likely feed into calls for stronger oversight of AI services, particularly as businesses increasingly rely on them for sensitive tasks. Voluntary measures may no longer be enough in a landscape where user data can be published at scale with a single click or transmitted to jurisdictions with very different rules on privacy and access.
What Does This Mean For Your Business?
The Grok exposure highlights the risks of rapid AI deployment without basic data protections, while DeepSeek’s open-weight advance illustrates how quickly competition can shift the ground beneath established players. For UK businesses, the lesson is that generative AI tools cannot be treated as safe environments for sensitive or commercially valuable information unless they are placed behind clear enterprise guardrails. Any organisation using these tools should assume that prompts and outputs could be made public, and should ensure that procurement, data governance frameworks and regulatory compliance are in place before rolling out AI systems at scale.
Privacy advocates and academics have been clear that these events illustrate systemic flaws, not isolated mistakes. Governments are already responding, with bans and suspensions in some countries and calls for investigations in others, and further measures are likely if risks are not addressed. For xAI, the task is to regain trust by fixing its sharing features. For DeepSeek, the challenge is to prove that low-cost open models can also deliver robust safeguards. For the AI industry as a whole, the message is that transparency, data protection and security must move from the margins to the centre of product design. Without that shift, trust from users and businesses will remain fragile, and the adoption of these tools will be held back.
Tech Insight : How Your ‘Metadata’ Helps Scammers
In this tech insight, we look at how hidden metadata (embedded in files, emails, images and documents) is increasingly being used by scammers to profile, deceive and attack UK businesses, and how firms can protect themselves.
What Is Metadata and Why Does It Matter to Businesses?
Metadata is often described as “data about data”. It is the invisible layer of information attached to digital content (emails, Word documents, PDFs, spreadsheets, photographs) that describes how, when and by whom the file was created. Most people only ever see the visible content, but underneath lies a wealth of additional detail.
For example, a photo shared externally might contain GPS coordinates and the type of device used. Also, a Word document may carry the author’s name, the company domain, editing history and even internal file paths. Emails include headers that record the sending IP address, the mail server used, and the route taken.
As these examples highlight, this invisible data matters because criminals do not always need to break into a system to learn about it. Metadata provides them with an information-rich trail they can use to understand how a business operates, who works there, and what technologies are in place. For UK businesses already facing high levels of phishing and fraud, this exposure creates another avenue for attack.
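To show how little effort this takes, the hedged sketch below uses Python’s standard email module and the Pillow imaging library to pull routing headers from a raw email and EXIF tags (including any GPS entry) from a photo. The file names are placeholders for illustration.

```python
# What a scammer can read without "hacking" anything: mail routing headers
# and photo EXIF tags. File names below are hypothetical placeholders.
import email
from PIL import Image, ExifTags  # Pillow: pip install Pillow

# 1. Email metadata: every relay adds a Received header recording the
#    servers and IP addresses along the route.
with open("message.eml") as f:
    msg = email.message_from_file(f)
for hop in msg.get_all("Received") or []:
    print("Route hop:", hop.split("\n")[0])

# 2. Image metadata: device model, timestamps and, if present, GPS data.
img = Image.open("photo.jpg")
for tag_id, value in img.getexif().items():
    print(ExifTags.TAGS.get(tag_id, tag_id), "->", value)
```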
How Scammers Exploit Metadata
– Mapping the Organisation
In the early stages of a cyberattack, reconnaissance is everything and metadata is a valuable source of intelligence, helping attackers map out how an organisation works. For example, email headers can reveal communication patterns between staff. File metadata can identify the software tools a business relies on. Author names, revision histories and internal folder structures point to job roles and responsibilities. All of this can help scammers to build a picture of the target before any overt intrusion is attempted.
– Spear Phishing and Business Email Compromise
Metadata turns generic phishing into precision-engineered deception. For example, fraudsters can use internal project names or document formats drawn from metadata to make their phishing emails look authentic. In Business Email Compromise (BEC) scams, where criminals impersonate senior executives or trusted partners, metadata-derived details lend credibility and increase the likelihood of success.
The scale of phishing in the UK highlights the danger. For example, the UK Cyber Security Breaches Survey 2025 found that 43 per cent of businesses suffered a cyberattack or breach in the past year, equating to around 612,000 organisations. Of these, 85 per cent identified phishing as the cause, making it by far the leading threat. Also, separate research by Visa reports that 41 per cent of UK SMEs suffered fraud in the last year, with phishing, invoice scams and bank hacks the most common methods.
– Document-Level Social Engineering
Documents uploaded to websites or sent externally can inadvertently expose staff names, revision histories and company systems. Attackers use these details to craft fake invoices, letters or reports that look convincing. Security firm Outpost24 has shown how document metadata can reveal usernames, shared drive paths and software versions, all of which can be weaponised in targeted scams.
Real-World Lessons from Metadata
Several cases over the past two decades show how metadata, often overlooked in day-to-day business use, can surface in ways that expose sensitive information or provide attackers with a clear advantage.
– Merck Vioxx Litigation. In a landmark legal case, Microsoft Word documents disclosed revision histories showing that negative clinical trial results had been deleted. While not a cyberattack, it underlines how damaging metadata can be when exposed.
– Public Document Reconnaissance. Researchers at cybersecurity company Outpost24 demonstrated how simple metadata inspection of public files can expose organisational hierarchies and IT systems, effectively handing attackers a blueprint for intrusion.
– Email Metadata Inference. Academic studies have shown how even anonymised email metadata can reveal relationships between employees, peak activity times and internal workflows, demonstrating the power of metadata even without direct content access.
The Bigger Picture
The 2025 Cyber Security Breaches Survey also revealed that ransomware incidents, though less common than phishing, doubled from 0.5 per cent to 1 per cent of UK businesses, affecting nearly 19,000 firms. Meanwhile, cyber-enabled fraud hit 3 per cent of businesses, with average losses of £5,900, rising to £10,000 when excluding zero-cost cases.
Visa’s SME research shows that fraud cost small firms an average of £3,808 each, while the UK’s National Crime Agency continues to highlight phishing and social engineering as dominant forms of cyber-enabled crime.
These findings illustrate how metadata sits at the heart of many of today’s most prevalent attacks. By offering a hidden but rich data source, it makes phishing easier to personalise and fraud more convincing.
Metadata’s Dual Role
It should be noted, however, that metadata is not always a liability. Investigators and compliance officers, for example, use it to verify documents, trace timelines and detect manipulation. Revision histories can prove when a file was altered, while consistent timestamps across files can support fraud detection.
The problem is that criminals are equally aware of this. Fraudsters often scrub or alter metadata to conceal tampering, complicating detection efforts. Shift Technology has noted that this deliberate scrubbing is now a common tactic to cover fraudulent activity.
For businesses, the challenge is striking a balance: retain metadata internally where it supports compliance and investigation, but ensure sensitive metadata is removed before documents are shared externally.
Practical Steps for Businesses
Thankfully, there are straightforward measures that both individual employees and organisations can take to reduce the risks posed by metadata exposure. For example:
User-Level Actions
– Remove metadata before sharing externally. Tools such as Microsoft Office’s “Inspect Document” or PDF sanitisation features can strip out hidden data.
– Use VPNs when remote working. This helps mask IP addresses that could otherwise be logged in email headers.
– Be wary of attachments. Metadata-driven spear phishing makes fraudulent documents look highly credible.
– Provide staff training. Employees must understand that even ordinary files can carry sensitive metadata that exposes the business.
Organisational Controls
– Enforce metadata hygiene policies. Configure systems to automatically remove sensitive properties from outgoing files.
– Conduct metadata audits. Regularly check websites, shared drives and repositories to ensure sensitive details are not exposed.
– Harden email systems. Configure Microsoft 365 and other platforms to minimise metadata leakage, anonymise IPs and encrypt communications.
– Preserve metadata for internal use. Maintain full records for audit, compliance and fraud detection, while ensuring only sanitised files leave the organisation.
What Does This Mean For Your Business?
Metadata has become one of the least visible but most powerful tools in the arsenal of cybercriminals. What appears to be an ordinary email or document can, in fact, provide scammers with all the intelligence they need to plan their next move. For UK businesses already contending with phishing, invoice fraud and cyber-enabled crime on a large scale, the risk is not theoretical but immediate. The figures from recent surveys underline the point that metadata is often the hidden enabler of attacks that are already costing firms time, money and trust.
The picture is complicated by the fact that metadata is also useful. Security teams, regulators and auditors depend on it to investigate wrongdoing and prove authenticity. Stripping it away entirely can weaken fraud detection and compliance efforts, while leaving it exposed can give criminals the information they need. This balancing act is one that every organisation, large or small, must now face.
For business leaders, the message is clear. Metadata management can no longer be treated as a technical afterthought. It must be factored into security policies, training programmes and compliance strategies. Firms that take proactive steps will not only reduce their exposure to scams but also strengthen their ability to investigate incidents and demonstrate resilience. Those that fail to act risk leaving themselves open to increasingly sophisticated fraud that leverages the very information they generate every day.
Beyond individual businesses, the issue has wider implications. For example, regulators, technology providers and law enforcement agencies all have a stake in how metadata is handled. The growing use of artificial intelligence in both cyber defence and criminal activity means metadata is likely to play an even larger role in the future. For the UK economy, where small and medium-sized enterprises form the backbone, raising awareness and embedding good practice will be crucial in reducing vulnerability across the board.
News : New AI ‘Always-On’ Smart Glasses With ‘Infinite Memory’
Two former Harvard students are preparing to launch a pair of ‘always-on’ AI smart glasses that record and transcribe every conversation, offering wearers an unprecedented digital memory and real-time information, but also raising concerns about privacy and surveillance.
Who Is Behind the Project?
The device, named Halo X, has been developed by AnhPhu Nguyen and Caine Ardayfio, who left Harvard to pursue the venture. The duo previously made the news when they built a facial recognition app capable of identifying strangers using Meta’s Ray-Ban glasses. That earlier experiment was intended to highlight privacy risks, but their latest work shifts focus to turning wearable AI into a mainstream productivity tool.
Backers
Backed by $1 million in seed funding from US investors, including Pillar VC and Soma Capital, the pair are pitching Halo X as a way to extend human intelligence. They describe the glasses as offering “infinite memory” by capturing and processing every spoken word in real time.
How the Glasses Work
Halo X looks like conventional eyewear, but its frame conceals microphones and a discreet display. Conversations are captured continuously, transcribed by speech recognition software, and then fed through AI systems that provide real-time prompts, reminders, or supporting information via the lens.
The transcription engine is provided by California-based firm Soniox, while reasoning comes from Google’s Gemini model, and internet search is integrated through Perplexity. The company says audio is deleted once transcribed, rather than stored, and stresses that it is working towards stronger compliance and encryption measures to reassure future buyers.
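Based only on that description, the data flow resembles the hedged sketch below: audio chunks are transcribed, the raw audio is discarded immediately, and only text moves on to the reasoning layer. Every function body here is a hypothetical stub, not a real Soniox, Gemini or Perplexity API call.

```python
# Hypothetical sketch of an "always-on, delete-after-transcription" loop,
# inferred from the article's description. The stubs below are invented
# placeholders; they only mark where each step of the pipeline would sit.
from dataclasses import dataclass

@dataclass
class Chunk:
    audio: bytes  # a few seconds of microphone input

def capture_audio() -> Chunk:                # stub: microphone driver
    return Chunk(audio=b"...")

def transcribe(chunk: Chunk) -> str:         # stub: speech-to-text service
    return "example transcript"

def suggest_prompt(transcript: str) -> str:  # stub: LLM + search layer
    return f"Context for: {transcript!r}"

def run_loop(max_chunks: int = 3) -> None:
    history: list[str] = []                  # text only: the 'infinite memory'
    for _ in range(max_chunks):
        chunk = capture_audio()
        text = transcribe(chunk)
        del chunk                            # audio discarded once transcribed
        history.append(text)
        print(suggest_prompt(text))          # would appear on the lens display

if __name__ == "__main__":
    run_loop()
```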
Why ‘Always-On’ Matters
The standout difference is that Halo X does not require activation. For example, unlike Meta’s Ray-Bans or Snap’s Spectacles, which focus on recording photos or short clips, Nguyen and Ardayfio’s glasses are designed to listen continuously. The idea is that conversations should never be missed, whether in meetings, social interactions or chance encounters.
For the founders, this constant operation appears to be key to making the product genuinely useful and convenient for users. By eliminating the need to press record or take manual notes, they believe the glasses can function as a true cognitive assistant, capturing every word and supplying relevant data instantly.
Used For What?
The potential business applications are wide-ranging. For example, in meetings, the glasses could create a full transcript without a separate note-taker. In sales, staff could receive live prompts about a client’s history or preferences. In medicine or law, professionals could rely on transcripts to support record-keeping.
For companies, the attraction is that the ability to document and analyse conversations automatically could save time and improve accuracy. However, the risks are equally apparent. For example, in industries where confidentiality is paramount, the presence of an always-listening device could be problematic, and firms would need to establish strict policies on when and how such glasses could be used.
A Challenge to Established Players
By launching at $249, Halo X is priced far below devices like Apple’s Vision Pro, which retails at over $3,000. While those products emphasise immersive mixed reality, Halo X focuses squarely on augmenting everyday communication. This positions the glasses as less of an entertainment device and more as a professional tool, potentially creating a new niche in the wearables market.
For larger rivals such as Meta, Google and Apple, the arrival of Halo X shows that smaller startups can still push wearable AI in radical directions. Whether consumers accept the “always-on” trade-off remains to be seen, but the glasses represent a more practical, lightweight alternative to bulkier headsets.
Privacy and Security at the Forefront
The design has inevitably raised concerns about privacy. Unlike Meta’s glasses, Halo X does not include a recording light or other visible indicator. While the company insists that audio is deleted after transcription, critics argue that the very act of constant listening could erode expectations of privacy in public or private settings.
In the US, two-party consent laws in several states make it illegal to record a conversation without permission from all participants. In the UK, covert recording in workplaces or client meetings could also breach both legal and ethical standards. For businesses, allowing staff to wear Halo X may therefore carry compliance risks as well as reputational ones.
Practical Limitations
Aside from privacy, technical factors may also determine the glasses’ fate. For example, battery life is one question. Continuous recording and processing could quickly drain power, making it difficult to wear the glasses all day. Comfort and social acceptability are other issues. Google Glass failed partly because wearers faced social backlash, and some analysts suggest the same could happen with Halo X if people feel uncomfortable being around someone whose glasses are always listening.
On the technical front, accuracy in noisy environments remains another test. Speech recognition systems have improved dramatically, but background noise, multiple speakers and varied accents can still reduce reliability. For the glasses to gain traction in professional settings, they will need to deliver consistently accurate transcriptions and responses.
Regulation and Adoption Challenges
It should be noted that, useful features aside, for UK businesses and regulators, Halo X poses some immediate questions. Under the General Data Protection Regulation (GDPR), the recording and processing of personal data requires a lawful basis, and covert capture of conversations could put companies in breach of strict compliance rules. The UK’s Information Commissioner’s Office (ICO) has previously warned firms against deploying surveillance technology without clear justification, and wearable devices such as Halo X may well fall into that category.
Sectors that deal with highly sensitive information, from financial services to healthcare, would face particular scrutiny. Employers would need to weigh the potential productivity benefits against the risk of breaching confidentiality or data protection law. Even if Halo X deletes recordings after transcription, the act of processing still constitutes data handling, meaning it falls under regulatory oversight.
At the same time, adoption patterns are likely to differ by industry. Technology and creative sectors, which often embrace early experimentation, may be more open to trialling the glasses. By contrast, regulated professions such as law and medicine may take a more cautious approach until clearer guidelines are established.
On the competitive side, Halo X enters a market where the biggest technology firms are betting heavily on wearables. Apple’s Vision Pro and Meta’s Ray-Ban glasses emphasise entertainment and communication, while Microsoft continues to back its HoloLens for enterprise. By focusing on continuous transcription and contextual intelligence, Halo X is carving out a different niche, but it remains to be seen whether customers will accept the trade-offs required by an always-on design.
What Does This Mean For Your Business?
For UK businesses, the decision to engage with technology like Halo X will hinge on balancing potential productivity gains against legal, ethical and reputational risks. The glasses could transform how meetings are run and how records are kept, offering speed and convenience that current tools cannot match. Yet the same features that make them useful also create liability. Firms operating in sectors with strict confidentiality obligations may find adoption more of a risk than an opportunity unless stronger safeguards are developed.
For regulators, Halo X represents the next stage in a wider debate over wearable AI. Questions around informed consent, data processing, and acceptable limits on surveillance are not new, but devices that operate constantly and silently bring these issues to the surface in sharper terms. Authorities such as the ICO will almost certainly be pressed to clarify how existing rules apply, and whether new measures may be required.
Competitors and investors will also be watching closely. If Halo X gains traction, larger players may be forced to rethink their own wearables strategies and consider more enterprise-focused designs. If the glasses falter, it will reinforce the view that the public is not ready to accept always-on recording in daily life. Either way, the launch underlines how quickly AI is moving beyond phones and laptops into devices worn on the body, with all the social and commercial consequences that brings.
For individual users, the promise of “infinite memory” is enticing but the trade-offs are stark. To wear Halo X is to invite a layer of surveillance into every interaction, whether or not others consent. That tension between utility and intrusion will decide whether the glasses become an accepted business tool or another ambitious idea that fails to gain social acceptance.