App Pays You For Your Phone Calls

A new iPhone app that pays users for recordings of their phone calls, sold on to train AI systems, rose rapidly up the U.S. App Store charts in late September, only to be taken offline days later after a security flaw exposed user data.

What Is Neon, And Who Is Behind It?

Neon is a consumer app that pays users to record their phone calls and sells the anonymised data to artificial intelligence companies for use in training machine learning models. Marketed as a way to “cash in” on phone data, it positions itself as a fairer alternative to tech firms that profit from user data without compensation. The app is operated by Neon Mobile, Inc., whose New York-based founder, Alex Kiam, is a former data broker who previously helped sell training data to AI developers.

Only Just Launched

The app launched in the United States this month (September 2025). According to app analytics tracking, Neon entered the U.S. App Store charts on 18 September, ranking 476th in the Social Networking category. Amazingly, by 25 September, it had climbed to the No. 2 spot and broken into the top 10 overall. On its peak day, it was downloaded more than 75,000 times. No official launch has yet taken place in the UK.

How Does The App Work?

Neon allows users to place phone calls using its in-app dialler, which routes audio through its servers. Calls made to other Neon users are recorded on both sides, while calls to non-users are recorded on one side only. Transcripts and recordings are then anonymised, with personal details such as names and phone numbers removed, before being sold to third parties. Neon says these include AI firms building voice assistants, transcription systems, and speech recognition tools.

Users are then paid in cash for their calls, credited to a linked account. The earnings model promises up to $30 per day, with 30 cents per minute for calls to other Neon users and lower rates for calls to non-users. Referral bonuses are also offered. While many apps routinely collect consumer data, Neon stands out because it offers direct financial incentives for the collection of real human speech, a form of data that is more intimate and sensitive than most.
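As a rough sanity check on those numbers, the sketch below (ours, not Neon’s) shows how the advertised cap interacts with the per-minute rate: at 30 cents per minute, a user would hit the $30 ceiling after 100 minutes of Neon-to-Neon calls in a day.

```python
# Illustrative only: the advertised Neon rates, not any official code.
RATE_PER_MIN = 0.30   # dollars per minute, Neon-to-Neon calls
DAILY_CAP = 30.00     # dollars per day

def daily_earnings(minutes: float) -> float:
    """Earnings for a day of Neon-to-Neon calling, capped at $30."""
    return min(minutes * RATE_PER_MIN, DAILY_CAP)

print(daily_earnings(50))   # 15.0 -> 50 minutes earns $15
print(daily_earnings(120))  # 30.0 -> the cap bites at 100 minutes
```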

The Legal Language Behind The Data Deal

Neon’s terms of service give the company an unusually broad licence to use and resell recordings. This includes a worldwide, irrevocable, exclusive right to reproduce, host, modify, distribute, and create derivative works from user submissions. The licence is royalty-free, transferable, and allows for sublicensing through multiple tiers. Neon also claims full ownership of outputs created from user data, such as training models or audio derivatives. For most users, this means permanently giving up control over how their voice data may be reused, sold, or processed in future.

Why The App Took Off So Quickly

Neon’s rapid growth appears to have been driven by a combination of curiosity, novelty, and, of course, cash and referral-led incentives. Many users were drawn in by the promise of payment for something they do every day anyway, i.e., talking on the phone. The idea of monetising phone calls is also likely to have appealed particularly to users who are increasingly aware that their data is being collected and sold elsewhere.

Social media posts promoting referral links and earnings screenshots also appear to have fuelled viral growth. At the same time, widespread interest in AI tools has normalised the idea of systems that listen, learn, and improve through exposure to large datasets.

What Went Wrong?

Unfortunately, it seems that shortly after Neon became one of the most downloaded apps in the U.S., independent analysis revealed a serious security flaw. The app’s backend was found to be exposing not only user recordings and transcripts but also associated metadata. This included phone numbers, call durations, timestamps, and payment amounts. Audio files could be accessed via direct URLs without authentication, creating a significant privacy risk for anyone whose voice was captured.
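Unauthenticated direct download links are precisely the pattern that time-limited signed URLs are designed to prevent. The following is a minimal, generic sketch of that mitigation, assuming a hypothetical recordings path; it is not Neon’s code.

```python
# Generic sketch of time-limited signed URLs, a standard mitigation for
# unauthenticated direct-download links. Secret, path, and TTL are
# illustrative assumptions; this is not Neon's implementation.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # kept server-side, never sent to clients

def sign_url(path: str, ttl_seconds: int = 300) -> str:
    """Return a URL that is only valid for ttl_seconds."""
    expires = int(time.time()) + ttl_seconds
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify(path: str, expires: int, sig: str) -> bool:
    """Reject expired links and any link whose signature does not match."""
    if time.time() > expires:
        return False
    msg = f"{path}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison

print(sign_url("/recordings/abc123.wav"))
```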

Neon’s response was to take its servers offline temporarily. In an email to users, the company said it was “adding extra layers of security” to protect data. However, the email did not specify the nature of the exposure or which user information had been compromised. The app itself remained listed in the App Store but was no longer functional due to the server shutdown.

Legal And Ethical Concerns Around Recording

Neon’s approach raises a number of legal questions, particularly around consent and data protection. For example, in the United States, phone call recording laws differ by state. Some states require consent from all participants, while others allow one-party consent. By only recording one side of a call when the other participant is not a Neon user, the company appears to be trying to avoid falling foul of two-party consent laws. However, experts have questioned whether this distinction is sufficient, especially when metadata and transcript content may still reveal personal information about the other party.

In the UK, where GDPR rules apply, the bar for lawful processing of voice data is much higher. Call recordings here are considered personal data, and companies must have a lawful basis to record and process them. This could be consent, contractual necessity, legal obligation, or legitimate interest. In practice, UK organisations must be transparent, inform all parties at the start of a call, and apply strict safeguards around storage, retention, and third-party sharing. If the recording includes special category data, such as health or political views, the legal threshold is even higher.

Why The Terms May Create Future Risk

The app’s terms of service not only cover the use of call data for AI training, but also grant Neon the right to redistribute or modify that data without further input from the user. That includes the right to create and sell synthetic voice products based on recordings, or to allow third-party developers to embed user speech in new datasets. This means that, once the data is sold, users have no real practical way of tracking where it ends up, who uses it, or for what purpose. That includes the potential for misuse in deepfake technologies or other forms of AI-generated impersonation.

Trust Issue For Neon?

The exposure of call data so early in the app’s lifecycle does seem to have caused (not surprisingly) a major trust issue. While the company has said it is fixing the security problem, it will now be subject to much higher scrutiny from app platforms, data buyers, and regulators. If Neon wants to relaunch, it may need to undergo independent security audits, publish full transparency reports, and add explicit call recording notifications and consent features. Commercially, the setback may impact deals with AI firms if those companies decide to distance themselves from controversial datasets.

What About The AI Companies Using Voice Data?

For companies developing speech models, the incident highlights the importance of knowing exactly how training data has been sourced. For example, buyers of voice datasets will now need to ask more detailed questions about licensing, user consent, jurisdiction, and security. Any material flaw in the source of data can invalidate models downstream, especially if it leads to legal challenges or regulatory action. Data provenance and ethical sourcing are likely to become higher priorities in due diligence processes for commercial AI development.

Issues For Users

While Neon claims to anonymise data, voice recordings carry inherent risks. For example, voice is increasingly used as a biometric identifier, and recorded speech can be used to train systems that replicate tone, mannerisms, and emotional expression. For individuals, this could lead to impersonation or fraud. For businesses, there is a separate concern: if employees use Neon to record work calls, they may be exposing client conversations, proprietary information, or regulated data without authorisation. This could result in GDPR breaches, disciplinary action, or reputational harm. Companies should review their mobile and communications policies and block unvetted recording apps from use on managed devices.

Regulators And App Platforms

The rise and fall of Neon within a matter of days shows how quickly new data models can go from idea to mass adoption. Platforms such as the App Store are now likely to face more pressure to assess the privacy implications of data-for-cash apps before they are allowed to scale. Referral schemes that incentivise covert recording or encourage over-sharing are likely to be reviewed more closely. Regulators may also revisit guidance on audio data, especially where recordings are repackaged and resold to machine learning companies. Voice data governance, licensing standards, and ethical AI sourcing are likely to become more prominent areas of focus in the months ahead.

Evaluating Tools Like Neon

For organisations operating in the UK, the launch of Neon should serve as a prompt to tighten call recording policies and educate staff on data risk. If a similar service becomes available locally, any use would need a clear lawful basis, robust security controls, and transparency for all parties involved. This includes notifying people before recording begins, limiting the types of calls that can be recorded, and putting strict controls on where that data is sent. In regulated industries, the use of external apps to record voice data could also breach sector-specific rules or codes of conduct. A risk assessment and data protection impact assessment (DPIA) would be required in most business contexts.

What Does This Mean For Your Business?

The Neon episode shows just how fast the appetite for AI training data is reshaping the boundaries of consumer tech. In theory, Neon offered a way for users to reclaim some value from a data economy that usually runs without them. In practice, it seems to have revealed how fragile the balance is between innovation and responsibility. When that data includes private conversations, even anonymised, the margin for error is narrow. Voice is not like search history or location data because it’s personal, expressive, and hard to replace if misused.

What happened with Neon also appears to show how little control users have once they opt in. For example, the terms of service handed the company almost total freedom to store, repackage, and resell recordings and outputs, with no practical ability for users to track where their voice ends up. Even if users are comfortable making that trade, the people they speak to may not be. From an ethical standpoint, recording conversations for profit, especially with people unaware they are being recorded, raises serious questions about consent and accountability.

For UK businesses, the risks are not just theoretical. If employees start using similar apps to generate income, they could unintentionally upload sensitive or regulated information to unknown third parties. That creates exposure under GDPR, commercial contracts, and sector-specific codes, and may breach client trust. Businesses will need to move quickly to block such apps on company devices and reinforce clear internal rules around recording, call handling, and use of AI data services.

For AI companies, the lesson is equally clear. The hunger for diverse, real-world training data must be matched with rigorous scrutiny of how that data is sourced. Datasets obtained through poorly controlled consumer schemes are more likely to carry risk, not only in terms of legality but also model quality and future auditability. Voice data is especially sensitive, and provenance will now need to be a standard consideration in every procurement and development process.

More broadly, Neon’s brief rise exposes the gap between platform rules, regulatory oversight, and the speed of public adoption. App marketplaces now face growing pressure to vet data-collection models more stringently, particularly those that monetise content recorded from other people. It also raises a wider challenge: how to build the AI systems people want without normalising tools that trade in privacy. As interest in AI grows, the burden of building that future responsibly will only increase for every stakeholder involved.

Ad‑Free Facebook & Insta … For £3.99 Monthly

Meta will let UK users pay a monthly fee to use Facebook and Instagram without adverts, introducing a lower‑priced “consent or pay” model in response to UK data protection guidance.

Users Offered A Choice

Meta has confirmed that UK users will soon be offered a choice, i.e., continue using Facebook and Instagram for free with personalised ads, or pay a monthly subscription to remove them. The subscription will cost £2.99 per month when accessed on the web, or £3.99 per month on iOS and Android. These rates will apply to a user’s first Meta account. If additional Facebook or Instagram accounts are linked via Meta’s Accounts Centre, extra accounts can be added to the subscription for £2 a month (web) or £3 a month (mobile). A dismissible notification will begin appearing to users in the coming weeks, giving adults aged 18 and over time to review and decide.

When?

Meta has not provided an exact date for when the ad-free subscription will go live in the UK, but it has stated that it will begin rolling out “in the coming weeks” as of its official announcement on 26 September 2025.

How The Subscription Model Will Work

Meta says subscribing will remove all ads from Facebook and Instagram feeds, Stories, Reels, and other surfaces. It also says that subscriber data will no longer be used to deliver personalised advertising, and that it is charging a higher price for mobile subscriptions because of Apple’s and Google’s in-app transaction fees.

The subscription applies across all accounts linked to a user’s Meta Accounts Centre. This means that users managing both a personal and a business account, or other multiple accounts, can pay one primary fee and then add extra accounts at a reduced monthly rate.
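Based on the announced prices, the total cost for a set of linked accounts works out as in this small helper, which is purely illustrative:

```python
# Illustrative helper based on Meta's announced UK prices: first account
# 2.99 (web) or 3.99 (mobile) GBP per month, each additional linked
# account 2.00 (web) or 3.00 (mobile). Function name is our own.
PRICES = {
    "web":    {"first": 2.99, "extra": 2.00},
    "mobile": {"first": 3.99, "extra": 3.00},
}

def monthly_cost(accounts: int, platform: str = "web") -> float:
    """Total monthly subscription for a number of linked accounts."""
    if accounts < 1:
        return 0.0
    p = PRICES[platform]
    return round(p["first"] + (accounts - 1) * p["extra"], 2)

print(monthly_cost(1, "web"))     # 2.99
print(monthly_cost(3, "mobile"))  # 3.99 + 2 x 3.00 = 9.99
```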

People who choose not to subscribe will continue to see ads, but will retain access to existing tools such as Ad Preferences, activity-based targeting controls, and the “Why am I seeing this ad?” explainer.

Why Meta Is Making This Change

It seems that the subscription model is being launched in direct response to regulatory pressure in the UK. For example, Meta said the approach was developed following “extensive engagement” with the Information Commissioner’s Office (ICO), which has recently clarified that online personalised advertising should be treated as a form of direct marketing. Under UK data protection law, users have the right to object to their data being used in this way.

In a high-profile settlement earlier this year, Meta agreed to stop using the personal data of human rights campaigner Tanya O’Carroll for targeted advertising. The ICO publicly supported O’Carroll’s position and urged Meta to offer clearer choices to users over how their data is used. Meta now says the subscription offers a fair and transparent way for people to choose whether to consent to personalised advertising or pay to avoid it entirely.

The UK Regulatory Context

The ICO’s interpretation of data rights has shaped the new model. For example, its March 2025 statement emphasised that organisations must give people a way to opt out of their personal data being used for direct marketing, including targeted online ads. Following its settlement with Meta, the ICO confirmed that the company had significantly reduced the originally proposed subscription price and welcomed the introduction of the new model as an example of compliance with UK data protection obligations.

It should also be noted that the UK pricing tier is substantially lower than the EU equivalent, where Meta had introduced a similar subscription model in 2023 priced at around €9.99 per month. That model attracted regulatory criticism, fines, and calls for more privacy-friendly alternatives.

The European Backdrop

In April 2024, the European Data Protection Board published an opinion stating that “consent or pay” models must not pressure people into accepting data use. In their view, consent must be freely given and fully informed, and platforms like Facebook must offer real alternatives rather than a binary choice. Regulators have argued that due to Meta’s market dominance, users may feel they have no realistic option but to accept personal data tracking or start paying to keep using services that are widely embedded in social and professional life.

In April 2025, Meta was fined €200 million by the European Commission under the Digital Markets Act for failing to provide a compliant version of its subscription model across the EU. Meta is appealing the decision but has framed the UK rollout as an example of how “pro-innovation” regulatory engagement can lead to workable outcomes.

What It Means For Everyday Users

For individual users in the UK, the subscription appears to create a direct trade-off between privacy and cost. For example, those who do not want to see ads can now remove them for a relatively low monthly fee, particularly when compared to the higher pricing seen in Europe. The pricing structure may also appeal to users who manage multiple accounts, as they can cover all of them under one bundled subscription.

People who continue using the free tier will still see ads, but Meta says they will remain in control of how their data is used to shape ad experiences. Existing privacy tools will remain available, including options to turn off activity-based ad targeting and to manage interests and advertiser interactions.

And For Business Users?

UK business users who rely on Facebook and Instagram for customer engagement, lead generation, or ecommerce should not see significant disruption. The free tier remains intact, and most users are expected to continue using the platform without subscribing, at least initially.

However, business users who also use Facebook and Instagram for personal reasons may choose to pay for the ad-free experience. This could help reduce distraction, but it also raises questions for businesses managing multiple accounts. Meta’s Accounts Centre lets users link multiple profiles, but each additional account incurs a fee, potentially adding monthly costs for businesses using more than one profile across different functions.

Advertisers

The launch of the subscription model essentially introduces a new form of audience segmentation. People who pay for the ad-free experience will not be shown any ads and will also be excluded from data processing for advertising purposes. This means they will not be available for targeting, retargeting, or inclusion in lookalike audience models.

In practical terms, this could result in slightly smaller campaign reach, reduced effectiveness of retargeting strategies, and less data for ad performance optimisation. However, the actual impact will depend on how many people choose to subscribe. Meta has positioned the new subscription as a supplement rather than a replacement for its ad business, which continues to power most of its revenue and remains core to its UK economic contribution.

Competitors

The move follows broader industry trends, with other major platforms already offering ad-free tiers. For example, YouTube Premium removes all adverts across videos and music and charges more than Meta’s proposed rate. X (formerly Twitter) offers a Premium Plus plan to remove almost all ads, and Snapchat has experimented with removing ads from key surfaces in its Platinum plan.

Meta’s UK pricing is among the lowest, undercutting most other ad-free subscription options. This may give the company a competitive edge with privacy-conscious users and could create pressure on rivals to adjust pricing or introduce similar models.

A Compliance Measure … And An Opportunity

Meta has positioned the change as a regulatory compliance measure, but it also presents an opportunity to test new revenue streams and reduce legal exposure. By charging a relatively low price and tying it to UK-specific guidance, the company is attempting to avoid further fines and litigation while learning how users respond to a consent-based subscription model.

The pricing structure reflects wider industry dynamics, including the growing cost of mobile transactions and the limitations placed on data processing by new data laws. Meta has also used the announcement to promote the economic value of its advertising tools, saying its platforms supported over 357,000 jobs and £65 billion in UK economic activity in 2024 alone.

Others Who Will Be Watching Closely

Those likely to be most affected by or involved in the rollout include regulators, privacy campaigners, advertisers, and everyday users of the platforms. The ICO is expected to monitor how the subscription model works in practice and whether it meets legal standards for free and informed consent. Privacy groups may also be looking for evidence that Meta genuinely stops using subscriber data for advertising. Advertisers will be watching for any impact on campaign performance, particularly around reach and targeting. Rival platforms in the UK and beyond may also be studying how effectively Meta manages the balance between regulation, user experience, and revenue.

Concerns

Privacy experts have already raised some concerns that the model places a price tag on privacy, forcing people to pay to prevent their data being used for tracking and targeting. Critics argue that data protection rights should not depend on a person’s ability to pay. The ICO’s current position is that the subscription represents a valid approach to consent, but some legal observers suggest further scrutiny may follow if complaints emerge about how the choice is presented or how data is processed.

Campaigners also point out that a paid subscription will not necessarily solve deeper issues with surveillance advertising, including the scale of data collection and the risks it poses to vulnerable users. Others have noted that people in low-income groups, young users, and those with limited digital literacy may be less able to make informed decisions or afford the subscription, reinforcing digital inequality.

What Does This Mean For Your Business?

Meta’s new ad-free subscription introduces a clearer line between paid privacy and free access, but it also raises significant questions about fairness, regulation, and business impact. For UK businesses, the ability to continue reaching a large audience on Facebook and Instagram remains largely unchanged in the short term. However, if a growing number of users pay to avoid ads, the addressable audience for paid campaigns may begin to shrink, thereby making it harder for small firms to rely on low-cost, highly targeted advertising. Meta’s economic contribution to UK advertising is significant, but maintaining that value depends on how many users continue opting into the ad-supported model.

The low UK price point is likely to encourage adoption compared to similar schemes in the EU, and it gives Meta a way to meet regulatory demands without heavily disrupting its business model. It also gives other tech firms a benchmark for what regulators might accept in similar contexts. For regulators and privacy advocates, the coming months will be a test of whether offering a paid alternative is enough to uphold the principle of free and informed consent.

For users, the offer may feel fairer than being given no choice at all, but the framing still forces a trade-off that not everyone will find acceptable. For competitors, the low pricing could trigger reassessments of their own ad-free offerings. For campaigners, the subscription will not address wider concerns about surveillance-based business models, and for Meta, the rollout could either become a blueprint for future compliance or a flashpoint if uptake leads to new scrutiny.

Company Check : Claude In Copilot & Google Data Commons

Microsoft has confirmed it is adding Anthropic’s Claude models to its Copilot AI assistant, giving enterprise users a new option alongside OpenAI for handling complex tasks in Microsoft 365.

Microsoft Expands Model Choice In Copilot

Microsoft has begun rolling out support for Claude Sonnet 4 and Claude Opus 4.1, two of Anthropic’s large language models, within Copilot features in Word, Excel, Outlook and other Microsoft 365 apps. The update applies to both the Copilot “Researcher” agent, used for generating reports and conducting deep analysis, and Copilot Studio, the tool businesses use to build their own AI assistants.

The move significantly expands Microsoft’s model options. Until now, Copilot was powered primarily by OpenAI’s models, such as GPT‑4 and GPT‑4 Turbo, which run on Microsoft’s Azure cloud. With the addition of Claude, Microsoft is now allowing businesses to choose which AI model they want to power specific tasks, with the aim of offering more flexibility and improved performance in different enterprise contexts.

Researcher users can now toggle between OpenAI and Anthropic models once enabled by an administrator. Claude Opus 4.1 is geared towards deep reasoning, coding and multi‑step problem solving, while Claude Sonnet 4 is optimised for content generation, large‑scale data tasks and routine enterprise queries.
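In effect, this kind of choice amounts to routing each task to the model best suited to it. The sketch below illustrates the general idea of a task-to-model routing table; the model identifiers and the route_task helper are illustrative assumptions, not Microsoft’s API.

```python
# Hypothetical task-to-model routing, illustrating the multi-model
# choice described above; all identifiers are illustrative.
TASK_ROUTES = {
    "deep_reasoning":     "claude-opus-4.1",
    "coding":             "claude-opus-4.1",
    "content_generation": "claude-sonnet-4",
    "bulk_data":          "claude-sonnet-4",
}
DEFAULT_MODEL = "openai-default"  # assumed fallback to the OpenAI model

def route_task(task_type: str) -> str:
    """Pick a model for a task, falling back to the default."""
    return TASK_ROUTES.get(task_type, DEFAULT_MODEL)

print(route_task("coding"))           # claude-opus-4.1
print(route_task("meeting_summary"))  # openai-default
```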

Why Microsoft Is Doing This Now

Microsoft has said the goal is to give customers access to “the best AI innovation from across the industry” and to tailor Copilot more closely to different work needs. However, the timing also reflects a broader shift in Microsoft’s AI strategy.

While Microsoft remains OpenAI’s largest financial backer and primary cloud host, the company is actively reducing its dependence on a single partner. It is building its own in‑house model, MAI‑1, and has recently confirmed plans to integrate AI models from other firms such as Meta, xAI, and DeepSeek. Anthropic’s Claude is the first of these to be made available within Microsoft 365 Copilot.

This change also follows a wave of high‑value partnerships between OpenAI and other tech companies. For example, in recent weeks, OpenAI has secured billions in new infrastructure support from Nvidia, Oracle and Broadcom, suggesting a broader distribution of influence across the AI landscape. Microsoft’s latest move helps hedge against any future change in the balance of that relationship.

Microsoft And Its Customers

The introduction of Claude into Copilot is being made available first to commercial users who are enrolled in Microsoft’s Frontier programme, i.e. the early access rollout for experimental Copilot features. Admins must opt in and approve access through the Microsoft 365 admin centre before staff can begin using Anthropic’s models.

Importantly, the Claude models will not run on Microsoft infrastructure. Anthropic’s AI systems are currently hosted on Amazon Web Services (AWS), meaning that any data processed by Claude will be handled outside Microsoft’s own cloud. Microsoft has made clear that this data flow is subject to Anthropic’s terms and conditions.

This external hosting has raised concerns in some quarters, particularly for organisations operating under strict compliance or data residency requirements. Microsoft has responded by emphasising the opt‑in nature of the integration and the ability for administrators to fully control which models are available to users.

For Microsoft, the move appears to strengthen its claim to be a platform‑agnostic AI provider. By integrating Anthropic alongside OpenAI and offering seamless switching between models in both Researcher and Copilot Studio, Microsoft positions itself as a central point of access for enterprise AI, regardless of where the models originate.

Business Relevance And Industry Impact

The change is likely to be welcomed by business users seeking more powerful or specialised models for specific workflows. It may also create new pressure on OpenAI to continue improving performance and pricing for enterprise use.

From a competitive standpoint, Microsoft’s ability to offer Claude inside its productivity suite puts further distance between Copilot and rival AI products from Google Workspace and Apple’s AI integrations. It also allows Microsoft to keep pace with fast‑moving developments in multi‑model orchestration, the ability to run different tasks through different models depending on context or output goals.

For Microsoft’s competitors in the cloud and productivity space, the integration also highlights a growing interoperability challenge. Anthropic is mainly backed by Amazon, and its models run on both AWS and Google Cloud. Microsoft’s decision to incorporate those models into 365 tools represents a break from traditional cloud loyalty and suggests that, in the era of generative AI, usability and capability may matter more than where the models are hosted.

The Google Data Commons Update

While Microsoft is focusing on model integration, Google has taken a different step by making structured real‑world data easier for AI developers to use. This month, it launched the Data Commons Model Context Protocol (MCP) Server, a new tool that allows developers and AI agents to access public datasets using plain natural language.

The MCP Server acts as a bridge between AI systems and the vast Data Commons database, which includes datasets from governments, international organisations, and local authorities. This means that developers can now build agents that access census data, climate statistics or economic indicators simply by asking for them in natural language, without needing to write complex code or API queries.
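Under the hood, MCP is built on JSON-RPC, with clients invoking “tools” that the server exposes. The sketch below shows the general shape of such a call from Python; the endpoint URL and the get_observations tool name are assumptions for illustration, not Google’s documented interface.

```python
# Hypothetical sketch of calling a Data Commons MCP server over JSON-RPC.
# The endpoint URL and tool name are illustrative assumptions.
import json
import urllib.request

MCP_URL = "http://localhost:8080/mcp"  # assumed local server address

def call_tool(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Send a single MCP 'tools/call' JSON-RPC request, return the reply."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    req = urllib.request.Request(
        MCP_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# An agent could ground its answer in an official statistic first:
print(call_tool("get_observations", {"query": "population of Kenya, 2023"}))
```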

The launch aims to address two long‑standing challenges in AI: hallucination and poor data quality. For example, many generative models are trained on unverified web data, which makes them prone to guessing when they lack information. Google’s approach should therefore help ground AI responses in verifiable, structured public datasets, improving both reliability and relevance.

ONE Data Agent

One of the first use cases is the ONE Data Agent, created in partnership with the ONE Campaign to support development goals in Africa. The agent uses the MCP Server to surface health and economic data for use in policy and advocacy work. However, Google has confirmed that the server is open to all developers, and has released tools and sample code to help others build similar agents using any large language model.

For Google, this expands its role in the AI ecosystem beyond model development and into data infrastructure. For developers, it lowers the technical barrier to creating trustworthy data‑driven AI agents and opens up new opportunities in sectors such as education, healthcare, environmental analysis and finance.

What Does This Mean For Your Business?

The addition of Claude to Microsoft 365 Copilot marks a clear move towards greater AI optionality, but it also introduces new complexities for both Microsoft and its enterprise customers. While the ability to switch between models gives businesses more control and the potential for improved task performance, it also means IT teams must assess where and how their data is being processed, especially when it leaves the Microsoft cloud. For some UK businesses operating in regulated sectors, this could raise concerns around data governance, third-party hosting, and contractual clarity. Admin-level opt-in gives organisations some control, but the responsibility for managing risk now falls more squarely on IT decision-makers.

For Microsoft, this is both a technical and strategic milestone. The company is reinforcing its Copilot brand as a neutral gateway to the best models available, regardless of origin. It sends a signal that AI delivery will be less about vendor exclusivity and more about task-specific effectiveness. For competitors, the integration of Anthropic models into Microsoft 365 may accelerate demand for open, composable AI stacks that can handle model switching, multi-agent coordination, and fine-grained prompt routing, especially in workplace applications.

Google’s decision to open up real-world data through the MCP Server supports a different but equally important part of the AI ecosystem. For example, many UK developers struggle to ground their AI agents in reliable facts without investing heavily in custom pipelines. The MCP Server simplifies this process, making structured public data directly accessible in plain language. If adopted widely, it could help reduce hallucinations and increase the usefulness of AI across sectors such as policy, healthcare, sustainability, and finance.

Together, these announcements suggest that the next phase of AI will be shaped not only by which models are most powerful, but also by who can offer the most useful data, the clearest integration paths, and the most practical tools for real-world business use. For UK organisations already exploring generative AI, both moves offer new possibilities, but also demand closer scrutiny of how choices around models and data infrastructure will affect operational control, user trust, and long-term value.

Security Stop-Press: Insider Threats : BBC Reporter Shares Story

Cybercriminals are increasingly targeting employees as a way into company systems, with insider threats now posing a serious and growing risk.

In one recent case, a BBC reporter revealed how a ransomware gang tried to recruit him through a messaging app, offering a share of a ransom if he provided access to BBC systems. The attempt escalated into an MFA bombing attack on his phone, a method used to pressure targets into approving login requests.

This form of insider targeting is becoming more common. For example, the UK’s Information Commissioner’s Office recently found that over half of insider cyber attacks in schools were carried out by students, often using guessed or stolen credentials. In the private sector, insiders have caused major breaches, including a former FinWise employee who accessed data on nearly 700,000 customers after leaving the firm.

Security researchers warn that ransomware groups now actively seek staff willing to trade access for money, rather than relying solely on technical exploits.

To reduce the risk, businesses are advised to enforce strong offboarding, monitor user behaviour, implement phishing-resistant MFA, and raise staff awareness about insider recruitment tactics.

Sustainability-In-Tech : Robots Refurbish Your Old Laptops

A research team in Denmark is building an AI‑driven robot to refurbish laptops at scale, offering a practical route to reduce e‑waste while creating new value for businesses.

RoboSAPIENS

At the Danish Technological Institute (DTI) in Odense, robotics researchers are developing a system that uses computer vision, machine learning and a robotic arm to automate common refurbishment tasks on used laptops. The project is part of RoboSAPIENS, an EU‑funded research initiative coordinated by Aarhus University that focuses on safe human‑robot collaboration and adaptation to unpredictable scenarios.

DTI’s contribution to the programme centres on robot-assisted remanufacturing. The goal is to design systems that can adapt to product variation, learn new disassembly processes, and maintain high safety standards even when faced with unfamiliar conditions. DTI’s Odense facility hosts dedicated robot halls and test cells where real‑world use cases like this are trialled.

What The Robot Can Do And How It Works

The DTI prototype has been trained to carry out laptop screen replacements, a time‑consuming and repetitive task that requires precision but often suffers from low labour availability. The system uses a camera to identify the laptop model and selects the correct tool from a predefined set. It then follows a sequence of learned movements to remove bezels, undo fixings, and lift out damaged screens for replacement.

The robot currently handles two laptop models and their submodels, with more being added as the AI’s training expands. Crucially, the system is designed with humans in the loop. For example, if it encounters unexpected variables, such as an adhesive where it expects a clip, or a screw type it hasn’t seen, it alerts a technician for manual intervention. This mixed‑mode setup allows for consistent output while managing the complexity of real‑world devices.
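A minimal sketch of that mixed-mode, human-in-the-loop control flow might look like the following; every name and the simulated sensor reading are hypothetical, not DTI’s code.

```python
# Hypothetical human-in-the-loop refurbishment loop: run each learned
# step, and escalate to a technician whenever sensing disagrees with
# what the model expects. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Step:
    action: str     # e.g. "remove_bezel", "undo_screw"
    expected: str   # fixing the learned model expects, e.g. "clip"

class UnexpectedCondition(Exception):
    """Raised when sensing disagrees with the learned device model."""

# Simulated sensor readings; a real system would use camera/force data.
SIMULATED = {"remove_bezel": "adhesive"}  # adhesive where a clip belongs

def observe(step: Step) -> str:
    """Stand-in for vision and force sensing."""
    return SIMULATED.get(step.action, step.expected)

def execute(step: Step) -> None:
    """Stand-in for the learned robot motion for this step."""
    print(f"executing {step.action}")

def refurbish(steps: list[Step]) -> None:
    for step in steps:
        found = observe(step)
        if found != step.expected:
            raise UnexpectedCondition(
                f"{step.action}: expected {step.expected}, found {found}")
        execute(step)

try:
    refurbish([Step("undo_screw", "screw_m2"), Step("remove_bezel", "clip")])
except UnexpectedCondition as err:
    print("Technician intervention required:", err)
```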

The Size And Urgency Of The E‑Waste Problem

Electronic waste (e‑waste) is the fastest‑growing waste stream in the world. It typically includes discarded smartphones, laptops, tablets, printers, monitors, TVs, cables, chargers, and other electrical or electronic devices that are no longer wanted or functioning. The UN’s 2024 Global E‑Waste Monitor reports that 62 million tonnes of e‑waste were generated globally in 2022, with less than 25 per cent formally collected and recycled. If current trends continue, global e‑waste is expected to reach 82 million tonnes by 2030. The 2022 total alone is roughly equivalent to 1.55 million 40‑tonne trucks, enough, bumper to bumper, to circle the equator.

Unfortunately, the UK is among the highest generators of e‑waste per capita in Europe. Although progress has been made under the WEEE (Waste Electrical and Electronic Equipment) directive, much of the country’s used electronics still goes uncollected or unrepaired, or is recycled in ways that fail to recover valuable materials.

The Benefits

For IT refurbishment firms and IT asset disposition (ITAD) providers, robotic assistance could offer some clear productivity gains. Automating standard tasks such as screen replacements could reduce handling time and increase throughput, while also reducing strain on skilled technicians who can instead focus on more complex repairs or quality assurance.

Mikkel Labori Olsen from DTI points out that a refurbished laptop can sell for around €200, while the raw materials reclaimed through basic recycling may only be worth €10. As Olsen explains: “By changing a few simple components, you can make a lot of value from it instead of just selling the recycled components”.

Corporate IT buyers also stand to benefit. For example, the availability of affordable, high‑quality refurbished laptops reduces procurement costs and supports carbon reporting by lowering embodied emissions compared to buying new equipment. For local authorities and public sector buyers, refurbished devices can also be a practical tool in digital inclusion schemes.

Manufacturers may also see long‑term benefits. As regulation around ‘right to repair’ and product lifecycle responsibility tightens, collaborating with refurbishment programmes could help original manufacturers retain brand control, limit counterfeiting, and benefit from downstream product traceability.

Challenges And Technical Barriers

Despite its promise, robotics in refurbishment faces multiple challenges and barriers. For example, one of the biggest is product variation. Devices differ widely by brand, model, year and condition. Small differences in screw placement, adhesives, or plastic housing can trip up automation systems. Expanding the robot’s training set and adaptability takes time and requires high‑quality datasets and machine learning frameworks capable of generalisation.

Device design itself is another barrier. For example, many modern laptops are built with glued‑in components or fused assemblies that make disassembly difficult for humans and robots alike. While new EU rules will require smartphones and tablets to include removable batteries by 2027, current generation devices often remain repair‑hostile.

Safety is also critical. Damaged batteries in e‑waste can pose serious fire risks. Any industrial robot working with used electronics must be designed to detect faults and stop operations immediately when hazards are detected. The DTI system integrates vision and force sensors and follows strict safety protocols to ensure safe operation in shared workspaces.

Cost also remains a factor. For example, integrating robotic systems into refurbishment lines requires upfront investment. Firms will, therefore, need a steady supply of similar product types to ensure return on investment. For this reason, early adopters are likely to be larger ITAD providers or logistics firms working with bulk decommissioned equipment.

Global Trend

The Danish initiative forms part of a wider movement towards circular electronics, where products are repaired, reused or repurposed instead of being prematurely discarded.

Elsewhere in Europe, Apple continues to scale up its disassembly robots to recover rare materials from iPhones. These systems, including Daisy and Taz, can disassemble dozens of iPhone models and separate valuable elements like tungsten and magnets with high efficiency.

In the UK, for example, the Royal Mint has opened a precious metals recovery facility that uses clean chemistry to extract gold from discarded circuit boards. The plant, which can process up to 4,000 tonnes of material annually, uses a technology developed in Canada that avoids the need for high‑temperature smelting and reduces waste.

Further afield, AMP Robotics in the United States is deploying AI‑driven robotic arms in e‑waste sorting facilities. Their systems use computer vision to identify and pick electronic components by material type, size or brand, improving the speed and accuracy of downstream recycling processes.

Consumer‑focused companies such as Fairphone and Framework are also playing a role. Their modular designs allow users to replace key components like batteries and displays without specialist tools, reducing the refurbishment workload and making devices more accessible to end‑users who want to repair rather than replace.

Policy And Design Are Starting To Align With The Technology

It’s worth noting here that policy support is helping these innovations gain traction. For example, the EU’s Right to Repair directive was adopted in 2024, thereby giving consumers the right to request repairs for a wider range of products, even beyond warranty periods. Also, starting this year, smartphones and tablets sold in the EU will carry repairability scores on their packaging and, by 2027, batteries in all portable devices sold in the EU must be removable and replaceable by the user.

These regulatory changes aim to create an ecosystem where repair becomes normalised, standardised and commercially viable. For AI‑powered refurbishment systems like the one being developed in Denmark, the effect is twofold: devices will become easier to work with, and customer demand for professionally refurbished goods is likely to grow.

What Does This Mean For Your Organisation?

Robotic refurbishment, as demonstrated by the Danish system, could offer a realistic way to retain value in discarded electronics and reduce unnecessary waste. Unlike generalised recycling, which often produces low-grade materials from destroyed components, this approach focuses on targeted interventions that return functioning devices to market. For ITAD firms, the commercial case lies in increasing throughput and reliability while maintaining quality. For policymakers, it provides a scalable, auditable method to extend product life and reduce landfill. And for consumers and procurement teams, it promises more affordable and sustainable options without compromising performance.

The key to unlocking these benefits is likely to be adaptability. For example, in refurbishment settings, no two devices are ever quite the same. Variations in hardware, wear, and prior use demand systems that can recognise what they are working with and adjust their actions accordingly. The Danish project appears to directly address this by blending AI recognition with human oversight. It’s not about replacing skilled workers, but about using automation to remove tedious, repetitive tasks that slow down throughput and cause bottlenecks.

For UK businesses, the implications are increasingly relevant. Many corporate IT departments are under pressure to decarbonise procurement and demonstrate compliance with sustainability goals. Refurbished devices, when done well, offer a lower‑cost, lower‑impact alternative to new equipment. If robotic systems can scale this model and deliver consistent quality, they may help more UK organisations include reuse as part of their IT lifecycle planning. In parallel, IT service providers that adopt this kind of automation may gain a competitive edge by increasing service volume while managing rising labour costs.

Manufacturers, meanwhile, will need to keep pace with changing expectations around design for repair. As regulation tightens and customer preferences shift, it is no longer enough to produce devices that work well out of the box. The full product lifecycle, including second‑life refurbishment, is coming into scope, and robots like those at DTI could help bridge the technical gap between design limitations and sustainable reuse.

Although the Danish system sounds innovative and promising, it’s certainly not a silver bullet, and there are still challenges in economics, safety, and system complexity. However, with the right training data, safety protocols, and regulatory backing, robotic refurbishment may have the potential to become a practical part of the circular economy, not just in Denmark, but across industrial repair centres, logistics hubs and IT recovery operations worldwide.

Video Update : How To Schedule Tasks in ChatGPT

It’s easier than ever to set up scheduled tasks in ChatGPT, so whether you want a summary of the news each week or updates about your stock portfolio every morning, this video shows how you can get ChatGPT to run scheduled tasks for you, with (importantly) an email sent to you as well, if you like.

[Note – To watch this video without glitches/interruptions, it may be best to download it first]

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a jargon-free style.

Archives