UK Digital ID Mandatory By 2029
UK Prime Minister Keir Starmer has announced that digital ID will become mandatory to prove the right to work in the UK by 2029, triggering both ministerial praise and civil liberties concerns.
Interestingly, a petition on the UK Parliament’s petitions site (https://petition.parliament.uk/) attracted nearly three million signatures from people opposed to the plan within a week of the announcement.
Rolled out by 2029
The Prime Minister has confirmed that a new digital identity scheme will be introduced across the UK by 2029, with every citizen and legal resident required to use a digital ID to prove their right to work.
Mandatory
The new ID will be free and optional for those not seeking employment, but will be compulsory for anyone taking up paid work. The government says it will replace paper documents and National Insurance numbers for right-to-work checks, with full implementation expected before the next general election. The government also says that, by law, this must take place no later than August 2029.
What Form Will It Take?
The government says the digital ID will be a secure, app-based credential stored on people’s mobile phones using the GOV.UK Wallet system. It will include core personal information such as name, date of birth, nationality or residency status, and a photo. The app will act as a proof of identity and legal right to work, with data encrypted and held directly on the user’s device.
The system has been designed to allow users to share only the information needed in each situation, for example, confirming eligibility to work without revealing unrelated personal details. If a phone is lost or stolen, the credential can be revoked remotely and reissued.
The government says this will replace the need to provide paper copies of documents such as passports or residence permits, and will become the standard method of proving work eligibility across the UK labour market.
Why?
The government says the scheme is designed to reduce illegal working, deter unauthorised migration, and improve the consistency of identity checks. Ministers argue that illegal employment remains a key draw for people entering the UK without permission, and that a digital system will make enforcement more effective.
The new ID is also framed as a broader tool for improving access to public services. It is hoped that over time, it could be used to simplify applications for childcare, benefits, driving licences, and tax records, although these uses will be optional, not mandatory.
In a statement issued through Downing Street, Prime Minister Keir Starmer said: “Digital ID is an enormous opportunity for the UK. It will make it tougher to work illegally in this country, making our borders more secure.”
However, some opponents believe the move is motivated more by political positioning than practical enforcement. For example, with pressure mounting over small boat crossings and immigration policy, privacy campaigners argue that the scheme could have been designed primarily to reassure voters rather than address the root causes of illegal working.
Previous attempts
It should be noted here that this is not the first time a UK government has proposed a national identity scheme. Back in the early 2000s, then-Prime Minister Tony Blair introduced plans for a physical ID card, which became law in 2006. The cards were intended to help combat terrorism, immigration abuse, and benefit fraud, and were linked to a central National Identity Register.
However, the scheme faced widespread opposition on civil liberties grounds and was criticised for being expensive, intrusive, and ineffective. In 2010, the incoming Conservative-Liberal Democrat coalition government scrapped the programme and destroyed the database. At the time, the Home Secretary called it a “high-cost, high-risk” scheme that offered little public benefit.
Although the new digital ID plan differs in format, with no central identity register and no requirement to carry or show ID in public, it seems that many of the same concerns about privacy and state overreach have re-emerged.
Encrypted
The digital ID will be held on a person’s phone as a secure app-based wallet credential, similar in form to the NHS App or mobile payment cards, using encrypted, on-device storage. If a phone is lost, the credential can be revoked immediately and reissued to a new device.
For Working Legally
Current right-to-work rules already require employers to check and retain copies of identity documents, such as passports or biometric residence permits, or to use the Home Office online service. Civil penalties for non-compliance can be up to £60,000 per illegal worker for repeat offences.
Ministers say the new digital ID will therefore reduce the risk of fraud, speed up hiring, and close off loopholes that currently allow the use of borrowed or forged documents. It is also intended to help enforcement agencies identify patterns of non-compliance across the labour market, including in casual and gig economy roles.
According to the Cabinet Office, “a new streamlined digital system to check right to work will simplify the process, drive up compliance, crack down on forged documents and create intelligence data on businesses.”
Border Security
The Prime Minister has also presented the policy as a key part of the government’s approach to tackling illegal migration, an issue that has been much in the news lately, repeating his claim that digital ID “will make it tougher to work illegally in this country, making our borders more secure.”
He added: “We are doing the hard graft to deliver a fairer Britain for those who want to see change, not division. That is at the heart of our Plan for Change.”
Ministers argue that access to informal work is a major incentive for people entering the country without permission. By requiring all legal workers to use digital ID, the government hopes to reduce the so-called “pull factor” of illegal employment.
What Is (And Isn’t) Required
The government says the digital ID will be required only for those seeking paid employment. There are no plans to require it for everyday activities such as accessing healthcare or public spaces, and people will not be expected to carry proof of identity at all times. For example, the government materials explicitly state that “there will be no requirement for individuals to carry their ID or be asked to produce it” outside of employment-related checks.
However, the digital ID is expected to become increasingly useful for other tasks, such as accessing childcare, welfare, or tax records. It’s understood these uses will be optional, with ministers presenting them as convenience features rather than legal requirements.
Access And Inclusion
While the system is designed primarily for smartphone use, ministers have also confirmed that physical alternatives will be made available for people who are digitally excluded. This may include older people, those experiencing homelessness, or individuals without regular access to internet-connected devices.
Consultation Planned
A formal public consultation will launch later this year, seeking input on how to design the system inclusively. The government says this will include engagement with charities and local authorities, as well as face-to-face outreach and support services.
The Cabinet Office says the aim is to create “a service that takes the best aspects of the digital identification systems that are already up and running around the world,” while ensuring it “works for those who aren’t able to use a smartphone.”
Used In Other Countries
Some other countries already operate working digital ID schemes, and the UK’s model draws on examples including Estonia, Denmark, Australia, and India:
– In Estonia, citizens use a mandatory digital ID for voting, healthcare, banking, and education, supported by strong encryption and decentralised systems.
– In Denmark, the MitID credential is used for logging into government and banking services, though it is not compulsory for all citizens.
– Australia’s national Digital ID system allows residents to access public services through apps like myGov, with varying levels of identity strength depending on the use case.
– In India, the Aadhaar system assigns a unique biometric ID number to over a billion people, primarily to streamline welfare and reduce fraud.
Ministers say the UK version will focus on privacy by design, with data stored locally on the user’s device and shared selectively.
Public Reaction And Political Response
The announcement has triggered a divided response across the political spectrum. Supporters argue it will modernise outdated systems and improve national security, while opponents say it risks overreach and mission creep.
Nearly three million people have signed a Parliamentary petition opposing the introduction of digital ID, with civil liberties groups warning of long-term consequences for personal freedom. For example, Big Brother Watch, a UK-based privacy campaign group, said: “Plans for a mandatory digital ID would make us all reliant on a digital pass to go about our daily lives, turning us into a checkpoint society that is wholly un-British.”
Also, Liberty, the human rights organisation, expressed concern, stating that the proposals raise “huge concerns about mass surveillance” and could increase barriers for vulnerable people trying to access work or support.
Opposition politicians have also criticised both the scale of the scheme and the lack of debate. For example, Conservative leader Kemi Badenoch has questioned the cost, saying the government should focus on better enforcement of existing laws. The SNP and Northern Ireland’s First Minister have also raised concerns about the implications for devolved powers and the rights of Irish citizens.
Employers And Service Providers
Businesses will need to adjust their onboarding and compliance processes once the new system is in place. The government says it will issue new guidance and offer integration options, but employers may face practical questions around adoption timelines, system compatibility, and staff training.
The Home Office is expected to update its employer toolkits and codes of practice during the rollout. Officials have said the changes will reduce red tape in the long term but acknowledge that transitional support may be needed.
There is no requirement yet for employers to take any action, but the digital ID scheme is likely to become the default verification method once legislation is passed. The Department for Science, Innovation and Technology has said it is working with industry groups and software providers to ensure compatibility and reduce disruption.
Security And Safeguards
In terms of security and privacy, according to the Cabinet Office, the digital ID will use “state-of-the-art encryption and user authentication to ensure data is held and accessed securely.” The information will remain under the control of the user, stored on their device and not in a centralised database.
The government says the system is designed to limit personal data sharing, with users able to present only the specific information required for a given situation. For example, an employer might only see proof of work eligibility without accessing unrelated personal details.
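The government has not published the GOV.UK Wallet’s data model, but the selective-disclosure idea it describes can be sketched in a few lines. Everything below (the field names, the `present` helper) is hypothetical, purely to illustrate how a holder could reveal only the claims a verifier requests:

```python
# Hypothetical sketch of selective disclosure — not the actual GOV.UK
# Wallet design. The credential lives on the user's device; a verifier
# (e.g. an employer) is shown only the fields it asks for.

FULL_CREDENTIAL = {                 # illustrative on-device credential
    "name": "A. Example",
    "date_of_birth": "1990-01-01",
    "nationality": "British",
    "right_to_work": True,
    "photo_ref": "local-file-ref",  # stored locally, not always shared
}

def present(credential: dict, requested_claims: list[str]) -> dict:
    """Return only the claims the verifier asked for."""
    return {k: credential[k] for k in requested_claims if k in credential}

# An employer checking work eligibility sees one field, not the full record.
employer_view = present(FULL_CREDENTIAL, ["right_to_work"])
print(employer_view)  # {'right_to_work': True}
```

Real systems (such as those based on the W3C Verifiable Credentials model) add cryptographic signatures so the verifier can trust the disclosed claims without seeing the rest, but the privacy principle is the same: disclosure is scoped to the question being asked.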
If a device is lost or compromised, the credential can be cancelled and reissued. The government says this offers better protection than paper-based documents, which are easier to forge or misuse.
Challenges And Unanswered Questions
Despite assurances around data security and voluntary usage beyond employment, it must be said that there remain some unresolved concerns about the scope and risks of the new digital ID system. For example:
– Inclusion will require careful planning and proper resourcing to ensure fair access for people without smartphones, stable housing, or standard documents.
– Privacy and data safety remain a concern, with campaigners warning that even encrypted systems are not immune to hacking or misuse.
– Cost and complexity are still unclear, as the government has not yet published a full estimate of programme costs or explained how the rollout will be phased.
– Public trust will be critical, especially given the level of opposition from civil liberties groups and the wider concerns already raised across Parliament.
What Does This Mean For Your Business?
If delivered effectively, it’s possible to see how a digital ID scheme could bring some long-term operational benefits to UK businesses, for example by reducing the administrative burden of right-to-work checks and making fraud harder to commit. A single, standardised credential could simplify hiring, especially in sectors where temporary or remote onboarding is common. Employers, however, will want clear timelines, technical support, and assurance that they won’t be exposed to new liabilities during the transition.
Public reaction to the scheme is likely to remain mixed. While those in work will be legally required to adopt the new system, others may choose to use it to access public services more easily. The success of the rollout will depend heavily on how well the government delivers inclusive access for people who do not have smartphones or consistent digital connectivity. Ministers have promised support and consultation, but this remains a key point of scrutiny.
However, it’s clear already that the wider political and civil liberties questions are unlikely to go away. Campaigners continue to warn of surveillance risks and creeping functionality, especially if the ID becomes more widely used in everyday life over time. The comparison with previous ID card proposals is unavoidable. Although this version is digital-only, decentralised, and limited in scope, it revives long-standing concerns about privacy and state control.
As with other large digital infrastructure programmes, the practical outcomes will depend on delivery, not just design. That includes building trust, preventing mission creep, and ensuring the system works reliably in the real world. For now, businesses and citizens alike will be watching closely as the consultation opens and the legislation begins its passage through Parliament.
Friendship Tech Goes From Novelty To Necessity
In this Tech Insight, we look at how a new generation of digital platforms and community initiatives is rising to meet the growing UK demand for meaningful friendship, tackling loneliness through apps, events, and innovative social design that prioritises connection over dating.
A Growing Demand (And Rising Cost)
Loneliness in the UK is no longer just a personal struggle; it is now a public health issue. For example, according to the government’s 2023–24 Community Life Survey, around 3.1 million people in England report feeling lonely “often or always.” The Office for National Statistics puts the broader figure closer to 1 in 4 adults when occasional loneliness is included.
It seems that young people are among the most affected. Adults aged 16–24 are consistently more likely to report high levels of loneliness than any other age group. The same is true for people living in deprived areas, those with disabilities, and individuals whose gender identity differs from their sex registered at birth.
In health terms, the consequences are serious. For example, prolonged loneliness has been linked to increased risk of heart disease, depression, cognitive decline, and even early death. In fact, former US Surgeon General Vivek Murthy called loneliness “a greater threat to health than smoking 15 cigarettes a day.”
Why now?
Several factors have created new urgency, and opportunity, for digital tools focused on friendship, such as:
– Post-pandemic social gaps. Covid disrupted many people’s social routines. Friendships thinned out, and some never recovered.
– Life transitions. Moving for work, leaving university, or going through a divorce can all create social disconnection.
– Dating app fatigue. Many younger users are burned out by ghosting, mismatched intentions, or pressure in romantic spaces.
– Desire for real-world connection. There’s growing appetite for platforms that lead to shared experiences, not just online chat.
– Social infrastructure decline. Pubs, churches, clubs, and libraries aren’t what they were. New tools are stepping in to fill the gap.
Friendship-First Apps Gaining Ground
This growing demand has meant that several new and emerging apps are now tackling platonic connection head-on, with different approaches to solving the problem. Examples of these include:
Clyx
London-based, launched in 2023, and aimed directly at building real-life friendships. It scrapes event listings (from Ticketmaster, TikTok and others), then lets users see who’s attending and suggests potential matches based on shared interests. The app recently raised $14 million and is gaining traction with young adults looking for local events and new faces.
Gofrendly
This has a growing UK user base, especially among women. It focuses on interest matching, local discovery, and verified profiles to encourage safe, meaningful friendships, not dating. It’s one of the more community-led platforms in this space.
Bumble BFF
A mode within the main Bumble app that lets users connect platonically. Benefits from scale and user familiarity, but some users still report confusion about intentions, as the app straddles both friendship and romance.
Peanut
Originally created for new mums, Peanut now supports women across life stages, including those navigating menopause or fertility. It blends interest-based communities with discussion boards, making it a more supportive and topic-led experience.
Patook
This app is strictly platonic, with rules that penalise flirtation. It’s aimed at people who want clarity about the nature of their connections.
Hey! VINA
Marketed as “Tinder for girlfriends,” this app is designed to help women find new female friends, often during life transitions or moves to new cities.
Friender
A more traditional matching app that connects users based on shared activities, from walking to photography.
Timeleft
Focuses on time-based group meetups, e.g. 7 strangers meeting at 7pm. Aims to reduce the awkwardness of one-to-one planning.
Wyzr Friends, Les Amís, Pie, Meet5, BFF
Other platforms with varying levels of UK presence. Many focus on events or interest groups, but success often depends on having enough users in each area.
vTime XR
A UK-developed app offering avatar-based conversations in shared 3D virtual spaces. This is an example of more experimental social design, perhaps appealing to more tech-savvy users.
Other Ways To Connect Digitally
Obviously, not all digital friendship-building happens on dedicated apps. For example, some people find new friends in forums, comment sections, local Facebook or WhatsApp groups. Others use platforms like Reddit, Discord or Meetup to join interest-based spaces that lead to real-world interaction.
These alternatives may lack the structure of a matching app, but often feel more organic, and have the advantage of existing community momentum.
UK Initiatives Tackling Loneliness
It should be noted that the UK also has a growing number of non-app projects that support social connection in different ways. A few that stand out include:
The Chatty Café Scheme
This encourages cafés to offer “chatter and natter” tables where anyone can sit down and talk. Over 600 UK venues have taken part. Low effort, high impact.
The Lonely Girls Club
A UK-based organisation helping women make friends. It runs walks, brunches and local meetups in cities including London, Manchester and Brighton. Over 145,000 members and growing.
Do It and local volunteering schemes
Volunteering is a tried-and-tested way to build friendships. Sites like Do It help people find causes to support locally, often leading to lasting connections.
NSPCC’s Building Connections
This pairs young people with trained volunteers via text chat to help tackle loneliness in under-19s. It’s a structured, supported approach that’s designed to build trust gradually.
Silver Line
A free helpline for older people who feel lonely or isolated. Offers conversation, advice, and links to services.
Social prescribing
GPs and healthcare providers can now refer patients to non-medical services, like walking clubs or creative groups, to help combat isolation. It’s a growing part of NHS practice.
The Gaps
Even with momentum, it seems that friendship apps and digital schemes face some difficult challenges. These include:
– User density. Many platforms only work well in big cities. In smaller towns, there just aren’t enough local users.
– Safety and moderation. Users want reassurance that people are who they say they are, and that harassment will be taken seriously.
– Drop-off after first contact. Even if a match is made, many connections fizzle out. Apps that don’t lead to real interaction risk compounding the loneliness they aim to solve.
– Unrealistic promises. No app can guarantee friendship. When expectations aren’t met, users may feel worse, not better.
– Privacy and data. Platforms must be careful not to over-collect personal data, or create social graphs that users wouldn’t want shared.
Where Businesses Fit In
Friendship isn’t just a personal issue: loneliness affects productivity, mental health and team cohesion. With this in mind, ways in which forward-thinking employers are starting to act include:
– Social clubs and interest groups. Walking, running, book clubs and other low-stakes gatherings can help staff connect.
– Peer matching. Pairing employees for coffee chats, especially across departments, builds new bonds.
– Sponsored meetups. Subsidised lunches, away days, and wellbeing events give employees time and space to talk outside of work tasks.
– Coworking support. Remote staff can be encouraged to work from shared hubs once a week, keeping them socially active.
– Onboarding support. Helping new joiners build a social network, especially those who’ve moved, reduces early drop-off and increases engagement.
– Leadership by example. When senior staff take part in informal social activities, others follow.
Helping people build friendships at work isn’t just a “nice to have”; it can also be very good for business. When people feel seen, included and socially healthy, they stay longer, perform better, and support others more effectively.
What Does This Mean For Your Business?
The rise of friendship apps and local initiatives reflects a growing effort to redesign how people find and build social connection in everyday life. It seems that these tools are no longer fringe or experimental but are now part of a wider ecosystem responding to a social problem that governments, charities, employers and individuals all recognise. For some users, these platforms offer a lifeline out of isolation. For others, they may simply provide a way to expand social circles, build support networks or feel more rooted in a new place.
That said, the picture is far from complete. For example, many of the tools gaining attention still rely on user density, active moderation and effective onboarding to work well. Without enough people nearby, or a clear route from match to meeting, the experience can quickly disappoint. Also, while digital platforms play an important role, they cannot replace the value of shared activity, physical presence or community familiarity that real-world connection offers.
That’s why non-app initiatives remain essential. For example, programmes like the Chatty Café Scheme or The Lonely Girls Club don’t just reduce friction, they change norms. They make conversation with strangers feel less unusual and give people permission to reach out without awkwardness. These models, grounded in familiarity and low-pressure interaction, can succeed in ways algorithms sometimes cannot.
For UK businesses, this raises new questions. Employers have become more focused on wellbeing in recent years, but friendship and social support often remain under-addressed. A lonely employee may not present as unwell, but over time the impact can be felt in engagement, collaboration and retention. That means employers are not just well-placed to help; they may be expected to. Practical steps like encouraging interest-based groups, supporting social meetups and offering flexible coworking options are not just soft benefits; they are investments in team cohesion and long-term workforce resilience.
Policymakers will also need to think carefully. While loneliness is now recognised at a national level, digital inclusion, funding for local groups and access to social infrastructure will all shape how far these efforts reach. That includes ensuring these tools and spaces are safe, accessible and open to all, regardless of postcode, age, ability or income.
Ultimately, friendship is hard to manufacture but easy to overlook. What this growing sector of tools, platforms and initiatives reveals is that the need is clear, the demand is growing, and the routes to connection must be many, not just digital, but human, local and shared.
App Pays You For Your Phone Calls
A new iPhone app that pays users for their call recordings, which are then sold to train AI systems, rose rapidly up the US App Store charts in late September. However, it then went offline after a security flaw exposed user data.
What Neon Is And Who Is Behind It?
Neon is a consumer app that pays users to record their phone calls and sells the anonymised data to artificial intelligence companies for use in training machine learning models. Marketed as a way to “cash in” on phone data, it positions itself as a fairer alternative to tech firms that profit from user data without compensation. The app is operated by Neon Mobile, Inc., whose New York-based founder, Alex Kiam, is a former data broker who previously helped sell training data to AI developers.
Only Just Launched
The app launched in the United States this month (September 2025). According to app analytics tracking, Neon entered the U.S. App Store charts on 18 September, ranking 476th in the Social Networking category. Remarkably, by 25 September it had climbed to the No. 2 spot in that category and broken into the top 10 overall. On its peak day, it was downloaded more than 75,000 times. No official launch has yet taken place in the UK.
How Does The App Work?
Neon allows users to place phone calls using its in-app dialler, which routes audio through its servers. Calls made to other Neon users are recorded on both sides, while calls to non-users are recorded on one side only. Transcripts and recordings are then anonymised, with personal details such as names and phone numbers removed, before being sold to third parties. Neon says these include AI firms building voice assistants, transcription systems, and speech recognition tools.
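Neon has not published details of its anonymisation pipeline, but the kind of transcript redaction it describes (stripping names and phone numbers before sale) can be sketched simply. The patterns and names below are illustrative; production systems typically use trained named-entity recognition models rather than regexes, which is partly why critics question how robust such anonymisation really is:

```python
# Illustrative sketch of transcript redaction — NOT Neon's actual method,
# which has not been disclosed. Phone numbers are matched with a simple
# pattern; speaker names must already be known to the system.
import re

PHONE_PATTERN = re.compile(r"\+?\d[\d\s\-()]{7,}\d")

def redact_transcript(text: str, known_names: list[str]) -> str:
    """Replace phone numbers and known names with placeholder tokens."""
    text = PHONE_PATTERN.sub("[PHONE]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

sample = "Hi, this is Jane Doe, call me back on 212-555-0147."
print(redact_transcript(sample, ["Jane Doe"]))
# Hi, this is [NAME], call me back on [PHONE].
```

The weakness of this approach is visible even in the sketch: anything the speaker says that identifies them indirectly (an address, an employer, a distinctive turn of phrase) passes straight through, which is why voice data is hard to truly anonymise.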
Users are then paid in cash for the calls, credited to a linked account. The earnings model promises up to $30 per day, with 30 cents per minute for calls to other Neon users and lower rates for calls to non-users. Referral bonuses are also offered. While consumer data is routinely collected by many apps, Neon stands out because it offers direct financial incentives for the collection of real human speech, a form of data that is more intimate and sensitive than most.
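The reported rates make the earnings ceiling easy to work out. The sketch below uses the figures above ($0.30/minute for Neon-to-Neon calls, $30/day cap); the non-user rate is an assumption for illustration, since Neon advertises only “lower rates” without a published figure:

```python
# Earnings arithmetic from the reported rates. NON_USER_RATE is an
# assumption — Neon has not published the rate for calls to non-users.
NEON_RATE = 0.30       # $ per minute, calls between Neon users (reported)
NON_USER_RATE = 0.15   # $ per minute, illustrative assumption
DAILY_CAP = 30.00      # $ per day (reported)

def daily_earnings(neon_minutes: float, non_user_minutes: float) -> float:
    """Gross pay for one day's calls, capped at the daily limit."""
    gross = neon_minutes * NEON_RATE + non_user_minutes * NON_USER_RATE
    return round(min(gross, DAILY_CAP), 2)

print(daily_earnings(60, 30))   # 22.5  (60*0.30 + 30*0.15 = $22.50)
print(daily_earnings(120, 0))   # 30.0  ($36.00 gross, hits the $30 cap)
```

Note that at the headline rate, hitting the daily cap requires 100 minutes of Neon-to-Neon calling every day, which helps explain why the app pushed referrals so hard: earnings scale with how many contacts are also on the platform.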
The Legal Language Behind The Data Deal
Neon’s terms of service give the company an unusually broad licence to use and resell recordings. This includes a worldwide, irrevocable, exclusive right to reproduce, host, modify, distribute, and create derivative works from user submissions. The licence is royalty-free, transferable, and allows for sublicensing through multiple tiers. Neon also claims full ownership of outputs created from user data, such as training models or audio derivatives. For most users, this means permanently giving up control over how their voice data may be reused, sold, or processed in future.
Why The App Took Off So Quickly
Neon’s rapid growth appears to have been driven by a combination of curiosity, novelty, and cash and referral incentives. Many users were drawn in by the promise of payment for something they do every day anyway: talking on the phone. The idea of monetising phone calls is also likely to have appealed particularly to users who are increasingly aware that their data is being collected and sold elsewhere.
Social media posts promoting referral links and earnings screenshots also seem to have really helped fuel viral growth. At the same time, widespread interest in AI tools has normalised the idea of systems that listen, learn, and improve through exposure to large datasets.
What Went Wrong?
Unfortunately, it seems that shortly after Neon became one of the most downloaded apps in the U.S., independent analysis revealed a serious security flaw. The app’s backend was found to be exposing not only user recordings and transcripts but also associated metadata. This included phone numbers, call durations, timestamps, and payment amounts. Audio files could be accessed via direct URLs without authentication, creating a significant privacy risk for anyone whose voice was captured.
Neon’s response was to take the servers offline temporarily. In an email to users, the company said it was “adding extra layers of security” to protect data. However, the email did not mention the specific details of the exposure or what user information had been compromised. The app itself remained listed in the App Store, but was no longer functional due to the server shutdown.
Legal And Ethical Concerns Around Recording
Neon’s approach raises a number of legal questions, particularly around consent and data protection. For example, in the United States, phone call recording laws differ by state. Some states require consent from all participants, while others allow one-party consent. By only recording one side of a call when the other participant is not a Neon user, the company appears to be trying to avoid falling foul of two-party consent laws. However, experts have questioned whether this distinction is sufficient, especially when metadata and transcript content may still reveal personal information about the other party.
In the UK, where GDPR rules apply, the bar for lawful processing of voice data is much higher. Call recordings here are considered personal data, and companies must have a lawful basis to record and process them. This could be consent, contractual necessity, legal obligation, or legitimate interest. In practice, UK organisations must be transparent, inform all parties at the start of a call, and apply strict safeguards around storage, retention, and third-party sharing. If the recording includes special category data, such as health or political views, the legal threshold is even higher.
Why The Terms May Create Future Risk
The app’s terms of service not only cover the use of call data for AI training, but also grant Neon the right to redistribute or modify that data without further input from the user. That includes the right to create and sell synthetic voice products based on recordings, or to allow third-party developers to embed user speech in new datasets. This means that, once the data is sold, users have no real practical way of tracking where it ends up, who uses it, or for what purpose. That includes the potential for misuse in deepfake technologies or other forms of AI-generated impersonation.
Trust Issue For Neon?
The exposure of call data so early in the app’s lifecycle has, unsurprisingly, caused a major trust issue. While the company has said it is fixing the security problem, it will now face much higher scrutiny from app platforms, data buyers, and regulators. If Neon wants to relaunch, it may need to undergo independent security audits, publish full transparency reports, and add explicit call recording notifications and consent features. Commercially, the setback may affect deals with AI firms if those companies decide to distance themselves from controversial datasets.
What About The AI Companies Using Voice Data?
For companies developing speech models, the incident highlights the importance of knowing exactly how training data has been sourced. For example, buyers of voice datasets will now need to ask more detailed questions about licensing, user consent, jurisdiction, and security. Any material flaw in the source of data can invalidate models downstream, especially if it leads to legal challenges or regulatory action. Data provenance and ethical sourcing are likely to become higher priorities in due diligence processes for commercial AI development.
Issues For Users
While Neon claims to anonymise data, voice recordings generally carry an inherent risk. For example, voice is increasingly used as a biometric identifier, and recorded speech can be used to train systems that replicate tone, mannerisms, and emotional expression. For individuals, this could lead to impersonation or fraud. For businesses, there is a separate concern. If employees use Neon to record work calls, they may be exposing client conversations, proprietary information, or regulated data without authorisation. This could result in GDPR breaches, disciplinary action, or reputational harm. Companies should review their mobile and communications policies and block unvetted recording apps from use on managed devices.
Regulators And App Platforms
The rise and fall of Neon within a matter of days shows just how quickly new data models can go from idea to mass adoption. Platforms such as the App Store are now likely to face more pressure to assess the privacy implications of data-for-cash apps before they are allowed to scale. Referral schemes that incentivise covert recording or encourage over-sharing are likely to be reviewed more closely. Regulators may also revisit guidance on audio data, especially where recordings are repackaged and resold to machine learning companies. Voice data governance, licensing standards, and ethical AI sourcing are likely to become more prominent areas of focus in the months ahead.
Evaluating Tools Like Neon
For organisations operating in the UK, the launch of Neon should serve as a prompt to tighten call recording policies and educate staff on data risk. If a similar service becomes available locally, any use would need a clear lawful basis, robust security controls, and transparency for all parties involved. This includes notifying people before recording begins, limiting the types of calls that can be recorded, and putting strict controls on where that data is sent. In regulated industries, the use of external apps to record voice data could also breach sector-specific rules or codes of conduct. A risk assessment and DPIA would be required in most business contexts.
What Does This Mean For Your Business?
The Neon episode shows just how fast the appetite for AI training data is reshaping the boundaries of consumer tech. In theory, Neon offered a way for users to reclaim some value from a data economy that usually runs without them. In practice, it seems to have revealed how fragile the balance is between innovation and responsibility. When that data includes private conversations, even anonymised, the margin for error is narrow. Voice is not like search history or location data because it’s personal, expressive, and hard to replace if misused.
What happened with Neon also appears to show how little control users have once they opt in. For example, the terms of service handed the company almost total freedom to store, repackage, and resell recordings and outputs, with no practical ability for users to track where their voice ends up. Even if users are comfortable making that trade, the people they speak to may not be. From an ethical standpoint, recording conversations for profit, especially with people unaware they are being recorded, raises serious questions about consent and accountability.
For UK businesses, the risks are not just theoretical. If employees start using similar apps to generate income, they could unintentionally upload sensitive or regulated information to unknown third parties. That creates exposure under GDPR, commercial contracts, and sector-specific codes, and may breach client trust. Businesses will need to move quickly to block such apps on company devices and reinforce clear internal rules around recording, call handling, and use of AI data services.
For AI companies, the lesson is equally clear. The hunger for diverse, real-world training data must be matched with rigorous scrutiny of how that data is sourced. Datasets obtained through poorly controlled consumer schemes are more likely to carry risk, not only in terms of legality but also model quality and future auditability. Voice data is especially sensitive, and provenance will now need to be a standard consideration in every procurement and development process.
More broadly, Neon’s brief rise exposes the gap between platform rules, regulatory oversight, and the speed of public adoption. App marketplaces now face growing pressure to vet data-collection models more stringently, particularly those that monetise content recorded from other people. It also raises a wider challenge: how to build the AI systems people want without normalising tools that trade in privacy. As interest in AI grows, the burden of building that future responsibly will only increase for every stakeholder involved.
Ad‑Free Facebook & Insta … For £3.99 Monthly
Meta will let UK users pay a monthly fee to use Facebook and Instagram without adverts, introducing a lower‑priced “consent or pay” model in response to UK data protection guidance.
Users Offered A Choice
Meta has confirmed that UK users will soon be offered a choice, i.e., continue using Facebook and Instagram for free with personalised ads, or pay a monthly subscription to remove them. The subscription will cost £2.99 per month when accessed on the web, or £3.99 per month on iOS and Android. These rates will apply to a user’s first Meta account. If additional Facebook or Instagram accounts are linked via Meta’s Accounts Centre, extra accounts can be added to the subscription for £2 a month (web) or £3 a month (mobile). A dismissible notification will begin appearing to users in the coming weeks, giving adults aged 18 and over time to review and decide.
When?
Meta has not provided an exact date for when the ad-free subscription will go live in the UK, but it has stated that it will begin rolling out “in the coming weeks” as of its official announcement on 26 September 2025.
How The Subscription Model Will Work
Meta (Facebook) says subscribing will essentially remove all ads from Facebook and Instagram feeds, Stories, Reels, and other surfaces. Meta says that subscriber data will no longer be used to deliver personalised advertising and the company has also stated that it is charging a higher price for mobile subscriptions due to Apple and Google’s in‑app transaction fees.
The subscription applies across all accounts linked to a user’s Meta Accounts Centre. This means that users managing both a personal and a business account, or other multiple accounts, can pay one primary fee and then add extra accounts at a reduced monthly rate.
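The bundled pricing described above is simple enough to capture in a short sketch. The function below is purely illustrative (its name and structure are not any Meta API); it just applies the quoted figures of £2.99 web / £3.99 mobile for a first account, plus £2 / £3 for each additional linked account.

```python
# Illustrative only: totals Meta's quoted UK ad-free subscription pricing.
# Figures come from the announcement; the function itself is a hypothetical
# helper, not part of any Meta product or API.

def monthly_cost(platform: str, extra_accounts: int = 0) -> float:
    """Return the total monthly price in GBP for one primary account
    plus any additional accounts linked via Accounts Centre."""
    if platform == "web":
        base, per_extra = 2.99, 2.00
    elif platform in ("ios", "android"):
        base, per_extra = 3.99, 3.00
    else:
        raise ValueError(f"unknown platform: {platform}")
    return round(base + per_extra * extra_accounts, 2)

print(monthly_cost("web"))                     # first account, web
print(monthly_cost("ios", extra_accounts=2))   # mobile, two extra accounts
```

On these numbers, a business running a personal profile plus two extra linked accounts on mobile would pay £9.99 a month, versus £6.99 on the web.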
People who choose not to subscribe will continue to see ads, but will retain access to existing tools such as Ad Preferences, activity-based targeting controls, and the “Why am I seeing this ad?” explainer.
Why Meta Is Making This Change
It seems that the subscription model is being launched in direct response to regulatory pressure in the UK. For example, Meta said the approach was developed following “extensive engagement” with the Information Commissioner’s Office (ICO), which has recently clarified that online personalised advertising should be treated as a form of direct marketing. Under UK data protection law, users have the right to object to their data being used in this way.
In a high-profile settlement earlier this year, Meta agreed to stop using the personal data of human rights campaigner Tanya O’Carroll for targeted advertising. The ICO publicly supported O’Carroll’s position and urged Meta to offer clearer choices to users over how their data is used. Meta now says the subscription offers a fair and transparent way for people to choose whether to consent to personalised advertising or pay to avoid it entirely.
The UK Regulatory Context
The ICO’s interpretation of data rights has shaped the new model. For example, its March 2025 statement emphasised that organisations must give people a way to opt out of their personal data being used for direct marketing, including targeted online ads. Following its settlement with Meta, the ICO confirmed that the company had significantly reduced the originally proposed subscription price and welcomed the introduction of the new model as an example of compliance with UK data protection obligations.
It should also be noted that the UK pricing tier is substantially lower than the EU equivalent, where Meta had introduced a similar subscription model in 2023 priced at around €9.99 per month. That model attracted regulatory criticism, fines, and calls for more privacy-friendly alternatives.
The European Backdrop
In April 2024, the European Data Protection Board published an opinion stating that “consent or pay” models must not pressure people into accepting data use. In its view, consent must be freely given and fully informed, and platforms like Facebook must offer real alternatives rather than a binary choice. Regulators have argued that due to Meta’s market dominance, users may feel they have no realistic option but to accept personal data tracking or start paying to keep using services that are widely embedded in social and professional life.
In April 2025, Meta was fined €200 million by the European Commission under the Digital Markets Act for failing to provide a compliant version of its subscription model across the EU. Meta is appealing the decision but has framed the UK rollout as an example of how “pro-innovation” regulatory engagement can lead to workable outcomes.
What It Means For Everyday Users
For individual users in the UK, the subscription appears to create a direct trade-off between privacy and cost. For example, those who do not want to see ads can now remove them for a relatively low monthly fee, particularly when compared to the higher pricing seen in Europe. The pricing structure may also appeal to users who manage multiple accounts, as they can cover all of them under one bundled subscription.
People who continue using the free tier will still see ads, but Meta says they will remain in control of how their data is used to shape ad experiences. Existing privacy tools will remain available, including options to turn off activity-based ad targeting and to manage interests and advertiser interactions.
And For Business Users?
UK business users who rely on Facebook and Instagram for customer engagement, lead generation, or ecommerce should not see significant disruption. The free tier remains intact, and most users are expected to continue using the platform without subscribing, at least initially.
However, business users who also use Facebook and Instagram for personal reasons may choose to pay for the ad-free experience. This could help reduce distraction, but it also raises questions for businesses managing multiple accounts. Meta’s Accounts Centre lets users link multiple profiles, but additional accounts incur a fee, potentially adding monthly costs for businesses using more than one profile across different functions.
Advertisers
The launch of the subscription model essentially introduces a new form of audience segmentation. People who pay for the ad-free experience will not be shown any ads and will also be excluded from data processing for advertising purposes. This means they will not be available for targeting, retargeting, or inclusion in lookalike audience models.
In practical terms, this could result in slightly smaller campaign reach, reduced effectiveness of retargeting strategies, and less data for ad performance optimisation. However, the actual impact will depend on how many people choose to subscribe. Meta has positioned the new subscription as a supplement rather than a replacement for its ad business, which continues to power most of its revenue and remains core to its UK economic contribution.
Competitors
The move follows broader industry trends, with other major platforms already offering ad-free tiers. For example, YouTube Premium removes all adverts across videos and music and charges more than Meta’s proposed rate. X (formerly Twitter) offers a Premium Plus plan to remove almost all ads, and Snapchat has experimented with removing ads from key surfaces in its Platinum plan.
Meta’s UK pricing is among the lowest, undercutting most other ad-free subscription options. This may give the company a competitive edge with privacy-conscious users and could create pressure on rivals to adjust pricing or introduce similar models.
A Compliance Measure … And An Opportunity
Meta has positioned the change as a regulatory compliance measure, but it also presents an opportunity to test new revenue streams and reduce legal exposure. By charging a relatively low price and tying it to UK-specific guidance, the company is attempting to avoid further fines and litigation while learning how users respond to a consent-based subscription model.
The pricing structure reflects wider industry dynamics, including the growing cost of mobile transactions and the limitations placed on data processing by new data laws. Meta has also used the announcement to promote the economic value of its advertising tools, saying its platforms supported over 357,000 jobs and £65 billion in UK economic activity in 2024 alone.
Others Who Will Be Watching Closely
Those likely to be most affected by or involved in the rollout include regulators, privacy campaigners, advertisers, and everyday users of the platforms. The ICO is expected to monitor how the subscription model works in practice and whether it meets legal standards for free and informed consent. Privacy groups may also be looking for evidence that Meta genuinely stops using subscriber data for advertising. Advertisers will be watching for any impact on campaign performance, particularly around reach and targeting. Rival platforms in the UK and beyond may also be studying how effectively Meta manages the balance between regulation, user experience, and revenue.
Concerns
Privacy experts have already raised some concerns that the model places a price tag on privacy, forcing people to pay to prevent their data being used for tracking and targeting. Critics argue that data protection rights should not depend on a person’s ability to pay. The ICO’s current position is that the subscription represents a valid approach to consent, but some legal observers suggest further scrutiny may follow if complaints emerge about how the choice is presented or how data is processed.
Campaigners also point out that a paid subscription will not necessarily solve deeper issues with surveillance advertising, including the scale of data collection and the risks it poses to vulnerable users. Others have noted that people in low-income groups, young users, and those with limited digital literacy may be less able to make informed decisions or afford the subscription, reinforcing digital inequality.
What Does This Mean For Your Business?
Meta’s new ad-free subscription introduces a clearer line between paid privacy and free access, but it also raises significant questions about fairness, regulation, and business impact. For UK businesses, the ability to continue reaching a large audience on Facebook and Instagram remains largely unchanged in the short term. However, if a growing number of users pay to avoid ads, the addressable audience for paid campaigns may begin to shrink, thereby making it harder for small firms to rely on low-cost, highly targeted advertising. Meta’s economic contribution to UK advertising is significant, but maintaining that value depends on how many users continue opting into the ad-supported model.
The low UK price point is likely to encourage adoption compared to similar schemes in the EU, and it gives Meta a way to meet regulatory demands without heavily disrupting its business model. It also gives other tech firms a benchmark for what regulators might accept in similar contexts. For regulators and privacy advocates, the coming months will be a test of whether offering a paid alternative is enough to uphold the principle of free and informed consent.
For users, the offer may feel fairer than being given no choice at all, but the framing still forces a trade-off that not everyone will find acceptable. For competitors, the low pricing could trigger reassessments of their own ad-free offerings. For campaigners, the subscription will not address wider concerns about surveillance-based business models, and for Meta, the rollout could either become a blueprint for future compliance or a flashpoint if uptake leads to new scrutiny.
Company Check : Claude In Copilot & Google Data Commons
Microsoft has confirmed it is adding Anthropic’s Claude models to its Copilot AI assistant, giving enterprise users a new option alongside OpenAI for handling complex tasks in Microsoft 365.
Microsoft Expands Model Choice In Copilot
Microsoft has begun rolling out support for Claude Sonnet 4 and Claude Opus 4.1, two of Anthropic’s large language models, within Copilot features in Word, Excel, Outlook and other Microsoft 365 apps. The update applies to both the Copilot “Researcher” agent, used for generating reports and conducting deep analysis, and Copilot Studio, the tool businesses use to build their own AI assistants.
The move significantly expands Microsoft’s model options. Until now, Copilot was powered primarily by OpenAI’s models, such as GPT‑4 and GPT‑4 Turbo, which run on Microsoft’s Azure cloud. With the addition of Claude, Microsoft is now allowing businesses to choose which AI model they want to power specific tasks, with the aim of offering more flexibility and improved performance in different enterprise contexts.
Researcher users can now toggle between OpenAI and Anthropic models once enabled by an administrator. Claude Opus 4.1 is geared towards deep reasoning, coding and multi‑step problem solving, while Claude Sonnet 4 is optimised for content generation, large‑scale data tasks and routine enterprise queries.
Why Microsoft Is Doing This Now
Microsoft has said the goal is to give customers access to “the best AI innovation from across the industry” and to tailor Copilot more closely to different work needs. However, the timing also reflects a broader shift in Microsoft’s AI strategy.
While Microsoft remains OpenAI’s largest financial backer and primary cloud host, the company is actively reducing its dependence on a single partner. It is building its own in‑house model, MAI‑1, and has recently confirmed plans to integrate AI models from other firms such as Meta, xAI, and DeepSeek. Anthropic’s Claude is the first of these to be made available within Microsoft 365 Copilot.
This change also follows a wave of high‑value partnerships between OpenAI and other tech companies. For example, in recent weeks, OpenAI has secured billions in new infrastructure support from Nvidia, Oracle and Broadcom, suggesting a broader distribution of influence across the AI landscape. Microsoft’s latest move helps hedge against any future change in the balance of that relationship.
Microsoft And Its Customers
The introduction of Claude into Copilot is being made available first to commercial users who are enrolled in Microsoft’s Frontier programme, i.e. the early access rollout for experimental Copilot features. Admins must opt in and approve access through the Microsoft 365 admin centre before staff can begin using Anthropic’s models.
Importantly, the Claude models will not run on Microsoft infrastructure. Anthropic’s AI systems are currently hosted on Amazon Web Services (AWS), meaning that any data processed by Claude will be handled outside Microsoft’s own cloud. Microsoft has made clear that this data flow is subject to Anthropic’s terms and conditions.
This external hosting has raised concerns in some quarters, particularly for organisations operating under strict compliance or data residency requirements. Microsoft has responded by emphasising the opt‑in nature of the integration and the ability for administrators to fully control which models are available to users.
For Microsoft, the move appears to strengthen its claim to be a platform‑agnostic AI provider. By integrating Anthropic alongside OpenAI and offering seamless switching between models in both Researcher and Copilot Studio, Microsoft positions itself as a central point of access for enterprise AI, regardless of where the models originate.
Business Relevance And Industry Impact
The change is likely to be welcomed by business users seeking more powerful or specialised models for specific workflows. It may also create new pressure on OpenAI to continue improving performance and pricing for enterprise use.
From a competitive standpoint, Microsoft’s ability to offer Claude inside its productivity suite puts further distance between Copilot and rival AI products from Google Workspace and Apple’s AI integrations. It also allows Microsoft to keep pace with fast‑moving developments in multi‑model orchestration, the ability to run different tasks through different models depending on context or output goals.
For Microsoft’s competitors in the cloud and productivity space, the integration also highlights a growing interoperability challenge. Anthropic is mainly backed by Amazon, and its models run on both AWS and Google Cloud. Microsoft’s decision to incorporate those models into 365 tools represents a break from traditional cloud loyalty and suggests that, in the era of generative AI, usability and capability may matter more than where the models are hosted.
The Google Data Commons Update
While Microsoft is focusing on model integration, Google has taken a different step by making structured real‑world data easier for AI developers to use. This month, it launched the Data Commons Model Context Protocol (MCP) Server, a new tool that allows developers and AI agents to access public datasets using plain natural language.
The MCP Server acts as a bridge between AI systems and the vast Data Commons database, which includes datasets from governments, international organisations, and local authorities. This means that developers can now build agents that access census data, climate statistics or economic indicators simply by asking for them in natural language, without needing to write complex code or API queries.
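The pattern described above can be sketched with a toy stand-in. Everything below is invented for illustration: the real MCP Server queries Google’s Data Commons knowledge graph over the network, not a local dictionary, and its matching is handled by a language model rather than keyword checks. The sketch only shows the shape of the idea: route a plain-language question to a structured dataset, and refuse rather than guess when no grounded answer exists.

```python
# Illustrative sketch only. A toy stand-in for natural-language lookup of
# structured public data, in the spirit of the Data Commons MCP Server.
# The dataset values and matching logic here are invented for the example.

# A tiny "structured public dataset": statistic -> place -> value.
DATASET = {
    "population": {"united kingdom": 67_000_000, "france": 68_000_000},
    "co2 emissions (mt)": {"united kingdom": 320, "france": 290},
}

def answer(query: str):
    """Naive natural-language routing: match a known statistic keyword and
    a place name in the query, then return the grounded value."""
    q = query.lower()
    for stat, places in DATASET.items():
        if stat.split(" ")[0] in q:  # e.g. "population", "co2"
            for place, value in places.items():
                if place in q:
                    return {"statistic": stat, "place": place, "value": value}
    return None  # no grounded answer: better to refuse than hallucinate

print(answer("What is the population of the United Kingdom?"))
```

The key design point, mirrored from Google’s stated aim, is the final `return None`: an agent grounded in a verifiable dataset can decline to answer instead of guessing, which is exactly the hallucination problem the MCP Server is meant to address.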
The launch aims to address two long-standing challenges in AI: hallucination and poor data quality. For example, many generative models are trained on unverified web data, which makes them prone to guessing when they lack information. Google’s approach should, therefore, help ground AI responses in verifiable, structured public datasets, improving both reliability and relevance.
ONE Data Agent
One of the first use cases is the ONE Data Agent, created in partnership with the ONE Campaign to support development goals in Africa. The agent uses the MCP Server to surface health and economic data for use in policy and advocacy work. However, Google has confirmed that the server is open to all developers, and has released tools and sample code to help others build similar agents using any large language model.
For Google, this expands its role in the AI ecosystem beyond model development and into data infrastructure. For developers, it lowers the technical barrier to creating trustworthy data‑driven AI agents and opens up new opportunities in sectors such as education, healthcare, environmental analysis and finance.
What Does This Mean For Your Business?
The addition of Claude to Microsoft 365 Copilot marks a clear move towards greater AI optionality, but it also introduces new complexities for both Microsoft and its enterprise customers. While the ability to switch between models gives businesses more control and the potential for improved task performance, it also means IT teams must assess where and how their data is being processed, especially when it leaves the Microsoft cloud. For some UK businesses operating in regulated sectors, this could raise concerns around data governance, third-party hosting, and contractual clarity. Admin-level opt-in gives organisations some control, but the responsibility for managing risk now falls more squarely on IT decision-makers.
For Microsoft, this is both a technical and strategic milestone. The company is reinforcing its Copilot brand as a neutral gateway to the best models available, regardless of origin. It sends a signal that AI delivery will be less about vendor exclusivity and more about task-specific effectiveness. For competitors, the integration of Anthropic models into Microsoft 365 may accelerate demand for open, composable AI stacks that can handle model switching, multi-agent coordination, and fine-grained prompt routing, especially in workplace applications.
Google’s decision to open up real-world data through the MCP Server supports a different but equally important part of the AI ecosystem. For example, many UK developers struggle to ground their AI agents in reliable facts without investing heavily in custom pipelines. The MCP Server simplifies this process, making structured public data directly accessible in plain language. If adopted widely, it could help reduce hallucinations and increase the usefulness of AI across sectors such as policy, healthcare, sustainability, and finance.
Together, these announcements suggest that the next phase of AI will be shaped not only by which models are most powerful, but also by who can offer the most useful data, the clearest integration paths, and the most practical tools for real-world business use. For UK organisations already exploring generative AI, both moves offer new possibilities, but also demand closer scrutiny of how choices around models and data infrastructure will affect operational control, user trust, and long-term value.
Security Stop-Press : Insider Threats : BBC Reporter Shares Story
Cybercriminals are increasingly targeting employees as a way into company systems, with insider threats now posing a serious and growing risk.
In one recent case, a BBC reporter revealed how a ransomware gang tried to recruit him through a messaging app, offering a share of a ransom if he provided access to BBC systems. The attempt escalated into an MFA bombing attack on his phone, a method used to pressure targets into approving login requests.
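One common defensive idea against the push-fatigue tactic described above is rate-based detection: flag an account when too many MFA pushes are requested in a short window. The sketch below is a minimal, hypothetical illustration of that logic; real identity platforms implement this server-side with richer signals (device, location, number matching), and the class name and thresholds here are assumptions, not any vendor’s API.

```python
# Hypothetical sketch: flagging "MFA bombing" by counting push requests
# for one user inside a sliding time window. Thresholds are illustrative.
from collections import deque


class PushFatigueDetector:
    def __init__(self, max_pushes: int = 5, window_seconds: float = 60.0):
        self.max_pushes = max_pushes
        self.window = window_seconds
        self.events: dict[str, deque] = {}

    def record_push(self, user: str, timestamp: float) -> bool:
        """Record an MFA push for `user`; return True when the number of
        pushes inside the window exceeds the threshold (likely bombing)."""
        q = self.events.setdefault(user, deque())
        q.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_pushes


detector = PushFatigueDetector()
for t in range(8):  # eight pushes in eight seconds
    suspicious = detector.record_push("target-user", float(t))
print(suspicious)  # the burst exceeds the threshold and is flagged
```

A real deployment would pair detection like this with phishing-resistant MFA (e.g. number matching or hardware keys), which removes the simple "tap approve" action the attack relies on.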
This form of insider targeting is becoming more common. For example, the UK’s Information Commissioner’s Office recently found that over half of insider cyber attacks in schools were carried out by students, often using guessed or stolen credentials. In the private sector, insiders have caused major breaches, including a former FinWise employee who accessed data on nearly 700,000 customers after leaving the firm.
Security researchers warn that ransomware groups now actively seek staff willing to trade access for money, rather than relying solely on technical exploits.
To reduce the risk, businesses are advised to enforce strong offboarding, monitor user behaviour, implement phishing-resistant MFA, and raise staff awareness about insider recruitment tactics.