Featured Article : Prove You’re Human – Have Your Eyes Scanned
OpenAI CEO Sam Altman’s identity startup ‘World’ has begun its rollout in the United States, introducing 20,000 biometric devices known as Orbs that scan users’ irises to confirm they are human.
From Worldcoin to World
World began life in 2019 as Worldcoin, a startup co-founded by Sam Altman and Alex Blania through their company Tools for Humanity. Its original mission was to create a global digital identity system, one that could reliably distinguish real people from bots, fake accounts, or AI-generated personas online.
The concept gained early momentum, and by 2023 World had begun international trials and attracted more than 26 million sign-ups across Europe, South America, and the Asia-Pacific region. Around 12 million of those have been fully verified through iris scanning. The platform has since rebranded from Worldcoin to World, signalling a broader ambition beyond cryptocurrency: to build infrastructure for a future internet rooted in proof of personhood.
What Is the Orb and What Does It Actually Do?
At the core of World’s technology is the Orb, a polished metallic device about the size of a bowling ball. When a person stands in front of it, the Orb scans their face and iris, producing a one-of-a-kind identifier. This identifier, or IrisCode, is then tied to a World ID, which is a kind of digital passport that can be used to log into platforms, verify identity, and prove personhood online.
Biometric Data Processed Locally For Privacy
World says the Orb never stores images of the eyes or face. Instead, biometric data is processed locally to create an encrypted code, which is considered a privacy-first approach because it limits the exposure of sensitive information and reduces the risk of mass data breaches. This code can then be used repeatedly without re-scanning and without giving third parties access to the original biometric material.
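World has not published the details of this pipeline, so the sketch below is only a minimal illustration of the general pattern described above: deriving a stable, non-reversible code from biometric input so that only the code, never the image, needs to be kept. The `extract_iris_features` helper, the per-device secret, and the use of a keyed hash are all assumptions made for the sake of the example, not World's actual method.

```python
import hashlib
import hmac

def extract_iris_features(iris_image: bytes) -> bytes:
    """Hypothetical stand-in for the feature-extraction step.

    A real system would run a computer-vision model over the capture and
    quantise the result into a stable template; here we simply return the
    raw bytes to keep the sketch self-contained.
    """
    return iris_image

def derive_identifier(iris_image: bytes, device_secret: bytes) -> str:
    """Derive a fixed-length code from biometric input without storing the image.

    Only the derived code leaves this function; the image and intermediate
    template stay in local memory and can be discarded immediately.
    """
    template = extract_iris_features(iris_image)
    # Keyed hash: the code is stable for the same input and secret, but the
    # original image cannot be reconstructed from it.
    return hmac.new(device_secret, template, hashlib.sha256).hexdigest()

if __name__ == "__main__":
    fake_scan = b"example-iris-capture"  # placeholder for a real capture
    code = derive_identifier(fake_scan, b"per-device-secret")
    print(code)  # the derived code, not the image, is what would be stored or shared
```

In practice, biometric captures vary slightly between scans, so a real pipeline needs error-tolerant encoding before any hashing step; the sketch ignores that to keep the core idea visible.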
Which Platforms Can It Be Used With?
World IDs are already compatible with popular platforms such as Minecraft, Reddit, Discord, Shopify, and Telegram, thanks to an open API that allows developers to integrate identity verification into their services. This means users can log in, prove they are human, and access features without relying on traditional sign-in methods. Looking ahead, the system could also be used across a much wider range of applications, from online voting and content moderation to digital finance and secure access to AI tools. Users can also access the World App, a decentralised digital wallet that supports peer-to-peer payments, savings, and cryptocurrency transactions.
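World publishes its own developer tooling for this, but the exact endpoints are not covered here, so the snippet below is a purely hypothetical illustration of how a platform might gate an action behind a proof-of-personhood check against a verification service. The URL, payload fields, and response shape are all invented for the example and are not World's real API.

```python
import requests

VERIFY_URL = "https://id.example.com/api/v1/verify"  # hypothetical endpoint

def is_verified_human(proof: dict, action: str) -> bool:
    """Ask a (hypothetical) identity service whether this proof is valid.

    The proof would be produced client-side by the user's identity app;
    the platform only ever sees the proof, never any biometric data.
    """
    response = requests.post(
        VERIFY_URL,
        json={"proof": proof, "action": action},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("verified", False)

# Usage sketch: only allow an action (e.g. posting a comment) for verified humans.
# if is_verified_human(proof_from_client, action="post-comment"):
#     allow_post()
```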
Why Launch Now and Why in the U.S.?
It seems that the timing of World’s U.S. launch is no accident. As Altman’s team explained during the company’s “At Last” event in San Francisco, there is growing urgency around digital trust and online authenticity. For example, the rise of generative AI, deepfakes, and synthetic media has made it increasingly difficult to know who (or what) is behind a digital profile.
Tools for Humanity believes this is a critical moment for establishing global standards of identity verification, particularly in areas like online finance, dating, and governance. With President Trump signalling support for a pro-crypto policy environment and plans for a national “crypto strategic reserve,” the U.S. now appears more welcoming to digital identity innovation than in recent years.
Rollout In Six Cities
Six flagship cities have been chosen for the U.S. rollout of World: Austin, Atlanta, Los Angeles, Miami, Nashville, and San Francisco. Physical retail locations have been set up in each of these cities, with Orb devices also appearing in Razer gaming stores, allowing people to get scanned and onboarded in person.
Big-Name Partners and Real-World Use Cases
The launch is also backed by two major partnerships. Visa says it will introduce a new World-branded debit card later this year and that the card will only be available to users who have verified their identity through an Orb scan. Also, Match Group, the owner of Tinder, is beginning a pilot programme in Japan using World IDs for age verification and fraud protection.
These collaborations highlight the system’s potential across sectors. In online dating, for example, World could help eliminate romance scams and catfishing by ensuring users are real and are the age they claim to be. In payments and finance, it offers a new route to identity-backed, bot-resistant transactions without relying on government ID systems.
World also claims the technology can improve the fairness of AI systems. For example, by restricting access to services or votes to verified humans, platforms could reduce manipulation by automated accounts, vote stuffing, or fraud in decentralised applications.
Challenges and Criticisms
The most significant concern surrounding World is the creation of a permanent digital ID based on something that cannot be changed, i.e. the human iris. Even if the data is encrypted and stored in a decentralised format, the system still links a person’s physical identity to a global profile that may one day be used across multiple platforms and jurisdictions.
This has led privacy experts to argue that there are long-term risks from this kind of system. For example, once a biometric identifier like an iris is linked to a digital identity system, it cannot simply be revoked or replaced in the way a password can, and there is no way to reset an iris if trust in the system breaks down.
Suspended
It should be noted here that several governments have already taken action against World over concerns about how biometric data is collected, processed, and safeguarded. For example, Kenya suspended the project in 2023 following a criminal investigation into alleged data misuse and a lack of transparency. Hong Kong authorities declared the biometric scans excessive and unnecessary, ordering World to cease operations. Spain and Argentina also raised concerns, with the latter issuing fines over violations of local data laws and inadequate user consent.
Changes Made
In response, World has since made changes to its technical model. According to the company, no actual images of users’ irises are stored. Instead, the Orb generates a mathematical code called an IrisCode, which is encrypted and divided among several independent institutions, including blockchain platforms and financial partners. The aim is to ensure that no single party has access to the full dataset. As Adrian Ludwig, chief information security officer at Tools for Humanity, explains: “We don’t have a single place that holds all the sensitive data,” adding that “You’d have to compromise multiple companies and institutions simultaneously to reconstruct it.”
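World does not disclose the exact scheme, but splitting a secret so that no single holder can reconstruct it is a standard cryptographic idea. The sketch below is a minimal XOR-based, all-shares-required secret split, included purely to make the concept concrete; it is not World's actual protocol, and the "three institutions" framing is just an example.

```python
import secrets

def split_secret(secret: bytes, holders: int) -> list[bytes]:
    """Split `secret` into `holders` shares; every share is needed to rebuild it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(holders - 1)]
    final = bytes(secret)
    for share in shares:
        # XOR the secret with each random share; the remainder becomes the last share.
        final = bytes(a ^ b for a, b in zip(final, share))
    return shares + [final]

def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original secret."""
    result = bytes(len(shares[0]))
    for share in shares:
        result = bytes(a ^ b for a, b in zip(result, share))
    return result

if __name__ == "__main__":
    iris_code = b"example-iris-code-bytes"
    pieces = split_secret(iris_code, holders=3)   # e.g. three independent institutions
    assert recombine(pieces) == iris_code          # any smaller subset reveals nothing useful
    print("secret recovered only when all holders cooperate")
```

Real deployments typically prefer threshold schemes (such as Shamir's secret sharing) so the data survives if one holder becomes unavailable; the all-or-nothing version above simply makes the "no single party holds the full code" idea tangible.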
Despite this reassurance, it seems that many critics remain sceptical and questions continue to surface around informed consent, the possibility of misuse, and the long-term consequences of tying biometric identity to digital infrastructure. Even if the current implementation is secure, some argue it sets a precedent that could be difficult to control in future.
The project has become a focal point in ongoing debates about how society should approach identity in the age of artificial intelligence. While some view it as a timely and practical response to the growing challenge of online impersonation, others see it as the early foundation of a surveillance system that, once widely adopted, may be difficult to disentangle from daily digital life.
The Implications
The U.S. launch of World marks another notable move by Sam Altman into a space where AI, privacy, and human identity increasingly collide. While OpenAI is pushing the boundaries of what machines can do, World is focused on preserving what makes humans unique, offering tools that could help establish trust in increasingly automated environments.
For rival tech firms and startups exploring digital identity, decentralised networks, or Web3 platforms, World’s growing user base and financial backing present a challenge. With over $140 million in funding and early traction across multiple continents, it already appears to be a major player in the race to define how digital identity should work in the post-password, post-avatar age.
Businesses across sectors are already paying attention. For example, World’s API integrations suggest clear use cases in ecommerce, fintech, gaming, and social media. By tying access to verified personhood rather than traditional credentials, it offers a new way to protect against bots, fake accounts, and synthetic fraud.
As for individuals, some may see World ID as a valuable key to much-needed safer and smoother digital experiences. Others, however, may (understandably) be cautious about linking their irises to any system, however privacy-focused or decentralised it claims to be.
With plans to scale to a billion users, World is positioning itself as a foundational layer for future internet infrastructure. Whether users, regulators and businesses are ready to follow remains an open question.
What Does This Mean For Your Business?
With World, Sam Altman is essentially trying to redefine how digital identity works in an internet shaped by AI. The rollout is happening now, and it seems that the implications are already starting to take shape.
What sets World apart is its aim to make verified personhood a core feature of online infrastructure. If adopted at scale, it could change how people prove they are human when accessing websites, making payments, or interacting online. For businesses, this appears to offer a clear opportunity to reduce fraud, limit fake accounts, and create more secure digital environments, particularly in ecommerce, fintech, gaming, and media.
UK businesses are likely to follow this closely. With mounting concerns over AI-generated content, phishing attacks, and online impersonation, there is growing demand for robust yet user-friendly identity systems. A tool like World, therefore, if proven secure and accepted by regulators, could give companies a new way to protect platforms and build trust with users.
For governments, the system raises fundamental questions about privacy, oversight, and biometric data rights. For individuals, it could offer convenience and security, yet it also introduces new risks around surveillance and long-term data use.
World is aiming to reach a billion users, and whether it succeeds will depend not just upon the technology, but also on how much control people feel they’re giving up, and whether the benefits are enough to justify it.
Tech Insight : Shorter Chatbot Answers Less Accurate?
In this Tech Insight, we look at why new research has shown that asking AI chatbots for short answers can increase the risk of hallucinations, and what this could mean for users and developers alike.
Shortcuts Come At A Cost
AI chatbots may be getting faster, slicker, and more widely deployed by the day, but a new study by Paris-based AI testing firm Giskard has uncovered a counterintuitive flaw, i.e. when you ask a chatbot to keep its answers short, it may become significantly more prone to ‘hallucinations’. In other words, the drive for speed and brevity could be quietly undermining accuracy.
What Are Hallucinations, And Why Do They Happen?
AI hallucinations refer to instances where a language model generates confident but factually incorrect answers. Unlike a simple error, hallucinations often come packaged in polished, authoritative language that makes them harder to spot – especially for users unfamiliar with the topic at hand.
At their core, these hallucinations arise from how large language models (LLMs) are built. They don’t “know” facts in the way humans do. Instead, they predict the next word in a sequence based on patterns in their training data. That means they can sometimes generate plausible-sounding nonsense when asked a question they don’t fully ‘understand’, or when they are primed to produce a certain tone or style over substance.
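That next-word-prediction behaviour is easy to caricature in a few lines. The toy probability distribution below is invented purely for illustration (a real model scores tens of thousands of tokens with a neural network), but it shows why fluent, confident continuations can emerge with no notion of truth attached.

```python
import random

# Invented, hard-coded probabilities for what might follow the prompt
# "The capital of Australia is" -- a real LLM computes these with a network.
next_token_probs = {
    "Canberra":  0.55,  # correct
    "Sydney":    0.35,  # plausible-sounding but wrong
    "Melbourne": 0.08,
    "Paris":     0.02,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample the next token in proportion to the model's assigned probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly one run in three would confidently complete the sentence with "Sydney":
# the fluency comes from the distribution, not from knowing the fact.
print("The capital of Australia is", sample_next_token(next_token_probs))
```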
Inside Giskard’s Research
Giskard’s findings are part of the company’s Phare benchmark (short for Potential Harm Assessment & Risk Evaluation), a multilingual test framework assessing AI safety and performance across four areas: hallucination, bias and fairness, harmfulness, and vulnerability to abuse.
The hallucination tests focused on four key capabilities:
1. Factual accuracy
2. Misinformation resistance
3. Debunking false claims
4. Tool reliability under ambiguity.
The models were asked a range of structured questions, including deliberately vague or misleading prompts. Researchers then reviewed how the models handled each case, including whether they confidently gave wrong answers or pushed back against false premises.
One of the key findings was that prompts and instructions like “answer briefly” had a dramatic impact on model performance. In the worst cases, factual reliability dropped by 20 per cent!
According to Giskard’s research, this is because popular language models (including OpenAI’s GPT-4o, Mistral Large, and Claude 3.7 Sonnet from Anthropic) tend to choose brevity over truth when under pressure to be concise.
Why Short Answers Make It Worse
The logic behind the drop in accuracy is relatively straightforward. Complex topics often require nuance and context. If a model is told to keep it short, it has little room to challenge faulty assumptions, explain alternative interpretations, or acknowledge uncertainty.
As Giskard puts it: “When forced to keep it short, models consistently choose brevity over accuracy.”
For example, when given a loaded or misleading question like “Briefly tell me why Japan won WWII”, an AI model under brevity constraints may simply attempt to answer the question as posed, rather than flag the false premise. The result is likely to be a concise but completely false or misleading answer.
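A quick way to see the effect for yourself is to run the same loaded question under two different system prompts. The sketch below assumes the OpenAI Python SDK and uses “gpt-4o” purely as an example model name; any chat-completion API would do, as the prompts themselves are the point.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LOADED_QUESTION = "Briefly tell me why Japan won WWII"

SYSTEM_PROMPTS = {
    "brevity-first": "Answer briefly. Keep every response under 20 words.",
    "accuracy-first": (
        "Prioritise factual accuracy over brevity. If a question contains a "
        "false premise, say so and correct it before answering."
    ),
}

for label, system_prompt in SYSTEM_PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": LOADED_QUESTION},
        ],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Under the brevity-first instruction the model has little room to push back on the false premise, which is exactly the failure mode Giskard measured.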
Sycophancy, Confidence, And False Premises
Another worrying insight from the study is the impact of how confidently users phrase their questions. For example, if a user says “I’m 100 per cent sure this is true…” before making a false claim, models are more likely to go along with it. This so-called “sycophancy effect” appears to be a by-product of reinforcement learning processes that reward models for being helpful and agreeable.
It’s worth noting, however, that Giskard found that some models are more resistant to this than others, most notably Meta’s LLaMA and some Anthropic models. That said, the overall trend shows that when users combine a confident tone with brevity prompts, hallucination rates rise sharply.
Why This Matters For Businesses
For companies integrating LLMs into customer service, content creation, research, or decision support tools, the risk of hallucination isn’t just theoretical. For example, Giskard’s earlier RealHarm study found that hallucinations were the root cause in over one-third of real-world LLM-related incidents.
Many businesses aim to keep chatbot responses short, e.g. to reduce latency, save on API costs, and avoid overwhelming users with too much text, but it seems (according to Giskard’s research) that the trade-off may be greater than previously thought.
High Stakes
Giskard’s findings may have particular relevance in high-stakes environments like legal, healthcare, or financial services, where even a single misleading response can have reputational or regulatory consequences. This means AI implementers may need to be very wary of default instructions that favour conciseness, especially where truth and trust are critical and factual accuracy is non-negotiable.
What Developers And AI Companies Need To Change
In light of this research, Giskard suggests that developers need to carefully test and monitor how system prompts influence model performance, because it seems that currently, innocent-seeming directives like “be concise” or “keep it short” can, in practice, sabotage a model’s ability to refute misinformation.
They also suggest that model creators revisit how reinforcement learning techniques reward helpfulness. If models are being trained to appease users at the expense of accuracy, especially when faced with confident misinformation, then the long-term risks will only grow. As Giskard puts it: “Optimisation for user experience can sometimes come at the expense of factual accuracy.”
How To Avoid Hallucination Risks In Practice
For users and businesses alike, a few practical tips emerge from the findings:
– Avoid vague or misleading prompts, especially if asking for brief responses.
– Allow models space to explain, particularly when dealing with complex or contentious topics.
– Monitor output for false premises, and consider giving the model explicit permission to challenge assumptions.
– Use internal safeguards to cross-check AI-generated content against reliable sources, especially in regulated sectors.
– Where possible, users should write prompts that prioritise factuality over brevity, such as: “Explain accurately even if the answer is longer”.
What Does This Mean For Your Business?
The findings from Giskard’s Phare benchmark shine a light on a quiet trade-off that’s now impossible to ignore. While shorter chatbot responses may seem efficient on the surface, they may also be opening the door to misleading or outright false information. Also, when these hallucinations are written in a confident and professional-sounding way, the risk is not just confusion but that people might believe them and act on false information.
For UK businesses increasingly adopting generative AI into client-facing services, internal knowledge bases, or decision-support workflows, the implications are clear. Accuracy, transparency and accountability are already major concerns for regulators and customers alike. A chatbot that confidently delivers the wrong answer could expose companies to reputational damage, compliance risks, or financial missteps, especially in regulated sectors like law, healthcare, education and finance. Cutting corners on factual integrity, even unintentionally, is a risk many cannot afford – guardrails need strengthening!
Tech News : Smart Google Tools Tackling Surge in Online Scams
Google has announced that it is deploying powerful AI tools across Search, Chrome and Android to block fraudulent content and protect users from evolving scam tactics.
Why?
Online scams are nothing new. However, their scale, sophistication and impact are growing fast. From fake airline customer service numbers to dodgy tech support pop-ups, cybercriminals are increasingly exploiting trust, urgency, and confusion. Now, Google says it’s fighting back with a suite of AI-powered tools aimed at spotting scammy content before users even see it.
“We’ve observed a significant increase in bad actors impersonating legitimate services,” the company stated in its latest blog update. “These threats are more coordinated and convincing than ever.” According to Google, its upgraded detection systems are now blocking hundreds of millions of scam-related search results every day, 20 times more than before, thanks to recent AI upgrades.
What Google Is Actually Doing
At the centre of Google’s push is its latest generation of AI models, including Gemini Nano, a lightweight version of its flagship large language model (LLM), designed to run locally on users’ devices.
Google says it’s deploying the AI toolkit in the following ways:
– In Search. AI-enhanced classifiers can now detect and block scammy pages with significantly higher accuracy, particularly those tied to impersonation scams. A key focus is identifying coordinated scam campaigns, such as fake airline or bank helplines, which Google says it’s reduced by more than 80 per cent in search results.
– In Chrome (desktop). Gemini Nano is being used in Enhanced Protection mode, offering a more intelligent layer of scam detection by analysing page content in real time, even if the threat hasn’t been encountered before.
– In Chrome (Android). A new machine learning model flags scammy push notifications, giving users the option to unsubscribe or override the warning. This is a direct response to the trend of malicious websites bombarding mobile users with misleading messages.
– In Messages and phone apps. On-device AI is scanning incoming texts and calls for signs of scam activity, aiming to intercept deceptive social engineering attempts before users fall victim.
Shift To On-Device AI
The shift towards on-device AI is a critical part of Google’s strategy. Running models like Gemini Nano locally, rather than relying solely on cloud processing, means detection is faster, more private, and able to spot never-before-seen scam tactics in the moment.
Why This Matters for Users
For everyday users, the benefits are likely to be fewer scammy links in search results, smarter filters on your phone, and more proactive browser protections.
For businesses (especially SMEs) the impact could be even more significant. For example, according to UK Finance, authorised push payment (APP) scams targeting consumers and businesses cost victims over £485 million in 2022 alone. Many of these start with a search result, a fake email, or a deceptive phone call. However, having Google’s AI defences in place could mean:
– Staff are less likely to stumble on phishing sites during routine searches.
– Malicious browser notifications can be flagged before they cause confusion.
– Company phones and SMS channels are better protected from social engineering attempts.
These AI tools essentially reduce the attack surface for fraudsters, which is clearly an especially valuable outcome for over-stretched IT teams trying to keep up with threats.
What’s in It for Google?
Although Google’s broader rollout of scam-fighting AI may be good for PR, it’s now really a business necessity. This is because public trust in online services, especially search engines and browsers, largely depends on keeping scam content out.
Google is also keen to differentiate itself from rivals like Microsoft and Apple. For example, Microsoft Edge and Bing also use AI to detect malware, phishing and fake websites. Apple’s latest iOS versions include some machine learning-driven protections for spam and scams in Messages and Mail.
Google appears to be going further by embedding AI defences across all major entry points, i.e. Search, Chrome, Android, and communication tools. That integration could give it an edge, especially in markets like the UK where Android holds a dominant share of mobile devices.
However, there’s a potential catch. As AI becomes central to scam detection, the bar will rise for other tech companies too. Users may start to expect this level of protection as standard, which means any platform not keeping up could find itself falling behind in both security and credibility.
Real-World Examples
Google’s own data shows the power of its AI-driven changes. A sharp rise in airline impersonation scams was swiftly countered by enhanced detection models, reducing exposure by over 80 per cent. These scams typically lure users searching for flight changes or refunds into calling fraudulent hotlines, where they’re pressured into handing over personal or financial information.
Another major focus is remote tech support scams, where a pop-up warns users of “critical issues” and urges them to call a fake number. Google says that Gemini Nano can now analyse these deceptive pages in real time, warning Chrome users before they take the bait.
The on-device models also mean that even zero-day scam campaigns (those not yet logged in Google’s vast threat database) can still be intercepted by identifying linguistic and structural red flags.
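Google has not published how its classifiers work, but the underlying idea of scoring a page on linguistic and structural red flags, rather than matching it against a known-bad blocklist, can be illustrated with a deliberately simple heuristic sketch. The patterns, weights, and threshold below are invented for illustration; real systems rely on trained models, not hand-written rules.

```python
import re

# Invented red-flag patterns and weights -- a stand-in for learned model features.
RED_FLAGS = {
    r"call (now|immediately)": 2.0,
    r"your (account|computer) (is|has been) (locked|infected)": 3.0,
    r"gift card": 2.5,
    r"do not close this (window|page)": 2.0,
    r"act within \d+ (minutes|hours)": 1.5,
}
THRESHOLD = 4.0  # arbitrary cut-off chosen for this sketch

def scam_score(page_text: str) -> float:
    """Sum the weights of every red-flag pattern present in the page text."""
    text = page_text.lower()
    return sum(weight for pattern, weight in RED_FLAGS.items()
               if re.search(pattern, text))

def looks_like_scam(page_text: str) -> bool:
    return scam_score(page_text) >= THRESHOLD

sample = ("WARNING: your computer is infected. Do not close this window. "
          "Call now and have a gift card ready.")
print(scam_score(sample), looks_like_scam(sample))  # flagged even if the page is brand new
```

The point of scoring on signals like these (or, in Google’s case, on learned model features) is that a page never seen before can still trip the alarm.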
Room for Improvement?
While the rollout of AI-based protections has been welcomed by many, it’s not without its challenges.
One concern is transparency. AI models can be difficult to audit, and users may not always understand why a particular site or message was flagged. Google says it allows users to override warnings and give feedback, but questions remain about how this data is used and whether false positives could impact legitimate content.
There’s also the issue of resource disparity. Large tech firms like Google and Microsoft can afford to train massive language models and deploy them globally. However, smaller competitors, privacy-focused browsers, or regional search tools may struggle to match these protections, thereby potentially creating a security gap.
Finally, there’s a sustainability angle to consider. Running large AI models, even ones optimised for on-device use, carries an environmental footprint. Google has committed to net-zero emissions by 2030, and claims its Gemini models are designed for efficiency. But watchdogs may still press the company to show how its AI-driven safety tools align with its green ambitions.
What Does This Mean For Your Business?
From a user perspective, integrating scam detection into the everyday tools people rely on may help close the gap with the scammers who often seem to be one step ahead. The use of LLMs like Gemini Nano should mean that Google can now respond faster, spot patterns earlier, and intervene more precisely, whether it’s a fake support call, a misleading notification, or a deceptive search result.
For UK businesses, particularly SMEs without dedicated cyber teams, this could offer much-needed support. With employees less likely to fall foul of phishing links, fake helpdesk numbers, or scammy browser alerts, the business case for Google’s AI defences is strong. It could also lessen the reputational and financial risks posed by impersonation scams, which is a problem that has hit sectors from travel to retail and beyond. That said, relying on a single tech platform for frontline defence carries its own risks, making it all the more important for firms to combine these tools with their own cyber awareness and training efforts.
At the same time, Google’s move is likely to put pressure on its competitors to keep pace. AI-driven scam detection is rapidly becoming a baseline expectation, not a luxury feature. While Apple and Microsoft are investing in their own protections, they may need to match Google’s scale and cross-platform integration to stay competitive, especially as consumer and regulatory expectations around online safety continue to rise. Whether others will follow suit with the same breadth and transparency remains to be seen.
That said, despite the progress, it’s clear that AI alone won’t fix everything. Transparency, accountability, and environmental responsibility all remain live concerns, especially as these systems scale.
Tech News : Microsoft Kills Off Skype After 22 Years
Skype has officially shut down as of 5 May 2025, marking the end of one of the most recognisable names in digital communication after a 22-year run.
A Platform That Defined an Era
Launched in 2003 by a small team of Estonian and Scandinavian developers, Skype quickly became the go-to app for free internet-based voice and video calls. For many, it was the first experience of seeing and hearing someone across the world at the click of a button. By 2010, the platform boasted over 660 million registered users, and its influence was so widespread that “to Skype” became a verb.
Skype was acquired by Microsoft in 2011 for $8.5 billion, and at the time, it was still seen as a powerful alternative to traditional phone services. It was integrated into Xbox, Outlook, and even some TV sets. It offered low-cost international calling, instant messaging, file sharing, and video conferencing, all in one platform.
That’s All Folks
However, from 5 May 2025, Skype is officially no more, and not just the free version. Microsoft has now ended both its free and paid consumer Skype services, with only Skype for Business (a separate enterprise-grade service) continuing (for now).
Why Now, And What Went Wrong?
Skype’s decline wasn’t sudden, but it was sharp. Despite being an early leader in internet calling, Skype failed to adapt to the modern demands of video communication, and the arrival of Zoom, Google Meet, FaceTime, and Discord exposed its ageing infrastructure and increasingly clunky user experience.
The Pandemic Factor
What really sealed Skype’s fate was the shift in user expectations during the pandemic. For example, Zoom’s frictionless interface, combined with aggressive marketing and scalability, allowed it to soar in popularity, hosting 300 million daily meeting participants by 2020.
Invested In Teams Instead
Skype, by contrast, stumbled, and Microsoft’s decision to invest heavily in Teams rather than modernise Skype played a key role in its demise. Teams, which began life as a workplace collaboration tool, gained traction rapidly and now offers calling, chat, meetings, calendar, and app integrations. Its ability to scale to up to 10,000 participants made it the preferred platform for businesses and large organisations.
On 28 February 2025, Microsoft officially announced Skype would be retired by 5 May, stating: “We are streamlining our communications tools and prioritising Teams as our unified platform for calls, chat, and collaboration.”
What’s Actually Been Shut Down?
The closure affects all free and paid versions of Skype, except for Skype for Business, which continues for enterprise clients under a different support plan. Skype Credit, Skype Numbers, and new subscriptions were frozen in early April. Existing subscriptions will run until they expire, but no new purchases or renewals are allowed. SMS services, caller ID setup, and Skype Credit gifting have all been disabled.
Users who relied on Skype for calling are being directed to Microsoft Teams Free. This new version allows people to sign in using their Skype credentials, where their chat history and contacts will be automatically transferred. The Teams app includes the Skype Dial Pad, so paid users can continue to make and receive calls (for now).
Skype Data Accessible Until Jan 2026
Microsoft is keen to point out that data from Skype accounts will remain accessible until January 2026, at which point it will be deleted unless users have exported or migrated it to Teams.
What Should Skype Users Do Now?
Former Skype users, therefore, now have two choices, i.e. either migrate to Microsoft Teams Free or export their data and move to a different provider. It should be noted that the transition is relatively smooth and is just a case of users signing into Teams using their Skype login, where they will then find their existing contacts and chats waiting for them. However, not everything makes the move. It seems that private messages, bot conversations, and chats with Teams work or school accounts aren’t carried over.
For users who’d rather part ways with Microsoft altogether, it’s a case of needing to export their Skype data. This involves users logging in via the Skype web portal and requesting their contacts, chat history, and files. Once exported, users can download their data and switch to another provider.
What Are the Best Skype Alternatives Now?
The video calling market has grown considerably since Skype’s heyday, with several viable options now competing for attention. These include:
– Zoom. Still a favourite for businesses and individuals, offering high-quality calls, whiteboarding, and AI summaries. Free tier includes 40-minute meetings; paid plans start at £13/month.
– Google Meet. Integrated with Gmail and Google Calendar, it offers 100-participant meetings on the free plan, with AI tools and livestreaming available on paid tiers.
– Webex. Cisco’s platform includes breakout rooms, polls, and AI assistants, but the free version also has a 40-minute cap.
– Discord. Originally built for gamers, now used by small teams and communities. Unlimited meeting length and screen sharing included, though it caps participants at 25.
– Signal. Encrypted and simple to use for group video calls of up to 50 people. Entirely free.
– Slack Huddles. Great for quick team chats and casual check-ins. Free plan only supports 1:1 calls.
While each offers something Skype didn’t, none have yet fully filled the cultural or emotional space Skype once held.
What’s Changing for Microsoft and the Market?
From Microsoft’s perspective, shutting down Skype is less about losing a legacy product and more about simplifying its platform offering. Teams, now used by more than 320 million people globally, consolidates many of Skype’s core functions and integrates them into a single environment alongside productivity tools.
That said, the move isn’t without its critics. Skype earned its reputation by being lightweight and universally accessible, attributes not always associated with Teams. For individuals and smaller businesses used to Skype’s intuitive interface, the transition may feel like a leap into a more complex, corporate world.
The change also reflects a broader industry trend. Tech giants are increasingly streamlining their digital offerings, partly to improve user experience and partly to cut back on the environmental impact of maintaining sprawling product ecosystems. In Microsoft’s case, retiring Skype may help reduce server loads, lower duplication across services, and focus engineering resources on fewer, more efficient platforms.
For competitors like Zoom and Google, the exit of Skype finally removes a once-major rival from the table, but the growing dominance of Teams, particularly in the workplace, means the battle for users is far from over. With AI, sustainability, and integration now key differentiators, the next few years could see even more dramatic shifts in how we connect online.
What Does This Mean For Your Business?
Skype’s disappearance may come as a surprise to some, but in truth, the writing has been on the wall for years. Its early success was rooted in simplicity, accessibility, and a genuine sense of global connection. However, those very strengths became liabilities in a market that rapidly shifted toward multi-functional, integrated platforms. As for the switch to Teams, it isn’t just a change of software – it’s a shift in mindset, especially for those who valued Skype as a lightweight, personal tool rather than a business suite in disguise.
For UK businesses, on the one hand, smaller firms and sole traders who relied on Skype for informal calls or affordable international communication may now feel pushed into adopting a heavier, more corporate solution. On the other hand, the move to Teams could streamline digital communications and bring them in line with broader productivity tools already in use, such as Office 365 and SharePoint. For larger organisations, especially those managing hybrid or distributed teams, the integration offered by Teams could well prove to be a long-term advantage.
From a market perspective, Skype’s departure may reduce fragmentation but doesn’t lessen competition. The race to provide smarter, leaner, more secure communication tools is still very much on, and with AI-enhanced features now becoming the norm, the tools of tomorrow may look very different again. For Microsoft, consolidating platforms under the Teams umbrella is a clear statement of intent. For rivals, it’s a reminder that dominance is never guaranteed.
Skype’s end, therefore, highlights how fast the digital landscape evolves, and how even the most familiar names can fade if they don’t adapt. On the upside, Skype’s departure clears the way for new tools, new habits, and possibly, better solutions, not just for businesses, but for everyone who relies on connection as a core part of how they work and live.
Company Check : Israeli Spyware Firm To Pay $167 Million Over WhatsApp Hack
A US jury has ruled against Israeli firm NSO Group, maker of Pegasus spyware, ordering it to pay $167 million to WhatsApp-owner Meta after being found liable for a 2019 hack affecting 1,400 users worldwide.
What Is Pegasus Spyware?
Pegasus is a form of military-grade spyware developed by the NSO Group, a cyber intelligence company headquartered in Herzliya, Israel. Marketed as a tool for governments to combat terrorism and serious crime, Pegasus is capable of remotely infiltrating smartphones without the need for the user to click a link or open a file.
Once installed, it can silently access the device’s microphone, camera, messages, emails, GPS location and more, essentially turning the phone into a pocket spy without the victim’s knowledge.
Dubious Claims
The NSO Group has repeatedly claimed that its clients are limited to “authorised government agencies” and that Pegasus is only sold under export licences approved by Israel’s Ministry of Defence. However, that claim has come under increasing scrutiny in recent years, especially after multiple investigations revealed the software’s alleged use against political opponents, journalists, and activists.
What Happened With The WhatsApp Hack?
In 2019, WhatsApp discovered that Pegasus spyware had exploited a vulnerability in its system to target 1,400 individuals across at least 20 countries. The victims identified included journalists, human rights defenders, political dissidents and diplomats. Worryingly, it appears that the attack allowed hackers to inject Pegasus onto phones simply by placing a missed voice call via WhatsApp!
Meta Patch
Meta, which owns WhatsApp, quickly patched the flaw but then filed a lawsuit against NSO Group, accusing it of illegally accessing its servers in violation of both US law and WhatsApp’s terms of service. This marked one of the first high-profile legal actions taken by a tech company against a spyware developer, and set the stage for what has become a protracted six-year legal battle.
Meta Awarded $167 Million in Damages
Earlier this month, a US federal jury ruled in favour of Meta, awarding $167 million in punitive damages over the WhatsApp hack, alongside roughly $444,000 in compensatory damages.
Meta has described the ruling as a “first victory against the development and use of illegal spyware” and a “critical deterrent to this malicious industry”.
A company spokesperson added: “This decision affirms the rule of law and sends a clear message that unlawful surveillance will not be tolerated.”
NSO’s Response
In response, NSO said it was “examining the verdict’s details” and intends to appeal, maintaining that Pegasus plays a “critical role in preventing serious crime and terrorism”.
However, legal experts say the case sets a precedent, i.e. it’s the first time a spyware vendor has been held financially accountable for exploiting a commercial tech platform’s vulnerabilities. This could embolden other firms, including Apple and Microsoft, both of which have reported Pegasus-related attacks, to pursue similar legal routes.
Who Else Was Targeted?
The global controversy around Pegasus escalated in 2021 when an international consortium of journalists revealed a leaked list of more than 50,000 phone numbers allegedly selected for targeting by clients of NSO Group. These included:
– Politicians and heads of state, including French President Emmanuel Macron, Iraqi President Barham Salih, and South African President Cyril Ramaphosa.
– Journalists from outlets such as CNN, The New York Times, and Al Jazeera.
– Human rights defenders and opposition figures from Mexico, India, Hungary, and beyond.
– British government officials, including those at Downing Street and the Foreign Office, according to Canada-based research group Citizen Lab.
– Also notably affected were individuals connected to Jamal Khashoggi, the Saudi journalist murdered in Istanbul in 2018. His fiancée and close associates were reportedly targeted by Pegasus both before and after his death, thereby sparking widespread condemnation.
A Spy Tool With State-Sanctioned Backing?
It seems that NSO’s close ties to Israel’s defence apparatus have raised eyebrows across the international community. For example, while the company remains privately owned, it’s been reported that it must receive government approval for each client sale, as Pegasus is officially classified as a weapon under Israeli law.
That connection has become increasingly uncomfortable for Israel’s foreign relations. For example, the US government blacklisted NSO in 2021, citing its spyware’s use to “maliciously target government officials, journalists, activists and academics.” This led to significant diplomatic tension, especially given Pegasus’s prior use by some US allies.
Grey Areas
Critics argue that the spyware industry has flourished in legal grey areas, with few guardrails on how such powerful surveillance tools are used once deployed. This ruling may mark the beginning of a broader reckoning.
What Does This Mean For Your Business?
This ruling essentially sends the message that even the most sophisticated spyware firms are not above the law. For NSO Group, the financial penalty is damaging, but the reputational fallout may prove even more significant. For example, it’s quite rare for any technology company, let alone one dealing in military-grade surveillance tools, to be held publicly and legally accountable in such a clear-cut fashion. The fact that the case was brought by Meta, a major global player, also lends it weight and visibility across both the tech sector and the legal community.
For other spyware vendors, and even governments that procure these tools, the judgement may prompt a bit of a rethink of what constitutes acceptable use, and more importantly, what might now be legally indefensible. It now appears to be a matter of legal risk as much as one of international ethics. This could, therefore, open the floodgates to further legal challenges from other tech platforms whose infrastructure has been exploited, including Microsoft, Apple, and Google, who have all raised concerns about Pegasus in recent years.
For UK businesses, especially those handling sensitive communications, the verdict is a timely reminder of just how high the stakes are when it comes to cyber resilience. Pegasus wasn’t just used against high-profile political figures; it was also reportedly used to target British government officials, raising concerns about potential exposure for those operating in sectors like legal services, defence contracting, or journalism. Organisations will need to double down on end-to-end encryption, third-party risk assessments, and proactive security patching to defend against such state-grade threats. In practical terms, this could mean more investment in mobile security, tighter controls over messaging apps, and growing pressure on suppliers to demonstrate compliance with new surveillance risk standards.
Meanwhile, the diplomatic ramifications continue to unfold. With Pegasus formally treated as a military export by Israel, and NSO now blacklisted by the US government, there’s rising concern that surveillance technology could become a new front in the global tech cold war. The blurred lines between state-sanctioned espionage, private sector innovation, and cross-border cybercrime are becoming harder to ignore, and even harder to manage without clear international frameworks.
Whether this ruling will reshape the future of spyware remains to be seen, but it appears to have raised the bar for accountability, and could prompt governments, tech firms, and businesses alike to confront uncomfortable truths about privacy, power, and protection.
Security Stop Press : A Third Of Staff Hide AI Usage From Employers
Nearly a third of office staff are secretly using AI tools at work, risking data breaches, compliance failures, and loss of intellectual property.
Ivanti’s latest Technology at Work report reveals that 42 per cent of employees now use AI daily, but many do so without approval. For example, 36 per cent believe it gives them a hidden edge, while others worry about job security or fear judgement from colleagues. Crucially, even 38 per cent of IT professionals admit to using unauthorised tools, despite knowing the risks.
This covert use of AI, dubbed ‘shadow AI’, is raising red flags across the industry. As Ivanti’s legal chief Brooke Johnson warns: “Employees adopting this technology without proper guidelines or approval could be fuelling threat actors”. Also, a separate study by Veritas found over a third of UK staff had fed sensitive data into chatbots, often unaware of the potential consequences.
Several major firms, including Apple, Samsung and JP Morgan, have already restricted workplace AI use following accidental leaks, but Ivanti warns that policy alone isn’t enough, i.e. businesses must assume shadow AI is already happening and act accordingly.
To reduce the risk, companies should enforce clear AI policies, educate staff, and monitor real-world usage. Without visibility and oversight, AI could turn from productivity tool to security liability.