Amazon AWS … What Happened?
Amazon Web Services has issued a full apology and technical explanation after a 15-hour outage in its North Virginia data region took thousands of major platforms offline, exposing the internet’s heavy dependence on a handful of US cloud providers.
What Happened?
The incident began late on Sunday 19 October, when engineers at Amazon’s US-East-1 data region in North Virginia detected connection failures across multiple services. Starting at 11:48 pm Pacific time, critical systems linked to Amazon DynamoDB, a database service used by many of the world’s largest apps, began to fail.
The root cause, according to Amazon’s post-event summary, was a “latent race condition”, which is a rare timing bug in the automated Domain Name System (DNS) management system that maintains the internal “address book” of AWS services. In this case, automation mistakenly deleted the DNS record for DynamoDB’s regional endpoint, effectively removing its ability to resolve names to IP addresses.
This caused immediate DNS failures for any service trying to connect to DynamoDB, including other AWS components such as EC2 virtual machines, Redshift databases, and Lambda compute functions. The DNS record was finally restored manually at around 2:25 am, but many dependent systems took much longer to recover, with some continuing to fail well into the afternoon of 20 October.
Where The Internet Felt It
The outage’s ripple effects in this case were global. For example, more than 1,000 platforms and services experienced disruption, including Snapchat, Reddit, Roblox, Fortnite, Lloyds Bank, Halifax, and Venmo. UK financial services were hit particularly hard, with some Lloyds customers reporting payment delays and app errors until mid-afternoon.
Other sectors also suffered unexpected consequences. For example, Eight Sleep, which manufactures smart mattresses that use cloud connections to control temperature and positioning, confirmed that some of its products overheated or became stuck in a raised position during the incident. The company said it would work to “outage-proof” its devices following the incident.
For millions of consumers, the event briefly made parts of the digital world disappear. Websites, apps, and connected devices remained online but could not “see” each other due to the DNS fault, illustrating just how central Amazon’s infrastructure has become to everyday online activity.
How A Single Glitch Spread So Widely
At its core, the failure was a simple but catastrophic DNS issue. DNS is the internet’s naming system, translating web addresses into machine-readable IP numbers. When AWS’s automation produced an empty DNS record for DynamoDB’s endpoint, every application depending on it lost its bearings.
AWS engineers later confirmed that two redundant DNS update systems, known internally as “Enactors”, attempted to apply configuration plans simultaneously. One was significantly delayed, allowing an older plan to overwrite a newer one before being deleted, taking all IP addresses with it. The automation could not self-repair, leaving manual intervention as the only option.
As a result, internal systems that depended on DynamoDB also stalled. Amazon EC2, the platform used to launch virtual servers, could not start new instances. Network Load Balancer (NLB), which distributes traffic between servers, suffered cascading health-check failures as it tried to route connections to resources that were technically online but unreachable.
Why Recovery Took Most Of The Day
While the DNS issue was resolved within hours, the actual automated systems that depend on it did not immediately catch up. For example, EC2’s control software reportedly entered a “congestive collapse” as it attempted to re-establish millions of internal leases with physical servers. Restarting this process safely took several hours.
At the same time, delayed network configurations created a backlog in AWS’s Network Manager, causing newly launched instances to remain disconnected. To make things worse, load balancers then misinterpreted these delays as failures and pulled healthy capacity from service, worsening connection errors for some customers.
By early afternoon on 20 October, Amazon said all EC2 and NLB operations were back to normal, though the ripple effects continued to be felt across smaller services for some time.
Amazon’s Explanation And Apology
Following the outage (and the backlash), Amazon published a detailed (long) 7,000-word technical report outlining the chain of events. The company admitted that automation had failed to detect and correct the DNS deletion and said manual recovery was required to restore service.
“We apologise for the impact this event caused our customers,” Amazon wrote. “We know how critical our services are to our customers, their applications and end users, and their businesses. We know this event impacted many customers in significant ways.”
The company confirmed it has disabled the affected DNS automation worldwide until a permanent fix is in place. AWS engineers are now adding new safeguards to prevent outdated plans from being applied, and additional limits to ensure health checks cannot remove too much capacity during regional failovers.
Reactions And Tech Commentary
Industry experts have generally described the incident as a textbook case of automation failure, pointing to how a rare timing error in AWS’s DNS management system exposed wider systemic dependencies. Many engineers have noted that the issue reinforces the importance of resilience and of designing systems to tolerate faults in automated processes.
The outage is a clear reminder of a long-standing saying in IT circles, i.e., “It’s always DNS.” Although such faults are not unusual, the sheer scale of AWS’s infrastructure meant that a single configuration error was able to cause global disruption.
An Argument For Diversifying Cloud Setups?
Experts have also warned that the outage shows why businesses should diversify their cloud setups. For example, those running all workloads within a single AWS region found themselves completely offline. Organisations using multiple regions, or backup capacity in other cloud providers, were, however, able to switch over and maintain operations.
The Broader Implications
AWS remains the market leader in global cloud infrastructure, accounting for roughly 30 per cent of worldwide spending (according to Synergy Research). Its nearest competitors, Microsoft Azure and Google Cloud, hold around 25 per cent and 11 per cent respectively. However, this latest disruption has reignited debate about overreliance on a single provider.
Large-scale customers are now likely to review their resilience strategies in the wake of the incident. Financial institutions, healthcare providers, and government departments using AWS may now face renewed scrutiny over whether they have realistic fallback options if US-East-1 (Amazon’s largest and oldest data region) goes down again.
For Amazon, the incident is a reminder that its strength as the backbone of the internet can also be its greatest vulnerability, and how every outage draws widespread attention because of its systemic impact. The company’s rapid publication of a detailed (and very long) postmortem is in line with its usual transparency practices, but is also now unlikely to prevent competitors from using the episode to argue for multi-cloud adoption.
How Users Were Affected
For individuals and smaller businesses, the experience of the outage was that websites and apps stopped working. Some services displayed error messages while others simply timed out. With AWS hosting backend systems for thousands of platforms, many users had no idea that Amazon was the root cause.
Gaming companies like Roblox and Epic Games were among the first to confirm the disruption, reporting that login and matchmaking services were unavailable for several hours. Social media feeds froze for many users, while banking and payments apps experienced intermittent outages throughout the morning.
Even Amazon’s own services, such as Alexa and Ring, saw degraded performance during the height of the incident, highlighting the circular dependencies within its own ecosystem.
What Critics Are Saying
Criticism has centred on the scale of AWS’s dominance and the concentration of critical systems in one region. The US-East-1 region handles enormous traffic, both for North America and internationally, because it hosts many AWS “control plane” functions that manage authentication and routing across the network.
Analysts have warned for years that this architecture creates a “single point of systemic risk”, which is a problem that cannot be easily fixed without major structural changes. Calls for greater geographic and provider diversity in cloud services are now growing louder, particularly from European regulators seeking more independence from US infrastructure. Analysts have essentially said the incident showed how organisations that rely on a single AWS region are (perhaps obviously) more vulnerable to disruption. Experts in cloud resilience have noted that customers without secondary regions or providers to keep services running during an outage reinforce long-standing advice to build in redundancy and avoid single points of failure.
What Now?
AWS says it is reviewing all automation across its regions to identify similar vulnerabilities. It says the DNS Enactor and Planner systems will remain disabled until the race condition bug is eliminated and additional safeguards verified. It also says engineers are enhancing testing for EC2 recovery workflows to ensure large fleets can re-establish leases more predictably after regional incidents.
For business users, the event is likely to prompt at least a discussion about the wider adoption of multi-region resilience testing and disaster recovery planning. The broader question is whether the global internet can continue to rely so heavily on a few cloud giants without developing greater local redundancy.
Amazon’s response has been technically thorough and contrite, but the 20 October outage has again exposed the fragility of the infrastructure that underpins much of modern digital life.
What Does This Mean For Your Business?
For Amazon, the scale of this disruption highlights both its dominance and its exposure. When so much of the world’s digital infrastructure runs on AWS, even a small internal fault can have far-reaching consequences. That puts continual pressure on the company to prove not only that it can recover quickly but also that it can prevent similar incidents altogether. Investors, partners, and enterprise customers will expect to see evidence of lasting improvements rather than temporary workarounds.
For UK businesses, this incident offers a practical reminder about risk, resilience, and dependency. Many British firms now rely on US cloud platforms for critical operations, from financial transactions to logistics and customer service. The lesson is, therefore, that resilience cannot be outsourced entirely. Businesses must understand where their data and services actually live, review which regions and providers they depend on, and ensure that key functions can continue if one part of the cloud goes dark.
Regulators and policymakers are also likely to have taken note of what happened and its effects. The outage is likely to reinforce long-running discussions in the UK and Europe about digital sovereignty and the risks of relying on infrastructure controlled by a handful of American companies. While creating a truly independent alternative would be expensive and complex, the case for diversified, regionally distributed systems is now stronger than ever.
Competitors, meanwhile, now have an opportunity to frame this as a kind of turning point. Microsoft, Google, and European providers such as OVH and Stackit will likely use the event to promote multi-cloud architectures and region-level redundancy. However, each faces the same challenge at scale, i.e., automation that makes systems efficient can also make them fragile when unexpected conditions arise.
Ultimately, the outage serves as a stark illustration of how deeply interconnected the modern internet has become. Every business that builds on these platforms shares some part of the same risk. The real question for Amazon and its customers alike is not whether such failures can be avoided completely, but how quickly and transparently they can recover when the inevitable happens.
Clippy Returns To Life As ‘Mico’
Microsoft has introduced “Mico”, a new animated avatar for its Copilot assistant that can be transformed into the classic Clippy paper clip, a light-hearted feature that sits within a much wider update focused on making AI more personal, expressive, and easier to use across Microsoft’s ecosystem.
What Microsoft Is Launching And When?
Mico is the new on-screen face of Copilot, designed to appear when users activate voice mode. The character is an animated, blob-like avatar that changes colour and expression during conversations, reacts to tone, and can be customised through a palette of colours and voice options. Microsoft says users can choose from eight voices with names such as Birch, Meadow, Rain, and Canyon, with a mix of British and American accents available. Mico can also be switched off entirely, meaning that voice interactions can still take place without any visual assistant.
Only In The US, For Now
For now, Mico is available only in the United States, with Microsoft confirming that a wider rollout to the UK and Canada will follow in the coming weeks. The company’s “Copilot Fall Release” package brings a range of new features, including collaboration tools, expanded integration with third-party apps, and new learning and health functions.
Easter Egg Turns To ‘Clippy’
The most nostalgic element of this change is an Easter egg, and repeatedly clicking or tapping on Mico temporarily changes its appearance to that of Clippy, the animated paper clip that appeared in Microsoft Office 97 to offer context-based help. This “Clippy skin” is not a separate mode but rather a visual overlay, and a small nod to the assistant that many users loved to hate.
Why Microsoft Is Doing This?
The relaunch forms part of Microsoft’s wider effort to humanise its AI tools. Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, has framed the strategy as “human-centred AI”. In a post announcing the update, he wrote: “Technology should work in service of people. Not the other way around.” The goal, he said, is to make Copilot “helpful, supportive and deeply personal”, empowering users rather than replacing human judgement.
Companion
This reflects Microsoft’s broader positioning of Copilot as an “AI companion”, i.e., an assistant that learns from context, remembers preferences, and provides useful prompts while respecting user control and privacy. Suleyman has also highlighted how Microsoft is “not chasing engagement or optimising for screen time” but instead building AI that “gives you back time for the things that matter”.
Mico’s Personality Designed To Be Useful Not Sycophantic
Jacob Andreou, corporate vice president of product and growth at Microsoft AI, recently explained the design rationale in an interview with the Associated Press, saying: “When you talk about something sad, you can see Mico’s face change. You can see it dance around and move as it gets excited with you.” He added that Mico’s personality was designed to be “genuinely useful” rather than flattering or manipulative. “Being sycophantic — short-term, maybe — has a user respond more favourably,” Andreou said. “But long term, it’s actually not moving that person closer to their goals.”
How It Works
In terms of how Mico/Clippy works, when users activate voice mode by clicking the microphone icon, Mico appears on-screen, listening and responding with animated movements and facial expressions. It can explain topics, summarise documents, or walk users through tasks. Testers have reportedly noted that while it responds naturally, text captions of its spoken replies are not always displayed, meaning conversations are primarily auditory.
Several New Copilot Functions
Beyond the avatar, Microsoft’s Fall (Autumn) Release actually introduces several new Copilot functions. For example, “Groups” allows up to 32 people to participate in a shared Copilot chat, enabling teams to co-plan projects, co-write content or share research. Also, “Connectors” integrate Copilot with apps including Outlook, OneDrive, Gmail, Google Drive, and Google Calendar, allowing users to ask questions across multiple data sources through natural language queries.
Another major change is memory and personalisation. For example, Copilot can now remember user preferences, projects, and recurring tasks, recalling them in future sessions and users retain the ability to edit or delete stored memories. The update also includes “Real Talk”, a new conversation mode that Microsoft says challenges assumptions “with care”, helping users to refine ideas rather than simply validate them.
Health And Learning
It seems that health and learning have become core use cases. For example, according to Microsoft, around 40 per cent of Copilot’s weekly users ask health-related questions. Therefore, to support this, the company has introduced Copilot for Health, built with guidance from medical partners such as Harvard Health, which grounds responses in credible medical information. In education, the new “Learn Live” feature turns Copilot into a voice-enabled tutor that uses dialogue and visual cues to help explain concepts ranging from photosynthesis to computer networking.
Who It’s For?
Mico is essentially designed to appeal to everyday users, families, and students who prefer natural, conversational assistance. Microsoft says it can help users plan trips, research topics, draft content or even provide guidance on everyday decisions. For schools and universities, it could represent an evolution of AI-assisted learning, and one that Microsoft hopes will feel more interactive and approachable than text-only tools.
Was Clippy Just A Bit Before Its Time?
When Clippy appeared in the late 1990s, its design didn’t really match how most people wanted to interact with their computers, yet experts now say users have become much more comfortable with expressive, character-based AI. With advances in technology and a clearer sense of what digital assistants are for, a feature that once felt intrusive is now more likely to be seen as friendly and intuitive.
For Practical Productivity Gains
For professional and business users, the focus of these new features (and the resurrected Clippy) appears to be on practical productivity gains. For example, group chats, shared memory, and cross-app integration all lend themselves to collaborative work and faster information retrieval. In theory, employees could ask Copilot to find key messages across multiple accounts, summarise project discussions, and track progress without leaving a conversation window.
What It Means For Microsoft
The Mico update essentially consolidates Microsoft’s vision of Copilot as a cross-platform assistant embedded within Windows, Edge, and Microsoft 365. By giving Copilot a consistent voice interface and optional visual identity, Microsoft is hoping to strengthen its position against rivals such as Google’s Gemini and OpenAI’s ChatGPT, which are also integrating multimodal and conversational AI into their ecosystems.
It could also be said to represent a strategic pivot towards companionship rather than novelty. For example, Suleyman’s emphasis on empathy, control, and trust is designed to counter growing public scepticism toward AI. Microsoft’s avoidance of human-like avatars or flirtatious personalities contrasts sharply with some competitors that have leaned into emotionally charged or entertainment-focused AI designs.
For Microsoft, therefore, Mico’s visual charm will likely serve as an entry point rather than the product itself. The underlying business logic lies in deeper engagement with Copilot’s ecosystem, i.e., drawing users into paid Microsoft 365 subscriptions, expanding cross-app search capabilities, and encouraging adoption of Edge and Windows 11’s built-in AI features.
Competitors
Rival technology companies are exploring similar territory. For example, OpenAI plans to restore a more conversational personality to ChatGPT, while Google continues to integrate Gemini features across its workspace products. However, Microsoft’s approach is distinctive in how it blends nostalgia and restraint, acknowledging Clippy’s cultural legacy while designing Mico to be optional, unobtrusive, and user-controlled.
By connecting to competing ecosystems such as Google Drive and Gmail through connectors, Microsoft is also signalling its intention to become the interface for managing all personal and professional data, not just that which lives within its own cloud. That interoperability could make Copilot more attractive to mixed-platform users, particularly small businesses that rely on multiple services.
Business Users
For UK businesses, Mico and Copilot’s expanded features highlight Microsoft’s ambition to make AI more visible in everyday workflows. Teams can now co-create and share tasks in Copilot Groups, while memory and connector functions reduce the need to re-enter data or switch between platforms. In practice, that could mean faster document searches, streamlined planning sessions, and AI-assisted decision-making that remains traceable and editable.
Microsoft’s insistence that Copilot should “listen, learn and earn trust” rather than replace judgement may also resonate with more compliance-conscious sectors. Features such as editable memory and explicit consent for data access help address growing governance and privacy expectations.
Challenges And Criticisms
One initial criticism is that the rollout has proved inconsistent so far, with some users reporting that Mico is visible in the web version of Copilot but not yet in the Windows 11 desktop app. Microsoft has said that availability will expand gradually, with new features appearing in phases across different regions and devices.
There are also concerns about autonomy. For example, in tests of Copilot’s booking features, reviewers reported the assistant pre-selecting hotel dates and options without confirmation, highlighting the challenge of balancing initiative with transparency.
More broadly, the industry remains a little cautious about the psychological impact of highly interactive AI. Regulators such as the US Federal Trade Commission have begun examining how AI chatbots affect children and teenagers, following reports of harmful advice and emotional over-familiarity from some AI companions. Microsoft is seeking to avoid these pitfalls by keeping Mico’s tone professional, controllable and easy to disable.
Privacy, as always with AI, is another area of concern. For example, while Microsoft says Copilot requires explicit consent before accessing connected apps and allows users to edit or delete memory data, businesses will still need clear internal policies governing what data Copilot can read and store.
The Clippy Question
Mico’s hidden Clippy transformation is basically a light-hearted reminder of how far Microsoft’s digital assistants have come. The company insists that the nostalgia is deliberate but controlled and a playful link to a familiar past, framed within a more sophisticated, opt-in design philosophy.
What Does This Mean For Your Business?
Although the Clippy revival is clearly a playful addition, it actually highlights a serious strategic moment for Microsoft. The company is reframing Copilot as more than just a functional chatbot and instead positioning it as an assistant that can adapt to human tone, behaviour, and context without overstepping boundaries. That balance between warmth and professionalism could prove important as users grow weary of overly mechanical tools yet remain cautious about overly familiar ones.
For UK businesses, the developments point towards an assistant that could fit naturally within daily workflows rather than existing as a separate app or experiment. The ability to connect Copilot to existing systems, recall previous projects, and collaborate across teams could make AI adoption more practical and measurable. It may also help smaller firms, many of which rely on mixed Microsoft and Google environments, to simplify their digital operations without major disruption.
The return of a character like Clippy, now built into an AI that listens, remembers, and coordinates across multiple platforms, underlines how much the workplace has evolved since the late 1990s. For many users, the novelty of talking to a computer has long worn off but what matters now is whether these systems save time, reduce friction, and remain trustworthy. Microsoft’s focus on consent, editability, and transparency is likely to appeal to both business and consumer stakeholders, particularly as regulators tighten expectations around data handling and AI behaviour.
The biggest test, however, will be whether Copilot’s new capabilities can actually translate into everyday usefulness rather than being just novelty (or an annoyance to some, as Clippy was before). As competition intensifies and users gain access to more sophisticated assistants from OpenAI and Google, Microsoft’s long-term advantage may rest on its ability to integrate these tools seamlessly into the familiar rhythm of Windows and Office. The Clippy transformation may be the headline-grabber in this case, but the real story is whether Mico and its wider Copilot ecosystem can finally deliver what its predecessor could not, i.e., an assistant that genuinely helps without getting in the way.
UK Ruling Could Mean Apple Compo For Millions
A UK competition court has ruled that Apple abused its market power with App Store fees, paving the way for compensation that lawyers say could total up to £1.5 billion for around 36 million iPhone and iPad users.
What The Tribunal Decided
The Competition Appeal Tribunal (CAT) found that Apple held “near absolute market power” in two linked markets, i.e., app distribution on iOS devices and in-app payment processing, and had used that position to charge “excessive and unfair” commissions, typically up to 30 per cent, on paid apps and in-app purchases.
The judgment, brought by class representative Dr Rachael Kent, actually marks the first collective competition claim to succeed at trial under the UK’s relatively new regime for group actions. Following a seven-week hearing earlier this year, the tribunal concluded that Apple’s restrictions prevented rival app stores and alternative payment options on iPhones and iPads, leaving developers and consumers with no meaningful choice but to use Apple’s system.
Expert evidence submitted to the court showed that a significant share of Apple’s overcharges to developers were passed on to users through higher prices for apps, subscriptions and digital content. The tribunal agreed, finding that Apple’s business model inflated costs for millions of consumers and small businesses across the UK.
Who Is Covered And From When?
The class action covers anyone in the UK who made purchases through the UK version of the App Store on an iPhone or iPad from 1 October 2015 onwards. That includes paid-for apps, in-app purchases and subscriptions bought within apps.
In fact, law firm Hausfeld, representing Dr Kent, estimates that around 36 million people could fall within this category. Both individual consumers and businesses are included. For example, a company that paid for productivity apps on staff iPhones or made in-app purchases for services through Apple’s system could be entitled to a share of the damages, alongside ordinary consumers.
According to the legal team, users who spent regularly could be due significant sums. For example, a fitness app subscription costing £8.99 a month could yield roughly £21.58 back per year, based on the tribunal’s findings. In another example, a £19.99 in-app purchase could equate to around £4 in compensation. The exact payout will depend on how much each person or business spent and the final calculation approved by the court.
How Much Money Are We Talking?
The tribunal has indicated that aggregate damages could reach up to an eye-watering £1.5 billion, subject to a follow-up hearing on how the total will be calculated and distributed. The court also ordered that interest be added at a rate of 8 per cent per year, which could increase the total compensation for purchases made several years ago.
The collective action covers almost a decade of App Store activity, meaning that regular app users, mobile gamers, and subscribers to digital services could all be affected. With around 36 million potential claimants, even modest individual payments could add up to one of the largest consumer compensation cases ever seen in the UK.
Why The Case Was Brought
Dr Rachael Kent, a Senior Lecturer in Digital Economy and Society Education at King’s College London, launched the case in 2021 claiming that Apple’s conduct had led to “exorbitant profits” by excluding competition and forcing developers to use its own payment system on its own terms.
After the ruling, Dr Kent described the outcome as a “landmark victory, not only for App Store users, but for anyone who has ever felt powerless against a global tech giant”. She added that the judgment “confirms that Apple has been unlawfully overcharging users for more than ten years and that up to £1.5 billion should now be returned to UK consumers and businesses”.
The tribunal agreed with her argument that Apple’s 30 per cent commission was excessive and unfair. It found that a fair rate, based on comparisons with other digital platforms, would have been closer to 17.5 per cent for app distribution and 10 per cent for payment processing.
Apple’s Response And Grounds For Appeal
It’s no surprise that Apple has said it “strongly disagrees” with the ruling and will appeal. In a statement issued after the judgment, the company said the tribunal’s view of the app economy was “flawed” and failed to recognise how the App Store had “benefited businesses and consumers across the UK”.
“The App Store helps developers succeed and gives consumers a safe, trusted place to discover apps and securely make payments,” Apple said. “This ruling overlooks how the App Store helps developers succeed and gives consumers a safe, trusted place to discover apps and securely make payments. The App Store faces vigorous competition from many other platforms — often with far fewer privacy and security protections.”
Apple also argues that because commission is only charged on paid apps and in-app purchases, around 85 per cent of the apps available on the App Store pay no commission at all. It points to its Small Business Programme, which halves the rate of commission to 15 per cent for developers earning less than $1 million a year.
The tribunal, however, rejected Apple’s argument that its restrictions were necessary to guarantee user safety and privacy, ruling that the measures were neither proportionate nor justified in relation to competition law.
What Happens Next?
A further hearing, expected in November, will determine the exact approach to calculating and distributing compensation. The court will consider Apple’s application to appeal at the same time.
Any payments to consumers are, therefore, unlikely to begin until the appeals process is complete. However, Hausfeld says the judgment firmly establishes Apple’s liability, meaning that compensation will follow once the calculations and distribution process are finalised.
For now, users can check their eligibility by reviewing their “Purchase History” under their App Store account settings. Those who have paid for apps or in-app purchases through the UK storefront since October 2015 are likely to qualify.
Why The Decision Matters Beyond iPhones
The ruling comes just days after the UK’s Competition and Markets Authority (CMA) designated both Apple and Google as having “strategic market status” under the new Digital Markets, Competition and Consumers Act. This means the regulator can now impose legally binding conduct requirements on how the firms operate their app stores, browsers and payment systems.
The CMA has already indicated it could compel Apple to allow rival app stores to operate on iPhones in the UK, potentially ending its long-standing “closed system” where software can only be downloaded through its own store.
Regulators and analysts view the CAT judgment as part of a wider pattern of scrutiny of Apple’s App Store model. The company is already facing pressure in the European Union, where the Digital Markets Act has forced it to permit third-party app stores and alternative payment routes. In the United States, Apple has been the subject of multiple antitrust investigations and private lawsuits over similar issues.
What The Court Said About Market Power And Pass-Through
The tribunal found that Apple’s control over app distribution on iOS gave it “near absolute market power”, effectively allowing it to dictate terms to developers and consumers. It also accepted evidence that roughly half of Apple’s overcharge was passed on to end users, which formed the basis for estimating total damages at up to £1.5 billion.
The court compared Apple’s commission levels with other digital marketplaces, including Microsoft’s and Epic Games’ app stores, and found its rates to be significantly higher. The tribunal concluded that the excess pricing could not be justified by any additional value or innovation provided by Apple’s system.
What Users And Businesses Should Know
The case is a collective opt-out action, meaning UK-based consumers and businesses who meet the eligibility criteria will automatically be included unless they choose to opt out. This means they will not need to sign up in advance but will be required to provide proof of purchase when the compensation scheme is finalised.
The tribunal’s order of interest at 8 per cent per year also means that older purchases, especially those made between 2015 and 2020, could attract larger payouts.
Dr Kent’s legal team has said further updates will be issued once the next phase of the case concludes. For now, eligible users are advised to retain any records of App Store purchases or subscriptions made on UK-registered Apple accounts.
The Wider Industry Context
This case is being watched closely by technology firms and regulators because it sets a new benchmark for competition enforcement in the digital economy. It also highlights how the UK’s collective action framework can be used to hold major global platforms to account for past conduct that inflated prices for consumers and businesses.
While Apple maintains that its ecosystem provides unique safety and privacy benefits, the tribunal’s findings appear to have called into question the balance between those protections and fair competition. The upcoming damages hearing will now determine what that accountability looks like in financial terms for millions of UK users.
What Does This Mean For Your Business?
The outcome of this case may mark a defining moment in how the UK approaches digital market regulation. For example, by confirming that a global company of Apple’s scale can be held accountable through collective legal action, the tribunal has set a clear precedent that could influence future cases involving other dominant tech platforms. It also signals that the UK’s competition and consumer law framework is now capable of addressing the realities of platform-based markets, where small differences in commission rates or payment terms can affect millions of users and developers simultaneously.
For UK businesses, the implications extend well beyond potential compensation. For example, many small firms that rely on mobile apps for marketing, payments, or service delivery have long been subject to the same terms as global developers, often without the ability to negotiate or switch to alternative platforms. A successful compensation process could return meaningful sums to those businesses, but more importantly, it may drive structural changes that reduce dependency on a single distribution channel. In a more competitive marketplace, smaller developers and service providers could benefit from lower costs, broader reach, and greater freedom over how they price and deliver their products.
Also, developers and consumers are likely to watch closely for signs of how Apple responds. If the appeal fails and the compensation framework goes ahead, the company may be forced to reconsider its UK App Store model to comply with competition expectations. That could include opening its payment systems to external providers or lowering commission rates to align more closely with those found in other digital marketplaces. Such changes would not only reshape Apple’s UK operations but could also influence its strategy across Europe, where similar legal and regulatory challenges are already underway.
The ruling also gives some momentum to regulators such as the Competition and Markets Authority, which has already indicated plans to impose new obligations on major digital platforms. Having both the CAT judgment and the CMA’s new enforcement powers in play strengthens the UK’s position as one of the leading jurisdictions for digital competition oversight. It could, in time, make the country a test case for how to balance consumer protection, business innovation, and fair access in the app economy.
For consumers, the short-term focus will be on how quickly compensation arrives and what steps they must take to claim it. However, the longer-term significance appears to lie in how this case may reshape the digital ecosystem itself. Whether through greater transparency, reduced commissions, or the introduction of alternative app stores, the outcome has the potential to alter how users, developers, and major tech firms interact across the UK’s mobile marketplace.
Company Check : OpenAI Unveils ChatGPT-Powered Atlas Browser
OpenAI has released Atlas, a free macOS web browser built around ChatGPT, and it arrives with big ambitions, useful features, and some immediate security questions.
What OpenAI Has Launched, And Why It Matters
OpenAI describes Atlas as “a new web browser built with ChatGPT at its core.” The idea of Atlas is, rather than visiting a website, copying content, and pasting it into a chatbot, the chatbot now lives inside the browser and can see the page you are on. OpenAI has framed it as a chance to “rethink what it means to use the web.”
Just On macOS (Free) For Now
Atlas is available now worldwide on macOS for Free, Plus, Pro, and Go users, with Windows, iOS, and Android versions “coming soon.” Business users can enable Atlas in beta, and Agent mode is available in preview for Plus, Pro, and Business tiers. OpenAI also published release notes and a download link, underlining that Atlas can import bookmarks, passwords, and browsing history from existing browsers.
How It Works In Practice
Atlas opens directly to ChatGPT rather than a traditional home page. Users can type a question or a URL, then work in a split view where ChatGPT summarises, compares, or explains the page they are on. An optional sidebar, “Ask ChatGPT,” follows the user as they browse, designed to remove the copy-paste friction that has characterised earlier chatbot use. OpenAI states that the browser can “understand what you’re trying to do, and complete tasks for you, all without leaving the page.”
Two features really stand out. The first is “browser memories,” which is an opt-in setting that allows ChatGPT to remember context from sites a user visits so it can bring that context back when needed. The second is “Agent mode,” which enables ChatGPT to act on the user’s behalf in the browser, carrying out tasks such as research, form-filling, or making bookings. OpenAI is keen to emphasise the benefit of user control, noting that browser memories can be viewed, archived, or deleted, that browsing content is not used to train models by default, and that visibility for specific sites can be turned off directly from the address bar.
Availability And Controls
At launch, Atlas includes parental controls that carry over from ChatGPT, with options to disable memories or Agent mode entirely. OpenAI says Agent mode can’t run code in the browser, download files, or install extensions, and it pauses on sensitive sites such as banks. Users can also run the agent in logged-out mode to limit access to private data.
Where Atlas Fits In A Crowded Browser Market
This move from OpenAI appears to be a direct challenge to existing players. For example, on desktop, Chrome holds about 73.65 percent of the global browser market, followed by Edge on 10.43 percent and Safari on 5.73 percent (StatCounter, September 2025). For Atlas to gain traction, it must prove both trustworthy and genuinely useful in daily workflows.
Vague Wording? What “AI Browser” Really Means
It seems that “AI browser” is quickly becoming shorthand for a set of common features, i.e., a chatbot that can read what’s on the screen, answer questions about it, and act within context. In Atlas, this takes the form of ChatGPT as a ride-along assistant that can process and recall on-page information.
Microsoft is pursuing the same idea. For example, in its Edge browser, Copilot Mode provides similar capabilities, opening a chat window that can summarise and compare data across multiple tabs. The company has also introduced “Actions,” which can fill in forms or book hotels, and “Journeys,” which group your tab history into ongoing projects.
The Indirect Prompt-Injection Issue
It seems that the most significant technical challenge currently facing Atlas, however, may not be unique to OpenAI. For example, Brave’s security team recently warned that indirect prompt injection is “a systemic challenge facing the entire category of AI-powered browsers.”
In simple terms, prompt injection occurs when a malicious webpage hides instructions that an AI assistant mistakenly interprets as user commands. This could cause the AI to perform unintended actions, such as fetching data from other tabs or leaking information from logged-in accounts.
Brave’s research revealed that similar vulnerabilities have been found in other AI browsers, including Perplexity’s Comet and Fellou, where attackers could hide commands inside website text or even faint image overlays. These instructions can bypass normal safeguards by being passed to the model as part of the page context.
In fact, OpenAI’s own documentation acknowledges this threat. For example, Dane Stuckey, OpenAI’s Chief Information Security Officer, described prompt injection as “a frontier, unsolved security problem” and said the company has implemented overlapping guardrails, detection systems, and model training updates to reduce risk. “Our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks,” he wrote, adding that users should run agents in logged-out mode when working on sensitive tasks.
Early Testing And What Researchers Are Seeing
Early demonstrations have already shown why this remains an open concern. For example, independent researchers have reportedly shared examples where Atlas responded to hidden instructions embedded within ordinary documents, producing unexpected outputs instead of the requested summaries. While these examples did not involve harmful actions, they highlight how easily indirect prompt injections can influence AI behaviour when content is treated as part of a legitimate task.
AI security researcher Johann Rehberger, who has documented several prompt-injection attacks across AI platforms, described the risk as affecting “confidentiality, integrity, and availability of data.” He noted that while OpenAI has built sensible safeguards, “carefully crafted content on websites can still trick ChatGPT Atlas into responding with attacker-controlled text or invoking tools to take actions.”
Brave’s recent post about this security issue also warned that agentic browsers can bypass traditional web protections such as the same-origin policy because they act using the user’s authenticated privileges. For example, a simple instruction hidden in a web page could, in theory, make the assistant act across sites, including banks or corporate systems, if guardrails fail.
How OpenAI Says It Has Balanced Power And Control
OpenAI has listed several design choices intended to reduce these risks. For example, users can clear specific page visibility, delete all browsing history, or use incognito windows that temporarily log ChatGPT out. Browser memories are private to the user’s ChatGPT account, are off by default, and can be managed directly in settings.
If a user opts to allow training on browsing content, pages that block GPTBot remain excluded. Agent mode cannot install extensions, access the file system, or execute code, and it pauses on sensitive sites where actions might expose personal data.
OpenAI says its approach is to combine technical safeguards with transparency. Users are shown what the agent is doing step by step, and actions can be stopped mid-flow.
For example, someone planning a dinner party can ask Atlas to find a grocery store, add ingredients to a basket, and place the order, watching each action unfold. Also, a student could use Atlas to ask real-time questions about lecture slides, while a business user can ask it to summarise competitor data or past documents without switching tabs.
Two Days Later, Microsoft Reframes Edge As An “AI Browser”
Just two days after OpenAI’s announcement, Microsoft expanded its own browser to include nearly identical functionality. On 23 October, the company unveiled an upgraded Copilot Mode for Edge, now officially described as “an AI browser.”
Mustafa Suleyman, CEO of Microsoft AI, wrote in a company blog post: “Copilot Mode in Edge is evolving into an AI browser that is your dynamic, intelligent companion.” The update introduces new features called “Actions,” which allow Copilot to fill out forms and make bookings, and “Journeys,” which group browsing sessions around specific goals.
Although Microsoft’s project was likely in development long before Atlas was revealed, the timing and similarity are notable. Both browsers now integrate AI deeply into browsing, both rely on contextual understanding to assist users, and both frame the assistant as a companion that can interpret what is on screen.
Independent reviewers have noted that the new Copilot Mode in Edge is visually and functionally close to Atlas. The layout differs slightly, but the underlying premise is the same: a built-in AI that reads, reasons, and acts on content as you browse. Microsoft says all new features require user consent before accessing tab content or history.
Challenges And Criticisms
While Atlas has been praised for its clean design and intelligent functionality, some experts have already raised questions about privacy, data control, and long-term security. OpenAI insists that browser memories are fully optional and off by default, but data protection specialists warn that even anonymised context retention can reveal behavioural patterns over time.
Also, some commentators have warned that Atlas, like other AI-driven browsers, could raise new privacy and security concerns if not carefully managed. For example, cybersecurity specialists have noted that the browser’s ability to access bookmarks, saved passwords, and full browsing histories could make the trade-off between convenience and data protection more critical than ever. They have also cautioned that combining web activity with chatbot interactions could increase risks such as profiling, targeted phishing, or unintended exposure of sensitive information.
It should also be noted here that early feedback from users has been mixed. For example, some testers have praised Atlas for its clear presentation of information and accurate sourcing, while others have reported slower performance and questioned how effectively Agent mode will operate once the browser is adopted at scale.
Cybersecurity researchers point out that even if Atlas performs safely under current controls, new prompt-injection techniques are constantly being developed. Brave’s researchers have already hinted that further vulnerabilities are likely to surface as more companies introduce AI-driven browsing.
The balance between innovation and oversight, and between convenience and confidentiality could, therefore, be the central test for Atlas and the new wave of AI browsers it represents.
What Does This Mean For Your Business?
OpenAI’s launch of Atlas could be one of the most ambitious steps yet in merging web browsing with conversational AI. It shows how quickly the boundary between search, productivity, and automation is dissolving, with the browser itself becoming a personal assistant rather than a static window to the internet. Yet it also exposes how far the technology still has to go before it can be trusted to act independently in real-world settings.
For users, the attraction is that Atlas promises a streamlined way to find information, take action, and move between tasks without switching tabs or tools. For OpenAI, it provides a direct platform for embedding ChatGPT more deeply into everyday digital life. However, the same integration that makes Atlas powerful also increases the surface area for risk. Allowing an AI agent to see and act within live browsing sessions inevitably raises questions about data access, authentication, and the potential for malicious manipulation through prompt injection or hidden instructions.
UK businesses, in particular, may need to approach Atlas with a mix of curiosity and caution. For example, the prospect of an intelligent browser that can summarise research, handle admin tasks, or automate data collection could boost productivity and streamline workflows. However, organisations will have to consider how it interacts with internal systems, how data is stored and transmitted, and whether its automation features comply with corporate security and privacy policies. For sectors such as finance, healthcare, and education, these considerations will be especially pressing, as even minor missteps could expose sensitive information or breach compliance rules.
For other stakeholders, including regulators and cybersecurity specialists, Atlas may represent an early glimpse of what “agentic” browsing could actually mean for the wider internet. It challenges long-held assumptions about user control, privacy, and accountability. If AI browsers become mainstream, the focus of online safety will need to expand from defending websites against users to defending users against their own automated agents.
In that sense, Atlas is less a final product than a live experiment in how people and machines might share control over digital tasks. Its success will depend not just on speed or convenience but on whether OpenAI can earn sustained trust from users, businesses, and regulators alike. For now, Atlas looks like being both a milestone in browser innovation and a reminder that every step towards automation must also bring new standards of responsibility, transparency, and security.
Security Stop-Press: AI Tools Fuel Record Rise in DDoS Botnets
Attackers are using artificial intelligence (AI) to build record-breaking DDoS botnets, according to new data from internet security firm Qrator Labs.
The company reports that one botnet it tracked contained 5.76 million infected devices, a 25-fold increase on last year’s largest network. Qrator’s CTO, Andrey Leskin, said AI now lets attackers “find and capture devices much faster and more efficiently,” driving unprecedented growth.
Brazil has overtaken Russia and the US as the biggest source of application-layer DDoS attacks, accounting for 19 per cent of malicious traffic, while Vietnam’s share has surged as unsecured devices multiply across developing regions. Fintech and e-commerce remain the top targets, with peak attacks reaching 1.15 Tbps.
Experts warn that AI tools are lowering the barriers to entry for cybercriminals, enabling large-scale automated attacks. Businesses are urged to use layered DDoS protection, keep connected devices updated, and monitor for unusual network activity to defend against this new AI-driven threat.
Sustainability-In-Tech : UK-Made Lithium Breakthrough
Cornish Lithium has produced the UK’s first samples of battery-grade lithium hydroxide, marking a major step towards a domestic, low-carbon supply chain for electric vehicles and clean energy storage.
A Local Company with Global Ambitions
Cornish Lithium is a Penryn-based mining and technology company founded in 2016 by former investment banker and mining engineer Jeremy Wrathall. The company’s goal is to produce lithium sustainably within the UK, thereby reducing reliance on imports and supporting the transition to electric vehicles and renewable energy.
The business operates across two key areas of lithium extraction, i.e., hard rock and geothermal brines. Its projects are centred in Cornwall, where it is exploring and developing lithium resources from granite and hot spring waters deep underground. Through a combination of traditional mining expertise and modern processing technology, Cornish Lithium aims to make Cornwall a cornerstone of Britain’s green industrial future.
The Factory
At the heart of the latest breakthrough is the company’s Trelavour Hard Rock Project near St Dennis, Cornwall. Built on a repurposed china clay pit, the Trelavour Demonstration Plant began operating in 2024 and represents the UK’s first low-emission lithium hydroxide production facility. The site seems to embody sustainable redevelopment in practice in that it’s transforming a brownfield location once central to the region’s clay industry into a clean-tech hub for critical minerals.
Hydrometallurgical Processing
The plant uses hydrometallurgical processing to refine lithium-bearing mica from Cornish granite into high-purity lithium hydroxide. It also acts as a testing ground for new refining technologies that could later be scaled up for full commercial production. According to the company, commercial operations are expected to begin in 2027 with a planned output of around 10,000 tonnes of lithium hydroxide per year.
Why This Discovery Matters
Cornish Lithium’s discovery lies not only in the presence of lithium-bearing granite but in the ability to extract and refine it locally using cleaner methods. For example, the company estimates that its operations can achieve at least a 40 per cent reduction in carbon emissions compared with typical international lithium production, where ores are mined in Australia, shipped to China for refining, and then exported to Europe.
As CEO Jeremy Wrathall explained when the first samples were announced, “This achievement demonstrates that Cornwall can once again play a vital role in supporting Britain’s industrial future — this time through the production of sustainable, battery-grade lithium.”
Cornwall’s geology has long been known to contain lithium, but until recently it was not considered economically viable to extract. However, it seems that advances in processing technology, along with rising global demand and the UK’s push for net zero, have changed that outlook. In essence, the region’s combination of mineral-rich granite and geothermal resources makes it uniquely positioned to supply both hard-rock and brine-based lithium sustainably.
What’s Being Produced And Who For?
The Trelavour Demonstration Plant produces lithium hydroxide monohydrate (LHM), which is a high-purity chemical essential for lithium-ion batteries used in electric vehicles and large-scale energy storage systems. Battery-grade LHM is particularly suited to high-nickel cathodes, which are used by leading EV manufacturers to deliver higher energy density and longer range.
Cornish Lithium’s immediate aim is to refine enough material to demonstrate commercial viability and secure supply agreements with UK gigafactories and automotive manufacturers. The longer-term goal, combining both hard rock and geothermal extraction, is to produce up to 25,000 tonnes of lithium carbonate equivalent annually by 2030.
Currently, the UK imports almost all of its battery-grade lithium, leaving the country’s growing EV and battery industries reliant on international supply chains dominated by China. Local production from Cornwall would allow UK manufacturers to shorten those supply lines, cut emissions, and improve energy security.
Investment and Strategic Importance
In September 2025, Cornish Lithium secured up to £35 million in new funding, including £31 million from the UK’s National Wealth Fund and additional investment from TechMet, a critical minerals investor partly backed by the US government. This funding is earmarked to expand operations at Trelavour and advance the company’s geothermal projects.
The investment also forms part of the UK government’s broader strategy to establish a secure domestic supply chain for EV batteries. The Automotive Transformation Fund and other initiatives aim to ensure that gigafactories planned in Sunderland, Coventry, and Somerset have access to local raw materials, which is likely to be a key factor in their long-term sustainability and cost competitiveness.
Carbon Savings and Sustainability
Even though the idea of mining doesn’t seem that conducive to conserving the environment and sustainability, the sustainability benefits of local lithium production actually extend well beyond emissions. For example, processing and refining lithium within Cornwall eliminates the need for transcontinental shipping and significantly lowers the embodied carbon in each tonne of lithium hydroxide produced.
Local production also improves traceability, which is a growing requirement for European battery makers under emerging “battery passport” rules that demand transparency on the source and environmental impact of materials.
Also, by situating the plant on a disused industrial site, Cornish Lithium has actually revived part of Cornwall’s long-mining heritage in a modern, environmentally responsible way. The company estimates its projects could create more than 300 skilled jobs, contributing to regional regeneration and helping to retain talent in the South West.
The project’s reliance on UK and European technology partnerships also supports intellectual property development and knowledge transfer. By bringing advanced refining processes, such as those licensed from Australia’s Lepidico, onto British soil, the company is helping to develop local expertise in hydrometallurgy and battery chemistry.
Competitors and the Industry
Cornish Lithium’s milestone actually places it at the forefront of a growing UK lithium industry. However, it is not alone. For example, Imerys British Lithium, also based near St Austell, is developing a separate hard-rock project and has already produced pilot-scale lithium carbonate from mica-rich granite. The company plans to scale up to around 20,000 tonnes per year, potentially making it another major domestic supplier by the late 2020s.
Further north, Green Lithium is constructing a large lithium refinery at Teesside that will process imported spodumene concentrate into lithium hydroxide, complementing the raw material supply coming from Cornwall. Meanwhile, Northern Lithium is exploring brine-based extraction in the North East using direct lithium extraction (DLE) technology.
Together, these projects signal the emergence of a full UK lithium supply chain, encompassing extraction, processing, and eventual recycling, which is a development that could make the UK less dependent on imported critical minerals.
Challenges and Criticisms
Despite its progress, Cornish Lithium faces some significant hurdles. For example, Cornwall’s lithium grades are lower than those of high-grade spodumene ores mined in Australia, which could affect production costs and competitiveness. Energy-intensive refining processes also present challenges in a country with some of Europe’s highest industrial electricity prices.
The company must also navigate permitting and community engagement. For example, although its operations are based on brownfield sites, local stakeholders have raised questions about water use, noise, and the environmental management of tailings and waste.
Another challenge lies in the volatility of global lithium prices. As the Financial Times has reported, financing large-scale lithium projects can be difficult without government guarantees or long-term offtake agreements, particularly when prices fall from recent highs.
There are also broader market questions. The UK’s gigafactory sector remains nascent, and if domestic battery production fails to grow as quickly as expected, local lithium producers could struggle to find nearby buyers.
That said, for now, the company’s combination of local sourcing, low-emission processing, and government-backed funding positions it as one of the most advanced and strategically significant lithium ventures in Europe.
What Does This Mean For Your Business?
Cornish Lithium’s progress could be a real turning point in how the UK approaches its clean energy supply chain. By combining extraction, processing, and refining within one region, the company has shown that it is possible to produce critical battery materials closer to where they are used, with substantially lower emissions than imported alternatives. The immediate impact is industrial rather than symbolic, since it demonstrates that local lithium production is not just feasible but commercially and environmentally credible.
For UK businesses, particularly those in automotive manufacturing and energy storage, this development could prove decisive. For example, a domestic source of battery-grade lithium would reduce dependence on long global supply chains, stabilise costs, and make it easier to meet carbon reporting and traceability standards that are becoming central to procurement. It could also help strengthen the competitiveness of UK gigafactories, ensuring that jobs and intellectual property linked to electrification remain within the country. For other stakeholders, including local communities and policymakers, the benefits extend to regional regeneration, skilled employment, and the revival of industrial activity in an area that once relied on mining.
At the same time, it is clear that success will depend on more than geology. Cornish Lithium and its peers must scale up efficiently, manage environmental impacts transparently, and align with downstream demand from battery producers. The challenge for government and industry alike will be to create a framework that rewards sustainable extraction and encourages private investment without distorting the market.
If those conditions are met, Cornwall’s emerging lithium industry could form the foundation of a genuinely circular, low-carbon supply chain for the UK’s transition to clean transport and renewable power. In that sense, the real significance of the Trelavour plant lies not only in the metal it produces but in the model it represents, i.e., a local, collaborative, and technologically advanced approach to sustainable resource development.