AI-Generated Code Blamed for 1-in-5 Breaches

A new report has revealed that AI-written code is already responsible for a significant share of security incidents, with one in five organisations suffering a major breach linked directly to code produced by generative AI tools.

Vulnerabilities Found in AI Code

The finding comes from cybersecurity company Aikido Security’s State of AI in Security & Development 2026, which features the results of a wide-ranging survey of 450 developers, AppSec engineers and CISOs across Europe and the US.

According to the study, nearly a quarter of all production code (24 per cent) is now written by AI tools, rising to 29 per cent in the US and 21 per cent in Europe. However, it seems that adoption has come at a cost. For example, the report shows that almost seven in ten respondents said they had found vulnerabilities introduced by AI-generated code, while one in five reported serious incidents that caused material business impact. As Aikido’s researchers put it, “AI-generated code is already causing real-world damage.”

Worse In The US

According to the report, the US appears to be hit hardest. For example, 43 per cent of US organisations reported serious incidents linked to AI-generated code, compared with just 20 per cent in Europe. The report says this is due to stronger regulatory oversight and stricter testing practices in Europe, where companies tend to catch problems earlier. European respondents recorded more “near misses”, indicating that vulnerabilities were identified before they could cause harm.

AI Changing The Development Landscape

AI coding assistants such as GitHub Copilot, ChatGPT and other generative tools are now integral to the software pipeline, promising faster output and fewer repetitive tasks, but they also introduce a new layer of risk.

Aikido’s data highlights that productivity gains can be offset by increased complexity and slower remediation. For example, teams now spend an average of 6.1 hours per week triaging alerts from security tools, with most of that time wasted on false positives. In larger environments, the triage burden grows to nearly eight hours a week where teams rely on multiple tools.

Leads To Dangerous Shortcuts

It seems that this problem can lead to dangerous shortcuts. For example, two-thirds of respondents admitted bypassing or delaying security checks due to a kind of alert fatigue. Developers under pressure to deliver have started to “push through” security warnings, creating a cycle where quick fixes outweigh caution.

Natalia Konstantinova, Global Architecture Lead in AI at BP, highlights the issue, saying: “AI-generated code shouldn’t be fully trusted, since it can cause serious damage. This is a reminder to carefully double-check its outputs.”

Accountability Is Becoming A Flashpoint

It seems that as AI-generated code makes its way into production, one of the biggest challenges is determining who is responsible when things go wrong.

Aikido’s survey shows a clear divide. For example, 53 per cent of respondents said security teams would be blamed if AI code caused a breach, 45 per cent blamed the developer who wrote the code, and 42 per cent blamed whoever merged it into production. The result, according to UK insurance and pensions company Rothesay’s CISO Andy Boura, is “a lack of clarity among respondents over where accountability should sit for good risk management.”

In fact, half of developers said they expect to shoulder the blame personally if AI-generated code they produced led to an incident, suggesting a growing culture of uncertainty and mistrust between teams.

The blurred lines are also fuelling tension between developers and security leaders. Many security professionals worry that AI-assisted development is moving too fast for proper oversight, while developers argue that outdated review processes are slowing down innovation.

“Tool Sprawl” Is Making Things Worse

Perhaps surprisingly, Aikido’s research found that organisations with more security tools were actually experiencing more security incidents. For example, companies using six to nine different tools reported incidents 90 per cent of the time, compared with 64 per cent for those using just one or two.

It seems this “tool sprawl” is also linked to slower fixes. Teams with multiple vendor tools took almost eight days on average to remediate a critical vulnerability, compared with just over three days in smaller, more consolidated setups.

The problem, according to Aikido, is not the tools themselves but the overhead they create, i.e., duplicate alerts, inconsistent data and fractured workflows that slow response times.

Walid Mahmoud, DevSecOps Lead at the UK Cabinet Office, notes about this issue: “Giving developers the right security tool that works with existing tools and workflows allows teams to implement security best practices and improve their posture.”

Teams using integrated, all-in-one platforms built for both developers and security professionals were twice as likely to report zero incidents compared with those using tools aimed at one group only.

Regional Differences In Oversight

The study draws a clear contrast between European and American approaches. For example, European teams tend to rely more on human oversight, manual reviews and compliance-based testing frameworks, while US teams are quicker to automate processes and deploy AI-generated code at scale.

Aikido’s figures show that 58 per cent of US teams track AI-generated code line by line, compared with just 35 per cent in Europe. That difference, coupled with the higher level of automation in US pipelines, may explain why more AI-related vulnerabilities are being detected (and exploited) there.

As Aikido puts it, “Europe prevents, the US reacts.” The slower, more regulated approach across Europe appears to be reducing the number of major breaches, even if it creates extra workload for developers.

Independent Findings Support The Trend

It should be noted here that security concerns raised by Aikido are actually consistent with other recent studies. For example, Veracode’s 2025 GenAI Code Security Report found that 45 per cent of AI-generated code samples failed basic security tests. Java was the worst affected, with a 72 per cent failure rate, followed by JavaScript (43 per cent), C# (45 per cent) and Python (38 per cent).

The Veracode team concluded that while AI tools can generate functional code quickly, they often fail to account for secure design or contextual logic. Their analysis showed little improvement in security quality between model generations, even as syntax accuracy improved.

Policy researchers are also warning of deeper structural issues. For example, the Center for Security and Emerging Technology (CSET) at Georgetown University has outlined three categories of risk from AI-generated code, i.e., insecure outputs, vulnerabilities in the AI models themselves, and wider supply chain exposure.

Also, research from OX Security has pointed to what it calls the “army of juniors” effect, which is where AI tools can produce vast amounts of syntactically correct code, but often lack the architectural understanding of experienced developers, multiplying low-level errors at scale.

Industry Perspectives On A Path Forward

Despite these warnings, it seems that optimism remains widespread. For example, 96 per cent of Aikido’s respondents believe AI will eventually be able to produce secure, reliable code, with nearly half expecting that within three to five years.

However, only one in five think AI will achieve that without human oversight. The consensus is that people will remain essential to guide secure design, architecture and business logic.

AI Can Check AI

There also appears to be growing belief that AI should be used to check AI. For example, nine out of ten organisations expect AI-driven penetration testing to become mainstream within around five and a half years, using autonomous “agentic” systems to identify vulnerabilities faster than human testers could.

“The 79 per cent are the smart ones,” said Lisa Ventura, founder of the UK’s AI and Cyber Security Association. “AI isn’t about replacing human judgment, it’s about amplifying it.”

This sentiment echoes a wider industry move towards what security leaders call “augmented development”, i.e., human-centred workflows supported by automation, not replaced by it.

Why This Matters

For UK organisations, the implications are immediate. For example, the report shows that AI-generated code is not a future risk but a current operational issue already affecting production environments.

As Kevin Curran, Professor of Cybersecurity at Ulster University, says: “This demonstrates the slim thread which at times holds systems together, and highlights the need to properly allocate resources to cybersecurity.”

Aikido’s findings also underline the importance of developer education and clear accountability. Matias Madou, CTO at Secure Code Warrior, wrote that “in the AI era, security starts with developers. They are the first line of defence for the code they write, and for the AI that writes alongside them.”

For businesses already navigating compliance regimes such as the UK NCSC’s Cyber Essentials or ISO 27001, this means treating AI-generated code as a separate risk class requiring its own testing and review procedures.

Criticisms And Challenges

While Aikido’s report is one of the most comprehensive of its kind, it is not without its critics. For example, some security analysts argue that “one in five breaches” may overstate the influence of AI-generated code because correlation does not prove causation. Many breaches involve complex attack chains where AI code may only play a small role.

Others have questioned the representativeness of the sample. For example, the survey focused primarily on organisations already experimenting with AI in production, which may naturally skew toward higher exposure. Small or less digitally mature companies, where AI coding tools are still limited to pilot use, may experience fewer issues.

There are also some methodological challenges. For example, measuring what qualifies as “AI-generated” can be difficult, particularly when developers use AI assistants to autocomplete small code segments rather than entire functions. Attribution of vulnerabilities can therefore be subjective.

That said, even many of the sceptics agree that the report captures a growing and genuine concern. Independent findings from Veracode, OX Security and CSET all point in the same direction, i.e., that AI-generated code introduces new risks that traditional security pipelines were never designed to manage.

The challenge for developers and CISOs alike is, therefore, to close that gap before AI coding becomes the default, not the exception. As the technology matures, the balance between innovation speed and security assurance will define how safely businesses can harness AI’s potential without repeating the mistakes of early adoption.

What Does This Mean For Your Business?

The findings appear to point to an industry racing ahead faster than its safety systems can adapt. AI coding tools have clearly shifted from experimental to mainstream, yet governance and testing practices are still catching up. The evidence suggests that while automation can improve productivity, it cannot yet replicate the depth of human reasoning needed to identify design-level flaws or assess real-world attack paths. That gap between capability and control is where today’s vulnerabilities are being born.

For UK businesses, this raises practical questions about oversight and responsibility. Many already face pressure to adopt AI for competitive reasons, yet the report shows that without strong testing regimes and clear accountability, the risks can outweigh the benefits. In particular, financial services, healthcare and public sector organisations, which handle sensitive data and operate under strict compliance frameworks, will need to ensure that AI-generated code goes through the same, if not stricter, scrutiny as any other form of software.

Developers, too, are being asked to operate within new boundaries. The growing reliance on generative tools means the traditional model of code review and approval is no longer sufficient. UK companies may now need to invest in dedicated AI audit trails, tighter version tracking and security validation that can distinguish between human and machine-written code. The evidence from Aikido’s report also suggests that integrated platforms, where developer and security functions work together, can yield better results than fragmented tool stacks, making collaboration a critical priority.

For other stakeholders, including regulators and insurers, the implications are equally clear. For example, regulators will need to consider whether existing standards, such as Cyber Essentials, adequately address AI-generated components. Insurers may need to begin to factor the presence of AI-written code into risk assessments and premiums, especially if breach attribution becomes more traceable.

There is also a wider social and ethical dimension to consider here. For example, if AI-generated code becomes a leading cause of breaches, the question of accountability will soon reach the boardroom and, potentially, the courts. The current ambiguity over who is at fault, i.e., the developer, the CISO or the AI vendor, will not remain sustainable for long. Policymakers may be forced to define clearer lines of liability, particularly where generative AI is being deployed at scale in safety-critical systems.

The overall picture that emerges here is not one of panic but of adjustment. The technology is here to stay, and most industry leaders still believe it will eventually write secure, reliable code. The challenge lies in getting from here to there without compromising trust or resilience in the process. For now, it seems the safest path forward is not to reject AI in development, but to treat it with the same caution as any powerful, untested colleague: valuable, but never unsupervised.

Amazon AWS … What Happened?

Amazon Web Services has issued a full apology and technical explanation after a 15-hour outage in its North Virginia data region took thousands of major platforms offline, exposing the internet’s heavy dependence on a handful of US cloud providers.

What Happened?

The incident began late on Sunday 19 October, when engineers at Amazon’s US-East-1 data region in North Virginia detected connection failures across multiple services. Starting at 11:48 pm Pacific time, critical systems linked to Amazon DynamoDB, a database service used by many of the world’s largest apps, began to fail.

The root cause, according to Amazon’s post-event summary, was a “latent race condition”, which is a rare timing bug in the automated Domain Name System (DNS) management system that maintains the internal “address book” of AWS services. In this case, automation mistakenly deleted the DNS record for DynamoDB’s regional endpoint, effectively removing its ability to resolve names to IP addresses.

This caused immediate DNS failures for any service trying to connect to DynamoDB, including other AWS components such as EC2 virtual machines, Redshift databases, and Lambda compute functions. The DNS record was finally restored manually at around 2:25 am, but many dependent systems took much longer to recover, with some continuing to fail well into the afternoon of 20 October.

Where The Internet Felt It

The outage’s ripple effects in this case were global. For example, more than 1,000 platforms and services experienced disruption, including Snapchat, Reddit, Roblox, Fortnite, Lloyds Bank, Halifax, and Venmo. UK financial services were hit particularly hard, with some Lloyds customers reporting payment delays and app errors until mid-afternoon.

Other sectors also suffered unexpected consequences. For example, Eight Sleep, which manufactures smart mattresses that use cloud connections to control temperature and positioning, confirmed that some of its products overheated or became stuck in a raised position during the incident. The company said it would work to “outage-proof” its devices following the incident.

For millions of consumers, the event briefly made parts of the digital world disappear. Websites, apps, and connected devices remained online but could not “see” each other due to the DNS fault, illustrating just how central Amazon’s infrastructure has become to everyday online activity.

How A Single Glitch Spread So Widely

At its core, the failure was a simple but catastrophic DNS issue. DNS is the internet’s naming system, translating web addresses into machine-readable IP numbers. When AWS’s automation produced an empty DNS record for DynamoDB’s endpoint, every application depending on it lost its bearings.

AWS engineers later confirmed that two redundant DNS update systems, known internally as “Enactors”, attempted to apply configuration plans simultaneously. One was significantly delayed, allowing an older plan to overwrite a newer one before being deleted, taking all IP addresses with it. The automation could not self-repair, leaving manual intervention as the only option.

As a result, internal systems that depended on DynamoDB also stalled. Amazon EC2, the platform used to launch virtual servers, could not start new instances. Network Load Balancer (NLB), which distributes traffic between servers, suffered cascading health-check failures as it tried to route connections to resources that were technically online but unreachable.

Why Recovery Took Most Of The Day

While the DNS issue was resolved within hours, the actual automated systems that depend on it did not immediately catch up. For example, EC2’s control software reportedly entered a “congestive collapse” as it attempted to re-establish millions of internal leases with physical servers. Restarting this process safely took several hours.

At the same time, delayed network configurations created a backlog in AWS’s Network Manager, causing newly launched instances to remain disconnected. To make things worse, load balancers then misinterpreted these delays as failures and pulled healthy capacity from service, worsening connection errors for some customers.

By early afternoon on 20 October, Amazon said all EC2 and NLB operations were back to normal, though the ripple effects continued to be felt across smaller services for some time.

Amazon’s Explanation And Apology

Following the outage (and the backlash), Amazon published a detailed (long) 7,000-word technical report outlining the chain of events. The company admitted that automation had failed to detect and correct the DNS deletion and said manual recovery was required to restore service.

“We apologise for the impact this event caused our customers,” Amazon wrote. “We know how critical our services are to our customers, their applications and end users, and their businesses. We know this event impacted many customers in significant ways.”

The company confirmed it has disabled the affected DNS automation worldwide until a permanent fix is in place. AWS engineers are now adding new safeguards to prevent outdated plans from being applied, and additional limits to ensure health checks cannot remove too much capacity during regional failovers.

Reactions And Tech Commentary

Industry experts have generally described the incident as a textbook case of automation failure, pointing to how a rare timing error in AWS’s DNS management system exposed wider systemic dependencies. Many engineers have noted that the issue reinforces the importance of resilience and of designing systems to tolerate faults in automated processes.

The outage is a clear reminder of a long-standing saying in IT circles, i.e., “It’s always DNS.” Although such faults are not unusual, the sheer scale of AWS’s infrastructure meant that a single configuration error was able to cause global disruption.

An Argument For Diversifying Cloud Setups?

Experts have also warned that the outage shows why businesses should diversify their cloud setups. For example, those running all workloads within a single AWS region found themselves completely offline. Organisations using multiple regions, or backup capacity in other cloud providers, were, however, able to switch over and maintain operations.

The Broader Implications

AWS remains the market leader in global cloud infrastructure, accounting for roughly 30 per cent of worldwide spending (according to Synergy Research). Its nearest competitors, Microsoft Azure and Google Cloud, hold around 25 per cent and 11 per cent respectively. However, this latest disruption has reignited debate about overreliance on a single provider.

Large-scale customers are now likely to review their resilience strategies in the wake of the incident. Financial institutions, healthcare providers, and government departments using AWS may now face renewed scrutiny over whether they have realistic fallback options if US-East-1 (Amazon’s largest and oldest data region) goes down again.

For Amazon, the incident is a reminder that its strength as the backbone of the internet can also be its greatest vulnerability, and how every outage draws widespread attention because of its systemic impact. The company’s rapid publication of a detailed (and very long) postmortem is in line with its usual transparency practices, but is also now unlikely to prevent competitors from using the episode to argue for multi-cloud adoption.

How Users Were Affected

For individuals and smaller businesses, the experience of the outage was that websites and apps stopped working. Some services displayed error messages while others simply timed out. With AWS hosting backend systems for thousands of platforms, many users had no idea that Amazon was the root cause.

Gaming companies like Roblox and Epic Games were among the first to confirm the disruption, reporting that login and matchmaking services were unavailable for several hours. Social media feeds froze for many users, while banking and payments apps experienced intermittent outages throughout the morning.

Even Amazon’s own services, such as Alexa and Ring, saw degraded performance during the height of the incident, highlighting the circular dependencies within its own ecosystem.

What Critics Are Saying

Criticism has centred on the scale of AWS’s dominance and the concentration of critical systems in one region. The US-East-1 region handles enormous traffic, both for North America and internationally, because it hosts many AWS “control plane” functions that manage authentication and routing across the network.

Analysts have warned for years that this architecture creates a “single point of systemic risk”, which is a problem that cannot be easily fixed without major structural changes. Calls for greater geographic and provider diversity in cloud services are now growing louder, particularly from European regulators seeking more independence from US infrastructure. Analysts have essentially said the incident showed how organisations that rely on a single AWS region are (perhaps obviously) more vulnerable to disruption. Experts in cloud resilience have noted that customers without secondary regions or providers to keep services running during an outage reinforce long-standing advice to build in redundancy and avoid single points of failure.

What Now?

AWS says it is reviewing all automation across its regions to identify similar vulnerabilities. It says the DNS Enactor and Planner systems will remain disabled until the race condition bug is eliminated and additional safeguards verified. It also says engineers are enhancing testing for EC2 recovery workflows to ensure large fleets can re-establish leases more predictably after regional incidents.

For business users, the event is likely to prompt at least a discussion about the wider adoption of multi-region resilience testing and disaster recovery planning. The broader question is whether the global internet can continue to rely so heavily on a few cloud giants without developing greater local redundancy.

Amazon’s response has been technically thorough and contrite, but the 20 October outage has again exposed the fragility of the infrastructure that underpins much of modern digital life.

What Does This Mean For Your Business?

For Amazon, the scale of this disruption highlights both its dominance and its exposure. When so much of the world’s digital infrastructure runs on AWS, even a small internal fault can have far-reaching consequences. That puts continual pressure on the company to prove not only that it can recover quickly but also that it can prevent similar incidents altogether. Investors, partners, and enterprise customers will expect to see evidence of lasting improvements rather than temporary workarounds.

For UK businesses, this incident offers a practical reminder about risk, resilience, and dependency. Many British firms now rely on US cloud platforms for critical operations, from financial transactions to logistics and customer service. The lesson is, therefore, that resilience cannot be outsourced entirely. Businesses must understand where their data and services actually live, review which regions and providers they depend on, and ensure that key functions can continue if one part of the cloud goes dark.

Regulators and policymakers are also likely to have taken note of what happened and its effects. The outage is likely to reinforce long-running discussions in the UK and Europe about digital sovereignty and the risks of relying on infrastructure controlled by a handful of American companies. While creating a truly independent alternative would be expensive and complex, the case for diversified, regionally distributed systems is now stronger than ever.

Competitors, meanwhile, now have an opportunity to frame this as a kind of turning point. Microsoft, Google, and European providers such as OVH and Stackit will likely use the event to promote multi-cloud architectures and region-level redundancy. However, each faces the same challenge at scale, i.e., automation that makes systems efficient can also make them fragile when unexpected conditions arise.

Ultimately, the outage serves as a stark illustration of how deeply interconnected the modern internet has become. Every business that builds on these platforms shares some part of the same risk. The real question for Amazon and its customers alike is not whether such failures can be avoided completely, but how quickly and transparently they can recover when the inevitable happens.

Clippy Returns To Life As ‘Mico’

Microsoft has introduced “Mico”, a new animated avatar for its Copilot assistant that can be transformed into the classic Clippy paper clip, a light-hearted feature that sits within a much wider update focused on making AI more personal, expressive, and easier to use across Microsoft’s ecosystem.

What Microsoft Is Launching And When?

Mico is the new on-screen face of Copilot, designed to appear when users activate voice mode. The character is an animated, blob-like avatar that changes colour and expression during conversations, reacts to tone, and can be customised through a palette of colours and voice options. Microsoft says users can choose from eight voices with names such as Birch, Meadow, Rain, and Canyon, with a mix of British and American accents available. Mico can also be switched off entirely, meaning that voice interactions can still take place without any visual assistant.

Only In The US, For Now

For now, Mico is available only in the United States, with Microsoft confirming that a wider rollout to the UK and Canada will follow in the coming weeks. The company’s “Copilot Fall Release” package brings a range of new features, including collaboration tools, expanded integration with third-party apps, and new learning and health functions.

Easter Egg Turns To ‘Clippy’

The most nostalgic element of this change is an Easter egg, and repeatedly clicking or tapping on Mico temporarily changes its appearance to that of Clippy, the animated paper clip that appeared in Microsoft Office 97 to offer context-based help. This “Clippy skin” is not a separate mode but rather a visual overlay, and a small nod to the assistant that many users loved to hate.

Why Microsoft Is Doing This?

The relaunch forms part of Microsoft’s wider effort to humanise its AI tools. Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, has framed the strategy as “human-centred AI”. In a post announcing the update, he wrote: “Technology should work in service of people. Not the other way around.” The goal, he said, is to make Copilot “helpful, supportive and deeply personal”, empowering users rather than replacing human judgement.

Companion

This reflects Microsoft’s broader positioning of Copilot as an “AI companion”, i.e., an assistant that learns from context, remembers preferences, and provides useful prompts while respecting user control and privacy. Suleyman has also highlighted how Microsoft is “not chasing engagement or optimising for screen time” but instead building AI that “gives you back time for the things that matter”.

Mico’s Personality Designed To Be Useful Not Sycophantic

Jacob Andreou, corporate vice president of product and growth at Microsoft AI, recently explained the design rationale in an interview with the Associated Press, saying: “When you talk about something sad, you can see Mico’s face change. You can see it dance around and move as it gets excited with you.” He added that Mico’s personality was designed to be “genuinely useful” rather than flattering or manipulative. “Being sycophantic — short-term, maybe — has a user respond more favourably,” Andreou said. “But long term, it’s actually not moving that person closer to their goals.”

How It Works

In terms of how Mico/Clippy works, when users activate voice mode by clicking the microphone icon, Mico appears on-screen, listening and responding with animated movements and facial expressions. It can explain topics, summarise documents, or walk users through tasks. Testers have reportedly noted that while it responds naturally, text captions of its spoken replies are not always displayed, meaning conversations are primarily auditory.

Several New Copilot Functions

Beyond the avatar, Microsoft’s Fall (Autumn) Release actually introduces several new Copilot functions. For example, “Groups” allows up to 32 people to participate in a shared Copilot chat, enabling teams to co-plan projects, co-write content or share research. Also, “Connectors” integrate Copilot with apps including Outlook, OneDrive, Gmail, Google Drive, and Google Calendar, allowing users to ask questions across multiple data sources through natural language queries.

Another major change is memory and personalisation. For example, Copilot can now remember user preferences, projects, and recurring tasks, recalling them in future sessions and users retain the ability to edit or delete stored memories. The update also includes “Real Talk”, a new conversation mode that Microsoft says challenges assumptions “with care”, helping users to refine ideas rather than simply validate them.

Health And Learning

It seems that health and learning have become core use cases. For example, according to Microsoft, around 40 per cent of Copilot’s weekly users ask health-related questions. Therefore, to support this, the company has introduced Copilot for Health, built with guidance from medical partners such as Harvard Health, which grounds responses in credible medical information. In education, the new “Learn Live” feature turns Copilot into a voice-enabled tutor that uses dialogue and visual cues to help explain concepts ranging from photosynthesis to computer networking.

Who It’s For?

Mico is essentially designed to appeal to everyday users, families, and students who prefer natural, conversational assistance. Microsoft says it can help users plan trips, research topics, draft content or even provide guidance on everyday decisions. For schools and universities, it could represent an evolution of AI-assisted learning, and one that Microsoft hopes will feel more interactive and approachable than text-only tools.

Was Clippy Just A Bit Before Its Time?

When Clippy appeared in the late 1990s, its design didn’t really match how most people wanted to interact with their computers, yet experts now say users have become much more comfortable with expressive, character-based AI. With advances in technology and a clearer sense of what digital assistants are for, a feature that once felt intrusive is now more likely to be seen as friendly and intuitive.

For Practical Productivity Gains

For professional and business users, the focus of these new features (and the resurrected Clippy) appears to be on practical productivity gains. For example, group chats, shared memory, and cross-app integration all lend themselves to collaborative work and faster information retrieval. In theory, employees could ask Copilot to find key messages across multiple accounts, summarise project discussions, and track progress without leaving a conversation window.

What It Means For Microsoft

The Mico update essentially consolidates Microsoft’s vision of Copilot as a cross-platform assistant embedded within Windows, Edge, and Microsoft 365. By giving Copilot a consistent voice interface and optional visual identity, Microsoft is hoping to strengthen its position against rivals such as Google’s Gemini and OpenAI’s ChatGPT, which are also integrating multimodal and conversational AI into their ecosystems.

It could also be said to represent a strategic pivot towards companionship rather than novelty. For example, Suleyman’s emphasis on empathy, control, and trust is designed to counter growing public scepticism toward AI. Microsoft’s avoidance of human-like avatars or flirtatious personalities contrasts sharply with some competitors that have leaned into emotionally charged or entertainment-focused AI designs.

For Microsoft, therefore, Mico’s visual charm will likely serve as an entry point rather than the product itself. The underlying business logic lies in deeper engagement with Copilot’s ecosystem, i.e., drawing users into paid Microsoft 365 subscriptions, expanding cross-app search capabilities, and encouraging adoption of Edge and Windows 11’s built-in AI features.

Competitors

Rival technology companies are exploring similar territory. For example, OpenAI plans to restore a more conversational personality to ChatGPT, while Google continues to integrate Gemini features across its workspace products. However, Microsoft’s approach is distinctive in how it blends nostalgia and restraint, acknowledging Clippy’s cultural legacy while designing Mico to be optional, unobtrusive, and user-controlled.

By connecting to competing ecosystems such as Google Drive and Gmail through connectors, Microsoft is also signalling its intention to become the interface for managing all personal and professional data, not just that which lives within its own cloud. That interoperability could make Copilot more attractive to mixed-platform users, particularly small businesses that rely on multiple services.

Business Users

For UK businesses, Mico and Copilot’s expanded features highlight Microsoft’s ambition to make AI more visible in everyday workflows. Teams can now co-create and share tasks in Copilot Groups, while memory and connector functions reduce the need to re-enter data or switch between platforms. In practice, that could mean faster document searches, streamlined planning sessions, and AI-assisted decision-making that remains traceable and editable.

Microsoft’s insistence that Copilot should “listen, learn and earn trust” rather than replace judgement may also resonate with more compliance-conscious sectors. Features such as editable memory and explicit consent for data access help address growing governance and privacy expectations.

Challenges And Criticisms

One initial criticism is that the rollout has proved inconsistent so far, with some users reporting that Mico is visible in the web version of Copilot but not yet in the Windows 11 desktop app. Microsoft has said that availability will expand gradually, with new features appearing in phases across different regions and devices.

There are also concerns about autonomy. For example, in tests of Copilot’s booking features, reviewers reported the assistant pre-selecting hotel dates and options without confirmation, highlighting the challenge of balancing initiative with transparency.

More broadly, the industry remains a little cautious about the psychological impact of highly interactive AI. Regulators such as the US Federal Trade Commission have begun examining how AI chatbots affect children and teenagers, following reports of harmful advice and emotional over-familiarity from some AI companions. Microsoft is seeking to avoid these pitfalls by keeping Mico’s tone professional, controllable and easy to disable.

Privacy, as always with AI, is another area of concern. For example, while Microsoft says Copilot requires explicit consent before accessing connected apps and allows users to edit or delete memory data, businesses will still need clear internal policies governing what data Copilot can read and store.

The Clippy Question

Mico’s hidden Clippy transformation is basically a light-hearted reminder of how far Microsoft’s digital assistants have come. The company insists that the nostalgia is deliberate but controlled and a playful link to a familiar past, framed within a more sophisticated, opt-in design philosophy.

What Does This Mean For Your Business?

Although the Clippy revival is clearly a playful addition, it actually highlights a serious strategic moment for Microsoft. The company is reframing Copilot as more than just a functional chatbot and instead positioning it as an assistant that can adapt to human tone, behaviour, and context without overstepping boundaries. That balance between warmth and professionalism could prove important as users grow weary of overly mechanical tools yet remain cautious about overly familiar ones.

For UK businesses, the developments point towards an assistant that could fit naturally within daily workflows rather than existing as a separate app or experiment. The ability to connect Copilot to existing systems, recall previous projects, and collaborate across teams could make AI adoption more practical and measurable. It may also help smaller firms, many of which rely on mixed Microsoft and Google environments, to simplify their digital operations without major disruption.

The return of a character like Clippy, now built into an AI that listens, remembers, and coordinates across multiple platforms, underlines how much the workplace has evolved since the late 1990s. For many users, the novelty of talking to a computer has long worn off but what matters now is whether these systems save time, reduce friction, and remain trustworthy. Microsoft’s focus on consent, editability, and transparency is likely to appeal to both business and consumer stakeholders, particularly as regulators tighten expectations around data handling and AI behaviour.

The biggest test, however, will be whether Copilot’s new capabilities can actually translate into everyday usefulness rather than being just novelty (or an annoyance to some, as Clippy was before). As competition intensifies and users gain access to more sophisticated assistants from OpenAI and Google, Microsoft’s long-term advantage may rest on its ability to integrate these tools seamlessly into the familiar rhythm of Windows and Office. The Clippy transformation may be the headline-grabber in this case, but the real story is whether Mico and its wider Copilot ecosystem can finally deliver what its predecessor could not, i.e., an assistant that genuinely helps without getting in the way.

UK Ruling Could Mean Apple Compo For Millions

A UK competition court has ruled that Apple abused its market power with App Store fees, paving the way for compensation that lawyers say could total up to ÂŁ1.5 billion for around 36 million iPhone and iPad users.

What The Tribunal Decided

The Competition Appeal Tribunal (CAT) found that Apple held “near absolute market power” in two linked markets, i.e., app distribution on iOS devices and in-app payment processing, and had used that position to charge “excessive and unfair” commissions, typically up to 30 per cent, on paid apps and in-app purchases.

The judgment, brought by class representative Dr Rachael Kent, actually marks the first collective competition claim to succeed at trial under the UK’s relatively new regime for group actions. Following a seven-week hearing earlier this year, the tribunal concluded that Apple’s restrictions prevented rival app stores and alternative payment options on iPhones and iPads, leaving developers and consumers with no meaningful choice but to use Apple’s system.

Expert evidence submitted to the court showed that a significant share of Apple’s overcharges to developers were passed on to users through higher prices for apps, subscriptions and digital content. The tribunal agreed, finding that Apple’s business model inflated costs for millions of consumers and small businesses across the UK.

Who Is Covered And From When?

The class action covers anyone in the UK who made purchases through the UK version of the App Store on an iPhone or iPad from 1 October 2015 onwards. That includes paid-for apps, in-app purchases and subscriptions bought within apps.

In fact, law firm Hausfeld, representing Dr Kent, estimates that around 36 million people could fall within this category. Both individual consumers and businesses are included. For example, a company that paid for productivity apps on staff iPhones or made in-app purchases for services through Apple’s system could be entitled to a share of the damages, alongside ordinary consumers.

According to the legal team, users who spent regularly could be due significant sums. For example, a fitness app subscription costing £8.99 a month could yield roughly £21.58 back per year, based on the tribunal’s findings. In another example, a £19.99 in-app purchase could equate to around £4 in compensation. The exact payout will depend on how much each person or business spent and the final calculation approved by the court.

How Much Money Are We Talking?

The tribunal has indicated that aggregate damages could reach up to an eye-watering ÂŁ1.5 billion, subject to a follow-up hearing on how the total will be calculated and distributed. The court also ordered that interest be added at a rate of 8 per cent per year, which could increase the total compensation for purchases made several years ago.

The collective action covers almost a decade of App Store activity, meaning that regular app users, mobile gamers, and subscribers to digital services could all be affected. With around 36 million potential claimants, even modest individual payments could add up to one of the largest consumer compensation cases ever seen in the UK.

Why The Case Was Brought

Dr Rachael Kent, a Senior Lecturer in Digital Economy and Society Education at King’s College London, launched the case in 2021 claiming that Apple’s conduct had led to “exorbitant profits” by excluding competition and forcing developers to use its own payment system on its own terms.

After the ruling, Dr Kent described the outcome as a “landmark victory, not only for App Store users, but for anyone who has ever felt powerless against a global tech giant”. She added that the judgment “confirms that Apple has been unlawfully overcharging users for more than ten years and that up to £1.5 billion should now be returned to UK consumers and businesses”.

The tribunal agreed with her argument that Apple’s 30 per cent commission was excessive and unfair. It found that a fair rate, based on comparisons with other digital platforms, would have been closer to 17.5 per cent for app distribution and 10 per cent for payment processing.

Apple’s Response And Grounds For Appeal

It’s no surprise that Apple has said it “strongly disagrees” with the ruling and will appeal. In a statement issued after the judgment, the company said the tribunal’s view of the app economy was “flawed” and failed to recognise how the App Store had “benefited businesses and consumers across the UK”.

“The App Store helps developers succeed and gives consumers a safe, trusted place to discover apps and securely make payments,” Apple said. “This ruling overlooks how the App Store helps developers succeed and gives consumers a safe, trusted place to discover apps and securely make payments. The App Store faces vigorous competition from many other platforms — often with far fewer privacy and security protections.”

Apple also argues that because commission is only charged on paid apps and in-app purchases, around 85 per cent of the apps available on the App Store pay no commission at all. It points to its Small Business Programme, which halves the rate of commission to 15 per cent for developers earning less than $1 million a year.

The tribunal, however, rejected Apple’s argument that its restrictions were necessary to guarantee user safety and privacy, ruling that the measures were neither proportionate nor justified in relation to competition law.

What Happens Next?

A further hearing, expected in November, will determine the exact approach to calculating and distributing compensation. The court will consider Apple’s application to appeal at the same time.

Any payments to consumers are, therefore, unlikely to begin until the appeals process is complete. However, Hausfeld says the judgment firmly establishes Apple’s liability, meaning that compensation will follow once the calculations and distribution process are finalised.

For now, users can check their eligibility by reviewing their “Purchase History” under their App Store account settings. Those who have paid for apps or in-app purchases through the UK storefront since October 2015 are likely to qualify.

Why The Decision Matters Beyond iPhones

The ruling comes just days after the UK’s Competition and Markets Authority (CMA) designated both Apple and Google as having “strategic market status” under the new Digital Markets, Competition and Consumers Act. This means the regulator can now impose legally binding conduct requirements on how the firms operate their app stores, browsers and payment systems.

The CMA has already indicated it could compel Apple to allow rival app stores to operate on iPhones in the UK, potentially ending its long-standing “closed system” where software can only be downloaded through its own store.

Regulators and analysts view the CAT judgment as part of a wider pattern of scrutiny of Apple’s App Store model. The company is already facing pressure in the European Union, where the Digital Markets Act has forced it to permit third-party app stores and alternative payment routes. In the United States, Apple has been the subject of multiple antitrust investigations and private lawsuits over similar issues.

What The Court Said About Market Power And Pass-Through

The tribunal found that Apple’s control over app distribution on iOS gave it “near absolute market power”, effectively allowing it to dictate terms to developers and consumers. It also accepted evidence that roughly half of Apple’s overcharge was passed on to end users, which formed the basis for estimating total damages at up to £1.5 billion.

The court compared Apple’s commission levels with other digital marketplaces, including Microsoft’s and Epic Games’ app stores, and found its rates to be significantly higher. The tribunal concluded that the excess pricing could not be justified by any additional value or innovation provided by Apple’s system.

What Users And Businesses Should Know

The case is a collective opt-out action, meaning UK-based consumers and businesses who meet the eligibility criteria will automatically be included unless they choose to opt out. This means they will not need to sign up in advance but will be required to provide proof of purchase when the compensation scheme is finalised.

The tribunal’s order of interest at 8 per cent per year also means that older purchases, especially those made between 2015 and 2020, could attract larger payouts.

Dr Kent’s legal team has said further updates will be issued once the next phase of the case concludes. For now, eligible users are advised to retain any records of App Store purchases or subscriptions made on UK-registered Apple accounts.

The Wider Industry Context

This case is being watched closely by technology firms and regulators because it sets a new benchmark for competition enforcement in the digital economy. It also highlights how the UK’s collective action framework can be used to hold major global platforms to account for past conduct that inflated prices for consumers and businesses.

While Apple maintains that its ecosystem provides unique safety and privacy benefits, the tribunal’s findings appear to have called into question the balance between those protections and fair competition. The upcoming damages hearing will now determine what that accountability looks like in financial terms for millions of UK users.

What Does This Mean For Your Business?

The outcome of this case may mark a defining moment in how the UK approaches digital market regulation. For example, by confirming that a global company of Apple’s scale can be held accountable through collective legal action, the tribunal has set a clear precedent that could influence future cases involving other dominant tech platforms. It also signals that the UK’s competition and consumer law framework is now capable of addressing the realities of platform-based markets, where small differences in commission rates or payment terms can affect millions of users and developers simultaneously.

For UK businesses, the implications extend well beyond potential compensation. For example, many small firms that rely on mobile apps for marketing, payments, or service delivery have long been subject to the same terms as global developers, often without the ability to negotiate or switch to alternative platforms. A successful compensation process could return meaningful sums to those businesses, but more importantly, it may drive structural changes that reduce dependency on a single distribution channel. In a more competitive marketplace, smaller developers and service providers could benefit from lower costs, broader reach, and greater freedom over how they price and deliver their products.

Also, developers and consumers are likely to watch closely for signs of how Apple responds. If the appeal fails and the compensation framework goes ahead, the company may be forced to reconsider its UK App Store model to comply with competition expectations. That could include opening its payment systems to external providers or lowering commission rates to align more closely with those found in other digital marketplaces. Such changes would not only reshape Apple’s UK operations but could also influence its strategy across Europe, where similar legal and regulatory challenges are already underway.

The ruling also gives some momentum to regulators such as the Competition and Markets Authority, which has already indicated plans to impose new obligations on major digital platforms. Having both the CAT judgment and the CMA’s new enforcement powers in play strengthens the UK’s position as one of the leading jurisdictions for digital competition oversight. It could, in time, make the country a test case for how to balance consumer protection, business innovation, and fair access in the app economy.

For consumers, the short-term focus will be on how quickly compensation arrives and what steps they must take to claim it. However, the longer-term significance appears to lie in how this case may reshape the digital ecosystem itself. Whether through greater transparency, reduced commissions, or the introduction of alternative app stores, the outcome has the potential to alter how users, developers, and major tech firms interact across the UK’s mobile marketplace.

Company Check : OpenAI Unveils ChatGPT-Powered Atlas Browser

OpenAI has released Atlas, a free macOS web browser built around ChatGPT, and it arrives with big ambitions, useful features, and some immediate security questions.

What OpenAI Has Launched, And Why It Matters

OpenAI describes Atlas as “a new web browser built with ChatGPT at its core.” The idea of Atlas is, rather than visiting a website, copying content, and pasting it into a chatbot, the chatbot now lives inside the browser and can see the page you are on. OpenAI has framed it as a chance to “rethink what it means to use the web.”

Just On macOS (Free) For Now

Atlas is available now worldwide on macOS for Free, Plus, Pro, and Go users, with Windows, iOS, and Android versions “coming soon.” Business users can enable Atlas in beta, and Agent mode is available in preview for Plus, Pro, and Business tiers. OpenAI also published release notes and a download link, underlining that Atlas can import bookmarks, passwords, and browsing history from existing browsers.

How It Works In Practice

Atlas opens directly to ChatGPT rather than a traditional home page. Users can type a question or a URL, then work in a split view where ChatGPT summarises, compares, or explains the page they are on. An optional sidebar, “Ask ChatGPT,” follows the user as they browse, designed to remove the copy-paste friction that has characterised earlier chatbot use. OpenAI states that the browser can “understand what you’re trying to do, and complete tasks for you, all without leaving the page.”

Two features really stand out. The first is “browser memories,” which is an opt-in setting that allows ChatGPT to remember context from sites a user visits so it can bring that context back when needed. The second is “Agent mode,” which enables ChatGPT to act on the user’s behalf in the browser, carrying out tasks such as research, form-filling, or making bookings. OpenAI is keen to emphasise the benefit of user control, noting that browser memories can be viewed, archived, or deleted, that browsing content is not used to train models by default, and that visibility for specific sites can be turned off directly from the address bar.

Availability And Controls

At launch, Atlas includes parental controls that carry over from ChatGPT, with options to disable memories or Agent mode entirely. OpenAI says Agent mode can’t run code in the browser, download files, or install extensions, and it pauses on sensitive sites such as banks. Users can also run the agent in logged-out mode to limit access to private data.

Where Atlas Fits In A Crowded Browser Market

This move from OpenAI appears to be a direct challenge to existing players. For example, on desktop, Chrome holds about 73.65 percent of the global browser market, followed by Edge on 10.43 percent and Safari on 5.73 percent (StatCounter, September 2025). For Atlas to gain traction, it must prove both trustworthy and genuinely useful in daily workflows.

Vague Wording? What “AI Browser” Really Means

It seems that “AI browser” is quickly becoming shorthand for a set of common features, i.e., a chatbot that can read what’s on the screen, answer questions about it, and act within context. In Atlas, this takes the form of ChatGPT as a ride-along assistant that can process and recall on-page information.

Microsoft is pursuing the same idea. For example, in its Edge browser, Copilot Mode provides similar capabilities, opening a chat window that can summarise and compare data across multiple tabs. The company has also introduced “Actions,” which can fill in forms or book hotels, and “Journeys,” which group your tab history into ongoing projects.

The Indirect Prompt-Injection Issue

It seems that the most significant technical challenge currently facing Atlas, however, may not be unique to OpenAI. For example, Brave’s security team recently warned that indirect prompt injection is “a systemic challenge facing the entire category of AI-powered browsers.”

In simple terms, prompt injection occurs when a malicious webpage hides instructions that an AI assistant mistakenly interprets as user commands. This could cause the AI to perform unintended actions, such as fetching data from other tabs or leaking information from logged-in accounts.

Brave’s research revealed that similar vulnerabilities have been found in other AI browsers, including Perplexity’s Comet and Fellou, where attackers could hide commands inside website text or even faint image overlays. These instructions can bypass normal safeguards by being passed to the model as part of the page context.

In fact, OpenAI’s own documentation acknowledges this threat. For example, Dane Stuckey, OpenAI’s Chief Information Security Officer, described prompt injection as “a frontier, unsolved security problem” and said the company has implemented overlapping guardrails, detection systems, and model training updates to reduce risk. “Our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks,” he wrote, adding that users should run agents in logged-out mode when working on sensitive tasks.

Early Testing And What Researchers Are Seeing

Early demonstrations have already shown why this remains an open concern. For example, independent researchers have reportedly shared examples where Atlas responded to hidden instructions embedded within ordinary documents, producing unexpected outputs instead of the requested summaries. While these examples did not involve harmful actions, they highlight how easily indirect prompt injections can influence AI behaviour when content is treated as part of a legitimate task.

AI security researcher Johann Rehberger, who has documented several prompt-injection attacks across AI platforms, described the risk as affecting “confidentiality, integrity, and availability of data.” He noted that while OpenAI has built sensible safeguards, “carefully crafted content on websites can still trick ChatGPT Atlas into responding with attacker-controlled text or invoking tools to take actions.”

Brave’s recent post about this security issue also warned that agentic browsers can bypass traditional web protections such as the same-origin policy because they act using the user’s authenticated privileges. For example, a simple instruction hidden in a web page could, in theory, make the assistant act across sites, including banks or corporate systems, if guardrails fail.

How OpenAI Says It Has Balanced Power And Control

OpenAI has listed several design choices intended to reduce these risks. For example, users can clear specific page visibility, delete all browsing history, or use incognito windows that temporarily log ChatGPT out. Browser memories are private to the user’s ChatGPT account, are off by default, and can be managed directly in settings.

If a user opts to allow training on browsing content, pages that block GPTBot remain excluded. Agent mode cannot install extensions, access the file system, or execute code, and it pauses on sensitive sites where actions might expose personal data.

OpenAI says its approach is to combine technical safeguards with transparency. Users are shown what the agent is doing step by step, and actions can be stopped mid-flow.

For example, someone planning a dinner party can ask Atlas to find a grocery store, add ingredients to a basket, and place the order, watching each action unfold. Also, a student could use Atlas to ask real-time questions about lecture slides, while a business user can ask it to summarise competitor data or past documents without switching tabs.

Two Days Later, Microsoft Reframes Edge As An “AI Browser”

Just two days after OpenAI’s announcement, Microsoft expanded its own browser to include nearly identical functionality. On 23 October, the company unveiled an upgraded Copilot Mode for Edge, now officially described as “an AI browser.”

Mustafa Suleyman, CEO of Microsoft AI, wrote in a company blog post: “Copilot Mode in Edge is evolving into an AI browser that is your dynamic, intelligent companion.” The update introduces new features called “Actions,” which allow Copilot to fill out forms and make bookings, and “Journeys,” which group browsing sessions around specific goals.

Although Microsoft’s project was likely in development long before Atlas was revealed, the timing and similarity are notable. Both browsers now integrate AI deeply into browsing, both rely on contextual understanding to assist users, and both frame the assistant as a companion that can interpret what is on screen.

Independent reviewers have noted that the new Copilot Mode in Edge is visually and functionally close to Atlas. The layout differs slightly, but the underlying premise is the same: a built-in AI that reads, reasons, and acts on content as you browse. Microsoft says all new features require user consent before accessing tab content or history.

Challenges And Criticisms

While Atlas has been praised for its clean design and intelligent functionality, some experts have already raised questions about privacy, data control, and long-term security. OpenAI insists that browser memories are fully optional and off by default, but data protection specialists warn that even anonymised context retention can reveal behavioural patterns over time.

Also, some commentators have warned that Atlas, like other AI-driven browsers, could raise new privacy and security concerns if not carefully managed. For example, cybersecurity specialists have noted that the browser’s ability to access bookmarks, saved passwords, and full browsing histories could make the trade-off between convenience and data protection more critical than ever. They have also cautioned that combining web activity with chatbot interactions could increase risks such as profiling, targeted phishing, or unintended exposure of sensitive information.

It should also be noted here that early feedback from users has been mixed. For example, some testers have praised Atlas for its clear presentation of information and accurate sourcing, while others have reported slower performance and questioned how effectively Agent mode will operate once the browser is adopted at scale.

Cybersecurity researchers point out that even if Atlas performs safely under current controls, new prompt-injection techniques are constantly being developed. Brave’s researchers have already hinted that further vulnerabilities are likely to surface as more companies introduce AI-driven browsing.

The balance between innovation and oversight, and between convenience and confidentiality could, therefore, be the central test for Atlas and the new wave of AI browsers it represents.

What Does This Mean For Your Business?

OpenAI’s launch of Atlas could be one of the most ambitious steps yet in merging web browsing with conversational AI. It shows how quickly the boundary between search, productivity, and automation is dissolving, with the browser itself becoming a personal assistant rather than a static window to the internet. Yet it also exposes how far the technology still has to go before it can be trusted to act independently in real-world settings.

For users, the attraction is that Atlas promises a streamlined way to find information, take action, and move between tasks without switching tabs or tools. For OpenAI, it provides a direct platform for embedding ChatGPT more deeply into everyday digital life. However, the same integration that makes Atlas powerful also increases the surface area for risk. Allowing an AI agent to see and act within live browsing sessions inevitably raises questions about data access, authentication, and the potential for malicious manipulation through prompt injection or hidden instructions.

UK businesses, in particular, may need to approach Atlas with a mix of curiosity and caution. For example, the prospect of an intelligent browser that can summarise research, handle admin tasks, or automate data collection could boost productivity and streamline workflows. However, organisations will have to consider how it interacts with internal systems, how data is stored and transmitted, and whether its automation features comply with corporate security and privacy policies. For sectors such as finance, healthcare, and education, these considerations will be especially pressing, as even minor missteps could expose sensitive information or breach compliance rules.

For other stakeholders, including regulators and cybersecurity specialists, Atlas may represent an early glimpse of what “agentic” browsing could actually mean for the wider internet. It challenges long-held assumptions about user control, privacy, and accountability. If AI browsers become mainstream, the focus of online safety will need to expand from defending websites against users to defending users against their own automated agents.

In that sense, Atlas is less a final product than a live experiment in how people and machines might share control over digital tasks. Its success will depend not just on speed or convenience but on whether OpenAI can earn sustained trust from users, businesses, and regulators alike. For now, Atlas looks like being both a milestone in browser innovation and a reminder that every step towards automation must also bring new standards of responsibility, transparency, and security.

Security Stop-Press: AI Tools Fuel Record Rise in DDoS Botnets

Attackers are using artificial intelligence (AI) to build record-breaking DDoS botnets, according to new data from internet security firm Qrator Labs.

The company reports that one botnet it tracked contained 5.76 million infected devices, a 25-fold increase on last year’s largest network. Qrator’s CTO, Andrey Leskin, said AI now lets attackers “find and capture devices much faster and more efficiently,” driving unprecedented growth.

Brazil has overtaken Russia and the US as the biggest source of application-layer DDoS attacks, accounting for 19 per cent of malicious traffic, while Vietnam’s share has surged as unsecured devices multiply across developing regions. Fintech and e-commerce remain the top targets, with peak attacks reaching 1.15 Tbps.

Experts warn that AI tools are lowering the barriers to entry for cybercriminals, enabling large-scale automated attacks. Businesses are urged to use layered DDoS protection, keep connected devices updated, and monitor for unusual network activity to defend against this new AI-driven threat.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives