Company Check : OpenAI Unveils ChatGPT-Powered Atlas Browser

OpenAI has released Atlas, a free macOS web browser built around ChatGPT, and it arrives with big ambitions, useful features, and some immediate security questions.

What OpenAI Has Launched, And Why It Matters

OpenAI describes Atlas as “a new web browser built with ChatGPT at its core.” The idea of Atlas is, rather than visiting a website, copying content, and pasting it into a chatbot, the chatbot now lives inside the browser and can see the page you are on. OpenAI has framed it as a chance to “rethink what it means to use the web.”

Just On macOS (Free) For Now

Atlas is available now worldwide on macOS for Free, Plus, Pro, and Go users, with Windows, iOS, and Android versions “coming soon.” Business users can enable Atlas in beta, and Agent mode is available in preview for Plus, Pro, and Business tiers. OpenAI also published release notes and a download link, underlining that Atlas can import bookmarks, passwords, and browsing history from existing browsers.

How It Works In Practice

Atlas opens directly to ChatGPT rather than a traditional home page. Users can type a question or a URL, then work in a split view where ChatGPT summarises, compares, or explains the page they are on. An optional sidebar, “Ask ChatGPT,” follows the user as they browse, designed to remove the copy-paste friction that has characterised earlier chatbot use. OpenAI states that the browser can “understand what you’re trying to do, and complete tasks for you, all without leaving the page.”

Two features really stand out. The first is “browser memories,” which is an opt-in setting that allows ChatGPT to remember context from sites a user visits so it can bring that context back when needed. The second is “Agent mode,” which enables ChatGPT to act on the user’s behalf in the browser, carrying out tasks such as research, form-filling, or making bookings. OpenAI is keen to emphasise the benefit of user control, noting that browser memories can be viewed, archived, or deleted, that browsing content is not used to train models by default, and that visibility for specific sites can be turned off directly from the address bar.

Availability And Controls

At launch, Atlas includes parental controls that carry over from ChatGPT, with options to disable memories or Agent mode entirely. OpenAI says Agent mode can’t run code in the browser, download files, or install extensions, and it pauses on sensitive sites such as banks. Users can also run the agent in logged-out mode to limit access to private data.

Where Atlas Fits In A Crowded Browser Market

This move from OpenAI appears to be a direct challenge to existing players. For example, on desktop, Chrome holds about 73.65 percent of the global browser market, followed by Edge on 10.43 percent and Safari on 5.73 percent (StatCounter, September 2025). For Atlas to gain traction, it must prove both trustworthy and genuinely useful in daily workflows.

Vague Wording? What “AI Browser” Really Means

It seems that “AI browser” is quickly becoming shorthand for a set of common features, i.e., a chatbot that can read what’s on the screen, answer questions about it, and act within context. In Atlas, this takes the form of ChatGPT as a ride-along assistant that can process and recall on-page information.

Microsoft is pursuing the same idea. For example, in its Edge browser, Copilot Mode provides similar capabilities, opening a chat window that can summarise and compare data across multiple tabs. The company has also introduced “Actions,” which can fill in forms or book hotels, and “Journeys,” which group your tab history into ongoing projects.

The Indirect Prompt-Injection Issue

It seems that the most significant technical challenge currently facing Atlas, however, may not be unique to OpenAI. For example, Brave’s security team recently warned that indirect prompt injection is “a systemic challenge facing the entire category of AI-powered browsers.”

In simple terms, prompt injection occurs when a malicious webpage hides instructions that an AI assistant mistakenly interprets as user commands. This could cause the AI to perform unintended actions, such as fetching data from other tabs or leaking information from logged-in accounts.

Brave’s research revealed that similar vulnerabilities have been found in other AI browsers, including Perplexity’s Comet and Fellou, where attackers could hide commands inside website text or even faint image overlays. These instructions can bypass normal safeguards by being passed to the model as part of the page context.

In fact, OpenAI’s own documentation acknowledges this threat. For example, Dane Stuckey, OpenAI’s Chief Information Security Officer, described prompt injection as “a frontier, unsolved security problem” and said the company has implemented overlapping guardrails, detection systems, and model training updates to reduce risk. “Our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks,” he wrote, adding that users should run agents in logged-out mode when working on sensitive tasks.

Early Testing And What Researchers Are Seeing

Early demonstrations have already shown why this remains an open concern. For example, independent researchers have reportedly shared examples where Atlas responded to hidden instructions embedded within ordinary documents, producing unexpected outputs instead of the requested summaries. While these examples did not involve harmful actions, they highlight how easily indirect prompt injections can influence AI behaviour when content is treated as part of a legitimate task.

AI security researcher Johann Rehberger, who has documented several prompt-injection attacks across AI platforms, described the risk as affecting “confidentiality, integrity, and availability of data.” He noted that while OpenAI has built sensible safeguards, “carefully crafted content on websites can still trick ChatGPT Atlas into responding with attacker-controlled text or invoking tools to take actions.”

Brave’s recent post about this security issue also warned that agentic browsers can bypass traditional web protections such as the same-origin policy because they act using the user’s authenticated privileges. For example, a simple instruction hidden in a web page could, in theory, make the assistant act across sites, including banks or corporate systems, if guardrails fail.

How OpenAI Says It Has Balanced Power And Control

OpenAI has listed several design choices intended to reduce these risks. For example, users can clear specific page visibility, delete all browsing history, or use incognito windows that temporarily log ChatGPT out. Browser memories are private to the user’s ChatGPT account, are off by default, and can be managed directly in settings.

If a user opts to allow training on browsing content, pages that block GPTBot remain excluded. Agent mode cannot install extensions, access the file system, or execute code, and it pauses on sensitive sites where actions might expose personal data.

OpenAI says its approach is to combine technical safeguards with transparency. Users are shown what the agent is doing step by step, and actions can be stopped mid-flow.

For example, someone planning a dinner party can ask Atlas to find a grocery store, add ingredients to a basket, and place the order, watching each action unfold. Also, a student could use Atlas to ask real-time questions about lecture slides, while a business user can ask it to summarise competitor data or past documents without switching tabs.

Two Days Later, Microsoft Reframes Edge As An “AI Browser”

Just two days after OpenAI’s announcement, Microsoft expanded its own browser to include nearly identical functionality. On 23 October, the company unveiled an upgraded Copilot Mode for Edge, now officially described as “an AI browser.”

Mustafa Suleyman, CEO of Microsoft AI, wrote in a company blog post: “Copilot Mode in Edge is evolving into an AI browser that is your dynamic, intelligent companion.” The update introduces new features called “Actions,” which allow Copilot to fill out forms and make bookings, and “Journeys,” which group browsing sessions around specific goals.

Although Microsoft’s project was likely in development long before Atlas was revealed, the timing and similarity are notable. Both browsers now integrate AI deeply into browsing, both rely on contextual understanding to assist users, and both frame the assistant as a companion that can interpret what is on screen.

Independent reviewers have noted that the new Copilot Mode in Edge is visually and functionally close to Atlas. The layout differs slightly, but the underlying premise is the same: a built-in AI that reads, reasons, and acts on content as you browse. Microsoft says all new features require user consent before accessing tab content or history.

Challenges And Criticisms

While Atlas has been praised for its clean design and intelligent functionality, some experts have already raised questions about privacy, data control, and long-term security. OpenAI insists that browser memories are fully optional and off by default, but data protection specialists warn that even anonymised context retention can reveal behavioural patterns over time.

Also, some commentators have warned that Atlas, like other AI-driven browsers, could raise new privacy and security concerns if not carefully managed. For example, cybersecurity specialists have noted that the browser’s ability to access bookmarks, saved passwords, and full browsing histories could make the trade-off between convenience and data protection more critical than ever. They have also cautioned that combining web activity with chatbot interactions could increase risks such as profiling, targeted phishing, or unintended exposure of sensitive information.

It should also be noted here that early feedback from users has been mixed. For example, some testers have praised Atlas for its clear presentation of information and accurate sourcing, while others have reported slower performance and questioned how effectively Agent mode will operate once the browser is adopted at scale.

Cybersecurity researchers point out that even if Atlas performs safely under current controls, new prompt-injection techniques are constantly being developed. Brave’s researchers have already hinted that further vulnerabilities are likely to surface as more companies introduce AI-driven browsing.

The balance between innovation and oversight, and between convenience and confidentiality could, therefore, be the central test for Atlas and the new wave of AI browsers it represents.

What Does This Mean For Your Business?

OpenAI’s launch of Atlas could be one of the most ambitious steps yet in merging web browsing with conversational AI. It shows how quickly the boundary between search, productivity, and automation is dissolving, with the browser itself becoming a personal assistant rather than a static window to the internet. Yet it also exposes how far the technology still has to go before it can be trusted to act independently in real-world settings.

For users, the attraction is that Atlas promises a streamlined way to find information, take action, and move between tasks without switching tabs or tools. For OpenAI, it provides a direct platform for embedding ChatGPT more deeply into everyday digital life. However, the same integration that makes Atlas powerful also increases the surface area for risk. Allowing an AI agent to see and act within live browsing sessions inevitably raises questions about data access, authentication, and the potential for malicious manipulation through prompt injection or hidden instructions.

UK businesses, in particular, may need to approach Atlas with a mix of curiosity and caution. For example, the prospect of an intelligent browser that can summarise research, handle admin tasks, or automate data collection could boost productivity and streamline workflows. However, organisations will have to consider how it interacts with internal systems, how data is stored and transmitted, and whether its automation features comply with corporate security and privacy policies. For sectors such as finance, healthcare, and education, these considerations will be especially pressing, as even minor missteps could expose sensitive information or breach compliance rules.

For other stakeholders, including regulators and cybersecurity specialists, Atlas may represent an early glimpse of what “agentic” browsing could actually mean for the wider internet. It challenges long-held assumptions about user control, privacy, and accountability. If AI browsers become mainstream, the focus of online safety will need to expand from defending websites against users to defending users against their own automated agents.

In that sense, Atlas is less a final product than a live experiment in how people and machines might share control over digital tasks. Its success will depend not just on speed or convenience but on whether OpenAI can earn sustained trust from users, businesses, and regulators alike. For now, Atlas looks like being both a milestone in browser innovation and a reminder that every step towards automation must also bring new standards of responsibility, transparency, and security.

Security Stop-Press: AI Tools Fuel Record Rise in DDoS Botnets

Attackers are using artificial intelligence (AI) to build record-breaking DDoS botnets, according to new data from internet security firm Qrator Labs.

The company reports that one botnet it tracked contained 5.76 million infected devices, a 25-fold increase on last year’s largest network. Qrator’s CTO, Andrey Leskin, said AI now lets attackers “find and capture devices much faster and more efficiently,” driving unprecedented growth.

Brazil has overtaken Russia and the US as the biggest source of application-layer DDoS attacks, accounting for 19 per cent of malicious traffic, while Vietnam’s share has surged as unsecured devices multiply across developing regions. Fintech and e-commerce remain the top targets, with peak attacks reaching 1.15 Tbps.

Experts warn that AI tools are lowering the barriers to entry for cybercriminals, enabling large-scale automated attacks. Businesses are urged to use layered DDoS protection, keep connected devices updated, and monitor for unusual network activity to defend against this new AI-driven threat.

Sustainability-In-Tech : UK-Made Lithium Breakthrough

Cornish Lithium has produced the UK’s first samples of battery-grade lithium hydroxide, marking a major step towards a domestic, low-carbon supply chain for electric vehicles and clean energy storage.

A Local Company with Global Ambitions

Cornish Lithium is a Penryn-based mining and technology company founded in 2016 by former investment banker and mining engineer Jeremy Wrathall. The company’s goal is to produce lithium sustainably within the UK, thereby reducing reliance on imports and supporting the transition to electric vehicles and renewable energy.

The business operates across two key areas of lithium extraction, i.e., hard rock and geothermal brines. Its projects are centred in Cornwall, where it is exploring and developing lithium resources from granite and hot spring waters deep underground. Through a combination of traditional mining expertise and modern processing technology, Cornish Lithium aims to make Cornwall a cornerstone of Britain’s green industrial future.

The Factory

At the heart of the latest breakthrough is the company’s Trelavour Hard Rock Project near St Dennis, Cornwall. Built on a repurposed china clay pit, the Trelavour Demonstration Plant began operating in 2024 and represents the UK’s first low-emission lithium hydroxide production facility. The site seems to embody sustainable redevelopment in practice in that it’s transforming a brownfield location once central to the region’s clay industry into a clean-tech hub for critical minerals.

Hydrometallurgical Processing

The plant uses hydrometallurgical processing to refine lithium-bearing mica from Cornish granite into high-purity lithium hydroxide. It also acts as a testing ground for new refining technologies that could later be scaled up for full commercial production. According to the company, commercial operations are expected to begin in 2027 with a planned output of around 10,000 tonnes of lithium hydroxide per year.

Why This Discovery Matters

Cornish Lithium’s discovery lies not only in the presence of lithium-bearing granite but in the ability to extract and refine it locally using cleaner methods. For example, the company estimates that its operations can achieve at least a 40 per cent reduction in carbon emissions compared with typical international lithium production, where ores are mined in Australia, shipped to China for refining, and then exported to Europe.

As CEO Jeremy Wrathall explained when the first samples were announced, “This achievement demonstrates that Cornwall can once again play a vital role in supporting Britain’s industrial future — this time through the production of sustainable, battery-grade lithium.”

Cornwall’s geology has long been known to contain lithium, but until recently it was not considered economically viable to extract. However, it seems that advances in processing technology, along with rising global demand and the UK’s push for net zero, have changed that outlook. In essence, the region’s combination of mineral-rich granite and geothermal resources makes it uniquely positioned to supply both hard-rock and brine-based lithium sustainably.

What’s Being Produced And Who For?

The Trelavour Demonstration Plant produces lithium hydroxide monohydrate (LHM), which is a high-purity chemical essential for lithium-ion batteries used in electric vehicles and large-scale energy storage systems. Battery-grade LHM is particularly suited to high-nickel cathodes, which are used by leading EV manufacturers to deliver higher energy density and longer range.

Cornish Lithium’s immediate aim is to refine enough material to demonstrate commercial viability and secure supply agreements with UK gigafactories and automotive manufacturers. The longer-term goal, combining both hard rock and geothermal extraction, is to produce up to 25,000 tonnes of lithium carbonate equivalent annually by 2030.

Currently, the UK imports almost all of its battery-grade lithium, leaving the country’s growing EV and battery industries reliant on international supply chains dominated by China. Local production from Cornwall would allow UK manufacturers to shorten those supply lines, cut emissions, and improve energy security.

Investment and Strategic Importance

In September 2025, Cornish Lithium secured up to £35 million in new funding, including £31 million from the UK’s National Wealth Fund and additional investment from TechMet, a critical minerals investor partly backed by the US government. This funding is earmarked to expand operations at Trelavour and advance the company’s geothermal projects.

The investment also forms part of the UK government’s broader strategy to establish a secure domestic supply chain for EV batteries. The Automotive Transformation Fund and other initiatives aim to ensure that gigafactories planned in Sunderland, Coventry, and Somerset have access to local raw materials, which is likely to be a key factor in their long-term sustainability and cost competitiveness.

Carbon Savings and Sustainability

Even though the idea of mining doesn’t seem that conducive to conserving the environment and sustainability, the sustainability benefits of local lithium production actually extend well beyond emissions. For example, processing and refining lithium within Cornwall eliminates the need for transcontinental shipping and significantly lowers the embodied carbon in each tonne of lithium hydroxide produced.

Local production also improves traceability, which is a growing requirement for European battery makers under emerging “battery passport” rules that demand transparency on the source and environmental impact of materials.

Also, by situating the plant on a disused industrial site, Cornish Lithium has actually revived part of Cornwall’s long-mining heritage in a modern, environmentally responsible way. The company estimates its projects could create more than 300 skilled jobs, contributing to regional regeneration and helping to retain talent in the South West.

The project’s reliance on UK and European technology partnerships also supports intellectual property development and knowledge transfer. By bringing advanced refining processes, such as those licensed from Australia’s Lepidico, onto British soil, the company is helping to develop local expertise in hydrometallurgy and battery chemistry.

Competitors and the Industry

Cornish Lithium’s milestone actually places it at the forefront of a growing UK lithium industry. However, it is not alone. For example, Imerys British Lithium, also based near St Austell, is developing a separate hard-rock project and has already produced pilot-scale lithium carbonate from mica-rich granite. The company plans to scale up to around 20,000 tonnes per year, potentially making it another major domestic supplier by the late 2020s.

Further north, Green Lithium is constructing a large lithium refinery at Teesside that will process imported spodumene concentrate into lithium hydroxide, complementing the raw material supply coming from Cornwall. Meanwhile, Northern Lithium is exploring brine-based extraction in the North East using direct lithium extraction (DLE) technology.

Together, these projects signal the emergence of a full UK lithium supply chain, encompassing extraction, processing, and eventual recycling, which is a development that could make the UK less dependent on imported critical minerals.

Challenges and Criticisms

Despite its progress, Cornish Lithium faces some significant hurdles. For example, Cornwall’s lithium grades are lower than those of high-grade spodumene ores mined in Australia, which could affect production costs and competitiveness. Energy-intensive refining processes also present challenges in a country with some of Europe’s highest industrial electricity prices.

The company must also navigate permitting and community engagement. For example, although its operations are based on brownfield sites, local stakeholders have raised questions about water use, noise, and the environmental management of tailings and waste.

Another challenge lies in the volatility of global lithium prices. As the Financial Times has reported, financing large-scale lithium projects can be difficult without government guarantees or long-term offtake agreements, particularly when prices fall from recent highs.

There are also broader market questions. The UK’s gigafactory sector remains nascent, and if domestic battery production fails to grow as quickly as expected, local lithium producers could struggle to find nearby buyers.

That said, for now, the company’s combination of local sourcing, low-emission processing, and government-backed funding positions it as one of the most advanced and strategically significant lithium ventures in Europe.

What Does This Mean For Your Business?

Cornish Lithium’s progress could be a real turning point in how the UK approaches its clean energy supply chain. By combining extraction, processing, and refining within one region, the company has shown that it is possible to produce critical battery materials closer to where they are used, with substantially lower emissions than imported alternatives. The immediate impact is industrial rather than symbolic, since it demonstrates that local lithium production is not just feasible but commercially and environmentally credible.

For UK businesses, particularly those in automotive manufacturing and energy storage, this development could prove decisive. For example, a domestic source of battery-grade lithium would reduce dependence on long global supply chains, stabilise costs, and make it easier to meet carbon reporting and traceability standards that are becoming central to procurement. It could also help strengthen the competitiveness of UK gigafactories, ensuring that jobs and intellectual property linked to electrification remain within the country. For other stakeholders, including local communities and policymakers, the benefits extend to regional regeneration, skilled employment, and the revival of industrial activity in an area that once relied on mining.

At the same time, it is clear that success will depend on more than geology. Cornish Lithium and its peers must scale up efficiently, manage environmental impacts transparently, and align with downstream demand from battery producers. The challenge for government and industry alike will be to create a framework that rewards sustainable extraction and encourages private investment without distorting the market.

If those conditions are met, Cornwall’s emerging lithium industry could form the foundation of a genuinely circular, low-carbon supply chain for the UK’s transition to clean transport and renewable power. In that sense, the real significance of the Trelavour plant lies not only in the metal it produces but in the model it represents, i.e., a local, collaborative, and technologically advanced approach to sustainable resource development.

Tech Tip – Create Custom Stickers in WhatsApp for Personalised Communication

Stand out in conversations with custom stickers that reflect your brand or personality, or to highlight specific products or features. Here’s how to create and use stickers in WhatsApp:

To Create a Sticker:

– Open WhatsApp and go to any chat.
– Tap the emoji icon > Sticker > Create.
– Select an image from your gallery or take a new photo.
– Crop and edit the image to fit the sticker format.
– Add text or drawings if desired.
– Save the sticker (and create a sticker album for related stickers).

Use Stickers in Messages:

– Open a chat and tap the emoji icon.
– Select the sticker icon and choose your custom sticker.
– Send the sticker to add a personal touch to your messages.

Benefits:

 

– Personalise Communication: Custom stickers help build rapport with clients or colleagues.
– Add a Professional Touch: Brand-specific stickers can enhance your business identity.
– Enhance Engagement: Stickers can make messages more engaging and fun.
– Use custom stickers to add a creative and personalised touch to your business communications on WhatsApp!

77% of Security Leaders Would Sack Phishing Victims

New research from Arctic Wolf shows that most security leaders say they would sack staff who fall for phishing scams, even as incidents rise and leaders themselves admit to clicking malicious links.

Hardening of Attitudes

Arctic Wolf’s 2025 Human Risk Behaviour Snapshot reveals that 77 per cent of IT and security leaders say they have (or would) sack an employee for falling for a phishing or social engineering scam, up from 66 per cent in 2024. The report describes this shockingly high statistic as the result of a significant hardening of attitudes among security professionals, despite continuing increases in attack volume and breach rates.

The Scale

The study, which surveyed more than 1,700 IT leaders and end users globally, found that 68 per cent of organisations suffered at least one breach in the past year. The UK and Ireland, for example, recorded some of the steepest rises, partly due to high-profile incidents in the retail sector. Arctic Wolf notes that many firms are still failing to implement basic measures, with only 54 per cent enforcing multi-factor authentication (MFA) for all users.

Sacking Doesn’t Solve The Problem

The same report also found that organisations taking an education-first approach rather than firing staff saw an 88 per cent reduction in long-term human risk. According to Arctic Wolf’s Chief Information Security Officer, Adam Marrè, “Terminating employees for falling victim to a phishing attack may feel like a quick fix, but it doesn’t solve the underlying problem.”

A Strong Policy Signal

The findings of the report appear to highlight a growing gap between confidence and capability. For example, three-quarters of leaders said they believed their organisation would not fall for a phishing attack, yet almost two-thirds admitted they have clicked a phishing link themselves, and one in five said they failed to report it.

Corrective Action Instead of Dismissal

It should be noted that, in the same survey, more than six in ten leaders said they had taken corrective action against employees who fell for phishing scams by restricting or changing access privileges, which Arctic Wolf suggests is a more constructive approach than dismissal.

Executives Are Valuable Targets For Cybercriminals

In fact, the company’s own data also shows that 39 per cent of senior leadership teams were targeted by phishing and 35 per cent experienced malware infections, highlighting how executives themselves are often the most valuable targets for attackers.

“When leaders are overconfident in their defences while overlooking how employees actually use technology, it creates the perfect conditions for mistakes to become breaches,” Marrè said. He added that the most secure organisations “pair strong policies and safeguards with a culture that empowers employees to speak up, learn from errors, and continuously improve.”

Confidence Vs Behaviour

The Arctic Wolf report appears to highlight a clear contradiction. For example, while most security leaders view phishing as a frontline employee issue, they are actually statistically among the most likely to make the same mistakes. Many also admit to disabling or bypassing security systems. For example, 51 per cent said they had done so in the past year, often claiming that certain measures “slowed them down” or made their work harder.

This gap between stated policy and personal practice is what Marrè describes as “a major blind spot and degree of hubris among some security leaders.” The report concludes that leadership culture sets the tone for the rest of the organisation, and that inconsistency at the top erodes credibility and weakens defences.

Who Is Really Falling For Phishing In 2025?

The question of who gets caught out most is not as simple as it might appear. For example, Arctic Wolf’s data indicates that senior staff, not junior employees, are often prime targets because of their privileged access and decision-making authority. The company found that nearly four in ten executive teams experienced phishing attempts, compared with lower rates among general staff.

Other research appears to support this pattern. For example, Verizon’s 2025 Data Breach Investigations Report confirms that social engineering remains one of the top causes of data breaches, accounting for more than two-thirds of all initial intrusion methods. Its analysis identifies finance, healthcare, education, and retail as the most heavily targeted sectors. Attackers exploit trust, urgency, and routine workflows to trick users into sharing credentials or downloading malware.

New Hires More Likely To Click

Also, a mid-2025 study by Keepnet, reported by Help Net Security, found that 71 per cent of new hires clicked on phishing emails during their first 90 days, making them 44 per cent more likely to fall victim than longer-serving staff. The main reasons were unfamiliar internal systems, a desire to respond quickly to apparent authority figures, and inconsistent onboarding security training. The same research found that structured, role-specific training reduced click rates by around 30 per cent within three months.

Retail Legacy Systems An Issue

Retail has also seen a marked increase in phishing incidents across the UK and Ireland. Arctic Wolf attributes this to the industry’s reliance on legacy systems, seasonal sales spikes, and the complexity of managing large volumes of customer data. The company says these factors have made retail “a prime target” for opportunistic and scalable attacks.

Can Employers Really Sack Staff For Clicking A Phishing Email?

In the UK, simply sacking an employee for falling for a phishing email is legally possible but rarely straightforward. For example, under the Advisory, Conciliation and Arbitration Service (Acas) Code of Practice, an employer can only dismiss fairly if they have both a valid reason, such as misconduct or capability, and have followed a fair and reasonable procedure.

For a dismissal to be lawful, the employer must investigate properly, give the employee a chance to respond, and ensure the sanction is proportionate. Even where a phishing incident causes financial loss or reputational damage, the question is whether the individual acted negligently or was misled despite reasonable training and policies. In most cases, a first-time mistake caused by deception would not actually meet the threshold for gross misconduct.

Unfair Dismissal?

It’s worth noting here that employees with two years’ service can bring a claim for unfair dismissal if they believe the reason or process was unreasonable. Employment tribunals are required to take the Acas Code into account, and may increase or reduce compensation by up to 25 per cent if either side fails to follow it. This means employers that act punitively without clear evidence or consistent practice could face costly legal challenges.

Most employment lawyers, therefore, recommend a corrective rather than disciplinary response, especially where the organisation’s training or technical safeguards may have been insufficient. Arctic Wolf’s data reflects this tendency, with many leaders actually opting to limit access rights rather than dismiss staff outright after a phishing incident.

Ethics And Culture

Beyond legality, there is an ethical debate here to take account of which focuses on culture and transparency. For example, the UK’s National Cyber Security Centre (NCSC) advises that creating a “no-blame reporting culture” is one of the most effective ways to reduce security risk. Its guidance stresses that employees should feel safe to report suspicious emails or mistakes immediately, without fear of reprisal.

In fact, it is well known that when punishment is the first response, employees often stay silent. Arctic Wolf’s own findings appear to bear this out, i.e., one in five security leaders who clicked a phishing link failed to report it. That silence can allow breaches to escalate before they are detected.

Human Error Inevitable

Security experts argue that treating human error as inevitable, and training people to respond effectively, is far more effective than zero-tolerance policies. Marrè says that “progress comes when leaders accept that human risk is not just a frontline issue but a shared accountability across the organisation.” He advocates regular, engaging training that reflects real threats, backed by leadership example and open communication.

The Double Standard In Practice

The data from this and other reports appears to paint a clear picture of contradiction at the top. For example, many of the same leaders who advocate sacking staff for phishing errors have clicked links themselves or disabled controls that protect the wider organisation. Arctic Wolf’s report describes this as “a culture of ‘do as I say, not as I do’,” warning that it undermines credibility and increases exposure to social engineering attacks.

Phishing Now More Sophisticated

One other important factor to take into account here is the fact that phishing techniques have also grown more sophisticated. For example, attackers now use AI-generated emails, cloned websites, and real-time chat-based scams to trick users into sharing credentials. Even experienced professionals can, therefore, struggle to spot these messages, particularly when they appear to come from known suppliers or senior colleagues.

AI Supercharges Phishing Success

Microsoft’s 2025 Digital Defence Report shows that AI-generated phishing emails are 4.5 times more likely to fool recipients, achieving a 54 per cent click-through rate compared with 12 per cent for traditional scams. The company says this surge in realism and scale has made phishing “the most significant change in cybercrime over the last year”.

Microsoft also estimates that AI can make phishing campaigns up to 50 times more profitable, as attackers use automation to craft messages in local languages, tailor lures, and launch mass campaigns with minimal effort. Beyond email, AI is now being used to scan for vulnerabilities, clone voices, and create deepfakes, transforming phishing into one of the fastest-growing and most lucrative attack methods worldwide.

Initial Compromise Comes From Phishing

Industry-wide data continues to show that phishing is the most common initial attack vector in business email compromise, ransomware, and credential theft cases. Verizon’s latest data shows phishing accounts for roughly 73 per cent of initial compromise methods, followed by previously stolen credentials. These statistics underline how difficult it is to eliminate human error entirely, even in well-trained environments.

Arctic Wolf argues that genuine progress actually requires leading by example rather than blaming employees. In its report, the company’s closing recommendations include continuous education, practical simulations, and building a culture that rewards honesty over silence. Its research concludes that organisations where employees feel confident to report mistakes are significantly less likely to experience repeat incidents, and far more likely to detect breaches early.

What Does This Mean For Your Business?

The findings appear to highlight a cultural challenge within cyber security. Punishing individuals for mistakes that even experienced leaders admit to making risks undermining the very trust and openness that strong defences depend on. The evidence shows that while technical safeguards such as MFA and endpoint protection are essential, they are not enough on their own. What really differentiates resilient organisations is how they handle human error, whether they choose to learn from it or treat it as grounds for dismissal.

For UK businesses, the implications are significant. A strict zero-tolerance policy towards phishing may appear decisive, but it can also damage morale, suppress reporting, and expose employers to potential legal and reputational risks. Dismissing staff without due process could also lead to unfair dismissal claims, while a culture of fear can discourage the transparency needed to contain attacks quickly. By contrast, firms that take a measured, education-focused approach tend to see fewer repeat incidents, faster recovery times, and stronger employee engagement in security.

The message from Arctic Wolf’s data is that leadership example matters most. When senior executives model good cyber hygiene, acknowledge their own vulnerabilities, and support open communication, staff are far more likely to follow suit. Creating an environment where everyone feels responsible for reporting threats, and confident they will be supported for doing so, delivers a far greater return than any punitive measure.

For regulators, investors, training providers and others, the findings reinforce the importance of human-centred strategies that combine accountability with education. As phishing continues to evolve in sophistication, organisations across all sectors must balance clear policy enforcement with a recognition that even the best-informed professionals can make mistakes. The organisations that respond to that reality with fairness, transparency, and leadership integrity will be the ones best equipped to withstand the next wave of attacks.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives