An Apple Byte : Apple Launching UK Roadside Assistance (Via Satellite)
Apple is set to extend its satellite messaging service, introducing a Roadside Assistance feature in the UK through a partnership with Green Flag. This new service, launching with the iPhone 16, will enable drivers to get help in areas with poor or no cellular coverage, using satellite connectivity.
Previously available only in the US, the satellite Roadside Assistance messaging service will mean that iPhone users will still be able to communicate with breakdown services without a mobile or Wi-Fi signal. Scheduled for release with the iPhone 16 later this autumn, it promises greater safety and convenience for iPhone-using motorists in remote areas.
Green Flag, the UK roadside assistance provider, will support Apple in deploying this satellite-driven service. Aimed particularly at regions where mobile coverage is unreliable, the service will be accessible via a new interface on the latest iPhone model. However, while generally effective in open spaces, its performance may be reduced under cover or near large obstructions (satellites usually need a clear line of sight).
Apple’s Roadside Assistance will operate on a pay-per-use basis, offering flexibility for UK drivers who prefer not to commit to a full-time subscription. The new service is expected to set an industry standard for satellite-based emergency communications and could encourage broader adoption of satellite assistance technologies, thereby helping Apple to diversify its product offerings and strengthen its strategic position within the technology sector.
Security Stop Press : Beware ChromeLoader Exploit Malware Website Campaign
An HP Wolf Security report has highlighted how hackers are leveraging a ChromeLoader exploit and using code-signing certificates and malvertising techniques to distribute malware via fake companies and websites.
As part of what appears to be a large-scale campaign, cybercriminals are reportedly distributing the ChromeLoader malware (a malicious browser extension) signed with valid code-signing certificates (the digital certificates used to verify software authenticity and integrity), allowing them to bypass Windows security measures like AppLocker without triggering user warnings.
The report highlights how the attackers set up fake companies to obtain these valid certificates or steal them from legitimate sources. These fake companies then host websites that offer seemingly legitimate tools, such as PDF readers or converters, to lure in victims.
The campaign uses malvertising (malicious advertising) to direct potential victims to the well-designed but malware-ridden websites which often appear in search results for popular keywords like “PDF converters” and “manual readers.”
Once victims visit these infected sites, their browsers can be hijacked, allowing attackers to redirect search queries to malicious sites, increasing the scope of their attacks.
HP’s report suggests that the scripts used in this campaign were likely developed using generative AI tools, making it easier and faster for cybercriminals to launch such attacks.
The advice to avoid ChromeLoader attacks is to only download software from trusted sources, be cautious of online ads, keep security features enabled, use antivirus software, and regularly update your browser and system.
Sustainability-in-Tech : UK Startup Makes ‘Lab’ Leather
Cambridge-based startup ‘Pact’ has raised £9 million in seed-round funding to expand its factory space and scale up production of its “world-first” biomaterial – a skin made from collagen that’s a convincing alternative to leather.
Oval
Oval, developed by Pact, is a pioneering biomaterial made from natural collagen, designed to be a sustainable and scalable alternative to traditional materials like leather. Pact says that with Oval it is “Capturing the strength, feel, stretch and durability of heritage materials through upcycled collagen.”
The collagen used in Oval is sourced from ethical and environmentally friendly suppliers, often from surplus or recycled materials such as those used in cosmetics.
Oval not only looks like leather, but it also behaves like leather, i.e. it responds to scratches, water, and sunlight in a very similar way.
What’s Collagen?
Natural collagen, the biomaterial that Oval is made from, is a protein found in the skin, bones, and connective tissues of animals, providing structural support and elasticity. It is often used in cosmetics for its ability to promote skin hydration, elasticity, and repair, making it popular in anti-aging products. Collagen’s biocompatibility and strength make it an ideal basis for sustainable biomaterials like Oval, which mimics leather while reducing environmental impact. The collagen used to make Oval is recycled cosmetic-grade collagen, with some herbal extracts, oils, and minerals added. Pact says: “Our collagen is a natural byproduct used in high-end cosmetics, skincare and pharmaceuticals”.
Customisable
Oval is versatile and customisable, allowing designers to create a wide range of textures, patterns, and colours. The material is finished using techniques traditionally applied to leather, making it ideal for luxury fashion, footwear, interiors, and more.
Chemical-Free + Reduced CO2
Its production is chemical-free, requires less water, and has a significantly lower carbon footprint compared to traditional leather production. Pact estimates that incorporating Oval in place of leather and synthetic alternatives could prevent 4.8 million tonnes of CO2 emissions annually!
Patented
Pact says that in the production of Oval, a patented process is used to transform cosmetic-grade collagen into collagen skins. Pact says the skins are then “enriched with all-natural ingredients, then enhanced using time-honoured finishing techniques”. Pact sums up the key benefits of Oval, saying “Oval radically reduces environmental impact and inspires unlimited design possibilities”.
Who’s It For?
Pact CEO, Yudí Ding, highlights how the company has already partnered with Luxury Maisons and how the new biomaterial has been embraced by leading fashion houses and groups globally. Investors in this seed round included Hoxton Ventures, ReGen Ventures, Celsius Industries (formerly Untitled) and Polytechnique Ventures.
Pact has also developed “drop in” manufacturing technology, enabling clients to produce Oval directly in their own supply chains.
Funding To Scale-Up
The £9 million of funding raised in this seed round has enabled Pact to invest in a new 13,820 sq ft headquarters in Cambridge, which includes a laboratory and pilot production facility. This will put Pact in a better position to push into the commercialisation phase and scale up production to meet demand (which is anticipated to be global).
What Does This Mean For Your Organisation?
The success of Pact and its innovative biomaterial, Oval, marks a significant shift towards sustainable alternatives in industries traditionally dependent on leather. As environmental concerns become paramount, Oval’s ability to mimic leather while drastically reducing water usage and CO2 emissions could position it as a game-changer across fashion, interiors, and even automotive design. By offering a material that combines durability, versatility, and sustainability, Pact is responding to the increasing demand for eco-friendly solutions without compromising on quality or creativity.
This advancement doesn’t just affect consumers and brands, but it also sends a clear message to competitors in the materials industry. As Pact scales up production and solidifies partnerships with luxury brands, traditional leather manufacturers and other synthetic alternatives may feel the pressure to innovate or risk becoming obsolete. Oval’s ability to slot seamlessly into existing supply chains, thanks to Pact’s “drop-in” manufacturing technology, may give it an edge that could force competitors to reassess their production models and environmental footprints.
As more companies adopt sustainable practices, Pact’s Oval appears to be setting a new benchmark that competitors will likely need to meet. This biomaterial’s potential to reduce millions of tonnes of CO2 emissions annually makes it not just an alternative but possibly a necessary evolution for the industry. Ultimately, Pact’s breakthrough may not only disrupt the materials market but also challenge the entire ecosystem to raise its sustainability standards and embrace innovation.
All that said, however, Pact’s Oval is still at the beginning of its journey and has yet to prove its considerable promise in the full commercialisation phase, although the signs so far appear good.
Tech Tip – Use “Ctrl + D” to Quickly Bookmark Pages in Web Browsers
Quickly bookmark important pages or documents in any browser using the Ctrl + D shortcut, making it easier to save and access key resources. Here’s how to use it to bookmark a page and choose the bookmark folder:
How To Bookmark A Page
– While on the webpage you want to bookmark, press Ctrl + D.
Choose The Bookmark Folder
– Choose a folder to save the bookmark or use the default option, and click Done.
This tip works across all major browsers, including Chrome, Edge, and Firefox.
Featured Article : Would You Be Filmed Working At Your Desk All Day?
Following a recent report in the Metro that BT is carrying out research into continuous authentication software, we look at some of the pros and cons and the issues around employees potentially being filmed all day at their desks … under the guise of cyber-security.
Why Use Continuous Authentication Technology?
Businesses use continuous authentication technology to enhance security, i.e. to add an extra layer of protection. As the name suggests, this type of software continuously verifies users throughout their session, rather than relying solely on traditional one-time authentication methods like passwords or PINs. This approach is designed to mitigate risks such as session hijacking, whereby unauthorised users gain access after the initial login, or insider threats, where someone might misuse another’s logged-in session. Continuous authentication essentially helps detect abnormal behaviour in real time, flagging up potential breaches or fraud by monitoring unique patterns such as typing style, mouse movements, or facial recognition. By integrating this technology, businesses may hope to reduce security vulnerabilities, safeguard sensitive data, and improve compliance with industry regulations, all while maintaining a seamless user experience, i.e. it’s happening automatically in the background.
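To give a flavour of the behavioural-monitoring idea described above, the sketch below compares a session’s typing rhythm against a stored per-user baseline. It is a deliberately simplified, hypothetical example – real products such as BehavioSec use far richer features and machine-learning models – and all function names, thresholds and timings here are invented for illustration.

```python
import statistics

# Hypothetical sketch of continuous authentication via keystroke dynamics:
# compare a session's inter-key timings against a stored per-user baseline
# and flag sessions whose rhythm drifts too far from it.

def build_baseline(enrol_timings):
    """Store the mean and spread of a user's inter-key intervals (seconds)."""
    return {
        "mean": statistics.mean(enrol_timings),
        "stdev": statistics.stdev(enrol_timings),
    }

def session_is_suspicious(baseline, session_timings, z_threshold=3.0):
    """Flag the session if its average typing rhythm deviates strongly."""
    session_mean = statistics.mean(session_timings)
    z = abs(session_mean - baseline["mean"]) / baseline["stdev"]
    return z > z_threshold

# Enrolment: the legitimate user types with roughly 120 ms gaps between keys.
baseline = build_baseline([0.11, 0.12, 0.13, 0.12, 0.11, 0.13])

# An imposter typing much more slowly trips the check; the owner does not.
print(session_is_suspicious(baseline, [0.40, 0.45, 0.38, 0.42]))  # True
print(session_is_suspicious(baseline, [0.12, 0.11, 0.13, 0.12]))  # False
```

In practice such checks run silently in the background, so a legitimate user never notices them – which is precisely the “seamless” quality vendors promote.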
BT Trialling Continuous Authentication Technology
BT is reported to be trialling BehavioSec’s behavioural biometrics technology at its Adastral Park science campus near Ipswich. This software is used for continuous authentication, where it monitors users’ unique behaviour patterns, such as how they type, move the mouse, or interact with their devices, to confirm their identity. However, BehavioSec’s technology doesn’t usually require the use of a camera, i.e. the user doesn’t need to be filmed by a webcam all day. Instead, it can rely on analysis of a user’s behaviour patterns by looking at factors such as keystroke dynamics, mouse movements, touchscreen gestures, and device interaction patterns (e.g. how the user holds their phone, scrolls through pages, or interacts with specific applications). In the recent Metro story, however, the reporter witnessed a demonstration of the system that did use facial recognition and required continuous filming of the user with a webcam/front-facing camera to detect whether the user’s face was consistent with expected dimensions.
BT is exploring this technology as part of its broader efforts to improve cybersecurity, particularly in response to the growing threat of cyberattacks and data breaches. The trials of BehavioSec’s behavioural biometrics technology are part of BT’s research into how it can use innovative technology to better protect digital assets and infrastructure, especially in enterprise and government contexts. For example, back in 2022, BT said it would be taking security to a new level so that even if an attacker obtained a device, any ongoing work session would end, locking the device, because their biometrics wouldn’t match the known biometrics of the device’s user.
Systems Using Cameras?
There are, however, many continuous authentication systems now available that require a camera to be trained on the user’s face. A few prominent examples include:
– FaceTec’s ZoOm. This is a 3D facial recognition solution that uses the front-facing camera of devices (it can use a webcam) to authenticate users, e.g. by carrying out “Liveness Checks, Face Matches & Photo ID Scans”. It’s often used in applications requiring high security, such as financial services or identity verification systems, and biometric security for remote digital identity.
– FacePhi. This (Spanish) biometric solution for facial recognition is widely used in the banking, healthcare, and fintech sectors for secure access to mobile banking apps and fraud prevention. The software uses a camera to identify users and offers continuous authentication by tracking facial features during interactions.
– IDEMIA’s VisionPass. This system combines 3D facial recognition with AI and uses cameras to recognise faces and continuously verify identities, even in challenging conditions like low light or with face masks. It’s generally deployed in secure facilities, airports, and government buildings for access control and ongoing authentication.
– Trueface. This AI-powered facial recognition technology integrates with existing security systems, such as cameras in corporate offices, to provide continuous authentication. Trueface can recognise and track users in real-time, improving access security and is used in corporate offices, airports, and law enforcement for continuous identification and authentication.
Other popular systems that use similar methods include Clearview AI, Neurotechnology’s Face Verification System, AnyVision, and ZKTeco’s FaceKiosk.
It’s also worth noting here that the “big tech” companies’ versions, such as Apple’s Face ID, Google’s Face Unlock (Pixel devices), and Microsoft’s Windows Hello, are also facial recognition-based authentication systems that are classed as continuous authentication technology. However, for the purposes of this overview, we’re focusing on the kinds of systems that businesses may use for their own employees.
Issues
The usage of facial recognition (e.g. by law enforcement) has had its share of criticism in recent years. However, the thought of businesses using a camera to continuously film an employee, even if it may be for security purposes, such as continuous authentication, raises several serious issues and concerns. For example:
– An invasion of privacy. With constant surveillance, employees may feel that their privacy is being violated. Cameras can capture not only work-related activities but also personal moments, which may lead to discomfort and a sense of being micromanaged. Cameras might inadvertently record personal or sensitive information, such as confidential discussions, which could be accessed or potentially misused.
– The effect on employee trust and morale. Continuous filming can create an atmosphere of distrust between employees and employers. Workers may feel they are being monitored for reasons beyond security, leading to fear, lower morale and disengagement (and even ‘quiet quitting’).
– Psychological stress. Constant camera surveillance can lead to stress or anxiety among employees, affecting their overall well-being and productivity, which could obviously be counterproductive for the company.
– Data security and misuse. For example, video recordings of employees can contain sensitive biometric data, which, if compromised through a data breach, could have serious consequences. Biometric data is immutable, i.e. once stolen, it cannot be changed (like a password). There is a risk of video footage being misused, either by internal parties or external hackers. The footage could be exploited for purposes other than security, such as inappropriate monitoring of behaviour or harassment.
– Ethical concerns. These could arise if employees are not fully aware of the extent and purpose of the surveillance, or if they feel coerced into accepting it as a condition of employment. Also, filming employees all day can be viewed as excessive (overreach), especially if less invasive alternatives exist. Monitoring behaviour to this degree may cross ethical boundaries of acceptable workplace practices.
– Legal implications. Many regions have strict privacy laws (e.g. GDPR in Europe, CCPA in California) that require companies to obtain explicit consent for continuous surveillance and ensure the proportionality and necessity of such measures. Non-compliance could lead to legal consequences, fines, or lawsuits for a business. In some countries (or US states, for example) there are labour laws that protect employees from invasive workplace monitoring. Continuous surveillance may violate these protections if it is deemed too intrusive.
– The potential for bias and discrimination. Among other things, this could include algorithmic bias. If the continuous authentication system relies on facial recognition, there is a risk of bias against certain groups, such as racial minorities or those with disabilities, due to known issues with facial recognition accuracy across diverse demographics. Also, employees may worry that the surveillance data could be used for purposes other than security, such as evaluating performance, which could lead to discrimination or unfair treatment.
– Technical reliability, e.g. false positives/negatives. Continuous authentication systems relying on cameras may fail, leading to false positives (unauthorised users being granted access) or false negatives (legitimate users being denied access). This can disrupt work and erode trust in the system.
While continuous authentication aims to enhance security, using cameras to film employees all day raises significant challenges. Companies need to carefully balance security needs with privacy rights, ethical considerations, and legal compliance to avoid potential negative consequences. For example, in 2020, H&M (the Swedish multinational clothing retailer) was fined €35.3 million by the Hamburg Data Protection Authority in Germany for violating GDPR through excessive and invasive surveillance of employees.
What Is ‘Emotional Analysis’ And Why Is It Causing Concern?
Some continuous authentication software can now use ‘emotional analysis’. This refers to the use of AI to detect and interpret human emotions through cues like facial expressions, voice tones, or body language. Its purpose is to monitor and assess workers’ emotional states, such as stress, engagement, or satisfaction. It could help a business by providing insights into employee well-being and productivity, identifying signs of burnout or disengagement, and enabling management to respond proactively to improve workplace morale, increase efficiency, and enhance overall performance through better support and tailored interventions.
However, its usage also raises significant concerns around privacy, accuracy, and bias. The technology is often inaccurate, particularly across different demographics, leading to misinterpretation of emotions. Its use in workplaces for employee monitoring can create a sense of invasion and stress, eroding trust and morale. There are also ethical and legal issues, with fears of misuse for micromanagement or even manipulation of behaviour, making its widespread deployment highly controversial.
Susannah Copson, legal and policy officer with civil liberties and privacy campaigning organisation Big Brother Watch, has described ‘emotion recognition technology’ as “pseudoscientific AI surveillance” and has called for it to be banned.
What Do Rights Organisations Say?
Big Brother Watch is strongly opposed to the unchecked growth of workplace surveillance tools, calling them an invasion of privacy, harmful to employee well-being, and in need of stricter regulation to protect workers’ rights. Big Brother Watch recently held an event at the UK Labour Party conference to launch its report on workplace surveillance in the UK, highlighting its increasing use by employers and its negative effects on employees.
Big Brother Watch argues that workplace surveillance technologies, such as keystroke logging and AI-powered emotional analysis, invade employee privacy, erode trust, enable micromanagement, and harm mental health, potentially violating privacy laws like GDPR, while calling for stricter regulation to protect workers’ rights.
How Much Has Workplace Surveillance Increased?
A recent report by ExpressVPN, titled the “2023 State of Workplace Surveillance,” highlights a significant increase in workplace surveillance. Some key findings include:
– 78 per cent of employers are using some form of employee monitoring tools in 2023, up from 60 per cent before the COVID-19 pandemic.
– 57 per cent of employers implemented new surveillance tools specifically due to remote work conditions caused by the pandemic.
– 41 per cent of companies now use software to track keystrokes, screenshots, or record the activity of employees’ screens.
– 32 per cent of employers monitor employee emails and messages, while 25 per cent track employee location using GPS or IP data.
A Growing Market
This surge in monitoring reflects the growing reliance on digital surveillance tools to manage remote workforces. Regarding the market for identity and access management (IAM) and cybersecurity solutions, Gartner reported in its “Market Guide for User Authentication” that continuous authentication is gaining traction due to increasing concerns about cybersecurity and the limitations of traditional login methods.
A MarketsandMarkets report has also noted that the global user authentication market, which includes continuous authentication solutions, is projected to grow from $13.9 billion in 2022 to $25.2 billion by 2027. A 2022 Verizon Data Breach Investigations Report also noted that 61 per cent of breaches involve stolen credentials and pushed companies to adopt continuous authentication as a preventive measure.
What Can Employees Do?
If employees are concerned about continuous camera monitoring such as that used with some continuous verification systems, the (realistic) options they have are to:
– Review company policies to understand the purpose and limits of the surveillance.
– Raise concerns with HR or management to request less invasive alternatives, like fingerprint or password-based methods.
– Seek legal advice if monitoring violates privacy laws, or report it to a regulatory body like the ICO (in the UK).
– Consult with a union to negotiate privacy protections, if applicable.
– Document their issues for potential disputes and familiarise themselves with their rights under local privacy and employment laws.
What Does This Mean For Your Business?
The rise of continuous authentication software, particularly that using facial recognition and behavioural biometrics, highlights the tension between advancing cybersecurity and respecting employee privacy.
While the primary aim of these systems may be to offer ongoing, seamless security by monitoring users throughout their work sessions, the methods employed, such as continuous video surveillance or behavioural tracking, have raised significant ethical and privacy concerns. The promise of enhanced protection against cyberattacks, session hijacking, and insider threats is compelling, especially in industries where data security is paramount. However, the potential downsides of this technology can’t be ignored.
One of the key concerns is the invasion of privacy. Employees may feel uncomfortable or even violated if they know that cameras or other tracking mechanisms are monitoring their every move. The potential for these systems to inadvertently capture non-work-related activities, or even sensitive personal interactions, adds to the unease. Continuous surveillance risks creating an atmosphere of distrust between employers and employees, fostering a sense of being constantly watched, which could have a detrimental effect on morale. In extreme cases, this might lead to disengagement, lower productivity, or even a rise in ‘quiet quitting,’ as employees withdraw emotionally from their work due to feeling over-monitored.
Also, there are concerns about the psychological impact of constant surveillance. The knowledge that a camera or biometric system is perpetually tracking your behaviour can lead to stress, anxiety, and a feeling of being under perpetual scrutiny. This could, paradoxically, undermine the productivity gains that continuous authentication aims to protect. Employees working under these conditions might find it difficult to focus or perform optimally, especially if they perceive the surveillance as intrusive or excessive.
In addition to these privacy and security concerns, there are ethical and legal considerations. In many jurisdictions, privacy laws require companies to obtain explicit consent for such monitoring and ensure that the measures are proportionate and necessary. Failure to comply with these regulations could lead to hefty fines or legal action (as seen in the case of H&M’s €35.3 million fine in Germany).
There are also the issues of bias and discrimination. Facial recognition technologies have been shown to be less accurate across diverse demographic groups, potentially leading to unfair treatment of certain employees. If continuous authentication systems generate false positives or negatives due to these biases, it could create additional hurdles for employees from minority groups, further entrenching workplace inequalities. There is also the risk that the data gathered could be used for purposes beyond security, such as monitoring productivity or evaluating performance, which could lead to unfair assessments or discrimination.
Despite these challenges, it is clear why businesses are keen to explore continuous authentication technology. The ever-present threat of cyberattacks, data breaches, and insider threats has made it essential for organisations to find new ways to secure their digital assets. Continuous authentication offers a promising solution by providing ongoing verification without disrupting the user experience. However, businesses must tread carefully, ensuring that these systems are deployed in ways that respect employee privacy, comply with legal requirements, and avoid creating a toxic work environment.
As continuous authentication (seemingly inevitably) becomes more widespread, it will be crucial for businesses to engage in transparent communication with employees about how these systems work, why they are being implemented, and what safeguards are in place to protect their privacy. Offering alternative, less invasive methods, such as fingerprint recognition or password-based systems, may help alleviate some concerns. Ultimately, the successful adoption of continuous authentication will depend on striking the right balance between robust security measures and the protection of employee rights and well-being.
Tech Insight : The Rising Cost Of API & Bot Attacks
Following a recent report by cyber-security company Imperva about the rising costs to businesses of bot attacks and vulnerable APIs, we look at why it’s happening and what can be done.
Vulnerable APIs & Bot Attacks Costing Businesses $186 Billion
Imperva’s report was based on Marsh McLennan Cyber Risk Intelligence Centre’s study of data from 161,000 cybersecurity incidents related to vulnerable APIs and bot attacks. The key finding was that businesses face an estimated annual economic burden of up to $186 billion due to vulnerable APIs and automated bot attacks. The study also found that these two security threats often work in tandem, are becoming increasingly prevalent, and pose significant risks to organisations worldwide.
APIs
An API (Application Programming Interface) is a set of rules and protocols that allows different software applications to communicate with each other. Businesses adopt and use APIs because they enable seamless integration between apps and services, improving efficiency and automation. MuleSoft figures show that 99 per cent of organisations have already embraced APIs. An API can, for example, connect a company’s CRM system with its email marketing platform, thereby automatically syncing customer data. APIs also enhance customer experiences, like allowing users to log in via their Google or Facebook accounts. They help with scalability, such as a small business using cloud storage services via APIs to expand without building infrastructure. By using APIs for payments (like Stripe) or shipping (like FedEx), businesses can quickly innovate and offer services without developing them in-house. APIs also enable secure data sharing, such as a fintech company offering real-time stock market data through an API, while fostering partnerships, like travel booking sites combining flight, hotel, and rental services from different providers. All of this makes businesses more agile, efficient, and competitive in a connected world.
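As a purely illustrative sketch of the CRM-to-email-platform example above (the endpoint names, data, and functions are all hypothetical stand-ins, with in-memory stubs in place of live HTTP calls and authentication):

```python
# Illustrative sketch only: in production you would call each vendor's
# documented REST API with a real HTTP client and proper credentials.
# Here, stub functions stand in for the two remote services.

def fetch_crm_contacts():
    """Stand-in for something like GET /crm/contacts on a CRM API."""
    return [
        {"email": "ada@example.com", "name": "Ada"},
        {"email": "alan@example.com", "name": "Alan"},
    ]

def push_to_mailing_list(contacts, mailing_list):
    """Stand-in for POSTing new members to an email-marketing API."""
    for contact in contacts:
        if contact["email"] not in mailing_list:  # avoid duplicate syncs
            mailing_list[contact["email"]] = contact["name"]
    return mailing_list

# The "integration": pull from one service, push into the other.
mailing_list = {}
push_to_mailing_list(fetch_crm_contacts(), mailing_list)
print(sorted(mailing_list))  # ['ada@example.com', 'alan@example.com']
```

The point of the sketch is simply that each side exposes a small, well-defined contract, which is what lets businesses wire systems together without owning either one.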
The financial benefits are illustrated by MuleSoft figures suggesting that many organisations using APIs report increased revenues (in some cases up to a 35 per cent increase), as well as reduced operational costs.
Why Are APIs Vulnerable?
APIs are particularly vulnerable because they expose numerous endpoints, each acting as a potential entry point for attackers. As businesses increasingly adopt APIs to improve agility and efficiency, the number of these exposed endpoints has surged—on average, enterprises managed 613 API endpoints in 2023. This rapid expansion has created a larger attack surface, making APIs an attractive target for cybercriminals.
Also, with enterprise sites handling 1.5 billion API calls annually, the sheer volume makes the likelihood of encountering vulnerabilities greater.
What Kind of Vulnerabilities?
The kinds of business-logic vulnerabilities found in APIs include weak authentication, insufficient access controls, and improper data validation, all of which can allow attackers to exploit these APIs, leading to data breaches or system compromises.
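To make the “insufficient access controls” point concrete, here is a hedged, purely illustrative sketch (all names and data are invented) contrasting an endpoint that trusts a client-supplied account ID with one that checks ownership – the classic broken object-level authorisation flaw:

```python
# Invented example data standing in for an API's backing store.
ACCOUNTS = {"acct-1": {"owner": "alice", "balance": 100},
            "acct-2": {"owner": "bob", "balance": 250}}

def get_account_vulnerable(session_user, account_id):
    """Flawed: returns any account the caller names, regardless of owner."""
    return ACCOUNTS.get(account_id)

def get_account_hardened(session_user, account_id):
    """Fixed: validates that the requested account belongs to the caller."""
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != session_user:
        return None  # deny rather than leak another user's data
    return account

# "alice" probing for bob's account succeeds against the flawed endpoint
# but is refused by the hardened one.
print(get_account_vulnerable("alice", "acct-2"))  # leaks bob's record
print(get_account_hardened("alice", "acct-2"))    # None
```

A bot can enumerate thousands of such IDs per minute, which is why this family of flaw pairs so naturally with automated attacks.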
What’s The Link Between Vulnerable APIs and Bot Attacks?
Put simply, the link between vulnerable APIs and bot attacks is that:
– Greater API adoption (and a growing reliance upon them by organisations) has expanded the attack surface.
– Cybercriminals have realised that automated bots are an effective and inexpensive way to attack the increasing number of vulnerable APIs, due to their scalability, speed, and efficiency. Imperva, for example, highlights the fact that even low-skilled attackers can launch sophisticated bot attacks.
– Bots can quickly exploit multiple API endpoints – averaging 613 per enterprise in 2023 (Marsh McLennan) – making them ideal for large-scale attacks. Their low cost and 24/7 operation allow cybercriminals to probe for weak spots continuously, extracting sensitive data, executing fraudulent transactions, or launching disruptive denial-of-service attacks. Also, vulnerable APIs often lack strong security measures, thereby making them easy targets for bots, which can monetise stolen data or cause significant disruptions. As API adoption grows, bot attacks offer cyber-criminals a high-reward, low-effort method for exploiting these weaknesses, contributing to billions in annual financial losses.
This is why the Marsh McLennan Cyber Risk Intelligence Centre figures featured in the report show an 88 per cent rise in bot-related security incidents in 2022, followed by another 28 per cent increase in 2023. In essence, the more vulnerable APIs there are, the more bots are being used to attack them, and as APIs become more integral to business, they become prime targets for bot attacks.
More Sophisticated
One other key point highlighted in Imperva’s report is that the increasing sophistication of bad bots is a growing concern. For example, Imperva reports that over 60 per cent of bad bots detected today are classified as evasive, i.e. they use a mix of moderate and advanced techniques to carry out attacks. Worryingly, these bots can now mimic human behaviour, leveraging AI and machine learning to adapt and evolve over time. They can also delay requests and bypass common security measures like CAPTCHAs, making them harder to detect. This allows them to launch significant attacks with fewer requests, thereby reducing the typical “noise” associated with bot campaigns, making their actions stealthier and more effective.
The Financial Toll
As mentioned at the beginning of this article, bot attacks on APIs are taking a significant financial toll on organisations – up to $186 billion annually, with API-related breaches accounting for up to $87 billion of that each year, an increase of $12 billion since 2021. Automated API abuse by bots alone now accounts for a massive $17.9 billion of these losses each year, illustrating the immense economic impact of API vulnerabilities combined with bot-driven attacks.
Biggest Companies At Highest Risk
Research appears to show that large enterprises (those with over $100 billion in revenue) face the greatest risk, with bot-related incidents making up as much as 14 per cent of all their cyber incidents. Imperva's report attributes their status as prime targets to their high visibility, extensive digital presence, and valuable assets.
Global Vulnerability
Imperva’s report also highlights the global nature of API and bot attack threats, with countries like Brazil, France, Japan, and India now seeing high percentages of security incidents related to insecure APIs and bot activity. Although the proportion of such events in the United States is lower compared to these countries, the U.S. still accounts for 66 per cent of all reported incidents, highlighting its significant exposure to these growing threats.
What Does This Mean For Your Business?
The financial and operational costs of API and bot attacks are escalating at an alarming rate. With global losses reaching as high as $186 billion annually, these threats are becoming a major concern for organisations of all sizes. The rapid adoption of APIs, while improving efficiency and agility, has also expanded the attack surface, making businesses more vulnerable. Automated bots, with their scalability and increasing sophistication, are exploiting these vulnerabilities at an unprecedented rate. Imperva's report, featuring the findings of Marsh McLennan Cyber Risk Intelligence Centre's study, appears to illustrate the severity of the situation. This situation also appears to be worsening as bots become more evasive, using advanced techniques like AI and machine learning to mimic human behaviour, evade detection, and carry out stealthy, highly effective attacks.
Larger enterprises, with extensive digital infrastructures, are particularly exposed, with bot-related incidents accounting for up to 14 per cent of all cyber incidents. These companies face significant financial risks due to their high-value assets and complex API ecosystems, making them prime targets for automated bot attacks. That said, smaller businesses are also frequently targeted due to potentially weaker security measures, meaning that businesses of all sizes should sit up and take notice.
It also appears that this threat is global, e.g. countries like Brazil, France, Japan, and India have experienced surges in API and bot-related incidents (although the U.S. remains the most affected).
As the digital landscape evolves, the overlap between API and bot vulnerabilities highlights the critical need for businesses and organisations of all kinds to adopt proactive, comprehensive security strategies. Businesses must tailor their defences to the specific risks associated with their size and complexity. For example, large enterprises managing hundreds of API endpoints need robust API security testing frameworks that regularly assess vulnerabilities, ensuring all endpoints are secure. This could include adopting authentication mechanisms like OAuth 2.0 or implementing rate limiting to restrict how many requests can be made to the API in a short period, which helps prevent bot-driven attacks.
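To illustrate the rate-limiting idea, here is a minimal sliding-window limiter sketched in Python (class and parameter names are illustrative, not from any particular product); in practice, businesses would typically use an API gateway or a dedicated library rather than hand-rolled code:

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Sliding-window rate limiter: allow at most `max_requests`
    per client within any `window_seconds` period."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id, now=None):
        """Return True if this request is allowed, False if throttled."""
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.max_requests:
            q.append(now)
            return True
        return False
```

Throttling like this blunts exactly the bot behaviour described above: a scripted client that hammers an endpoint is cut off after a handful of requests, while normal human usage passes through unaffected.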
Smaller businesses may want to focus on securing their APIs with proper encryption and multi-factor authentication to minimise exposure. They can deploy web application firewalls (WAFs) with bot management features, such as those provided by services like Cloudflare or Imperva, to detect and block malicious bot traffic before it reaches critical endpoints.
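Commercial WAF bot management draws on far richer signals (device fingerprints, behavioural analysis, IP reputation), but a crude Python sketch of the underlying idea – screening requests before they reach an API endpoint, using an illustrative blocklist of automation user-agents – might look like this:

```python
# Illustrative substrings of user agents associated with common
# automation tools; a real WAF rule set is far more extensive.
BLOCKED_AGENT_SUBSTRINGS = ("curl", "python-requests", "scrapy", "headless")


def screen_request(headers: dict) -> bool:
    """Return True if the request may proceed, False to block it.

    A crude stand-in for a WAF bot-management rule: block requests
    with no User-Agent header or one matching a known automation tool.
    """
    agent = headers.get("User-Agent", "").lower()
    if not agent:  # many simple bots omit a user agent entirely
        return False
    if any(s in agent for s in BLOCKED_AGENT_SUBSTRINGS):
        return False
    return True
```

Note that evasive bots can spoof user agents, which is why this sort of check is only one layer among several in a real bot-management product.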
Both small and large businesses should adopt continuous monitoring for abnormal behaviour and invest in AI-powered security tools that detect patterns characteristic of bot activity. Also, penetration testing should be part of regular security audits to simulate attacks on API endpoints, exposing any weaknesses before they can be exploited by cybercriminals.
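One simple form of monitoring for abnormal behaviour is flagging request volumes that deviate sharply from a historical baseline. The Python sketch below uses a z-score check (the threshold of three standard deviations is an illustrative assumption; AI-powered tools use far more sophisticated models):

```python
import statistics


def is_anomalous(history, current, threshold=3.0):
    """Flag a request count that deviates more than `threshold`
    standard deviations from the historical mean.

    `history` is a sequence of past per-interval request counts,
    e.g. requests per minute to one API endpoint.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:  # flat history: any change is notable
        return current != mean
    return abs(current - mean) / stdev > threshold
```

An alert fired by a check like this could, for instance, trigger tighter rate limits or a manual review before a bot campaign escalates.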