Sustainability-in-Tech : UK Startup Makes ‘Lab’ Leather
Cambridge-based startup ‘Pact’ has raised £9 million in seed-round funding to expand its factory space and scale up production of its “world-first” biomaterial – a skin made from collagen that’s a convincing alternative to leather.
Oval
Oval, developed by Pact, is a pioneering biomaterial made from natural collagen, designed to be a sustainable and scalable alternative to traditional materials like leather. Pact, for example, describes Oval as “Capturing the strength, feel, stretch and durability of heritage materials through upcycled collagen.”
The collagen used in Oval is sourced from ethical and environmentally friendly suppliers, often from surplus or recycled materials such as those used in cosmetics.
Oval not only looks like leather, but it also behaves like leather, i.e. it responds to scratches, water, and sunlight in a very similar way.
What’s Collagen?
Natural collagen, the biomaterial that Oval is made from, is a protein found in the skin, bones, and connective tissues of animals, providing structural support and elasticity. It is often used in cosmetics for its ability to promote skin hydration, elasticity, and repair, making it popular in anti-aging products. Collagen’s biocompatibility and strength make it an ideal basis for sustainable biomaterials like Oval, which mimics leather while reducing environmental impact. The collagen used to make Oval is surplus or recycled collagen from the cosmetics industry, with some herbal extracts, oils, and minerals added. Pact says: “Our collagen is a natural byproduct used in high-end cosmetics, skincare and pharmaceuticals”.
Customisable
Oval is versatile and customisable, allowing designers to create a wide range of textures, patterns, and colours. The material is finished using techniques traditionally applied to leather, making it ideal for luxury fashion, footwear, interiors, and more.
Chemical-Free + Reduced CO2
Its production is chemical-free, requires less water, and has a significantly lower carbon footprint compared to traditional leather production. Pact estimates that incorporating Oval in place of leather and synthetic alternatives could prevent 4.8 million tonnes of CO2 emissions annually!
Patented
Pact says that in the production of Oval, a patented process is used to transform cosmetic-grade collagen into collagen skins. Pact says the skins are then “enriched with all-natural ingredients, then enhanced using time-honoured finishing techniques”. Pact sums up the key benefits of Oval, saying “Oval radically reduces environmental impact and inspires unlimited design possibilities”.
Who’s It For?
Pact CEO, Yudí Ding, highlights how the company has already partnered with luxury maisons and how the new biomaterial has been embraced by leading fashion houses and groups globally. Investors in this seed round included Hoxton Ventures, ReGen Ventures, Celsius Industries (formerly Untitled) and Polytechnique Ventures.
Pact has also developed “drop in” manufacturing technology, enabling clients to produce Oval directly in their own supply chains.
Funding To Scale-Up
The £9 million of funding raised in this seed round has enabled Pact to invest in a new 13,820 sq ft headquarters in Cambridge, which includes a laboratory and pilot production facility. This will put Pact in a better position to push into the commercialisation phase and expand and scale up production to meet demand (which is anticipated to be global).
What Does This Mean For Your Organisation?
The success of Pact and its innovative biomaterial, Oval, marks a significant shift towards sustainable alternatives in industries traditionally dependent on leather. As environmental concerns become paramount, Oval’s ability to mimic leather while drastically reducing water usage and CO2 emissions could position it as a game-changer across fashion, interiors, and even automotive design. By offering a material that combines durability, versatility, and sustainability, Pact is responding to the increasing demand for eco-friendly solutions without compromising on quality or creativity.
This advancement doesn’t just affect consumers and brands, but it also sends a clear message to competitors in the materials industry. As Pact scales up production and solidifies partnerships with luxury brands, traditional leather manufacturers and producers of synthetic alternatives may feel the pressure to innovate or risk becoming obsolete. Oval’s ability to slot seamlessly into existing supply chains, thanks to Pact’s “drop-in” manufacturing technology, may give it an edge that could force competitors to reassess their production models and environmental footprints.
As more companies adopt sustainable practices, Pact’s Oval appears to be setting a new benchmark that competitors will likely need to meet. This biomaterial’s potential to reduce millions of tonnes of CO2 emissions annually makes it not just an alternative but possibly a necessary evolution for the industry. Ultimately, Pact’s breakthrough may not only disrupt the materials market but also challenge the entire ecosystem to raise its sustainability standards and embrace innovation.
All that said, however, Pact’s Oval is still at the beginning of its journey and has yet to live up to its considerable promise in the full commercialisation phase, although the signs so far appear good.
Tech Tip – Use “Ctrl + D” to Quickly Bookmark Pages in Web Browsers
Quickly bookmark important pages or documents in any browser using the Ctrl + D shortcut, making it easier to save and access key resources. Here’s how to use it to bookmark a page and choose the bookmark folder:
How to Bookmark a Page
– While on the webpage you want to bookmark, press Ctrl + D.
Choose the Bookmark Folder
– Choose a folder to save the bookmark or use the default option, and click Done.
This tip works across all major browsers, including Chrome, Edge, and Firefox.
Featured Article : Would You Be Filmed Working At Your Desk All Day?
Following a recent report in the Metro that BT is carrying out research into continuous authentication software, we look at some of the pros and cons and the issues around employees potentially being filmed all day at their desks … under the guise of cyber-security.
Why Use Continuous Authentication Technology?
Businesses use continuous authentication technology to enhance security, i.e. to add an extra layer of protection. As the name suggests, this type of software continuously verifies users throughout their session, rather than relying solely on traditional one-time authentication methods like passwords or PINs. This approach is designed to mitigate risks such as session hijacking, whereby unauthorised users gain access after the initial login, or insider threats where someone might misuse another’s logged-in session. Continuous authentication essentially helps detect abnormal behaviour in real time, flagging up potential breaches or fraud by monitoring unique patterns such as typing style, mouse movements, or facial features. By integrating this technology, businesses may hope to reduce security vulnerabilities, safeguard sensitive data, and improve compliance with industry regulations, all while maintaining a seamless user experience, i.e. it’s happening automatically in the background.
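As a rough illustration of how behavioural continuous authentication can work, the sketch below (a simplified, hypothetical example, not any vendor’s actual algorithm) scores a session’s typing rhythm against a user’s enrolled baseline and flags the session for re-authentication when the rhythm drifts too far:

```python
import statistics

def keystroke_anomaly_score(session_intervals, baseline_intervals):
    """Compare a session's inter-key timing intervals (in ms) against a
    user's enrolled baseline. Returns a z-score-like distance: higher
    means the typing rhythm deviates more from the known user."""
    mu = statistics.mean(baseline_intervals)
    sigma = statistics.stdev(baseline_intervals)
    session_mu = statistics.mean(session_intervals)
    return abs(session_mu - mu) / sigma if sigma else 0.0

def should_reauthenticate(score, threshold=3.0):
    # Flag the session when the typing rhythm drifts beyond the threshold;
    # the threshold value here is purely illustrative.
    return score > threshold

# Example: a baseline enrolled for the legitimate user, then two sessions.
baseline = [100, 110, 105, 115, 108]
same_user = [102, 109, 112]      # similar rhythm -> no flag
imposter = [250, 240, 260]       # much slower rhythm -> flagged
```

Real systems combine many more signals (mouse dynamics, device interaction patterns) and update the baseline over time, but the principle of continuously comparing live behaviour against a known profile is the same.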
BT Trialling Continuous Authentication Technology
BT is reported to be trialling BehavioSec’s behavioural biometrics technology at its Adastral Park science campus near Ipswich. This software is used for continuous authentication, where it monitors users’ unique behaviour patterns, such as how they type, move the mouse, or interact with their devices, to confirm their identity. However, in the case of BehavioSec’s technology, it doesn’t usually require the use of a camera, i.e. the user doesn’t need to be filmed by a webcam all day. Instead, it can rely on analysis of a user’s behaviour patterns by looking at factors such as keystroke dynamics, mouse movements, touchscreen gestures, and device interaction patterns (e.g. how the user holds their phone, scrolls through pages, or interacts with specific applications). In the recent Metro story, however, the reporter witnessed a demonstration of the system that did use facial recognition and required continuous filming of the user with a webcam/front-facing camera to detect whether the user’s face was consistent with expected dimensions.
BT is exploring this technology as part of its broader efforts to improve cybersecurity, particularly in response to the growing threat of cyberattacks and data breaches. The trials of BehavioSec’s behavioural biometrics technology are part of BT’s research into how it can use innovative technology to better protect digital assets and infrastructure, especially in enterprise and government contexts. For example, back in 2022, BT said it would be taking security to a new level so that even if an attacker obtained a device, any ongoing work session would end, locking the device, because the attacker’s biometrics wouldn’t match the device user’s known biometrics.
Systems Using Cameras?
There are, however, many such continuous authentication systems now available that require a camera to be trained on a user’s face. A few prominent examples include:
– FaceTec’s ZoOm. This is a 3D facial recognition solution that uses the front-facing camera of devices (it can use a webcam) to authenticate users, e.g. by carrying out “Liveness Checks, Face Matches & Photo ID Scans”. It’s often used in applications requiring high security, such as financial services or identity verification systems, and biometric security for remote digital identity.
– FacePhi. This (Spanish) biometric solution for facial recognition is widely used in the banking, healthcare, and fintech sectors for secure access to mobile banking apps and fraud prevention. The software uses a camera to identify users and offers continuous authentication by tracking facial features during interactions.
– IDEMIA’s VisionPass. This system combines 3D facial recognition with AI and uses cameras to recognise faces and continuously verify identities, even in challenging conditions like low light or with face masks. It’s generally deployed in secure facilities, airports, and government buildings for access control and ongoing authentication.
– Trueface. This AI-powered facial recognition technology integrates with existing security systems, such as cameras in corporate offices, to provide continuous authentication. Trueface can recognise and track users in real-time, improving access security and is used in corporate offices, airports, and law enforcement for continuous identification and authentication.
Other popular systems that use similar methods include Clearview AI, Neurotechnology’s Face Verification System, AnyVision, and ZKTeco’s FaceKiosk.
It’s also worth noting here that the “big tech” companies’ versions, such as Apple’s Face ID, Google’s Face Unlock (on Pixel devices), and Microsoft’s Windows Hello, are also facial recognition-based authentication systems that are classed as continuous authentication technology. However, for the purposes of this overview, we’re focusing on the kinds of systems that businesses may use for their own employees.
Issues
The use of facial recognition (e.g. by law enforcement) has had its share of criticism in recent years. However, the thought of businesses using a camera to continuously film an employee, even if it may be for security purposes such as continuous authentication, raises several serious issues and concerns. For example:
– An invasion of privacy. With constant surveillance, employees may feel that their privacy is being violated. Cameras can capture not only work-related activities but also personal moments, which may lead to discomfort and a sense of being micromanaged. Cameras might inadvertently record personal or sensitive information, such as confidential discussions, which could be accessed or potentially misused.
– The effect on employee trust and morale. Continuous filming can create an atmosphere of distrust between employees and employers. Workers may feel they are being monitored for reasons beyond security, leading to an atmosphere of fear, plus a decrease in morale and engagement (and ‘quiet quitting’).
– Psychological stress. Constant camera surveillance can lead to stress or anxiety among employees, affecting their overall well-being and productivity, which could obviously be counterproductive for the company.
– Data security and misuse. For example, video recordings of employees can contain sensitive biometric data, which, if compromised through a data breach, could have serious consequences. Biometric data is immutable, i.e. once stolen, it cannot be changed (like a password). There is a risk of video footage being misused, either by internal parties or external hackers. The footage could be exploited for purposes other than security, such as inappropriate monitoring of behaviour or harassment.
– Ethical concerns. These could arise if employees are not fully aware of the extent and purpose of the surveillance, or if they feel coerced into accepting it as a condition of employment. Also, filming employees all day can be viewed as excessive (overreach), especially if less invasive alternatives exist. Monitoring behavior to this degree may cross ethical boundaries of acceptable workplace practices.
– Legal implications. Many regions have strict privacy laws (e.g. GDPR in Europe, CCPA in California) that require companies to obtain explicit consent for continuous surveillance and ensure the proportionality and necessity of such measures. Non-compliance could lead to legal consequences, fines, or lawsuits for a business. In some countries (or US states, for example) there are labour laws that protect employees from invasive workplace monitoring. Continuous surveillance may violate these protections if it is deemed too intrusive.
– The potential for bias and discrimination. Among other things, this could include algorithmic bias. If the continuous authentication system relies on facial recognition, there is a risk of bias against certain groups, such as racial minorities or those with disabilities, due to known issues with facial recognition accuracy across diverse demographics. Also, employees may worry that the surveillance data could be used for purposes other than security, such as evaluating performance, which could lead to discrimination or unfair treatment.
– Technical reliability, e.g. false positives/negatives. Continuous authentication systems relying on cameras may fail, leading to false positives (unauthorised users being granted access) or false negatives (legitimate users being denied access). This can disrupt work and erode trust in the system.
While continuous authentication aims to enhance security, using cameras to film employees all day raises significant challenges. Companies need to carefully balance security needs with privacy rights, ethical considerations, and legal compliance to avoid potential negative consequences. For example, in 2020, H&M (the Swedish multinational clothing retailer) was fined €35.3 million by the Hamburg Data Protection Authority in Germany for violating GDPR due to excessive and invasive surveillance of employees.
What Is ‘Emotional Analysis’ And Why Is It Causing Concern?
Some continuous authentication software can now use ‘emotional analysis’. This refers to the use of AI to detect and interpret human emotions through cues like facial expressions, voice tones, or body language. Its purpose is to monitor and assess workers’ emotional states, such as stress, engagement, or satisfaction. It could help a business by providing insights into employee well-being and productivity, identifying signs of burnout or disengagement, and enabling management to respond proactively to improve workplace morale, increase efficiency, and enhance overall performance through better support and tailored interventions.
However, its usage also raises significant concerns around privacy, accuracy, and bias. The technology is often inaccurate, particularly across different demographics, leading to misinterpretation of emotions. Its use in workplaces for employee monitoring can create a sense of invasion and stress, eroding trust and morale. There are also ethical and legal issues, with fears of misuse for micromanagement or even manipulation of behaviour, making its widespread deployment highly controversial.
Susannah Copson, legal and policy officer with civil liberties and privacy campaigning organisation Big Brother Watch, has described ‘emotion recognition technology’ as “pseudoscientific AI surveillance” and has called for it to be banned.
What Do Rights Organisations Say?
Big Brother Watch is strongly opposed to the unchecked growth of workplace surveillance tools, calling them an invasion of privacy, harmful to employee well-being, and in need of stricter regulation to protect workers’ rights. Big Brother Watch recently held an event at the UK Labour Party conference to launch its report on workplace surveillance in the UK, highlighting its increasing use by employers and its negative effects on employees.
Big Brother Watch argues that workplace surveillance technologies, such as keystroke logging and AI-powered emotional analysis, invade employee privacy, erode trust, enable micromanagement, and harm mental health, potentially violating privacy laws like GDPR, while calling for stricter regulation to protect workers’ rights.
How Much Has Workplace Surveillance Increased?
A recent report by ExpressVPN, titled the “2023 State of Workplace Surveillance,” highlights a significant increase in workplace surveillance. Some key findings include:
– 78 per cent of employers are using some form of employee monitoring tools in 2023, up from 60 per cent before the COVID-19 pandemic.
– 57 per cent of employers implemented new surveillance tools specifically due to remote work conditions caused by the pandemic.
– 41 per cent of companies now use software to track keystrokes, screenshots, or record the activity of employees’ screens.
– 32 per cent of employers monitor employee emails and messages, while 25 per cent track employee location using GPS or IP data.
A Growing Market
This surge in monitoring reflects the growing reliance on digital surveillance tools to manage remote workforces. Regarding the market for identity and access management (IAM) and cybersecurity solutions, Gartner reported in its “Market Guide for User Authentication” that continuous authentication is gaining traction due to increasing concerns about cybersecurity and the limitations of traditional login methods.
A MarketsandMarkets report has also noted that the global user authentication market, which includes continuous authentication solutions, is projected to grow from $13.9 billion in 2022 to $25.2 billion by 2027. A 2022 Verizon Data Breach Investigations Report also noted that 61 per cent of breaches involve stolen credentials and pushed companies to adopt continuous authentication as a preventive measure.
What Can Employees Do?
If employees are concerned about continuous camera monitoring such as that used with some continuous verification systems, the (realistic) options they have are to:
– Review company policies to understand the purpose and limits of the surveillance.
– Raise concerns with HR or management to request less invasive alternatives, like fingerprint or password-based methods.
– Seek legal advice if monitoring violates privacy laws, or report it to a regulatory body like the ICO (in the UK).
– Consult with a union to negotiate privacy protections, if applicable.
– Document their issues for potential disputes and familiarise themselves with their rights under local privacy and employment laws.
What Does This Mean For Your Business?
The rise of continuous authentication software, particularly that using facial recognition and behavioural biometrics, highlights the tension between advancing cybersecurity and respecting employee privacy.
While the primary aim of these systems may be to offer ongoing, seamless security by monitoring users throughout their work sessions, the methods employed, such as continuous video surveillance or behavioural tracking, have raised significant ethical and privacy concerns. The promise of enhanced protection against cyberattacks, session hijacking, and insider threats is compelling, especially in industries where data security is paramount. However, the potential downsides of this technology can’t be ignored.
One of the key concerns is the invasion of privacy. Employees may feel uncomfortable or even violated if they know that cameras or other tracking mechanisms are monitoring their every move. The potential for these systems to inadvertently capture non-work-related activities, or even sensitive personal interactions, adds to the unease. Continuous surveillance risks creating an atmosphere of distrust between employers and employees, fostering a sense of being constantly watched, which could have a detrimental effect on morale. In extreme cases, this might lead to disengagement, lower productivity, or even a rise in ‘quiet quitting,’ as employees withdraw emotionally from their work due to feeling over-monitored.
Also, there are concerns about the psychological impact of constant surveillance. The knowledge that a camera or biometric system is perpetually tracking your behaviour can lead to stress, anxiety, and a feeling of being under perpetual scrutiny. This could, paradoxically, undermine the productivity gains that continuous authentication aims to protect. Employees working under these conditions might find it difficult to focus or perform optimally, especially if they perceive the surveillance as intrusive or excessive.
In addition to these privacy and security concerns, there are ethical and legal considerations. In many jurisdictions, privacy laws require companies to obtain explicit consent for such monitoring and ensure that the measures are proportionate and necessary. Failure to comply with these regulations could lead to hefty fines or legal action (as seen in the case of H&M’s €35.3 million fine in Germany).
There are also the issues of bias and discrimination. Facial recognition technologies have been shown to be less accurate across diverse demographic groups, potentially leading to unfair treatment of certain employees. If continuous authentication systems generate false positives or negatives due to these biases, it could create additional hurdles for employees from minority groups, further entrenching workplace inequalities. There is also the risk that the data gathered could be used for purposes beyond security, such as monitoring productivity or evaluating performance, which could lead to unfair assessments or discrimination.
Despite these challenges, it is clear why businesses are keen to explore continuous authentication technology. The ever-present threat of cyberattacks, data breaches, and insider threats has made it essential for organisations to find new ways to secure their digital assets. Continuous authentication offers a promising solution by providing ongoing verification without disrupting the user experience. However, businesses must tread carefully, ensuring that these systems are deployed in ways that respect employee privacy, comply with legal requirements, and avoid creating a toxic work environment.
As continuous authentication (seemingly inevitably) becomes more widespread, it will be crucial for businesses to engage in transparent communication with employees about how these systems work, why they are being implemented, and what safeguards are in place to protect their privacy. Offering alternative, less invasive methods, such as fingerprint recognition or password-based systems, may help alleviate some concerns. Ultimately, the successful adoption of continuous authentication will depend on striking the right balance between robust security measures and the protection of employee rights and well-being.
Tech Insight : The Rising Cost Of API & Bot Attacks
Following a recent report by cyber-security company Imperva about the rising costs to businesses of bot attacks and vulnerable APIs, we look at why it’s happening and what can be done.
Vulnerable APIs & Bot Attacks Costing Businesses $186 Billion
Imperva’s report was based on Marsh McLennan Cyber Risk Intelligence Centre’s study of data from 161,000 cybersecurity incidents related to vulnerable APIs and bot attacks. The key findings were that businesses face an annual (estimated) economic burden of up to $186 billion due to vulnerable APIs and automated bot attacks. Also, the study found that these two security threats often work in tandem, are becoming increasingly prevalent, and pose significant risks to organisations worldwide.
APIs
An API (Application Programming Interface) is a set of rules and protocols that allows different software applications to communicate with each other. Businesses adopt and use APIs because they enable seamless integration between apps and services, improving efficiency and automation. MuleSoft figures show that 99 per cent of organisations have already embraced APIs. An API can, for example, connect a company’s CRM system with its email marketing platform, thereby automatically syncing customer data. APIs also enhance customer experiences, like allowing users to log in via their Google or Facebook accounts. They help with scalability, such as a small business using cloud storage services via APIs to expand without building infrastructure. By using APIs for payments (like Stripe) or shipping (like FedEx), businesses can quickly innovate and offer services without developing them in-house. APIs also enable secure data sharing, such as a fintech company offering real-time stock market data through an API, while fostering partnerships, like travel booking sites combining flight, hotel, and rental services from different providers. This makes businesses more agile, efficient, and competitive in a connected world.
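As a toy illustration of the CRM-to-email-marketing example above, the sketch below (with invented field names, not any real platform’s API) maps a CRM contact record onto the kind of JSON payload an email-marketing API might expect; in practice this payload would then be sent over HTTPS with an API key:

```python
import json

def crm_contact_to_campaign_payload(contact: dict) -> str:
    """Map a (hypothetical) CRM contact record onto the JSON payload a
    (hypothetical) email-marketing API expects, so the two systems can
    stay in sync automatically whenever a contact changes."""
    payload = {
        "email": contact["email"],
        "merge_fields": {
            "FNAME": contact.get("first_name", ""),
            "LNAME": contact.get("last_name", ""),
        },
        # Respect the contact's marketing consent flag from the CRM.
        "status": "subscribed" if contact.get("opted_in") else "unsubscribed",
    }
    return json.dumps(payload)
```

The value of the API here is that neither system needs to know the other’s internals: each only has to agree on the payload format at the boundary.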
The financial case is illustrated by MuleSoft figures, which suggest that many organisations using APIs report increased revenues (up to a 35 per cent increase) as well as reduced operational costs.
Why Are APIs Vulnerable?
APIs are particularly vulnerable because they expose numerous endpoints, each acting as a potential entry point for attackers. As businesses increasingly adopt APIs to improve agility and efficiency, the number of these exposed endpoints has surged: on average, enterprises managed 613 API endpoints in 2023. This rapid expansion has created a larger attack surface, making APIs an attractive target for cybercriminals.
Also, with enterprise sites handling 1.5 billion API calls annually, the sheer volume makes the likelihood of encountering vulnerabilities greater.
What Kind of Vulnerabilities?
Business logic vulnerabilities in APIs include, for example, weak authentication, insufficient access controls, and improper data validation, all of which can allow attackers to exploit these APIs, leading to data breaches or system compromises.
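The sketch below illustrates, with invented field names, the kind of data-validation and access-control checks an API endpoint should apply before acting on a request; skipping checks like these is exactly the class of business logic flaw described above:

```python
def validate_transfer_request(body: dict, authenticated_user: str) -> list:
    """Illustrative server-side checks for a hypothetical money-transfer
    endpoint. Returns a list of validation errors (empty = request OK)."""
    errors = []

    # Proper data validation: never trust client-supplied values.
    amount = body.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        errors.append("amount must be a positive number")

    # Access control: the caller may only act on their own account.
    # Omitting this check is the classic broken object-level
    # authorisation flaw, since an attacker could simply change the ID.
    if body.get("from_account_owner") != authenticated_user:
        errors.append("caller does not own the source account")

    return errors
```

For example, a request with a negative amount against someone else’s account would fail both checks, while a well-formed request from the account owner passes cleanly.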
What’s The Link Between Vulnerable APIs and Bot Attacks?
Put simply, the link between vulnerable APIs and bot attacks is that:
– Greater API adoption (and a growing reliance upon them by organisations) has expanded the attack surface.
– Cybercriminals have realised that automated bots are a great and inexpensive way to attack the increasing number of vulnerable APIs, due to the scalability, speed, and efficiency of automated bots. Imperva, for example, highlights the fact that even low-skilled attackers can launch sophisticated bot attacks.
– Bots can quickly exploit multiple API endpoints – averaging 613 per enterprise in 2023 (Marsh McLennan) – making them ideal for large-scale attacks. Their low cost and 24/7 operation allow cybercriminals to probe for weak spots continuously, extracting sensitive data, executing fraudulent transactions, or launching disruptive denial-of-service attacks. Also, vulnerable APIs often lack strong security measures, thereby making them easy targets for bots, which can monetise stolen data or cause significant disruptions. As API adoption grows, bot attacks offer cyber-criminals a high-reward, low-effort method for exploiting these weaknesses, contributing to billions in annual financial losses.
This is why the Marsh McLennan Cyber Risk Intelligence Centre figures featured in the report show an 88 per cent rise in bot-related security incidents in 2022, followed by another 28 per cent increase in 2023. In essence, the more vulnerable APIs there are, the more bots are being used to attack them and as APIs become more integral to business, they become prime targets for bot attacks.
More Sophisticated
One other key point highlighted in Imperva’s report is that the increasing sophistication of bad bots is a growing concern. For example, Imperva reports that over 60 per cent of bad bots detected today are classified as evasive, i.e. they use a mix of moderate and advanced techniques to carry out attacks. Worryingly, these bots can now mimic human behaviour, leveraging AI and machine learning to adapt and evolve over time. They can also delay requests and bypass common security measures like CAPTCHAs, making them harder to detect. This allows them to launch significant attacks with fewer requests, thereby reducing the typical “noise” associated with bot campaigns, making their actions stealthier and more effective.
The Financial Toll
As mentioned at the beginning of this article, bot attacks on APIs are contributing significantly to financial tolls for organisations – up to $186 billion annually, with API-related breaches costing organisations up to $87 billion annually – an increase of $12 billion from 2021. Specifically, automated API abuse by bots now accounts for a massive $17.9 billion of these losses each year, thereby illustrating the immense economic impact of API vulnerabilities combined with bot-driven attacks.
Biggest Companies At Highest Risk
Research appears to show that large enterprises (those with over $100 billion in revenue) face the greatest risk, with bot-related incidents making up as much as 14 per cent of all cyber incidents. Imperva’s report attributes the fact that they’re prime targets to their high visibility, extensive digital presence, and valuable assets.
Global Vulnerability
Imperva’s report also highlights the global nature of API and bot attack threats, with countries like Brazil, France, Japan, and India now seeing high percentages of security incidents related to insecure APIs and bot activity. Although the proportion of such events in the United States is lower compared to these countries, the U.S. still accounts for 66 per cent of all reported incidents, highlighting its significant exposure to these growing threats.
What Does This Mean For Your Business?
The financial and operational costs of API and bot attacks are escalating at an alarming rate. With global losses reaching as high as $186 billion annually, these threats are becoming a major concern for organisations of all sizes. The rapid adoption of APIs, while improving efficiency and agility, has also expanded the attack surface, making businesses more vulnerable. Automated bots, with their scalability and increasing sophistication, are exploiting these vulnerabilities at an unprecedented rate. Imperva’s report, featuring the findings of Marsh McLennan Cyber Risk Intelligence Centre’s study, appears to illustrate the severity of the situation. This situation appears to be worsening too as bots become more evasive, using advanced techniques like AI and machine learning to mimic human behaviour, evade detection, and carry out stealthy, highly effective attacks.
Larger enterprises, with extensive digital infrastructures, are particularly exposed, with bot-related incidents accounting for up to 14 per cent of all cyber incidents. These companies face significant financial risks due to their high-value assets and complex API ecosystems, making them prime targets for automated bot attacks. That said, smaller businesses are also frequently targeted due to potentially weaker security measures, meaning that businesses of all sizes should sit up and take notice.
It also appears that this threat is global, e.g. countries like Brazil, France, Japan, and India have experienced surges in API and bot-related incidents (although the U.S. remains the most affected).
As the digital landscape evolves, the overlap between API and bot vulnerabilities highlights the critical need for businesses and organisations of all kinds to adopt proactive, comprehensive security strategies. Businesses must tailor their defences to the specific risks associated with their size and complexity. For example, large enterprises managing hundreds of API endpoints need robust API security testing frameworks that regularly assess vulnerabilities, ensuring all endpoints are secure. This could include adopting authentication mechanisms like OAuth 2.0 or implementing rate limiting to restrict how many requests can be made to the API in a short period, which helps prevent bot-driven attacks.
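To make the rate-limiting idea above concrete, here is a minimal, illustrative sketch of a sliding-window rate limiter in Python. The request limit, window size, and client identifier are arbitrary assumptions for the example, not values from Imperva’s report or any particular product; a production deployment would typically use a shared store such as Redis rather than in-process memory.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Illustrative sliding-window rate limiter (assumed parameters, not
    a real product's defaults). Tracks request timestamps per client and
    rejects requests once the per-window limit is exceeded."""

    def __init__(self, max_requests: int = 100, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.requests = defaultdict(deque)  # client_id -> recent timestamps

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        timestamps = self.requests[client_id]
        # Discard timestamps that have aged out of the window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_requests:
            return False  # over the limit: characteristic of bot-driven abuse
        timestamps.append(now)
        return True

# Hypothetical usage: allow at most 5 requests per second per client.
limiter = RateLimiter(max_requests=5, window_seconds=1.0)
results = [limiter.allow("client-123") for _ in range(8)]
print(results)  # first 5 requests allowed, the remaining 3 rejected
```

The same pattern underlies the rate-limiting features of API gateways and WAFs; the difference in practice is that the counters live in shared infrastructure so limits hold across multiple servers.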
Smaller businesses may want to focus on securing their APIs with proper encryption and multi-factor authentication to minimise exposure. They can deploy web application firewalls (WAFs) with bot management features, such as those provided by services like Cloudflare or Imperva, to detect and block malicious bot traffic before it reaches critical endpoints.
Both small and large businesses should adopt continuous monitoring for abnormal behaviour and invest in AI-powered security tools that detect patterns characteristic of bot activity. Also, penetration testing should be part of regular security audits to simulate attacks on API endpoints, exposing any weaknesses before they can be exploited by cybercriminals.
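As a simple illustration of the kind of pattern such monitoring tools look for, the sketch below flags clients whose request timing is suspiciously regular: human traffic tends to be bursty and irregular, while naive bots fire at near-constant intervals. The threshold and the sample timing data are assumptions made for the example, not tuned production values or real detection logic from any vendor.

```python
import statistics

def looks_automated(timestamps: list[float], max_stdev: float = 0.05) -> bool:
    """Heuristic sketch: flag a client as bot-like if the standard
    deviation of its inter-request intervals is very low (i.e. the
    requests arrive with machine-like regularity). The 0.05s threshold
    is an arbitrary assumption for illustration."""
    if len(timestamps) < 3:
        return False  # not enough data points to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(intervals) < max_stdev

# Hypothetical request-arrival times (in seconds) for two clients.
bot_times = [0.0, 1.0, 2.0, 3.0, 4.0]      # metronome-regular intervals
human_times = [0.0, 0.7, 3.1, 3.4, 8.2]    # bursty, irregular intervals

print(looks_automated(bot_times))    # True
print(looks_automated(human_times))  # False
```

Real AI-powered detection combines many more signals (headers, IP reputation, navigation patterns, JavaScript challenges), but the principle is the same: model what normal behaviour looks like and flag deviations.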
Tech News : AI Drone Swarms … Military Tests Successful
Munich-based Quantum Systems (a Small Unmanned Aerial Systems – ‘sUAS’ – company) has announced a successful test of AI-powered ‘drone swarm’ technology which could advance the role drones play in warfare.
What Is A ‘Drone Swarm’?
In short, drone swarm technology involves coordinating multiple drones to operate as a unified system. It can be used for tasks like military operations and surveillance, but also for less sinister missions such as search and rescue and agricultural monitoring.
Test
The announcement by Quantum Systems (working with Airbus) follows “intensive research” into whether it could “develop innovative solutions for the AI-supported autonomous control of swarms of drones” and “maximise the potential of artificial intelligence to coordinate mixed UAS swarms”. More specifically, research focused on the development of a Tactical UAS (Unmanned Aerial System), i.e. a system using small to medium-sized drones used for military operations (reconnaissance, surveillance, target acquisition, and communication support on the battlefield).
Seven Successful Tests
At a recent presentation at the Airbus Drone Centre in Manching (Germany), Quantum Systems announced that it had carried out seven successful tests of its AI-controlled UAS at the centre. Quantum Systems says its “new technologies enable large swarms of autonomous UAS to be effectively controlled by a small number of operators, even in highly dynamic and interference-prone environments.”
What’s So Different About The New Drone Swarm Technology?
Drone swarm technology is already in use, particularly in military and defence applications in countries like the US, China, and Russia. Drone swarms also have civilian applications, such as disaster response, search and rescue, agriculture, and entertainment (such as drone light shows).
However, following what’s been described as “a major breakthrough in autonomous swarm technology”, the drone swarm technology developed by Airbus Defence & Space, Quantum Systems, and Spleenlab (a software company) stands out for its use of advanced artificial intelligence (AI) and machine learning, particularly in autonomous coordination and decision-making. For example, what makes this technology unique includes:
– AI-powered autonomy. The drones can autonomously make decisions and adapt in real-time without needing human intervention, allowing for more efficient missions even in complex, dynamic environments. Quantum Systems says this was achieved by training the AI using deep reinforcement learning methods in a highly-specialised simulation environment. This allows the AI to refine its tactics through “continuous self-optimisation”, meaning it can make more efficient and precise decisions in tactical operations.
– Advanced sensor fusion. By integrating cutting-edge sensors and AI-based fusion algorithms, the swarms can gather and process a wide range of data (e.g. visual, infrared) in real-time, improving situational awareness and accuracy in missions.
– Collaborative behaviour. The swarms operate with a high level of coordination, thereby allowing the drones to perform complex tasks such as surveillance, search and rescue, or reconnaissance as a unified system, even when communication is limited or disrupted. As Quantum Systems says: “For the first time, a specially developed mission-AI controls and coordinates the UAS systems to ensure reliable mission execution even in scenarios with radio interference or a complete failure of individual drones”.
– Scalability and flexibility. The system is designed to be scalable, enabling both large and small-scale drone swarms, and can be customised for diverse civilian and military applications, from disaster response to tactical operations.
What Happened In The Tests?
In the successful tests, Quantum Systems says the Vector and Scorpion UAS from Quantum Systems and two other multi-purpose drones from Airbus were deployed in swarm flight and the reconnaissance data from all the drones was “merged in real time to form a joint situation picture and integrated into the Airbus ‘Fortion Joint C2’ battle management system”.
Also, Quantum Systems has reported how the Vector drones demonstrated their ability to autonomously perform missions such as joint reconnaissance and target acquisition under GNSS-denied conditions (i.e. where satellite navigation such as GPS is jammed or unavailable), such as those found in Ukraine, thereby highlighting the ability of AI to increase the resilience of UAS to interference and ensure autonomous operation even under difficult conditions.
How Important Are Drones In Modern Warfare?
Drones have become increasingly critical in modern warfare, particularly in conflicts like the ongoing war in Ukraine. In this war, both sides have used drones extensively for reconnaissance, surveillance, and strikes. For example, Ukraine deploys around 10,000 drones per month and relies heavily on smaller, commercial drones, such as DJI models, costing as little as $1,000 each. These drones are often repurposed with explosives for precision attacks.
The impact of drones in Ukraine is immense. They allow for real-time battlefield intelligence, enable faster response times, and reduce the cost of air operations. Also, the Ukrainian government has ramped up domestic drone production, with over 80 drone manufacturers now contributing to the war effort. As a result, drones have been a game-changer, levelling the playing field against larger forces, and are likely to dominate future conflicts.
What Next?
Quantum Systems says the knowledge gained from its (KITU2) drone swarm study will help future developments to evaluate how learned behaviours from simulations can be integrated into real UAS systems, and the extent to which AI-controlled behaviours are superior to traditional manual control approaches. It also says, “The research results from the KITU2 study are intended to support the development of autonomous systems for major Bundeswehr projects such as the Main Ground Combat System (MGCS) and the Future Combat Air System (FCAS)”.
As Sven Kruck, CRO and Managing Director of Quantum Systems, says: “We are not just interested in expanding the technological capabilities of our drones. We want to give customers and users a real advantage in real-life scenarios. Ultimately, it’s about protecting soldiers and increasing safety. In the future, there will be no way around software-based and AI-supported systems for drone technology.”
What Does This Mean For Your Business?
The successful demonstration of Quantum Systems’ AI-powered drone swarm technology could be a significant milestone in modern military operations, because it shows how advances in AI look set to revolutionise the way drones are deployed in both military and civilian settings. With AI enabling autonomous decision-making and self-optimisation in real time, this technology offers a more efficient and adaptive approach to complex missions, even in the most challenging conditions. Swarms of AI drones, learning and acting together, could have implications for future human deployment on the ground, perhaps taking us one small step further towards the idea of drone wars.
The potential applications of this technology also, thankfully, extend beyond warfare, offering promise in areas like disaster response, surveillance, and search and rescue. By demonstrating successful swarm coordination in highly dynamic and interference-prone environments, Quantum Systems has shown how AI can enhance the resilience, flexibility, and scalability of drone operations, setting the stage for future developments.
For other drone manufacturers and AI businesses, these advancements signal a growing demand for AI integration across drone systems. Companies that wish to stay competitive may now need to focus more on developing more intelligent and autonomous drones capable of performing complex tasks with minimal human input. The success of this technology opens up opportunities for collaboration between AI developers and industries such as agriculture, logistics, and defence, where advanced drone capabilities can be applied. This could also put pressure on AI firms to innovate further, particularly in the areas of machine learning algorithms, sensor fusion, and autonomous coordination, which will be increasingly critical as the industry moves towards smarter, more capable drone solutions.
As drones continue to play a pivotal role in modern conflicts, such as the ongoing war in Ukraine, and beyond into sectors like disaster management, the importance of AI-driven advancements can’t be overstated. Quantum Systems’ focus on integrating AI-learned behaviours into real-world systems, and its potential application to larger military projects like the Future Combat Air System (FCAS), highlights the transformative role AI looks likely to play in shaping the future of both military and civilian drone technology. This breakthrough reflects the broader trend towards AI-driven systems, and as these technologies evolve, they are poised to reshape industries far beyond the battlefield, offering new ways to manage national security and civilian crises.
Tech News : Human Rights Abuses Linked To Lithium Batteries
New research compiled by AI-powered supply chain risk platform Infyos has revealed that 75 per cent of the lithium-ion battery supply chain may be linked to severe human rights abuses.
Human Rights Abuses – Forced (and Child) Labour
Infyos’s analysis, which drew on government datasets, NGO reports, news articles, social media, and proprietary data, has revealed widespread human rights abuses in resource-rich countries where raw materials such as lithium and cobalt are mined and refined for lithium-ion batteries. These abuses, particularly involving forced and child labour, were found to be most prevalent in the early stages of the supply chain, notably during the extraction and processing of these critical materials.
Where?
According to the analysis, much of this abuse appears to be concentrated in regions like Xinjiang, China, and countries with fragile governance, such as the Democratic Republic of Congo. In Xinjiang, allegations of forced labour are particularly severe, with accusations that many companies operating in the region are complicit. For example, it’s been suggested that companies that mine and refine lithium and cobalt in these regions may be involved in labour abuses, including instances where children as young as five are engaged in dangerous mining activities.
Link To The Battery Industry
The demand for lithium-ion batteries has surged in recent years primarily due to the increased production of electric vehicles (EVs), the growth of renewable energy storage systems, and the expansion of portable electronic devices like smartphones and laptops. Governments and industries pushing for decarbonisation and net-zero emissions targets have also further driven this demand.
The battery industry’s connection to the alleged human rights abuses highlighted by Infyos stems from manufacturers sourcing components or materials from potentially unethical companies within their supply chain. These unethical practices are further obscured by complex business relationships, such as joint ventures or equity investments, where shifting ownership structures make it difficult to uncover the true extent of the exploitation.
As highlighted by Sarah Montgomery, CEO & Co-Founder, Infyos: “The relative opaqueness of battery supply chains and the complexity of supply chain legal requirements means current approaches like ESG audits are out of date and don’t comply with new regulations”. Sarah Montgomery added: “Most battery manufacturers and their customers, including automotive companies and grid-scale battery energy storage developers, still don’t have complete supply chain oversight.”
So Many Suppliers
One of the challenges that electric vehicle and battery manufacturers may face in identifying their supply chain risks is the sheer complexity of those supply chains, which can comprise as many as 10,000 suppliers across their network, from mines to chemical refineries and automotive manufacturers. Human rights abuses upstream, e.g. at the raw materials stage (as identified by Infyos), may therefore be difficult to spot.
Not Just Infyos
Infyos isn’t alone in suggesting human rights abuses in the lithium-ion battery supply chain. For example:
– Back in 2016, Amnesty International exposed child labour and hazardous working conditions in cobalt mining in the DRC, showing that some of the world’s largest electronics and automotive companies had not adequately addressed these risks.
– In 2023, the Business & Human Rights Resource Centre reported human rights violations and environmental damage related to lithium and cobalt mining in China, South America, and the DRC, with forced (and child) labour commonly involved.
– Also in 2023, Radio Free Asia reported uncovering human rights abuses and ecological damage in nickel mining in Indonesia and the Philippines, which provide critical materials for lithium-ion batteries, impacting local communities’ health and livelihoods.
Scrutiny
However, the global battery supply chain is now under increasing scrutiny, particularly from regulators in Europe and the US. This is primarily due to growing concerns about human rights abuses such as forced (and child) labour in countries like the Democratic Republic of Congo (DRC) and China’s Xinjiang region. Legislation such as the EU Battery Regulation and the US Uyghur Forced Labour Prevention Act (UFLPA) are pushing companies to improve supply chain transparency and accountability. Non-compliance with these laws can result in products being blocked from key markets and heavy penalties, which could damage the reputation of the battery industry and slow down the energy transition. For example, companies are now at risk of losing investor confidence and facing financial penalties if they fail to manage these risks, with many already struggling to meet these stringent regulatory requirements.
What Can Be Done?
To tackle these challenges, companies must adopt proactive measures to ensure ethical sourcing throughout their supply chains. This could include enhanced due diligence, where firms closely monitor their suppliers and implement robust Environmental, Social, and Governance (ESG) policies. Collaborating with independent auditors, utilising AI-based supply chain risk management tools like those provided by Infyos, and fostering stronger partnerships with suppliers may also be essential strategies. Also, companies must comply with emerging regulations, such as the battery passport system in the EU, which mandates rigorous supply chain traceability by 2027. By doing so, firms can not only avoid penalties but also align with investor expectations and contribute to a more sustainable future.
What Does This Mean For Your Business?
With alternative battery types still some way off, as the demand for lithium-ion batteries continues to grow, so too does the urgency to address the human rights abuses linked to their supply chains. The findings from Infyos, alongside investigations by organisations like Amnesty International and the Business & Human Rights Resource Centre, serve as shocking reminders of the ethical complexities and the suffering behind these critical technologies. The global shift towards electric vehicles and renewable energy solutions must not be built on exploitation.
However, regulatory pressure is mounting, and companies that fail to ensure transparency and ethical sourcing will face significant reputational and financial risks. The path forward therefore does appear to be clear. By embracing stringent due diligence practices, enhancing supply chain visibility through AI-powered tools, and adhering to emerging regulations like the EU Battery Regulation, the industry can foster a more responsible and sustainable future. That said, in the real world, many companies may be deterred by the high costs of implementing such measures, especially in complex global supply chains. The vastness and opacity of these networks, coupled with competitive pressures to keep costs low, may make ethical sourcing less of a priority. Also, inconsistent enforcement of regulations and varying levels of consumer concern about supply chain ethics could further reduce the incentive for businesses to fully embrace the transparency and accountability that’s needed.
Ultimately, the energy transition depends not only on technological innovation but also on a commitment to human rights and ethical practices. For the battery industry to truly support a greener future, it must first ensure that its foundations are just and free from exploitation.