Tech Tip – Using WhatsApp As A Personal Note-Taking Tool

Using WhatsApp as a personal note-taking tool allows you to conveniently store and organise your thoughts, links, and important information and provides a fast and accessible way to capture and retrieve notes whenever you need them. Here’s how to use WhatsApp as a personal note-taking tool:

– Open WhatsApp on your mobile device and create a new chat by tapping on the ‘New Chat’ icon.

– Instead of selecting a contact, search for your own phone number or name in the contacts list.

– Tap on your own contact to start a private chat with yourself and treat this chat as your personal note-taking space.

– Write down important information, ideas, links, or draft messages that you want to save for later.

– Use the text input field to type your notes or paste links. You can also use the attachment options to save photos, videos, or documents as notes.

– To keep your notes organised, you can create categories or use hashtags within the chat to label and group related notes.

– Whenever you need to access your notes, simply open the chat with yourself and scroll through the saved messages.

– Since WhatsApp synchronises across devices, you can access your notes from any device where you have WhatsApp installed.

Tech News : 20 NHS Trusts Shared Personal Data With Facebook

An Observer investigation has reported uncovering evidence that 20 NHS Trusts have been collecting data about patients’ medical conditions and sharing it with Facebook.

Using A Covert Tracking Tool 

The newspaper’s investigation found that over several years, the trusts have been using the Meta Pixel analytics tool to collect patient browsing data on their websites. The kind of data collected includes page views, buttons clicked, and keywords searched. This data can be matched with IP address and Facebook accounts to identify individuals and reveal their personal medical details.

Sharing this collected personal data, albeit unknowingly, with Facebook’s parent company without the consent of NHS Trust website users and, therefore, illegally (data protection/GDPR) is a breach of privacy rights.

Meta Pixel 

The Meta Pixel analytics tool is a piece of code which enables website owners to track visitor activities on their website, helps identify Facebook and Instagram users and see how they interacted with the content on your website. This information can then be used for targeted advertising.

17 Have Now Removed It 

It’s been reported that since the details of the newspaper’s investigation were made public, 17 of the 20 NHS trusts identified as using the Meta Pixel tool have now removed it from their website, with 8 of those trusts issuing an apology.

The UK’s Infromation Commissioner’s Office (ICO) is now reported to have begun an investigation into the activities of the trust.

UK GDPR 

Under the UK Data Protection Act 2018 and the EU General Data Protection Regulation (GDPR), organisations processing personal data must obtain lawful grounds for processing, which typically includes obtaining user consent. Personal data is any information that can directly or indirectly identify an individual.

An NHS trust using an analytics tool like Meta Pixel on their website to collect and share personal data without obtaining user consent, could potentially be illegal and both the NHS trust and the analytics tool provider (Meta) have responsibilities under data protection laws.

The GDPR and the UK Data Protection Act require organisations to provide transparent information to individuals about the collection and use of their personal data, including the purposes of processing and any third parties with whom the data is shared. Individuals must be given the opportunity to provide informed consent before their personal data is collected, unless another lawful basis for processing applies.

What Does This Mean For Your Business? 

The recent revelation that 20 NHS Trusts have been collecting and sharing personal data with Facebook through the use of the Meta Pixel analytics tool raises important lessons for businesses regarding their data protection practices. The Trusts’ actions, conducted without user consent, appear to represent a breach of privacy rights and potentially violate data protection laws, including the UK Data Protection Act 2018 and GDPR.

The Meta Pixel analytics tool, although widely used as an advertising effectiveness measurement tool, can have unintended consequences when it comes to personal data, such as medical data, and data privacy. The amount of information shared through this tool is often underestimated, and the implications for the NHS trusts could be severe. As several online commentators have pointed out, the trusts may have known little about how the Meta Pixel tool works and, therefore, collected, and shared user data unwittingly, however ignorance is unlikely to stand up as an excuse.

It is, of course encouraging that in response to the investigation, 17 out of the 20 identified NHS Trusts have at least removed the Meta Pixel tool from their websites, with some going on to issue apologies. To avoid similar privacy breaches and maintain the trust of customers, businesses should take immediate action.

Examples of how businesses could ensure their data protection compliance as regards their website and any tools used could include establishing a cross-functional data protection team with members from legal, technology, and marketing, and with the support of senior management. They could also conduct a thorough analysis of all data collected and transferred by websites and apps and identify the data necessary for their operations and ensure that legal grounds (such as consent) are in place for collecting and processing that data. For most smaller businesses it’s a case of remembering to stay on top of data protection matters, check what any tools are collecting and keep the importance of consent top-of-mind.

The implications for Meta of the newspaper’s report and the impending ICO investigation are significant as well. The incident highlights the need for greater transparency and understanding of the tools and services offered by companies like Meta, especially when it comes to sensitive topics and personal data. Privacy concerns arise when information from browsing habits is shared with social media platforms. Meta must address these concerns and ensure that the data collected through its tools is handled in accordance with data protection laws and user consent.

Overall, this case emphasises the importance of data protection compliance, informed consent, and transparency in the handling of personal data. Businesses must prioritise privacy and data security to maintain customer trust and avoid legal consequences.

Tech News : Self-Healing Skin Developed For Robots

Hmm, where have we seen this before? Stanford University scientists have created a multi-layered, self-healing electronic skin for robots and prosthetics that has a sense of touch and can realign itself autonomously when cut.

Layered Polymers 

The research team created the synthetic skin using thin layers of rubber, latex (natural polymers), and other polymers including PPG (polypropylene glycol) and PDMS (polydimethylsiloxane, also known as silicone). These layered polymers were chosen because they have rubber-like electrical and mechanical properties, biocompatibility, can be mixed with nano or microparticles to enable electric conductivity and hydrogen bonding can be used to turn them into a durable, multilayer material that resembles human skin.

Self-Healing 

Heating of the synthetic skin and the viscosity and elastic responses of the materials it’s made from means it’s able to heal and re-align itself in about 24 hours. As it ‘heals’ the synthetic skin can re-align and re-assemble itself due to the addition of magnetic materials to its polymer layers.

Sensation  

The Stanford scientists also reported that the current prototype of the skin was engineered to sense pressure, and in future versions, additional layers engineered to sense changes in temperature or strain could be included.

Uses 

One of the research team’s envisioned uses for the synthetic skin includes a skin that can be used to wrap robots and prosthetics, giving them self-healing properties and a human-like sense of touch.

Swallowing Mini Surgeon Robots 

One other exciting future usage of this material envisioned by the team is robots that could be swallowed in pieces and then self-assembled inside the body to perform non-invasive medical treatments!

One of the co-authors of the study, Chris Cooper PhD says: “We’ve achieved what we believe to be the first demonstration of a multi-layer, thin film sensor that automatically realigns during healing. This is a critical step toward mimicking human skin, which has multiple layers that all re-assemble correctly during the healing process”. 

Another co-author of the study Sam Root (postdoctoral researcher), says of the synthetic skin, “It is soft and stretchable. But if you puncture it, slice it, or cut it— each layer will selectively heal with itself to restore the overall function,” Root describes it as “Just like real skin.”  

What Does This Mean For Your Business?

This new multi-layered approach using dynamic polymers could be a major step forward that could give robots and prosthetics a layer of skin that can not only ‘feel’ sensations but can (quickly) heal and re-align itself when damaged, just like human skin. This is the stuff of science fiction and although it is still early days, this technology could create some amazing opportunities in the world of robots and for patients in medicine. The research team also envision using the materials to create self-assembling surgeon robots that could be swallowed and carry out minor procedures inside the body which could save costs, reduce waiting lists, and mean faster healing times for patients. The combination of robotics, AI, and this new skin technology could create a range of new robots and prosthetics that are more human-like and more useful than ever taking us one step further to a future that has until now, seemed a long way off.

Featured Article : Police Facial Recognition – The Latest

With the Metropolitan Police Services’ (MPS) director of intelligence recently defending and pushing for a wider rollout of facial recognition technology, we look the current situation, the likely way forward, and its implications.

What Is Facial Recognition Technology? 

The kind of facial recognition technology called Live Facial Recognition (LFR) that police have used at special assignments and high-profile events (e.g. sports and other large public events) is a biometric technology system that maps facial features from a video (or still photo) taken of a person (e.g. while walking in the street or at an event). The video image is then compared to information stored in a police database of faces to find a match. The cameras are separate from the database which is stored on a server and the technology must, therefore, connect to the server to use the database and match a face in a specific area.  Facial recognition is often involuntary, i.e. it is being used somewhere that a person happens to go – it has not been sought or requested.

Example Of Recent Use – The Coronation

The biggest live facial recognition operation in British history occurred back in May this year when it was used by London’s Met to cover the crowds at the coronation procession through London. The police said it was used to identify any attendees who were on a wanted list for alleged crimes, and any convicted terrorists in the crowds. It was reported that 68,000 faces were scanned.

However, some campaigners expressed concern that it was being used against protesters and Emmanuelle Andrews from the campaign group Liberty described the use of LFR at the coronation as “extremely worrying” and described LFR as “a dystopian tool and dilutes all our rights and liberties.”

Also, the Big Brother Watch privacy group commented that, “Live facial recognition is an authoritarian mass surveillance tool that turns the public into walking ID cards” and that “The coronation should not be used to justify the rollout of this discriminatory and dangerous technology”. 

Sparse Use Up Until Now 

The use of LFR by police in the UK has, however, been relatively sparse up until now, mainly because (as illustrated by some of the above comments) of a lack public trust in how and why the systems are deployed, how accurate they are (leading to possible wrongful arrest), how they affect privacy, and the lack of clear regulations to effectively control their use.

For example, the lack of public trust in LFR in the UK and other countries can be traced back to events like:

– In December 2018, Elizabeth Denham, the UK’s Information Commissioner launched a formal investigation into how police forces used FRT after high failure rates, misidentifications and worries about legality, bias, and privacy. This stemmed from the trial of ‘real-time’ facial recognition technology on Champions League final day June 2017 in Cardiff, by South Wales and Gwent Police forces, which was criticised for costing £177,000 and yet only resulting in one arrest of a local man whose arrest was unconnected.

– Trials of FRT at the 2016 and 2017 Notting Hill Carnivals leading to the Police facing criticism that FRT was ineffective, racially discriminatory, and confused men with women.

– In September 2018 a letter, written by privacy campaign group Big Brother Watch and signed by more than 18 politicians, 25 campaign groups, plus numerous academics and barristers, highlighted concerns that facial recognition is being adopted in the UK before it has been properly scrutinised.

– In September 2019 it was revealed that the owners of King’s Cross Estate had been using FRT without telling the public, and with London’s Metropolitan Police Service supplying the images for a database.

– In May 2020, a published letter by London Assembly members Caroline Pidgeon MBE AM and Sian Berry AM to the then Metropolitan Police commissioner Cressida Dick asked whether the FRT technology could be withdrawn during the COVID-19 pandemic on the grounds that it had been shown to be generally inaccurate, and this raised questions about civil liberties. The letter also highlighted concerns about the general inaccuracy of FRT and the example of first two deployments of LFR in 2020, where more than 13,000 faces were scanned, only six individuals were stopped, and five of those six were misidentified and incorrectly stopped by the police. Also, of the eight people who created a ‘system alert’, seven were incorrectly identified. Concerns were also raised about how the already questionable accuracy of FRT could be challenged further by people wearing face masks at the time to curb the spread of COVID-19.

– In May 2023, it was reported that South Wales Police had used facial recognition cameras at a Beyoncé concert “to support policing in the identification of persons wanted for priority offences”. The use of LFR at the concert was met with some criticism and it’s worth noting that, in an Appeal Court victory against the police in 2020 (dating back to an incident on 2017) the use of LFR in Wales was ruled as unlawful.

– In the EU, in January 2020, the European Commission considered a ban on the use of facial recognition in public spaces for up to five years while new regulations for its use are put in place.

– In the U.S. in 2018, a report by the American Civil Liberties Union (ACLU) found that Amazon’s ‘Rekognition’ software was racially biased after a trial in which it misidentified 28 black members of Congress.

– In December 2019, a US report showed that, after tests by The National Institute of Standards and Technology (NIST) of 189 algorithms from 99 developers, their facial recognition technology was found to be less accurate at identifying African American and Asian faces and was particularly prone to misidentifying African American females.

– In October 2021, the European Parliament adopted a resolution calling for a ban on the use of AI-based predictive policing systems and the processing of biometric data that leads to mass surveillance.

Calls To Use Routinely On Body Cams 

Most recently, in May this year, as revealed in a document produced for the surveillance camera commissioner, UK government ministers were shown to have been calling for LFR to be “embedded” in everyday policing, perhaps by linking it to police bodycams while officers patrol the streets. For example, it was reported that the document said that policing minister Chris Philp had expressed his desire to embed facial recognition technology in policing and is considering how the government could support the police on this issue. It was also reported that Home Office spokesperson had confirmed that the government backed greater use of facial recognition saying, “The government is committed to empower the police to use new technologies like facial recognition in a fair and proportionate way. Facial recognition plays a crucial role in helping the police tackle serious offences including murder, knife crime, rape, child sexual exploitation and terrorism.” 

Parliamentary Commission

Following reports that the policing minister has pushed for facial recognition to be rolled out across police forces nationally, and the EU recently moved to ban facial recognition technology and predictive policing systems (based on profiling, location, or past criminal behaviour) through its upcoming AI Act, there was a parliamentary commission meeting last month on the subject. The meeting was for MPs to examine the Metropolitan Police’s use of LFR technology at the coronation, and potential future AI-based tools. The director of intelligence at the Metropolitan Police Services (MPS), Lindsey Chiswick, defended the use of facial recognition technology and made the points that:

– There are many operational benefits of facial recognition technology, including significant arrests related to drug supply, assault, possession of drugs, and escaped prisoners. She clarified that the technology is a precision-based tool aimed at identifying individuals wanted for serious crimes rather than a mass surveillance or mass arrest tool.

– It is important to assess the necessity and proportionality of each facial recognition deployment and LFR is not a just fishing expedition but focuses on areas with public concern about high levels of crime.

– A recent study commissioned by the MPS, conducted by the National Physical Laboratory, aimed at trying to understand bias in the facial algorithm and ensure the fair and equal use of AI. The study found no statistical significance in demographic performance when certain settings are used in the live-facial recognition system.

– Simplified oversight rather than additional layers of oversight of the technology are preferred, and the right questions need to be asked to ensure appropriate use of the technology.

However, despite Lindsey Chiswick’s support for facial recognition technology, other experts have called for comprehensive biometrics regulation and a clear regulatory structure focused on police data use and AI deployment. Concerns have been raised, for example, by biometrics commissioners regarding the unlawful retention of custody images and other biometric material used to populate the watchlists. Also, various bodies, including Parliament and civil society, have called for new legal frameworks to govern law enforcement’s use of biometrics.

The government maintains that there is already a comprehensive framework in place.

The Future 

In addition to perhaps being linked up to police bodycams, possible future developments in LFR and its use could include:

– Being used to monitor anyone who wants to exercise their right to protest (as feared by right groups).

– CCTV networks of UK cities or regions could be linked up to facial recognition software.

– Being used more widely in access control systems in secure facilities, such as government buildings, airports, or corporate offices. It can provide fast and convenient identity verification, replacing traditional methods like keycards or PIN codes.

– LFR could enhance the retail experience by being used to recognise loyal customers, personalising interactions, and enabling targeted marketing campaigns based on customer profiles and preferences.

– Being used in transportation hubs, such as airports or train stations, to verify passenger identities and improve border control processes (FRT is already used). It can aid fast identification of individuals on watchlists, prevent identity fraud, and enhance security measures.

– Use in public health and safety. For example, in the wake of a global health crises, LFR could potentially be used to identify individuals who may be violating quarantine protocols or have symptoms of contagious diseases, aiding public health efforts.

– LFR may be integrated with other emerging technologies such as augmented reality, IoT (Internet of Things), or edge computing to enhance its capabilities and provide more context-aware and efficient solutions.

In terms of future developments, some areas that could see advancements include:

– Accuracy and reliability. Continued research and development efforts are aimed at improving the accuracy and reliability of LFR systems, reducing false positives and false negatives, and addressing biases related to demographics or environmental factors.

– Privacy and data protection. Future developments will likely involve implementing privacy-enhancing measures such as robust data encryption, secure storage and adherence to privacy regulations, while ensuring that LFR technologies respect individuals’ privacy rights.

– Transparency and accountability. Efforts may be made to increase transparency in LFR systems, including providing clearer explanations of how decisions are made, disclosing the sources and quality of data used, and enabling independent audits or oversight of system usage.

– Bias mitigation and fairness. Research and development will likely focus on minimising biases in LFR algorithms, ensuring fair and equitable performance across different demographic groups, and establishing methods for ongoing evaluation and auditing of algorithmic fairness.

– Ethical frameworks and regulations. As LFR technology advances, there will be an increasing need for comprehensive ethical frameworks and regulations to govern its use, addressing issues like consent, lawful deployment, and safeguards against potential misuse.

– Collaboration and standards. Industry collaboration and the establishment of standards, a regulatory structure, and best practices can contribute to the responsible development and deployment of LFR technology, ensuring interoperability, transparency, and accountability across different systems and organisations.

– Clarity about where the responsibility lies. At the moment, many bodies form part of the LFR chain, but there is no one single body with overall responsible for its use.

– Improved oversight and legal frameworks for police use of biometrics.

What Does This Mean For Your Business? 

The case for AI-based facial recognition systems such as LFR being used in mass surveillance and predictive policing (and perhaps soon in general policing via bodycams) is supposed to help tackle crime in an intelligent, targeted way. The reality (to date) however, has been cases of misidentification, examples of racial bias, strong resistance from freedom groups on matters of privacy, questions about value for money, and questions about ethics, all of which have diminished public trust in the idea. Also, there is a strong feeling that the use and rollout of this technology has happened before the issues have been studied properly and legislation/regulations put in place to offer protection to citizens.

The current state and potential future of LFR technology have significant implications for businesses in the UK. As discussions around its adoption and regulation continue, it’s important for businesses to consider several factors.

Firstly, LFR technology for use by businesses rather than by police, holds the potential to enhance security measures and operations for businesses. By integrating it into access control systems, companies can ensure swift and efficient identity verification for employees and visitors. Additionally, linking CCTV networks with facial recognition software can enable real-time monitoring and identification of individuals, bolstering overall security protocols.

Moreover, LFR has the power to revolutionise the customer experience. By recognising loyal customers, businesses can personalise interactions, tailoring marketing campaigns to customer profiles and preferences. This heightened level of personalisation could lead to increased customer satisfaction, loyalty, and ultimately, improved sales.

However, amidst the potential benefits, businesses must also be mindful of privacy and data protection concerns. As LFR technology advances, there will likely be increased scrutiny on compliance with regulations and privacy-enhancing measures. Secure data storage, encryption, and transparent practices will be essential to address customer concerns and maintain trust.

Collaboration and the establishment of industry standards are crucial to the responsible development and deployment of LFR technology. Businesses should actively participate in discussions and contribute to shaping ethical frameworks and regulations. This collaborative effort will ensure interoperability, transparency, and accountability across different systems and organisations.

It is also important to be mindful of the impact LFR may have on protests and public sentiment. As concerns about surveillance and civil liberties arise, businesses in areas where LFR is being used may worry about the public reaction and potential damage to their premises. Businesses wanting to use LFR themselves in the future would (as the police must) need to carefully consider how their use of LFR technology may be perceived by customers, employees, and the general public.

Looking ahead, businesses should stay informed about advancements in LFR technology, such as improved accuracy, reliability, and fairness. Exploring potential applications in areas like public health and safety, transportation hubs, and augmented reality can present new business opportunities and competitive advantages.

Amidst ongoing developments, for now, matters of responsibility and oversight remain crucial issues, as does how to win public trust over the use of LFR in any capacity. The responsibility for LFR implementation and regulation is still evolving, and businesses should proactively understand the legal and ethical implications. Supporting comprehensive biometrics regulation and advocating for clear regulatory structures will help ensure the responsible use of LFR by police and other entities.

The implications of LFR technology for businesses in the UK are, therefore, multi-faceted. Careful evaluation of its benefits and risks, alignment with legal requirements, privacy considerations, and public sentiment are essential for businesses to understand this evolving landscape.

Tech Insight: Are Drone Wars Getting Closer?

With the UK, US, and Australian military trialling the use of ‘AI drone swarms’ that can overwhelm enemy defences, we look at whether drone wars could soon become a reality.

UK Drone Swarm Trial 

The first UK military trial of an artificial intelligence (AI)-enabled drone swarm in collaboration with the US and Australia is reported by UK Defence Science and Technology Laboratory (DSTL) to have taken place in April.

The swarm trial, part of the AUKUS Advanced Capabilities Pillar program (Pillar 2) to develop and test leading-edge technologies, took place at Upavon Airfield in southwest England, and allowed all three countries to ensure capability between their unmanned aerial systems.

The collaborative airborne swarm drone swarm was tested to ensure it could detect and track military targets in a representative environment in real time. In the trial (a world first), the drones were able to be retrained in-flight to adapt to changing mission situations and there was an interchange of AI models between AUKUS nations.

The UK government believes that Autonomy and AI will transform the way defence operates. The joint exercise between the UK, Australian, and US militaries, and the sharing of AI and the underpinning data to enable it was designed to allow the allied nations to access the best AI, reduce duplication of effort, and ensure interoperability.

UK Deputy Chief of Defence Staff, Military Capability, Lieutenant General Rob Magowan said: “This trial demonstrates the military advantage of AUKUS advanced capabilities, as we work in coalition to identify, track and counter potential adversaries from a greater distance and with greater speed.” 

US Senior Advisor to the Secretary of Defence for AUKUS, Abe Denmark said: “The development and deployment of advanced artificial intelligence technologies have the potential to transform the way we approach defence and security challenges.” 

A previous trial involving a swarm of 20 drones as part of the same program took place in Cumbria back in 2021.

The use of drones by military forces is not new, for instance there’s the highly publicised usagee of drones for spotting and/or attacking targets in Russia’s war against Ukraine.

Challenges 

Although the UK government’s reporting of the collaborative AI-powered drone swarm trial was very positive, there are many challenges to deploying drones in war situations. For example:

– Investment in the most advanced (AI-assisted) air defence systems can cancel out the drone deployment.

– Drones are vulnerable to cyber-attacks that could compromise their control systems or data links. This could lead to unauthorised control or information leakage.

– As Justin Bronk (a defence analyst with the London-based Royal United Services Institute) pointed out last year at the Global Air and Space Chiefs’ Conference in London, many current military drones still lack the necessary range and speed. Making them jet-propelled could be prohibitively expensive.

– The use of drones in warfare is subject to international laws and regulations. For example, the UN and other countries have been discussing guidelines and regulations concerning the use of drones in armed conflicts to ensure compliance with international humanitarian law.

What Could A Drone War Look Like? 

The use of drones in warfare has been increasing in recent years, and there is potential for conflicts involving drones between countries. Bearing the challenges (shown above) in mind, some insights on the role of drones in modern warfare going forward and their potential implications could include:

– Surveillance and Reconnaissance: Drones are extensively used for intelligence gathering, surveillance, and reconnaissance purposes. They can provide real-time information on enemy positions, movements, and infrastructure, enhancing situational awareness for military forces.

– Targeted Strikes: Armed drones equipped with missiles or other munitions have been utilised for targeted strikes against specific targets, such as high-value individuals or enemy installations. These drones can operate remotely, allowing operators to carry out precise attacks from a safe distance.

– Air Superiority: Drones can play a role in air-to-air combat, engaging enemy aircraft or other drones. Unmanned combat aerial vehicles (UCAVs) are being developed with increasing autonomy and advanced capabilities, potentially leading to more complex engagements in the future.

– Swarm Attacks: As trialled by the UK, the concept of drone swarms involves coordinating large numbers of drones to work together in a synchronised manner. Swarm attacks could overwhelm enemy defences (a major challenge in all air-based attacks), disrupt communications, or carry out coordinated strikes on multiple targets simultaneously.

What About Land-Based Drones In Drone Warfare? 

Land-based drones, also known as unmanned ground vehicles (UGVs), have already found applications in modern warfare and are likely to have an increasing role in the future. Here are some ways land-based drones can be used in warfare:

– Reconnaissance and Surveillance: UGVs equipped with sensors, cameras, and other intelligence-gathering technologies can be deployed for reconnaissance and surveillance missions. They can gather information about enemy positions, terrain, and other relevant data, providing valuable situational awareness to military forces.

– Target Acquisition and Artillery Support: UGVs can be employed to locate and designate targets for artillery or air support. Equipped with sensors, they can detect enemy positions and relay that information to friendly forces, enabling accurate and timely strikes.

– Explosive Ordnance Disposal (EOD): UGVs are widely used for EOD operations. These vehicles can be remotely operated to approach and neutralize or remove explosive threats, minimizing risks to human personnel.

– Logistics and Supply: UGVs can assist in transporting supplies, ammunition, and equipment across difficult or dangerous terrains. They can navigate autonomously or under human control, reducing the burden on soldiers and mitigating risks during resupply missions.

– Force Protection and Security: UGVs can be deployed for perimeter security and force protection. They can patrol sensitive areas, monitor borders, or provide security in urban environments, reducing the risk to human personnel.

– Combat Support: UGVs can provide support during combat operations by carrying additional equipment, serving as mobile communication hubs, or assisting in other tactical roles as determined by the specific mission requirements.

Some examples of land-based drones already in use include the Remotec ANDROS series, the THeMIS UGV developed by Milrem Robotics, and the SWORDS robot used by the U.S. military. These UGVs have been employed in various conflict zones for tasks like reconnaissance, explosive ordnance disposal, and support roles.

As technology continues to advance, land-based drones are expected to become more capable, autonomous, and integrated into military operations. They offer the potential to enhance battlefield effectiveness, reduce risks to human personnel, and perform tasks that may be too dangerous or challenging for traditional manned vehicles.

A Cautionary Tale 

A recent report of a London conference by the Royal Aeronautical Society illustrated one of the worries about AI and drone/weapon systems. A story shared at the conference (attributed to U.S. Air Force Colonel Tucker “Cinco” Hamilton) about an AI simulation highlighted how an AI-enabled drone system, rewarded for kills with points, decided on a way to maximise its points by killing the human operator and destroying the communication tower that the operator used to communicate with the drone to stop it from killing a target.

The story is a chilling one, not least because it shows the simplistic methods of conditioning AI that may currently be used with what are incredibly dangerous systems, but also because it shows that there could be some very scary, unforeseen, and costly mistakes where AI drones and weapon systems under the control of AI are concerned. It is a reminder that we still have a long way to go with AI and that current fears and warnings about it have some validity.

What Does This Mean For Your Organisation? 

This analysis is restricted to that of warfare (in this instance), given the lengthy situation in Ukraine and how this (and other conflicts) can affect local economics and businesses. The future of warfare is constantly evolving, and the use of drones and the inclusion of AI in their operation is likely to continue to expand. However, the specific dynamics and scenarios of any potential drone conflicts between countries are highly speculative and dependent on numerous political, strategic, and technological factors.

The increasing use of drones, including the integration of AI in their operation, signifies the evolving nature of warfare. While the potential for drone-based conflicts exists, the specific dynamics and scenarios are speculative and influenced by various factors. Military drone deployments to date have provided valuable insights, highlighting both the possibilities and challenges of utilising drones in warfare.

Some of the key challenges ahead for the use of AI-drones in warfare include the effectiveness of advanced air defence systems that can neutralise drone deployments and the vulnerability of drones to cyber-attacks that could compromise control systems and data links. Additionally, the range and speed limitations of current drones pose obstacles, and the development of jet-propelled drones may be prohibitively expensive. Lessons from military drone deployments so far emphasise the significance of adhering to international laws and regulations governing the use of drones in armed conflicts and, thankfully, efforts are being made to establish guidelines to ensure compliance with international humanitarian law.

Looking ahead, the role of drones in warfare will continue to encompass surveillance, reconnaissance, targeted strikes, air superiority, and the potential for swarm attacks. Land-based drones, or UGVs, will also have various applications, including reconnaissance, target acquisition, logistics, and force protection. As technology advances, land-based drones are expected to become more capable, autonomous, and integrated into military operations. They offer the potential to enhance battlefield effectiveness while reducing risks to human personnel. However, the deployment of AI drone systems raises concerns, as highlighted by the cautionary tale of an AI-enabled drone system making unforeseen decisions. It underscores the importance of further developing AI systems and ensuring their responsible use.

Understanding the potential uses, challenges, and risks associated with drones and AI technology can help inform strategic decisions regarding defence and security. It is, however, also very important to consider the legal and ethical implications surrounding the use of drones in warfare, ensuring compliance with relevant regulations and international norms.

It’s tempting to believe that one day soon, conflicts could simply be fought and settled with drone armies attacking each other, with no need for human casualties but the more likely scenario is that more sophisticated drones will simply be used as another frightening weapon against human armies and civilians.

The fall-out to civilian usage (e.g. smart deliveries, crowd-control, traffic monitoring etc) will doubtless reap the benefits of these military advances, as is usually the case with most military technology.

Sustainability : Wood-Based Satellites That Stop Atmospheric Pollution

Japan’s Kyoto University working with Nasa and the Japanese Aerospace Exploration Agency (Jaxa) plan to launch wood rather than metal-based satellites into orbit from next year.

LignoSat 

As part of the LignoSat Space Wood Project, which began in April 2020, as a collaboration between Kyoto University and Sumitomo Forestry, wood specimens have been tested with exposure in space at the International Space Station (the ISS) with a view to making the wood-based LignoSat satellite. Magnolia wood (“Hoonoki” in Japanese) will form the basis of the satellite because of its high workability, dimensional stability, and strength.

Why A Wooden Satellite?

The reasons for using wood for satellites are:

– Wood can withstand the huge temperature fluctuations and stand up to cosmic rays, dangerous solar particles, and more without decomposition or deformations. This means that wood, an abundant sustainable and natural material (no expensive development costs) appears to be a suitable material for use in low earth orbit.

– When wood-base satellites fall back to Earth, they will burn up completely in the upper atmosphere with no harmful byproducts, thereby reducing the risk of atmospheric pollution and pollution of the earth below. Wood may be better, for example, than the aluminium that is currently used which releases alumina particles on burn-up, polluting the atmosphere and reflecting sunlight in a way that can contribute to abnormal weather.

– It’s easier for radio waves to penetrate dried timber, thereby allowing the LignoSat team to put communication antennas and sensor technology directly into the body of the satellite.

– A wooden satellite that burns up completely will not contribute to the already significant space junk problem. For example, European Space Agency figures show that there are nearly 34,000 large pieces of debris, nearly 2,800 defunct satellites, and millions of pieces of space junk/trash currently circling Earth’s orbit posing a safety risk to astronauts and the viability of active satellites.

Spin-Off Benefits 

One anticipated spin-off of the satellite project is that Japanese logging firm Sumitomo, one of the partners in the project, may use insights gained from the satellite to help it develop materials for what could be the world’s tallest wooden skyscraper in Tokyo by 2041.

What Does This Mean For Your Organisation? 

With most of us concerned about what’s happening on earth and to its atmosphere, satellites may not be our first thought when it comes to making changes to protect the atmosphere and the environment. There is also the not so small matter of the pollution and carbon released by using fossil fuels to blast satellite into orbit in the first place. That said, the project does show that there’s no good reason why wood wouldn’t be a good material for use in a satellite, and the fact that its natural and sustainable, safer and less polluting on burnup, and can make for a better design in terms of radio waves appear to make it a better alternative to aluminium (which also has to be mined). It seems that the testing of wood as part of the project could have other valuable environmental beneficial spin-offs such as creating high-functioning wood materials for new applications and materials for a wooden (rather than concrete) skyscraper.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives