Tech News : UK ‘Passportless’ eGates
A recent Times report highlighted how Phil Douglas, director-general of the UK Border Force, aims to replace the UK’s physical passport-based entry system with an upgraded, frictionless, facial recognition-based e-Gates system.
Current eGates System
The current eGates system that most UK travellers have experienced combines facial recognition with a passport check at automated gates. With this system, travellers must still queue before entering the automated gates, place their physical passport on the machine's scanner, and look into a camera (which isn't always successful at the first attempt). The system relies on a match between the data stored on the passport's chip and the facial recognition camera's image, and users of the system must be registered on a database.
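To give a rough sense of what that "match" involves (a highly simplified sketch, not the actual Border Force system: the vectors, the threshold value, and the function names below are all illustrative), facial recognition systems typically reduce each face photo to a numeric "embedding" vector, then compare the vector derived from the passport-chip photo with one derived from the live camera image:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_match(passport_embedding, live_embedding, threshold=0.8):
    """Accept the traveller if the live camera embedding is 'close enough' to the
    passport-chip embedding. The threshold here is purely illustrative; real systems
    tune it against false-accept and false-reject rates."""
    return cosine_similarity(passport_embedding, live_embedding) >= threshold

# Toy embeddings (real systems derive much longer vectors from a trained face-recognition model)
chip_photo = [0.9, 0.1, 0.4]
live_same_person = [0.88, 0.12, 0.41]
live_different_person = [0.1, 0.9, -0.3]

print(is_match(chip_photo, live_same_person))       # similar vectors -> accepted
print(is_match(chip_photo, live_different_person))  # dissimilar vectors -> rejected
```

The "failed attempts" mentioned above correspond to cases where the live image produces an embedding that falls below the acceptance threshold, which is why a manual check remains the fallback.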
The current eGates system can also only be used by travellers aged 10 and over who are citizens of the UK, EU, US, Canada, Australia, New Zealand, Singapore, and South Korea.
Issues
In addition to the queuing required and the fact that some users need several attempts, the current eGates system has several other issues. For example, unsuccessful attempts to use the system (of which there are many) still require manual checks, while major outages of the eGates system have previously caused chaos at UK airports (in May and September 2023).
The Upgraded System
The upgraded system highlighted by Mr Douglas in the recent Times report will mean that passengers can keep their physical passports in their pockets and be admitted to the UK just by looking into a camera linked to a centralised facial recognition system.
The benefits of the upgraded eGates should be less queuing (better for the airport and for travellers) plus a more ‘frictionless’ experience for travellers.
Already In Operation In Other Countries
Much faster and more frictionless systems, like the upgraded version intended for the UK, are already in operation in countries like Dubai and Australia. It’s been reported that the Dubai ‘Smart Gates’ system uses facial recognition for 50 nationalities and can enable travellers to clear immigration procedures in as little as five seconds!
ETA
Speaking at the Airlines 2023 conference in November last year, Phil Douglas highlighted how the eGates changes are part of wider immigration process changes, including the incoming Electronic Travel Authorisation (ETA) scheme. The ETA scheme, which opened for applications last October, applies to visitors worldwide who don't need a visa for short stays in the UK, but about whom the government would like to know more and whom it would like to be able to refuse entry if they may pose a threat. It's envisaged that the application-style scheme (which will apply even to people "airside" at Heathrow for two hours between international flights) could enable the UK Border Force to make decisions about admission much earlier, and perhaps refuse ETAs to those with a criminal history. Critics, however, have said that the scheme could damage UK airlines and tourism, particularly in Northern Ireland.
What Does This Mean For Your Business?
For anyone who's ever arrived home at a UK airport from a holiday or business trip, the prospect of not having to fish out a passport after the flight, avoiding the arrivals queues for the eGates machines, and simply walking through in seconds sounds very attractive.
Avoiding the chaos of eGates outages is also likely to appeal to airports, passengers, airlines, and other stakeholders, although it does highlight the dangers of over-reliance on technology. For a stretched UK Border Force, technology that can cut queues and staff costs, remove the reliance on physical passports, and reduce the opportunities for human error is also likely to be appealing. A system that allows travellers to complete immigration checks in seconds, like Dubai's or Australia's, may also be an image that the UK wants to project as a country positioning itself as a tech centre.
However, some may see a more sinister, rigid, less romantic side to such travel. Having a purely biometrics-based immigration procedure, where your freedom to enter or leave is decided by whatever is recorded against a central database entry (and triggered by your face), is perhaps a more negative vision of the future. Police facial recognition trials, for example, have not always proven accurate or unbiased, and coupled with the updated eGates and the ETA scheme (unsurprising given that the government has made immigration a central issue), some may feel uneasy about a dystopian creep into travel and freedom.
For example, could the ETA and purely 'smart' borders mean that individuals whose central database records are marked with previous (perhaps minor) offences or other issues (e.g. social media posts) find themselves refused exit from, or entry into, countries? Could such a system be misused by governments?
Also, with the phasing-out of physical passports (and payments for renewals), and everything linked to a central database, could this open the way for travel subscription payment systems? There are also security and privacy fears around a border/immigration database that holds so much personal information about people.
The more distant future and fears aside, AI is now likely to be the key to enhancing and improving biometric systems and – whether we like it or not – such a system for borders is just one of many we will face going forward.
An Apple Byte : Apple Pays After Throttling iPhones Settlement
Following a US case dating back to 2017, which led to Apple admitting that it had deliberately slowed down some older models of iPhone ('throttling'), a settlement for compensation payments to affected iPhone owners has been reached.
Apple agreed to pay back in 2020, and the settlement of the lawsuit means that each of the 5.5 million complainants will receive £72 from Apple's agreed £394m ($500m) total payout pot.
Unfortunately for Apple, it is still facing another class action lawsuit led by market researcher Justin Gutmann on behalf of 24 million British iPhone users over a similar throttling issue following an update. If it loses, Apple could face an £853 million ($1.03 billion) payout.
Security Stop Press : List Of Malicious Android Apps To Delete Now
Online protection company McAfee's Mobile Research Team has identified a list of malicious apps that Android owners should delete immediately. The apps carry the Xamalicious malware, which builds a stealth backdoor that can infect and take over devices.
The apps, which have now been removed from the Google Play store, are reported to have been downloaded hundreds of thousands of times.
Detailed information about how each of the apps infects devices, and a list of the 13 malicious apps that were present in the Google Play Store can be found on McAfee’s website here.
The advice is to avoid apps that require accessibility services unless there is a genuine need for them, to install security software on your device, and to keep that security software up to date.
Sustainability-in-Tech : Map Shows UK Areas Under Water By 2050
An online map from non-profit climate science organisation ‘Climate Central’ shows which areas of the UK could be underwater due to climate change by 2050.
Sea Level Rise
As highlighted in a 2021 report by Benjamin H Strauss et al, a high-emissions scenario that raised global temperatures by 4°C could produce an 8.9m global sea level rise within a roughly 200-to-2,000-year timeframe, which could submerge at least 50 major cities!
First Unveiled In 2020
It was back in 2020 that Climate Central's CoastalDEM digital model, and warnings that sea levels may rise by between 2ft and 7ft by the end of the century, first illustrated how and why many UK coastal areas could end up submerged by 2050. It also highlighted the point that how high the sea level rises depends on how much warming pollution humanity dumps into the atmosphere.
The Latest
Climate Central’s latest update to the model, highlighted in its new ‘Flooded Future: Global vulnerability to sea level rise worse than previously understood’ report, suggests that predictions have got worse. The report, which includes the map, explains the basic sea level rise cause, saying: “As humanity pollutes the atmosphere with greenhouse gases, the planet warms” and that “as it does so, ice sheets and glaciers melt and warming sea water expands, increasing the volume of the world’s oceans.”
However, as indicated by the new report's title, it goes on to explain, with the help of its flooding map, that it is now thought that even if 'moderate' reductions are made to the amount of human-made pollution, and with a 'medium' amount of 'luck' with weather events, vast swathes of the UK look likely to become submerged.
In fact, the report suggests that globally, by 2050, land that’s currently home to 300 million people will fall below the elevation of an average annual coastal flood and, by 2100, land now home to 200 million people could sit permanently below the high tide line. Based on the current CoastalDEM, Climate Central reports that around 10 million people currently live on land below the high-tide line.
What The Map Shows
The updated map included in the report (which uses red markings for submerged areas) shows that by 2050, the areas of the UK most likely to be affected by serious flooding and/or being completely under water include:
– London’s River Thames area (flagged as a danger zone).
– The River Severn (in the South West) either side of the estuary from Taunton up to Tewkesbury, and back to Cardiff on the northern bank.
– Large areas of the Lincolnshire and Norfolk coast, into Cambridgeshire.
– Large areas around the Humber Estuary in the North of England.
The interactive map, entitled the “Coastal Risk Screening Tool”, can be viewed online here.
Challenges
Some of the key challenges to accurately predicting submerged areas have been not knowing how much warming pollution will be dumped into the atmosphere, and how quickly the land-based ice sheets in Greenland and (especially) Antarctica are destabilising. It's also very difficult to project accurately where and when sea level rise could lead to increased or permanent flooding, and to compare sea level rise against land elevations, given that accurate elevation data is generally unavailable, inaccessible, and/or too expensive.
What Does This Mean For Your Organisation?
One of the many worrying aspects of global warming is the melting of the world's ice caps and the expansion of the warming oceans, leading to rising sea levels and serious flooding. Having a map, based on up-to-date data, that shows where the worst flooding could occur will be a useful tool for raising awareness about the threat and its potential consequences, and for planning to help mitigate its effects where possible.
In the UK, the fact that the Thames is flagged by the map and reported as being a danger zone has caused alarm. The effects of flooding of the kind highlighted on Climate Central’s map could range from near-term increases in coastal flooding damaging infrastructure and crops to the permanent displacement of whole coastal communities. Although many coastal areas have existing coastal defences, the Climate Central map indicates that these may not be enough to deal with future sea levels, thereby giving a serious heads-up to the need for planning and action for next steps. For example, more adaptive and expensive measures may be needed such as the construction of levees and other defences or, in the worst cases, relocation to higher ground could be the only way to lessen some of the threats.
One of the key points of the report and the map is to make people understand that the amount by which the sea level will rise (and flooding will occur) depends upon how much greenhouse gas is dumped into the atmosphere by human activity. The conclusion, therefore, must be that changing our behaviour to minimise our carbon footprint and really focusing on meeting climate targets is the only way to minimise the damage to the planet, reduce global warming, and hopefully reduce the risk of disappearing beneath the waves. Although there was a pledge at the COP28 summit in Dubai last year to 'transition away' from the use of fossil fuels, many people are aware that the clock is really ticking on this most fundamental issue.
Tech Tip – Distraction-Free Reading In Word
If you have a document in Microsoft Word that you’d like to convert to a more readable, book-like format, reducing distractions and allowing you to really focus on the content, here’s how:
– Open the document in Microsoft Word.
– Go to the ‘View’ tab (top left) and select ‘Read Mode.’
– This tidies away the menus and distractions at the side but leaves the edit and other options in a drop-down (top right).
Featured Article : NY Times Sues OpenAI And Microsoft Over Alleged Copyright
It’s been reported that The New York Times has sued OpenAI and Microsoft, alleging that they used millions of its articles without permission to help train chatbots.
The First
It’s understood that the New York Times (NYT) is the first major US media organisation to sue ChatGPT’s creator OpenAI, plus tech giant Microsoft (which is also an OpenAI investor and creator of Copilot), over copyright issues associated with its works.
Main Allegations
The crux of the NYT's argument appears to be that the use of its work to create GenAI tools should come with permission and an agreement that reflects the fair value of the work. It's also important in this case to note that the NYT relies on digital subscriptions rather than physical newspaper sales, and now has more than 9 million digital subscribers (the relevance of which will become clear below).
With this in mind, in addition to the main allegation of training AI on its articles without permission (for free), the other main allegations made by the NYT about OpenAI and Microsoft in the lawsuit include:
– OpenAI and Microsoft may be trying to get a “free-ride on The Times’s massive investment in its journalism” by using it to provide another way to deliver information to readers, i.e. a way around its paywall. For example, the NYT alleges that OpenAI's and Microsoft's chatbots gave users near-verbatim excerpts of its articles. The NYT's legal team has given examples of these, such as restaurant critic Pete Wells' 2012 review of Guy Fieri's (of Diners, Drive-Ins, and Dives fame) “Guy's American Kitchen & Bar”. The NYT argues that this threatens its high-quality journalism by reducing readers' perceived need to visit its website, thereby reducing its web traffic and potentially reducing its revenue from advertising and from the digital subscriptions that now make up most of its readership.
– Misinformation from OpenAI's (and Microsoft's) chatbots, in the form of errors and so-called 'AI hallucinations', makes it harder for readers to tell fact from fiction, including when their technology falsely attributes information to the newspaper. The NYT's legal team cite examples of where this may be the case, such as ChatGPT once falsely attributing two recommendations for office chairs to its Wirecutter product review website.
“Fair Use” And Transformative
In their defence, OpenAI and Microsoft appear likely to rely mainly on the arguments that the training of AI on the NYT's content amounts to “fair use” and that the outputs of the chatbots are “transformative.”
For example, under US law, “fair use” is a doctrine that allows limited use of copyrighted material without permission or payment, especially for purposes like criticism, comment, news reporting, teaching, scholarship, or research. Determining whether a specific use qualifies as fair use, however, will involve considering factors like the purpose and character of the usage. For example, the use must be “transformative”, i.e. adding something new or altering the original work in a significant way (often for a different purpose). OpenAI and Microsoft may therefore argue that training their AI products could potentially be seen as transformative as the AI uses the newspaper content in a way that is different from the original purpose of news reporting or commentary. However, the NYT has already stated that: “There is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it”. Any evidence of verbatim outputs may also damage the ‘transformative’ argument for OpenAI and Microsoft.
Complicated
Although these sound like relatively clear arguments either way, there are several factors that add to the complication of this case. These include:
– The fact that OpenAI altered its products following earlier copyright issues, making it difficult to decide whether its current outputs are enough to establish liability.
– Many possible questions about the journalistic, financial, and legal implications of generative AI for news organisations.
– Broader ethical and practical dilemmas facing media companies in the age of AI.
What Is It Going To Cost?
Given reports that talks between all three companies to avert the lawsuit have failed to resolve the matter, what the NYT wants is:
– Damages of an as-yet-undisclosed sum, which some say could run into billions of dollars (given that OpenAI is valued at $80 billion and Microsoft has invested $13 billion in OpenAI's for-profit subsidiary).
– For OpenAI and Microsoft to destroy the chatbot models and training sets that incorporate the NYT's material.
Many Other Examples
AI companies like OpenAI are now facing many legal challenges of a similar nature, e.g. the scraping/automatic collection of online content/data by AI without compensation, and for other related reasons. For example:
– A class action lawsuit filed in the Northern District of California accuses OpenAI and Microsoft of scraping personal data from internet users, alleging violations of privacy, intellectual property, and anti-hacking laws. The plaintiffs claim that this practice violates the Computer Fraud and Abuse Act (CFAA).
– Google has been accused in a class-action lawsuit of misusing large amounts of personal information and copyrighted material to train its AI systems. This case raises issues about the boundaries of data use and copyright infringement in the context of AI training.
– A class action against Stability AI, Midjourney, and DeviantArt claims that these companies used copyrighted images to train their AI systems without permission. The key issue in this lawsuit is likely to be whether the training of AI models with copyrighted content, particularly visual art, constitutes copyright infringement. The challenge lies in proving infringement, as the generated art may not directly resemble the training images. The involvement of the Large-scale Artificial Intelligence Open Network (LAION), which compiled the images used for training, adds another layer of complexity to the case.
– Back in February 2023, Getty Images sued Stability AI alleging that it had copied 12 million images to train its AI model without permission or compensation.
The Actors and Writers Strike
The recent strike by Hollywood actors and writers is another example of how fears about AI, consent, and copyright, plus the possible effects of AI on eroding the value of people’s work and jeopardising their income are now of real concern. For example, the strike was primarily focused on concerns regarding the use of AI in the entertainment industry. Writers, represented by the Writers Guild of America, were worried about AI being used to write or complete scripts, potentially affecting their jobs and pay. Actors, under SAG-AFTRA, protested against proposals to use AI to scan and use their likenesses indefinitely without ongoing consent or compensation.
Disputes like this, and the many lawsuits against AI companies highlight the urgent need for clear policies and regulations on AI’s use, and the fear that AI’s advance is fast outstripping the ability for laws to keep up.
What Does This Mean For Your Business?
We’re still very much at the beginning of a fast-evolving generative AI revolution. As such, lawsuits against AI companies like Google, Meta, Microsoft, and OpenAI are now challenging the legal limits of gathering training material for AI models from public databases. These types of cases are likely to help to shape the legal framework around what is permissible in the realm of data-scraping for AI purposes going forward.
The NYT/OpenAI/Microsoft lawsuit and other examples, therefore, demonstrate the evolving legal landscape as courts now try to grapple with the implications of AI technology on copyright, privacy, and data use laws, and its complexities. Each case will contribute to defining the boundaries and acceptable practices in the use of online content for AI training purposes, and it will be very interesting to see whether arguments like “fair use” are enough to stand up to the pressure from multiple companies and industries. It will also be interesting to see what penalties (if things go the wrong way for OpenAI and others) will be deemed suitable, both in terms of possible compensation and/or the destruction of whole models and training sets.
For businesses (which can now create their own specialised, tailored chatbots), these major lawsuits should serve as a warning to be very careful in training their chatbots, to think carefully about any legal implications, and to focus on creating chatbots that are not only effective but also likely to be compliant.