Sustainability-in-Tech : Data Centres May Have Their Water Restricted

The recent announcement by Thames Water that it may restrict water flows to London data centres and charge them more at times of peak demand may be the shape of things to come.

Thames Water Ltd – Considering Restrictions 

The announcement follows an investigation last year into how much water was being used by London and M4 corridor datacentres. At the time, Thames Water’s Strategic Development Manager, John Hernon, said that he was looking to work closely “with those consultants planning for new datacentres in the Slough area”, adding: “Our guidance has already resulted in a significant reduction in the amount of water requested by these new centres due to guidance on additional storage and cooling procedures”.

What’s The Issue? 

The main issue is that datacentres require vast amounts of water for cooling, and the water they currently use is drinking water from supply pipes. This can put pressure on supply and infrastructure during hot weather when demand for water surges. It should also be noted that datacentres require substantial amounts of electricity.

Restrictions 

To address these issues, Thames Water Ltd has announced that it has discussed physically restricting water flow at peak times with at least one datacentre operator in London. John Hernon has been reported as saying that this could involve introducing flow restrictions on pipes.

What Are The Alternatives To Physically Restricting Water? 

Although Thames Water has discussed restricting water flow to the datacentres, the company has said it would prefer to take a collaborative approach with datacentres, encouraging them to look at water reusage and recycling options on-site. This could involve reusing final effluent from the company’s sewage treatment works or intercepting surface water before it reaches treatment plants.

Thames Water first raised the issue of datacentres using raw water in August 2022. At the time, a drought had led the company to introduce a hosepipe ban affecting 15 million customers in the southeast (including London and the Thames Valley area), bringing the issue of how much drinking water datacentres were using into sharp focus.

In Other Countries 

Although water is the traditional cooling method for data centres, some other countries and regions also use different cooling methods depending on factors like climate, local regulations, and available resources. For example, some common cooling methods include air cooling, or specialised cooling solutions like chilled water systems. Some recent experiments in data centre cooling have also included immersing servers in liquid / engineered fluid, i.e. immersion cooling (which Microsoft now uses at its datacentre in Quincy, WA in the US), and underwater datacentres such as Microsoft’s Project Natick. Other recent and innovative data centre cooling ideas have included a decentralised model whereby homeowners are incentivised to have business servers attached to their home water tanks for cooling.

Criticism 

Even though it would make sense for datacentres to get their large water requirements from somewhere other than the drinking water supply, some have criticised Thames Water for scapegoating the datacentre industry when the water company doesn’t appear to be doing much about losing 600 million litres of water a day (nearly a quarter of its daily supplies) through leaks.

What Does This Mean For Your Business? 

Pressures such as climate change, the growth of the digital society, cloud computing, the IoT, and the newer pressures of widescale use of generative AI have all fed the demand for more datacentres and have exacerbated the cooling challenges they face. Although some innovative alternatives are being tried, datacentres predominantly have huge water and power requirements and, as the Thames Water example shows, the fact that they tap into drinking water can be a big problem in droughts or at times of peak general demand. Businesses require reliable computing and access to their cloud resources as well as water (and power), illustrating the importance to the wider economy and society of finding a solution that enables data centres to function reliably while not negatively impacting other infrastructure, businesses, and the economy or, indeed, climate targets.

Using raw water may be one alternative that could help (as could fixing leaks, as some critics argue). Some other methods, such as immersion cooling or underwater datacentres, look promising, but these may take some time to scale up. AI may also prove helpful in improving the management of datacentre cooling.

There are, in fact, many methods that could ultimately all help tackle the problem, such as optimising airflow management, implementing intelligent monitoring systems, utilising computational fluid dynamics simulations, and exploring innovative architectural designs. All of these could help by enhancing airflow efficiency, preventing hotspots, improving heat dissipation, proactively adjusting cooling parameters, informing cooling infrastructure design, and dynamically adapting to workload demands to meet the modern cooling challenges faced by data centres.
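As a rough illustration of the “intelligent monitoring” and “proactively adjusting cooling parameters” ideas above, the minimal sketch below (in Python, with entirely hypothetical sensor names, setpoints, and thresholds rather than any particular vendor’s system) shows how cooling output could be scaled in proportion to rack inlet temperature instead of running at full output all the time:

```python
# Minimal illustrative sketch of dynamic cooling adjustment.
# All sensor names, setpoints, and thresholds are hypothetical examples.

TARGET_INLET_C = 24.0   # desired rack inlet temperature
MAX_INLET_C = 27.0      # above this, cooling runs at full output
MIN_COOLING_PCT = 20.0  # keep some baseline airflow
MAX_COOLING_PCT = 100.0


def cooling_output(inlet_temp_c: float) -> float:
    """Return cooling output (%) proportional to how far the inlet
    temperature sits between the target and the maximum allowed."""
    if inlet_temp_c <= TARGET_INLET_C:
        return MIN_COOLING_PCT
    if inlet_temp_c >= MAX_INLET_C:
        return MAX_COOLING_PCT
    fraction = (inlet_temp_c - TARGET_INLET_C) / (MAX_INLET_C - TARGET_INLET_C)
    return MIN_COOLING_PCT + fraction * (MAX_COOLING_PCT - MIN_COOLING_PCT)


if __name__ == "__main__":
    for reading in (22.5, 24.5, 26.0, 28.1):  # example sensor readings
        print(f"inlet {reading:.1f} C -> cooling {cooling_output(reading):.0f}%")
```

The point of such a loop is simply that cooling (and therefore water and power use) tracks actual demand rather than worst-case assumptions; real systems would layer far more sophisticated controls on top of this idea.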

For Thames Water, the idea of working collaboratively with datacentres rather than simply imposing water restrictions sounds like a sensible way forward, because datacentre cooling is a challenge that now affects all of us and needs intelligent solutions that work and can be put into practice soon.

Tech Trivia : Did You Know? This Week in History …

The Wonderful Wizard of Woz!

Steve Wozniak – business partner of Steve Jobs and co-founder of Apple – was born on August 11, 1950.

Most people know that Apple went on to become one of the most valuable companies in the world and, even though Steve left the company way back in 1985, he went on to contribute to numerous other projects and charities without ever diminishing his love of computing and engineering as he got older.

However, some of the details of his earlier life before 1985 are worthy of note, so here are 10 fun facts about this energetic engineer:

1 – He was born in San Jose, California to Francis Jacob Wozniak and Margaret Louise Wozniak. He possibly got his love of engineering from his father, who was an engineer for Lockheed and was of Polish descent. Wozniak is a common surname in Poland, and it’s derived from the Polish word “woźny,” which means a driver or a carriage man.

2 – Steve Wozniak (‘Woz’) is a well-known fan of Star Trek and has often spoken about how the show influenced him as a child. He has credited Star Trek with inspiring him to think about the future and the possibilities of technology.

3 – He has talked about his prosopagnosia, also known as ‘face blindness’. Prosopagnosia is a cognitive disorder that affects a person’s ability to recognise faces, including faces of people they know well.

4 – Woz attended the University of Colorado Boulder for a year, from 1968 to 1969, studying electrical engineering before being expelled for hacking into the university’s computer system! This was part of Wozniak’s early interest in computers and electronics. He later went back to university (see fact 5).

5 – Steve Wozniak attended the University of California, Berkeley, but he dropped out in 1972 to work with Steve Jobs. They later co-founded Apple Inc. in 1976.

6 – Like Steve Jobs, Steve Wozniak also worked at Hewlett-Packard, from 1971 to 1976. During his time at HP, Wozniak worked as an engineer in the calculator division. It was during this period that he began attending meetings of the ‘Homebrew Computer Club’ with Jobs. He developed the early design of what would become the Apple I computer while still employed at HP. Wozniak actually offered his design for the Apple I to HP while he was still an employee, but the company declined his offer.

7 – Wozniak left HP in 1976 to co-found Apple Inc. with Steve Jobs and Ronald Wayne. Apple revolutionized the tech industry with a series of innovative products including the Apple I and II computers, the Macintosh, the iPod, the iPhone, and the iPad, transforming personal computing, music, and mobile communications, and (as we all know), becoming one of the most valuable companies in the world.

8 – To gather funds for their initial Apple prototype, which would eventually evolve into the Apple I, both Wozniak and Jobs had to make sacrifices. Jobs sold his Volkswagen van, while Wozniak decided to sell his HP scientific calculator. These sales collectively generated over $1,000, providing the necessary capital to kickstart their venture.

9 – In 1981, he was involved in a crash when the plane he was piloting stalled while taking off. Wozniak suffered several injuries, including a concussion. The concussion caused him to lose his short-term memory, meaning he couldn’t remember things that had happened since the crash. This memory loss lasted for several weeks, but Wozniak eventually made a full recovery.

The crash had a significant impact on Wozniak’s life. He took time off from Apple to recover and also began to reevaluate his life and his priorities, which eventually led him to reduce his role at Apple and focus on other interests and projects.

10 – Later that same year (1981) he decided to go back to school to complete his degree at the University of California, Berkeley. He re-enrolled under the pseudonym Rocky Raccoon Clark to avoid attention and earned his Bachelor of Science degree in Electrical Engineering and Computer Science in 1986. The university also awarded him an honorary Doctor of Engineering degree in 2000.

So, quite a few inspirations there from his formative years before Apple became massive, not least of which is that, when it comes to university, it’s never too late to finish what you started!

Tech Tip – How To Quickly Check If Any Of Your Passwords Have Been Leaked

Google’s updated Password Manager in Chrome allows you to check if any of your passwords have been compromised or involved in a data breach. Here’s how to use it:

– In Chrome, click on the three dots (top right).

– Select ‘Passwords’ (on iOS) or ‘Google Password Manager’ (in the desktop browser).

– Click on ‘Check Now’.

– Google will list which (if any) passwords have been compromised and need to be changed.
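For anyone who prefers to check programmatically (e.g. for a batch of service-account passwords), a similar check can be sketched against the public ‘Have I Been Pwned’ Pwned Passwords range API, which only ever receives the first five characters of the password’s SHA-1 hash, so the password itself never leaves your machine. This is an illustrative sketch, not part of Google’s Password Manager:

```python
# Sketch: check whether a password appears in known breach data via the
# Have I Been Pwned "Pwned Passwords" range API (k-anonymity: only the
# first 5 characters of the SHA-1 hash are sent over the network).
import hashlib
import urllib.request


def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; find our suffix, if present.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    pw = "correct horse battery staple"  # example only; never hard-code real passwords
    hits = breach_count(pw)
    print("Leaked!" if hits else "Not found in known breaches.", hits)
```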

Tech Insight : What Are ‘Deepfake Testers’?

Here we look at what deepfake videos are, why they’re made, and how they might be quickly and easily detected using a variety of deepfake detection and testing tools.

What Are Deepfake Videos? 

Deepfake videos are a kind of synthetic media created using deep learning and artificial intelligence techniques. Making a deepfake video involves manipulating or superimposing existing images, videos, or audio onto other people or objects to create highly realistic (but typically fake) content. The term “deepfake” comes from the combination of “deep learning” and “fake.”

Why Make Deepfake Videos? 

People create deepfake videos for various reasons, driven by both benign and malicious intentions. Here are some of the main motivations behind the creation of deepfake videos:

– Entertainment and art. Deepfakes can be used as a form of artistic expression or for entertainment purposes. AI may be used, for example, to create humorous videos, mimic famous movie scenes with different actors, or explore creative possibilities.

– Special effects and visual media. In the film and visual effects industry, deepfake technology is often used to achieve realistic visual effects, such as de-aging actors or bringing deceased actors back to the screen (a contentious point at the moment, given the actors’ strike over AI fears). That said, some sportspeople, actors and celebrities have embraced the technology and are allowing their deepfake identities to be used by companies. Examples include Lionel Messi with Lay’s crisps and Singapore celebrity Jamie Yeo agreeing a deal with financial technology firm Hugosave.

– Education and research. Deepfakes can be used for research and educational purposes, helping researchers, academics, and institutions study and understand the capabilities and limitations of AI technology.

– Memes and internet culture. In recent times, deepfakes have become part of internet culture and meme communities, where users create and share entertaining or humorous content featuring manipulated faces and voices.

– Face swapping and avatar creation. Some people use deepfakes to swap faces in videos, such as putting their face on a character in a movie or video game or creating avatars for online platforms.

– Satire and social commentary. Deepfake videos are also made to satirise public figures or politicians, creating humorous or critical content to comment on current events and societal issues.

– Privacy and anonymity. In some cases, people may use deepfakes to protect their privacy or identity by concealing their face and voice in videos.

– Spreading misinformation and disinformation. Unfortunately, deepfake technology has been misused to spread misinformation, fake news, and malicious content. Deepfakes can be used to create convincing videos of individuals saying or doing things they never did, including political figures, leading to potential harm, defamation, and the spread of falsehoods.

– Fraud and scams. This is a very worrying area, as criminals can now use deepfakes for fraudulent activities, e.g. impersonating someone in video calls to deceive or extort others. For example, deepfake testing company Deepware says: “We expect destructive use of deepfakes, particularly as phishing attacks, to materialise very soon”.

What Are Deepfake Testers? 

With AI deepfakes becoming more convincing and easier to produce thanks to rapidly advancing AI developments and many good AI video, image, and voice services widely available online (many for free), tools that can quickly detect deepfakes have become important. In short, deepfake testers are online tools that can be used to scan any suspicious video to discover if it’s synthetically manipulated. In the case of deepfakes made to spread misinformation and disinformation or for fraud and scams, these can be particularly valuable tools.

How Do They Work? 

For the user, deepfake testers typically involve copying and pasting the URL of a suspected deepfake into the online deepfake testing tool and hitting the ‘scan’ button to get a quick opinion about whether it’s likely to be a deepfake video.
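From a developer’s perspective, that ‘paste a URL and scan’ flow typically maps onto a simple REST call. The sketch below is purely illustrative: the endpoint, field names, and response shape are hypothetical placeholders, not the documented API of any of the tools listed later in this article:

```python
# Hypothetical sketch of submitting a suspect video URL to a generic
# deepfake-scanning REST API. The endpoint, credential, and field names
# are invented placeholders, not a real service's documented interface.
import json
import urllib.request

API_BASE = "https://deepfake-scanner.example.com/v1"  # placeholder URL
API_KEY = "YOUR_API_KEY"                               # placeholder credential


def scan_video(video_url: str) -> dict:
    payload = json.dumps({"url": video_url}).encode("utf-8")
    req = urllib.request.Request(
        f"{API_BASE}/scan",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # e.g. {"verdict": "likely_fake", "confidence": 0.91}
        return json.load(resp)


if __name__ == "__main__":
    print(scan_video("https://example.com/suspicious-clip.mp4"))
```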

Behind the scenes, there are a number of technologies used by deepfake testers, such as the following (a rough illustrative code sketch of one simple check appears after the list):

– Photoplethysmography (PPG), for detecting changes in blood flow, because deepfake faces don’t give out these signals. This type of detection is more difficult if the deepfake video is pixelated.

– Eye movement analysis. This is because deepfake eyes tend to be divergent, i.e. they don’t look at a central point like real human eyes do.

– Lip Sync Analysis can help highlight a lack of audio and visual synchronisation, something which is a feature of deepfakes.

– Facial landmark detection and tracking algorithms to assess whether the facial movements and expressions align realistically with the audio and overall context of the video.

– Testing for visual irregularities, e.g. unnatural facial movements, inconsistent lighting, or strange artifacts around the face.
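To make the ‘visual irregularities’ idea a little more concrete, here is a deliberately crude sketch (using OpenCV’s bundled Haar face detector and an arbitrary threshold, so it is a toy heuristic rather than a production detector) that flags frames where the detected face jumps implausibly far between consecutive frames:

```python
# Crude illustrative sketch: flag abrupt frame-to-frame changes in the
# detected face region of a video, one weak signal of possible manipulation.
# Uses OpenCV's bundled Haar cascade; the threshold is an arbitrary example.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def face_centre(frame):
    """Return the centre (x, y) of the largest detected face, or None."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return (x + w / 2, y + h / 2)


def suspicious_frames(path, jump_threshold=60.0):
    """Return indices of frames where the face centre 'jumps' too far."""
    cap = cv2.VideoCapture(path)
    flagged, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        centre = face_centre(frame)
        if prev is not None and centre is not None:
            dist = ((centre[0] - prev[0]) ** 2 + (centre[1] - prev[1]) ** 2) ** 0.5
            if dist > jump_threshold:  # face "teleported" between frames
                flagged.append(idx)
        if centre is not None:
            prev = centre
        idx += 1
    cap.release()
    return flagged


if __name__ == "__main__":
    print(suspicious_frames("suspect_clip.mp4"))  # hypothetical local file
```

Real testers combine many such signals (PPG, eye and lip analysis, landmark tracking) and weigh them with trained models, but the basic idea is the same: look for physical or temporal consistency that genuine footage has and synthetic footage often lacks.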

Examples 

Some examples of deepfake testers include:

DeepwareAI 

This is only for detecting AI-generated face manipulations and can be used via the website, an API key, or in an offline environment via an SDK. There is also an Android app. There is a maximum limit of 10 minutes for each video.

Intel’s FakeCatcher

With a reported 96% accuracy rate, Intel’s deepfake detection platform, introduced last year, was billed as “the world’s first real-time deepfake detector that returns results in milliseconds.” Using Intel hardware and software, it runs on a server and interfaces through a web-based platform.

Microsoft’s Video Authenticator Tool

Announced 3 years ago, this deepfake detecting tool uses advanced AI algorithms to detect signs of manipulation in media and provides users with a real-time confidence score. The tool was created using a public dataset from FaceForensics++ and was tested on the DeepFake Detection Challenge Dataset, both of which are key datasets for training and testing deepfake detection technologies.

Sentinel

This AI-based protection platform is used by governments, defence agencies, and enterprises. Users upload their digital media through the website or API, whereupon Sentinel uses advanced AI algorithms to automatically analyse the media. Users are then given a detailed report of its findings.

DeepFake-O-Meter

This is an open platform allowing users to upload a video (maximum file size 50MB), input their email address, and get an assessment of whether the video is fake.

DuckDuckGoose DeepDetector Software 

This is fully automated deepfake detection software for videos and images. It detects deepfake videos and images in real time and uses explainable, AI-powered output to help users understand how a detection was made.

WeVerify

This project aims to develop intelligent human-in-the-loop content verification and disinformation analysis methods and tools whereby social media and web content is analysed and contextualised within the broader online ecosystem. The project offers, for example, a chatbot to guide users through the verification process, an open-source browser plugin, and other open source AI tools, as well as proprietary tools owned by the consortium partners.

What Does This Mean For Your Business? 

Deepfake videos can be fun and satirical; however, there are real concerns that, with AI advancements, deepfake videos are being made to spread misinformation and disinformation. Furthermore, fraud and scam deepfakes can be incredibly convincing and, therefore, dangerous.

Political interference, such as spreading videos of world leaders and politicians saying things they didn’t say, plus using videos to impersonate someone to deceive or extort, are now very real problems. With it being so difficult to tell for sure just by watching a video whether it’s fake or not, these deepfake testing tools can have real value, both as a safety measure for businesses and for anyone who needs a fast way to check out their suspicions.

Deepfake testers, therefore, can contribute to cybercrime prevention and the countering of fake news. The issue of deepfakes as a threat is only going to grow, so the hope is that, as deepfake videos become ever more sophisticated, the detection tools can keep up and continue to tell with certainty whether a video is fake.

Featured Article : UK Gov Pushing To Spy On WhatsApp (& Others)

The recent amendment to the Online Safety Bill, which means a compulsory report must be written for Ofcom by a “skilled person” before encrypted app companies are forced to scan messages, has led to even more criticism of this rather controversial bill, which would bypass security in apps and give the government (and therefore any number of people) more access to sensitive and personal information.

What Amendment? 

In the House of Lords debate, which was the final session of the Report Stage and the last chance for the Online Safety Bill to be amended before it becomes law, Government minister Lord Parkinson amended the bill to require that a report be written for Ofcom by a “skilled person” (appointed by Ofcom) before powers can be used to force a provider / tech company (e.g. WhatsApp or Signal) to scan its messages. The stated purpose of scanning messages using the powers of the Online Safety Bill is (ostensibly) to uncover child abuse images.

The amendment states that “OFCOM may give a notice under section 111(1) to a provider only after obtaining a report from a skilled person appointed by OFCOM under section 94(3).” 

Prior to the amendment, the report had been optional.

Why Is A Compulsory Report Stage So Important? 

The amendment says that the report is needed before companies can be forced to scan messages “to assist OFCOM in deciding whether to give a notice…. and to advise about the requirements that might be imposed by such a notice if it were to be given”. In other words, the report will assess the impact of scanning on freedom of expression and privacy, and explore whether other, less intrusive technologies could be used instead.

It is understood, therefore, that the report’s findings will be used to help decide whether to force a tech firm to scan messages. Under the detail of the amendment, a summary of the report’s findings must be shared with the tech firm concerned.

Reaction 

Tech companies may be broadly in agreement with the aims of the bill. However, encrypted messaging operators (e.g. WhatsApp, Signal, and others) have always opposed the detail of the bill that would force them to scan user messages before they are encrypted (client-side scanning). Operators say that this completely undermines the privacy and security of encrypted messaging, and they object to the idea of having to run government-mandated scanning services on their devices. Also, they argue that this could leave their apps more vulnerable to attack.

The latest amendment, therefore, has not changed this situation for the tech companies and has led to more criticism and more objections. Many objections have also been aired by campaign and rights groups such as Index on Censorship and The Open Rights Group, who have always opposed what they call the “spy clause” in the bill. For example:

– The Ofcom-appointed “skilled person” could simply be a consultant or political appointee, and having these people oversee decisions about free speech and privacy rights would not amount to effective oversight.

– Judicial oversight should be a bare minimum and a report written by just a “skilled person” wouldn’t be binding and would lack legal authority.

Other groups, however, such as the NSPCC, have broadly backed the bill in terms of finding ways to make tech firms mitigate the risks of child sexual abuse when designing their apps or adding features, e.g. end-to-end encryption.

Another Amendment 

Another House of Lords amendment to the bill requires Ofcom to look at the possible impact of the use of technology on journalism and the protection of journalistic sources. Under the amendment, Ofcom would be able to force tech companies to use what’s been termed as “accredited technology” to scan messages for child sexual abuse material.

This has also been met with similar criticisms over the idea of government-mandated scanning technology’s effects on privacy, freedom of speech, and potentially being used as a kind of monitoring and surveillance. WhatsApp, Signal, and Apple have all opposed the scanning idea, with WhatsApp and Signal reportedly indicating that they would not comply.

Breach Of International Law? 

Clause 9(2) of the Online Safety Bill, which requires platforms to prevent users from “encountering” certain “illegal content”, has also been soundly criticised recently. This clause means that platforms which host user-generated content will need to immediately remove any such content (which covers a broad range), or face considerable fines, blocked services, or even jail for executives. Quite apart from the technical and practical challenges of achieving this effectively at scale, criticisms of the clause include that it threatens free speech in the UK and that it lacks the detail needed for legislation.

Advice provided by The Open Rights Group suggests that the clause may even be a breach of international law, in that there could be “interference with freedom of expression that is unforeseeable”, and that it goes against the current legal order on platforms.

It’s also been reported that Wikipedia could withdraw from the UK over the rules in the bill.

Investigatory Powers Act Objections (The Snooper’s Charter) 

Suggested new updates to the Investigatory Powers Act (IPA) 2016 (sometimes called the ‘Snooper’s Charter’) have also come under attack from tech firms, not least Apple. For example, the government wants messaging services, e.g. WhatsApp, to clear security features with the Home Office before releasing them to customers. The update to the IPA would mean that the UK’s Home Office could demand, with immediate effect, that security features are disabled, without telling the users/the public. Currently, a review process with independent oversight (with the option of appeal by the tech company) is needed before any such action could happen.

The Response 

The response from tech companies has been swift and negative, with Apple threatening to remove FaceTime and iMessage from the UK if the planned update to the Act goes ahead.

Concerns about granting the government the power to secretly remove security features from messaging app services include:

– It could allow government surveillance of users’ devices by default.

– It could reduce security for users, seriously affect their privacy and freedom of speech, and could be exploited by adversaries, whether they are criminal or political.

– Building backdoors into encrypted apps essentially means there is no longer end-to-end encryption.

Apple 

Apple’s specific response to the proposed updates/amendments (which will be subject to an eight-week consultation anyway) is that:

– It refuses to make changes to security features specifically for one country that would weaken a product for all users globally.

– Some of the changes would require issuing a software update, which users would have to be told about, thereby stopping changes from being made secretly.

– The proposed amendments threaten security and information privacy and would affect people outside the UK.

What Does This Mean For Your Business? 

There’s broad agreement about the aims of the UK’s Online Safety Bill and IPA in terms of wanting to tackle child abuse, keep people safe, and even make tech companies take more responsibility and measures to improve safety. However, these are global tech companies where UK users represent only a small part of their total user base, and ideas like building back doors into secure apps, running government-approved scanning of user content, and using reports written by consultants/political appointees to support scanning all go against ideas of privacy, one of the key features of apps like WhatsApp.

Allowing governments access into apps and granting them powers to turn off security ‘as and when’ raises issues and suspicions about free speech, government monitoring and surveillance, legal difficulties, and more. In short, even though the UK government wants to press ahead with the new laws and amendments, there is still a long way to go before there is any real agreement with the tech companies. In fact, it looks likely that they won’t comply, and some, like WhatsApp, have simply said they’ll pull out of the UK market, which could be very troublesome for UK businesses, charities, groups and individuals.

The tech companies also have a point in that it seems unreasonable to expect them to alter their services just for one country in a way that could negatively affect their users in other countries. As some critics have pointed out, if the UK wants to be a leading player on the global tech stage, alienating the big tech companies may not be the best way to go about it. It seems that a lot more talking and time will be needed to get anywhere near real-world workable laws and, at the moment, with the UK government being seen by many as straying into areas that are alarming rights groups, some tech companies are suggesting the government ditch their new laws and start again.

Expect continued strong resistance from tech companies going forward if the UK government doesn’t slow down or re-think many aspects of these new laws – watch this space.

Tech News : $900,000 For Netflix AI Product Manager

Leading streamer Netflix has added more fuel to the fire of the actors’ and writers’ union strike by advertising for an AI product manager with a salary of up to $900,000.

Could Have Paid For Actors 

With US actors and writers who are members of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) having been on strike for weeks, partly to protect their jobs from AI, the mega-salary job ad has been met with some criticism. For example, SAG-AFTRA has argued that 87 per cent of its membership earn less than $26,000 a year, and some union members have added that the eye-watering $300,000 to $900,000 for one AI job could have paid and supported 35 actors and their families instead.

The Reasons For The Strike 

Some of the main reasons for the dual strike of actors and writers include:

– Worries that background performers could be scanned and paid one day’s pay, while their scanned image is then owned by film companies who can use the person’s image or likeness (reproduced with AI) for unlimited projects in the future without the performer’s consent and without compensation.

– The need for wage increases.

– Streaming services like Prime, Netflix, and Disney not paying enough in “residuals” (the royalties earned from repeat broadcasts of films or TV shows).

– Concerns that AI will be used to write first drafts of scripts and screenwriters will only be hired at a lower rate to bring them up to scratch.

The Job 

The $300,000 to $900,000 salary Machine Learning Platform manager job at Netflix is based at its Los Gatos office or can be remote-based on the West Coast. The advert specifies that the right candidate will be someone with “a technical background in engineering and/or machine learning”. Netflix says that “Machine Learning/Artificial Intelligence is powering innovation”, with the “Machine Learning Platform (MLP)” providing “the foundation for all of this innovation” and the new Product Management role being used to “increase the leverage of our Machine Learning Platform”.

What Does This Mean For Your Business? 

With actors (and writers) out on strike essentially to stop AI from replacing them and to get more residuals from streaming services, Netflix posting a high-salary AI job offer is felt by many actors to be a double kick in the teeth.

The use of AI in film and TV is already here but, at a time when the union needs to negotiate its terms between actors and the studios, the studios are focusing on how they plan to expand their use of AI. Understandably, the fears of actors about AI have been brought to a head. For example, with writer, actor and comedian Rob Delaney describing the AI job advert as “ghoulish” and actor Brian Cox being widely quoted as saying “the worst aspect is the whole idea of AI and what AI can do to us”, it’s clear that there’s still some way to go before any kind of agreement can be reached between the studios and the unions and the major disruption to the industry can be stopped.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
