Sustainability-in-Tech : Underwater Data-Centres Vulnerable to Soundwaves
A study by cybersecurity and robotics researchers at the University of Florida and the University of Electro-Communications has revealed how powerful sound waves could disrupt the operation of underwater data-centres.
Why Underwater Data-Centres?
With demand for data-centres growing due to the rise of cloud computing and AI, and with data-centres producing large amounts of heat, one idea from data-centre operators in recent years has been to submerge servers in sealed metal vessels beneath the sea. Doing so harnesses the natural cooling properties of ocean water and can dramatically cut cooling costs and carbon emissions. For example, back in 2018, Microsoft submerged 12 racks holding 864 servers beneath the waves off Scotland as part of the experimental ‘Project Natick’.
Soundwave Threat
However, news from a group of cybersecurity and robotics researchers at the University of Florida and the University of Electro-Communications in Japan has revealed that underwater data-centres have a critical vulnerability – the potential to be seriously affected by underwater sounds. There is also the added complication that if servers are submerged in metal boxes below the sea and components are broken or damaged (e.g. by sound or other means), repairing them is a complicated (and costly) operation.
As highlighted by Md Jahidul Islam, Ph.D., a professor of electrical and computer engineering at UF and author of the study: “The main advantages of having a data center underwater are the free cooling and the isolation from variable environments on land,” but “these two advantages can also become liabilities, because the dense water carries acoustic signals faster than in air, and the isolated data center is difficult to monitor or to service if components break.”
Why Is Sound A Threat?
The study involved submerging test data-centre-style servers in a laboratory water tank, and in a lake on the UF campus, with a speaker in the water playing sound tuned to five kilohertz – a frequency chosen to make hard drives vibrate uncontrollably, roughly a D note one octave above the highest D on a piano.
The results showed that sound waves generated from just 20 feet away could crash networks and degrade their reliability. In the wild, similarly loud and potentially damaging sound waves could be generated by marine life, submarine sonar systems, industrial activity (e.g. drilling), earthquakes and other seismic activity.
The study appears to have shown, therefore, that even something as simple as an underwater speaker playing a D note could have the potential to seriously disrupt or damage server operations in submerged data centres.
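The relationship between the 5 kHz tone and the piano's range can be checked with the standard equal-temperament formula, f = 440 × 2^((n−69)/12), where n is the MIDI note number and A4 (n = 69) is 440 Hz. The snippet below is purely illustrative and is not from the study itself:

```python
# Equal-temperament frequency of a MIDI note (A4 = MIDI note 69 = 440 Hz).
def note_frequency(midi_note: int, a4_hz: float = 440.0) -> float:
    return a4_hz * 2 ** ((midi_note - 69) / 12)

# D8 (MIDI 110) sits just above the highest piano key, C8 (MIDI 108),
# and lands close to the 5 kHz tone used in the experiments.
d8 = note_frequency(110)   # ~4699 Hz
c8 = note_frequency(108)   # ~4186 Hz
print(f"D8 = {d8:.0f} Hz, C8 = {c8:.0f} Hz")
```

This shows why a simple "D note" played through an underwater speaker lands near the resonant range of spinning hard drives.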
Deliberate State-Sponsored Attacks
One key worry highlighted by the study is how deliberate sound-injection (acoustic) attacks, e.g. by other states as an act of sabotage, could be a real threat to underwater data-centres. For example, as highlighted by UF Professor of Computer and Information Science and Engineering Sara Rampazzi, Ph.D., acoustic attacks on a submerged data-centre could be subtle: “The difference here is an attacker can manipulate the data centre in a controlled way. And it’s not easy to detect”.
Other Defences Tested
As part of the study, the researchers tested different defences for the submerged servers. For example, sound-proof panels were tried but raised the servers’ temperature too much, thereby countering the advantages of cooling with water. Also, active noise cancellation was found to be too cumbersome and expensive to add to every data-centre.
Algorithm
To counter the threat of soundwaves to underwater data-centres, the research team developed a software-based solution in the form of a machine-learning algorithm that can identify the pattern of disruption caused by acoustic attacks. It’s anticipated that improvements to this algorithm could minimise the damage to networks by reallocating computational resources before an attack can crash the system.
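The researchers have not published their algorithm, so the following is only a hypothetical sketch of the general idea: treating acoustic disruption as an anomaly in disk I/O latency, flagged with a rolling z-score so that workloads could then be moved off affected drives. The function name, thresholds, and latency values are all illustrative assumptions, not details from the study:

```python
from collections import deque
from statistics import mean, stdev

def detect_acoustic_anomaly(latencies_ms, window=20, z_threshold=4.0):
    """Flag sample indices whose disk-latency z-score exceeds the threshold,
    relative to a rolling window of recent 'normal' samples (hypothetical
    sketch; not the published algorithm)."""
    history = deque(maxlen=window)
    flagged = []
    for i, sample in enumerate(latencies_ms):
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (sample - mu) / sigma > z_threshold:
                flagged.append(i)   # candidate acoustic disruption
                continue            # keep anomalies out of the baseline
        history.append(sample)
    return flagged

# Steady ~5 ms latency with normal jitter, then a burst consistent
# with resonant vibration of the drive heads.
trace = [5.0, 5.2, 4.8] * 10 + [80.0, 95.0, 110.0] + [5.0, 5.2, 4.8] * 4
print(detect_acoustic_anomaly(trace))  # -> [30, 31, 32]
```

In a real deployment, the flagged indices would feed a scheduler that migrates workloads away from the affected storage before the disruption cascades into a crash.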
What Does This Mean For Your Business?
With Microsoft’s submerged server tests showing very positive results in terms of low failure rates and dramatically reduced cooling costs, underwater data-centres appear to be something that will be put into practice in the near future. However, the potential threat to their operation posed by sound had not been fully recognised or explored until this research.
The study has therefore been valuable in raising awareness of the threat. For example, in addition to demonstrating how server disruption by sound can happen inadvertently (e.g. from a loud submarine sonar blast), it has also raised awareness of how data-centres could be vulnerable to deliberate acoustic attacks as acts of sabotage. Not only does the research have value in highlighting the threats, but it has also enabled the development of what appears to be an effective solution, i.e. an algorithm.
Finding a way to protect underwater data-centres from acoustic attacks helps future-proof the idea, enabling a rollout that will benefit data-centre operators (e.g. with lower costs, better heat management, and expansion of much-needed capacity). It also provides protection for all the businesses, organisations, governments, and economies for whom the smooth operation and expansion of the cloud, and now AI, are vital to their operations, prosperity, and plans. This study, therefore, contributes towards both healthier economies and a healthier planet by reducing data-centre carbon emissions.
Video Update : LinkedIn Page : Featured Posts
This video update explains a new feature that lets you add featured posts to your LinkedIn Page …
Tech Tip – Use File Explorer’s “Group By” Feature for Better File Organisation
Organising files in File Explorer can make it easier to manage and locate documents. The Windows “Group By” feature allows you to categorise files by various attributes, such as date, type, size, or name. Here’s how to use it:
– Open File Explorer and navigate to the folder you want to organise.
– Right-click in an empty space within the folder, hover over Group by, and select an attribute to group your files by (e.g. Date modified, Type, Size).
– The files in the folder will now be grouped according to the selected attribute, making it easier to sort and find specific files.
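For anyone who wants the same grouping outside File Explorer (e.g. in a report or script), a minimal Python sketch of "Group by > Type" might look like the following; the function name is our own and the approach is just one way to do it:

```python
from collections import defaultdict
from pathlib import Path

def group_by_type(folder: str) -> dict:
    """Group the files in a folder by extension, mirroring
    File Explorer's 'Group by > Type' view."""
    groups = defaultdict(list)
    for entry in sorted(Path(folder).iterdir()):
        if entry.is_file():
            # Normalise the extension so .TXT and .txt land together.
            key = entry.suffix.lower() or "(no extension)"
            groups[key].append(entry.name)
    return dict(groups)
```

Swapping `entry.suffix` for `entry.stat().st_size` or `entry.stat().st_mtime` would give the equivalent of grouping by Size or Date modified.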
Featured Article : Currys, Accenture, Microsoft & New ‘GPT-4o’
International omnichannel retailer of technology products and services, Currys, has selected Accenture and Microsoft to deliver the core cloud technology infrastructure that will enable it to leverage the latest generative AI technologies.
Accenture?
Accenture is a US multinational professional services company (IT services, cloud, data, AI, and consulting) headquartered in Dublin, that helps leading businesses, governments and other organisations build their digital core. Accenture says it has 742,000 “people serving clients” in more than 120 countries.
Why?
Accenture says its joint venture with Microsoft, Avanade (established back in 2000), will work closely with Currys to “modernise, secure and simplify its technology estate” with the intention of “enabling Currys to accelerate the adoption of Microsoft AI technologies such as Azure OpenAI Service”.
What Will It Do For Currys?
Currys says that using Microsoft’s AI technologies will enable it to “unlock value across every part of the business” bringing benefits like:
– Making it easier for customers to shop thanks to personalised, relevant product information and suggestions tailored to the consumer’s needs at the right moment.
– Improved customer retention and loyalty through the provision of improved post-sales experience and warranty services.
– A better experience for staff, who will be equipped with faster and easier access to information (including product availability, delivery costs, and add-on services) so they can better serve customers and identify potential cross-selling and upselling opportunities.
– Future growth and profitability through the integration of AI into marketing, HR, finance, and legal processes. Currys anticipates that this will increase productivity across core business functions and that AI could be used to create/reveal opportunities to improve omnichannel experiences.
Net Zero?
It’s also hoped that this transition to embracing AI will help accelerate Currys’ journey to meet its net zero emissions target before 2040 by moving nine existing data centres (including more than 2,000 servers and 200 applications) onto Azure, creating a more energy-efficient infrastructure.
Technological Leap
Alex Baldock, Group CEO of Currys plc said: “AI is the biggest technological leap of our lifetime. Currys exists to help everyone enjoy amazing technology, so as well as bringing the benefits of AI to millions of customers, we’ll do the same to our own business.”
Ralph Haupter, President (EMEA) at Microsoft said of its new deal with Currys: “By deploying the latest cloud and AI technologies, Currys can enhance the shopping experience for millions of customers, both in-store and online, whilst ensuring its 25,000 employees have the insights at their fingertips to unlock value across the entire business.”
Competition
What Currys hasn’t mentioned in its announcement about its deal with Microsoft and Accenture is that it will enable Currys to compete with other major retailers who are already leveraging AI technologies from Microsoft and Accenture. These include, for example, John Lewis Partnership, Argos (part of Sainsbury’s), Tesco, Amazon, and AO World.
Currys has faced mixed financial performance in recent years due to challenges like increased competition, supply chain disruptions, and changing consumer behaviour. Also, Currys has seen a decline in physical store sales but has tried to offset this with growth in its online sales. Efforts to streamline operations and cut costs have been part of their strategy to adapt to market conditions and improve financial stability, and the deal with Microsoft and Accenture could, therefore, be seen as part of this strategy.
OpenAI Announces “Omni” Model
Just four days after the Currys/Microsoft/Accenture announcement, OpenAI (a close partner of Microsoft) made another significant AI announcement with the release of its next-generation GPT-4o (“o” for “omni”) model – now available in ChatGPT. OpenAI says it is: “a step towards much more natural human-computer interaction” and that it “accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs.”
OpenAI has also been keen to stress how fast it is (compared to GPT-3.5 and GPT-4), saying: “It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.”
Omni’s key USPs include advanced contextual understanding, superior problem-solving skills, a broader knowledge base, and (apparently) robust ethical safeguards.
Here’s a brief summary of the key features of GPT-4o:
– Multimodal capabilities. GPT-4o can process and generate text, images, audio, and video, enabling diverse applications like image descriptions, video summaries, and interactive media experiences.
– Improved contextual understanding. It can maintain coherence over long conversations, making it highly effective for virtual assistants and other roles requiring extended interactions.
– Advanced problem-solving skills. OpenAI says GPT-4o offers enhanced reasoning, logic, and problem-solving abilities, suitable for tackling complex mathematical problems, data analysis, and scientific research.
– Real-time adaptability. Omni can adjust responses dynamically based on user feedback and changing contexts, improving personalisation and accuracy.
– A broader knowledge base, because it’s been trained on a larger, more diverse dataset, thereby enabling it to offer accurate and informed responses across a wide range of topics.
– Ethical and safe AI practices (according to OpenAI), incorporating advanced safety mechanisms to detect and mitigate harmful content, bias, and misinformation.
– Enhanced integration capabilities for easy embedding into various applications, such as chatbots, customer service platforms, and content creation tools.
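To make the multimodal point concrete, the sketch below builds a mixed text-and-image request body in the chat-completions style OpenAI documents for GPT-4o. It constructs the payload locally (no API key or network call), and the function name and example URL are our own illustrations:

```python
# Sketch of a mixed text-and-image request body in the chat-completions
# style documented for GPT-4o (built locally; no network call is made).
def build_multimodal_request(prompt: str, image_url: str) -> dict:
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

req = build_multimodal_request(
    "Describe this product photo.",
    "https://example.com/tv.jpg",  # illustrative URL
)
print(req["model"])  # gpt-4o
```

In a retail setting like Currys’, the same request shape could carry a customer’s product photo alongside a text question, which is exactly the kind of multimodal interaction GPT-4o is designed for.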
What Does This Mean For Your Business?
Currys’ collaboration with Microsoft and Accenture to integrate AI technologies into its operations is a strategic move aimed at transforming its business model and enhancing its competitive edge. By leveraging advanced AI solutions, Currys hopes to streamline its technology infrastructure, improve operational efficiency, unlock value, improve productivity, and deliver personalised customer experiences. Currys no doubt hopes that AI could help it turn around some of its underperformance of recent years and improve how its online business operates as it moves away from physical stores.
For Currys, the benefits are, therefore, many. For example, the more tailored and personalised shopping experiences that AI can bring could enhance customer satisfaction and loyalty. Also, improved post-sales services, facilitated by AI, could further boost customer retention. Additionally, equipping staff with AI-powered tools could help drive sales growth. Not forgetting the core functions, integrating AI into HR, finance, and legal processes could increase productivity for Currys and reveal new growth opportunities, particularly in enhancing omnichannel experiences.
It could also be noted that transitioning to a more energy-efficient infrastructure powered by Microsoft’s Azure could help Currys with its ambition of net zero emissions by 2040, helping the company to present a greener image.
This story also shows how (in the broader business landscape) AI is proving to be a significant advantage across various sectors. Companies using AI are being seen to streamline operations, enhance customer experiences, and make data-driven decisions more effectively. The ability of AI to process vast amounts of data and generate actionable insights is transforming industries from retail and finance to healthcare and logistics, providing a competitive edge to those who adopt it.
The recent launch of OpenAI’s GPT-4o also underscores the rapid advancements in AI technology. With its multimodal capabilities, GPT-4o looks set to be a versatile tool for diverse applications. Also, for many ChatGPT users, news that it’s extremely fast will be welcome, and its real-time adaptability, superior problem-solving skills and broad knowledge base may make it a very useful model for the many businesses that are increasingly reliant on generative AI to help with their productivity, innovation, efficiency, and customer engagement.
For OpenAI, the launch of GPT-4o could, of course, strengthen its position in what is already a highly competitive AI industry and could (probably for a brief period) set a new benchmark for competitors.
Tech Insight : What Are ‘Deadbots’?
Following warnings by ethicists at Cambridge University that AI chatbots made to simulate the personalities of deceased loved ones could be used to spam family and friends, we take a look at the subject of so-called “deadbots”.
Griefbots, Deadbots, Postmortem Avatars
The Cambridge study, entitled “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry” looks at the negative consequences and ethical concerns of adoption of generative AI solutions in what it calls “the digital afterlife industry (DAI)”.
Scenarios
As suggested by the title of the study, a ‘deadbot’ is a digital avatar or AI chatbot designed to simulate the personality and behaviour of a deceased individual. The Cambridge study used simulations and different scenarios to try and understand the effects that these AI clones trained on data about the deceased, known as “deadbots” or “griefbots”, could have on living loved ones if made to interact with them as part of this kind of service.
Who Could Make Deadbots and Why?
The research involved several scenarios designed to highlight the issues around the use of deadbots. For example, the possible negative uses of deadbots highlighted in the study included:
– A subscription app that creates a free AI re-creation of a deceased relative (a grandmother in the study), trained on their data, which can exchange text messages with the living loved one in a similar way to how the deceased used to (via WhatsApp), giving the impression that they are still around to talk to. The study scenario showed how the bot could be made to mimic the grandmother’s “accent and dialect when synthesising her voice, as well as her characteristic syntax and consistent typographical errors when texting”. However, the study showed how this deadbot service could also be made to output messages that include advertisements in the loved one’s voice, causing the loved one distress. The study also looked at how further distress could be caused if the app designers did not fully consider the user’s feelings around deleting the account and the deadbot, for example if no provision is made to allow them to say goodbye to the deadbot in a meaningful way.
– A service allowing a dying relative (e.g. a father and grandfather) to create their own deadbot so that younger relatives (i.e. children and grandchildren) can get to know them better after they’ve died. The study highlighted negative consequences of this type of service, such as the dying relative not getting consent from the children and grandchildren to be contacted by the deadbot, with the resulting unsolicited notifications, reminders, and updates leaving relatives distressed and feeling as though they were being ‘haunted’ or even ‘stalked’.
Examples of services and apps that already exist and offer to recreate the dead with AI include ‘Project December’, and apps like ‘HereAfter’.
Many Potential Issues
As shown by the examples in the Cambridge research (there were three main scenarios), the use of deadbots raises several ethical, psychological and social concerns. Some of the potential ways they could be harmful, unethical, or exploitative (along with the negative feelings they might provoke in loved ones) include:
– Consent and autonomy. As noted in the Cambridge study, a primary concern is whether the deceased gave consent for their personality, appearance, or private thoughts to be used in this way. Using someone’s identity without their explicit consent could be seen as a violation of their autonomy and dignity.
– Accuracy and representation. There is a risk that the AI might not accurately represent the deceased’s personality or views, potentially spreading misinformation or creating a false image that could tarnish their memory.
– Commercial exploitation. The study looked at how a deadbot could be used for advertising because the potential for commercial exploitation of a deceased person’s identity is a real concern. Companies could use deadbots for profit, exploiting a person’s image or personality without fair compensation to their estate or consideration of their legacy.
– Contractual issues. For example, relatives may find themselves in a situation where they are powerless to have an AI deadbot simulation suspended, e.g. if their deceased loved one signed a lengthy contract with a digital afterlife service.
Psychological and Social Impacts
The Cambridge study was designed to look at the possible negative aspects of the use of deadbots, an important part of which are the psychological and social impacts on the living. These could include, for example:
– Impeding grief. Interaction with a deadbot might impede the natural grieving process. Instead of coming to terms with the loss, people may cling to the digital semblance of the deceased, potentially leading to prolonged grief or complicated emotional states.
– Over-dependence. There’s also a risk that individuals might become overly dependent on the deadbot for emotional support, isolating themselves from real human interactions and not seeking support from living friends and family.
– Distress and discomfort. As identified in the Cambridge study, aspects of the experience of interacting with a simulation of a deceased loved one can be distressing or unsettling for some people, especially if the interaction feels uncanny or not quite right. For example, the Cambridge study highlighted how relatives may get some initial comfort from the deadbot of a loved one but may become drained by daily interactions that become an “overwhelming emotional weight”.
Potential for Abuse
Considering the fact that, as identified in the Cambridge study, people may develop strong emotional bonds with the deadbot AI simulations thereby making them particularly vulnerable to manipulation, one of the major risks of the growth of a digital afterlife industry (DAI) is the potential for abuse. For example:
– There could be misuse of the deceased’s private information (privacy violations), especially if sensitive or personal data is incorporated into the deadbot without proper safeguards.
– In the wrong hands, deadbots could be used to harass or emotionally manipulate survivors, for example, by a controlling individual using a deadbot to exert influence beyond the grave.
– There is also the real potential for deadbots to be used in scams or fraudulent activities, impersonating the deceased to deceive the living.
Emotional Reactions from Loved Ones
The psychological and social impacts of the use of deadbots as part of some kind of service to living loved ones, and/or the misuse of deadbots, could therefore lead to a number of negative emotional reactions. These could include:
– Distress due to the unsettling experience of interacting with a digital replica.
– Anger or frustration over the misuse or misrepresentation of the deceased.
– Sadness from a constant reminder of the loss that might hinder emotional recovery.
– Fear concerning the ethical implications and potential for misuse.
– Confusion over the blurred lines between reality and digital facsimiles.
What Do The Cambridge Researchers Suggest?
The Cambridge study led to several suggestions of ways in which users of this kind of service may be better protected from its negative effects, including:
– Deadbot designers being required to seek consent from “data donors” before they die.
– Products of this kind being required to regularly alert users about the risks and to provide easy opt-out protocols, as well as measures being taken to prevent the disrespectful uses of deadbots.
– The introduction of user-friendly termination methods, e.g. having a “digital funeral” for the deadbot. This would allow the living relative to say goodbye to the deadbot in a meaningful way if the account was to be closed and the deadbot deleted.
– As highlighted by Dr Tomasz Hollanek, one of the study co-authors: “It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations.”
What Does This Mean For Your Business?
The findings and recommendations from the Cambridge study shed light on crucial considerations that organisations involved in the digital afterlife industry (DAI) must address. As developers and businesses providing deadbot services, there is a heightened responsibility to ensure these technologies are developed and used ethically and sensitively. The study’s call for obtaining consent from data donors before their death underscores the need for clear consent mechanisms to be built in. This consent is not just a legal formality but a foundational ethical practice that respects the rights and dignity of individuals.
Also, the Cambridge team’s suggestion to implement regular risk notifications and provide straightforward opt-out options points to the need for greater transparency and user control in digital interactions. Incorporating these safeguards into service offerings could enhance user trust, with digital afterlife services companies perhaps positioning themselves as leaders in ethical AI practice. The introduction of a “digital funeral” to these services could also be a respectful and symbolic way to conclude the use of a deadbot, as well as being a sensitive way to meet personal closure needs, e.g. at the end of the contract.
The broader implications of the Cambridge study for the DAI sector include the need to navigate potential psychological impacts and prevent exploitative practices. As Dr Tomasz Hollanek from the study highlighted, the unintentional distress caused by these AI recreations can be profound, suggesting that their design and deployment strategies should really prioritise psychological safety and emotional wellbeing. This should involve designing AI that is not only technically proficient but also emotionally intelligent and sensitive to the nuances of human grief and memory.
Businesses in this field must also consider the long-term implications of their services on societal norms and personal privacy. The risk of commercial exploitation or disrespectful uses of deadbots could lead to public backlash and regulatory scrutiny, which could stifle innovation and growth in the industry. The Cambridge study, therefore, serves as an early but important guidepost for the DAI industry and has highlighted some useful guidelines and recommendations that could contribute to a more ethical and empathetic digital world.
Tech News : OpenAI To Boost Training With Stack Overflow Data
A partnership deal between OpenAI and Stack Overflow (the question-and-answer website for programmers and developers) will see Stack Overflow’s Q&A data used to train and improve AI model performance, potentially benefitting developers who use OpenAI’s products.
Stack Overflow
Stack Overflow is the world’s largest developer community, with more than 59 million questions and answers. OverflowAPI is the subscription-based API service that gives AI companies access to Stack Overflow’s public dataset so they can use it to train and improve their LLMs.
The Partnership
OpenAI says that its new partnership with Stack Overflow via OverflowAPI access will provide a way for OpenAI to give its users and customers the accurate and vetted data foundation that AI tools need to quickly find a solution to their problem. OpenAI says the deal will also mean that validated technical knowledge from Stack Overflow will be integrated directly into ChatGPT, thereby giving users “easy access to trusted, attributed, accurate, and highly technical knowledge and code backed by the millions of developers that have contributed to the Stack Overflow platform for 15 years.”
What They Both Get
OpenAI says being able to utilise Stack Overflow’s OverflowAPI product and the Stack Overflow data “will help OpenAI improve its AI models using enhanced content and feedback from the Stack Overflow community and provide attribution to the Stack Overflow community within ChatGPT to foster deeper engagement with content.”
The collaboration will also mean that Stack Overflow can utilise OpenAI models “as part of their development of OverflowAI and work with OpenAI to leverage insights from internal testing to maximize the performance of OpenAI models”.
This could help Stack Overflow to create better products for its own Stack Exchange community.
Prashanth Chandrasekar, CEO of Stack Overflow, said of the partnership: “Through this industry-leading partnership with OpenAI, we strive to redefine the developer experience, fostering efficiency and collaboration through the power of community, best-in-class data, and AI experiences.”
Not Everyone Is Happy About The Deal
Despite the positive noises by OpenAI and Stack Overflow about the deal, there appears to have been a mini rebellion among Stack Overflow users, with many removing or editing their questions and answers to stop them from being used to train AI. Many users have also highlighted how this appears to be an about-face by Stack Overflow from a long-standing policy of preventing the use of GenAI in the writing or rewording of any questions or answers posted on the site. Also, there have been reports that Stack Overflow’s moderators have been banning the rebellious users from the site and preventing high-popularity posts from being deleted.
What Does This Mean For Your Business?
The strategic partnership between OpenAI and Stack Overflow signifies a pivotal development in the integration of community-sourced knowledge and artificial intelligence. For businesses, this collaboration could herald a new era of enhanced technical solutions, more refined AI tools, and an enriched knowledge base, potentially reshaping the landscape of tech support and development.
For OpenAI, access to Stack Overflow’s vast repository of programming questions and answers through the OverflowAPI should mean a significant upgrade in the quality and relevance of the data used to train its models. This could translate into AI tools that are not only more accurate but also more attuned to the nuanced requirements of developers. Businesses using OpenAI’s products may find that these tools offer more precise and contextually appropriate solutions, thereby significantly reducing the time developers spend troubleshooting and refining code. This efficiency boost could accelerate project timelines and improve the cost-effectiveness of technical development teams.
Stack Overflow stands to benefit from this partnership by integrating OpenAI’s cutting-edge AI capabilities into its new product offerings, such as OverflowAI. This could enhance the user experience on Stack Overflow’s platforms, making them more intuitive and responsive to user needs. For businesses that rely on Stack Overflow for problem-solving and knowledge sharing, these improvements may lead to quicker resolutions of technical issues, enabling smoother and more continuous workflow.
However, the partnership has not been met with universal acclaim within the Stack Overflow community. The backlash from some users highlights concerns about the ethical use of community-sourced information. This rebellion sheds light on the growing pains associated with adapting user-generated content for AI training purposes without alienating the very community that generates it. For businesses, this underscores the importance of navigating ethical considerations and community relations as they implement AI solutions.