Video Update : LinkedIn Page : Featured Posts
This video update explains a new feature: how to add featured posts to your LinkedIn Page …
Tech Tip – Use File Explorer’s “Group By” Feature for Better File Organisation
Organising files in File Explorer can make it easier to manage and locate documents. The Windows “Group By” feature allows you to categorise files by various attributes, such as date, type, size, or name. Here’s how to use it:
– Open File Explorer and navigate to the folder you want to organise.
– Right-click in an empty space within the folder, hover over Group by, and select an attribute to group your files by (e.g. Date modified, Type, Size).
– The files in the folder will now be grouped according to the selected attribute, making it easier to sort and find specific files.
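The same grouping idea can also be scripted. Below is a minimal Python sketch (not a Windows feature, just an illustration) that groups the files in a folder by type (extension) or by a simple size band, much as File Explorer's "Group by" does:

```python
from collections import defaultdict
from pathlib import Path

def group_files(folder, key="type"):
    """Group files in a folder, mimicking File Explorer's 'Group by'.

    key can be "type" (group by extension) or "size" (small/large bands).
    Returns a dict mapping each group label to a list of file names.
    """
    groups = defaultdict(list)
    for entry in Path(folder).iterdir():
        if not entry.is_file():
            continue  # skip sub-folders
        if key == "type":
            label = entry.suffix.lower() or "(no extension)"
        elif key == "size":
            label = "small" if entry.stat().st_size < 1_000_000 else "large"
        else:
            raise ValueError(f"unsupported key: {key}")
        groups[label].append(entry.name)
    return dict(groups)
```

Calling `group_files(r"C:\Users\you\Documents", key="type")` would return the folder's files bucketed by extension, ready for sorting or reporting.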
Featured Article : Currys, Accenture, Microsoft & New ‘GPT-4o’
International omnichannel retailer of technology products and services, Currys, has selected Accenture and Microsoft to deliver the core cloud technology infrastructure that will enable it to leverage the latest generative AI technologies.
Accenture?
Accenture is a multinational professional services company (IT services, cloud, data, AI, and consulting), headquartered in Dublin, that helps leading businesses, governments and other organisations build their digital core. Accenture says it has 742,000 “people serving clients” in more than 120 countries.
Why?
Accenture says that Avanade, its joint venture with Microsoft (established back in 2000), will work closely with Currys to “modernise, secure and simplify its technology estate”, with the intention of “enabling Currys to accelerate the adoption of Microsoft AI technologies such as Azure OpenAI Service”.
What Will It Do For Currys?
Currys says that using Microsoft’s AI technologies will enable it to “unlock value across every part of the business” bringing benefits like:
– Making it easier for customers to shop, thanks to personalised, relevant product information and suggestions tailored to the consumer’s needs at the right moment.
– Improved customer retention and loyalty through the provision of improved post-sales experience and warranty services.
– A better experience for staff because they will be equipped with faster and easier access to information including product availability, delivery costs, and add-on services so they can better serve customers and identify potential cross and upselling opportunities.
– Future growth and profitability through the integration of AI into marketing, HR, finance, and legal processes. Currys anticipates that this will increase productivity across core business functions and that AI could be used to create/reveal opportunities to improve omnichannel experiences.
Net Zero?
It’s also hoped that this embrace of AI will help accelerate Currys’ journey towards its target of net zero emissions before 2040 by moving nine existing data centres (including more than 2,000 servers and 200 applications) onto Azure, creating a more energy-efficient infrastructure.
Technological Leap
Alex Baldock, Group CEO of Currys plc said: “AI is the biggest technological leap of our lifetime. Currys exists to help everyone enjoy amazing technology, so as well as bringing the benefits of AI to millions of customers, we’ll do the same to our own business.”
Ralph Haupter, President (EMEA) at Microsoft said of its new deal with Currys: “By deploying the latest cloud and AI technologies, Currys can enhance the shopping experience for millions of customers, both in-store and online, whilst ensuring its 25,000 employees have the insights at their fingertips to unlock value across the entire business.”
Competition
What Currys hasn’t mentioned in its announcement is that the deal will enable it to compete with other major retailers who are already leveraging AI technologies from Microsoft and Accenture. These include, for example, John Lewis Partnership, Argos (part of Sainsbury’s), Tesco, Amazon, and AO World.
Currys has faced mixed financial performance in recent years due to challenges like increased competition, supply chain disruptions, and changing consumer behaviour. Also, Currys has seen a decline in physical store sales but has tried to offset this with growth in its online sales. Efforts to streamline operations and cut costs have been part of their strategy to adapt to market conditions and improve financial stability, and the deal with Microsoft and Accenture could, therefore, be seen as part of this strategy.
Open AI Announces “Omni” Model
Just four days after the Currys/Microsoft/Accenture announcement, OpenAI (a close partner of Microsoft) made another significant AI announcement with the release of its next-generation GPT-4o (“o” for “omni”) model, now available in ChatGPT. OpenAI says it is: “a step towards much more natural human-computer interaction” and that it “accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs.”
OpenAI has also been keen to stress how fast it is (compared with GPT-3.5 and GPT-4), saying: “It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation.”
Omni’s key USPs include advanced contextual understanding, superior problem-solving skills, a broader knowledge base, and (apparently) robust ethical safeguards.
Here’s a brief summary of the key features of GPT-4o:
– Multimodal capabilities. GPT-4o can process and generate text, images, audio, and video, enabling diverse applications like image descriptions, video summaries, and interactive media experiences.
– Improved contextual understanding. It can maintain coherence over long conversations, making it highly effective for virtual assistants and other roles requiring extended interactions.
– Advanced problem-solving skills. OpenAI says GPT-4o offers enhanced reasoning, logic, and problem-solving abilities, suitable for tackling complex mathematical problems, data analysis, and scientific research.
– Real-time adaptability. Omni can adjust responses dynamically based on user feedback and changing contexts, improving personalisation and accuracy.
– A broader knowledge base, because it’s been trained on a larger, more diverse dataset, thereby enabling it to offer accurate and informed responses across a wide range of topics.
– Ethical and safe AI practices (according to OpenAI), which incorporate advanced safety mechanisms to detect and mitigate harmful content, bias, and misinformation.
– Enhanced integration capabilities for easy embedding into various applications, such as chatbots, customer service platforms, and content creation tools.
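For developers, the multimodal capability summarised above surfaces in the shape of the request sent to OpenAI's Chat Completions API, where a single user message can mix text and image parts. A minimal sketch (the image URL is a placeholder; actually sending the request requires the official `openai` SDK and an API key):

```python
# Sketch of a multimodal (text + image) request body for GPT-4o,
# following the content-part shape used by OpenAI's Chat Completions API.

def build_multimodal_request(prompt, image_url):
    """Return a Chat Completions request body combining text and an image."""
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_multimodal_request(
    "Describe what is in this picture.",
    "https://example.com/photo.jpg",  # placeholder image URL
)
```

With the `openai` Python SDK, this body maps onto `client.chat.completions.create(**request)`; the same structure extends to multiple images or longer conversations.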
What Does This Mean For Your Business?
Currys’ collaboration with Microsoft and Accenture to integrate AI technologies into its operations is a strategic move aimed at transforming its business model and enhancing its competitive edge. By leveraging advanced AI solutions, Currys hopes to streamline its technology infrastructure, improve operational efficiency, unlock value, improve productivity, and deliver personalised customer experiences. Currys no doubt hopes that AI could help it turn around some of the performance of recent years and improve how its online business operates as it moves away from physical stores.
For Currys, the benefits are, therefore, many. For example, the more tailored and personalised shopping experiences that AI can bring could enhance customer satisfaction and loyalty. Also, improved post-sales services, facilitated by AI, could further boost customer retention. Additionally, equipping staff with AI-powered tools could help drive sales growth. And in core business functions, integrating AI into HR, finance, and legal processes could increase productivity for Currys and reveal new growth opportunities, particularly in enhancing omnichannel experiences.
It could also be noted that transitioning to a more energy-efficient infrastructure powered by Microsoft’s Azure could help Currys towards its ambition of net zero emissions by 2040, helping the company to present a greener image.
This story also shows how (in the broader business landscape) AI is proving to be a significant advantage across various sectors. Companies using AI are being seen to streamline operations, enhance customer experiences, and make data-driven decisions more effectively. The ability of AI to process vast amounts of data and generate actionable insights is transforming industries from retail and finance to healthcare and logistics, providing a competitive edge to those who adopt it.
The recent launch of OpenAI’s GPT-4o also underscores the rapid advancements in AI technology. With its multimodal capabilities, GPT-4o looks like being a versatile tool for diverse applications. Also, for many ChatGPT users, news that it’s extremely fast will be welcome, and its real-time adaptability, superior problem-solving skills and broad knowledge base may make it a very useful model for the many businesses that are increasingly reliant on generative AI to help with their productivity, innovation, efficiency, and customer engagement.
For OpenAI, the launch of GPT-4o could, of course, strengthen its position in what is already a highly competitive AI industry and could (probably for a brief period) set a new benchmark for competitors.
Tech Insight : What Are ‘Deadbots’?
Following warnings by ethicists at Cambridge University that AI chatbots made to simulate the personalities of deceased loved ones could be used to spam family and friends, we take a look at the subject of so-called “deadbots”.
Griefbots, Deadbots, Postmortem Avatars
The Cambridge study, entitled “Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry” looks at the negative consequences and ethical concerns of adoption of generative AI solutions in what it calls “the digital afterlife industry (DAI)”.
Scenarios
As suggested by the title of the study, a ‘deadbot’ is a digital avatar or AI chatbot designed to simulate the personality and behaviour of a deceased individual. The Cambridge study used simulations and different scenarios to try and understand the effects that these AI clones trained on data about the deceased, known as “deadbots” or “griefbots”, could have on living loved ones if made to interact with them as part of this kind of service.
Who Could Make Deadbots and Why?
The research involved several scenarios designed to highlight the issues around the use of deadbots. For example, the possible negative uses of deadbots highlighted in the study included:
– A subscription app that can create a free AI re-creation of a deceased relative (a grandmother in the study), trained on their data, which can exchange text messages with and contact the living loved one in a similar way to how the deceased used to (via WhatsApp), giving the impression that they are still around to talk to. The study scenario showed how the bot could be made to mimic the deceased grandmother’s “accent and dialect when synthesising her voice, as well as her characteristic syntax and consistent typographical errors when texting”. However, the study showed how this deadbot service could also be made to output messages that include advertisements in the loved one’s voice, thereby causing the loved one distress. Further distress could be caused if the app designers did not fully consider the user’s feelings around deleting the account and the deadbot, for example if no provision is made to allow them to say goodbye to the deadbot in a meaningful way.
– A service allowing a dying relative (e.g. a father and grandfather) to create their own deadbot so that their younger relatives (i.e. children and grandchildren) can get to know them better after they’ve died. The study highlighted negative consequences of this type of service, such as the dying relative not getting consent from the children and grandchildren to be contacted by the deadbot, with the resulting unsolicited notifications, reminders, and updates leaving relatives distressed and feeling as though they were being ‘haunted’ or even ‘stalked’.
Examples of services and apps that already exist and offer to recreate the dead with AI include ‘Project December’, and apps like ‘HereAfter’.
Many Potential Issues
As shown by the examples in the Cambridge research (there were three main scenarios), the use of deadbots raises several ethical, psychological and social concerns. Some of the ways they could be harmful, unethical, or exploitative (along with the negative feelings they might provoke in loved ones) include:
– Consent and autonomy. As noted in the Cambridge study, a primary concern is whether the deceased gave consent for their personality, appearance, or private thoughts to be used in this way. Using someone’s identity without their explicit consent could be seen as a violation of their autonomy and dignity.
– Accuracy and representation. There is a risk that the AI might not accurately represent the deceased’s personality or views, potentially spreading misinformation or creating a false image that could tarnish their memory.
– Commercial exploitation. The study looked at how a deadbot could be used for advertising because the potential for commercial exploitation of a deceased person’s identity is a real concern. Companies could use deadbots for profit, exploiting a person’s image or personality without fair compensation to their estate or consideration of their legacy.
– Contractual issues. For example, relatives may find themselves in a situation where they are powerless to have an AI deadbot simulation suspended, e.g. if their deceased loved one signed a lengthy contract with a digital afterlife service.
Psychological and Social Impacts
The Cambridge study was designed to look at the possible negative aspects of the use of deadbots, an important part of which are the psychological and social impacts on the living. These could include, for example:
– Impeding grief. Interaction with a deadbot might impede the natural grieving process. Instead of coming to terms with the loss, people may cling to the digital semblance of the deceased, potentially leading to prolonged grief or complicated emotional states.
– Over-dependence. There’s also a risk that individuals might become overly dependent on the deadbot for emotional support, isolating themselves from real human interactions and not seeking support from living friends and family.
– Distress and discomfort. As identified in the Cambridge study, aspects of the experience of interacting with a simulation of a deceased loved one can be distressing or unsettling for some people, especially if the interaction feels uncanny or not quite right. For example, the Cambridge study highlighted how relatives may get some initial comfort from the deadbot of a loved one but may become drained by daily interactions that become an “overwhelming emotional weight”.
Potential for Abuse
As identified in the Cambridge study, people may develop strong emotional bonds with deadbot AI simulations, making them particularly vulnerable to manipulation. One of the major risks of the growth of a digital afterlife industry (DAI) is therefore the potential for abuse. For example:
– There could be misuse of the deceased’s private information (privacy violations), especially if sensitive or personal data is incorporated into the deadbot without proper safeguards.
– In the wrong hands, deadbots could be used to harass or emotionally manipulate survivors, for example, by a controlling individual using a deadbot to exert influence beyond the grave.
– There is also the real potential for deadbots to be used in scams or fraudulent activities, impersonating the deceased to deceive the living.
Emotional Reactions from Loved Ones
The psychological and social impacts of using deadbots as part of some kind of service to living loved ones, and/or the misuse of deadbots, could therefore lead to a number of negative emotional reactions. These could include:
– Distress due to the unsettling experience of interacting with a digital replica.
– Anger or frustration over the misuse or misrepresentation of the deceased.
– Sadness from a constant reminder of the loss that might hinder emotional recovery.
– Fear concerning the ethical implications and potential for misuse.
– Confusion over the blurred lines between reality and digital facsimiles.
What Do The Cambridge Researchers Suggest?
The Cambridge study led to several suggestions of ways in which users of this kind of service may be better protected from its negative effects, including:
– Deadbot designers being required to seek consent from “data donors” before they die.
– Products of this kind being required to regularly alert users about the risks and to provide easy opt-out protocols, as well as measures being taken to prevent the disrespectful uses of deadbots.
– The introduction of user-friendly termination methods, e.g. having a “digital funeral” for the deadbot. This would allow the living relative to say goodbye to the deadbot in a meaningful way if the account was to be closed and the deadbot deleted.
– As highlighted by Dr Tomasz Hollanek, one of the study co-authors: “It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but those who will have to interact with the simulations.”
What Does This Mean For Your Business?
The findings and recommendations from the Cambridge study shed light on crucial considerations that organisations involved in the digital afterlife industry (DAI) must address. As developers and businesses providing deadbot services, there is a heightened responsibility to ensure these technologies are developed and used ethically and sensitively. The study’s call for obtaining consent from data donors before their death underscores the need for clear consent mechanisms to be built in. This consent is not just a legal formality but a foundational ethical practice that respects the rights and dignity of individuals.
Also, the suggestion by the Cambridge team to implement regular risk notifications and provide straightforward opt-out options is needed for greater transparency and user control in digital interactions. This could mean incorporating these safeguards into service offerings to enhance user trust, with digital afterlife services companies perhaps positioning themselves as leaders in ethical AI practice. The introduction of a “digital funeral” to these services could also be a respectful and symbolic way to conclude the use of a deadbot, as well as being a sensitive way to meet personal closure needs, e.g. at the end of the contract.
The broader implications of the Cambridge study for the DAI sector include the need to navigate potential psychological impacts and prevent exploitative practices. As Dr Tomasz Hollanek from the study highlighted, the unintentional distress caused by these AI recreations can be profound, suggesting that their design and deployment strategies should really prioritise psychological safety and emotional wellbeing. This should involve designing AI that is not only technically proficient but also emotionally intelligent and sensitive to the nuances of human grief and memory.
Businesses in this field must also consider the long-term implications of their services on societal norms and personal privacy. The risk of commercial exploitation or disrespectful uses of deadbots could lead to public backlash and regulatory scrutiny, which could stifle innovation and growth in the industry. The Cambridge study, therefore, serves as an early but important guidepost for the DAI industry and has highlighted some useful guidelines and recommendations that could contribute to a more ethical and empathetic digital world.
Tech News : OpenAI To Boost Training With Stack Overflow Data
A partnership deal between OpenAI and Stack Overflow (the question-and-answer website for programmers and developers) will see Stack Overflow’s Q&A data used to train and improve AI model performance, potentially benefitting developers who use OpenAI’s products.
Stack Overflow
Stack Overflow is the world’s largest developer community, with more than 59 million questions and answers. OverflowAPI is the subscription-based API service that gives AI companies access to Stack Overflow’s public dataset so they can use it to train and improve their LLMs.
The Partnership
OpenAI says that its new partnership with Stack Overflow, via OverflowAPI access, will give its users and customers the accurate, vetted data foundation that AI tools need to find solutions to their problems quickly. OpenAI says the deal will also mean that validated technical knowledge from Stack Overflow will be added directly to ChatGPT, thereby giving users “easy access to trusted, attributed, accurate, and highly technical knowledge and code backed by the millions of developers that have contributed to the Stack Overflow platform for 15 years.”
What They Both Get
OpenAI says that being able to utilise Stack Overflow’s OverflowAPI product and the Stack Overflow data “will help OpenAI improve its AI models using enhanced content and feedback from the Stack Overflow community and provide attribution to the Stack Overflow community within ChatGPT to foster deeper engagement with content.”
The collaboration will also mean that Stack Overflow can utilise OpenAI models “as part of their development of OverflowAI and work with OpenAI to leverage insights from internal testing to maximize the performance of OpenAI models”.
This could help Stack Overflow to create better products for its own Stack Exchange community.
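OverflowAPI itself is a commercial, subscription product, but Stack Overflow's public Q&A has long been queryable by any developer through the free Stack Exchange API. As a minimal sketch of what programmatic access to that data looks like (the search term and tag are just examples), here is a helper that builds a query URL against the documented `/2.3/search` endpoint:

```python
from urllib.parse import urlencode

API_ROOT = "https://api.stackexchange.com/2.3"

def build_search_url(intitle, tagged=None, sort="votes"):
    """Build a Stack Exchange API /search URL for the Stack Overflow site.

    intitle : text that must appear in the question title
    tagged  : optional list of tags (joined with ';' as the API expects)
    sort    : ranking order, e.g. "votes" or "activity"
    """
    params = {
        "site": "stackoverflow",
        "order": "desc",
        "sort": sort,
        "intitle": intitle,
    }
    if tagged:
        params["tagged"] = ";".join(tagged)
    return f"{API_ROOT}/search?{urlencode(params)}"

url = build_search_url("list comprehension", tagged=["python"])
```

Fetching that URL (e.g. with `urllib.request` or `requests`) returns a JSON page of matching questions, which gives a feel for the kind of community-vetted, attributed content the OpenAI deal taps into at much larger scale.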
Prashanth Chandrasekar, CEO of Stack Overflow, said of the partnership: “Through this industry-leading partnership with OpenAI, we strive to redefine the developer experience, fostering efficiency and collaboration through the power of community, best-in-class data, and AI experiences.”
Not Everyone Is Happy About The Deal
Despite the positive noises by OpenAI and Stack Overflow about the deal, there appears to have been a mini rebellion among Stack Overflow users, with many removing or editing their questions and answers to stop them from being used to train AI. Many users have also highlighted how this appears to be an about-face by Stack Overflow from a long-standing policy of preventing the use of GenAI in the writing or rewording of any questions or answers posted on the site. Also, there have been reports that Stack Overflow’s moderators have been banning the rebellious users from the site and preventing high-popularity posts from being deleted.
What Does This Mean For Your Business?
The strategic partnership between OpenAI and Stack Overflow signifies a pivotal development in the integration of community-sourced knowledge and artificial intelligence. For businesses, this collaboration could herald a new era of enhanced technical solutions, more refined AI tools, and an enriched knowledge base, potentially reshaping the landscape of tech support and development.
For OpenAI, access to Stack Overflow’s vast repository of programming questions and answers through the OverflowAPI should mean a significant upgrade in the quality and relevance of the data used to train its models. This could translate into AI tools that are not only more accurate but also more attuned to the nuanced requirements of developers. Businesses using OpenAI’s products may find that these tools offer more precise and contextually appropriate solutions, thereby significantly reducing the time developers spend troubleshooting and refining code. This efficiency-boost could accelerate project timelines and improve the cost-effectiveness of technical development teams.
Stack Overflow stands to benefit from this partnership by integrating OpenAI’s cutting-edge AI capabilities into its new product offerings, such as OverflowAI. This could enhance the user experience on Stack Overflow’s platforms, making them more intuitive and responsive to user needs. For businesses that rely on Stack Overflow for problem-solving and knowledge sharing, these improvements may lead to quicker resolutions of technical issues, enabling smoother and more continuous workflow.
However, the partnership has not been met with universal acclaim within the Stack Overflow community. The backlash from some users highlights concerns about the ethical use of community-sourced information. This rebellion sheds light on the growing pains associated with adapting user-generated content for AI training purposes without alienating the very community that generates it. For businesses, this underscores the importance of navigating ethical considerations and community relations as they implement AI solutions.
Tech News : Wales Has Put A SOC In It
The UK’s first national security operations centre (SOC), known as CymruSOC, has launched in Wales to protect the country’s local authorities and fire and rescue services from cyber-attacks.
SOC
The Welsh government has announced that the new SOC service will be managed by Cardiff-based firm Socura, with the intention of ensuring key organisations can continue offering critical services without disruption due to cyber-attacks. Also, the SOC service is intended to safeguard the data of the majority of the Welsh population, as well as 60,000 employees across the public sector.
The Issue
Wales’ First Minister, Vaughan Gething, recently outlined the reasons behind the introduction of CymruSOC, saying that the pandemic showed how important the digital side of people’s lives has become. The fact that it is now “central” to the way people in Wales learn, work, access public services, and conduct business (i.e. there’s now a reliance on digital) has also led to a “stark increase in the risk of cyber-attacks which are becoming ever more common and sophisticated.”
24/7 Monitoring
The Socura SOC team will monitor for potential threats such as phishing and ransomware from its 24/7 remote SOC. Also, the Welsh government says that, in conjunction with the National Cyber Security Centre, CymruSOC will share threat intelligence to ensure both are aware of emerging risks.
‘Defend As One’ Approach
First Minister Vaughan Gething has also highlighted how CymruSOC (this new national security operations centre), a first-of-its-kind solution with social partnership at its heart, will “take a ‘defend as one’ approach”. Mr Gething views CymruSOC as being “a vital part” of the Cyber Action Plan for Wales, which was launched only one year ago, and which Mr Gething describes as “making good progress to protect public services and strengthen cyber resilience and preparedness.”
Incidents
Recent incidents which may have helped speed along the setting up of the SOC include a reported hack on the Welsh government’s iShare Connect portal earlier this year, and Harlech Community Council (North Wales) being scammed last November by online fraudsters to the tune of £9,000 (the equivalent of 10 per cent of its annual budget).
A Boost In Defences
Andy Kays, the CEO of Cardiff-based firm Socura, which is managing CymruSOC, has noted that by sharing a SOC and threat intel across all Welsh local authorities, “even the smallest Welsh town will now have the expertise and defences of a large modern enterprise organisation.”
Also, Mr Kays highlighted the importance of boosting the cyber-defences of and protecting the data held by local councils by making the point that a local council is where people “register a birth, apply for schools, housing, and marriage licences” and it is this that makes them “a prized target for financially motivated cybercriminal groups as well as nation state actors seeking to cause disruption to critical infrastructure.”
What Does This Mean For Your Business?
Considering the importance of public sector services such as fire and rescue, plus the fact that the wealth of data and sometimes outdated and underfunded systems of councils and other public sector institutions often make them a softer target for cyber criminals, this is a timely development for Wales. Also, for businesses operating within Wales, this development offers substantial benefits that extend well beyond the immediate protection of public services.
Firstly, the centralised security operations centre, managed by (private) Cardiff-based firm Socura, should help ensure that even the smallest of local councils can enjoy the cyber-defences typically reserved for large enterprises. This is not just a boost for the public sector but also fortifies the security landscape in which Welsh businesses operate. By boosting the cyber-defences of local authorities, businesses that interact with or rely on them for services can expect a more secure and reliable digital environment. This integration of robust cybersecurity measures means that businesses can operate with a greater assurance of continuity, (hopefully) free from the disruptions of potential cyber-attacks on critical public infrastructure.
The ‘defend as one’ approach advocated by CymruSOC emphasises collaborative security, which may be a crucial advantage for businesses. For example, the shared threat intelligence and resources may ensure that emerging cyber threats are identified and mitigated swiftly, not just within the public sector but potentially within the private sector as well.
Also, the focus on safeguarding data across public sector entities could indirectly benefit businesses. With public services handling sensitive information more securely, businesses interacting with these services or handling similar data can align their practices with these enhanced standards, thus improving their overall data protection strategies. This alignment not only helps in compliance with regulatory requirements but also builds trust with customers and partners who are increasingly concerned about data security.
The establishment of CymruSOC, therefore, appears to be a forward-thinking initiative that promises not just to fortify the digital framework of Wales’s public sector, but also to benefit businesses and other entities that interact with it, all of which could help foster growth and innovation in Wales in an increasingly digital business landscape.