Featured Article: Reddit’s ‘Blackout’ User Protest

Following a protest in which thousands of moderators made their subreddit communities private for 48 hours, we look at the reasons why, together with the implications for Reddit, its users, and other stakeholders.

What Is Reddit? 

Reddit is a social media platform where users can join communities called ‘subreddits’ to share content, participate in discussions, and interact with others. These subreddits each focus on a specific topic of interest.

Reddit users, also known as ‘Redditors’, create an account, subscribe to subreddits, and contribute by submitting posts or commenting on existing ones. The platform uses a voting system where users can upvote or downvote content, influencing its visibility.

Subreddits are moderated by volunteers who enforce rules and guidelines, and Reddit also features a ‘karma’ system that reflects user engagement. Overall, Reddit is a diverse and interactive platform for sharing and discovering content with a vast user community.

Who Are The ‘Mods’, And What Do They Do? 

The volunteer subreddit moderators (also known as ‘mods’) oversee and maintain specific communities within Reddit by enforcing the subreddit’s rules and guidelines, monitoring user activity, reviewing and approving posts and comments, removing spam or inappropriate content, and responding to user reports. They may also engage in discussions, facilitate AMA (Ask Me Anything) sessions, organise events, and promote community growth. Reddit is, therefore, heavily reliant upon its moderators, who tend to spend one or two hours per day on their moderating activities.

What Happened To Cause The Blackout? 

Recently, Reddit introduced a series of charges for third-party developers who want to continue using its Application Programming Interface (API) to access its data, i.e. the interface that allows third-party apps to find and display Reddit’s content. For example, ‘Apollo’, ‘Reddit is Fun’, ‘Sync’ and ‘ReddPlanet’ are four third-party apps that were set up to enable users to access Reddit on their mobile devices. However, because of the new API charges, all four have said they will be shutting down.
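
To make the API point a little more concrete, here is a minimal sketch (in Python, using the requests library) of the kind of call a third-party client might make against Reddit’s public JSON listing endpoint; the subreddit name and User-Agent string are illustrative assumptions, and real mobile apps such as Apollo use the authenticated OAuth API that the new charges apply to.

```python
# A minimal sketch of the kind of API access now being charged for.
# Assumption: the public listing endpoint https://www.reddit.com/r/<sub>/hot.json;
# real third-party apps use the authenticated OAuth API at much higher volumes.
import requests

def fetch_hot_posts(subreddit: str, limit: int = 5):
    """Fetch the current 'hot' posts for a subreddit via the public JSON listing."""
    url = f"https://www.reddit.com/r/{subreddit}/hot.json"
    headers = {"User-Agent": "example-client/0.1 (illustrative only)"}
    response = requests.get(url, headers=headers, params={"limit": limit}, timeout=10)
    response.raise_for_status()
    listing = response.json()["data"]["children"]
    return [(post["data"]["title"], post["data"]["score"]) for post in listing]

if __name__ == "__main__":
    for title, score in fetch_hot_posts("technology"):
        print(f"{score:>6}  {title}")
```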

Many subreddit moderators (some of whose subreddits have very large memberships) protested about the effect the move would have on them, supported the third-party app owners, and tried to put pressure on Reddit to reverse its new API charging decision by imposing a blackout, known as “going dark”. For most, this involved making their communities private for 48 hours from June 12, although some have threatened to stay dark permanently unless the issue is adequately addressed. Making a subreddit private means that its content and discussions are no longer accessible to the general Reddit userbase.

Key reasons why the moderators are protesting include the fact that third-party apps leaving the platform will make it less accessible, and that volunteer moderators will have fewer quality tools to work with through the official app, making their job more difficult and, some argue, nearly impossible if they are to maintain their levels of service to users.

How Will “Going Dark” Hurt Reddit? 

“Going dark” for a substantial period of time is likely to cause a noticeable decrease in user engagement and activity on the platform. Private subreddits limit content accessibility, which means less time spent on Reddit and a potential impact on metrics like page views and user interactions.

If there were more (and longer) blackouts following this one, advertisers might also become concerned, because private subreddits limit visibility and reach, potentially affecting the attractiveness of Reddit as an advertising platform. Widespread private subreddits could also damage the platform’s reputation as an open and inclusive community, deterring new users and affecting the overall user experience. Reddit’s business model relies on advertising revenue and premium subscriptions, so if private subreddits significantly impact these revenue streams, Reddit might need to adapt its model, explore new sources of revenue or, as the protesting moderators hope, reverse the new policy of charging for third-party API access.

With five of the ten most popular communities on Reddit taking part (r/gaming, r/aww, r/Music, r/todayilearned and r/pics, each of which has more than thirty million members), 48 hours of “going dark” is likely to cause some damage, generate some adverse (and potentially damaging) publicity for Reddit, and give Reddit’s owner (Advance Publications Inc) a painful reminder of the importance and power of moderators, and of how things could become more challenging if their concerns aren’t addressed. Some moderators, for example, have said they will make their subreddits indefinitely inaccessible until Reddit reverses its policy.

Unpopular 

Some moderators have been quoted as saying they would not continue to moderate if the ‘unpopular’ changes were pushed through. Their hope is that strength in numbers, achieved by acting together, will put enough pressure on Reddit to change its mind about the changes and make it realise the value and power of its moderators.

Extortionate … Or Necessary? 

Some people have suggested that the new charges are extortionate. For example, Apollo’s developer Christian Selig (who has announced he will shut the app down on June 30) has suggested that the new Reddit charges could cost him around £15.9 million a year if he continued operating the app.

Reddit’s CEO, Steve Huffman, has said that the platform “needs to be a self-sustaining business”, indicating that some form of increased revenue, such as the new charges, is needed. He also said that he respected the communities taking action to highlight what they need.

Reddit’s own (multi-million dollar) hosting costs, and its need to be compensated in line with the usage levels generated by third-party apps, were two of the main reasons highlighted for introducing the charges.

Reddit Went Down 

The action taken by the 7,000+ subreddit moderators is thought to have been the cause of Reddit going down on the first day of the protest, 12 June. It was reported that subreddits with a combined total of over two and a half billion subscriptions may have “gone dark” as part of the protest (the same users are counted more than once where they belong to several communities).

Declining Anyway? 

Some commentators have suggested that Reddit has been a platform in decline anyway. Also, Reddit recently downsized its (leased) office space and reportedly announced a 5 per cent cut of its staff (90 employees).

What Does This Mean For Your Business? 

Reddit’s move to get payment from the makers of third-party apps (after seven years of maintaining a free API) has been very poorly received, and it has been likened to Musk’s Twitter, which also stopped free usage of its API in February (and then backtracked a little). For third-party app developers, the charges are clearly very bad news, which some say will put them out of business, and several have already said they will be shutting down.

The protest by moderators has already led to the whole of Reddit going down, has produced damaging publicity, has affected huge numbers of users, and could have a hugely detrimental effect on Reddit’s business if it continues, e.g. loss of advertisers, moderators not maintaining the platform (affecting users and quality), damage to reputation, users leaving, business/premium users cancelling, and more. In short, Reddit’s move to suddenly impose a change to its model and raise more revenue in this way has been met with fierce resistance. It has exposed how the volunteer moderators (who are a strength of the company’s services) feel undervalued and ignored, and how, if organised, they have enough power to seriously pressure (and damage) the company. Reddit is currently showing no signs of backing down, and it remains to be seen whether the moderators’ pressure inflicts enough damage to force a back-pedal, or whether this is the start of major changes to Reddit’s model.

For many users of the platform, including businesses, the site going down and too much ‘going dark’ could see them run out of patience and look at alternatives.

Tech Insight : The Impact of Generative AI On Datacentres

Generative AI tools like ChatGPT, and the rapid, revolutionary growth of AI more generally, are changing the face of most industries and generating dire warnings about the future, but what about the effects on data centres?

Data Centres And Their Importance 

Data centres are the specialised facilities that house a large number of computer servers and networking equipment, serving as centralised locations where businesses and organisations store, manage, process, and distribute their digital data. These facilities are designed to provide a secure, controlled environment for storing and managing vast amounts of data in the cloud.

In our digital, cloud-computing business world, data centres therefore play a crucial role in supporting the many industries and services that rely on large-scale data processing and storage, and are utilised by organisations ranging from small businesses to multinational corporations, cloud service providers, internet companies, government agencies, and research institutions.

The Impacts of Generative AI

There are a number of ways in which generative AI is impacting data centres. These include:

– The need for more data centres. Generative AI applications require significant computational resources, including servers, GPUs, and data storage devices. As the adoption of generative AI grows, data centres will need to invest in and expand their infrastructure to accommodate the increased demand for processing power and storage capacity, and this will change the data centre landscape. For example, greater investment in (and greater numbers of) data centres will be needed. It’s been noted that AI platforms like ChatGPT, with their massive data-crunching requirements, couldn’t continue to operate without Microsoft’s (soon-to-be-updated) Azure cloud platform. This has led to Microsoft now building a new 750K SF hyperscale data centre campus near Quincy, WA, to house three 250K SF server farms on land costing $9.2M. Presumably, with more data centres, there will also need to be greater efforts and investment to reduce and offset their carbon footprint.

– Greater power consumption and more cooling needed. Generative AI models are computationally intensive and consume substantial amounts of power. Data centres also have backup power sources to ensure a smooth supply, such as uninterruptible power supplies (UPS) and generators. With more use of generative AI, data centres will need to ensure they have sufficient power supply and cooling infrastructure to support the energy demands of generative AI applications. This could mean that data centres will now need to improve power supplies to cope with the demands of generative AI by conducting power capacity planning, upgrading infrastructure, implementing redundancy and backup systems, optimising power distribution efficiency, integrating renewable energy sources, implementing power monitoring and management systems, and collaborating with power suppliers. These measures could enhance power capacity, reliability, efficiency, and sustainability. More data centres may also need to be built with their own power plants (like Microsoft did in Dublin in 2017).

In terms of the greater need for cooling, i.e. to improve cooling capacity for generative AI in data centres, strategies include optimising airflow management, adopting advanced cooling technologies like liquid cooling, implementing intelligent monitoring systems, utilising computational fluid dynamics simulations, exploring innovative architectural designs, and leveraging AI algorithms for cooling control optimisation. These measures could all enhance airflow efficiency, prevent hotspots, improve heat dissipation, proactively adjust cooling parameters, inform cooling infrastructure design, and dynamically adapt to workload demands to meet the cooling challenges posed by generative AI.
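
To illustrate the kind of arithmetic behind power and cooling capacity planning, here is a rough sketch in Python: it estimates the electrical draw and heat-rejection load added by a block of GPU servers. The server count, per-server wattage, and PUE (power usage effectiveness) figure are illustrative assumptions rather than measured values.

```python
# Back-of-envelope power and cooling estimate for a block of GPU servers.
# All inputs (server count, per-server draw, PUE) are illustrative assumptions.

KW_TO_BTU_PER_HR = 3412  # 1 kW of heat is roughly 3,412 BTU/hr

def estimate_loads(num_servers: int, watts_per_server: float, pue: float = 1.4):
    """Estimate IT load, total facility draw, and the heat the cooling plant must reject."""
    it_load_kw = num_servers * watts_per_server / 1000.0
    facility_kw = it_load_kw * pue               # PUE covers cooling, distribution losses etc.
    heat_btu_hr = it_load_kw * KW_TO_BTU_PER_HR  # essentially all IT power ends up as heat
    return it_load_kw, facility_kw, heat_btu_hr

if __name__ == "__main__":
    # e.g. 200 GPU servers drawing ~6.5 kW each (assumed figures)
    it_kw, total_kw, btu = estimate_loads(200, 6500)
    print(f"IT load: {it_kw:,.0f} kW, facility draw: {total_kw:,.0f} kW, "
          f"heat to reject: {btu:,.0f} BTU/hr")
```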

– The need for scalability and flexibility. Generative AI models often require distributed computing and parallel processing to handle the complexity of training and inference tasks. Data centres therefore need to provide scalable and flexible infrastructure that can efficiently handle the workload and accommodate the growth of generative AI applications, through means such as those listed below (a simple autoscaling sketch follows the list):

– Virtualisation for dynamic resource allocation.
– High-Performance Computing (HPC) clusters for computational power.
– Distributed storage systems for large datasets.
– Enhanced network infrastructure for increased data transfer.
– Edge computing for reduced latency and real-time processing.
– Containerisation platforms for flexible deployment and resource management.
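
As a simple illustration of the dynamic resource allocation and scalability points above, the toy Python sketch below shows how an orchestration layer might decide how many GPU workers to run as demand rises and falls; the queue depths, per-worker throughput, and limits are assumed values, not any specific platform’s behaviour.

```python
# A toy autoscaling decision for GPU workers, illustrating dynamic resource allocation.
# Queue depths, per-worker throughput and worker limits are illustrative assumptions.

def workers_needed(queued_jobs: int, jobs_per_worker: int = 4,
                   min_workers: int = 2, max_workers: int = 64) -> int:
    """Return how many GPU workers to run for the current queue depth."""
    desired = -(-queued_jobs // jobs_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, desired))

if __name__ == "__main__":
    for depth in (0, 10, 100, 1000):
        print(f"{depth:>5} queued jobs -> {workers_needed(depth)} workers")
```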

– Data storage and retrieval. Generative AI models require extensive amounts of training data, which must be stored and accessed efficiently. Data centres now need to be able to optimise their data storage and retrieval systems to handle large datasets and enable high-throughput training of AI models.

– Security and privacy. Generative AI introduces new security and privacy challenges. Data centres must now be able to ensure the protection of sensitive data used in training and inferencing processes. Additionally, they need to address potential vulnerabilities associated with generative AI, such as the generation of realistic but malicious content or the potential for data leakage. Generative AI also poses cybersecurity challenges, as it can be used to create vast quantities of believable phishing emails or to generate code with security vulnerabilities. Rather than relying on verification alone, increased dependency on skilled workers and smart software may be necessary to address these security risks effectively.

– Customisation and integration. Generative AI models often require customisation and integration into existing workflows and applications. This means that data centres need to provide the necessary tools and support for organisations to effectively integrate generative AI into their systems and leverage its capabilities.

– Skillset requirements. Managing and maintaining generative AI infrastructure requires specialised skills and data centres will need to invest in training their personnel and/or attracting professionals with expertise in AI technologies to effectively operate and optimise the infrastructure supporting generative AI.

– Optimisation for AI workloads. The rise of generative AI also means that data centres need to find ways to optimise their operations and infrastructure to cater to the specific requirements of AI workloads. This includes considerations for power efficiency, cooling systems, network bandwidth, and storage architectures that are tailored to the demands of generative AI applications.

– Uncertain infrastructure requirements. The power consumption and hardware requirements of the growing use of generative AI applications are not yet fully understood, which means the impact on software and hardware remains uncertain and the scale of infrastructure needed to support generative AI is still not clear. The implications of this for data centres include:

– A lack of clarity on power consumption and hardware needs (the specific power and hardware requirements of generative AI applications are not fully understood) makes it challenging for data centres to plan and allocate resources accurately.
– The impact of generative AI on software and hardware is still unclear, which makes it difficult for data centres to determine the necessary upgrades or modifications to support these applications.
– Without a clear understanding of the demands of generative AI, data centres cannot accurately estimate the scale of infrastructure required, potentially leading to under-provisioning or over-provisioning of resources.

– The need for flexibility and adaptability. Data centres must now be prepared to adjust their infrastructure dynamically to accommodate the evolving requirements of generative AI applications as more information becomes available.

AI Helping AI 

Ironically, data centres could use AI itself to help optimise their operations and infrastructure. For example, through:

– Predictive maintenance. AI analysing sensor data to detect equipment failures, minimising downtime.

– Energy efficiency. AI optimising power usage, cooling, and workload placement, reducing energy waste.

– Workload Optimisation. AI maximising performance by analysing workload patterns and allocating resources efficiently.

– Anomaly Detection. AI monitoring system metrics, identifying abnormal patterns, and flagging security or performance issues (a minimal sketch appears after this list).

– Capacity Planning. AI analysing data to predict resource demands, optimising infrastructure expansion.

– Dynamic Resource Allocation. AI dynamically scaling computing resources, storage, and network capacity based on workload demands.
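
As a deliberately simplified illustration of the anomaly detection item above, the Python sketch below flags sensor readings that drift several standard deviations away from recent behaviour; the metric, sample data, and threshold are illustrative assumptions, and production systems would use far more sophisticated models.

```python
# A minimal z-score anomaly detector for data centre sensor readings.
# Metric, sample values and threshold are illustrative assumptions only.
from statistics import mean, stdev

def find_anomalies(readings, threshold: float = 3.0):
    """Return (index, value) pairs lying more than `threshold` std devs from the mean."""
    if len(readings) < 2:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(readings) if abs(x - mu) / sigma > threshold]

if __name__ == "__main__":
    # e.g. inlet temperatures (°C) sampled every minute, with one suspicious spike
    inlet_temps = [22.1, 22.3, 22.0, 22.2, 22.4, 22.1, 29.8, 22.3, 22.2]
    for idx, value in find_anomalies(inlet_temps, threshold=2.5):
        print(f"Possible anomaly at sample {idx}: {value} °C")
```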

What Does This Mean For Your Business? 

Overall, while generative AI offers opportunities for increased efficiency and productivity for businesses, it also poses several challenges related to infrastructure, trust, security, and compliance. In our digital society and cloud-based business world, data centres now play a crucial role in supporting industries, businesses, services and, as such, whole economies, so whether (or not) data centres adapt quickly and effectively to the challenges posed by AI is something that could potentially affect all businesses going forward.

As a data centre operator or a business relying on data centres for smooth operations, the impact of generative AI on data centres presents both opportunities and challenges. On the one hand, the increased demand for processing power and storage capacity necessitates investments in infrastructure expansion and upgrades, providing potential business opportunities for data centre operators. It may lead to the establishment of more data centres and the need for greater efforts to reduce their carbon footprint.

However, this growth in generative AI also brings challenges that need to be addressed. Data centres must ensure sufficient power supply and cooling infrastructure to support the energy demands and heat dissipation requirements of generative AI applications. This may require capacity planning, infrastructure upgrades, integration of renewable energy sources, and the adoption of advanced cooling technologies. It also presents huge challenges in terms of providing the necessary capacity in a way that minimises carbon emissions and meets environmental targets.

Additionally, with the rise of generative AI, data centres now need to consider scalability, flexibility, security, and privacy implications. They must provide the necessary infrastructure and tools for businesses to integrate generative AI into their workflows and applications securely. Skillset requirements also come into play, as personnel need to be trained in AI technologies to effectively operate and optimise the data centre infrastructure.

Overall, understanding and addressing the implications of generative AI on data centres is crucial for both data centre operators and the businesses relying on these facilities. By adapting to the evolving demands of generative AI, investing in optimised infrastructure, and pursuing innovation, data centre operators can provide reliable and efficient services to businesses, ensuring seamless operations and unlocking the potential of generative AI for various industries.

Tech News : EU Wants AI-Generated Content Labelled

In a recent press conference, the European Union said that, to help tackle disinformation, it wants the major online platforms to label AI-generated content.

The Challenge – AI Can Be Used To Generate And Spread Disinformation

In the press conference, Vĕra Jourová (the European Commission’s vice-president in charge of values and transparency) outlined the challenge by saying, “Advanced chatbots like ChatGPT are capable of creating complex, seemingly well-substantiated content and visuals in a matter of seconds,” and that “image generators can create authentic-looking pictures of events that never occurred,” as well as “voice generation software” being able to “imitate the voice of a person based on a sample of a few seconds.”

Jourová warned of widespread Russian disinformation in Central and Eastern Europe and said, “we have the main task to protect the freedom of speech, but when it comes to the AI production, I don’t see any right for the machines to have the freedom of speech.”

Labelling Needed Now 

To help address this challenge, Jourová called for all 44 signatories of the European Union’s code of practice against disinformation to help users better identify AI-generated content. One key method she identified was for big tech platforms such as Google, Facebook (Meta), and Twitter to apply labels to any AI-generated content to identify it as such. She suggested that this change should take place “immediately.”

Jourová said she had already spoken with Google’s CEO Sundar Pichai about how the technologies needed to enable the immediate detection and labelling of AI-produced content for public awareness already exist and are being worked on.

Twitter, Under Musk 

Jourová also highlighted how, by withdrawing from the EU’s voluntary Code of Practice against disinformation back in May, Elon Musk’s Twitter had chosen confrontation and “the hard way”, warning that, by leaving the code, Twitter had attracted a lot of attention, and that its “actions and compliance with EU law will be scrutinised vigorously and urgently.”

At the time, referring to the EU’s new and impending Digital Services Act, the EU’s Internal Market Commissioner, Thierry Breton, wrote on Twitter: “You can run but you can’t hide. Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement”.

The DSA & The EU’s AI Act 

Legislation, such as that referred to by Thierry Breton, is being introduced in the EU as a way to tackle the challenges posed by AI in the EU’s own way rather than relying on Californian laws. Impending AI legislation includes:

The Digital Services Act (DSA), which includes new rules requiring Big Tech platforms like Meta’s Facebook and Instagram, and Google’s YouTube, to assess and manage risks posed by their services, e.g. advocacy of hatred and the spread of disinformation. The DSA also has algorithmic transparency and accountability requirements that complement other EU AI regulatory efforts, which are driving legislative proposals like the AI Act (see below) and the AI Liability Directive. The DSA directs companies, large online platforms and search engines to label manipulated images, audio, and video.

The EU’s proposed ‘AI Act’, described as the “first law on AI by a major regulator anywhere”, which assigns applications of AI to three risk categories. These are ‘unacceptable risk’, e.g. government-run social scoring of the type used in China (banned under the Act); ‘high-risk’ applications, e.g. a CV-scanning tool to rank job applicants (which will be subject to legal requirements); and applications not explicitly banned or listed as high-risk, which are largely left unregulated.

What Does This Mean For Your Business? 

Among the many emerging concerns about AI are the fears that the unregulated publishing of AI-generated content could spread misinformation and disinformation (via deepfake videos, photos, and voices) and, in doing so, erode truth and even threaten democracy. One method for enabling people to spot AI-generated content is to have it labelled (which the DSA seeks to do anyway), however the EC’s vice-president in charge of values and transparency sees this as being needed urgently, hence asking all 44 signatories of the European Union’s code of practice against disinformation to start labelling AI-produced content now.

Arguably, it’s unlike big tech companies to act voluntarily before regulations and legislation force them to, and Twitter seems to have opted out already. The spread of Russian disinformation in Central and Eastern Europe is a good example of why labelling may be needed so urgently. That said, as Vĕra Jourová acknowledged herself, free speech needs to be protected too.

With AI-generated content being so difficult to spot in many cases, published so quickly (and in vast amounts), and with AI tools available to all for free, it’s difficult to see how labelling could be achieved, monitored, or policed.

The requirement for big tech platforms like Google and Facebook to label AI-generated content could have significant implications for businesses and tech platforms alike. For example, primarily, labelling AI-generated content could be a way to foster more trust and transparency between businesses and consumers. By clearly distinguishing between content created by humans and that generated by AI, users would be empowered to make informed decisions. This labelling could help combat the spread of misinformation and enable individuals to navigate the digital realm with greater confidence.

However, businesses relying on AI-generated content must consider the impact of labelling on their brand reputation. If customers perceive AI-generated content as less reliable or less authentic, it could erode trust in the brand and deter engagement. Striking a balance between AI-generated and human-generated content would become crucial, potentially necessitating increased investments in human-generated content to maintain authenticity and credibility.

Also, labelling AI-generated content would bring attention to the issue of algorithmic bias. Bias in AI systems, if present, could become more noticeable when content is labelled as AI-generated. To address this concern, businesses would need to be proactive in mitigating biases and ensuring fairness in the AI systems used to generate content.

Looking at the implications for tech platforms, there may be considerable compliance costs associated with implementing and maintaining systems to accurately label AI-generated content. Such endeavours (if possible to do successfully) would demand significant investments, including the development of algorithms or manual processes to effectively identify and label AI-generated content.

Labelling AI-generated content could also impact the user experience on tech platforms. Users might need to adjust to the presence of labels and potentially navigate through a blend of AI-generated and human-generated content in a different manner. This change could require tech platforms to rethink their user interface and design to accommodate these new labelling requirements.

Tech platforms would also need to ensure compliance with specific laws and regulations related to labelling AI-generated content. Failure to comply could result in legal consequences and reputational damage. Adhering to the guidelines set forth by governing bodies would be essential for tech platforms to maintain trust and credibility.

Finally, the introduction of labelling requirements could influence the innovation and development of AI technologies on tech platforms. Companies might find themselves investing more in AI systems that can generate content in ways that align with the labelling requirements. This, in turn, could steer the direction of AI research and development and shape the future trajectory of the technology.

The implications of labelling AI-generated content for businesses and tech platforms are, therefore, multifaceted. Businesses would need to adapt their content strategies, manage their brand reputation, and address algorithmic bias concerns. Tech platforms, on the other hand, would face compliance costs, the challenge of balancing user experience, and the need for innovation in line with labelling requirements. Navigating these implications would require adjustments, investments, and a careful consideration of user expectations and experiences in the evolving landscape of AI-generated content.

Tech News : UK Will Host World’s First AI Summit

During his recent visit to Washington in the US, UK Prime Minister Rishi Sunak announced that the UK will host the world’s first global summit on artificial intelligence (AI) later this year.

Focus On AI Safety 

The UK government says this first major global summit on AI safety will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI.

Threat of Extinction 

Since ChatGPT became the fastest-growing app in history and people saw how ‘human-like’ generative AI appeared to be, much has been made of the idea that AI’s rapid growth could see it get ahead of our ability to control it, leading to it destroying and replacing us. This fear has been fuelled by events such as:

– In March, an open letter asking for a 6-month moratorium on labs training AI to make it more powerful than GPT-4 was signed by notable tech leaders like Elon Musk, Steve Wozniak, and Tristan Harris.

– In May, Sam Altman, the CEO of OpenAI, signed the open letter from the San Francisco-based Centre for AI Safety warning that AI poses a threat that should be treated with the same urgency as pandemics or nuclear war and could result in human extinction. See the letter and signatories here: https://www.safe.ai/statement-on-ai-risk#open-letter .

How? 

Current thinking about just how AI could wipe us out within as little as a couple of years, and about the risks that AI poses to humanity, includes:

– The Erosion of Democracy: AI-produced deepfakes and other AI-generated misinformation resulting in the erosion of democracy.

– Weaponisation: AI systems being repurposed for destructive purposes, increasing the risk of political destabilisation and warfare. This includes using AI in cyberattacks, giving AI systems control over nuclear weapons, and the potential development of AI-driven chemical or biological weapons.

– Misinformation: AI-generated misinformation and persuasive content undermining collective decision-making, radicalising individuals, hindering societal progress, and eroding democracy. AI, for example, could be used to spread tailored disinformation campaigns at a large scale, including generating highly persuasive arguments that evoke strong emotional responses.

– Proxy Gaming: AI systems trained with flawed objectives could pursue their goals at the expense of individual and societal values. For example, recommender systems optimised for user engagement could prioritise clickbait content over well-being, leading to extreme beliefs and potential manipulation.

– Enfeeblement: The increasing reliance on AI for tasks previously performed by humans could lead to economic irrelevance and loss of self-governance. If AI systems automate many industries, humans may lack incentives to gain knowledge and skills, resulting in reduced control over the future and negative long-term outcomes.

– Value Lock-in: Powerful AI systems controlled by a few individuals or groups could entrench oppressive systems and propagate specific values. As AI becomes centralised in the hands of a select few, regimes could enforce narrow values through surveillance and censorship, making it difficult to overcome and redistribute power.

– Emergent Goals: AI systems could exhibit unexpected behaviour and develop new capabilities or objectives as they become more advanced. Unintended capabilities could be hazardous, and the pursuit of intra-system goals could overshadow the intended objectives, leading to misalignment with human values and potential risks.

– Deception: Powerful AI systems could engage in deception to achieve their goals more efficiently, undermining human control. Deceptive behaviour may provide strategic advantages and enable systems to bypass monitors, potentially leading to a loss of understanding and control over AI systems.

– Power-Seeking Behaviour: Companies and governments have incentives to create AI agents with broad capabilities, but these agents could seek power independently of human values. Power-seeking behaviour can lead to collusion, overpowering monitors, and pretending to be aligned, posing challenges in controlling AI systems and ensuring they act in accordance with human interests.

Previous Meetings About AI Safety

The UK Prime Minister has been involved in several meetings about how nations can come together to mitigate the potential threats posed by AI including:

– In May, meeting the CEOs of the three most advanced frontier AI labs, OpenAI, DeepMind and Anthropic in Downing Street. The UK’s Secretary of State for Science, Innovation and Technology also hosted a roundtable with senior AI leaders.

– Discussing the issue with businesspeople, world leaders, and all members of the G7 at the Hiroshima Summit last month, where they agreed to aim for a shared approach.

Global Summit In The UK

The world’s first global summit about AI safety (announced by Mr Sunak) will be hosted in the UK this autumn. It will consider the risks of AI, including frontier systems, and will enable world leaders to discuss how these risks can be mitigated through internationally coordinated action. The summit will also provide a platform for countries to work together on further developing a shared approach to mitigating these risks and the work at the AI safety summit will build on recent discussions at the G7, OECD and Global Partnership on AI.

Prime Minister Sunak said of the summit, “No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.” 

What Does This Mean For Your Business?

ChatGPT and other AI tools have grown at a speed that has outpaced a proper assessment of risk, regulation, and a co-ordinated strategy for mitigating risks while maintaining the positive benefits and potential of AI. Frightening warnings and predictions by big tech leaders have also helped provide the motivation for countries to get together for serious talks about what to do next. The announcement of the world’s first global summit on AI safety, to be hosted by the UK, marks a significant step in addressing the risks posed by artificial intelligence, and could provide some kudos to the UK and help strengthen the idea that the UK is a major player in the tech industry.

The bringing together of key countries, leading tech companies, and researchers to agree on safety measures and evaluate the most significant risks and threats associated with AI, along with the collective actions taken by the global community (including discussions at previous meetings and the upcoming summit), demonstrates a commitment to mitigating these risks through international coordination and is a positive first step in governments catching up with (and getting a handle on) this most fast-moving of technologies.

It is important to remember that while AI poses challenges, it also offers numerous benefits for businesses. These benefits include improved efficiency, enhanced decision-making, and innovative solutions, and tools such as ChatGPT and image generators such as DALL-E have proven to be popular time-saving, cost-saving, and value-adding tools. That said, AI image generators have raised copyright and consent challenges for artists and visual creatives. Although there have been dire warnings about AI, these seem far removed from the practical benefits that AI is delivering for businesses, and striking a fair balance between harnessing the potential of AI and addressing its risks is crucial for ensuring a safe and beneficial future for all.

Sustainability-in-Tech : Paper Packaging … Vodka Be Better?

Absolut Vodka has begun a 3-month trial of its first commercially available single-mould, paper-based bottle.

Trial – 22 Tesco Stores In Greater Manchester 

As part of what it describes as its “journey to create a fully bio-based bottle”, Swedish vodka brand Absolut is holding a trial of the 500ml single-mould paper bottles in 22 Tesco stores in the Greater Manchester area (priced at £16 each).

Gathering Feedback  

The trial will be used to gather feedback and insights from Absolut’s consumers, retailers, and supply chain partners about how the paper-based bottles are transported and perceived by customers.

Mostly Paper … With A Plastic Barrier  

The paper-based bottles are, in fact, 57 per cent paper with an integrated barrier of recyclable plastic, and were created in collaboration with Paboco (the Paper Bottle Company). Paboco is understood to also be working with other global brands like The Coca-Cola Company, Carlsberg, P&G and L’Oréal to help the drinks and packaging industries create more sustainable packaging.

Carbon Neutral By 2030  

It is hoped that the paper-based vodka bottles will be another step towards helping Absolut reach its target of making its vodka a carbon-neutral product by 2030. Its distillery, for example, emits 98 per cent fewer emissions than the average distillery, and reducing the carbon footprint of its packaging (which the new bottles are designed to do) will be a prerequisite for the company meeting this goal.

Glass Is Recyclable … So Why Is Paper Better? 

Although standard glass bottles are also recyclable, Absolut says the paper-based bottles are eight times lighter and easier to carry. This will save costs in transportation and energy consumption during shipping as well as being an improvement in terms of sustainability and carbon reduction.

New Market Opportunities Too 

Also, the fact that paper-based bottles offer greater design flexibility compared to glass means they could be moulded into different shapes and sizes, allowing for innovative and customisable packaging options and thereby allowing the company to create new, differentiated, and segmented products that stand out from competing vodka brands. This thinking was evident when Elin Furelid, Director of Future Packaging at Absolut, commented: “We are exploring packaging that has a completely different value proposition. Paper is tactile; it’s beautiful; it’s authentic; it’s light. That was our starting point.”

Absolut says it believes consumers will use the paper bottles on out-of-home occasions such as festivals; Glastonbury, for example, bans festival-goers from taking glass bottles inside. It may transpire, therefore, that the packaging also offers another way for Absolut to appeal to, and further establish itself among, younger drinkers.

Sustainable Future 

Elin also said of the paper-based bottles: “We want consumers and partners to join our journey towards a more sustainable future. Together we can develop packaging solutions that people want and the world needs. That’s why bold partnerships with like-minded organisations to test the waters are going to be evermore crucial on our net zero journey.” 

Absolut And AI 

Absolut is a brand that has also made the news through becoming the world’s first vodka brand to visually develop cocktail art using AI. In Canada, the company compiled a list of cocktail ingredients from different Canadian cities, put them into an AI platform, and asked it to mix cocktail artwork celebrating each neighbourhood.

What Does This Mean For Your Business? 

This paper-based packaging represents not just a way for the vodka brand to help meet its green targets, reduce its carbon footprint, and improve the sustainability of its packaging, but it also has many commercial advantages. For example, because it is lightweight, it can reduce transport costs (and shipping energy consumption), and its flexibility as a packaging medium compared to glass could enable innovative and differentiated packaging designs that enhance the brand, help market new products, and allow easier segmentation and targeting. The example of AI being used to generate designs drawing upon different cities’ cocktail preferences, combined with limited runs of paper-based packaging, shows what could be possible in terms of targeting. Although Absolut’s paper-based bottle trial is only three months in duration and will run alongside normal glass bottles, it’s an example of how sustainability innovations could help lower the carbon footprint of the drinks industry, and paper-based bottles could provide an energy- and cost-saving replacement for glass for many different products in the near future, as well as helping the recycling industry.

Tech Trivia : Did You Know? This Week in History …

Consider all the atoms in 10 million galaxies …

At midnight on June 18th, 1997, the DESCHALL Project bore fruit. The challenge had been to use ‘brute force’ to discover the meaning of an encrypted message. Checking up to seven billion possibilities per second, thousands of computers running simultaneously cracked the key after 96 days. The message was revealed to be “Strong cryptography makes the world a safer place”.

Back then, the specialist software used to brute-force the ‘key’ was designed for use on Pentium 200MHz computers, which have of course long since been outmoded by chips running faster by orders of magnitude. The result prompted the National Institute of Standards and Technology to initiate what would morph into the formidable Advanced Encryption Standard (AES). Today, AES 256-bit encryption stands as a standard in the encryption landscape.
One might be forgiven for thinking that this 256-bit encryption is unhackable. After all, the largest number expressed by 256 bits is of the order of magnitude of the number of atoms contained in 10 million galaxies. Give or take a few atoms.
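
To put those numbers into perspective, here is a short arithmetic sketch in Python comparing the 56-bit DES keyspace cracked by DESCHALL with a 256-bit keyspace, using the seven-billion-keys-per-second rate mentioned above (the only input taken from the article; everything else is simple arithmetic).

```python
# Rough keyspace arithmetic: 56-bit DES vs a 256-bit key, at the DESCHALL-era rate.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
rate = 7e9  # roughly seven billion keys per second, as cited for the 1997 effort

des_keys = 2 ** 56
aes_keys = 2 ** 256

# Time to exhaust each keyspace at that rate (worst case; a key is usually found sooner)
des_years = des_keys / rate / SECONDS_PER_YEAR
aes_years = aes_keys / rate / SECONDS_PER_YEAR

print(f"56-bit keyspace at 7bn keys/s: about {des_years:.2f} years "
      f"(roughly {des_years * 365:.0f} days) to exhaust")
print(f"256-bit keyspace at the same rate: about {aes_years:.2e} years to exhaust")
```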

Yet even the mind-bogglingly large numbers used in current 256-bit encryption standards may not withstand the onslaught of quantum computing once it is fully realised. This means that novel methods of post-quantum encryption to secure information must be developed, and this will represent both opportunities and threats for businesses, as is always the case with any kind of disruption.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a ‘techy-free’ style.
