Tech News : EU Wants AI-Generated Content Labelled
In a recent press conference, the European Union said that, to help tackle disinformation, it wants the major online platforms to label AI-generated content.
The Challenge – AI Can Be Used To Generate And Spread Disinformation
In the press conference, Vĕra Jourová (the vice-president in charge of values and transparency with the European Commission) outlined the challenge by saying, “Advanced chatbots like ChatGPT are capable of creating complex, seemingly well-substantiated content and visuals in a matter of seconds,” and that “image generators can create authentic-looking pictures of events that never occurred,” as well as “voice generation software” being able to “imitate the voice of a person based on a sample of a few seconds.”
Jourová warned of widespread Russian disinformation in Central and Eastern Europe and said, “we have the main task to protect the freedom of speech, but when it comes to the AI production, I don’t see any right for the machines to have the freedom of speech.”
Labelling Needed Now
To help address this challenge, Jourová called for all 44 signatories of the European Union’s code of practice against disinformation to help users better identify AI-generated content. One key method she identified was for big tech platforms such as Google, Facebook (Meta), and Twitter to apply labels to any AI-generated content to identify it as such. She suggested that this change should take place “immediately.”
Jourová said she had already spoken with Google’s CEO Sundar Pichai about how the technologies exist and are being worked on to enable the immediate detection and labelling of AI-produced content for public awareness.
Twitter, Under Musk
Jourová also highlighted how, by withdrawing from the EU’s voluntary Code of Practice against disinformation back in May, Elon Musk’s Twitter had chosen confrontation and “the hard way”, warning that, by leaving the code, Twitter had attracted a lot of attention, and that “its actions and compliance with EU law will be scrutinised vigorously and urgently.”
At the time, referring to the EU’s new and impending Digital Services Act, the EU’s Internal Market Commissioner, Thierry Breton, wrote on Twitter: “You can run but you can’t hide. Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement”.
The DSA & The EU’s AI Act
Legislation, such as that referred to by Thierry Breton, is being introduced so the EU can tackle the challenges posed by AI in its own way rather than relying on Californian laws. Impending AI legislation includes:
The Digital Services Act (DSA), which includes new rules requiring Big Tech platforms like Meta’s Facebook and Instagram, and Google’s YouTube, to assess and manage risks posed by their services, e.g. advocacy of hatred and the spread of disinformation. The DSA also has algorithmic transparency and accountability requirements to complement other EU AI regulatory efforts, which are driving legislative proposals like the AI Act (see below) and the AI Liability Directive. The DSA directs companies, large online platforms and search engines to label manipulated images, audio, and video.
The EU’s proposed ‘AI Act’, described as the “first law on AI by a major regulator anywhere”, which assigns applications of AI to three risk categories. These categories are ‘unacceptable risk’, e.g. government-run social scoring of the type used in China (banned under the Act); ‘high-risk’ applications, e.g. a CV-scanning tool to rank job applicants (which will be subject to legal requirements); plus those applications not explicitly banned or listed as high-risk, which are largely left unregulated.
What Does This Mean For Your Business?
Among the many emerging concerns about AI are fears that the unregulated publishing of AI-generated content could spread misinformation and disinformation (via deepfake videos, photos, and voices) and, in doing so, erode truth and even threaten democracy. One method for enabling people to spot AI-generated content is to have it labelled (which the DSA seeks to do anyway). However, the European Commission’s vice-president in charge of values and transparency sees this as being needed urgently, hence asking all 44 signatories of the European Union’s code of practice against disinformation to start labelling AI-produced content now.
Arguably, it’s unlike big tech companies to act voluntarily before regulations and legislation force them to and Twitter seems to have opted out already. The spread of Russian disinformation in Central and Eastern Europe is a good example of why labelling may be needed so urgently. That said, as Vĕra Jourová acknowledged herself, free speech needs to be protected too.
With AI-generated content being so difficult to spot in many cases, published so quickly and in such vast amounts, and with AI tools freely available to all, it’s difficult to see how labelling could be achieved, monitored, or policed.
The requirement for big tech platforms like Google and Facebook to label AI-generated content could have significant implications for businesses and tech platforms alike. Primarily, labelling AI-generated content could be a way to foster more trust and transparency between businesses and consumers. By clearly distinguishing between content created by humans and that generated by AI, users would be empowered to make informed decisions. This labelling could help combat the spread of misinformation and enable individuals to navigate the digital realm with greater confidence.
However, businesses relying on AI-generated content must consider the impact of labelling on their brand reputation. If customers perceive AI-generated content as less reliable or less authentic, it could erode trust in the brand and deter engagement. Striking a balance between AI-generated and human-generated content would become crucial, potentially necessitating increased investments in human-generated content to maintain authenticity and credibility.
Also, labelling AI-generated content would bring attention to the issue of algorithmic bias. Bias in AI systems, if present, could become more noticeable when content is labelled as AI-generated. To address this concern, businesses would need to be proactive in mitigating biases and ensuring fairness in the AI systems used to generate content.
Looking at the implications for tech platforms, there may be considerable compliance costs associated with implementing and maintaining systems to accurately label AI-generated content. Such endeavours (if possible to do successfully) would demand significant investments, including the development of algorithms or manual processes to effectively identify and label AI-generated content.
Labelling AI-generated content could also impact the user experience on tech platforms. Users might need to adjust to the presence of labels and potentially navigate through a blend of AI-generated and human-generated content in a different manner. This change could require tech platforms to rethink their user interface and design to accommodate these new labelling requirements.
Tech platforms would also need to ensure compliance with specific laws and regulations related to labelling AI-generated content. Failure to comply could result in legal consequences and reputational damage. Adhering to the guidelines set forth by governing bodies would be essential for tech platforms to maintain trust and credibility.
Finally, the introduction of labelling requirements could influence the innovation and development of AI technologies on tech platforms. Companies might find themselves investing more in AI systems that can generate content in ways that align with the labelling requirements. This, in turn, could steer the direction of AI research and development and shape the future trajectory of the technology.
The implications of labelling AI-generated content for businesses and tech platforms are, therefore, multifaceted. Businesses would need to adapt their content strategies, manage their brand reputation, and address algorithmic bias concerns. Tech platforms, on the other hand, would face compliance costs, the challenge of balancing user experience, and the need for innovation in line with labelling requirements. Navigating these implications would require adjustments, investments, and a careful consideration of user expectations and experiences in the evolving landscape of AI-generated content.
Tech News : UK Will Host World’s First AI Summit
During his recent visit to Washington in the US, UK Prime Minister Rishi Sunak announced that the UK will host the world’s first global summit on artificial intelligence (AI) later this year.
Focus On AI Safety
The UK government says this first major global summit on AI safety will bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI.
Threat of Extinction
Since ChatGPT became the fastest growing app in history and people saw how ‘human-like’ generative AI appeared to be, much has been made of the idea that AI’s rapid growth could see it get ahead of our ability to control it, leading to it destroying and replacing us. This fear has been fuelled by events like:
– In March, an open letter calling for a 6-month moratorium on training AI systems more powerful than GPT-4 was signed by notable tech leaders like Elon Musk, Steve Wozniak, and Tristan Harris.
– In May, Sam Altman, the CEO of OpenAI, signed the open letter from the San Francisco-based Center for AI Safety warning that AI poses a threat that should be treated with the same urgency as pandemics or nuclear war, and could result in human extinction. See the letter and signatories here: https://www.safe.ai/statement-on-ai-risk#open-letter .
How?
Current thinking about just how AI could wipe us all out, and the risks that AI poses to humanity, includes:
– The Erosion of Democracy: AI-producing deep-fakes and other AI-generated misinformation resulting in the erosion of democracy.
– Weaponisation: AI systems being repurposed for destructive purposes, increasing the risk of political destabilisation and warfare. This includes using AI in cyberattacks, giving AI systems control over nuclear weapons, and the potential development of AI-driven chemical or biological weapons.
– Misinformation: AI-generated misinformation and persuasive content undermining collective decision-making, radicalising individuals, hindering societal progress, and eroding democracy. AI, for example, could be used to spread tailored disinformation campaigns at large scale, including generating highly persuasive arguments that evoke strong emotional responses.
– Proxy Gaming: AI systems trained with flawed objectives could pursue their goals at the expense of individual and societal values. For example, recommender systems optimised for user engagement could prioritise clickbait content over well-being, leading to extreme beliefs and potential manipulation.
– Enfeeblement: The increasing reliance on AI for tasks previously performed by humans could lead to economic irrelevance and loss of self-governance. If AI systems automate many industries, humans may lack incentives to gain knowledge and skills, resulting in reduced control over the future and negative long-term outcomes.
– Value Lock-in: Powerful AI systems controlled by a few individuals or groups could entrench oppressive systems and propagate specific values. As AI becomes centralised in the hands of a select few, regimes could enforce narrow values through surveillance and censorship, making it difficult to overcome and redistribute power.
– Emergent Goals: AI systems could exhibit unexpected behaviour and develop new capabilities or objectives as they become more advanced. Unintended capabilities could be hazardous, and the pursuit of intra-system goals could overshadow the intended objectives, leading to misalignment with human values and potential risks.
– Deception: Powerful AI systems could engage in deception to achieve their goals more efficiently, undermining human control. Deceptive behaviour may provide strategic advantages and enable systems to bypass monitors, potentially leading to a loss of understanding and control over AI systems.
– Power-Seeking Behaviour: Companies and governments have incentives to create AI agents with broad capabilities, but these agents could seek power independently of human values. Power-seeking behaviour can lead to collusion, overpowering monitors, and pretending to be aligned, posing challenges in controlling AI systems and ensuring they act in accordance with human interests.
Previous Meetings About AI Safety
The UK Prime Minister has been involved in several meetings about how nations can come together to mitigate the potential threats posed by AI including:
– In May, meeting the CEOs of the three most advanced frontier AI labs, OpenAI, DeepMind and Anthropic, in Downing Street. The UK’s Secretary of State for Science, Innovation and Technology also hosted a roundtable with senior AI leaders.
– Discussing this issue with businesspeople, world leaders and all members of the G7 at the Hiroshima Summit last month, where they agreed to aim for a shared approach to this issue.
Global Summit In The UK
The world’s first global summit about AI safety (announced by Mr Sunak) will be hosted in the UK this autumn. It will consider the risks of AI, including frontier systems, and will enable world leaders to discuss how these risks can be mitigated through internationally coordinated action. The summit will also provide a platform for countries to work together on further developing a shared approach to mitigating these risks and the work at the AI safety summit will build on recent discussions at the G7, OECD and Global Partnership on AI.
Prime Minister Sunak said of the summit, “No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.”
What Does This Mean For Your Business?
The speed at which ChatGPT and other AI tools have grown has outpaced proper risk assessment, regulation, and any co-ordinated strategy for mitigating risks while preserving the positive benefits and potential of AI. Frightening warnings and predictions by big tech leaders have also helped provide the motivation for countries to get together for serious talks about what to do next. The announcement of the world’s first global summit on AI safety, to be hosted by the UK, marks a significant step in addressing the risks posed by artificial intelligence, and could provide some kudos to the UK and help strengthen the idea that the UK is a major player in the tech industry.
The bringing together of key countries, leading tech companies, and researchers to agree on safety measures and evaluate the most significant risks and threats associated with AI, together with the collective actions taken by the global community (including discussions at previous meetings and the upcoming summit), demonstrates a commitment to mitigating these risks through international coordination and is a positive first step in governments catching up with (and getting a handle on) this most fast-moving of technologies.
It is important to remember that while AI poses challenges, it also offers numerous benefits for businesses. These benefits include improved efficiency, enhanced decision-making, and innovative solutions, and tools such as ChatGPT and image generators such as DALL-E have proven to be popular time-saving, cost-saving and value-adding tools. That said, AI image generators have raised challenges around copyright and consent for artists and visual creatives. Although there have been dire warnings about AI, these seem far removed from the practical benefits that AI is delivering for businesses, and striking a fair balance between harnessing the potential of AI and addressing its risks is crucial for ensuring a safe and beneficial future for all.
Sustainability-in-Tech : Paper Packaging … Vodka Be Better?
Absolut Vodka has begun a 3-month trial of its first commercially available single-mould, paper-based bottle.
Trial – 22 Tesco Stores In Greater Manchester
As part of what it describes as its “journey to create a fully bio-based bottle”, Swedish vodka brand Absolut is holding a trial of the 500ml-sized single-mould paper bottles in 22 Tesco stores in the Greater Manchester area (priced £16 each).
Gathering Feedback
The trial will be used to gather feedback and insights from Absolut’s consumers, retailers, and supply chain partners about how the paper-based bottles are transported and perceived by customers.
Mostly Paper … With A Plastic Barrier
The paper-based bottles are, in fact, 57 per cent paper with an integrated barrier of recyclable plastic, and were created in collaboration with Paboco (the Paper Bottle Company). Paboco is understood to also be working with other global brands, such as The Coca-Cola Company, Carlsberg, P&G and L’Oréal, to help the drinks and packaging industries create more sustainable packaging.
Carbon Neutral By 2030
It is hoped that the paper-based vodka bottles will be another step towards helping Absolut reach its target of making its vodka a carbon-neutral product by 2030. Its distillery, for example, already emits 98 per cent fewer emissions than the average distillery, and a prerequisite for meeting this goal is reducing the carbon footprint of its packaging, which the new bottles are designed to help achieve.
Glass Is Recyclable … So Why Is Paper Better?
Although standard glass bottles are also recyclable, Absolut says the paper-based bottles are eight times lighter and easier to carry. This will save costs in transportation and energy consumption during shipping as well as being an improvement in terms of sustainability and carbon reduction.
New Market Opportunities Too
Also, the fact that paper-based bottles offer greater design flexibility compared to glass means they could be moulded into different shapes and sizes, allowing for innovative and customisable packaging options, thereby enabling the company to create new, differentiated, and segmented products that stand out from competing vodka brands. This idea was evident when Elin Furelid, Director of Future Packaging at Absolut, commented: “We are exploring packaging that has a completely different value proposition. Paper is tactile; it’s beautiful; it’s authentic; it’s light. That was our starting point.”
Absolut says it believes consumers will use the paper bottles in out-of-home occasions such as festivals. For example, festivals such as Glastonbury ban festival-goers from taking glass bottles inside. It may transpire, therefore, that the packaging also offers another way for Absolut to appeal to and further establish itself among younger drinkers.
Sustainable Future
Furelid also said of the paper-based bottles: “We want consumers and partners to join our journey towards a more sustainable future. Together we can develop packaging solutions that people want and the world needs. That’s why bold partnerships with like-minded organisations to test the waters are going to be evermore crucial on our net zero journey.”
Absolut And AI
Absolut is a brand that has also made the news through becoming the world’s first vodka brand to visually develop cocktail art using AI. In Canada, the company compiled a list of cocktail ingredients from different Canadian cities, put them into an AI platform, and asked it to mix cocktail artwork celebrating each neighbourhood.
What Does This Mean For Your Business?
This paper-based packaging represents not just a way for the vodka brand to help meet its green targets, reduce its carbon footprint, and improve the sustainability of its packaging; it also has many commercial advantages. For example, because it’s lightweight it can reduce transport costs (and shipping energy consumption), and its flexibility as a packaging medium compared to glass could enable innovative and differentiated packaging designs that enhance the brand, help market new products, and allow easier segmentation and targeting. The example of AI being used to generate designs drawing upon different cities’ cocktail preferences, combined with limited runs of paper-based packaging, shows what could be possible in terms of targeting. Absolut’s paper-based bottle trial is only three months in duration and will run alongside normal glass bottles. Nevertheless, it’s an example of how sustainability innovations could help lower the carbon footprint of the drinks industry, and paper-based bottles could provide an energy-saving and cost-saving replacement for glass for many different products in the near future, as well as helping the recycling industry.
Tech Trivia : Did You Know? This Week in History …
Consider all the atoms in 10 million galaxies …
At midnight on June 18th, 1997, the DESCHALL Project bore fruit. The challenge had been to use ‘Brute Force’ to discover the meaning of a message which had been encrypted. Going through up to seven billion possibilities per second, the key was cracked after 96 days by thousands of computers running simultaneously. The message was revealed to be “Strong cryptography makes the world a safer place”.
Back then, the specialist software used to brute-force the ‘key’ was designed for use on Pentium 200MHz computers, which have of course long since been outmoded by chips running orders of magnitude faster. The crack prompted the National Institute of Standards and Technology to initiate what would morph into the formidable Advanced Encryption Standard (AES). Today, AES 256-bit encryption stands as a standard in the encryption landscape.
One might be forgiven for thinking that this 256-bit encryption is unhackable. After all, the largest number expressed by 256 bits is of the order of magnitude of the number of atoms contained in 10 million galaxies. Give or take a few atoms.
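As a rough sanity check on those numbers (a sketch only; the seven-billion-keys-per-second rate is the one reported in the story above), a few lines of Python show why the 56-bit DES key cracked in 1997 was within reach of brute force while a 256-bit key is not:

```python
# DES, the cipher cracked by the DESCHALL Project, uses a 56-bit key.
des_keyspace = 2 ** 56
rate = 7_000_000_000  # keys per second -- the project's reported peak rate

# Exhausting the whole DES keyspace at that rate takes about 119 days;
# DESCHALL found the key in 96 days, before the search completed.
worst_case_days = des_keyspace / rate / 86_400
print(f"DES worst case: {worst_case_days:.0f} days")

# A 256-bit keyspace is 2^200 times larger -- roughly 1.16e77 keys,
# far beyond brute force at any plausible hardware speed.
aes_keyspace = 2 ** 256
print(f"256-bit keyspace: {aes_keyspace:.2e} keys")
print(f"Ratio of keyspaces: 2^{256 - 56}")
```

Even at a trillion times DESCHALL’s rate, exhausting a 256-bit keyspace would still take on the order of 10^47 years.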
Yet even these mind-bogglingly large numbers may not be safe forever: quantum computing, once fully realised, threatens much of today’s encryption (public-key schemes such as RSA are particularly exposed, while symmetric ciphers like AES are expected to fare better). This means novel methods of post-quantum encryption to secure information must be developed, and this will represent both opportunities and threats for businesses, as is always the case with any kind of disruption.
Tech Tip – Using WhatsApp As A Personal Note-Taking Tool
Using WhatsApp as a personal note-taking tool allows you to conveniently store and organise your thoughts, links, and important information, and provides a fast and accessible way to capture and retrieve notes whenever you need them. Here’s how:
– Open WhatsApp on your mobile device and create a new chat by tapping on the ‘New Chat’ icon.
– Instead of selecting a contact, search for your own phone number or name in the contacts list.
– Tap on your own contact to start a private chat with yourself and treat this chat as your personal note-taking space.
– Write down important information, ideas, links, or draft messages that you want to save for later.
– Use the text input field to type your notes or paste links. You can also use the attachment options to save photos, videos, or documents as notes.
– To keep your notes organised, you can create categories or use hashtags within the chat to label and group related notes.
– Whenever you need to access your notes, simply open the chat with yourself and scroll through the saved messages.
– Since WhatsApp synchronises across devices, you can access your notes from any device where you have WhatsApp installed.
Tech News : 20 NHS Trusts Shared Personal Data With Facebook
An Observer investigation has reported uncovering evidence that 20 NHS Trusts have been collecting data about patients’ medical conditions and sharing it with Facebook.
Using A Covert Tracking Tool
The newspaper’s investigation found that over several years, the trusts have been using the Meta Pixel analytics tool to collect patient browsing data on their websites. The kind of data collected includes page views, buttons clicked, and keywords searched. This data can be matched with IP addresses and Facebook accounts to identify individuals and reveal their personal medical details.
Sharing this collected personal data with Facebook’s parent company Meta, albeit unknowingly, without the consent of NHS Trust website users is therefore potentially illegal under data protection law (UK GDPR) and a breach of privacy rights.
Meta Pixel
The Meta Pixel analytics tool is a piece of code that enables website owners to track visitor activities on their website, helps identify Facebook and Instagram users, and shows how those users interacted with the site’s content. This information can then be used for targeted advertising.
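To make the privacy risk concrete, here is a simplified, hypothetical Python sketch (not Meta’s actual code, parameter names, or endpoints) of the kind of request a tracking pixel fires on each page view. Note how the full page URL travels to the third party, and on a health website the URL alone can reveal a medical condition:

```python
from urllib.parse import urlencode

def build_pixel_request(page_url: str, event: str, browser_id: str) -> str:
    """Build the query string a tracking pixel might attach to its
    beacon/image request. Parameter names here are illustrative only."""
    params = {
        "ev": event,        # the event type, e.g. a page view or button click
        "dl": page_url,     # the full URL of the page being viewed
        "uid": browser_id,  # a cookie/browser identifier, linkable to an account
    }
    return "https://tracker.example.com/collect?" + urlencode(params)

# A visit to a condition-specific page leaks the condition via the URL alone
# (the NHS trust domain below is made up for illustration):
req = build_pixel_request(
    "https://trust.example.nhs.uk/services/sexual-health/appointments",
    "PageView",
    "abc123",
)
print(req)
```

Real analytics pixels typically send far more than this (referrer, screen size, click targets), which is why auditing exactly what any embedded third-party tool transmits is a key compliance step.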
17 Have Now Removed It
It’s been reported that since the details of the newspaper’s investigation were made public, 17 of the 20 NHS trusts identified as using the Meta Pixel tool have now removed it from their website, with 8 of those trusts issuing an apology.
The UK’s Information Commissioner’s Office (ICO) is now reported to have begun an investigation into the activities of the trusts.
UK GDPR
Under the UK Data Protection Act 2018 and the EU General Data Protection Regulation (GDPR), organisations processing personal data must obtain lawful grounds for processing, which typically includes obtaining user consent. Personal data is any information that can directly or indirectly identify an individual.
An NHS trust using an analytics tool like Meta Pixel on its website to collect and share personal data without obtaining user consent could, therefore, potentially be acting illegally. Both the NHS trust and the analytics tool provider (Meta) have responsibilities under data protection laws.
The GDPR and the UK Data Protection Act require organisations to provide transparent information to individuals about the collection and use of their personal data, including the purposes of processing and any third parties with whom the data is shared. Individuals must be given the opportunity to provide informed consent before their personal data is collected, unless another lawful basis for processing applies.
What Does This Mean For Your Business?
The recent revelation that 20 NHS Trusts have been collecting and sharing personal data with Facebook through the use of the Meta Pixel analytics tool raises important lessons for businesses regarding their data protection practices. The Trusts’ actions, conducted without user consent, appear to represent a breach of privacy rights and potentially violate data protection laws, including the UK Data Protection Act 2018 and GDPR.
The Meta Pixel analytics tool, although widely used as an advertising effectiveness measurement tool, can have unintended consequences when it comes to personal data, such as medical data, and data privacy. The amount of information shared through this tool is often underestimated, and the implications for the NHS trusts could be severe. As several online commentators have pointed out, the trusts may have known little about how the Meta Pixel tool works and, therefore, collected and shared user data unwittingly; however, ignorance is unlikely to stand up as an excuse.
It is, of course, encouraging that in response to the investigation, 17 out of the 20 identified NHS Trusts have at least removed the Meta Pixel tool from their websites, with some going on to issue apologies. To avoid similar privacy breaches and maintain the trust of customers, businesses should take immediate action.
Examples of how businesses could ensure their data protection compliance with regard to their websites and any tools used include establishing a cross-functional data protection team with members from legal, technology, and marketing, and with the support of senior management. They could also conduct a thorough analysis of all data collected and transferred by websites and apps, identify the data necessary for their operations, and ensure that legal grounds (such as consent) are in place for collecting and processing that data. For most smaller businesses, it’s a case of remembering to stay on top of data protection matters, checking what any tools are collecting, and keeping the importance of consent top-of-mind.
The implications for Meta of the newspaper’s report and the impending ICO investigation are significant as well. The incident highlights the need for greater transparency and understanding of the tools and services offered by companies like Meta, especially when it comes to sensitive topics and personal data. Privacy concerns arise when information from browsing habits is shared with social media platforms. Meta must address these concerns and ensure that the data collected through its tools is handled in accordance with data protection laws and user consent.
Overall, this case emphasises the importance of data protection compliance, informed consent, and transparency in the handling of personal data. Businesses must prioritise privacy and data security to maintain customer trust and avoid legal consequences.