Sustainability-in-Tech : Huge Global Demand For Green Skills

New LinkedIn research has highlighted a shortage of the green skills needed to develop green industries and achieve global climate ambitions.

Green Skills Shortage In The Workforce

LinkedIn’s Global Green Skills Report 2023 showed that although the concentration of “green talent” in the workforce is growing, i.e. there is a growing number of workers with a ‘green’ job or who list at least one green skill on their LinkedIn profile, the increase in demand for green skills is outpacing the increase in supply. This has raised the prospect of an imminent green skills shortage that could threaten the development of new industries and slow down efforts to make homes and commercial buildings more energy efficient.

For example, the report shows that worldwide, only one in eight workers has one or more green skills – seven in eight workers lack even a single green skill.

What Is A Green Skill? 

Green skills can be described as the expertise and abilities focused on promoting environmental sustainability. Examples include renewable energy installation and maintenance, sustainable construction and design, environmental engineering, energy efficiency, sustainable agriculture, environmental education, green data analysis, waste management and recycling, green business and sustainability management, and environmental policy and planning.

Green Job? 

Although it’s relatively clear what green skills are, there’s a lack of a precise global definition of what constitutes a “green job”. For example, some see them as jobs in sectors that directly drive the net zero transition, e.g. renewable energy or electric vehicle production, while others see green jobs as simply those with a high share of green-related tasks.

The LinkedIn report makes the point that greater numbers of green skills need to be incorporated in deeper, more impactful ways into more jobs anyway to help us meet our climate goals.

Green Jobs Demand Outpacing Green Talent Supply 

Taking a broad view of what constitutes a ‘green job’, the LinkedIn report illustrates that the growth in the share of job postings requiring a green skill exceeds the growth rate of green skills being acquired by the workforce. For example:

– Between 2022 and 2023, the share of green talent in the workforce rose by a median of 12.3 per cent while the share of job postings requiring at least one green skill grew twice as quickly (by a median of 22.4 per cent).

– The five-year annualised growth rate between 2018 and 2023 shows a similar trend, with the share of green talent growing by only 5.4 per cent per year over that period, while the share of jobs requiring at least one green skill grew by 9.2 per cent.

Green Hiring Still Bucked The Trend 

That said, and even though overall hiring slowed over the past year, the LinkedIn research shows that green hiring bucked that trend. For example, while overall hiring slowed globally between February 2022 and February 2023, job postings requiring at least one green skill grew by a median of 15.2 per cent over the same period.

Some Countries Easier To Get A Green Job With No Green Experience 

LinkedIn’s research shows that although green skills generally aren’t being acquired quickly enough by the workforce, some jobs in some countries provide a better chance of workers getting a green job without prior green experience, e.g. waste management specialists and solar consultants in the UK and US.

It’s also worth noting that some jobs can be regarded as ‘gateway jobs’ to acquiring green skills.

For example, an energy efficiency analyst works with organisations to understand their energy consumption and develop strategies to reduce it. This role typically involves conducting energy audits, analysing energy data, identifying energy-saving opportunities, and recommending energy-efficient technologies or improvements. They may also monitor and verify the effectiveness of energy efficiency measures that have been implemented.

While this role might require some background knowledge in energy systems, it is also a job that encourages on-the-job learning about energy efficiency and conservation methods, the use of renewable energy, and the understanding of energy policies and regulations.

This position can lead to a deeper understanding and acquisition of green skills, which can be used to move into more specialised roles in the future, such as an Energy Manager, Sustainability Director, or Environmental Policy Advisor.

Challenges  

In addition to the general challenge of a green skills shortage in the workforce, other challenges that could hold back the green transition and green jobs include:

– Those working in fossil fuel jobs are generally paid more than those in green jobs.

– There is a lack of investment in reskilling workers.

– There is an apparent lack of equality in green jobs, e.g. green jobs appear to be more the domain of men. There is also a disparity in education, with green jobs tending to go to more highly educated workers.

– There is a lack of incentives, e.g. tax and other financial incentives for companies to pay better wages for green jobs and take on apprentices.

What Does This Mean For Your Organisation? 

As the LinkedIn report points out, “Our ambitious green goals require the rapid proliferation of green skills”. The current green skills gap in the workforce poses significant challenges as the demand for these skills continues to outpace their acquisition. This trend has implications for businesses, governments, and the achievement of environmental and carbon targets. To address this gap, businesses and organisations must recognise the importance of investing in new technologies and creating attractive working conditions to entice workers. Governments, on the other hand, should provide clear incentives for companies to adopt green practices and develop comprehensive policies and programs that equip workers with the necessary green skills.

Closing the skills gap requires a concerted effort to reskill and upskill today’s workforce, enabling workers to learn green skills on the job. Tailored reskilling programs that identify relevant green skills for each role and industry should be developed, along with expanded access to economic opportunities for workers in countries that have been left behind. Collaboration between governments, the private sector, educators, and institutions of higher learning is crucial to ensure that green skills are integrated into curricula across various fields of study.

To tackle the issue, governments should work with the private sector to accelerate skills-based hiring and leverage real-time skills and hiring data for informed decision-making. Identifying gateway jobs that facilitate transitions to other green roles, supporting workers during pay cuts in the transitional period, and creating impactful upskilling and on-the-job training programs are essential. Maximising investments in these efforts and fostering the development of new degree programs catering to specialised green skills are also crucial steps.

Ultimately, it is through the collaboration and commitment of all stakeholders that the green skills gap can be closed, driving the necessary green transformation, and contributing to the achievement of global sustainability goals.

Tech Trivia : Did You Know? This Week in History …

QWERTY Keyboards. Mightier Than The Gun?

In a world where typing has long outstripped writing, few people know the story of how the QWERTY keyboard came about, or that famous gun manufacturer Remington was involved.

On June 23rd, 1868, the Sholes and Glidden typewriter was patented. This was the first commercially practical device of its kind, the result of a project begun in 1867 with Christopher Latham Sholes at the helm, flanked by Samuel W. Soule and Carlos S. Glidden. When Soule stepped away, James Densmore filled the void, injecting much-needed finances and pivotal guidance with his visionary insights.

Earlier typewriters used to jam frequently, so Densmore’s ‘stroke of genius’ was to scatter frequently-used letter combinations so they were less likely to jam. This was then honed by Sholes into the QWERTY keyboard layout – a design still at the heart of our digital world today. Sholes’ influence extended beyond invention to politics, where he stood out for his integrity.

Interestingly, after countless refinements, his typewriter finally piqued the interest of E. Remington and Sons. This renowned firearm manufacturer, known for their innovation, saw promise in the invention. This aligned with their own core values of pushing boundaries, as shown when the younger Remington endeavoured to forge a superior gun barrel from wrought iron, according to one of the firm’s origin stories.

John Jonathan Pratt’s ‘Pterotype’ (an early prototype of a typewriter) acted as an inspiration for the inventors. The curious device caught Glidden’s attention, and he shared it with Sholes, who immediately saw the potential for a more refined machine. However, the road to perfection was long. As Densmore candidly observed, the early Sholes and Glidden typewriter was “good for nothing except to show that its underlying principles were sound”, and it took numerous revisions to create a market-ready product.

When Remington took the reins, the typewriter hit the market in 1874, spawning an entirely new industry. Priced at about $125, it found buyers in various quarters, not least among them, the renowned author Mark Twain.

The Sholes and Glidden typewriter story encapsulates the essence of relentless dedication, innovative thinking and ceaseless improvement, all of which are crucial for modern companies looking to make an impact and establish themselves as market leaders.

Tech Tip – How To Use ChatGPT As A Collaborative Brainstorming Tool

If you’re working alone but need to brainstorm, you can use ChatGPT as an effective collaborative brainstorming tool, inspiring innovative thinking and generating fresh insights. Here’s how:

– Set the Context: Open ChatGPT, select ‘New chat’ and introduce the brainstorming topic or challenge.

– Engage in Dialogue: Start a conversation with ChatGPT about the topic.

– Encourage Divergent ‘Thinking’: Ask open-ended questions to generate a range of ideas.

– Prompt for Specific Inputs: Provide clear instructions for focused idea generation.

– Explore Generated Responses: Review and extract promising ideas.

– Iterate and Refine: Have a back-and-forth conversation to build upon ideas.

– Document and Evaluate: Keep track of the most promising ideas (ChatGPT stores your chats anyway).

– Combine Human and AI Creativity: Blend ChatGPT’s ideas with your own insights.

– Validate and Implement: Select the most promising ideas and develop an action plan.
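
If you’d rather script the same brainstorming loop, the short sketch below shows one way to do it via the OpenAI API. It’s a minimal illustration only: it assumes the pre-1.0 ‘openai’ Python package and a valid API key, and the model name and prompts are placeholders rather than recommendations.

```python
# Minimal sketch of a scripted brainstorming loop (illustrative assumptions only:
# pre-1.0 'openai' package, placeholder API key, model and prompts).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [
    {"role": "system", "content": "You are a brainstorming partner. Offer varied, concrete ideas."},
    {"role": "user", "content": "Context: a small accountancy firm wanting to attract younger clients."},
]

follow_ups = [
    "Suggest ten unconventional marketing ideas.",              # encourage divergent 'thinking'
    "Combine the two strongest ideas into a single campaign.",  # iterate and refine
]

for prompt in follow_ups:
    messages.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})   # keep the conversation context
    print(reply)
    print("-" * 40)
```

The running list of messages is what gives the back-and-forth, build-on-previous-ideas feel described in the steps above.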

Featured Article: Reddit’s ‘Blackout’ User Protest

Following thousands of moderators making their subreddit communities private for 48 hours as a protest, we look at the reasons why, together with the implications for Reddit, its users, and other stakeholders.

What Is Reddit? 

Reddit is a social media platform where users can join communities called ‘subreddits’ to share content, participate in discussions, and interact with others. These subreddits each focus on a specific topic of interest.

Reddit users, also known as ‘Redditors’, create an account, subscribe to subreddits, and contribute by submitting posts or commenting on existing ones. The platform uses a voting system where users can upvote or downvote content, influencing its visibility.

Subreddits are moderated by volunteers who enforce rules and guidelines, and Reddit also features a ‘karma’ system that reflects user engagement. Overall, Reddit is a diverse and interactive platform for sharing and discovering content with a vast user community.

Who Are The ‘Mods’, And What Do They Do? 

The volunteer subreddit moderators (also known as ‘mods’) oversee and maintain specific communities within Reddit by enforcing the subreddit’s rules and guidelines, monitoring user activity, reviewing and approving posts and comments, removing spam or inappropriate content, and responding to user reports. They may also engage in discussions, facilitate AMA (Ask Me Anything) sessions, organise events, and promote community growth. Reddit is, therefore, heavily reliant upon moderators, who tend to spend one or two hours per day on their moderating activities.

What Happened To Cause The Blackout? 

Recently, Reddit introduced a series of charges to the third-party developers who want to continue using its Application Programming Interface (API) to access its data, i.e. the code which allows third-party apps to find and show the content on Reddit. For example, ‘Apollo’, ‘Reddit is Fun’, ‘Sync’ and ‘ReddPlanet’ are four third-party apps which were set up to enable users to access Reddit on their mobile devices. However, because of the new API charges, all four have said they will be shutting down.
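
For context, third-party apps like these read and post content through Reddit’s API. The sketch below is a minimal, illustrative example of the kind of read request such a client makes, using the PRAW library; the credentials and subreddit name are placeholders, and a real app makes very large numbers of such calls (which is what the new charges relate to).

```python
# Minimal sketch of a third-party client reading Reddit content via the API.
# Requires the PRAW library; the credentials and subreddit below are placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder, issued when registering an app
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    user_agent="example-reader/0.1",
)

# Fetch the five 'hottest' posts from a subreddit, much as a mobile client would.
for post in reddit.subreddit("technology").hot(limit=5):
    print(post.score, post.title)
```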

Many subreddit moderators (some with large numbers of users in their subreddits) protested about the effect the move would have on them, supported the third-party app owners, and tried to put pressure on Reddit to reverse its new API charging decision by imposing a blackout, known as “going dark”. For most, this involved making their communities private for 48 hours from June 12, although some threatened to stay dark permanently unless the issue is adequately addressed. Making a subreddit private means that its content and discussions are no longer accessible to the general Reddit userbase.

Key reasons why the moderators are protesting include the fact that the departure of third-party apps will make the platform less accessible, and that voluntary moderators will have fewer quality tools to work with through the official app, thereby making their job more difficult and, some argue, making it nearly impossible for them to maintain their levels of service to users.

How Will “Going Dark” Hurt Reddit? 

“Going Dark” for a substantial period of time is likely to cause a noticeable decrease in user engagement and activity on the platform, since private subreddits limit content accessibility. That means less time spent on Reddit, potentially hitting metrics like page views and user interactions.

If there were more (longer) blackouts following this one, advertisers might also become concerned because private subreddits limit visibility and reach, potentially affecting the attractiveness of Reddit as an advertising platform. Also, private subreddits becoming widespread could impact the platform’s reputation as an open and inclusive community, deterring new users and affecting the overall user experience. Reddit’s business model relies on advertising revenue and premium subscriptions, so if private subreddits significantly impact these revenue streams, Reddit might need to adapt its model or explore new sources of revenue or, as the protesting moderators hope with their protest action, reverse the new policy for charges for third-party apps.

With five of the ten most popular communities on Reddit (r/gaming, r/aww, r/Music, r/todayilearned and r/pics) taking part, each of which has more than thirty million members, 48 hours of “going dark” is likely to cause some damage, generate some adverse (and potentially damaging) publicity for Reddit, and give Reddit’s owners (Advance Publications Inc) a painful reminder of the importance and power of moderators, and of how things could become more challenging if their concerns aren’t addressed. Some moderators, for example, have said they will make their subreddits indefinitely inaccessible until Reddit reverses its policy.

Unpopular 

Some moderators have been quoted as saying they would not continue to moderate if the ‘unpopular’ changes were pushed through. Their hope is that strength in numbers, by acting together, will provide enough pressure to change Reddit’s mind about the changes and make Reddit realise the value and the power of its moderators.

Extortionate … Or Necessary? 

Some people have suggested that the new charges are extortionate. For example, Apollo’s developer Christian Selig (who has announced he will shut the app down on June 30) has suggested that the new Reddit charges could cost him £15.9 million if he continues operating the app.

Reddit’s CEO, Steve Huffman, has said that the platform “needs to be a self-sustaining business”, indicating that some form of increased revenue is needed, such as the new charges. He also said that he respected the communities taking action to highlight what they needed.

Reddit’s own (multi-million dollar) hosting costs, and its wish to be compensated in line with the usage levels involved in supporting third-party apps, were two of the main reasons highlighted for introducing the charges.

Reddit Went Down 

The action taken by the 7,000+ subreddit moderators is thought to have been the cause of Reddit going down on the first day of the protest, 12 June. It was reported that subreddits with a combined total of over two and a half billion subscriptions “went dark” on the platform as part of the protest.

Declining Anyway? 

Some commentators have suggested that Reddit has been a platform in decline anyway. Also, Reddit recently downsized its (leased) office space and reportedly announced a 5 per cent cut of its staff (90 employees).

What Does This Mean For Your Business? 

Reddit’s move to get payment from the makers of third-party apps (after seven years of maintaining a free API) has been very poorly received, and it has been likened to Musk’s Twitter, which also stopped free usage of its API in February (and then backtracked a little). For third-party app developers, the charges are clearly very bad news, which some say will put them out of business, with several already saying they will be shutting down.

The protest by moderators has already led to the whole of Reddit going down, has produced damaging publicity, has affected vast numbers of users, and could have a hugely detrimental effect on Reddit’s business if it continues, e.g. loss of advertisers, moderators not maintaining the platform (affecting users and quality), damage to reputation, users leaving, business/premium users cancelling, and more. In short, Reddit’s move to suddenly impose a change to its model and raise more revenue in this way has been met with fierce resistance. It has exposed how the voluntary moderators (who are a strength of the company’s services) feel undervalued and ignored, and how they have enough power, if organised, to seriously pressure (and damage) the company. Reddit is currently showing no signs of backing down, and it remains to be seen whether the pressure from moderators inflicts too much damage and forces some back-pedalling, or whether this is the start of major changes to Reddit’s model.

For many users of the platform, including businesses, the site going down and too much ‘going dark’ could see them run out of patience and look at alternatives.

Tech Insight : The Impact of Generative AI On Datacentres

Generative AI tools like ChatGPT, and the rapid, revolutionary growth of AI more generally, are changing the face of most industries and generating dire warnings about the future, but what about the effects on data centres?

Data Centres And Their Importance 

Data centres are the specialised facilities that house a large number of computer servers and networking equipment, serving as centralised locations where businesses and organisations store, manage, process, and distribute their digital data. These facilities are designed to provide a secure, controlled environment for storing and managing vast amounts of data in the cloud.

In our digital, cloud-computing business world, data centres therefore play a crucial role in supporting various industries and services that rely on large-scale data processing and storage, and are utilised by organisations ranging from small businesses to multinational corporations, cloud service providers, internet companies, government agencies, and research institutions.

The Impacts of Generative AI

There are a number of ways that generative AI is impacting data centres. These include:

– The need for more data centres. Generative AI applications require significant computational resources, including servers, GPUs, and data storage devices. As the adoption of generative AI grows, data centres will need to invest in and expand their infrastructure to accommodate the increased demand for processing power and storage capacity, and this will change the data centre landscape. For example, greater investment in (and greater numbers of) data centres will be needed. It’s been noted that AI platforms with massive data-crunching requirements, like ChatGPT, couldn’t continue to operate without Microsoft’s (soon-to-be-updated) Azure cloud platform. This has led to Microsoft now building a new 750K SF hyperscale data centre campus near Quincy, WA, to house three 250K SF server farms on land costing $9.2M. Presumably, with more data centres there will also need to be greater efforts and investment to reduce and offset their carbon footprint.

– Greater power consumption and more cooling needed. Generative AI models are computationally intensive and consume substantial amounts of power. Data centres also have backup power sources to ensure a smooth supply, such as uninterruptible power supplies (UPS) and generators. With more use of generative AI, data centres will need to ensure they have sufficient power supply and cooling infrastructure to support the energy demands of generative AI applications. This could mean that data centres will now need to improve power supplies to cope with the demands of generative AI by conducting power capacity planning, upgrading infrastructure, implementing redundancy and backup systems, optimising power distribution efficiency, integrating renewable energy sources, implementing power monitoring and management systems, and collaborating with power suppliers. These measures could enhance power capacity, reliability, efficiency, and sustainability. More data centres may also need to be built with their own power plants (like Microsoft did in Dublin in 2017).

In terms of the greater need for cooling, i.e. to improve cooling capacity for generative AI in data centres, strategies include optimising airflow management, adopting advanced cooling technologies like liquid cooling, implementing intelligent monitoring systems, utilising computational fluid dynamics simulations, exploring innovative architectural designs, and leveraging AI algorithms for cooling control optimisation. These measures could all enhance airflow efficiency, prevent hotspots, improve heat dissipation, proactively adjust cooling parameters, inform cooling infrastructure design, and dynamically adapt to workload demands to meet the cooling challenges posed by generative AI.
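
To make the capacity-planning point concrete, here is a rough, back-of-the-envelope sketch. All of the figures in it (server counts, per-GPU wattage, PUE) are invented assumptions for illustration, not measurements or vendor data; the point is simply that almost every watt drawn by the IT equipment becomes heat that the cooling plant has to remove.

```python
# Back-of-the-envelope power and cooling estimate for a block of GPU servers.
# Every number here is an illustrative assumption, not a measurement.

servers = 200                  # assumed number of GPU servers
gpus_per_server = 8
watts_per_gpu = 700            # assumed draw per accelerator under load
other_watts_per_server = 1500  # CPUs, memory, fans, storage (assumed)
pue = 1.4                      # assumed Power Usage Effectiveness of the facility

it_load_kw = servers * (gpus_per_server * watts_per_gpu + other_watts_per_server) / 1000
facility_load_kw = it_load_kw * pue            # total draw including cooling and losses
overhead_kw = facility_load_kw - it_load_kw    # power used by cooling plant, UPS losses, etc.

print(f"IT load:           {it_load_kw:,.0f} kW")
print(f"Facility draw:     {facility_load_kw:,.0f} kW (at PUE {pue})")
print(f"Cooling/overheads: {overhead_kw:,.0f} kW")
print(f"Heat to reject:    ~{it_load_kw:,.0f} kW (virtually all IT power ends up as heat)")
```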

– The need for scalability and flexibility. Generative AI models often require distributed computing and parallel processing to handle the complexity of training and inference tasks. Data centres therefore need to provide scalable and flexible infrastructure that can efficiently handle the workload and accommodate the growth of generative AI applications. Data centres will, therefore, need to be able to support generative AI applications through means such as:

– Virtualisation for dynamic resource allocation.
– High-performance Computing (HPC) clusters for computational power.
– Distributed storage systems for large datasets.
– Enhanced network infrastructure for increased data transfer.
– Edge computing for reduced latency and real-time processing.
– Containerisation platforms for flexible deployment and resource management.
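
As a toy illustration of the dynamic resource allocation point in the list above, the sketch below shows the kind of decision loop an orchestration layer might run, scaling the number of inference replicas up or down from a utilisation reading. The thresholds, limits and metric source are invented for the example.

```python
# Toy autoscaling decision for generative-AI inference replicas.
# Thresholds, replica limits and the utilisation metric are illustrative assumptions.

def decide_replicas(current_replicas: int, avg_gpu_util: float,
                    min_replicas: int = 2, max_replicas: int = 32) -> int:
    """Return the desired replica count given average GPU utilisation (0.0-1.0)."""
    if avg_gpu_util > 0.80:                                        # saturated: add capacity
        desired = current_replicas + max(1, current_replicas // 4)
    elif avg_gpu_util < 0.30 and current_replicas > min_replicas:
        desired = current_replicas - 1                             # under-used: shrink gradually
    else:
        desired = current_replicas                                 # comfortable: no change
    return max(min_replicas, min(max_replicas, desired))

# Example: a cluster of 8 replicas running hot scales out; an idle one scales in.
print(decide_replicas(8, 0.85))  # -> 10
print(decide_replicas(8, 0.20))  # -> 7
```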

– Data storage and retrieval. Generative AI models require extensive amounts of training data, which must be stored and accessed efficiently. Data centres now need to be able to optimise their data storage and retrieval systems to handle large datasets and enable high-throughput training of AI models.
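
As a simple illustration of the high-throughput retrieval point, the sketch below streams training records from many shard files rather than loading one enormous dataset into memory. The directory layout and JSON-lines record format are assumptions made up for the example.

```python
# Sketch of streaming training data from sharded files instead of one huge file.
# The 'training_data/shard-*.jsonl' layout and record format are assumed for illustration.
import glob
import json

def stream_records(shard_glob: str):
    """Yield training records one at a time from every matching shard file."""
    for path in sorted(glob.glob(shard_glob)):
        with open(path, "r", encoding="utf-8") as shard:
            for line in shard:
                yield json.loads(line)

def batches(records, batch_size: int = 256):
    """Group a record stream into fixed-size batches for a training loop."""
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# Usage: iterate over batches without ever holding the full dataset in memory.
for batch in batches(stream_records("training_data/shard-*.jsonl")):
    pass  # hand 'batch' to the training framework here
```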

– Security and privacy. Generative AI introduces new security and privacy challenges. Data centres must now be able to ensure the protection of sensitive data used in training and inferencing processes. Additionally, they need to address potential vulnerabilities associated with generative AI, such as the generation of realistic but malicious content or the potential for data leakage. Generative AI also poses cybersecurity challenges, as it can be used to create vast quantities of believable phishing emails or generate code with security vulnerabilities. Rather than relying on verification alone, an increased dependency on skilled workers and smart software may be necessary to address these security risks effectively.

– Customisation and integration. Generative AI models often require customisation and integration into existing workflows and applications. This means that data centres need to provide the necessary tools and support for organisations to effectively integrate generative AI into their systems and leverage its capabilities.

– Skillset requirements. Managing and maintaining generative AI infrastructure requires specialised skills and data centres will need to invest in training their personnel and/or attracting professionals with expertise in AI technologies to effectively operate and optimise the infrastructure supporting generative AI.

– Optimisation for AI workloads. The rise of generative AI also means that data centres need to find ways to optimise their operations and infrastructure to cater to the specific requirements of AI workloads. This includes considerations for power efficiency, cooling systems, network bandwidth, and storage architectures that are tailored to the demands of generative AI applications.

– Uncertain infrastructure requirements. The power consumption and hardware requirements of the increasing use of generative AI applications are yet to be fully understood and this means that the impact on software and hardware remains uncertain, and the scale of infrastructure needed to support generative AI is still not clear. The implications of this for data centres are, for example:

– A lack of clarity on power consumption and hardware needs (the specific power and hardware requirements of generative AI applications are not fully understood) makes it challenging for data centres to accurately plan and allocate resources.
– The impact of generative AI on software and hardware is still unclear, which makes it difficult for data centres to determine the necessary upgrades or modifications to support these applications.
– Without a clear understanding of the demands of generative AI, data centres cannot accurately estimate the scale of infrastructure required, potentially leading to under-provisioning or over-provisioning of resources.

– The need for flexibility and adaptability. Data centres must now be prepared to adjust their infrastructure dynamically to accommodate the evolving requirements of generative AI applications as more information becomes available.

AI Helping AI 

Ironically, data centres could use AI itself to help optimise their operations and infrastructure. For example, through:

– Predictive maintenance. AI analysing sensor data to detect equipment failures, minimising downtime.

– Energy efficiency. AI optimising power usage, cooling, and workload placement, reducing energy waste.

– Workload Optimisation. AI maximising performance by analysing workload patterns and allocating resources efficiently.

– Anomaly Detection. AI monitoring system metrics, identifying abnormal patterns, and flagging security or performance issues.

– Capacity Planning. AI analysing data to predict resource demands, optimising infrastructure expansion.

– Dynamic Resource Allocation. AI dynamically scaling computing resources, storage, and network capacity based on workload demands.
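
As a simple illustration of the anomaly detection point above, the sketch below flags unusual readings in a stream of data centre telemetry (say, a rack inlet temperature sensor) using a rolling mean and standard deviation. The window size, threshold and sample data are arbitrary assumptions for the example.

```python
# Toy anomaly detector for data centre telemetry (e.g. rack inlet temperature).
# Window size, z-score threshold and the sample data are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def flag_anomalies(readings, window: int = 20, threshold: float = 3.0):
    """Yield (index, value) for readings far outside the recent rolling range."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

# Example: a steady ~22 degC inlet temperature with one sudden spike.
temps = [22.0 + (i % 3) * 0.1 for i in range(60)]
temps[45] = 31.5
print(list(flag_anomalies(temps)))  # -> [(45, 31.5)]
```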

What Does This Mean For Your Business? 

Overall, while generative AI offers opportunities for increased efficiency and productivity for businesses, it also poses several challenges related to infrastructure, trust, security, and compliance. In our digital society and cloud-based business world, data centres play a crucial role in supporting industries, businesses, services and, as such, whole economies, so whether (and how quickly and effectively) data centres adapt to the challenges posed by AI is something that could affect all businesses going forward.

As a data centre operator or a business relying on data centres for smooth operations, the impact of generative AI on data centres presents both opportunities and challenges. On the one hand, the increased demand for processing power and storage capacity necessitates investments in infrastructure expansion and upgrades, providing potential business opportunities for data centre operators. It may lead to the establishment of more data centres and the need for greater efforts to reduce their carbon footprint.

However, this growth in generative AI also brings challenges that need to be addressed. Data centres must ensure sufficient power supply and cooling infrastructure to support the energy demands and heat dissipation requirements of generative AI applications. This may require capacity planning, infrastructure upgrades, integration of renewable energy sources, and the adoption of advanced cooling technologies. It also presents huge challenges in terms of trying to provide the necessary capacity in a way that minimises carbon emissions and meeting environmental targets.

Additionally, with the rise of generative AI, data centres now need to consider scalability, flexibility, security, and privacy implications. They must provide the necessary infrastructure and tools for businesses to integrate generative AI into their workflows and applications securely. Skillset requirements also come into play, as personnel need to be trained in AI technologies to effectively operate and optimise the data centre infrastructure.

Overall, understanding and addressing the implications of generative AI on data centres is crucial for both data centre operators and businesses relying on these facilities. By adapting to the evolving demands of generative AI and investing in optimised infrastructure and pursuing innovation, data centre operators can provide reliable and efficient services to businesses, ensuring seamless operations and unlocking the potential of generative AI for various industries.

Tech News : EU Wants AI-Generated Content Labelled

In a recent press conference, the European Union said that, to help tackle disinformation, it wants the major online platforms to label AI-generated content.

The Challenge – AI Can Be Used To Generate And Spread Disinformation

In the press conference, Vĕra Jourová (the vice-president in charge of values and transparency with the European Commission) outlined the challenge by saying, “Advanced chatbots like ChatGPT are capable of creating complex, seemingly well-substantiated content and visuals in a matter of seconds,” and that “image generators can create authentic-looking pictures of events that never occurred,” as well as “voice generation software” being able to “imitate the voice of a person based on a sample of a few seconds.”

Jourová warned of widespread Russian disinformation in Central and Eastern Europe and said, “we have the main task to protect the freedom of speech, but when it comes to the AI production, I don’t see any right for the machines to have the freedom of speech.”

Labelling Needed Now 

To help address this challenge, Jourová called for all 44 signatories of the European Union’s code of practice against disinformation to help users better identify AI-generated content. One key method she identified was for big tech platforms such as Google, Facebook (Meta), and Twitter to apply labels to any AI-generated content to identify it as such. She suggested that this change should take place “immediately.”

Jourová said she had already spoken with Google’s CEO Sundar Pichai about how the technologies exist and are being worked on to enable the immediate detection and labelling of AI-produced content for public awareness.

Twitter, Under Musk 

Jourová also highlighted how, by withdrawing from the EU’s voluntary Code of Practice against disinformation back in May, Elon Musk’s Twitter had chosen confrontation and “the hard way”, warning that, by leaving the code, Twitter had attracted a lot of attention, and that “its actions and compliance with EU law will be scrutinised vigorously and urgently.”

At the time, referring to the EU’s new and impending Digital Services Act, the EU’s Internal Market Commissioner, Thierry Breton, wrote on Twitter: “You can run but you can’t hide. Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement”.

The DSA & The EU’s AI Act 

Legislation, such as that referred to by Thierry Breton, is being introduced so that the EU can tackle the challenges posed by AI on its own terms rather than relying on Californian laws. Impending AI legislation includes:

The Digital Services Act (DSA) which includes new rules requiring Big Tech platforms like Meta’s Facebook, Instagram and YouTube to assess and manage risks posed by their services, e.g. advocacy of hatred and the spread of disinformation. The DSA also has algorithmic transparency and accountability requirements to complement other EU AI regulatory efforts which are driving legislative proposals like the AI Act (see below) and the AI Liability Directive. The DSA directs companies, large online platforms and search engines to label manipulated images, audio, and video.

The EU’s proposed ‘AI Act’, described as the “first law on AI by a major regulator anywhere”, which assigns applications of AI to three risk categories. These categories are ‘unacceptable risk’, e.g. government-run social scoring of the type used in China (banned under the Act); ‘high-risk’ applications, e.g. a CV-scanning tool to rank job applicants (which will be subject to legal requirements); plus those applications not explicitly banned or listed as high-risk, which are largely left unregulated.

What Does This Mean For Your Business? 

Among the many emerging concerns about AI are fears that the unregulated publishing of AI-generated content could spread misinformation and disinformation (via deepfake videos, photos, and voices) and, in doing so, erode truth and even threaten democracy. One method for enabling people to spot AI-generated content is to have it labelled (which the DSA seeks to do anyway); however, the EC’s vice-president in charge of values and transparency sees this as being needed urgently, hence asking all 44 signatories of the European Union’s code of practice against disinformation to start labelling AI-produced content now.

Arguably, it’s unlike big tech companies to act voluntarily before regulations and legislation force them to, and Twitter seems to have opted out already. The spread of Russian disinformation in Central and Eastern Europe is a good example of why labelling may be needed so urgently. That said, as Vĕra Jourová acknowledged herself, free speech needs to be protected too.

With AI generated content being so difficult to spot in many cases and with AI-generated content being published so quickly (in vast amounts), along with AI tools available to all for free, it’s difficult to see how the idea of labelling could be achievable or monitored/policed.

The requirement for big tech platforms like Google and Facebook to label AI-generated content could have significant implications for businesses and tech platforms alike. Primarily, labelling AI-generated content could be a way to foster more trust and transparency between businesses and consumers. By clearly distinguishing between content created by humans and that generated by AI, users would be empowered to make informed decisions. This labelling could help combat the spread of misinformation and enable individuals to navigate the digital realm with greater confidence.

However, businesses relying on AI-generated content must consider the impact of labelling on their brand reputation. If customers perceive AI-generated content as less reliable or less authentic, it could erode trust in the brand and deter engagement. Striking a balance between AI-generated and human-generated content would become crucial, potentially necessitating increased investments in human-generated content to maintain authenticity and credibility.

Also, labelling AI-generated content would bring attention to the issue of algorithmic bias. Bias in AI systems, if present, could become more noticeable when content is labelled as AI-generated. To address this concern, businesses would need to be proactive in mitigating biases and ensuring fairness in the AI systems used to generate content.

Looking at the implications for tech platforms, there may be considerable compliance costs associated with implementing and maintaining systems to accurately label AI-generated content. Such endeavours (if possible to do successfully) would demand significant investments, including the development of algorithms or manual processes to effectively identify and label AI-generated content.

Labelling AI-generated content could also impact the user experience on tech platforms. Users might need to adjust to the presence of labels and potentially navigate through a blend of AI-generated and human-generated content in a different manner. This change could require tech platforms to rethink their user interface and design to accommodate these new labelling requirements.

Tech platforms would also need to ensure compliance with specific laws and regulations related to labelling AI-generated content. Failure to comply could result in legal consequences and reputational damage. Adhering to the guidelines set forth by governing bodies would be essential for tech platforms to maintain trust and credibility.

Finally, the introduction of labelling requirements could influence the innovation and development of AI technologies on tech platforms. Companies might find themselves investing more in AI systems that can generate content in ways that align with the labelling requirements. This, in turn, could steer the direction of AI research and development and shape the future trajectory of the technology.

The implications of labelling AI-generated content for businesses and tech platforms are, therefore, multifaceted. Businesses would need to adapt their content strategies, manage their brand reputation, and address algorithmic bias concerns. Tech platforms, on the other hand, would face compliance costs, the challenge of balancing user experience, and the need for innovation in line with labelling requirements. Navigating these implications would require adjustments, investments, and a careful consideration of user expectations and experiences in the evolving landscape of AI-generated content.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a tech-jargon-free style.
