Tech Tip – How To Get A Full Long Page Screen Capture In Chrome

If you’d like to capture long web pages in their entirety, e.g. for use in documentation, presentations, or competitor analysis, Google Chrome has a lesser-known built-in way of doing this. Here’s how it works:

– Go to the web page you’d like to capture.

– Press Ctrl + Shift + I (or Cmd + Option + I on Mac) to open Developer Tools, then Ctrl + Shift + P (or Cmd + Shift + P on Mac) to open the Command Menu.

– In the search bar at the top (next to ‘Run >’) type “screenshot” and select “Capture full size screenshot”.

– The screenshot will be saved in your ‘Downloads’ folder as a PNG file.
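For repeatable captures (e.g. documenting many pages), the same idea can be scripted with Chrome’s headless mode. The sketch below is illustrative only: it assumes a Chrome/Chromium binary is available on your PATH under one of the common names, and the `--headless`/`--screenshot` flags can behave differently across Chrome versions. Headless `--screenshot` captures the window rather than the full page, so an oversized `--window-size` is used here as a rough stand-in for DevTools’ automatic full-page measurement:

```python
# Sketch: scripting a page capture with headless Chrome's CLI.
# Assumptions: a Chrome/Chromium binary is on PATH under one of the
# names below, and it supports --headless and --screenshot (flag
# behaviour varies between Chrome versions).
import shutil
import subprocess


def build_capture_cmd(url, out_path, width=1280, height=8000):
    """Build the argv (minus the binary) for a headless screenshot.

    Headless --screenshot captures the window, so we pass an
    oversized --window-size as a rough stand-in for a full-page
    capture; DevTools' "Capture full size screenshot" measures the
    page height automatically.
    """
    return [
        "--headless=new",
        "--disable-gpu",
        f"--screenshot={out_path}",
        f"--window-size={width},{height}",
        url,
    ]


def capture(url, out_path):
    """Find a Chrome/Chromium binary and run the capture."""
    binary = next(
        (b for b in ("google-chrome", "chromium", "chromium-browser")
         if shutil.which(b)),
        None,
    )
    if binary is None:
        raise RuntimeError("No Chrome/Chromium binary found on PATH")
    subprocess.run([binary] + build_capture_cmd(url, out_path), check=True)
```

Calling `capture("https://example.com", "page.png")` writes a PNG to the given path; for a true full-page capture, the DevTools method above remains the more reliable route.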

Featured Article : Bots To Bots : Google Offers Protection From AI-Related Lawsuits

Google Cloud has announced in a blog post that if customers are challenged on copyright grounds over their use of its generative AI products (Duet AI), Google will offer limited indemnity and assume responsibility for the potential legal risks involved.

Why? 

With many generative AI services (such as AI chatbots and image generators) powered by back-end neural networks / Large Language Models (LLMs) that have been trained on content from many different sources (without consent or payment), businesses that use their outputs face risks. For example, content creators such as artists and writers may take legal action and seek compensation where LLMs have been trained on their work and, as a result, appear to copy their style in their output, raising potential issues of copyright, lost income, devaluation of their work, and more. Real examples include:

– In January this year, illustrators Sarah Andersen, Kelly McKernan, and Karla Ortiz filing a lawsuit against Midjourney Inc, DeviantArt Inc (DreamUp), and Stability AI, alleging that the text-to-image platforms have used their artworks, without consent or compensation, to train their algorithms.

– In February this year, Getty Images filing a lawsuit against Stability AI, alleging that it had copied 12 million images (without consent or permission) to train its AI model.

– Comedian Sarah Silverman joining lawsuits (in July 2023) accusing OpenAI and Meta of training their algorithms on her writing without permission.

– GitHub facing litigation over accusations that it scraped artists’ work for its AI products.

– Microsoft, Microsoft’s GitHub, and OpenAI facing a lawsuit over alleged code copying by GitHub’s Copilot programming suggestion service.

Although all of these are lawsuits against the AI companies themselves rather than their customers, the AI companies recognise that this is also a risky area for customers because of how their AI models have been trained and where their outputs may be drawn from.

What Are The AI Companies Saying In Their Defence? 

Examples of the kinds of arguments that AI companies being accused of copyright infringement are using in their defence include:

– Some AI companies argue that the data used to train their models falls under the principle of “fair use.” Fair use is a legal doctrine that promotes freedom of expression by allowing the unlicensed use of copyright-protected works in certain circumstances. For example, the argument is that the vast amount of data used to train models like GPT-4 (which powers ChatGPT) is processed in a transformative manner, which AI companies like OpenAI may argue means the output generated is distinct and not a direct reproduction of the original content.

– Another defence revolves around the idea that AI models, especially large ones, aggregate and anonymise data to such an extent that individual sources become indistinguishable in the final model. This could mean that, while a model might be trained on vast amounts of text, it doesn’t technically “remember” or “store” specific books, articles, or other content in a retrievable form.

– Yet another counter-argument from some AI companies is that while an AI tool has the ‘potential’ for misuse, it is up to end-users to use it responsibly and ethically. AI companies can argue that because they provide guidelines and terms of service outlining acceptable uses of their technology, and actively try to discourage or prevent uses that could lead to copyright infringement, they are (ostensibly) encouraging responsible use.

Google’s Generative AI Indemnification 

Like Microsoft’s September announcement that it would defend its paying customers if they faced copyright lawsuits for using Copilot, Google has just announced that it will offer its own AI indemnification protection to its (pay-as-you-go) Google Cloud customers. Google says that since it has embedded the always-on ‘Duet AI’ across its products, it needs to put its customers first and, in the spirit of “shared fate”, it will “assume responsibility for the potential legal risks involved.”

A Two-Pronged Approach 

Google says it will be taking a “two-pronged, industry-first approach” to this indemnification. This means that it will provide indemnity for both the training data used by Google for generative AI models, and for the generated output of its AI models – two layers of protection.

In relation to the training data, which has been a source of many lawsuits for AI companies and could be an area of risk for Google’s customers, Google says its indemnity will cover “any allegations that Google’s use of training data to create any of our generative models utilised by a generative AI service, infringes a third party’s intellectual property right.” For business users of Google Cloud and its Duet AI, this means they’ll be protected against third parties claiming copyright infringement as a result of Google’s use of training data.

In relation to Google’s generated output indemnity, Google says it will apply to Duet AI in Google Workspace and to a range of Google Cloud services which it names as:

– Duet AI in Workspace, including generated text in Google Docs and Gmail and generated images in Google Slides and Google Meet.

– Duet AI in Google Cloud including Duet AI for assisted application development.

– Vertex AI Search.

– Vertex AI Conversation.

– Vertex AI Text Embedding API / Multimodal Embeddings.

– Visual Captioning / Visual Q&A on Vertex AI.

– Codey APIs.

Google says the generated output indemnity means that customers using the above-named products will be covered against third-party IP claims, including copyright.

One Caveat – Responsible Practices 

The one caveat that Google gives is that it won’t be able to cover customers where they have intentionally created or used generated output to infringe the rights of others. In other words, customers can’t expect Google to cover them if they ask Duet AI to deliberately copy another person’s work/content.

The Difference 

Google says the difference between its AI indemnity protection and that offered by others (e.g. Microsoft), is essentially that it covers the training data aspect and not just the output of its generative AI tools.

Bots Talking To Each Other?

Interestingly, another twist in the complex and emerging world of generative AI last week came in reports that companies are using “synthetic humans” (i.e. bots), each with characteristics drawn from ethnographic research on real people, and having them take part in conversations with other bots and real people to help generate new product and marketing ideas.

For example, Fantasy, a company that creates ‘synthetic humans’ for such conversations, has reported that the benefits of using them include both the creation of novel ideas for clients and prompting the real humans included in their conversations to be more creative, i.e. stimulating more creative brainstorming. However, although it sounds useful, one aspect to consider is where the bots may get their ‘ideas’ from, since they’re not able to actually think. Could they potentially use another company’s ideas?

What Does This Mean For Your Business? 

Since the big AI investors like Google and Microsoft have committed so fully to AI and introduced ‘always-on’ AI assistants to services for their paying business customers (thereby encouraging them to use the AI without being able to restrict all the ways it’s used), it seems right that they’d need to offer some kind of cover, e.g. for any inadvertent copyright issues.

This is also a way for Google and Microsoft to reduce the risks and worries of their business customers (customer retention). Google, Microsoft, and other AI companies have also realised that they can feel relatively safe in offering indemnity at the moment as they know that many of the legal aspects of generative AI’s outputs and the training of its models are very complex areas that are still developing.

They may also feel that taking responsibility in this way at least gives them a chance to get involved in the cases and, particularly with their financial and legal might, have a say in the precedents that will guide the use of generative AI going forward. It’s also possible that many cases could take some time to resolve due to the complexities of this new, developing, and often difficult frontier of the digital world.

Some may also say that many of the services Google is offering indemnity for could mostly be classed as internal-use services, whilst others may say that the company could be opening itself up to a potential tsunami of legal cases, given the list of services covered and the fact that not all business users will be versed in the nuances of responsible use in what is a developing area. Google and Microsoft may ultimately need to build legal protection, and guidance on what can be used, into the output of their generative AI.

As a footnote, it will be interesting to see whether ‘synthetic human’ bots could be used to discuss and sort out many of the complex legal areas around AI use (AI discussing the legal aspects of itself with people, perhaps with lawyers), and whether AI will be used in research for any legal cases over copyright.

Generative AI is clearly a fast developing and fascinating area with both benefits and challenges.

Tech Insight : How A Norwegian Company Is Tackling ‘AI Hallucinations’

Oslo-based startup Iris.ai has developed an AI Chat feature for its Researcher Workspace platform which it says can reduce ‘AI hallucinations’ to single-figure percentages.

What Are AI Hallucinations? 

AI hallucinations, sometimes called ‘confabulations,’ are where AI systems generate or disseminate information that is inaccurate, misleading, or simply false. The fact that the information appears convincing and authoritative despite lacking any factual basis means that it can create problems for companies that use the information without verifying it.

Examples 

A couple of high-profile examples of when AI hallucinations have occurred are:

– When Facebook / Meta demonstrated its Galactica LLM (designed for science researchers and students) and, when asked to draft a paper about creating avatars, the model cited a fake paper from a genuine author working on that subject.

– Back in February, when Google demonstrated its Bard chatbot in a promotional video, Bard gave incorrect information about which telescope first took pictures of a planet outside the Earth’s solar system. Although the error appeared ahead of a Google presentation, it was widely reported, resulting in Alphabet Inc losing $100 billion in market value on its shares.

Why Do AI Hallucinations Occur? 

There are a number of reasons why chatbots (e.g. ChatGPT) generate AI hallucinations, including:

– Generalisation issues. AI models generalise from their training data, and this can sometimes result in inaccuracies, such as predicting incorrect years due to over-generalisation.

– No ground truth. LLMs don’t have a set “correct” output during training, differing from supervised learning. As a result, they might produce answers that seem right but aren’t.

– Model limitations and optimisation targets. Despite advances, no model is perfect. They’re trained to predict likely next words based on statistics, not always ensuring factual accuracy. Also, there has to be a trade-off between a model’s size, the amount of data it’s been trained on, its speed, and its accuracy.
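The last point above, prediction of likely next words, can be made concrete with a deliberately tiny sketch (illustrative only; real LLMs are neural networks, not bigram counters, and the training sentences below are invented). The model completes a phrase with the statistically most frequent continuation seen in training: it has a notion of frequency, but none of truth.

```python
# Toy bigram model: predicts the most frequent next word seen in
# training, with no representation of truth -- only of frequency.
from collections import Counter, defaultdict


def train_bigrams(corpus):
    """Count which word follows which across all training sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows


def complete(follows, start, length=4):
    """Greedily append the most probable (not most true) next word."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)


corpus = [
    "the satellite took a photo",
    "the satellite took a photo",
    "the telescope took a photo",
]
model = train_bigrams(corpus)

# "satellite" outnumbers "telescope" after "the", so the model
# confidently produces the more frequent continuation, regardless
# of whether (in some imagined fact pattern) the telescope was
# actually the correct answer.
print(complete(model, "the"))  # → the satellite took a photo
```

Scaled up by many orders of magnitude and with far richer statistics, the same dynamic is what makes a fluent, confident, but unverified answer possible.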

What Problems Can AI Hallucinations Cause? 

Using the information from AI hallucinations can have many negative consequences for individuals and businesses. For example:

– Reputational damage and financial consequences (as in the case of Google and Bard’s mistake in the video).

– Potential harm to individuals or businesses, e.g. through taking and using incorrect medical, business, or legal advice (although ChatGPT passed the Bar Examination and business school exams early this year).

– Legal consequences, e.g. through publishing incorrect information obtained from an AI chatbot.

– Adding to time and workloads in research, i.e. through trying to verify information.

– Hampering trust in AI and AI’s value in research. For example, an Iris.ai survey of 500 corporate R&D workers showed that although 84 per cent of workers use ChatGPT as their primary AI research support tool, only 22 per cent of them said they trust it and systems like it.

Iris.ai’s Answer 

Iris.ai has therefore attempted to address these factuality concerns by creating a new system built around an AI engine for understanding scientific text. The company developed it primarily for its Researcher Workspace platform (to which it’s been added as a chat feature) so that its (mainly large) clients, such as the Finnish Food Authority, can use it confidently in research.

Iris.ai has reported that the system accelerated research on a potential avian flu crisis, and that it can essentially save 75 per cent of a researcher’s time (by removing the need to verify whether information is correct or made up).

How Does The Iris.ai System Reduce AI Hallucinations? 

Iris.ai says its system is able to address the factuality concerns of AI using a “multi-pronged approach that intertwines technological innovation, ethical considerations, and ongoing learning.” This means using:

– Robust training data. Iris.ai says that it has meticulously curated training data from diverse, reputable sources to ensure accuracy and reduce the risk of spreading misinformation.

– Transparency and explainability. Iris.ai says using advanced NLP techniques, it can provide explainability for model outputs. Tools like the ‘Extract’ feature, for example, show confidence scores, allowing researchers to cross-check uncertain data points.

– The use of knowledge graphs. Iris.ai says it incorporates knowledge graphs from scientific texts, directing language models towards factual information and reducing the chance of hallucinations. The company says this is because this kind of guidance is more precise than merely predicting the next word based on probabilities.
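Iris.ai’s actual knowledge graphs and matching methods are proprietary, but the general idea of checking generated statements against stored facts can be sketched as follows (the facts and the claim format here are invented purely for illustration):

```python
# Toy knowledge-graph check: a generated claim is kept only if its
# (subject, relation, object) triple exists in the graph. Real
# systems use far richer graphs and fuzzy matching; this shows the
# idea of grounding output in stored facts rather than word
# probabilities alone.
facts = {
    ("h5n1", "infects", "birds"),
    ("h5n1", "is_a", "influenza virus"),
    ("oseltamivir", "treats", "influenza"),
}


def supported(claim):
    """Return True if the claim's triple is present in the graph."""
    return tuple(claim) in facts


claims = [
    ("h5n1", "infects", "birds"),      # supported by the graph
    ("h5n1", "treats", "influenza"),   # hallucinated relation
]
for c in claims:
    print(c, "->", "supported" if supported(c) else "unsupported")
```

The second claim is fluent and plausible-sounding, which is exactly why a frequency-driven model might emit it; the graph lookup is what catches it.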

Improving Factual Accuracy 

Iris.ai’s techniques for improving factual accuracy in AI outputs, therefore, hinge upon using:

– Knowledge mapping, i.e. Iris.ai maps key knowledge concepts expected in a correct answer, ensuring the AI’s response contains those facts from trustworthy sources.

– Comparison to ground truth. The AI outputs are compared to a verified “ground truth.” Using the WISDM metric, semantic similarity is assessed, including checks on topics, structure, and vital information.

– Coherence examination. Iris.ai’s new system reviews the output’s coherence, ensuring it includes relevant subjects, data, and sources pertinent to the question.
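The WISDM metric itself is Iris.ai’s own, but the shape of the “comparison to ground truth” step above can be illustrated with a much simpler stand-in: cosine similarity over bag-of-words vectors (all example texts below are invented):

```python
# Simple stand-in for a ground-truth comparison: cosine similarity
# over bag-of-words vectors. Iris.ai's WISDM metric is far more
# sophisticated (topics, structure, key information); this only
# shows the shape of "score the output against a verified answer".
import math
from collections import Counter


def cosine(a, b):
    """Cosine similarity between two texts' word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


ground_truth = "h5n1 primarily infects wild birds and poultry"
answer = "h5n1 primarily infects wild birds and poultry"
off_topic = "the stock market rose sharply on friday"

assert cosine(ground_truth, answer) > 0.9     # close to the verified answer
assert cosine(ground_truth, off_topic) < 0.2  # would be flagged for review
```

An output scoring below some threshold against the verified answer would be rejected or flagged for a human check, which is the mechanism that keeps hallucinated answers out of the final response.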

These combined techniques set a standard for factual accuracy and the company says its aim has been to create a system that generates responses that align closely with what a human expert would provide.

What Does This Mean For Your Business? 

It’s widely accepted (and publicly admitted by AI companies themselves) that AI hallucinations are an issue that can be a threat for companies (and individuals) who use the output of generative AI chatbots without verification. Giving false but convincing information highlights both one of the strengths of AI chatbots, i.e. how well they present information, and one of their key weaknesses.

As Iris.ai’s own research shows, although most companies are now likely to be using AI chatbots in their R&D, they are aware that they may not be able to fully trust all outputs, losing some of the potential time savings to verification and facing potentially costly risks. Although Iris.ai’s new system was developed specifically for understanding scientific text, with a view to offering it as a useful tool for researchers who use its own platform, the fact that it can reduce AI hallucinations to single-figure percentages is impressive. Its methodology may, therefore, have gone a long way toward solving one of the big drawbacks of generative AI chatbots and, were it not so difficult to scale up for popular LLMs, it may already have been more widely adopted.

As good as it appears to be, Iris.ai’s new system still cannot solve the issue of people simply misinterpreting the results they receive.

Looking ahead, some tech commentators have suggested that methods such as training on code rather than a diverse range of natural-language data sources, and collaborations with LLM-makers to build larger datasets, may bring further reductions in AI hallucinations. For most businesses now, it’s a case of finding the balance of using generative AI outputs to save time and increase productivity while being aware that those results can’t always be fully trusted, and conducting verification checks where appropriate and possible.

Tech News : What’s In The New iOS 17 iPhone Update?

With the recent release of the latest update of Apple’s iPhone software, iOS 17, we look at many of the useful new features and their benefits.

What’s Happened? 

Apple recently released a new updated version of its iOS (iOS 17), which contains many new features, security updates and fixes for Apple’s iPhone, iPad, and smartwatch. It has also subsequently released the iOS 17.0.2 update to fix an issue that prevented transferring data directly from another iPhone during setup, and the iOS 17.0.3 update to fix a recently reported overheating issue in Apple’s newly launched iPhone 15. The recently launched iPhone 15 range includes the iPhone 15 Pro with a titanium finish.

The iPhone X, iPhone 8, and iPhone 8 Plus, however, have not been included in the update (although the iPhone 8 and 8 Plus will still receive security updates).

New Features of iOS 17

Some of the key features of note of iOS 17 include:

Upgraded Autocorrect 

An upgraded autocorrect that learns the user’s normal language, allows swearing (rather than substituting the word “ducking”), reverts corrected text when the underlined word is tapped, and can predict full sentences during typing.

Standby Mode – More Like A ‘Hub’ 

The new Standby mode (which can be particularly useful when leaving the phone by the bed) gives a full landscape, hub style display while charging which includes the clock, calendar, weather, photos, chosen widgets, Siri interaction, and more.

Greater Personalisation Through Contact Cards 

Changes to calls and messaging see Apple introduce greater personalisation through customisable contact cards, whereby users can create their own personalised visual cards (including a photo, text, and customisable colours) that display on the recipient’s phone and in their contacts app when calls are made. This could, of course, be very useful in business interactions, e.g. including company information / branding elements in the card, displaying creativity, and creating a memorable identity that stands out.

AirDrop – Seamless Sharing of Cards 

Also, the contact cards aren’t just for viewing. The (seamless) AirDrop function allows users to share cards (like swapping digital business cards) by bringing two devices close together.

Voicemails Get Live Transcriptions 

Voicemails have also been updated with live transcriptions that let users quickly grasp the essence of a message, e.g. when there’s background noise or when they’re multi-tasking, and FaceTime now allows video voicemails.

Voice Cloning! 

A voice cloning feature allows users to create an audible version of any typed phrase, thereby helping with accessibility and adding a new AI dimension to communications.

Siri Refined 

Siri, Apple’s voice assistant, has also been refined to enable users to adjust Siri’s speaking pace thereby catering to diverse listening preferences, and activation can now happen simply by saying “Siri” rather than “hey Siri.”

Privacy And Security Updated 

Privacy and security, two elements that are particularly important to businesses, have been updated with iOS 17, as users can securely share passwords stored in their iCloud Keychain with trusted individuals. Also, Apple’s Safari browser has fortified its privacy stance by introducing facial recognition for private sessions. Users also get a useful heads-up in the form of alerts before accessing potentially sensitive content.

Photo Recognition

iOS 17’s people album photo recognition also promises to be a helpful feature, e.g. for identifying people in business event photos, favourite people, and, according to Apple, even family pets.

Food Images – Suggestions 

For those in the food business or needing to find content about food (or simply food and cookery enthusiasts), tapping on a shared food image adds a culinary twist by offering recipe suggestions.

Paying Attention To Mental Wellbeing – Health App Updated 

Particularly since the pandemic, our mental wellbeing has been more in focus and Apple’s health app, traditionally associated with tracking physical activities, now ventures into the realm of mental health. Users can monitor their moods, thereby providing insights into patterns that might indicate anxiety or depression.

iPadOS 17 

With the update, Apple’s iPadOS 17 now has a suite of features tailored for the larger screen. The lock screen has received a lively makeover, allowing users to infuse it with widgets and animated wallpapers, thereby allowing more personalisation and convenience that could help with time-saving and productivity.

Also, the Health app (as highlighted above, with its new mental health focus) debuts on the iPad, sporting a refreshed interface and extensive health data insights.

Multitasking has also received a boost, with users now having the freedom to resize and position apps on the screen, closely mirroring a desktop experience.

WatchOS 10 Too 

With smart wearables now very popular, Apple’s watchOS 10 has also received updates including the integrated apps getting vibrant redesigns that focus on user-friendliness and quick access. For example, directly from the watch face, widgets dynamically update based on several user-specific parameters, ensuring relevant information is just a swipe away.

Choice, in terms of the range of watch faces available, has been expanded, e.g. with animated faces like Snoopy and Woodstock, and there’s even a cycling feature that transforms the iPhone into a surrogate bike computer when paired with the watch.

What Does This Mean For Your Business? 

Apple’s iOS 17’s new features, and its new iPhone 15 launch, although marred slightly by the new phone having an overheating problem (plus a radiation-fear-fuelled banning of iPhone 12 sales in France), have given Apple something positive to shout about (and bury any less welcome news).

In the dynamic landscape of UK businesses, where agility and efficiency are paramount, Apple’s iOS 17 looks like offering enhanced productivity and a more sophisticated user experience. The introduction of customisable contact cards in the phone app, for example, offers businesses a modernised touchpoint, facilitating more personalised and streamlined digital communications with clients and partners. The innovative live transcription of voicemails allows businesses to rapidly digest essential information, potentially optimising response times and decision-making processes.

Also, the significant (and always welcome) advancements in privacy and security, including the ability to securely share iCloud Keychain passwords and safeguard private browsing sessions with facial recognition, promise to embolden businesses with heightened digital safety, hopefully helping to ensure that confidential business data remains uncompromised.

AI is making inroads everywhere it seems, and the photo recognition’s intuitive capabilities may be particularly useful in sectors like marketing and retail, enabling businesses to better categorise visuals and tailor marketing strategies.

The mental health tracking in the health app underscores a broader shift towards corporate well-being, allowing businesses to foster a more supportive and aware work environment.

Meanwhile, iPadOS 17’s multitasking enhancements echo the needs of dynamic enterprises, making workflow management and multitasking more like a desktop experience, thereby potentially aiding operational efficiency.

Ultimately, the suite of features presented in iOS 17, and its counterparts for iPad and Watch, could enhance the operational, communicative, and strategic dimensions of UK businesses. Apple is keen to show its commitment to the UK (e.g. with its 500,000 sq ft, six-storey space inside Apple Battersea Power Station) and its contribution to the economy (claiming it supports more than 550,000 jobs across the country), and although most businesses use Microsoft rather than Apple products, Apple’s reputation for usability, security, and quality in the UK is likely to be enhanced by the iOS 17 update’s new features.

Tech News : TikTok Trend : AI-Enhanced Profile Photos For LinkedIn Job Seekers

It’s been reported that a TikTok video has started a trend of people using AI to enhance their appearance in their LinkedIn profile photos with a view to improving their chance of getting a job via the platform.

The TikTok Video 

The short TikTok video that’s been credited with inspiring the trend was posted during the summer and has since been watched more than 50 million times. The video shows the face of a young woman being enhanced by AI and references the Remini AI photo and video enhancer app.

Remini 

The Remini app, which claims to have 40 million monthly active users, says that it uses “innovative, state-of-the-art AI technology to transform your old photos into HD masterpieces” and that using its app you can “Turn your social media content into professional-grade images that engage your audience”.

By uploading 8 to 10 selfies (from different angles), the app offers generative AI so users can create hyper-realistic photos or alter ego versions of themselves or can enhance “ordinary” photos of themselves. The app lets users enhance the detail, and adjust the colour, face glow, background, and other details to create a more flawless look and improve photos, e.g. for use on social media profiles.

Why? 

With so much competition in the job market for young adults (among whom the AI photo trend is most popular), and with others having access to the same technology, enhancing a photo (within reason) to gain a competitive edge may seem fair to many, particularly if it’s easy and cheap to do (as it can be with AI tools).

Also, research has shown that better profile photos can yield positive results in the labour market. For example, the results of a 2016 research study by Ghent University (Belgium) found that employment candidates with the most favourable Facebook profile picture received around 21 per cent more positive responses to their application than those with the least favourable profile picture, and that their chances of getting an immediate interview invitation differed by almost 40 per cent.

Psychology

In terms of human psychology, it’s known that people tend to form more favourable judgments of individuals who appear more attractive or have a better photographic representation of themselves due to a combination of psychological factors. These include:

– The psychology of first impressions. Grounded in our instinctual ability to quickly gauge and categorise new information, this is a trait that was historically essential for survival. Seeing an enhanced photo could, within seconds, appeal to this trait and lead an employer to make a more positive judgement about trustworthiness, competence, and likability.

– The ‘Halo Effect,’ which is a cognitive bias that leads us to assume that individuals possessing one positive trait (e.g., physical attractiveness in a photo) must also possess other desirable qualities, even when no evidence supports these assumptions.

– Social Comparison Theory, which suggests that people tend to evaluate themselves by comparing themselves to others. This could mean that when a person’s photo exudes attractiveness, viewers may subconsciously compare themselves and feel admiration or envy, thereby influencing their judgments.

– Our human tendency of ‘confirmation bias’ means that we seek out and interpret information that aligns with our existing beliefs or stereotypes. In other words, if we believe that attractive people are more successful or competent, we may selectively notice and emphasise information in the photo that confirms this belief.

– Theories of ‘Psychological Attraction’ could also mean that a positive and happy looking profile photo could lead to an employer making a more favourable evaluation by associating the positive feelings with the person’s image.

– Other possible psychological influences could include evolutionary psychology: we may subconsciously favour those who appear more attractive as potential mates or allies. Cultural and social influences also play a role: cultural and societal norms shape our perception of beauty, and a profile photo that displays popular beauty ideals could play to the biases of a potential employer viewing it.

Why Use Apps Like Remini? 

Apps such as Remini offer many benefits for young adults (or anyone) looking to get a high-quality, enhanced photo for a LinkedIn profile. For example:

– They’re cheap. Using an AI app (perhaps on a free trial basis) is less expensive than using professional photographic services, plus they don’t require any of the expensive equipment such as lighting, studio hire, etc.

– They’re fast, require minimal effort, and offer a better chance of satisfaction for the user. From just a few selfie uploads, with no need for any photographic knowledge or professional input or equipment, users can get great results in minutes with minimal difficulty.

– They produce high quality, professional looking results.

– They can be used on-demand and offer flexibility. For example, users can virtually try out different styles and looks that could even influence their own real look or could be used as a kind of split testing of response to their profile.

Other Apps Also Available 

It’s worth pointing out that Remini is not the only such AI photo/video enhancing app available. For example, others include Snapseed, iMyFone UltraRepair, VSCO, Pho.To, PicsArt, Photo Wonder, Pixlr, and many more.

Challenges

Obviously, choosing to present a photo that is not a true representation of yourself with the intention of using it to get a job could have its challenges. For example:

– LinkedIn and similar platforms are professional networks where credibility is essential. If you meet someone in person or on a video call and they realise you don’t look like your profile photo, it can set a negative first impression. They might question your authenticity in other areas if you’re willing to misrepresent your appearance.

– Integrity is paramount in professional settings, and presenting a picture that doesn’t genuinely represent you might be seen as a breach of trust or even deceptive. This perception could, of course, impact your relationships with potential employers, colleagues, or clients.

– Relying on an AI-enhanced image can also have psychological implications. It may suggest that you’re not confident in presenting your true self, which could translate to lower self-esteem or self-worth over time.

– Employers and employment agencies are likely to be more interested in experience and qualifications than in appearance, and may also be wise to the fact that candidates may be using AI-enhanced photos.

– AI-enhanced images, especially those that are overly refined, can sometimes be clearly identified as modified, which could lead people to think you’re hiding something or are overly focused on superficial aspects.

– There could be cultural and ethical implications. For example, in some cultures or industries, authenticity and honesty are valued above all else. Misrepresenting yourself, even in something as seemingly trivial as a profile photo, could be deemed as unethical or unprofessional.

– While the intention behind using an enhanced photo might be to increase job opportunities, it might actually have the opposite effect. If employers or recruiters sense any deceit, they might choose not to engage with you.

– Using AI-enhancement tools, especially those online, could pose a risk to your privacy. There’s always a chance your photos might be used without your consent or knowledge.

What Does This Mean For Your Business?

Appearances are, of course, important in first impressions, in professional environments, and where there are certain expected or required appearance and dress codes to adhere to. Also, wanting a professional-looking photo that you can be happy with, that you think shows the best aspects of yourself as a candidate is understandable, as is thinking that it may help you overcome some known biases.

Having a low price/free way to obtain professional photos quickly is also an attractive aspect of these kinds of AI apps. However, a balance is needed to ensure that the photo is not too enhanced or too unlike what a potential employer may reasonably expect to see in front of them should they choose to invite you to interview. An overly enhanced photo could, therefore, prove to be counterproductive.

It should be understood, however, that for most employers and agencies, experience, qualifications, and suitability for the role are far more important than a photo in making fair and objective recruitment decisions. It’s also worth noting that even if a photo did contribute to getting an interview, the face-to-face, in-person interview is a challenge that AI can’t (yet) help with. That said, many corporate employers are turning to AI to filter job applications, and young people may feel that, with this and with other competing applicants potentially using AI to get an edge, why shouldn’t they do the same?

This story also highlights the challenges that businesses now face from generative AI being widely available, e.g. its use in writing applications, emails, and more, as well as security risks from deepfake-based scams. Just as generative AI has helped businesses with productivity, it also presents them with a new set of threats and challenges. It may require them to use AI image-spotting tools as a means of filtering and protection in many aspects of the business, including recruiting, and it highlights why, even in a digital world, face-to-face meetings continue to be important in certain situations.

Sustainability-in-Tech : AI Energy Usage As Much As The Netherlands

A study by a PhD candidate at the VU Amsterdam School of Business and Economics, Alex De Vries, warns that the AI industry could be consuming as much energy as a country the size of the Netherlands by 2027.

The Impact Of AI 

De Vries, the founder of Digiconomist, a research company that focuses on unintended consequences of digital trends, and whose previous research has focused on the environmental impact of emerging technologies (e.g., blockchain), based the warning on the assumption that certain parameters remain unchanged.

For example, assuming that the current rate of growth of AI and the availability of AI chips continue, that servers operate at maximum output continuously, and that chip designer Nvidia continues to supply 95 per cent of the AI sector’s processors, Mr De Vries has calculated that by 2027 the expected range for the energy consumption of AI computers will be 85-134 terawatt-hours (TWh) of electricity each year.
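As a back-of-envelope sanity check, the lower end of that range can be reproduced by multiplying a server count by a per-server power draw and the hours in a year. The server count and per-server figure below are illustrative assumptions for the sketch, not numbers confirmed in this article:

```python
# Rough reproduction of the lower bound of the 85-134 TWh estimate.
# Both input figures are illustrative assumptions, not confirmed values.
servers = 1_500_000        # assumed AI servers in use by 2027
power_kw = 6.5             # assumed draw per AI server (kW)
hours_per_year = 24 * 365  # continuous maximum output, as the study assumes

twh = servers * power_kw * hours_per_year / 1e9  # kWh -> TWh
print(f"{twh:.1f} TWh per year")  # ≈ 85 TWh, the lower end of the range
```

Changing either assumption (more servers, or higher-powered ones) pushes the result towards the 134 TWh upper bound.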

The Same Amount Of Energy Used By A Small Country 

This figure approximately equates to the amount of power used annually by a small country, such as the Netherlands, and to half a per cent of total global electricity consumption. The research didn’t include the energy required for cooling (e.g. using water).

Why? 

The large language models (LLMs) that power popular AI chatbots like ChatGPT and Google Bard require huge data centres of specialist computers with high energy and cooling requirements. For example, whereas a standard data centre computer rack requires 4 kilowatts (kW) of power (the same as a family house), an AI rack requires 20 times that (80kW), and a single data centre may contain thousands of AI racks.

Other reasons why large AI systems require so much energy also include:

– The scale of the models. For example, larger models with billions of parameters require more computations.

– The vast amounts of training data processed increase energy usage.

– The hardware (powerful GPU or TPU clusters) is energy intensive.

– The multiple iterations of training and tuning use more energy, as does fine-tuning, i.e. the additional training on specific tasks or datasets.

– Popular services hosting multiple instances of the model in various geographical locations (model redundancy) increases energy consumption.

– Server overhead (infrastructure support), like cooling and networking, uses energy.

– Millions of user interactions accumulate energy costs, even if individual costs are low (the inference volume).

– Despite optimisation techniques, initial training and model size are energy-intensive, as are the frequent updates, i.e. the regular training of new models to stay state-of-the-art.

Huge Water Requirements Too – Which Also Requires Energy

Data centres typically require vast quantities of water for cooling, a situation that’s being exacerbated by the growth of AI. To give an idea of how much water, back in 2019, before the widescale availability of generative AI, it was reported (via public records and online legal filings) that Google requested (and was granted) more than 2.3 billion gallons of water for data centres in three different US states. Also, a legal filing showed that in Red Oak, just south of Dallas, Google may have needed as much as 1.46 billion gallons of water a year for its data centre by 2021. This led to Google, Microsoft, and Facebook pledging ‘water stewardship’ targets to replenish more water than they consume.

Microsoft, which is investing heavily in AI development, revealed that its water consumption had jumped by 34 per cent between 2021 and 2022, to 6.4 million cubic metres, around the volume of 2,500 Olympic swimming pools.
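The swimming-pool comparison checks out with a quick calculation, taking the commonly used figure of roughly 2,500 cubic metres for an Olympic pool (50m x 25m x 2m):

```python
# Sanity check of the Olympic-pool comparison above.
water_m3 = 6_400_000   # Microsoft's reported 2022 water consumption
pool_m3 = 50 * 25 * 2  # ≈ 2,500 m3 per Olympic pool

print(water_m3 / pool_m3)  # 2560.0, i.e. "around 2,500 pools"
```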

Energy is required to operate such vast water-cooling systems, and recent ideas for supplying adequate power to data centre racks and cooling systems have even included directly connecting a data centre to its own 2.5-gigawatt nuclear power station (Cumulus Data, a subsidiary of Talen Energy).

Google In The Spotlight

The recent research by Alex De Vries also highlighted how much energy a company like Google (which already has the Bard chatbot and Duet AI, its answer to Copilot) would need if it alone switched its whole search business to AI. The research concluded that in this situation Google, already a huge data centre operator, would need 29.3 terawatt-hours per year, which is equivalent to the electricity consumption of Ireland!
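To put the 29.3 TWh figure in per-search terms, it can be divided across an assumed daily search volume. The ~9 billion searches per day figure is a commonly cited approximation, assumed here purely for illustration:

```python
# What 29.3 TWh/year implies per search, given an assumed search volume.
annual_twh = 29.3
searches_per_day = 9e9  # assumption: ~9 billion Google searches daily

wh_per_search = annual_twh * 1e12 / (searches_per_day * 365)
print(f"{wh_per_search:.1f} Wh per AI-assisted search")  # ≈ 8.9 Wh
```

For comparison, that is many times the energy of a conventional keyword search, which is the crux of the concern about AI-powered search.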

What Does This Mean For Your Organisation? 

Data centres are not just a significant source of greenhouse gas emissions, but typically require large amounts of energy for cooling, power, and network operations. With the increasing use of AI, this energy requirement has also been increasing dramatically and only looks set to rise.

AI, therefore, stands out as both an incredible opportunity and a significant challenge. Although businesses are only just getting to grips with the many benefits that the relatively new tool of generative AI has given them, the environmental impact of AI is also becoming increasingly evident. Major players like Google and Microsoft are already feeling the pressure, leading them to adopt eco-friendly initiatives. For organisations planning to further integrate AI, it may be crucial to consider its environmental implications and move towards sustainable practices.

It’s not all doom and gloom though, because while the energy demands of AI are high, there are emerging solutions that may offer hope. Investments in alternative energy sources such as nuclear fusion, although it’s still in very early development (it’s only just able to generate slightly more power than it uses), could redefine how we power our tech in the future. Additionally, the idea of nuclear-powered data centres, like those proposed by Cumulus Data, suggests a future where technology can be both powerful and environmentally friendly.

Efficiency is also a key issue to be considered. As we continue to develop and deploy AI, there’s a growing emphasis on optimising energy use. Innovations in cooling technology, server virtualisation, and dynamic power management are making strides in ensuring that AI operations are as green as they can be, although they still aren’t tackling the massive energy requirement challenge.

Despite the challenges, however, there are significant opportunities too. The energy needs of AI have opened the door for economic growth and companies that can offer reliable, low-carbon energy solutions stand to benefit, potentially unlocking significant cost savings.

Interestingly, AI itself might be part of the solution. Its potential to speed up research or optimise energy use positions AI as a tool that can help, rather than hinder, the journey towards a more sustainable future.

It’s clear, therefore, that as we lean more into an AI-driven world, it’s crucial for organisations to strike a balance. Embracing the benefits of AI, while being mindful of its impact, will be essential. Adopting proactive strategies, investing in green technologies, and leveraging AI’s problem-solving capabilities will be key for businesses moving forward.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a jargon-free style.