Video Update : How To Recover Your LinkedIn Account
If you ever find yourself locked out of LinkedIn, this video is for you …
Tech Tip – Customise Action Centre Quick Actions
The Action Centre in Windows 10 provides quick access to common settings and notifications (in Windows 11, the same controls live in the Quick Settings panel). You can customise the quick actions to include the settings you use most frequently. Here’s how:
To open Action Centre:
– Click on the Action Centre icon in the taskbar (or press Win + A).
To customise Quick Actions:
– Click on Expand to see all quick actions.
– Right-click on any quick action and select Edit quick actions (in Windows 11, click the pencil icon in Quick Settings instead).
– Drag and drop icons to rearrange them or click on Add to include new actions.
To save changes:
– Click ‘Done’ to save your customised quick actions.
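For the more technically minded, the panel can also be opened from a script. Here’s a minimal, illustrative sketch (our own illustration, not an official Microsoft method) that uses Python’s ctypes module to simulate the Win + A shortcut via the documented Windows keybd_event API:

# Illustrative sketch: open Action Centre (Windows 10) / Quick Settings
# (Windows 11) from a script by simulating the Win + A shortcut.
# Windows-only; uses the documented user32 keybd_event call.
import ctypes

VK_LWIN = 0x5B            # left Windows key
VK_A = 0x41               # 'A' key
KEYEVENTF_KEYUP = 0x0002  # key-release flag

user32 = ctypes.windll.user32

def open_action_centre():
    user32.keybd_event(VK_LWIN, 0, 0, 0)                 # press Win
    user32.keybd_event(VK_A, 0, 0, 0)                    # press A
    user32.keybd_event(VK_A, 0, KEYEVENTF_KEYUP, 0)      # release A
    user32.keybd_event(VK_LWIN, 0, KEYEVENTF_KEYUP, 0)   # release Win

if __name__ == "__main__":
    open_action_centre()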
Featured Article : Gemini … Overblown Hype?
Two new studies suggest that Google’s Gemini AI models may not live up to the hype when it comes to correctly answering questions about large datasets.
Google Gemini
Google Gemini is an advanced AI language model developed by Google to enhance various applications with sophisticated natural language understanding and generation capabilities. It features multimodal capabilities, enabling it to process and integrate information from text, images, audio, and video for more comprehensive and context-aware responses. The model also boasts a deep contextual understanding, allowing it to generate relevant and accurate answers in complex conversations or tasks.
Google has highlighted scalability and adaptability as Gemini’s strong points, noting that its architecture can handle large-scale data efficiently and be fine-tuned for specific tasks or industries.
Also, Gemini is thought to deliver superior performance in speed and accuracy due to advancements in machine learning techniques and infrastructure.
Studies
However, the results of two studies appear to go against Google’s narrative that Gemini is particularly good at analysing large amounts of data.
For example, the “One Thousand and One Pairs: A ‘novel’ challenge for long-context language models” study (published via Cornell’s arXiv), co-authored by Marzena Karpinska, a postdoc at UMass Amherst, tested how well long-context Large Language Models (LLMs) can retrieve, synthesise, and reason over information across book-length inputs.
The study involved using a dataset called ‘NoCha’, which consisted of 1,001 pairs of true and false claims about 67 recently published English fiction books. The claims required global reasoning over the entire book to verify, posing a significant challenge for the models.
Unfortunately, the research revealed that no open-weight model performed above random chance, and even the best-performing model, GPT-4o, achieved only 55.8 per cent accuracy. Also, the study found that the models struggled with global reasoning tasks, particularly with speculative fiction that involves extensive world-building.
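To make the study’s ‘pairs’ design concrete, here is a minimal, illustrative sketch (ours, not the researchers’ actual code) of pair-level scoring: a pair only counts as correct if the model labels both the true and the false claim correctly, which is why random guessing lands around 25 per cent rather than 50 per cent:

# Illustrative sketch of pair-level scoring for claim-pair benchmarks
# like NoCha. Hypothetical data; not the study's code.
import random

def judge_claim(claim: str) -> bool:
    # Stand-in for an LLM judging a claim about a book as True/False.
    # Here we guess randomly to demonstrate the chance baseline.
    return random.random() < 0.5

def pair_accuracy(pairs):
    # A pair is only correct if the true claim is labelled True
    # AND the false claim is labelled False.
    correct = sum(1 for true_claim, false_claim in pairs
                  if judge_claim(true_claim) and not judge_claim(false_claim))
    return correct / len(pairs)

# 1,001 hypothetical claim pairs, mirroring the NoCha dataset's size
pairs = [(f"true claim {i}", f"false claim {i}") for i in range(1001)]
print(f"Random-guess pair accuracy: {pair_accuracy(pairs):.1%}")  # ~25%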
The models were frequently found to fail to answer questions about large datasets correctly, with accuracy rates of between 40 and 50 per cent in document-based tests.
The research results suggest that while models can technically process long contexts, they often fail to truly understand the content. Also, the results may highlight the limitations of current long-context language models such as Google Gemini (Gemini 1.5 Pro and 1.5 Flash).
The Second Study
The second study, co-authored by researchers at UC Santa Barbara, focused on the Gemini models’ performance in video analysis and their ability to ‘reason’ over videos when asked questions about them. Here too the results proved poor, with the models struggling to transcribe speech and to recognise objects in images, perhaps indicating significant limitations in their data-analysis capabilities.
Discrepancies Between Claims And Performance?
Both studies appear to highlight possible discrepancies between Google’s claims and the actual performance of the Gemini models, thereby raising questions about their efficacy and shedding light on the broader challenges faced by generative AI technology.
Posted On X
Marzena Karpinska also noted (on X/Twitter) other interesting points about LLMs from the research, including:
– Even when models output correct labels, their explanations are often inaccurate.
– On average, all LLMs perform much better on pairs requiring sentence-level retrieval than on those requiring global reasoning (59.8 per cent vs 41.6 per cent), but even then their accuracy is much lower than on the “needle-in-a-haystack” task (a simple retrieval test; see the sketch after this list).
– Models perform substantially worse on books with extensive world-building (fantasy and sci-fi) than contemporary and historical novels (romance or mystery).
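For comparison, the ‘needle-in-a-haystack’ task mentioned above only asks a model to retrieve a single planted fact from a long context, which is why scores on it tend to be much higher. A minimal sketch of how such a test is typically constructed (the filler text, needle, and call_llm placeholder are all made up for illustration):

# Minimal "needle-in-a-haystack" sketch: bury one relevant sentence in
# long filler text and check whether a model can retrieve it.
FILLER = "The sky was grey and the meeting overran again. " * 2000
NEEDLE = "The secret launch code is 7421."
QUESTION = "What is the secret launch code?"
EXPECTED = "7421"

def build_prompt(depth: float) -> str:
    # Insert the needle at a relative depth (0.0 = start, 1.0 = end).
    cut = int(len(FILLER) * depth)
    context = FILLER[:cut] + " " + NEEDLE + " " + FILLER[cut:]
    return f"{context}\n\nQuestion: {QUESTION}\nAnswer:"

def is_correct(model_answer: str) -> bool:
    return EXPECTED in model_answer

# Test retrieval at several depths; call_llm is a placeholder for
# whichever model API is being evaluated (Gemini, GPT-4o, etc.).
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    prompt = build_prompt(depth)
    # answer = call_llm(prompt)
    # print(depth, is_correct(answer))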
What Does Google Say?
Google has not directly commented on the specific studies that critique the performance of its Gemini models. However, Google has highlighted the advancements and capabilities of the Gemini models in its official communications. For example, Sundar Pichai, CEO of Google and Alphabet, has emphasised that Gemini models are designed to be highly capable and general, featuring state-of-the-art performance across multiple benchmarks. Google asserts that Gemini’s long-context understanding and multimodal capabilities significantly enhance its ability to process and reason about vast amounts of information, including text, images, audio, and video.
Google has tried to highlight its focus on the continuous improvement and rigorous testing of Gemini models, showcasing their performance on a wide variety of tasks, from natural image understanding to complex reasoning. The company has also been actively working on increasing the models’ efficiency and context window capacity, allowing them to process up to 1 million tokens (the basic units of text that the model processes). Google hopes these improvements will enable more sophisticated and context-aware AI applications.
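To give a sense of scale, a token is typically a short word fragment, so 1 million tokens corresponds very roughly to 700,000+ words. As a quick illustration of how token counts can be estimated (here using OpenAI’s open-source tiktoken tokenizer purely as a stand-in, since Gemini uses its own tokenizer and counts will differ):

# Rough illustration of token counting with the open-source tiktoken
# library (pip install tiktoken). Gemini's tokenizer differs, so treat
# the numbers as indicative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Google says Gemini 1.5 Pro can process up to 1 million tokens."
print(f"{len(enc.encode(text))} tokens for {len(text)} characters")

# To check whether a large document would fit a 1M-token window:
# with open("big_document.txt", encoding="utf-8") as f:  # hypothetical file
#     print(f"{len(enc.encode(f.read())):,} of 1,000,000 tokens used")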
What Does This Mean For Your Business?
The findings from these studies may have significant implications for businesses relying on AI for data analysis and decision-making. The apparent underperformance of Google’s Gemini models in handling large datasets suggests that businesses might not be able to fully leverage these AI tools for complex data analysis tasks just yet. This could impact sectors like finance, healthcare, and any industry requiring detailed and accurate data interpretation, where businesses may need to reassess their dependence on such models for critical operations.
For Google, these studies may highlight a gap between their promotional claims and the actual capabilities of their AI models. This could prompt Google to accelerate its research and development efforts to address these shortcomings and enhance the practical utility of their models. It also places pressure on Google to maintain transparency about the limitations of their technologies while continuing to push the boundaries of AI performance.
Other AI companies might view these findings as both a caution and an opportunity. On one hand, the discrepancies in performance underline the inherent challenges in developing robust AI models. On the other hand, they provide a competitive edge for companies that can deliver more reliable and accurate AI solutions. This competitive landscape could drive innovation and lead to the emergence of more capable AI models that better meet the complex needs of businesses.
In summary then, while the current limitations of AI models like Google Gemini pose challenges, they also highlight areas ripe for innovation and improvement. Businesses should stay informed about these developments and be prepared to adapt their strategies to harness the full potential of evolving AI technologies.
Tech Insight : Jobs Threatened By ChatGPT
In this insight, we look at the kinds of industries and jobs that research has identified as being most exposed to the disruptive threat of generative AI, but we also look at how AI has created some new job roles.
Research
Research from Edward Felten (Princeton), Manav Raj (University of Pennsylvania) and Robert Seamans (NYU) – “How will Language Modelers like ChatGPT Affect Occupations and Industries?” – offered some reasonably in-depth analysis of how advances in AI language modelling, such as ChatGPT, affect various occupations and industries. Among the key findings, the paper identified some of the jobs most exposed to ChatGPT. These were:
Telemarketers
The research indicated a high exposure level. This is because telemarketing involves repetitive tasks that could easily be automated by language models. ChatGPT can, for example, handle customer inquiries, provide information, and even persuade potential customers, thereby reducing the need for human telemarketers.
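As an illustration of why such tasks are exposed, a routine customer enquiry can be handled by an LLM in a few lines of code. A minimal sketch using the OpenAI Python SDK (the model name, system prompt, and company are illustrative assumptions, not a production design):

# Minimal sketch of automating a routine customer enquiry with an LLM,
# using the OpenAI Python SDK (pip install openai). Requires an
# OPENAI_API_KEY in the environment; names and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def answer_enquiry(enquiry: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a polite sales agent for Acme Ltd. "
                        "Answer product questions briefly and accurately."},
            {"role": "user", "content": enquiry},
        ],
    )
    return response.choices[0].message.content

print(answer_enquiry("Do you offer a free trial, and how do I sign up?"))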
Post-secondary Teachers
(e.g. English Language and Literature, Foreign Language and Literature, History)
According to the research, there is a significant exposure level for these jobs because they often involve creating educational content, grading, and answering student queries – all tasks that ChatGPT can perform efficiently. However, the interactive and mentoring aspects of teaching are much less likely to be fully replaced by AI.
Legal Services
Famously, ChatGPT (specifically GPT-4) passed the legal bar exam back in March 2023, and exposure to ChatGPT in legal jobs is thought to be considerable. For example, many tasks within legal services, such as document review, contract analysis, and basic legal advice, can be automated using language models, and ChatGPT’s ability to process and understand large volumes of text makes it suitable for these tasks.
Securities, Commodities, and Investments
Financial analysis, report generation, and market trend analysis are all areas where ChatGPT can assist significantly. Specifically, its data processing capabilities can enhance efficiency and reduce the reliance on human analysts for routine tasks.
In fact, the researchers were able to compile a list of the top 20 professions most exposed to ChatGPT, which are:
1. Telemarketers
2. English language (and literature) teachers
3. Foreign language (and literature) teachers
4. History teachers
5. Law teachers
6. Philosophy and religion teachers
7. Sociology teachers
8. Political science teachers
9. Criminal justice and law enforcement teachers
10. Sociologists
11. Social work teachers
12. Psychology teachers
13. Communications teachers
14. Political scientists
15. Cultural studies teachers
16. Arbitrators, mediators, and conciliators
17. Judges, magistrate judges and magistrates
18. Geography teachers
19. Library science teachers
20. Clinical, counseling and school psychologists
Accountants Exposed
A separate OpenAI / University of Pennsylvania working paper (“GPTs are GPTs”) also found that a significant portion of the US workforce, including accountants, mathematicians, interpreters, and writers, is highly exposed to the capabilities of generative AI technologies like ChatGPT. For instance, that research revealed that at least half of the tasks performed by accountants could be completed much faster using AI, demonstrating the substantial impact of these technologies on various professions.
Creative & Management Jobs Less At Risk
Conversely, however, this paper noted that professions requiring human judgment, creativity, and complex decision-making are much less likely to be replaced by AI. These include jobs in fields like:
– Creative arts, including artists, writers, and designers, where the emphasis is on originality and human creativity.
– Management – roles that require strategic decision-making and interpersonal skills.
– Healthcare, in professions that involve direct patient care and complex medical decision-making.
The findings of the OpenAI research suggest that while AI like ChatGPT can significantly impact certain job sectors by automating routine tasks, roles requiring nuanced human skills and judgment appear to remain less vulnerable to automation.
Also, researchers at Northwestern University’s Kellogg School of Management (in the US) examined the historical impact of disruptive technologies on jobs and projected the effects of ChatGPT. Not surprisingly, their findings indicated that jobs involving data analysis and information retrieval are most at risk from ChatGPT.
What Can Workers Do?
To protect themselves from the threat posed by AI technologies like ChatGPT, workers can focus on developing skills that are less likely to be automated. These include critical thinking, creativity, and complex decision-making abilities. Professions that require nuanced human judgment, such as those in creative arts, management, and healthcare, are less vulnerable to AI automation. It’s possible, therefore, that by enhancing skills in these areas, workers may remain more relevant in an AI-augmented job market.
Also, reskilling and upskilling are possible strategies for workers to stay competitive. Learning new technologies and understanding how to leverage AI tools can turn potential threats into opportunities. Workers could take advantage of AI to increase their productivity and efficiency rather than being replaced by it, suggesting that training programs focusing on AI literacy, data analysis, and digital transformation could prove essential for workers adapting to the changing landscape.
Integrating AI into their workflow in a way that complements their unique human capabilities may also be a way that workers can mitigate the threat posed to their jobs by AI such as ChatGPT. This could involve understanding how to use AI to augment tasks that require speed and accuracy while focusing on aspects of their jobs that necessitate empathy, interpersonal skills, and complex problem-solving. Embracing a collaborative approach with AI could therefore help workers enhance their roles and provide added value to their employers, thus securing their positions in the evolving job market.
What About AI Creating Jobs?
It’s worth remembering that, as well as posing a risk to certain jobs/roles, ChatGPT and other generative AI could also create new jobs and opportunities. For example:
AI specialists and engineers. The rise of generative AI has led to an increased demand for AI and machine learning specialists. These professionals are responsible for developing, maintaining, and improving AI systems. According to the World Economic Forum’s Future of Jobs Report, there is a projected 40 per cent increase in the number of AI and machine learning specialists by 2027, highlighting the growing need for expertise in this field.
Prompt engineers. As AI models like ChatGPT become more prevalent, the role of prompt engineer has emerged. These specialists create and refine the prompts used to steer AI systems, ensuring they generate accurate and relevant outputs. This role requires a deep understanding of both the technology and the specific application domains, making it a unique, valuable (and often well-paid) position in the AI ecosystem.
AI trainers and data annotators. Generative AI models require vast amounts of data to learn and improve. AI trainers and data annotators play a crucial role in preparing and curating this data. For example, they label datasets, review AI outputs, and provide feedback to enhance the models’ accuracy and performance. This job is critical for maintaining the quality of AI-generated content and ensuring that the models operate within ethical and practical boundaries.
Digital transformation specialists. Organisations are now increasingly integrating AI into their workflows, which is feeding the demand for professionals who can manage and lead these transformations. Digital transformation specialists can help companies adopt and leverage AI technologies effectively, optimising processes and driving innovation. The Future of Jobs Report (World Economic Forum) indicates a significant rise in demand for digital transformation specialists, underlining their importance in the modern workplace.
AI ethics consultants. With the growing influence of AI, ethical considerations are increasingly important. AI ethics consultants work to ensure that AI applications comply with legal standards and ethical guidelines. They help organisations navigate the complexities of AI implementation, addressing issues like bias, transparency, and accountability. This emerging role is proving to be important for building public trust and promoting responsible AI use.
What Does This Mean For Your Business?
The findings from the research on AI technologies like ChatGPT appear to show a real shift in the landscape of various industries. For UK businesses, this translates into a need for proactive adaptation to harness the benefits of AI while mitigating its disruptive potential. Integrating AI into business operations could significantly enhance efficiency, particularly in roles that involve routine cognitive tasks and data processing. For example, automating customer service, financial analysis, and legal documentation could free up valuable human resources to focus on more strategic, creative, and interpersonal tasks. Embracing AI can therefore lead to a more streamlined and productive business environment, reducing operational costs and improving service delivery.
Also, the evolution of AI presents an opportunity for businesses to invest in the reskilling and upskilling of their workforce (although it’s worth noting the argument that generative AI like ChatGPT can also have a deskilling effect). By providing training programs focused on AI literacy, data analysis, and digital transformation, businesses can equip their employees with the necessary skills to thrive in an AI-augmented job market. Encouraging a culture of continuous learning and adaptability will not only help in retaining talent but also foster innovation and resilience within the organisation. Workers who are adept at leveraging AI tools can hopefully transform potential threats into opportunities, using AI to augment their roles and increase their productivity and value to the company.
Businesses also need to consider the ethical implications of AI deployment. Establishing roles such as AI ethics consultants could ensure that the integration of AI is conducted responsibly, addressing issues like bias, transparency, and accountability. This may not only build public trust but also help safeguard the company against potential legal and ethical pitfalls.
Tech News : AI Test For Parkinson’s
Researchers, led by scientists at UCL and the University Medical Center Göttingen in Germany, have developed an AI-enhanced blood test that can predict Parkinson’s disease in at-risk patients up to seven years before the onset of symptoms.
What Is Parkinson’s?
Parkinson’s disease is a condition caused by the progressive breakdown of certain nerve cells in the brain, resulting in a deficiency of the neurotransmitter dopamine. This causes motor symptoms, including slowness of movement, increased muscle tension (rigidity) and tremors, as well as non-motor symptoms such as loss of the sense of smell (olfactory loss), sleep disorders and even depression.
In the UK, approximately 1 in 350 adults is diagnosed with Parkinson’s disease, which translates to around 153,000 people currently living with the condition. Parkinson’s is the second most common neurodegenerative disease and is becoming increasingly common in the population: estimates from Parkinson’s UK suggest that the number of people diagnosed will continue to rise, reaching about 168,000 by 2025.
The Challenge
Up until now, diagnosis has mostly been based on motor symptoms, which only appear once more than 70 per cent of the dopamine-containing nerve cells have already degenerated. Also, there are currently no biomarkers (measurable biological clues) that can indicate the specific disease process simply, directly, and at an early stage.
The New Research
The new cooperation project research, however, which involved researchers from the University Medical Center Göttingen (UMG), the Paracelsus-Elena-Klinik Kassel and University College London (UCL), appears to have found a simple AI-enhanced way to diagnose the disease early.
How?
In the first stage, the researchers analysed blood samples from Parkinson’s patients and healthy study participants. This enabled them to identify 23 proteins that showed differences between the diseased and healthy participants and could therefore be considered biomarkers for the disease.
Secondly, the 23 proteins were examined in the blood samples of people with isolated rapid eye movement (REM) sleep behaviour disorder, a condition that carries a high risk of developing Parkinson’s disease.
The researchers then used AI (machine learning) to identify eight of the 23 proteins whose levels could be used to predict Parkinson’s disease in 79 per cent of these ‘high-risk’ patients up to seven years before the onset of symptoms.
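In outline, that kind of pipeline (whittling a larger panel of candidate biomarkers down to a small predictive subset) can be sketched as follows. This is a hedged illustration on synthetic data using scikit-learn, not the researchers’ actual code, data, or model:

# Illustrative sketch (synthetic data, NOT the study's code) of the
# general approach: select a small panel of predictive "proteins" from
# a larger candidate set, then train and evaluate a classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 200 blood samples x 23 candidate protein levels
X, y = make_classification(n_samples=200, n_features=23,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Recursively eliminate features until 8 "proteins" remain
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8)
selector.fit(X_train, y_train)
print("Selected protein indices:",
      [i for i, kept in enumerate(selector.support_) if kept])

# Evaluate a classifier trained on just those 8 features
clf = LogisticRegression(max_iter=1000).fit(
    selector.transform(X_train), y_train)
pred = clf.predict(selector.transform(X_test))
print(f"Held-out accuracy: {accuracy_score(y_test, pred):.0%}")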
How Will This Help?
Dr Michael Bartl (a member of the UMG’s Translational Biomarker Research in Neurodegenerative Diseases working group and one of the first authors of the study) highlighted how the research findings will help, saying: “By determining eight proteins in the blood, we can identify potential Parkinson’s patients several years in advance. Drug therapies could be given at an earlier stage, which could possibly slow down the progression of the disease or even prevent it from occurring”. Dr Bartl also added: “We have not only developed a test, but also make the diagnosis using eight marker proteins that are directly linked to processes such as inflammation and the breakdown of non-functional proteins. These markers also represent potential targets for drug treatments”.
What Does This Mean For Your Business?
The development of a simple but AI-enhanced blood test that can indicate Parkinson’s at an early stage (seven years before symptoms) signifies a groundbreaking advancement not only in medical diagnostics but also in the broader application of AI technology.
For businesses, particularly those in the healthcare and biotech sectors, this research highlights the transformative potential of AI in tackling many complex health challenges. The ability to predict Parkinson’s disease so early could lead to earlier interventions, potentially slowing disease progression and improving patient outcomes. This breakthrough demonstrates the crucial role AI can play in early disease detection and personalised medicine, opening new avenues for innovation and investment in healthcare technologies.
For companies operating outside the healthcare sector, the implications are equally important. The research highlights how AI could address significant challenges across various industries, from health and business to climate and environmental management. For example, AI’s capability to analyse vast datasets and identify critical patterns could be leveraged to optimise operations, improve decision-making, and enhance sustainability efforts. Businesses can, therefore, draw inspiration from this study to explore AI applications that could revolutionise their own processes, leading to increased efficiency and competitive advantage.
Also, this development highlights the importance of collaboration between academic institutions and industry. The partnership between UCL and the University Medical Center Göttingen showcases how interdisciplinary cooperation can lead to significant technological advancements. Businesses should consider fostering similar collaborations to drive innovation and stay at the forefront of technological progress.
Tech News : Adobe Lawsuit : Customer Cancellation Concerns
The US Justice Department, together with the Federal Trade Commission (FTC), is suing Adobe Inc. (and two Adobe executives) over an alleged hidden “Early Termination Fee” and an allegedly over-complex subscription-cancellation process.
Hiding Important Information
In the complaint, filed in the U.S. District Court for the Northern District of California, it’s alleged that Adobe Inc. systematically violated the Restore Online Shoppers’ Confidence Act (ROSCA) by using fine print and inconspicuous hyperlinks to hide important information about Adobe’s subscription plans.
Using An Early Termination Fee As A Retention Tool?
Allegedly, these violations include a significant “Early Termination Fee” that customers may be charged when they cancel their subscriptions, and from which Adobe may have profited. The complaint says that this may amount to misleading consumers about the true cost of a subscription and “ambushing” them with the fee when they try to cancel, i.e. using the fee as a powerful retention tool.
Deterred From Cancellation By The Complexity Of The Process?
The Justice Department / FTC complaint alleges that Adobe has also been violating ROSCA by not providing consumers with a simple mechanism to cancel their recurring online subscriptions. Instead, it’s alleged, Adobe protects its subscription revenues by “thwarting subscribers’ attempts to cancel” and by “subjecting them to a convoluted and inefficient cancellation process filled with unnecessary steps, delays, unsolicited offers, and warnings”. The complaint alleges, in other words, that the complexity of the cancellation process is itself used to deter customers from cancelling (another retention tool).
Trapping Customers
The Director of the FTC’s Bureau of Consumer Protection, Samuel Levine, summed up the complaint against Adobe, saying “Adobe trapped customers into year-long subscriptions through hidden early termination fees and numerous cancellation hurdles,” and that “Americans are tired of companies hiding the ball during subscription signup and then putting up roadblocks when they try to cancel”.
Responsibility
U.S. Attorney Ismail J. Ramsey for the Northern District of California highlighted how “Companies that sell goods and services on the internet have a responsibility to clearly and prominently disclose material information to consumers”. He added that “It is essential that companies meet that responsibility to ensure a healthy and fair marketplace for all participants. Those that fail to do so, and instead take advantage of consumers’ confusion and vulnerability for their own profit, will be held accountable.”
Principal Deputy Assistant Attorney General Brian M. Boynton (head of the Justice Department’s Civil Division) also highlighted the importance of stopping “companies and their executives from preying on consumers who sign up for online subscriptions by hiding key terms and making cancellation an obstacle course”.
What Does Adobe Say?
In a statement on Adobe’s website, in answer to the allegations in the lawsuit, Adobe’s general counsel and chief trust officer Dana Rao denies the FTC’s claims and says Adobe will contest the charges in court.
Mr Rao says: “Subscription services are convenient, flexible and cost effective to allow users to choose the plan that best fits their needs, timeline and budget. Our priority is to always ensure our customers have a positive experience. We are transparent with the terms and conditions of our subscription agreements and have a simple cancellation process. We will refute the FTC’s claims in court.”
Penalties
The lawsuit seeks unspecified amounts of consumer redress and monetary civil penalties from the defendants, as well as a permanent injunction to prohibit them from engaging in future violations.
Not The Only Ones
Adobe is, of course, not the only big tech company to have attracted the attention of the US Federal Trade Commission (FTC) in recent times. For example, in June 2023 the FTC filed a lawsuit against Amazon for allegedly enrolling customers in its Prime subscription service without their consent and making it difficult to cancel the subscription. The FTC accused Amazon of using “dark patterns” to mislead customers and hinder their attempts to unsubscribe easily.
What Does This Mean For Your Business?
The lawsuit against Adobe should be an important reminder for businesses about the importance of transparency and simplicity in subscription services. The allegations against Adobe highlight the potential risks and legal repercussions of not clearly disclosing all terms and conditions associated with subscription plans. UK businesses offering similar services must ensure that all subscription-related fees, particularly early termination fees, are clearly communicated to customers upfront to avoid misleading them.
The complexity of the cancellation process is another significant issue raised in the Adobe case. Businesses must create a straightforward and user-friendly cancellation process. Any attempt to complicate this process could be viewed as a strategy to retain customers unfairly, which could lead to legal challenges. Also, ensuring that customers can easily unsubscribe from services not only builds trust but also complies with consumer protection laws.
The involvement of two high-level executives in the Adobe lawsuit (David Wadhwani and Maninder Sawhney) highlights that accountability extends to all levels of an organisation. Business leaders should, therefore, be vigilant and ensure their company’s practices are transparent and compliant with regulations. This includes regularly reviewing and updating terms of service and cancellation policies to meet legal standards and customer expectations.
For UK businesses, this case also signals the increasing scrutiny from regulatory bodies worldwide, including the UK’s Competition and Markets Authority (CMA), which has similar oversight on consumer rights and business practices. Staying informed about both local and international regulations and aligning business practices accordingly can prevent potential legal issues.
The Adobe lawsuit, therefore, illustrates the crucial need for businesses to be transparent, honest, and straightforward in their dealings with customers. By adopting clear communication, simplifying processes, and ensuring compliance, UK businesses can foster better customer relationships and avoid costly legal disputes.