Featured Article : A Big Stink About Ink

Following HP’s attempt to have a lawsuit dismissed (brought by customers angry at a firmware update that meant their HP printers wouldn’t work with third-party ink cartridges), we look at how HP is answering the arguments in the antitrust ink cartridge lawsuit and what the implications could be for customers.

The Lawsuit

Back in January, printing giant HP was sued in a federal court in Chicago by 11 consumers (a class action lawsuit) who claimed that their HP printers wouldn’t accept replacement ink cartridges made by other manufacturers, thereby forcing them to pay artificially high prices for HP-branded cartridges. The lawsuit accused HP of violating US federal and state antitrust laws in a bid to monopolise the market for replacement ink.

The plaintiffs allege that they weren’t told that automatic software updates from HP (firmware updates issued between late 2022 and early 2023) would disable some printers unless HP-branded ink was used, and that, faced with non-functional printers, they were then forced to purchase more expensive HP-branded ink that they would not otherwise have purchased.

Damages

The plaintiffs in this case are seeking more than $5 million in damages from HP, including the cost of their now-useless third-party cartridges (the ones that won’t work in their printers because of the firmware update), as well as an injunction requiring HP to disable the part of the firmware updates that prevents the use of third-party ink.

Trying To Get It Dismissed

HP’s lawyers recently attempted to have all 79 causes of action in the lawsuit dismissed on the grounds that the central premise of the plaintiffs’ case is wrong, i.e. the premise that HP failed to disclose to consumers that their printers were equipped with “dynamic security” measures designed to prevent the use of third-party printer cartridges that copy HP’s security chips, thereby locking them into an aftermarket where they were overcharged.

HP argued that it goes to great lengths to disclose that its printers are intended to work only with cartridges that “have an HP chip, and that they may not work with third-party cartridges that do not have an HP chip.” HP also argued that “this information is displayed in clear terms on the printer box, on HP’s website, and in many other materials.” It also highlighted that “many third-party cartridges are not affected by dynamic security. HP does not block cartridges that reuse HP security chips, and there are many such options available for sale. Nor does HP conceal its use of dynamic security.”

HP’s lawyers additionally argued that the plaintiffs did not allege that the firmware updates were unauthorised, and that many plaintiffs claim they purchased HP-branded ink cartridges after receiving the software or firmware updates, after which their printers began to function properly again.

In short, HP’s lawyers put forward a long list of reasons to have the lawsuit dismissed.

Previously

These types of allegations against HP have been made for some time now. For example, back in 2019, HP agreed to resolve related consumer claims in a California case with a $1.5 million payment, without admitting any wrongdoing as part of the settlement. However, just last year (also in California), a judge said that HP must at least face some claims that it designed some all-in-one printers to stop scanning and faxing when the machine was low on ink, thereby forcing consumers to buy cartridges.

The Backdrop

All these antitrust printing arguments are taking place at a time when HP has been through a long period of shrinking revenues, mainly because enterprise customers, affected by the uncertain economic environment, have been holding off on their hardware purchases for longer.

Instant Ink Subscription & All-in-One Service

Following a strategy re-think, two solutions that HP has devised to help it through these difficult times are its ‘Instant Ink’ service and its All-in-One service, both of which see it focusing on a subscription model going forward.

HP’s Instant Ink service is a subscription-based model that benefits users who want to avoid the inconvenience of running out of ink and dealing with last-minute replacements. It also helps in managing printing costs more predictably. With Instant Ink (for a monthly fee, on an agreed plan), the HP printer’s ability to monitor ink levels means that before a user’s ink runs low, HP sends replacement cartridges directly to their doorstep. HP claimed to have 13 million sign-ups to the service at the beginning of March.

As the name suggests, the All-in-One service, which launched in the US last month, includes not just the ink but the hardware as well, i.e. HP Envy or HP OfficeJet models. It is a two-year subscription contract, based on a printed-page plan, with cancellation fees (to raise the barriers to exit).

HP’s CEO, Enrique Lores, speaking recently at the Morgan Stanley Technology, Media and Telecom conference, outlined HP’s strategy since the 2019 rethink as, in addition to reducing costs, trying to “protect supplies revenue by upping subscription services, selling hardware loaded with ink, smart models, and charging more for printers when a customer isn’t committing to HP ink.”

AI Apps Too

HP is also hoping that AI will boost PC sales and has indicated that alongside its PCs, it’s developing new AI applications to run on top of its installed base of more than 200 million commercial devices.

Printing Declining Anyway

Despite HP’s court battles over printer ink and its move to a subscription-based model, for many businesses the need (and demand) for printers and ink has declined in recent years. This has been due to factors like the proliferation of digital tools and technologies, advancements in cloud computing and software-as-a-service (SaaS) platforms, and businesses moving towards greener practices (despite printer companies trying to produce more sustainable/greener ink). The need to reduce costs has also favoured digital storage over printed documents, while disruption in global supply chains (e.g. for paper) and the effects of the pandemic further lowered demand for printers and ink.

What Does This Mean For Your Business?

Having to constantly renew expensive ink cartridges, or running out of ink at the wrong time, has long been a significant cost and source of frustration for many businesses. In recent years, however, many businesses have, for many of the reasons above, become more reliant on cloud and digital solutions than on printed documents. HP itself changed its strategy in 2019, moving customers to a subscription model for its ink and hardware in order to weather difficult economic times and falling demand.

This court case around HP’s attempt to curtail consumers’ adoption of cheaper third-party ink cartridges in favour of more expensive HP ones is likely to be unwelcome and reputationally damaging for HP at a time when it needs to protect its position in the marketplace. For competitors, HP’s dominance being challenged is good news and could deliver a beneficial commercial outcome for them if events go the wrong way for HP.

For business customers who still need a printer, trouble-free operation and the choice of using different, lower-priced print cartridge alternatives are likely to be valuable. Most of us will understand the frustration that printer ink problems can cause.

Looking ahead for HP, its cost-cutting and its shift to a subscription model for its ink/printer products, plus the promise of developing AI apps for its large installed base of commercial devices, are the ways it hopes to turn around the declining revenues and challenges of recent years. The company has a trusted business brand, and the hope for HP is that its valuable brand won’t be tarnished too much by the outcome of the lawsuit that’s currently making the headlines.

Tech Insight : Stop Your Data Being Used To Train AI

In this insight, we look at the process of AI training, the potential pitfalls of misused data, and what measures can be taken to protect your personal and business data from being used to train AI.

Data – For AI Training 

AI training, at its core, involves feeding large datasets to algorithms, thereby enabling them to learn and make ‘intelligent’ decisions. These datasets are often culled from user-generated content across various platforms. Understanding the source and nature of this data is crucial for recognising the implications of its use.

Data, therefore, is the lifeblood of AI models, and the quality, quantity, and variety of data directly influence an AI model’s performance. For example, language models require vast amounts of text data to understand and generate human-like responses, while image recognition models need diverse visual data to improve accuracy.
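To make the idea of ‘feeding data to algorithms’ concrete, below is a deliberately tiny Python sketch of a bigram model that ‘learns’ which word tends to follow which from a toy corpus. Real language models are vastly more sophisticated, but the principle is the same: the more (and better) data the model sees, the better its predictions.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text datasets real models train on.
corpus = (
    "the printer is out of ink . the printer needs ink . "
    "ink for the printer is expensive ."
).split()

# "Training": count which word tends to follow which (a bigram model).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

# "Inference": predict the most likely next word after a given word.
def predict_next(word):
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("printer"))  # -> "is" (seen twice, vs "needs" once)
```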

One of the most contentious methods that generative AI companies have allegedly used in recent years to gather enough training data, resulting in many lawsuits, is the scraping (automatic collection) of online content/data. High-profile examples include:

– A class action lawsuit filed in the Northern District of California accused OpenAI and Microsoft of scraping personal data from internet users, alleging violations of privacy, intellectual property, and anti-hacking laws. The plaintiffs claimed that this practice violates the Computer Fraud and Abuse Act (CFAA).

– Google was accused in a class-action lawsuit of misusing large amounts of personal information and copyrighted material to train its AI systems, thereby raising issues about the boundaries of data use and copyright infringement in the context of AI training.

– A class action lawsuit against Stability AI, Midjourney, and DeviantArt claimed that the AI companies used copyrighted images to train their AI systems without permission.

– Back in February 2023, Getty Images sued Stability AI alleging that it had copied 12 million images to train its AI model without permission or compensation.

– Last December, The New York Times sued OpenAI and Microsoft, alleging that they used millions of its articles without permission/consent (and without payment) to help train chatbots.

In many of these cases, the legal arguments made to justify such use have been “fair use” and “transformative outputs.” For example, the AI companies know that under US law, the “fair use” doctrine allows limited use of copyrighted material without permission or payment, especially for purposes like criticism, comment, news reporting, teaching, scholarship, or research.
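To make the ‘scraping’ at the heart of these cases concrete, here is a minimal, hypothetical Python sketch of the technique, using the widely available ‘requests’ and ‘beautifulsoup4’ libraries (the URL is just a placeholder). Collected at scale, text gathered like this is the raw material that can end up in training datasets.

```python
# Minimal web-scraping sketch (pip install requests beautifulsoup4).
# The URL is a placeholder; always check a site's terms and robots.txt first.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com", timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

# Extract the visible text from the page; at scale, text like this is
# what can feed a language model's training set.
page_text = soup.get_text(separator=" ", strip=True)
print(page_text[:200])
```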

What About Your Data? Could It Be Used For AI Training … And How? 

When it comes to your personal and business data, many of the big AI companies have already scraped the web, so whatever you’ve posted is probably already in their systems. However, your data could also end up being part of AI training data through several other channels. For example:

– Online Activity. When you browse websites, search engines, and social media, companies collect your data to personalise services and train AI to predict user behaviour.

– Device Usage. Smartphones, wearables and smart home devices collect data about your daily activities, locations, health statistics, and preferences, all of which is useful for training AI in areas like health monitoring, personal assistance, and device optimisation.

– Service Interactions. Interacting with customer service chatbots or voice assistants provides conversational data that helps train AI to understand and generate human-like responses.

– Content Creation. Uploading videos, writing reviews, or creating other content on platforms can provide data for AI to learn about content preferences and creation styles.

– Transactional Data. Purchases, financial transactions, and browsing products online give insights into consumer behaviour, used by AI to enhance recommendation engines and advertising algorithms.

All these methods, any of which could involve your data, help AI systems learn and adapt to provide more personalised and efficient services.

The Risks of Data Misuse 

There are, of course, risks in having your data used/misused by AI. These risks include:

– Privacy and security concerns. The primary risk of using data in AI training is the potential for significant privacy breaches. Sensitive information, if not adequately protected, can be exposed or misused, leading to serious consequences for individuals and businesses alike.

– Bias and ethical implications. Another critical concern is the propagation of bias through AI systems. If AI is trained on biased or unrepresentative data, it can lead to unfair or prejudiced outcomes, which is especially problematic in sectors like recruitment, law enforcement, and credit scoring.

Checking 

The use of creative artwork/images to train AI is a particular issue for artists. The website https://haveibeentrained.com/, for example, is an online tool that uses clip-retrieval to search the largest public text-to-image datasets. In this way, artists can find their images in those datasets and request that they be opted out of (i.e. removed from) the data used to train generative AI systems.
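For the technically curious, the idea behind clip-retrieval is that a model such as CLIP embeds images and text into a shared space so that captions can be matched against images. Below is an illustrative Python sketch of that matching step, assuming the Hugging Face ‘transformers’ library and PyTorch are installed; the model name is a real public checkpoint, but the red square here is just a stand-in for an artwork.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP model that embeds images and text into a shared space.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color="red")   # stand-in for an artwork
captions = ["a red painting", "a photo of a dog"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher scores mean the caption is a closer match to the image; dataset
# search tools apply the same trick across billions of image embeddings.
print(outputs.logits_per_image.softmax(dim=-1))
```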

What Proactive Measures Can You Take To Protect Your Data? 

Bearing in mind the significant privacy risk posed by AI, there are a number of proactive measures you can take to stop your data from being used to train AI. For example:

Opt-Out Options and User Consent 

Many of the services you use from the big tech companies provide mechanisms for users to opt out of data sharing. Familiarising yourself with these options and understanding how to activate them is essential for maintaining control over your data. Examples include:

If you store your files in Adobe’s Creative Cloud, to opt out of having them used for training, for a personal account, go to the Content analysis section, and click the toggle to turn it off.

If you’re a Google Gemini (AI) user, to prevent your conversations being used, open Gemini in a browser, click on Activity, and select the Turn Off drop-down menu.

If you’re a ChatGPT account holder and are logged in through a web browser, select ChatGPT, Settings, Data Controls, and then turn off Chat History & Training.

For the Squarespace website building tool, to block AI bots, open Settings (in your account), find Crawlers, and turn off Artificial Intelligence Crawlers.

These are just a few examples, and it will be a case of going through each of the main services you use and trying to find the opt-out (perhaps using Google to help as you go). However, it’s worth noting that some opt-outs are either very difficult to find or simply aren’t available for certain types of account. Overall, this can be quite a time-consuming process.
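If you also run your own website, one widely used (though purely advisory) measure is to ask known AI crawlers not to collect your content via your site’s robots.txt file. The user-agent names below are the published ones for OpenAI’s crawler, Google’s AI-training crawler, and Common Crawl’s bot; note that compliance is voluntary, so reputable crawlers honour these entries but others may not.

```
# robots.txt entries asking known AI crawlers not to collect this site's content
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```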

Enhanced Data Management Practices 

Businesses should implement strict data management policies that govern the collection, storage, and use of data. These policies can help ensure that data is handled ethically, in compliance with relevant data protection laws, and shielded from use in AI training.

Leveraging Technology for Data Security 

Advanced technological solutions, such as encryption and secure data storage systems, may also be able to play a critical role in protecting data from unauthorised access and breaches that could lead to it finding its way into the hands of AI companies for training.
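As a simple illustration of the encryption point, here is a minimal Python sketch using the third-party ‘cryptography’ library: data encrypted at rest is useless to anyone (including a scraper or an AI training pipeline) who obtains it without the key. In practice, the key itself would be stored in a dedicated key management service rather than alongside the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()        # symmetric key; protect this separately
fernet = Fernet(key)

secret = b"customer list - not for AI training sets"
token = fernet.encrypt(secret)     # safe to store; unreadable without the key

assert fernet.decrypt(token) == secret
print(token[:24], b"...")
```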

What Does This Mean for Your Business? 

For businesses today, the pervasive use of data by AI underscores the dual imperatives of protection and vigilance. The reality is that many AI companies have likely already collected extensive swathes of public internet data, including potentially from your own business activities, which poses a distinct challenge. This means that data posted online (either deliberately or inadvertently) may already be part of training sets used to enhance AI capabilities.

That said, businesses still hold significant power to influence future data usage and to secure their existing data. For example, businesses can take proactive steps by regularly reviewing the privacy policies and settings of the digital platforms they use. This includes social media, cloud storage, business software, and any platform where data is stored or shared. Although navigating these settings can be complex, finding and activating opt-out features may be necessary for maintaining control over how your data is used.

Businesses may also wish to educate their employees about data sharing and privacy settings. Training sessions can help employees understand the importance of data privacy and the steps they can take to ensure data is not inadvertently shared or used for AI training without consent.

Developing and enforcing robust data management policies is essential anyway, and doing so not only ensures compliance with data protection regulations but also limits unnecessary data exposure that could be exploited by AI systems. These policies should govern how data is collected, stored, and shared, ensuring that data handling within the company is done ethically and responsibly.

Deploying advanced technological solutions such as encryption, secure access management, and data loss prevention tools can also significantly reduce the risk of unauthorised data access. This is particularly relevant in preventing breaches that could see sensitive information being used to train AI (without your knowledge). While it is challenging to completely control all data that may already be within AI training datasets, businesses can still exert significant influence over their current data handling and future engagements.

Finally, with ongoing AI legal battles and new regulations, staying informed about your rights and the latest developments in data privacy law could be prudent. This knowledge could help businesses advocate for their interests and respond more adeptly to changes in the legal landscape that affect how their data can be used.

Tech News : FM Market Dominance Concerns

Following an initial report on AI Foundation Models (FMs) last year, the Competition and Markets Authority (CMA) has expressed “real concerns” about the FM market and is investigating the dominance of a small number of big tech firms at its centre.

Foundation Models 

Foundation models (FMs) are AI systems / large-scale machine learning models that are pre-trained on large amounts of data and can be adapted to a range of different, more specific purposes. Examples include the GPT (Generative Pre-trained Transformer) models such as GPT-3 and GPT-4 (different versions of the model behind ChatGPT).

What’s The Issue? 

As highlighted by the original CMA report, fast-changing FMs have the potential to transform how we live and work, i.e. they possess significant potential to impact people, businesses, and the UK economy. The CMA wants to ensure this AI market develops in a way that doesn’t undermine consumer trust and isn’t dominated by a few players who can exert market power that prevents the full benefits from being felt across the economy.

The previous report on the FM market led to a set of proposed guiding principles to help achieve this, including making FM developers and deployers accountable for outputs provided to consumers, ensuring sufficient choice for businesses so they can decide how to use FMs, and stressing the need for fair dealing, i.e. no anti-competitive conduct, including anti-competitive self-preferencing, tying, or bundling.

Move Away From “Winner Takes All Dynamics” 

Now, as the next step from its previous report and its development of guiding principles around the FM market, the CMA has outlined growing concerns. The CMA’s Sarah Cardell, for example, has expressed the need to learn from history and move away from the kind of “winner takes all dynamics” that have led to the rise of a small number of powerful platforms.

3 Areas Of Focus 

The CMA identifies three interlinked risks (which it believes to be key) to fair, effective, and open competition in the FM market. These are:

1. Firms controlling critical inputs for developing FMs restricting access to shield themselves from competition.

2. Powerful incumbents possibly exploiting their positions in consumer (or business-facing) markets to distort choice in FM services and restrict competition in deployment.

3. Partnerships involving key players being able to exacerbate existing positions of market power through the value chain.

In an update paper, the CMA has, therefore, provided details on how each risk would be mitigated by its principles, and also by the actions it’s taking at the moment.

An Interconnected Web 

One point highlighted by the CMA that illustrates the complication of regulating the FM market effectively is the “interconnected web” of “over 90 partnerships and strategic investments involving the same firms: Google, Apple, Microsoft, Meta, Amazon, and Nvidia (which is the leading supplier of AI accelerator chips).” 

The CMA says that although it recognises the wealth of resources, expertise, and innovation these large firms can bring to bear, the role they will likely play in FM markets, and the pro-competitive role such partnerships can play in the technology ecosystem, powerful partnerships and integrated firms shouldn’t reduce rival firms’ ability to compete, or be used “to insulate powerful firms from competition”.

As the CMA’s CEO said: “The essential challenge we face is how to harness this immensely exciting technology for the benefit of all, while safeguarding against potential exploitation of market power and unintended consequences.”

What Does This Mean For Your Business?  

Given the fast pace with which the FM market is growing and changing, plus the fact that there is an ever more complicated “interconnected web” of partnerships between the big tech companies at the heart of this market, it’s not surprising that the regulator wants to stay involved to have any chance of understanding and regulating it effectively. As highlighted by the CMA’s CEO Sarah Cardell, the transformative promise of FMs as a potential “paradigm shift” for societies and economies is the prize. Although the big tech companies have been the big investors in the development of AI so far, it’s still a ‘market’ that needs fair, open, and effective competition, where there’s plenty of choice for buyers, prices are kept low enough, and innovation isn’t stifled.

We’re still at the stage where guidelines are being given and warnings are being issued, but if other aspects of big tech company activities are anything to go by, this is going to be a very challenging market for the CMA to stay on top of and regulate. As the CMA has said, it’s going to be a difficult job to confront the “winner takes all dynamics” of the big tech companies, and it remains to be seen how much trouble the CMA has in regulating this fast-moving marketplace.

Tech News : Oxford’s Secure Quantum Computing Breakthrough

Researchers at Oxford University’s UK Quantum Computing and Simulation Hub claim to have made what could be an important breakthrough in quantum computing security.

The Issue 

As things stand, businesses wanting to use cloud-based quantum computing services face privacy and security issues when trying to do so over a network, similar to those in traditional cloud computing. For example, users can’t keep their work secret from the server or verify their results independently when tasks become too complex to simulate classically, i.e. they risk disclosing sensitive information such as the results of the computation or even the algorithm itself.

The Breakthrough – ‘Blind Quantum Computing’ 

However, Oxford researchers have now developed “blind quantum computing”, a method that enables users to access remote quantum computers to process confidential data with secret algorithms, and even to verify that the results are correct, without having to reveal any useful information (thereby retaining security and privacy). In short, the researchers have developed a system for connecting two totally separate quantum computing entities (potentially an individual user accessing a cloud server) in a completely secure way.

How? 

The researchers achieved the breakthrough by creating a system consisting of a fibre network link between a quantum computing server and a simple photon-detecting device (photons being particles of light) at an independent computer remotely accessing the server’s cloud services.

This system was found to allow ‘blind quantum computing’ over a network because every computation incurs a correction that must be applied to all subsequent ones, requiring real-time information to keep in step with the algorithm. The researchers say it’s the unique combination of quantum memory and photons that’s the secret to the system.
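As a toy illustration of that ‘correction’ idea (and emphatically not a model of the Oxford system itself), the Python sketch below simulates a single step of measurement-based quantum computing with numpy: a qubit is entangled with a helper qubit, the helper is measured at an angle, and the random measurement outcome determines a correction that must be known, in real time, for the rest of the computation to stay on track.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                 # bit-flip (correction) gate

def rz(theta):                                 # phase-rotation gate
    return np.diag([1.0, np.exp(1j * theta)])

rng = np.random.default_rng()
theta = 0.7                                    # measurement angle (the "instruction")
psi = np.array([0.6, 0.8])                     # input qubit state

# Entangle the input with a |+> helper qubit via a CZ gate.
plus = np.array([1, 1]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1])
state = (CZ @ np.kron(psi, plus)).reshape(2, 2)

# Measure qubit 1 in the rotated basis {|+_theta>, |-_theta>}.
kets = [np.array([1,  np.exp(1j * theta)]) / np.sqrt(2),
        np.array([1, -np.exp(1j * theta)]) / np.sqrt(2)]
amps = [k.conj() @ state for k in kets]        # post-measurement qubit-2 states
probs = [float(np.linalg.norm(a) ** 2) for a in amps]
s = rng.choice([0, 1], p=probs)                # random outcome, as on real hardware

out = amps[s] / np.linalg.norm(amps[s])

# The ideal operation is H.Rz(-theta); outcome s = 1 adds an X error that must
# be corrected (or folded into the next measurement angle) in real time.
ideal = H @ rz(-theta) @ psi
corrected = np.linalg.matrix_power(X, s) @ out
print("outcome s =", s)
print("matches ideal after correction:",
      np.allclose(abs(np.vdot(corrected, ideal)), 1.0))
```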

What Will It Mean? 

As study lead-scientist, Dr Peter Drmota, pointed out: “Realising this concept is a big step forward in both quantum computing and keeping our information safe online.” Also, as Professor David Lucas, the Hub’s Principal Investigator, observed: “We have shown for the first time that quantum computing in the cloud can be accessed in a scalable, practical way which will also give people complete security and privacy of data, plus the ability to verify its authenticity”. 

What Does This Mean For Your Business? 

Quantum computers are able to dramatically accelerate tasks that have traditionally taken a long time, with astounding results, e.g. crunching numbers that would take a classical computer a week could take a quantum computer less than a second. As such, quantum computers are capable of solving some of the toughest challenges faced by many different industries, and some of the biggest challenges facing us all, such as how to successfully treat some of our most serious diseases and how to tackle the climate crisis.

However, they are very expensive, so for most businesses and organisations the realistic hope is access to quantum computers via the cloud as part of ‘Quantum-as-a-Service’, which at least a dozen companies already offer. The opportunities for innovating, creating competitive advantages, and/or achieving their own industry/sector breakthroughs or medical advances using the power of quantum computing are very attractive to many organisations. However, the security and privacy challenges of connecting with a quantum computer over a network have presented a considerable risk – up until now.

This breakthrough from the Oxford researchers therefore appears to be an important step in tackling a key challenge, and in potentially opening up secure, private access to quantum computing at scale for many businesses and organisations. The result could be a boost in value-adding innovations and valuable new discoveries that change the landscape in some sectors. It represents another important step towards the future and puts the power of quantum computing within reach of many more ordinary people.

An Apple Byte : Used iPhone Components To Be Allowed For iPhone Repairs

Apple has announced that beginning in the autumn with select iPhone models, customers and independent repair providers will be able to utilise used Apple components in the repair process.

Apple’s senior vice president of Hardware Engineering, John Ternus, said: “With this latest expansion to our repair program, we’re excited to be adding even more choice and convenience for our customers, while helping to extend the life of our products and their parts.” 

Apple says that its teams have been working over the last two years to enable the reuse of parts such as biometric sensors used for Face ID or Touch ID, and that, beginning this autumn, “calibration for genuine Apple parts, new or used, will happen on-device after the part is installed.”

Security Stop Press : Apple Warns of Mercenary Spyware Attacks In 92 Countries

Apple has reported sending threat notifications to iPhone users in 92 countries, warning them that they may have been targeted by mercenary spyware attacks. These types of attacks use software designed to infiltrate and monitor computer systems or mobile devices, are typically “state-sponsored”, and are used for intelligence gathering, surveillance of dissidents, journalists, and politicians, corporate espionage, and more.

Apple reports sending these kinds of notifications multiple times a year and says it has alerted users to such threats in over 150 countries since 2021. The notifications sent by Apple contain wording such as “Apple detected that you are being targeted by a mercenary spyware attack that is trying to remotely compromise the iPhone associated with your Apple ID -xxx-,” and “This attack is likely targeting you specifically because of who you are or what you do.”

Apple relies on its own internal threat-intelligence information and investigations to detect these attacks and is keen to point out that mercenary spyware attacks such as those using Pegasus from the NSO Group, are still very rare.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a tech-jargon-free style. 
