Sustainability-in-Tech : Ultra-Fast Charging Sodium Battery Developed
Research by a team of doctoral candidates, supported by the National Research Foundation of Korea, has resulted in the development of an ultrahigh-energy density and fast-rechargeable hybrid sodium-ion battery.
Why?
As highlighted in the published research paper, there is now increasing demand for low-cost electrochemical energy storage devices offering both high energy density (for prolonged operation on a single charge) and fast-charging power density. These are needed for a wide range of applications, from mobile electronic devices to electric vehicles.
Sodium-Ion Batteries
Sodium is approximately 1,000 times more abundant than lithium, making sodium-ion batteries (SIBs) potentially more sustainable. Also, since sodium can be sourced from seawater and other abundant minerals, this reduces the environmental impact associated with mining (a significant issue with lithium sourcing). It could also mean lower production costs, making SIBs a more cost-effective option than lithium-ion batteries.
Challenges
However, as noted by the researchers, SIBs have “slow redox-reaction kinetics,” which results in poor rechargeability due to their low power density, although they provide a relatively high energy density. Meanwhile, another sodium-ion option, sodium-ion capacitors (SICs), have high power density (thanks to charge storage via fast surface ion adsorption) but extremely low energy density.
A Hybrid
Bearing in mind the strengths and limitations of both SIBs and SICs, the researchers’ answer was to develop a hybrid version of the two with newly developed anode and cathode materials. The researchers described these new materials as “a low-crystallinity multivalence iron sulfide-embedded S-doped carbon/graphene (FS/C/G) anode and a ZIF-derived porous carbon (ZDPC) cathode of 3D porous N-rich graphitic carbon frameworks.”
The Result
The result was the development of a high-performance hybrid sodium-ion energy storage device (a battery) which surpasses the energy density of commercial lithium-ion batteries and has the characteristics of supercapacitors’ power density. In other words, a high-energy, high-power hybrid sodium-ion battery that can charge in just a couple of seconds.
Applications
Clearly, this development could have a number of applications, not least for EVs. The development of a high-energy, high-power hybrid sodium-ion battery could be particularly advantageous in addressing the cost, environmental, and safety concerns associated with current lithium-ion batteries in EVs.
What Does This Mean For Your Business?
This sounds like a breakthrough in overcoming the main limitations of sodium-ion batteries. Although it’s one piece of research, the combination of adding new materials to the anode and cathode with a hybrid of SICs and SIBs appears to have created a potentially cheaper, more environmentally friendly, and better performing replacement for lithium-ion batteries.
More research and investment will be needed to fully explore and develop the idea, but it is a promising development in terms of its potential to provide a boost to the flagging EV market. The fact that this new battery can charge in seconds and offers high energy density for prolonged operation means it could tackle challenges like range anxiety and reduce worries about the availability of an effective charging network in the UK. A cheaper battery may also mean lower prices for EVs, which could provide a further boost to the market. This breakthrough (although it needs more exploration) could prove to be a big leap forward, with a positive impact on many industries as well as helping to reduce environmental damage (no need for lithium mining).
That said, it could be not-so-welcome news for countries that have recently discovered potentially lucrative large lithium deposits, e.g. the US (at the McDermitt Caldera), Iran (Qahavand Plain), Nigeria, and India (the Reasi district of Jammu and Kashmir).
Video Update : Meta Launches Llama-3 (Ahead of Schedule)
This week’s Social and AI news video-update takes a look at Meta’s new offering, namely Llama-3, which has been released well ahead of schedule and provides something pretty interesting to play around with.
Tech Tip – Enable Clipboard History for Easy Access to Multiple Clipboard Items
If you frequently copy and paste various items, Windows Clipboard History is an invaluable tool that saves multiple clipboard items for later use, allowing you to access a history of copied text, images, or files. Here’s how to use it:
– Press Win + V to open the clipboard history panel. If it’s your first time using it, you may need to enable Clipboard History, either by clicking the ‘Turn on’ button that appears in the panel, or by typing ‘Clipboard Settings’ into the Start menu and toggling the ‘Clipboard history’ switch to ‘on’.
– Once enabled, each item you copy will be saved in the clipboard history, and you can access and paste older items by pressing Win + V and clicking on the item you want to use.
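For admins or anyone who prefers scripting, the same toggle can also be set via the registry. This is a sketch, assuming a standard Windows 10/11 build; the per-user value below is widely documented as the one the Settings toggle writes, but back up the registry and verify on your own build first:

```shell
# Run in PowerShell. Enables Clipboard History for the current user;
# sign out and back in (or restart Explorer) for the change to take effect.
reg add "HKCU\Software\Microsoft\Clipboard" /v EnableClipboardHistory /t REG_DWORD /d 1 /f

# To turn it off again, set the value back to 0:
# reg add "HKCU\Software\Microsoft\Clipboard" /v EnableClipboardHistory /t REG_DWORD /d 0 /f
```

Note that on managed (work) machines, a group policy may override this per-user setting.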
Featured Article : A Big Stink About Ink
After HP tried to dismiss a lawsuit from customers angry at a firmware update (which meant that their HP printers wouldn’t work with third-party ink cartridges), we look at how HP is answering the arguments within the antitrust ink cartridge lawsuit and what the implications could be for customers.
The Lawsuit
Back in January, printing giant HP was sued in a Federal court in Chicago by 11 consumers (a class action lawsuit) who claimed that their HP printers wouldn’t accept replacement ink cartridges made by other manufacturers, thereby forcing them to pay artificially high prices for HP-branded cartridges. The lawsuit accused HP of violating US and state antitrust laws in a bid to monopolise the market for replacement ink.
The plaintiffs allege that they weren’t told that automatic software updates (firmware updates between late 2022 and early 2023) from HP would disable some printers unless HP-branded ink was used and that faced with non-functional printers, they were then forced to purchase more expensive HP-branded ink that they would not otherwise have purchased.
Damages
The plaintiffs in this case are seeking damages of more than $5 million from HP, including the cost of their useless third-party cartridges (the ones that won’t work in their printers because of the firmware update), as well as an injunction to disable the part of the firmware updates that prevents the use of third-party ink.
Trying To Get It Dismissed
HP’s lawyers recently attempted to have all 79 causes of action in the lawsuit dismissed on the grounds that the central premise of the Plaintiffs’ case was wrong, i.e. that HP failed to disclose to consumers that their printers were equipped with “dynamic security” measures designed to prevent the use of third-party printer cartridges that copy HP’s security chips, thereby locking them into an aftermarket where they were overcharged.
HP argued that it goes to great lengths to disclose that its printers are intended to work only with cartridges that “have an HP chip, and that they may not work with third-party cartridges that do not have an HP chip.” HP also argued that “this information is displayed in clear terms on the printer box, on HP’s website, and in many other materials.” It also highlighted that “many third-party cartridges are not affected by dynamic security. HP does not block cartridges that reuse HP security chips, and there are many such options available for sale. Nor does HP conceal its use of dynamic security.”
HP’s lawyers additionally argued that the plaintiffs did not allege that the firmware updates were unauthorised, and that many plaintiffs claim they purchased HP-branded ink cartridges after receiving the software or firmware updates, after which their printers began to function properly again.
In short, HP’s lawyers attempted to find a long list of reasons to have the lawsuit dismissed.
Previously
These types of allegations against HP have gone on for some time now. For example, back in 2019, HP agreed to resolve related consumer claims in a California case, for a $1.5 million payment, without admitting any wrongdoing (as part of the settlement). However, just last year (in California) a judge said that HP must at least face some claims that it designed some all-in-one printers to stop scanning and faxing when the machine was low on ink, thereby forcing consumers to buy cartridges.
The Backdrop
All these antitrust printing arguments are taking place at a time when HP has been through a long period of shrinking revenues, mainly due to enterprise customers, affected by the uncertain economic environment, holding off on their hardware purchases for longer.
Instant Ink Subscription & All-in-One Service
Following a strategy re-think, two solutions that HP has devised to help it through these difficult times are its ‘Instant Ink’ service and its All-in-One service, both of which see it focusing on a subscription model going forward.
HP’s Instant Ink service is a subscription-based model that is beneficial for users who want to avoid the inconvenience of running out of ink and dealing with last-minute replacements. It also helps in managing printing costs more predictably. With Instant Ink (for a monthly fee, on an agreed plan), the HP printer’s ability to monitor ink levels means that before users’ ink runs low, HP sends replacement cartridges directly to the doorstep. HP claimed to have had 13 million sign-ups to the service at the beginning of March.
As the name suggests, the All-in-One service, which launched in the US last month, includes not just the ink but the hardware as well, i.e. HP Envy or HP OfficeJet models. It is a two-year subscription contract, based on a printed-page plan, with cancellation fees (to raise the barriers to exit).
In addition to trying to reduce its costs, HP’s CEO, Enrique Lores, speaking recently at the Morgan Stanley Technology, Media and Telecom conference, outlined HP’s strategy since the 2019 rethink as trying to “protect supplies revenue by upping subscription services, selling hardware loaded with ink, smart models, and charging more for printers when a customer isn’t committing to HP ink.”
AI Apps Too
HP is also hoping that AI will boost PC sales and has indicated that alongside its PCs, it’s developing new AI applications to run on top of its installed base of more than 200 million commercial devices.
Printing Declining Anyway
Despite HP’s court battles over printer ink and its move to a subscription-based model, for many businesses the need (and demand) for printers and ink has declined in recent years. This has been due to factors like the greater proliferation of digital tools and technologies, advancements in cloud computing and software-as-a-service (SaaS) platforms, and businesses moving towards greener practices (despite printer companies trying to produce more sustainable/greener ink). The need to reduce costs has also favoured digital storage over printed documents, while disruption in global supply chains (e.g. for paper) and the effects of the pandemic further lowered demand for printers and ink.
What Does This Mean For Your Business?
Having to constantly renew expensive ink cartridges, or running out of ink at the wrong time, has long been a significant cost and source of frustration for many businesses. In recent years, however, many businesses, for the reasons above, have become more reliant on cloud and digital solutions rather than printed documents. HP itself had to change its strategy in 2019, moving customers to a subscription model for its ink and hardware in order to weather difficult economic times and falling demand.
This court case around HP’s attempt to curtail consumers’ adoption of cheaper third-party ink cartridges in favour of more expensive HP ones is likely to be unwelcome and reputationally damaging for HP at a time when it needs to protect its position in the marketplace. For competitors, HP’s dominance being challenged is good news and could provide a beneficial commercial outcome for them if events go the wrong way for HP.
For business customers who still need a printer, the ability to have trouble-free operation with their printers and to be able to benefit from the choice of using different, lower-priced print cartridge alternatives are likely to be valuable. Most of us will understand the frustration that printer ink problems can cause.
Looking ahead for HP, its cost-cutting, its shift to a subscription model for its ink/printer products, plus the promise of developing AI apps for its large installed base of commercial devices are the ways it hopes to turn around the declining revenues and challenges of recent years. The company has a trusted business brand, and the hope for HP is that its valuable brand won’t be tarnished too much by the outcome of the lawsuit that’s currently making the headlines.
Tech Insight : Stop Your Data Being Used To Train AI
In this insight, we look at the process of AI training, the potential pitfalls of misused data, and what measures can be taken to protect your personal and business data from being used to train AI.
Data – For AI Training
AI training, at its core, involves feeding large datasets to algorithms, thereby enabling them to learn and make ‘intelligent’ decisions. These datasets are often culled from user-generated content across various platforms. Understanding the source and nature of this data is crucial for recognising the implications of its use.
Data, therefore, is the lifeblood of AI models and the quality, quantity, and variety of data directly influences an AI model’s performance. For example, language models require vast amounts of text data to understand and generate human-like responses, while image recognition models need diverse visual data to improve accuracy.
One of the most contentious ways that generative AI companies have allegedly gathered enough training data in recent years, and one that has resulted in many lawsuits, is the scraping (automated collection) of online content. High-profile examples include:
– A class action lawsuit filed in the Northern District of California accused OpenAI and Microsoft of scraping personal data from internet users, alleging violations of privacy, intellectual property, and anti-hacking laws. The plaintiffs claimed that this practice violates the Computer Fraud and Abuse Act (CFAA).
– Google was accused in a class-action lawsuit of misusing large amounts of personal information and copyrighted material to train its AI systems, thereby raising issues about the boundaries of data use and copyright infringement in the context of AI training.
– A class action lawsuit against Stability AI, Midjourney, and DeviantArt claiming that the AI companies used copyrighted images to train their AI systems without permission.
– Back in February 2023, Getty Images sued Stability AI alleging that it had copied 12 million images to train its AI model without permission or compensation.
– Last December, The New York Times sued OpenAI and Microsoft, alleging that they used millions of its articles without permission/consent (and without payment) to help train chatbots.
In many of these cases, the legal defence offered for such use has been “fair use” and “transformative outputs.” For example, the AI companies know that under US law, the “fair use” doctrine allows limited use of copyrighted material without permission or payment, especially for purposes like criticism, comment, news reporting, teaching, scholarship, or research.
What About Your Data? Could It Be Used For AI Training … And How?
When it comes to your personal and business data, many of the big AI companies have already scraped the web, so whatever you’ve posted is probably already in their systems. However, your data could also end up being part of AI training data through several other channels. For example:
– Online Activity. When you browse websites, search engines, and social media, companies collect your data to personalise services and train AI to predict user-behaviour.
– Device usage. Smartphones, wearables and smart home devices collect data about your daily activities, locations, health statistics, and preferences, all of which is useful for training AI in areas like health monitoring, personal assistance, and device-optimisation.
– Service Interactions. Interacting with customer service chatbots or voice assistants provides conversational data that helps train AI to understand and generate human-like responses.
– Content creation. Uploading videos, writing reviews, or other content creation on platforms can provide data for AI to learn about content preferences and creation styles.
– Transactional Data. Purchases, financial transactions, and browsing products online give insights into consumer behaviour, used by AI to enhance recommendation engines and advertising algorithms.
All these methods, therefore, which could involve your data, help AI systems learn and adapt to provide more personalised and efficient services.
The Risks of Data Misuse
There are, of course, risks in having your data used/misused by AI. These risks include:
– Privacy and security concerns. The primary risk of using data in AI training is the potential for significant privacy breaches. Sensitive information, if not adequately protected, can be exposed or misused, leading to serious consequences for individuals and businesses alike.
– Bias and ethical implications. Another critical concern is the propagation of bias through AI systems. If AI is trained on biased or unrepresentative data, it can lead to unfair or prejudiced outcomes, which is especially problematic in sectors like recruitment, law enforcement, and credit scoring.
Checking
For some people, particularly artists, the use of their creative artwork/images to train AI is a particular issue. The website https://haveibeentrained.com/, for example, is an online tool that uses clip-retrieval to search the largest public text-to-image datasets. Artists can use it to find their images in those datasets and request that they be opted out of (i.e. removed from) future generative AI training.
What Proactive Measures Can You Take To Protect Your Data?
Bearing in mind the significant privacy risk posed by AI, there are a number of proactive measures you can take to stop your data from being used to train AI. For example:
Opt-Out Options and User Consent
Many of the services you use from the big tech companies provide mechanisms for users to opt-out of data sharing. Familiarising yourself with these options and understanding how to activate them is essential for maintaining control over your data. Examples include:
– If you store your files in Adobe’s Creative Cloud, to opt out of having them used for training, for a personal account, go to the Content analysis section, and click the toggle to turn it off.
– If you’re a Google Gemini (AI) user, to prevent your conversations being used, open Gemini in a browser, click on Activity, and select the Turn Off drop-down menu.
– If you’re a ChatGPT account holder and are logged in through a web browser, select ChatGPT, Settings, Data Controls, and then turn off Chat History & Training.
– For the Squarespace website building tool, to block AI bots, open Settings (in your account), find Crawlers, and turn off Artificial Intelligence Crawlers.
These are just a few examples and it will be a case of going through each of the main services you use and trying to find the opt-out (perhaps using Google to help as you go). However, it’s worth noting that some are either very difficult to find or simply aren’t available for certain types of account. Overall, this can be quite a time-consuming process.
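If you publish content on a website you control directly (rather than through a hosted builder), most of the major AI companies say their training crawlers honour robots.txt. Below is a minimal sketch; the user-agent tokens are those the vendors have published at the time of writing, but tokens change and compliance is voluntary, so check each vendor’s current documentation:

```
# robots.txt — ask known AI training crawlers not to collect this site.

User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended   # Opts content out of Google's AI training
Disallow: /

User-agent: CCBot             # Common Crawl (datasets widely used for AI training)
Disallow: /
```

Note that this only discourages future crawling; it does not remove anything already collected.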
Enhanced Data Management Practices
Businesses should implement strict data management policies that govern the collection, storage, and use of data. These policies can help ensure that data is handled ethically and in compliance with relevant data protection laws and shielded from AI use for training.
Leveraging Technology for Data Security
Advanced technological solutions, such as encryption and secure data storage systems, may also be able to play a critical role in protecting data from unauthorised access and breaches that could lead to it finding its way into the hands of AI companies for training.
What Does This Mean for Your Business?
For businesses today, the pervasive use of data by AI underscores the dual imperatives of protection and vigilance. The reality is that many AI companies have likely already collected extensive swathes of public internet data, including potentially from your own business activities, which poses a distinct challenge. This means that data posted online (either deliberately or inadvertently) may already be part of training sets used to enhance AI capabilities.
That said, businesses still hold significant power to influence future data usage and secure existing data. For example, businesses can take proactive steps by regularly reviewing the privacy policies and settings of the digital platforms they use. This includes social media, cloud storage, business software, and any platform where data is stored or shared. Although navigating these settings can be complex, finding and activating opt-out features may be necessary for maintaining control over how your data is used.
Businesses may also wish to educate their employees about data sharing and privacy settings. Training sessions can help employees understand the importance of data-privacy and the steps they can take to ensure data is not inadvertently shared or used for AI training without consent.
Developing and enforcing robust data management policies is essential anyway and this not only complies with data protection regulations but also limits unnecessary data exposure that could be exploited by AI systems. These policies should govern how data is collected, stored, and shared, ensuring that data handling within the company is done ethically and responsibly.
Deploying advanced technological solutions such as encryption, secure access management, and data loss prevention tools can also significantly reduce the risk of unauthorised data access. This is particularly relevant in preventing breaches that could see sensitive information being used to train AI (without your knowledge). While it is challenging to completely control all data that may already be within AI training datasets, businesses can still exert some significant influence over their current data handling and future engagements.
Finally, with ongoing AI legal battles and new regulations, staying informed about your rights and the latest developments in data privacy law could be prudent. This knowledge could help businesses advocate for their interests and respond more adeptly to changes in the legal landscape that affect how their data can be used.
Tech News : FM Market Dominance Concerns
Following an initial report on AI Foundation Models (FMs) last year, the Competition and Markets Authority (CMA) has expressed “real concerns” about the market and is investigating the dominance of a small number of big tech firms at its centre.
Foundation Models
Foundation models (FMs) are AI systems / large-scale machine learning models that are pre-trained on large amounts of data and can be adapted to a range of different, more specific purposes. Examples include the GPT (Generative Pre-trained Transformer) models such as GPT-3 and GPT-4 (different versions of the model behind ChatGPT).
What’s The Issue?
As highlighted by the original CMA report, fast-changing FMs have the potential to transform how we live and work, i.e. they possess significant potential to impact people, businesses, and the UK economy. The CMA wants to ensure this AI market develops in a way that doesn’t undermine consumer trust and isn’t dominated by a few players who can exert market power that prevents the full benefits being felt across the economy.
The previous report on the FM market led to a set of proposed guiding principles to help achieve this, including making FM developers and deployers accountable for outputs provided to consumers, ensuring sufficient choice for businesses so they can decide how to use FMs, and stressing the need for fair dealing, i.e. no anti-competitive conduct, including anti-competitive self-preferencing, tying, or bundling.
Move Away From “Winner Takes All Dynamics”
Now, in the next step from its previous report and development of guiding principles around the FM market, the CMA has outlined growing concerns. The CMA’s Sarah Cardell, for example, has expressed the need to learn from history and move away from the kind of “winner takes all dynamics” that has led to the rise of a small number of powerful platforms.
3 Areas Of Focus
The CMA identifies what it believes to be three key interlinked risks to fair, effective, and open competition in the FM market. These are:
1. Firms controlling critical inputs for developing FMs restricting access to shield themselves from competition.
2. Powerful incumbents possibly exploiting their positions in consumer (or business-facing) markets to distort choice in FM services and restrict competition in deployment.
3. Partnerships involving key players exacerbating existing positions of market power through the value chain.
In an update paper, the CMA has, therefore, provided details on how each risk would be mitigated by its principles, and also by the actions it’s taking at the moment.
An Interconnected Web
One point highlighted by the CMA that illustrates the complication of regulating the FM market effectively is the “interconnected web” of “over 90 partnerships and strategic investments involving the same firms: Google, Apple, Microsoft, Meta, Amazon, and Nvidia (which is the leading supplier of AI accelerator chips).”
The CMA says that although it recognises the wealth of resources, expertise and innovation these large firms can bring to bear, the role they will likely have in FM markets, and that such partnerships can play a pro-competitive role in the technology ecosystem, it also recognises that powerful partnerships and integrated firms shouldn’t reduce rival firms’ ability to compete, or be used “to insulate powerful firms from competition”.
As CMA CEO Sarah Cardell said: “The essential challenge we face is how to harness this immensely exciting technology for the benefit of all, while safeguarding against potential exploitation of market power and unintended consequences.”
What Does This Mean For Your Business?
Given the fast pace with which the FM market is growing and changing, plus the fact that there is an ever more complicated “interconnected web” of partnerships between the big tech companies at the heart of this market, it’s not surprising that the regulator wants to stay involved to have any chance of understanding and regulating it effectively. As highlighted by the CMA’s CEO Sarah Cardell, the transformative promise of FMs as a potential “paradigm shift” for societies and economies is the prize. Although the big tech companies have been the big investors in the development of AI so far, it’s still a ‘market’ that needs fair, open, and effective competition where there’s plenty of choice for buyers, prices are kept low enough, and innovation isn’t stifled.
We’re still at the stage where guidelines are being given and warnings are being issued, but if other aspects of big tech company activities are anything to go by, this is going to be a very challenging market for the CMA to stay on top of and regulate. As the CMA has said, it’s going to be a difficult job to confront the “winner takes all dynamics” of the big tech companies, and it remains to be seen how successfully the CMA can regulate this fast-moving marketplace.