Featured Article : DeepSeek? Here’s The $500 Billion Stargate

At a time when China’s “DeepSeek” chatbot has jolted the AI industry (having developed incredibly quickly and on a shoestring budget), we take a look at the US “Stargate Project,” a $500 billion initiative aimed at cementing the United States’ leadership in artificial intelligence (AI) by constructing cutting-edge infrastructure.

Heated Debate

Announced by President Donald Trump and backed by industry titans such as SoftBank, OpenAI, Oracle, and MGX, the Stargate Project has garnered significant attention. With promises of transformative economic benefits alongside concerns over its financial feasibility, energy demands, and political undertones, it is rapidly becoming one of the most talked-about developments in the AI landscape.

However, the project has also ignited a heated debate (laptop bags at dawn) among the biggest names in tech, including Elon Musk, Sam Altman, Satya Nadella, and Marc Benioff.

What Is the Stargate Project?

At its core, the Stargate Project is an ambitious plan to build state-of-the-art AI infrastructure across the United States. The initiative will see an initial investment of $100 billion, ramping up to $500 billion over four years. The funds will be used to construct massive data centres, with the first one-million-square-foot facility already underway in Texas. According to OpenAI, the project aims to secure American dominance in AI, create hundreds of thousands of jobs, and drive global economic growth.

The venture is spearheaded by SoftBank and OpenAI, with SoftBank’s Masayoshi Son serving as chairman. While SoftBank will handle financial responsibilities, OpenAI will oversee operations. Key technology partners include Microsoft, Nvidia, Arm, and Oracle, marking a collaborative effort among some of the most influential companies in the tech industry.

President Trump, speaking at the White House, declared the Stargate Project as the “largest AI infrastructure project in history.” Emphasising its strategic importance, he stated, “We want to keep it in this country. China’s a competitor and others are competitors – we want it to be in this country, and we’re making it available.”

The Numbers Behind the Vision

The scale of the Stargate Project does appear to be pretty staggering. For example, each data centre will require an estimated 6 GW of power, with annual operating costs predicted to reach $4 billion per site! In total, the energy consumption of these centres could significantly strain regional power grids, with projections suggesting that data centres could account for a massive 12 per cent of U.S. energy use by 2028, up from 4.4 per cent today.

Research by the Lawrence Berkeley National Laboratory predicts power demands for data centres will rise to between 325 TWh and 580 TWh over the next four years. This has raised concerns among environmental groups and energy experts, who worry about the sustainability of such rapid expansion.
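As a rough, purely illustrative back-of-envelope calculation (our own arithmetic, not from the report), the quoted 6 GW per-site figure can be compared with that projected national range, assuming (unrealistically) that a site ran flat out all year:

```python
# Rough, illustrative back-of-envelope only: compare one 6 GW site, assumed
# (unrealistically) to run at full capacity all year, with the projected
# 325-580 TWh of annual US data-centre demand quoted above.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

site_power_gw = 6                                        # quoted per-site requirement
site_energy_twh = site_power_gw * HOURS_PER_YEAR / 1000  # GWh per year -> TWh

low_twh, high_twh = 325, 580  # projected US data-centre demand range (TWh/year)

print(f"One 6 GW site running flat out: ~{site_energy_twh:.0f} TWh/year")
print(f"Share of projected demand: {site_energy_twh / high_twh:.0%} to {site_energy_twh / low_twh:.0%}")
```

Even on that generous assumption, a single site would account for somewhere around a tenth to a sixth of the projected national data-centre demand, which goes some way to explaining the concerns about regional grids.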

Criticised By Musk

Despite the grand vision, the Stargate Project has faced scepticism regarding its financial feasibility. Elon Musk, a frequent critic of OpenAI and its CEO Sam Altman (perhaps a big clue to the reason for his criticism), has cast doubt on the project’s funding. “They don’t actually have the money,” Musk recently claimed on X (formerly Twitter). “SoftBank has well under $10 billion secured. I have that on good authority.”

Sam Altman, however, was quick to rebut Musk’s allegations, stating, “Wrong, as you surely know. Want to come visit the first site already underway? This is great for the country.” OpenAI maintains that the funding commitments are solid, with SoftBank’s $24.3 billion in cash reserves and MGX’s $100 billion in capital commitments cited as evidence. Oracle, another key partner, boasts $11 billion in cash on its balance sheet, while OpenAI itself has secured over $10 billion in venture capital.

Microsoft Weighs In Too

Adding to the voices from big tech leaders about the project, Microsoft CEO Satya Nadella has also weighed in, saying, “All I know is, I’m good for my $80 billion,” referencing Microsoft’s massive investment in Azure data centres to support AI efforts. Nadella’s comments essentially highlight Microsoft’s ongoing partnership with OpenAI, though tensions have emerged over OpenAI’s recent decision to end Microsoft’s exclusivity as its cloud provider.

Industry Feud

The announcement of the Stargate Project appears to have exposed deep rifts within the tech industry. Elon Musk, who co-founded OpenAI but later parted ways, has been vocal in his criticism of the organisation’s shift towards profit-driven ventures. His scepticism extends beyond financial concerns, as he has accused OpenAI of abandoning its original mission to prioritise humanity’s benefit.

Meanwhile, Salesforce CEO Marc Benioff has raised questions about the potential fallout between OpenAI and Microsoft, saying: “I think it’s extremely important that OpenAI gets to other platforms quickly because Microsoft is building their own AI,” adding that Microsoft’s hiring of Mustafa Suleyman (a co-founder of DeepMind) may signal its intent to develop independent AI models.

Microsoft’s Nadella, however, has downplayed the possibility of a rift, describing Microsoft’s relationship with OpenAI as a “critical partnership” and emphasising that Microsoft retains the right of first refusal for OpenAI’s cloud needs and is committed to supporting the organisation’s ambitions.

Political and Environmental Implications

The Stargate Project appears to be as much a political statement as it is a technological endeavour. President Trump has framed the initiative as a dual strategy, i.e. to counter China’s rapid advancements in AI and to revitalise the U.S. economy through technological innovation. By accelerating domestic AI infrastructure development, the U.S. hopes to not only secure its position as a global leader in the field but also to reindustrialise key sectors, generate jobs, and strengthen national security in the face of growing global competition. Some economic commentators have suggested that the debt-laden U.S. could be showing signs of an ‘empire’ now in decline, with China and BRIC nations emerging as dominant players on the global stage. The Stargate Project, therefore, could be seen as an effort to reassert America’s dominance by leveraging technological leadership as a cornerstone for economic and geopolitical power in the 21st century.

Environmental Concerns

However, the project’s environmental impact has become a point of contention. For example, as highlighted in a recent LinkedIn post by Mark Nelson, managing director of the Radiant Energy Group, the Stargate Project’s data centres will have enormous power requirements. He estimated that each data centre would require at least 6 GW of firm power capacity, warning that this could strain existing energy infrastructure, exacerbate shortages, and significantly drive up costs. Nelson also criticised the project’s reliance on fossil fuel-based energy generation, arguing that this approach runs counter to global climate goals. His detailed analysis has sparked broader debate, with environmentalists calling for a stronger focus on sustainable energy solutions to power such ambitious developments.

President Trump, however, is unlikely to heed such environmental concerns, given his long-standing scepticism of climate change initiatives, his reference to “drill, baby, drill” in his inauguration speech, and his signing of an executive order directing the U.S. to withdraw from the Paris Climate Agreement for the second time. President Trump, therefore, appears more committed to prioritising economic growth over environmental regulations. Also, by declaring a “national energy emergency,” Trump has taken steps to reverse previous climate policies and bolster oil and gas development, further indicating that projects like Stargate, with their substantial energy demands, are in line with his administration’s priorities (which aren’t the same priorities as environmental campaigners).

A Divisive Vision for the Future

The Stargate Project may be an ambitious plan to reshape AI infrastructure, with promises of economic and technological breakthroughs, but its financial, operational, and environmental obstacles have sparked sharp debates among industry leaders and policymakers. As construction begins in Texas, the project remains a focal point for discussions about the future of AI and its broader implications.

What Does This Mean For Your Business?

The Stargate Project embodies both ambition and controversy. On one hand, the promises of economic revitalisation, job creation, and technological advancement reflect a vision for a transformed future. On the other, the financial feasibility of such a monumental endeavour, coupled with its environmental and political undertones, is fuelling intense debate.

For proponents, the project offers a strategic response to growing competition from nations like China, hopefully positioning the U.S. as a global leader in AI infrastructure while potentially reinvigorating key sectors of its economy. The involvement of major players such as SoftBank, OpenAI, Oracle, and Microsoft lends credibility to its aspirations. However, critics (like Musk) have questioned whether the funding commitments are truly secure and whether the reliance on non-renewable energy undermines global climate efforts.

The environmental concerns raised may also highlight a significant challenge, i.e. balancing progress in AI with sustainable practices. With President Trump prioritising energy independence and economic growth over climate commitments, these issues are unlikely to disappear from the discourse anytime soon.

For businesses, the Stargate Project could herald significant change. By dramatically increasing the availability of cutting-edge infrastructure, it has the potential to lower entry barriers for smaller companies while further empowering established players like Microsoft, Oracle, and Nvidia. This could lead to intensified competition, spurring innovation but also challenging businesses to keep pace with rapidly advancing technologies. The influx of infrastructure might enable startups to leverage powerful AI tools (previously out of reach), creating a more dynamic and diverse AI ecosystem. However, with such a significant investment at stake, large corporations could also use their scale to dominate key markets, potentially sidelining smaller players in the process.

Beyond competition, the project’s focus on domestic production and innovation could shift global market dynamics, reshaping supply chains and forging new partnerships. By making the U.S. a central hub for AI development, it might draw talent and investment away from other nations, accelerating its dominance in a field critical to the future of technology and industry. This centralisation could benefit American businesses with greater access to advanced AI capabilities but also risk exacerbating global inequalities in technological advancement.

The Stargate Project, therefore, could be seen to encapsulate the complexities of navigating the intersection of technology, economics, and geopolitics in a rapidly changing world. Its success or failure will not only shape the future of AI but also reflect broader societal priorities and the willingness of leaders to address the pressing challenges of our time. Whether it becomes a successful example of progress or a cautionary tale remains to be seen.

Tech Insight : ‘Operator’ – Agents That Automate Web Tasks

OpenAI has introduced ‘Operator’, a new AI-powered agent designed to autonomously perform web-based tasks on behalf of users.

Just In The US For Now

Currently available as a research preview, Operator is accessible to ChatGPT Pro subscribers in the United States, with plans to expand availability in the near future. This launch signifies a major step in OpenAI’s efforts to redefine how artificial intelligence interacts with the digital world.

What is Operator?

At its core, Operator is an AI agent capable of navigating the web much like a human would. An ‘AI agent’ is essentially a software program that autonomously performs tasks or actions on behalf of a user.

Powered by OpenAI’s Computer-Using Agent (CUA) model, Operator can complete tasks such as booking travel, ordering groceries, and even creating memes. It interacts with websites via simulated mouse clicks, scrolling, and typing, mirroring how a person would operate a browser.

Unlike traditional AI integrations that rely on APIs, Operator works by interpreting screenshots and graphical interfaces. This makes it adaptable to various websites, even those without specific developer tools or APIs. OpenAI CEO Sam Altman describes Operator as “an early glimpse into the future of AI agents automating our digital interactions.”

However, Operator is not perfect. OpenAI has explicitly labelled it as a “research preview”, cautioning users about potential mistakes and urging active supervision during high-stakes tasks.

How Does Operator Work?

Operator is built on a specialised version of OpenAI’s flagship GPT-4o model (the Computer-Using Agent, or CUA, model mentioned above), which combines advanced reasoning capabilities with vision technology to interpret on-screen elements. Users can initiate tasks by describing them in natural language. For example:

– “Book a flight from London to Madrid for next Thursday.”

– “Order my weekly groceries from Instacart.”

– “Make a dinner reservation for two at an Italian restaurant in central London.”

Operator then uses its dedicated browser to execute the task, visible to the user via a pop-up window. It can navigate menus, fill out forms, and confirm actions. If it encounters challenges (e.g. CAPTCHAs, password fields, or a particularly complex interface) it pauses and prompts the user to intervene. Once the issue is resolved, the user can hand control back to Operator, ensuring seamless collaboration.
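To make that “screenshot in, simulated input out” loop more concrete, here is a minimal, hypothetical sketch of that kind of agent cycle. It is not OpenAI’s actual implementation or API, and every function in it is an illustrative stand-in:

```python
# Minimal, hypothetical sketch of a "screenshot in, simulated input out" agent
# loop of the kind described above. This is NOT OpenAI's implementation or API;
# every function here is an illustrative stand-in.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "scroll", or "ask_user"
    detail: str = ""   # element description, text to type, or message to the user

def capture_screenshot() -> bytes:
    """Stand-in for grabbing the current browser frame as an image."""
    return b"<png bytes>"

def model_next_action(screenshot: bytes, goal: str) -> Action:
    """Stand-in for the vision model choosing the next step towards the goal."""
    return Action(kind="ask_user", detail="CAPTCHA detected - please take over")

def run_task(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        action = model_next_action(capture_screenshot(), goal)
        if action.kind == "ask_user":
            # Mirrors Operator pausing and handing control back to the user
            print(f"Handing control back to the user: {action.detail}")
            return
        print(f"Performing {action.kind}: {action.detail}")

run_task("Book a flight from London to Madrid for next Thursday")
```

The key design point, as described by OpenAI, is that control passes back to the human whenever the agent hits something it should not handle on its own, such as a CAPTCHA or a password field.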

Operator also allows users to save frequently performed workflows as reusable tasks, which can be started with a single click. Also, it supports sharing video recordings of completed tasks, enabling users to showcase or review the agent’s actions.

Availability and Pricing

For now, Operator is a research preview that’s exclusive to ChatGPT Pro users in the United States, with the Pro plan costing $200 per month. OpenAI plans to roll out the feature to other tiers, such as Plus, Team, and Enterprise subscriptions, as well as expand its availability to users in other countries. However, Altman has noted that European expansion may face delays due to regulatory hurdles.

Safety, Privacy, and Limitations

Although software operating autonomously sounds a little risky, OpenAI has emphasised safety as a cornerstone of Operator’s design. For example, the tool includes multiple safeguards, such as user confirmations for critical actions, refusal patterns for prohibited tasks, and monitoring for suspicious activity. Operator also requires users to manually handle sensitive inputs like credit card details or passwords. In terms of privacy, OpenAI also assures users that these interactions are not logged or captured in screenshots.

Uses Screenshots To “See”

Screenshots, which Operator uses to “see” and interact with interfaces, are securely stored and can be deleted by the user. OpenAI says Operator retains user data for up to 90 days unless deleted earlier, thereby giving users some control over their privacy.

However, despite its impressive capabilities, Operator is limited in several key areas, such as:

– It can’t really perform complex or specialised tasks, such as creating detailed presentations or managing intricate calendar systems.

– High-stakes actions, such as sending emails or conducting financial transactions, are restricted in this early stage (which is perhaps just as well!).

– Usage is subject to rate limits to prevent overloading the system.

Benefits and Criticisms

Some of the key benefits of Operator could be summed up as:

– Enhanced productivity. By automating repetitive tasks, Operator frees up time for users.

– Broad applicability. Its ability to interpret GUIs makes it versatile across a wide range of websites.

– Customisation. Users can save workflows for regular use, streamlining frequent activities.

– Collaboration with businesses. Partnerships with platforms like DoorDash, Uber, and Instacart can ensure smooth operation and compliance with terms of service.

Inevitably, with something this complex that’s still in its preview stage, where it hasn’t been widely used by millions of users yet, there are some potential issues and concerns. For example:

– Reliability concerns. As a research preview, Operator may not perform flawlessly, and may require quite a bit of human oversight.

– Privacy risks. While OpenAI has implemented robust safeguards, the reliance on screenshots and data retention has raised concerns among privacy advocates.

– Accessibility. The steep $200 monthly subscription fee may prove a barrier to less affluent users and organisations with more modest budgets.

– Ethical considerations. The potential misuse of autonomous AI agents, such as for phishing scams or malicious activity, could prove to be a significant challenge.

The ‘World’ Project

Operator is not an isolated innovation. In fact, it forms part of a broader vision spearheaded by OpenAI’s Sam Altman. His ‘World’ project, formerly known as Worldcoin, aims to address the growing challenge of distinguishing humans from AI agents in digital spaces. By scanning users’ irises with a metallic orb, World creates blockchain-based digital identities, known as World IDs, to verify “proof of personhood.”

Why?

World is now exploring how to link AI agents like Operator to these digital identities. This would allow businesses and users to confirm that an agent is acting on behalf of a real person. For example, an Operator task could be tagged with a verified World ID, thereby ensuring trustworthiness in sensitive interactions such as ticket purchases or legal transactions.

Criticism of World

While the concept is ambitious, it has faced significant criticism. For example, World’s reliance on biometric data has raised privacy concerns, and the project has faced regulatory scrutiny in Europe. That said, proponents argue that linking AI agents to verified identities shows promise and could foster trust and mitigate risks in a rapidly evolving digital ecosystem.

What Does This Mean For Your Business?

OpenAI’s Operator gives a fascinating glimpse into the future of AI, where software agents can automate an increasing number of tasks on behalf of users. By leveraging its ability to interact with websites much like a human, Operator offers an innovative and adaptable approach to web-based automation. Its potential to save time, streamline processes, and improve productivity is undeniable, particularly for users and businesses willing to invest in the technology and learn to navigate its current limitations.

However, as promising as Operator may be, it is still a work in progress. As a research preview, it is not yet fully reliable, with OpenAI itself acknowledging the need for active user supervision and manual intervention in many situations. While there do appear to be safeguards in place around privacy and sensitive data handling, there is still a long way to go to address concerns about security, privacy, and ethical use. For now, Operator’s high price point and restricted availability may make it inaccessible to a broader audience, thereby limiting its immediate impact.

The larger vision behind Operator, as part of Sam Altman’s interconnected strategy with the World project, offers a glimpse into the challenges and opportunities of an AI-driven future. By linking AI agents to verified digital identities, OpenAI and World could help foster trust and transparency in a landscape increasingly populated by bots and automated systems. While the concept holds promise, it also raises significant questions about privacy, control, and the implications of such systems for individual autonomy and online interactions.

Operator is an ambitious step forward in AI innovation, but it is also a reminder of the complexities that come with introducing such transformative technologies. Its success will depend not only on its technical evolution but also on OpenAI’s ability to address the legitimate concerns surrounding its use, ensuring it becomes a tool that enhances lives rather than complicating them. As the technology matures and expands to more users, Operator could redefine how we interact with the digital world, but only if its deployment is handled responsibly, transparently, and inclusively.

Tech News : Google Combats Fake Reviews (After Investigation)

Following an extensive investigation by the UK’s Competition and Markets Authority (CMA), Google has agreed to implement significant changes to its processes for detecting and addressing fake reviews.

To Improve Transparency and Trust

This landmark development is essentially aimed at ensuring fairness for consumers and businesses in an online marketplace that’s increasingly influenced by customer reviews. It’s hoped, therefore, that the new measures will improve transparency and trust in online reviews while providing consequences for businesses and individuals engaging in dishonest practices.

What’s The Problem With Fake Reviews?

Online reviews have become a powerful tool in shaping consumer decisions, with the CMA estimating that a staggering £23 billion of UK consumer spending is influenced annually by such reviews. Research indicates that 89 per cent of consumers actually rely on online reviews when deciding on products or services. However, today’s proliferation of fake reviews threatens to undermine trust in these platforms.

The issue with fake reviews is that they create an uneven playing field, misleading consumers into choosing inferior products or services that only appear well reviewed and handing unethical businesses an unfair advantage. The problem is exacerbated by the increasing sophistication of fake review schemes, including paid reviews and bot-generated content.

Google and Amazon in the Frame

Concerns about the authenticity of reviews prompted the CMA to launch investigations into Google and Amazon back in June 2021. While Google has now reached an agreement with the CMA, the investigation into Amazon’s practices remains ongoing.

Why Was Google Under Investigation?

In the case of Google, the CMA’s investigation revealed shortcomings in its systems for detecting, removing, and preventing fake reviews. These gaps included insufficient action against suspicious patterns of behaviour and inadequate enforcement against businesses and reviewers engaged in fraudulent activity. The CMA’s scrutiny of Google centred on its compliance with consumer protection laws, particularly regarding the responsibilities of platforms hosting user-generated reviews.

Sarah Cardell, Chief Executive of the CMA, highlighted the broader implications of fake reviews, saying: “Left unchecked, fake reviews damage people’s trust and leave businesses who do the right thing at a disadvantage.”

The urgency of the issue has now led the CMA to secure legally binding commitments from Google, ensuring a more robust and transparent approach to tackling the problem.

Key Changes Google Has Agreed to Implement

In response to the CMA’s findings, Google has pledged to introduce several sweeping changes to its review system. These measures are aimed at detecting and deterring fake reviews, penalising offenders, and restoring consumer confidence in online reviews. The key undertakings agreed with the CMA by Google are:

– Enhanced detection of fake reviews. Google says it will employ more rigorous methods to identify and remove fake reviews, leveraging advanced technology and manual oversight to investigate suspicious activities (a simple, illustrative example of the kind of pattern such systems might flag follows this list). This should enable quicker and more accurate responses to fraudulent practices.

– Consequences for rogue reviewers. Individuals repeatedly posting fake or misleading reviews for UK businesses will face severe penalties. Their reviews will be deleted, and they will be banned from posting new reviews on Google, irrespective of their location.

– Sanctions for businesses engaging in fake reviews. Businesses found to be using fake reviews to inflate their star ratings will face visible warnings on their Google profiles. These alerts will inform consumers of detected suspicious activity. Additionally, businesses engaging in repeated misconduct will have all reviews removed for six months or more and will lose the ability to receive new reviews.

– Improved reporting mechanisms. Google will introduce a more robust reporting system, enabling consumers to easily report suspicious reviews or incentives offered for positive reviews. This will apply to both online and offline inducements.

– Regular oversight and reporting to the CMA. Google will report to the CMA over the next three years to ensure compliance with these commitments. This ongoing scrutiny will provide accountability and ensure that the changes are effectively implemented.

– Adaptation to evolving technology. After the three-year period, Google will have the flexibility to adapt its processes to address new challenges posed by advancements in technology, including artificial intelligence-driven fake reviews.
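Google has not published how its detection works, but to illustrate the sort of “suspicious pattern” such systems typically look for, here is a deliberately naive, hypothetical heuristic: a burst of maximum-star reviews from newly created accounts. Real systems are, of course, far more sophisticated:

```python
# Purely illustrative and deliberately naive (Google's actual systems are not
# public): one simple "suspicious pattern" heuristic - a burst of 5-star
# reviews from newly created accounts inside a short window.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Review:
    rating: int                      # 1-5 stars
    posted: datetime
    reviewer_account_age_days: int

def looks_suspicious(reviews: list[Review],
                     window: timedelta = timedelta(days=7),
                     burst_threshold: int = 10,
                     new_account_days: int = 30) -> bool:
    """Flag a burst of maximum-star reviews from young accounts for manual oversight."""
    now = datetime.now()
    recent = [r for r in reviews
              if r.rating == 5
              and r.reviewer_account_age_days < new_account_days
              and now - r.posted < window]
    return len(recent) >= burst_threshold

sample = [Review(5, datetime.now() - timedelta(days=1), 3) for _ in range(12)]
print(looks_suspicious(sample))  # True -> would be queued for human investigation
```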

The Wider Implications for Businesses and Consumers

These changes could be a major step forward in the fight against fake reviews and signal Google’s commitment to trying to foster a fairer digital marketplace. As the CMA’s Sarah Cardell says, “The changes we’ve secured from Google ensure robust processes are in place, so people can have confidence in reviews and make the best possible choices. They also help to create a level-playing field for fair dealing firms.”

Consumer advocacy groups, including Which?, have welcomed the CMA’s success in securing these commitments. Rocio Concha, Director of Policy and Advocacy at Which?, also emphasised the importance of monitoring Google’s compliance, saying: “The regulator must monitor the situation closely and be prepared to use new enforcement powers… to take strong action, including issuing heavy fines, if Google fails to make improvements.”

The Broader Regulatory Context

This development comes as the UK government is trying to strengthen consumer protection laws. For example, the Digital Markets, Competition and Consumers Act 2024, which actually comes into force in April 2025, will empower the CMA to independently determine breaches of consumer law without needing court approval. This legislation also introduces the potential for fines of up to 10 per cent of a company’s global turnover for non-compliance.

Also, the CMA has collaborated with the Department for Business and Trade to explicitly ban the posting or commissioning of fake reviews. Businesses that fail to address fake reviews and hidden advertising will face penalties under these new rules.

The CMA’s work extends beyond Google. As part of its broader effort to ensure fair online practices, the regulator has issued draft guidance to help businesses comply with consumer law. This guidance will be finalised later in 2025.

Industry Response and the Road Ahead

Google has expressed its commitment to combating fake reviews. A spokesperson for the company stated, “Our longstanding investments to combat fraudulent content help us block millions of fake reviews yearly – often before they ever get published. Our work with regulators around the world, including the CMA, is part of our ongoing efforts to fight fake content and bad actors.”

These changes highlight the influence of consumer feedback in shaping marketplace dynamics. By holding businesses and reviewers accountable, the CMA’s actions, therefore, aim to restore trust in online reviews and ensure that genuine businesses are not overshadowed by dishonest competitors.

As the CMA continues its investigation into Amazon and monitors compliance across the sector, this case sets a precedent for how regulatory bodies can work with tech giants to protect consumers and promote fair competition. The changes promised by Google are not just about tackling fake reviews but are also about reinforcing the integrity of the digital marketplace.

What Does This Mean For Your Business?

Google’s commitment to tackling fake reviews, under the watchful eye of the CMA, is quite a significant step towards restoring trust and fairness in the online marketplace. For businesses, these changes could clearly help in levelling the playing field. Ethical firms that rely on genuine customer feedback may finally see their efforts shielded from the unfair advantage enjoyed by competitors using dishonest practices. By penalising those who manipulate review systems, Google and the CMA are setting a clear standard that prioritises transparency and fairness.

One early question is whether it might be possible to ‘black hat’ a competitor by deliberately leaving fake reviews on its profile, in the hope that the business is then penalised.

For consumers, this development may be equally impactful. With nearly 90 per cent of shoppers relying on reviews when making purchasing decisions, the assurance that review platforms are working harder to weed out fraudulent content is critical. The addition of more robust detection measures, clearer warnings, and improved reporting mechanisms will empower consumers to make better-informed choices. The visibility of warnings on business profiles and the suspension of review functions for repeat offenders will also serve as valuable signals, allowing customers to avoid potentially unscrupulous businesses.

However, while the measures introduced by Google are promising, their ultimate success hinges on consistent enforcement. As Which? has pointed out, these changes must be backed by strong oversight and, where necessary, punitive measures for non-compliance. The CMA’s ongoing role in monitoring Google’s implementation of these commitments will be pivotal in ensuring that promises translate into real-world impact.

The broader implications for the online marketplace are also worth noting. The CMA’s proactive stance and collaboration with the Department for Business and Trade send a clear message that unethical behaviour will no longer be tolerated. With stronger consumer laws on the horizon, businesses will need to adopt more rigorous review policies to avoid regulatory scrutiny and potential fines. These developments could encourage the entire sector to adopt higher standards, fostering an environment where consumers and honest businesses can thrive.

Looking ahead, the digital marketplace is likely to face new challenges as technology evolves. AI, for example, has already made the creation of fake reviews more sophisticated, posing fresh hurdles for platforms like Google. However, the commitments secured by the CMA ensure that Google’s approach will remain adaptable to emerging threats, keeping pace with technological advancements.

The CMA’s intervention has, therefore, set a precedent for holding powerful tech companies accountable and ensuring that consumer interests are protected. By cracking down on fake reviews, Google’s new measures offer a pathway to rebuilding trust in online platforms. While challenges remain, this initiative signals a shift towards a more transparent and equitable digital landscape, where authenticity and fairness take centre stage. For businesses and consumers alike, these changes could (hopefully) prove transformative, reinforcing the integrity of a marketplace increasingly driven by the voice of the customer.

Tech News : John Lewis Introduces AI Verification For Online Knife Sales

John Lewis has unveiled a groundbreaking AI tool to verify the age of customers purchasing knives online, marking a shift in how retailers address legal requirements for the sale of bladed items.

According to the AI provider, their AI (which estimates the age of the user from their image) is “Better Than Human Judgement”.

Why Is John Lewis Introducing AI for Age Verification?

The decision to implement an AI-driven facial age estimation system stems from a broader effort to prevent underage access to knives amid increasing scrutiny of age verification processes. The move forms part of the retailer’s commitment to safety and compliance with government regulations as it reintroduces online knife sales after a 15-year hiatus. It also comes against the backdrop of high-profile cases, such as tragic murders linked to underage perpetrators buying knives online, which have reignited debates about stricter controls on bladed items.

John Lewis stopped selling knives online in 2009 due to the difficulty of verifying buyers’ ages effectively. By 2022, the retailer went a step further, removing cutlery knives from its online catalogue. However, the retailer has now reintroduced these products, citing confidence in the efficacy of AI-powered age estimation technology to meet strict legal and ethical requirements.

As a spokesperson for John Lewis recently explained: “We take safety incredibly seriously, and in line with strict government guidelines, have added an additional layer of security when customers purchase knives online.”

How Does the AI Tool Work?

The facial age estimation technology, developed by British company Yoti, analyses a photograph of the customer’s face to determine whether they are over 18. This streamlined process occurs at the point of purchase and takes only a few seconds. Customers are prompted to enable their device’s camera and position their face within a frame on the screen, akin to using a passport photo booth.

The AI system then estimates the individual’s age and immediately deletes the image once verification is complete. If the system determines the customer is over 18, they can proceed to checkout. For those who do not pass this initial check, an alternative verification method is available, allowing customers to upload a photo of their ID and a selfie to confirm their identity. Accepted forms of ID include passports, driving licences, and other official identification cards.

In addition to this online verification, a second layer of age checking occurs at delivery. For example, Royal Mail or DPD couriers require customers to present valid photo identification, such as a passport or driving licence, before handing over the parcel. If the recipient cannot provide proof of age, the item is returned to John Lewis, and a refund is issued.
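As a purely illustrative sketch (not Yoti’s or John Lewis’s actual code), the two-layer flow described above, i.e. an AI age estimate at checkout with a manual ID fallback, followed by a courier photo-ID check at delivery, might be modelled like this, with every function a hypothetical stand-in:

```python
# Purely illustrative sketch, not Yoti's or John Lewis's actual code: an AI
# age estimate at checkout with a manual ID fallback, then a courier photo-ID
# check at delivery. All functions are hypothetical stand-ins.

MIN_AGE = 18

def estimate_age(face_image: bytes) -> float:
    """Stand-in for the facial age estimation model; the image is discarded after use."""
    return 23.4

def verify_id_document(id_image: bytes, selfie: bytes) -> bool:
    """Stand-in for the fallback check: passport or driving licence plus a selfie."""
    return True

def checkout_age_check(face_image: bytes) -> bool:
    if estimate_age(face_image) >= MIN_AGE:
        return True  # estimated over 18 -> proceed to checkout
    # Otherwise the customer uploads photo ID and a selfie instead
    return verify_id_document(b"<id photo>", b"<selfie>")

def delivery_check(photo_id_presented: bool) -> str:
    # Second layer: the courier checks photo ID at the door
    return "hand over parcel" if photo_id_presented else "return to retailer and refund"

print(checkout_age_check(b"<face photo>"), "->", delivery_check(True))
```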

What Technology Powers the Tool?

Yoti’s AI age estimation system relies on advanced machine learning algorithms trained on millions of images paired with verified ages. The technology does not rely on facial recognition, meaning it does not match the scanned face to a database of images or identities. Instead, it estimates age based on facial characteristics and deletes the image immediately after processing.

Better Than Human Judgement, Says Yoti

Yoti claims the system offers superior accuracy compared to human judgment. For example, for individuals aged 13–24, the tool estimates age within a margin of 1.3 years. The tool’s accuracy rate for correctly identifying 13–17-year-olds as under 18 is an impressive 99.3 per cent, with negligible variance across different skin tones, according to a 2023 white paper. The system also incorporates anti-spoofing technology to prevent attempts to bypass the check using photos, masks, or deepfake videos.

The Benefits of the System

The reintroduction of online knife sales by John Lewis demonstrates the potential of AI to address regulatory challenges while improving customer convenience. For the retailer, the technology enables compliance with laws requiring age verification at the point of sale and delivery.

The integration of this technology is expected to reduce the administrative burden associated with manual ID checks while offering customers a seamless and fast checkout process. Also, the system helps protect public safety by reducing the risk of knives falling into the hands of minors.

Commander Stephen Clayman of the National Police Chiefs’ Council was recently quoted (in The Times) praising the initiative, saying: “We welcome technology which can help to ensure knives do not end up in the wrong hands. Responsible retailing is a key element in this, and innovations like this are a step in the right direction.”

Privacy-Focused

One other key compliance benefit of the tool is that it’s also privacy-focused, as no images or personal data are stored, shared, or used for further training. This ensures compliance with data protection regulations and alleviates concerns about surveillance.

Challenges and Criticisms

Despite its benefits, the system is not without its challenges. One concern is the tool’s reliance on accurate camera functionality, which may exclude customers who lack access to modern devices or are unfamiliar with using such technology. Customers experiencing technical issues may find the process cumbersome, particularly if they need to switch to the manual ID verification method.

Another issue lies in potential inaccuracies. While the system boasts a high degree of accuracy, its effectiveness diminishes slightly for edge cases, e.g. individuals who appear significantly older or younger than their actual age. Critics have also pointed out that, although rare, the slight variation in accuracy across skin tones highlights an area where further refinement is needed.

Also, broader societal concerns remain about over-reliance on AI in public-facing applications. Privacy advocates, for example, have cautioned against the widespread adoption of AI for age verification, arguing that such systems, while anonymised, may normalise invasive technologies.

A Retail Trend?

It should be noted here that John Lewis is not alone in adopting AI for age verification. For example, Yoti’s technology is already used by social media platforms, alcohol retailers, and other businesses requiring age-restricted transactions. The wider adoption of AI age estimation tools could represent a turning point in retail, enabling businesses to meet regulatory demands while enhancing customer experience.

With the UK government considering stricter regulations on knife sales, including potential requirements for multiple forms of ID, John Lewis’ proactive use of technology may set a precedent for other retailers. As the national conversation around knife crime continues, innovations like this highlight the role of technology in tackling complex societal challenges.

By blending cutting-edge AI with robust checks and balances, John Lewis may have found a way to navigate a path forward in a contentious area of retail, but the journey is far from over. How other retailers respond, and whether customers embrace or resist this technological shift, remains to be seen.

What Does This Mean For Your Business?

By integrating advanced facial age estimation technology into its operations, the retailer has taken a proactive, technology-led approach to tackling what has been, up until now, a complex issue. This initiative has allowed John Lewis to re-enter the online knife market after years of hiatus, balancing customer convenience with security and showcasing the transformative potential of AI in retail.

However, as with any technological innovation, the implementation of such systems raises broader questions. While the facial age estimation tool offers a streamlined and privacy-focused solution, it is not without limitations. Issues such as accessibility for those without modern devices, potential inaccuracies at the margins of the system’s age-detection capabilities, and ongoing concerns about the normalisation of AI in public-facing applications highlight areas for further development and debate.

The integration of a secondary verification step, requiring proof of age upon delivery, ensures an additional layer of security. This dual-layered system strengthens compliance and demonstrates John Lewis’ commitment to responsible retailing. At the same time, it underscores the importance of redundancy in technological systems to account for potential failures or inaccuracies in AI processes.

While this initiative could position John Lewis as a leader in leveraging AI for compliance, it may also signal the beginning of a broader trend within the retail sector. As more businesses explore AI-based solutions for age-restricted sales, a wider conversation about the ethical, practical, and societal implications of these technologies is inevitable. The delicate balance between leveraging innovation for efficiency and ensuring equitable access and fairness will be crucial for widespread acceptance.

John Lewis’ adoption of AI age verification could offer a glimpse into the future of retail. It demonstrates how technology can address pressing regulatory and societal challenges, albeit with some caveats. Whether this approach becomes an industry standard or prompts further refinements in the application of AI remains to be seen, but what is clear is that this marks an important moment in the ongoing evolution of responsible retail practices. For now, John Lewis can say it has set a benchmark, but the effectiveness and reception of this technology will ultimately shape its long-term role in retail. No doubt other retailers will be watching with interest.

Company Check – LinkedIn : Allegations Of Using Private Messages To Train AI

LinkedIn, the professional networking giant owned by Microsoft, is under fire as a new lawsuit alleges the platform disclosed the private messages of its Premium customers to train generative AI models without consent.

The lawsuit, filed in California on behalf of Alessandro De La Torre and millions of other Premium subscribers, accuses LinkedIn of breaching contractual promises and violating US privacy laws.

The controversy centres on LinkedIn’s policy changes in 2024, which allowed user data to be used for AI training purposes. While LinkedIn exempted users in countries with stringent privacy regulations (e.g. the UK, EU, and Canada) from this practice, US users were automatically enrolled in the data-sharing programme unless they manually opted out. Crucially, the lawsuit alleges that LinkedIn extended this data-sharing to include the contents of private InMail messages, which often contain sensitive personal and professional information.

The lawsuit highlights the potential implications for users, stating that these private messages could include “life-altering information about employment, intellectual property, compensation, and other personal matters.” This, the plaintiff argues, breaches the LinkedIn Subscription Agreement (LSA), which explicitly assures Premium customers that their confidential information will not be disclosed to third parties. The complaint also points out that LinkedIn’s alleged failure to notify customers of these changes undermines user trust and constitutes a breach of the US Stored Communications Act.

LinkedIn has denied the allegations, labelling them as “false claims with no merit.” However, for many, the platform’s response to the privacy concerns raised last year casts a shadow over its denials. For example, in August 2024, LinkedIn introduced a setting allowing users to opt out of data-sharing for AI training, but this was enabled by default, raising questions about informed consent. Also, the platform discreetly updated its privacy policy in September 2024 to include the use of user data for AI training, with a notable caveat: opting out would not affect data already used to train models.

Some legal commentators have noted that this case could set a significant precedent for how social media platforms and tech companies handle user data in the age of AI. For example, as the plaintiff’s attorney, Rafey Balabanian, says: “This lawsuit underscores a growing tension between innovation and privacy,” and that “LinkedIn’s actions, if proven, represent a serious breach of trust, particularly given the sensitive nature of the information involved.”

The potential fallout for LinkedIn could extend beyond the courtroom. Premium customers, who pay up to $169.99 per month for features like InMail messaging and enhanced privacy, may, for example, choose to reconsider their subscriptions if these allegations prove true. Also, the case draws attention to the broader issue of how companies disclose and manage data for AI development, a concern that has already prompted regulatory scrutiny in regions like the UK and EU. Notably, the UK Information Commissioner’s Office (ICO) had earlier pressed LinkedIn to halt the use of UK user data for AI training, to which LinkedIn had agreed.

For users, this lawsuit serves as a reminder of the need to scrutinise privacy settings and policies. If successful, the plaintiffs seek damages, statutory penalties of $1,000 per affected user, and the deletion of any AI models trained using their data. With LinkedIn potentially facing financial and reputational damage, this case could act as a catalyst for greater transparency and accountability in the tech industry. Whether LinkedIn’s alleged actions were an oversight or a deliberate strategy to accelerate AI innovation, the coming months will undoubtedly shape the future of user privacy in the digital age.

Security Stop Press : Record-breaking DDoS Attack Highlights Growing Cybersecurity Threats

Cloudflare’s latest DDoS Threat Report for Q4 2024 highlights a dramatic surge in Distributed Denial of Service (DDoS) attacks, including a record-breaking 5.6 Tbps assault.

The web security and infrastructure company’s report reveals a 53 per cent year-over-year rise in DDoS activity, with Cloudflare blocking 21.3 million attacks in 2024, 6.9 million of which occurred in Q4, a staggering 83 per cent increase from the same period in 2023!

The largest attack, a 5.6 Tbps assault by a Mirai-variant botnet of over 13,000 IoT devices, targeted an ISP in Eastern Asia. Cloudflare says it mitigated it autonomously within seconds, preventing any disruption. Hyper-volumetric attacks exceeding 1 Tbps grew by 1,885 per cent quarter-over-quarter, reflecting the increasing scale and intensity of these threats. Nearly half of all attacks targeted OSI Layers 3 and 4, with the remainder focused on HTTP-based attacks, predominantly launched by botnets exploiting IoT devices.
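For a sense of scale, a rough and simplified bit of arithmetic (ours, not Cloudflare’s) shows what a 5.6 Tbps attack implies per device if the traffic were spread evenly across the roughly 13,000 compromised IoT devices:

```python
# Rough, simplified arithmetic only: per-device traffic implied by a 5.6 Tbps
# attack spread (unrealistically) evenly across ~13,000 IoT devices.

attack_tbps = 5.6
devices = 13_000

per_device_mbps = attack_tbps * 1_000_000 / devices  # Tbps -> Mbps
print(f"~{per_device_mbps:,.0f} Mbps per device if traffic were spread evenly")
```

That works out at several hundred megabits per second from each compromised gadget, far beyond what a typical IoT device would normally send, which helps explain why such floods can be fingerprinted and filtered automatically within seconds.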

Cloudflare’s report also highlighted how emerging attack methods like Memcached and BitTorrent DDoS vectors have seen dramatic growth, and ransom-motivated attacks surged by 78 per cent compared to Q3. The report also identifies telecommunications and marketing as the most attacked industries, with China, the Philippines, and Taiwan being key hotspots. Cloudflare says those responsible for the attacks include competitors, state-sponsored groups, and disgruntled users, highlighting diverse motives behind these incidents.

To counter these growing threats, businesses should deploy always-on, automated DDoS protection, secure all connected devices, and adopt proactive defence strategies. With attacks becoming faster and more sophisticated, real-time mitigation and robust security are critical to minimising risk.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a jargon-free style.
