Amazon Targets Perplexity Over AI Shopping Assistant Comet

Amazon has accused AI startup Perplexity of illegally accessing its e-commerce systems through its agentic shopping assistant, Comet, marking one of the first major legal tests of how autonomous AI tools interact with major online platforms.

Perplexity and Comet

Perplexity is a fast-growing Silicon Valley AI company valued at around $18 billion and known for its “answer engine”, which competes with Google and ChatGPT by providing direct, cited responses rather than lists of links. Its newest product, Comet, extends this model into what’s known as “agentic browsing”, which is software that not only searches but acts.

Comet can log into websites using a user’s own credentials, find, compare and purchase products, and complete checkouts automatically. The user might, for example, tell Comet to “find the best-rated 40-litre laundry basket under £30 on Amazon and buy it”. Comet then navigates the site, checks prices and reviews, and completes the order.

Perplexity says Comet is private, with login credentials stored only on the user’s device. It argues that when users delegate tasks to their assistant, the AI is simply acting as their agent, meaning it has the same permissions as the human user.

Amazon’s Legal Threat And Allegations

On 31 October 2025, Amazon sent Perplexity a 10-page cease-and-desist letter through its law firm Hueston Hennigan, demanding it immediately stop “covertly intruding” into Amazon’s online store. The letter essentially accuses Perplexity of breaking US and California computer misuse laws, including the Computer Fraud and Abuse Act (CFAA) and California’s Comprehensive Computer Data Access and Fraud Act (CDAFA), by accessing Amazon’s systems without permission and disguising Comet as a Chrome browser.

Amazon’s counsel, Moez Kaba, wrote that “Perplexity must immediately cease using, enabling, or deploying Comet’s artificial intelligence agents or any other means to covertly intrude into Amazon’s e-commerce websites.” The letter says Comet repeatedly evaded Amazon’s attempts to block it and ignored earlier warnings to identify itself transparently when operating in the Amazon Store.

According to the letter, Perplexity’s unauthorised behaviour dates back to November 2024, when it allegedly used a “Buy with Pro” feature to place orders using Perplexity-managed Prime accounts, a practice that Amazon says violated its Prime terms and led to problems such as customers being unable to process returns. After being told to stop, Amazon says, Perplexity later resumed the same conduct using Comet.

The company also alleges that Comet “degrades the Amazon shopping experience” by failing to consider features like combining deliveries for faster, lower-carbon shipping or presenting important product details. Amazon claims this harms customers and undermines trust in the platform.

Security Risks And Data Concerns

Amazon’s letter also accuses Perplexity of endangering customer data. For example, it points to Comet’s terms of use, which it says grant Perplexity “broad rights to collect passwords, security keys, payment methods, shopping histories, and other sensitive data” while disclaiming liability for data security.

The letter cites security researchers who have identified vulnerabilities in Comet. For example, The Hacker News reported in October that a flaw dubbed “CometJacking” could hijack the AI assistant to steal data, while a Tom’s Hardware investigation in August found that Comet could visit malicious websites and prompt users for banking details without warnings. Amazon says such flaws illustrate the dangers of “non-transparent” agents interacting directly with sensitive e-commerce systems.

Must Act Openly and Be Monitored, Says Amazon

While Amazon insists it is not opposed to AI innovation, it argues that third-party AI agents must act openly so their behaviour can be monitored. “Transparency is critical because it protects a service provider’s right to monitor AI agents and restrict conduct that degrades the shopping experience, erodes customer trust, and creates security risks,” the letter states.

Amazon warns that Perplexity’s actions violate its Conditions of Use, impose significant investigative costs, and cause “irreparable harm” to its customer relationships. It has demanded written confirmation of compliance by 3 November 2025, threatening to pursue “all available legal and equitable remedies” if not.

What Is Agentic Browsing?

Agentic browsing describes AI systems that can autonomously act on users’ behalf, e.g., from finding products and booking travel to filling forms and making payments. The concept represents a step beyond traditional automation, potentially turning AI from a passive search tool into an active personal assistant.

The appeal is that these systems can save time, reduce manual effort, and make repetitive digital tasks simpler. For consumers and business users alike, agentic assistants could automate procurement, research, and routine purchases.

However, it seems that this new autonomy also challenges the rules of engagement between users, AI developers, and online platforms. For example, when a human browses a site, the platform can track preferences, display promotions and tailor recommendations. When an AI agent acts in their place, it may bypass all those mechanisms and, crucially, any monetised placements or advertising.

Perplexity’s Response

Perplexity quickly went public with its response, publishing a blog post entitled Bullying is Not Innovation. It described Amazon’s legal threat as “aggressive” and claimed it was an attempt to “block innovation and make life worse for people”.

The company argued that Comet acts solely under user instruction and therefore should not be treated as an independent bot. “Your AI assistant must be indistinguishable from you,” it wrote. “When Comet visits a website, it does so with your credentials, your permissions, and your rights.”

Perplexity’s blog also accused Amazon of prioritising advertising profits over user freedom. It cited comments by Amazon CEO Andy Jassy, who recently told investors that advertising spend was producing “very unusual” returns, and claimed Amazon wants to restrict independent agents while developing its own approved ones.

Chief executive Aravind Srinivas added that Perplexity “won’t be intimidated” and that it “stands for user choice”. In interviews, he suggested that agentic browsing represents the next stage of digital personalisation, where users, not platforms, control their experiences.

Previous Allegations Against Perplexity

Amazon’s claims are not the first to question Perplexity’s web practices. For example, earlier this year, Cloudflare (a web infrastructure and security company) published research showing that Perplexity’s AI crawlers were accessing websites that had explicitly opted out of AI scraping. Cloudflare alleged that the company disguised its crawler as a regular Chrome browser and used undisclosed IP addresses to avoid detection.

Perplexity denied intentionally breaching restrictions and said any access occurred only when users specifically asked questions about those sites. However, Cloudflare later blocked its traffic network-wide, citing security and transparency concerns.

The startup is also facing ongoing lawsuits from publishers including News Corp, Encyclopaedia Britannica and Merriam-Webster over alleged misuse of their content to train its models. Together, those disputes portray a company pushing at the legal and ethical boundaries of how AI interacts with the web.

Why The Amazon Clash Matters

The dispute with Amazon is really shaping up as an early test case for how much autonomy AI agents will have across the commercial web. For example, Amazon maintains that any software acting on behalf of users must still identify itself, follow platform rules, and respect the right of websites to decide whether to engage with automated systems.

However, Perplexity argues that an AI assistant used with a person’s consent is part of that person’s digital identity and should have the same access as a regular browser session. The company believes restricting that principle could undermine the emerging concept of user-controlled AI and set back progress in agentic browsing.

For Amazon, the matter is tied to the customer experience it has spent decades refining, and one that depends on data visibility, targeted recommendations and carefully managed fulfilment. For AI developers, the case signals the likelihood of tighter scrutiny and the potential for conflict if agents interact with online platforms without explicit approval.

Businesses experimenting with autonomous procurement or digital assistants will also be watching closely. Tools that can buy or book on behalf of staff offer obvious productivity benefits, but only if those agents operate within clear contractual and technical limits.

Regulators are beginning to take interest too. For example, questions are emerging over where accountability lies if an agentic system breaches a website’s terms or handles personal data incorrectly, and whether users, developers or platforms should bear responsibility. How these questions are answered will influence how agentic AI evolves, and how openly such systems are allowed to participate in the online economy.

What Does This Mean For Your Business?

The outcome of Amazon’s confrontation with Perplexity will set a practical benchmark for how far autonomous AI agents can go before platforms intervene. What began as a dispute over one shopping assistant now touches the wider question of how digital power is distributed between users, developers and global platforms. If Amazon succeeds in forcing explicit disclosure and control over third-party agents, it could consolidate platform dominance and slow the development of independent AI tools. If Perplexity’s position gains support, the web could see a surge of user-driven automation that bypasses traditional commercial gateways.

For UK businesses, companies already exploring AI tools to handle purchasing, market research or logistics will need to ensure those systems act within recognised platform rules and data protection standards. The eventual precedent could shape how British firms integrate AI agents into supply chains, e-commerce systems and customer service platforms. It may also affect costs and compliance responsibilities, depending on whether platforms like Amazon begin enforcing stricter access requirements on all autonomous systems.

For consumers, the promise of convenience from agentic browsing is balanced by legitimate concerns about data security and transparency. For regulators, the case underscores the urgent need to clarify who is accountable when AI systems act independently. For AI companies, it highlights that technical innovation alone is no longer enough; transparent cooperation with platform owners and adherence to existing legal frameworks will now be part of the competitive landscape.

The Amazon–Perplexity dispute has, therefore, become more than a legal warning. In fact, it looks like marking the start of a global debate over how automation, commerce and trust can coexist online, and one that every business and policymaker will have to engage with as agentic AI becomes part of everyday digital life.

Microsoft’s Fake Marketplace Reveals AI Agents Still Struggle

Microsoft has built a synthetic online marketplace to stress test AI agents in realistic buying and selling scenarios, but the early results appear to have revealed how fragile even the most advanced models remain when faced with complex, competitive environments.

Why Microsoft Built A Fake Marketplace

Magentic Marketplace is Microsoft’s new open source simulation environment for what the company calls “agentic markets”, where AI systems act as autonomous customers and businesses that search, negotiate and transact with each other. The project, developed by Microsoft Research in collaboration with Arizona State University, is designed to explore how AI agents behave when placed in a simulated economy rather than isolated single agent tasks.

The initiative reflects growing excitement across the tech sector about so-called agentic AI, systems capable of taking actions on a user’s behalf, such as comparing products, booking services or handling customer enquiries. Microsoft’s researchers argue that while such systems promise major economic efficiency gains, there is still little understanding of what happens when hundreds of agents operate simultaneously in the same market.

The Value of Studying AI Agents’ Behaviours

Ece Kamar, corporate vice president and managing director of Microsoft Research’s AI Frontiers Lab, has said that understanding how AI agents interact, collaborate and negotiate with one another will be critical to shaping how such systems influence real world markets. Microsoft describes the project as part of a broader effort to study these behaviours safely and in depth before agentic systems are deployed in everyday economic settings.

The work sits alongside a broader research programme at Microsoft exploring what it calls the “agentic economy”. The associated technical report, MSR-TR-2025-50, was published in late October 2025, followed by a detailed blog post and open source release on 5 November.

How Magentic Marketplace Works

Instead of experimenting with real online platforms, Microsoft built a fully synthetic two sided marketplace. One side features “assistant agents” representing customers tasked with finding products or services that meet specific requirements, for example ordering food with certain dishes and amenities. The other side features “service agents” acting as competing businesses, each advertising their offerings, answering questions and accepting orders.

The marketplace environment itself manages all the underlying infrastructure, from product catalogues and discovery algorithms to transaction handling and payments. Agents communicate with the central server via a simple HTTP/REST interface, using just three endpoints for registration, protocol discovery and action execution. This minimalist architecture allows the researchers to plug in a wide range of AI models and keep experiments reproducible.

The Experiment

Microsoft ran its initial experiments using 100 customer agents and 300 business agents. The test scenarios included synthetic restaurant and home improvement markets, allowing the team to control every variable and analyse outcomes in detail. The study compared a range of proprietary and open source models, including GPT 4o, GPT 4.1, GPT 5, Gemini 2.5 Flash, GPT OSS 20b and Qwen3 variants, and measured performance using standard economic metrics such as consumer welfare (the perceived value of purchases minus prices paid).

What Happened When Microsoft Let The Agents Loose

When given a simplified “perfect search” setup, where only a handful of highly relevant options were available, leading models such as GPT 5 and Anthropic’s Claude Sonnet 4.x achieved near optimal performance. In these ideal conditions they consistently selected the best options and maximised consumer welfare.

However, when Microsoft introduced more realistic challenges, such as requiring the agents to form their own search queries, navigate lists of results and choose which businesses to contact, performance dropped sharply. While most agents still performed better than random or cheapest option baselines, the advantage over simple heuristics often disappeared under realistic levels of complexity.

A Paradox of Choice Revealed

Interestingly, the study also revealed an unexpected “paradox of choice”. For example, when the number of search results increased from three to one hundred, most agents failed to explore the wider set of options. In fact, it was found that many simply picked the first “good enough” choice, regardless of how many alternatives existed. Also, consumer welfare fell as more results were shown, particularly for models like Claude Sonnet 4, which saw average welfare scores drop from around 1,800 to 600. GPT 5 also showed a steep decline, from roughly 2,000 to just over 1,000, suggesting that even large models struggle to reason across large decision spaces.

Collaboration Tested

The researchers also tested how well multiple AI agents could collaborate on shared tasks, such as dividing roles in joint decision making. Without clear instructions, most agents became confused about who should do what. When researchers provided explicit step by step guidance, performance improved, but Kamar noted that true collaboration should not depend on such micromanagement.

Manipulation, Bias And Behavioural Failures

One of the most striking findings came from experiments testing whether business side agents could manipulate their AI customers. Microsoft tested six tactics, ranging from standard persuasion techniques such as fake credentials (“Michelin featured” or “award winning”) and social proof (“Join 50,000 happy customers”) to more aggressive prompt injection attacks that directly tried to rewrite a customer agent’s instructions.

The results varied widely between models. For example, Anthropic’s Claude Sonnet 4 resisted all manipulation attempts, while Google’s Gemini 2.5 Flash showed mild susceptibility to strong prompt injections. By contrast, GPT 4o and several open source models, including Qwen3 4b, were easily compromised, with manipulated businesses successfully redirecting all payments towards themselves. Even subtle tactics such as fake awards or inflated review counts could influence purchasing decisions for some systems.

These findings appear to highlight a broader concern in AI safety research, i.e., that large language models are easily swayed by adversarial inputs and emotional framing. In a marketplace context, such weaknesses could enable dishonest sellers to exploit customer side agents and distort competition.

Bias

The experiments also appear to have uncovered systemic biases in agent decision making. For example, across all tested models, agents showed a strong “first proposal” bias, accepting the first seemingly valid offer rather than waiting for additional responses. This behaviour gave a ten to thirty fold advantage to faster responding sellers, regardless of quality. Some open source models also displayed positional bias, tending to pick the last option in a list regardless of its actual merits.

Together, these findings seem to suggest that agentic markets could replicate and even amplify familiar real world problems such as information asymmetry, bias and manipulation, only at machine speed.

Microsoft And Its Competitors

Microsoft is positioning itself as a leader in agentic AI, building Copilot systems that can act semi autonomously across Office, Windows and Azure. However, publishing this research about Magentic Marketplace that exposes major limitations in current agent behaviour shows not just scientific transparency, but also an acknowledgement that current systems remain brittle.

At the same time, releasing Magentic Marketplace as open source code on GitHub and Azure AI Foundry Labs gives Microsoft significant influence over how the next phase of AI evaluation is conducted. The company has effectively created a public benchmark for testing AI agents in market like environments. This may shape how regulators, researchers and competitors such as Google, OpenAI and Anthropic measure progress towards safe deployment.

It is worth noting here that the agentic AI race is on and competitors are pursuing their own versions of agentic systems, from OpenAI’s Operator tool, which can perform real web tasks, to Anthropic’s Computer Use feature, which controls software interfaces on behalf of users. None has yet published a similarly large scale testbed for multi agent markets. Industry analysts suggest that Microsoft’s decision to expose failures so openly may also be strategic, helping the company frame itself as a responsible actor ahead of tighter global regulation on AI autonomy.

Businesses, Users And Regulators

For businesses hoping to integrate agentic AI into procurement, sales or customer support, the message from this research is that these systems still require close human supervision. Agents proved capable of making simple transactions but were easily overloaded by large product ranges, misled by false claims and prone to favouring the first acceptable offer. In high stakes contexts such behaviour could lead to financial losses or reputational harm.

The findings also raise new competitive and ethical questions. For example, if agentic marketplaces reward speed over accuracy, or if certain models are more vulnerable to manipulation, companies that optimise for aggressive tactics could gain unfair advantages. Microsoft’s economists warned that such structural biases could distort future digital markets if left unchecked.

For regulators, Magentic Marketplace offers a rare tool to observe how autonomous agents behave before they enter real economies. The ability to run controlled experiments on transparency, bias and manipulation could inform emerging AI safety standards and consumer protection frameworks.

Challenges And Criticisms

While widely praised for its openness, the Magentic Marketplace research has also drawn some scrutiny. For example, the test scenarios focus mainly on low risk domains like restaurant ordering, which may not reflect the complexity or stakes of sectors such as healthcare or finance. Also, because the data is fully synthetic, it avoids privacy issues but may underrepresent the messiness and unpredictability of human driven markets.

The current experiments also study static interactions rather than dynamic markets, where agents learn and adapt over time. Real economies evolve as participants change strategy, something Microsoft plans to explore in future iterations. Some researchers have also pointed out that focusing mainly on “consumer welfare” may overlook broader measures of fairness, accessibility and long term market stability.

That said, at least the findings so far give researchers a clearer view of how AI agents behave when placed in competitive settings. Microsoft’s approach could also be said to provide a fairly structured way to observe these systems under controlled market conditions and to identify where improvements are most needed before they are applied more widely in real commercial use.

What Does This Mean For Your Business?

For all the progress in developing intelligent assistants, Microsoft’s Magentic Marketplace experiment has exposed how far current AI models still are from handling the unpredictability of real markets. The failures observed in decision making, collaboration and manipulation resistance point to weaknesses that could directly affect trust and reliability if similar systems were deployed commercially. For UK businesses exploring automation through AI agents, this research is a reminder that the technology is not yet capable of making independent purchasing or negotiation decisions without oversight. The risks of bias, misjudged choices and exploitability remain significant.

At the same time, the study shows why testing environments like Magentic Marketplace will be vital for regulators, developers and investors as agentic AI moves closer to practical use. For example, controlled simulations can reveal hidden biases and security flaws before these systems handle real financial transactions. For policymakers in the UK and elsewhere, the findings reinforce the need for standards that ensure accountability and human control within automated decision systems.

For Microsoft, this project strengthens its image as a company willing to expose and study AI limitations rather than conceal them. For its competitors, the research sets a benchmark for transparency and evaluation that others will be expected to meet. For businesses and public institutions, it highlights the importance of using AI agents as supportive tools rather than autonomous decision makers until reliability, fairness and resilience can be proven in real economic conditions.

Company Check : Microsoft’s ‘Humanist Superintelligence’ For Medical Diagnosis

Microsoft has launched a new research division called the MAI Superintelligence Team, aiming to build artificial intelligence systems that surpass human capability in specific fields, beginning with medical diagnostics.

AI For “Superhuman” Performance in Defined Area

The new team sits within Microsoft AI and is led by Mustafa Suleyman, the company’s AI chief, with Karen Simonyan appointed as chief scientist. Suleyman, who previously co-founded Google DeepMind, said the company intends to invest heavily in the initiative, which he described as “the world’s best place to research and build AI”.

The project’s focus is not on creating a general artificial intelligence capable of performing any human task, but rather on highly specialised AI that achieves “superhuman” performance in defined areas. The first application area is medical diagnosis, which Microsoft sees as an ideal testing ground for its new “humanist superintelligence” concept.

Suleyman said Microsoft is not chasing “infinitely capable generalist AI” because he believes self-improving autonomous systems would be too difficult to control safely. Instead, the MAI Superintelligence Team will build what he calls “humanist superintelligence”, i.e., advanced, controllable systems explicitly designed to serve human needs. As Suleyman says, “Humanism requires us to always ask the question: does this technology serve human interests?”.

How Much?

Microsoft has not disclosed how much it plans to spend, but reports suggest the company is prepared to allocate significant resources and recruit from leading AI research labs globally. The new lab’s mission is part of Microsoft’s wider effort to develop frontier AI while maintaining public trust and regulatory approval.

From AGI To Humanist Superintelligence

The company’s public messaging about this subject appears to mark a deliberate shift away from the competitive narrative around Artificial General Intelligence (AGI), which seeks to match or exceed human performance across all tasks. For example, Suleyman argues that such systems would raise unsolved safety questions, particularly around “containment”, i.e., the ability to reliably limit a system that can constantly redesign itself.

What Does Microsoft Mean By This?

In a Microsoft AI blog post titled Towards Humanist Superintelligence, Suleyman describes the new approach as building “AI capabilities that always work for, in service of, people and humanity more generally”. He contrasts this vision with what he calls “directionless technological goals”, saying Microsoft is interested in practical breakthroughs that can be tested, verified, and applied in the real world.

By pursuing domain-specific “superintelligences”, Microsoft appears to be trying to avoid some of the existential risks linked with unrestricted AI development. The company is also trying to demonstrate that cutting-edge AI can be both safe and useful, contributing to tangible benefits in health, energy, and education rather than theoretical intelligence milestones.

Why Start With Medicine?

Medical diagnostics is an early focus because it combines measurable human error rates with large, high-quality data sets and, crucially at the moment, high potential public value. In fact, studies suggest that diagnostic errors account for around 16 per cent of preventable harm in healthcare, while the World Health Organization has warned that most adults will experience at least one diagnostic error in their lifetime.

Suleyman said Microsoft now has a “line of sight to medical superintelligence in the next two to three years”, suggesting the company believes AI systems could soon outperform doctors at diagnostic reasoning under controlled conditions. He argues that such advances could “increase our life expectancy and give everybody more healthy years” by enabling much earlier detection of preventable diseases.

The company’s internal research already points in that direction. For example, Microsoft’s MAI-DxO system (short for “Diagnostic Orchestrator”) has achieved some striking results in benchmark tests designed to simulate real-world diagnostic reasoning.

Inside MAI-DxO

The MAI-DxO system is not a single model, but a kind of orchestration layer that coordinates several large language models, each with a defined clinical role. For example, one AI agent might propose diagnostic hypotheses, another might choose which tests to run, and a third might challenge assumptions or check for missing information.

In trials based on 304 “Case Challenge” problems from the New England Journal of Medicine, MAI-DxO reportedly achieved 85 per cent accuracy when paired with OpenAI’s o3 reasoning model. By comparison, a group of experienced doctors averaged around 20 per cent accuracy under the same test conditions.

The results suggest that carefully designed orchestration may allow AI to approach diagnostic problems more efficiently than either humans or single large models working alone. In simulated tests, MAI-DxO also reduced diagnostic costs by roughly 20 per cent compared with doctors, and by 70 per cent compared with running the AI model independently.

However, Microsoft and external observers have both emphasised that these were controlled experiments. The doctors involved were not allowed to consult colleagues or access reference materials, and the cases were adapted from academic records rather than live patients. Clinical trials, regulatory approval, and real-world validation will all be necessary before any deployment.

Suleyman has presented these results as an example of what he calls a “narrow domain superintelligence”, i.e., a specialised system that can safely outperform humans within clearly defined boundaries.

Safety And Alignment

Microsoft’s framing of humanist superintelligence is also a response to growing concern about AI safety. Suleyman has warned that while a truly self-improving superintelligence would be “the most valuable thing we’ve ever known”, it would also be extremely difficult to align with human values once it surpassed our ability to understand or control it.

The company’s strategy, therefore, centres on building systems that remain “subordinate, controllable, and aligned” with human priorities. By keeping autonomy limited and focusing on specific problem areas such as medical diagnosis, Microsoft believes it can capture the benefits of superhuman capability without the existential risk.

As Suleyman writes: “We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity.”

Some analysts have noted that this positioning may also help Microsoft distinguish its strategy from competitors such as Meta, which launched its own superintelligence lab earlier this year, and from start-ups like Safe Superintelligence Inc that are explicitly focused on building self-improving models.

A Race With Different Rules

Microsoft’s announcement comes as major technology firms increasingly compete for elite AI researchers. For example, Meta reportedly offered signing bonuses as high as $100 million to attract top scientists earlier this year. Suleyman has reportedly declined to confirm whether Microsoft would match such offers but said the new team will include “existing researchers and new recruits from other top labs”.

Some industry observers see the MAI Superintelligence Team as both a research investment and a public statement that Microsoft wants to lead the next stage of AI development, but with a clearer safety and governance narrative than some rivals.

What It Could Mean For Healthcare

For health systems under pressure, AI that can help clinicians reach accurate diagnoses faster could be transformative. For example, delays and misdiagnoses are a major cost driver in both public and private healthcare. A reliable diagnostic assistant, therefore, could save time, reduce unnecessary testing, and improve outcomes, especially in regions with limited access to specialist expertise.

The potential educational impact is also significant. A system like MAI-DxO, which explains its reasoning at every step, could be used as a learning aid for medical students or as a decision-support tool in hospitals.

Questions

However, researchers and regulators warn that AI accuracy in controlled environments does not guarantee equivalent performance in diverse clinical settings. Questions remain about bias in training data, patient consent, and accountability when human and AI opinions differ. The European Union’s AI Act and emerging UK regulatory frameworks are expected to impose strict safety and transparency requirements on medical AI before systems like MAI-DxO can be used in practice.

That said, Microsoft says it welcomes such oversight. For example, Suleyman’s blog argues that accountability and collaboration are essential, stating that “superintelligence could be the best invention ever — but only if it puts the interests of humans above everything else”.

The creation of the MAI Superintelligence Team may mark Microsoft’s clearest statement yet about its long-term direction in AI, i.e., pursuing domain-specific superintelligence that is powerful, safe, and focused on real-world benefit, beginning with medicine.

What This Means For Your Business?

If Microsoft succeeds in building “humanist superintelligence” for medicine, the result could reshape both healthcare delivery and the wider AI industry. For example, a reliable diagnostic system that outperforms clinicians on complex cases would accelerate the shift towards AI-assisted medicine, allowing earlier detection of disease and reducing the burden on overstretched health services. For hospitals and healthcare providers, it could mean shorter waiting times and lower diagnostic costs, while patients might gain faster and more accurate treatment.

At the same time, Microsoft’s framing of the project as a test of safety and alignment signals a growing maturity in how frontier AI is being discussed. Instead of competing purely on speed or model size, companies are now being judged on whether their technologies can be controlled, verified, and trusted. That may influence regulators, insurers, and even investors who want to see real-world impact without escalating risk.

For UK businesses, the implications go beyond healthcare. If Microsoft’s “narrow domain superintelligence” model proves viable, it could create opportunities for British technology firms, research institutions, and service providers to build or adapt specialist AI tools within defined safety limits. Such systems could apply to areas as diverse as pharmaceuticals, energy storage, materials science, or industrial maintenance, giving early adopters a measurable productivity advantage while keeping human oversight at the centre.

What makes this initiative particularly relevant and interesting to policymakers and business leaders is its emphasis on control. For example, in a world increasingly concerned with AI governance, Microsoft’s commitment to “humanist” principles offers a version of superintelligence that regulators can engage with rather than resist. It positions the company as both a technological leader and a cautious steward, and it hints at a future where advanced AI could enhance human capability rather than replace it. Whether that balance can be achieved will now depend on how well Microsoft’s theories hold up in real clinical trials, and how much trust its humanist approach can earn in practice.

Security Stop-Press: Cyber Attack Almost Wipes Out M&S Profits

Marks & Spencer has confirmed that a major cyber attack in April 2025 almost wiped out its half-year profits, cutting statutory profit before tax by 99 per cent, from £391.9 million to just £3.4 million.

The retailer said the incident, linked to the DragonForce ransomware group and the Scattered Spider hacking network, forced it to suspend online orders and click-and-collect services for weeks and caused widespread supply chain disruption.

M&S recorded £102 million in one-off costs and expects to spend another £34 million before year-end. An insurance payout of £100 million offset part of the impact, though overall losses are expected to reach around £300 million.

Chief executive Stuart Machin said the company “responded quickly” to protect customers and suppliers, confirming that customer data such as contact details and order histories were taken, but not payment information.

The case highlights the scale of damage social engineering and ransomware can cause. Businesses can protect themselves by improving staff awareness, enforcing multi-factor authentication, and testing their incident response plans regularly.

Sustainability-in-Tech : UK’s First Renewable-Powered Sovereign AI Cloud

Argyll Data Development has signed a landmark deal with AI infrastructure company SambaNova to build the UK’s first sovereign AI cloud in Scotland, powered entirely by renewable energy, marking a major step towards sustainable data sovereignty and low-carbon artificial intelligence.

Who Is Behind The Project?

Two companies from opposite sides of the Atlantic are joining forces to redefine how AI infrastructure is built, powered and controlled. Argyll Data Development is a Scottish developer specialising in renewable powered digital infrastructure. Formed in 2023, the company’s goal is to establish a new model for data centres that combine energy independence with AI capability. Its flagship venture, the Killellan AI Growth Zone, will transform a 184-acre industrial site on Scotland’s Cowal Peninsula into a green digital campus that hosts both renewable energy generation and high-performance computing.

The other partner, California-based SambaNova Systems, founded in 2017 by former Sun, Oracle and Stanford engineers, designs specialised processors and software platforms for running advanced AI models efficiently. Its technology is already being used by governments, research institutions and enterprises to train and run large language models, with a growing focus on sovereign AI, meaning infrastructures where data stays under national control rather than being processed by global cloud giants.

First Fully Renewable Powered AI Inference Cloud

The new partnership will see Argyll and SambaNova create the UK’s first fully renewable powered AI inference cloud, where AI models are hosted and operated rather than trained. The facility will be built at Killellan Farm near Dunoon on Scotland’s Cowal Peninsula, forming the centrepiece of Argyll’s 184-acre Killellan AI Growth Zone. It will deploy SambaNova’s SN40L systems, a new air-cooled design that uses roughly one tenth of the power of conventional GPU systems, allowing high-density computing without energy-hungry liquid cooling.

Argyll will build and manage the data centre infrastructure and on-site renewable energy network, while SambaNova will supply and operate the AI platform. According to both companies, the project will provide UK enterprises with a secure and sustainable environment to develop and deploy AI systems, all within British borders.

First Phase

The first phase of the Killellan development will deliver between 100 and 600 megawatts of capacity, with plans to scale to more than 2 gigawatts once complete. It will run on a private-wire renewable network using on-site wind, wave and solar power, combined with vanadium flow battery storage for long-duration energy supply. This design will allow the facility to operate independently from the national grid in “island mode”, while still being engineered for future grid integration.

Why It’s Different From Other Data Centres

The Killellan AI Growth Zone stands apart from most data centres for reasons of sovereignty, sustainability and circularity. For example:

1. Its sovereign design. Data sovereignty has become a growing issue for both public and private sector organisations. It refers to keeping sensitive data and AI workloads within the same legal jurisdiction in which they originate. Argyll’s platform will ensure data processed at Killellan remains entirely within UK regulatory and security frameworks.

2. Its renewable-first approach. Instead of relying on grid power supplemented by renewable energy certificates, Argyll intends to generate all its electricity on-site using wind, wave and solar resources from the Cowal Peninsula. Vanadium flow batteries will store excess power, offering more stability than traditional lithium-ion systems.

3. The closed-loop design. Waste heat from the data halls, a by-product of high-intensity computing, will be captured and reused to support vertical farming, aquaculture and local district heating. The company says this will help the site operate as a “circular” digital ecosystem, recycling both energy and heat to minimise waste.

According to Peter Griffiths, executive chairman at Argyll, the project shows that “sustainability and scale can go hand in hand.” He said the goal is not only to make AI greener but also “competitive, compliant and cost-effective.”

Impact On Argyll And SambaNova

For Argyll, the project defines its core mission to create net zero infrastructure that advances UK energy and AI strategies simultaneously. The Killellan site is the company’s first major step in building large-scale digital capacity powered by renewable energy.

For SambaNova, it marks another milestone in a series of global sovereign AI projects. The firm has already supported similar renewable powered AI infrastructures in Australia and Germany. Rodrigo Liang, co-founder and CEO of SambaNova, described Argyll as “a blueprint for scaling AI responsibly”, adding that its systems are “enabling large-model inference with maximum performance per watt, while helping enterprises and governments maintain full control over their data and energy footprint.”

Economic And Regional Benefits

Argyll expects the project to attract up to £15 billion in total investment and create more than 2,000 construction jobs a year, along with 1,200 long-term operational roles. The company forecasts that it will contribute roughly £734 million annually to Scotland’s Gross Value Added once fully operational.

Located near Dunoon, the development also offers a powerful example of regional regeneration. The Cowal Peninsula has a long industrial history but limited modern investment. By repurposing the former quarry site into a hub for green digital infrastructure, Argyll hopes to revitalise the area and support new skills in engineering, energy and technology.

Also, the integration of local education partners, including Dunoon Grammar School and the University of Strathclyde, will aim to build a pipeline of digital and energy sector talent. The company says this collaboration will support both academic research and workforce development tied to the UK’s AI and net zero ambitions.

Users

For UK businesses, especially those handling regulated or confidential information, a sovereign AI cloud could solve two persistent problems, i.e., data security and compliance. For example, many enterprises currently rely on overseas cloud providers, raising questions about data handling, jurisdiction and privacy. Argyll’s system will give companies a domestic alternative. With its combination of renewable energy and energy-efficient hardware, it promises not only a smaller carbon footprint but also predictable energy costs insulated from volatile wholesale markets.

Industries such as finance, healthcare, logistics and energy could also benefit. For example, banks running fraud detection models or hospitals processing medical imaging data could use the cloud to keep sensitive workloads inside the UK while meeting environmental commitments.

Other Global Initiatives

Argyll’s model actually reflects a growing international trend towards sovereign, renewable AI infrastructure. For example, in Australia, SambaNova has partnered with SouthernCrossAI to develop SCX, the country’s first ASIC-based sovereign AI cloud powered entirely by renewables. In Germany, Infercom is preparing to launch an AI inference platform built on SambaNova technology, designed for GDPR-compliant, energy-efficient operation across the EU.

Elsewhere, hyperscale providers such as Microsoft, Google and Amazon have begun to invest heavily in renewable energy contracts for their European and US data centres. However, most still rely on external power purchase agreements rather than self-contained renewable generation, and few operate under fully sovereign data frameworks. Argyll’s combination of on-site renewables, energy storage and UK-only data jurisdiction makes it a distinctive model within this global landscape.

Sustainability And The Future Of AI Infrastructure

AI computing has become one of the fastest growing sources of data centre energy consumption. Recent studies have estimated that global AI workloads could consume as much power annually as a small country by the end of the decade. Projects like Killellan are, therefore, being closely watched as test cases for whether large-scale AI operations can be powered sustainably.

If successful, the site could demonstrate how renewables and advanced computing can coexist without compromising either capacity or carbon goals. Its closed-loop design, where heat and power are continually reused, offers a new benchmark for future AI and cloud campuses.

Challenges And Criticisms

Despite its ambition, the project faces several considerable challenges. For example, the technical complexity of generating hundreds of megawatts of renewable power on-site, combined with long-duration battery storage and the demands of AI cooling, will require significant capital investment and coordination.

There are also practical questions about whether it can truly operate entirely on renewable energy throughout the year. Variability in wind and solar output may still require grid imports unless the storage capacity is large enough to cover prolonged gaps. Independent monitoring will be important to verify the site’s energy sourcing and net zero claims.

Environmental groups are also likely to scrutinise its local impact. For example, while the site’s heat reuse and clean energy credentials are strong, data centre developments of this scale can still affect local ecosystems, landscapes and transport routes. Ensuring transparent consultation and equitable community benefits will be vital if the project is to maintain public support.

For the wider industry, Argyll’s venture highlights the pressure facing data centre developers worldwide to decarbonise operations and localise control of AI infrastructure. The success or failure of the Killellan AI Growth Zone could influence how other countries, and indeed major cloud providers, design the next generation of green, sovereign data campuses.

What Does This Mean For Your Organisation?

If the Killellan AI Growth Zone delivers on its promises, it could mark a turning point for both the data centre industry and the UK’s digital economy. By proving that high-performance AI computing can run on home-grown renewable energy, Argyll and SambaNova are attempting to demonstrate that energy security, data sovereignty and sustainability can all be achieved together rather than traded off against one another. This approach directly aligns with the UK’s ambitions to develop a competitive but responsible AI sector that also contributes to national net zero targets.

For UK businesses, access to a secure and fully sovereign AI cloud powered by renewable energy could give organisations in regulated sectors a compliant and lower-carbon alternative to global hyperscalers. It may also make advanced AI services more cost predictable by stabilising energy costs and reducing exposure to international data rules. For enterprises building or deploying AI, from financial firms to healthcare providers, that combination of energy independence and regulatory assurance could become a key differentiator in the years ahead.

For the Scottish economy, the project means that thousands of construction and long-term jobs are expected, along with skills partnerships and secondary industries such as vertical farming and district heating. However, local engagement and environmental transparency will determine whether those benefits are shared fairly and whether the project sets a genuine precedent for sustainable regional development.

For the wider data centre industry, Killellan is also likely to be a test case for a new model of infrastructure, one that links renewables, storage and high-density computing in a single closed system. If it succeeds, it could influence how sovereign AI facilities are built across Europe and beyond. If it struggles to meet its energy and performance goals, it will still serve as a valuable lesson on the limits of scaling AI sustainably. Either way, the project has already shown that the future of AI infrastructure will depend not only on processing power, but on how responsibly that power is generated, managed and shared.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives