Amazon has accused AI startup Perplexity of illegally accessing its e-commerce systems through its agentic shopping assistant, Comet, marking one of the first major legal tests of how autonomous AI tools interact with major online platforms.
Perplexity and Comet
Perplexity is a fast-growing Silicon Valley AI company valued at around $18 billion and known for its “answer engine”, which competes with Google and ChatGPT by providing direct, cited responses rather than lists of links. Its newest product, Comet, extends this model into what’s known as “agentic browsing”: software that not only searches but also acts on the user’s behalf.
Comet can log into websites using a user’s own credentials, find, compare and purchase products, and complete checkouts automatically. The user might, for example, tell Comet to “find the best-rated 40-litre laundry basket under £30 on Amazon and buy it”. Comet then navigates the site, checks prices and reviews, and completes the order.
Perplexity says Comet is private, with login credentials stored only on the user’s device. It argues that when users delegate tasks to their assistant, the AI is simply acting as their agent, meaning it has the same permissions as the human user.
Amazon’s Legal Threat And Allegations
On 31 October 2025, Amazon sent Perplexity a 10-page cease-and-desist letter through its law firm Hueston Hennigan, demanding it immediately stop “covertly intruding” into Amazon’s online store. The letter essentially accuses Perplexity of breaking US and California computer misuse laws, including the Computer Fraud and Abuse Act (CFAA) and California’s Comprehensive Computer Data Access and Fraud Act (CDAFA), by accessing Amazon’s systems without permission and disguising Comet as a Chrome browser.
Amazon’s counsel, Moez Kaba, wrote that “Perplexity must immediately cease using, enabling, or deploying Comet’s artificial intelligence agents or any other means to covertly intrude into Amazon’s e-commerce websites.” The letter says Comet repeatedly evaded Amazon’s attempts to block it and ignored earlier warnings to identify itself transparently when operating in the Amazon Store.
According to the letter, Perplexity’s unauthorised behaviour dates back to November 2024, when it allegedly used a “Buy with Pro” feature to place orders using Perplexity-managed Prime accounts, a practice that Amazon says violated its Prime terms and led to problems such as customers being unable to process returns. After being told to stop, Amazon says, Perplexity later resumed the same conduct using Comet.
The company also alleges that Comet “degrades the Amazon shopping experience” by failing to consider features like combining deliveries for faster, lower-carbon shipping or presenting important product details. Amazon claims this harms customers and undermines trust in the platform.
Security Risks And Data Concerns
Amazon’s letter also accuses Perplexity of endangering customer data. For example, it points to Comet’s terms of use, which it says grant Perplexity “broad rights to collect passwords, security keys, payment methods, shopping histories, and other sensitive data” while disclaiming liability for data security.
The letter cites security researchers who have identified vulnerabilities in Comet. For example, The Hacker News reported in October that a flaw dubbed “CometJacking” could hijack the AI assistant to steal data, while a Tom’s Hardware investigation in August found that Comet could visit malicious websites and prompt users for banking details without warnings. Amazon says such flaws illustrate the dangers of “non-transparent” agents interacting directly with sensitive e-commerce systems.
Must Act Openly and Be Monitored, Says Amazon
While Amazon insists it is not opposed to AI innovation, it argues that third-party AI agents must act openly so their behaviour can be monitored. “Transparency is critical because it protects a service provider’s right to monitor AI agents and restrict conduct that degrades the shopping experience, erodes customer trust, and creates security risks,” the letter states.
Amazon warns that Perplexity’s actions violate its Conditions of Use, impose significant investigative costs, and cause “irreparable harm” to its customer relationships. It has demanded written confirmation of compliance by 3 November 2025, threatening to pursue “all available legal and equitable remedies” if not.
What Is Agentic Browsing?
Agentic browsing describes AI systems that can autonomously act on users’ behalf, from finding products and booking travel to filling in forms and making payments. The concept represents a step beyond traditional automation, potentially turning AI from a passive search tool into an active personal assistant.
The appeal is that these systems can save time, reduce manual effort, and make repetitive digital tasks simpler. For consumers and business users alike, agentic assistants could automate procurement, research, and routine purchases.
However, this new autonomy also challenges the rules of engagement between users, AI developers, and online platforms. For example, when a human browses a site, the platform can track preferences, display promotions and tailor recommendations. When an AI agent acts in their place, it may bypass all those mechanisms and, crucially, any monetised placements or advertising.
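The transparency Amazon is demanding largely comes down to how an agent identifies itself in the User-Agent header of its web requests. The sketch below is a hypothetical illustration of how a site might tell a self-declared agent apart from one presenting a stock Chrome identity; the “ExampleAgent” token and both User-Agent strings are invented for this example (“PerplexityBot” is the token Perplexity publishes for its crawler, though there is no suggestion Comet uses it).

```python
# Hypothetical sketch: distinguishing a transparently declared AI agent
# from one presenting itself as an ordinary Chrome browser, based only
# on the User-Agent request header. Tokens and UA strings are illustrative.

DECLARED_AGENT_TOKENS = ("ExampleAgent", "PerplexityBot", "GPTBot")

def classify_user_agent(ua: str) -> str:
    """Return 'declared-agent' if the UA string self-identifies as an
    AI agent via a known token, otherwise 'browser'."""
    ua_lower = ua.lower()
    if any(token.lower() in ua_lower for token in DECLARED_AGENT_TOKENS):
        return "declared-agent"
    return "browser"

# A transparent agent appends its own token to the UA string:
transparent = "Mozilla/5.0 (compatible; ExampleAgent/1.0; +https://example.com/agent)"

# A disguised agent reuses a stock Chrome UA and is indistinguishable
# from a human user at this layer:
disguised = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/120.0 Safari/537.36")

print(classify_user_agent(transparent))  # declared-agent
print(classify_user_agent(disguised))    # browser
```

This is why header-level disguising matters in the dispute: once an agent adopts a stock browser identity, a platform can no longer apply agent-specific rules without resorting to heavier behavioural detection.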
Perplexity’s Response
Perplexity quickly went public with its response, publishing a blog post entitled “Bullying is Not Innovation”. It described Amazon’s legal threat as “aggressive” and claimed it was an attempt to “block innovation and make life worse for people”.
The company argued that Comet acts solely under user instruction and therefore should not be treated as an independent bot. “Your AI assistant must be indistinguishable from you,” it wrote. “When Comet visits a website, it does so with your credentials, your permissions, and your rights.”
Perplexity’s blog also accused Amazon of prioritising advertising profits over user freedom. It cited comments by Amazon CEO Andy Jassy, who recently told investors that advertising spend was producing “very unusual” returns, and claimed Amazon wants to restrict independent agents while developing its own approved ones.
Chief executive Aravind Srinivas added that Perplexity “won’t be intimidated” and that it “stands for user choice”. In interviews, he suggested that agentic browsing represents the next stage of digital personalisation, where users, not platforms, control their experiences.
Previous Allegations Against Perplexity
Amazon’s claims are not the first to question Perplexity’s web practices. For example, earlier this year, Cloudflare (a web infrastructure and security company) published research showing that Perplexity’s AI crawlers were accessing websites that had explicitly opted out of AI scraping. Cloudflare alleged that the company disguised its crawler as a regular Chrome browser and used undisclosed IP addresses to avoid detection.
Perplexity denied intentionally breaching restrictions and said any access occurred only when users specifically asked questions about those sites. However, Cloudflare later blocked its traffic network-wide, citing security and transparency concerns.
The startup is also facing ongoing lawsuits from publishers including News Corp, Encyclopaedia Britannica and Merriam-Webster over alleged misuse of their content to train its models. Together, those disputes portray a company pushing at the legal and ethical boundaries of how AI interacts with the web.
Why The Amazon Clash Matters
The dispute with Amazon is shaping up to be an early test case for how much autonomy AI agents will have across the commercial web. For example, Amazon maintains that any software acting on behalf of users must still identify itself, follow platform rules, and respect the right of websites to decide whether to engage with automated systems.
However, Perplexity argues that an AI assistant used with a person’s consent is part of that person’s digital identity and should have the same access as a regular browser session. The company believes restricting that principle could undermine the emerging concept of user-controlled AI and set back progress in agentic browsing.
For Amazon, the matter is tied to the customer experience it has spent decades refining, and one that depends on data visibility, targeted recommendations and carefully managed fulfilment. For AI developers, the case signals the likelihood of tighter scrutiny and the potential for conflict if agents interact with online platforms without explicit approval.
Businesses experimenting with autonomous procurement or digital assistants will also be watching closely. Tools that can buy or book on behalf of staff offer obvious productivity benefits, but only if those agents operate within clear contractual and technical limits.
Regulators are beginning to take interest too. For example, questions are emerging over where accountability lies if an agentic system breaches a website’s terms or handles personal data incorrectly, and whether users, developers or platforms should bear responsibility. How these questions are answered will influence how agentic AI evolves, and how openly such systems are allowed to participate in the online economy.
What Does This Mean For Your Business?
The outcome of Amazon’s confrontation with Perplexity will set a practical benchmark for how far autonomous AI agents can go before platforms intervene. What began as a dispute over one shopping assistant now touches the wider question of how digital power is distributed between users, developers and global platforms. If Amazon succeeds in forcing explicit disclosure and control over third-party agents, it could consolidate platform dominance and slow the development of independent AI tools. If Perplexity’s position gains support, the web could see a surge of user-driven automation that bypasses traditional commercial gateways.
For UK businesses, those already exploring AI tools to handle purchasing, market research or logistics will need to ensure those systems act within recognised platform rules and data protection standards. The eventual precedent could shape how British firms integrate AI agents into supply chains, e-commerce systems and customer service platforms. It may also affect costs and compliance responsibilities, depending on whether platforms like Amazon begin enforcing stricter access requirements on all autonomous systems.
For consumers, the promise of convenience from agentic browsing is balanced by legitimate concerns about data security and transparency. For regulators, the case underscores the urgent need to clarify who is accountable when AI systems act independently. For AI companies, it highlights that technical innovation alone is no longer enough; transparent cooperation with platform owners and adherence to existing legal frameworks will now be part of the competitive landscape.
The Amazon–Perplexity dispute has, therefore, become more than a legal warning. It looks likely to mark the start of a global debate over how automation, commerce and trust can coexist online, and one that every business and policymaker will have to engage with as agentic AI becomes part of everyday digital life.