Chatbots powered by large language models (LLMs) are giving out fake or incorrect login URLs, exposing users to phishing risks, according to research from cybersecurity firm Netcraft.
In tests of GPT-4.1 family models, the engines behind services such as Perplexity and Microsoft Copilot, only 66 per cent of the login links provided were correct. The rest pointed to inactive, unrelated, or unclaimed domains that scammers could register and exploit. In one case, Perplexity recommended a phishing site posing as Wells Fargo's login page.
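Netcraft's exact methodology isn't detailed here, but the failure buckets the research describes, a correct link, an unclaimed domain, or a live but unrelated one, can be illustrated with a rough triage sketch. Everything in it is an assumption for illustration: the allowlist, the domain names, and the DNS check as a stand-in for "unclaimed".

```python
import socket
from urllib.parse import urlparse

# Assumed allowlist for the sketch; a real check would use a vetted
# brand-to-domain mapping, not a hard-coded set.
OFFICIAL_DOMAINS = {"wellsfargo.com"}

def classify_suggestion(suggested_url: str) -> str:
    """Rough triage of a chatbot-suggested login link."""
    host = (urlparse(suggested_url).hostname or "").lower().removeprefix("www.")
    if any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS):
        return "official"
    try:
        socket.gethostbyname(host)   # does the domain resolve at all?
    except socket.gaierror:
        return "unclaimed"           # a squatter could still register it
    return "unrelated"               # live, but not the brand's own domain

print(classify_suggestion("https://www.wellsfargo.com/login"))   # official
```

A chatbot answer landing in either of the last two buckets is exactly the phishing surface the research warns about: an unclaimed domain can be registered by an attacker after the model starts recommending it.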
Smaller brands were more likely to be misrepresented because they appear less often in AI training data. Netcraft also found more than 17,000 AI-generated phishing pages already targeting users.
To stay safe, businesses should avoid relying on AI for login links, train staff to recognise phishing attempts, and push for stronger safeguards from AI providers.
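One way to act on the first recommendation is to never surface model-suggested URLs to users verbatim. A minimal sketch, assuming the business maintains its own vetted link table (the mapping, brand name, and URLs below are illustrative placeholders, not from the research):

```python
import re

# Assumed, business-maintained mapping of brands to vetted login URLs.
VETTED_LOGIN_LINKS = {
    "examplebank": "https://login.examplebank.com/",  # placeholder URL
}

URL_RE = re.compile(r"https?://\S+")

def sanitise_reply(brand: str, model_reply: str) -> str:
    """Swap any URL in a chatbot reply for the vetted link, or strip it."""
    vetted = VETTED_LOGIN_LINKS.get(brand.lower())
    if vetted is None:
        # No vetted link on file: removing the URL beats serving a bad one.
        return URL_RE.sub("[link removed - see the official website]", model_reply)
    return URL_RE.sub(vetted, model_reply)

print(sanitise_reply("ExampleBank", "Log in at https://examp1ebank-login.com/ today."))
# -> "Log in at https://login.examplebank.com/ today."
```

The design choice is deliberate: the model's text is treated as untrusted input, and the only links that ever reach a user come from a source the business controls.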