Researchers discover suite of agentic AI browser vulnerabilities
Researchers have discovered multiple vulnerabilities that let attackers to quietly hijack agentic AI browsers.
Researchers at Zenity Labs discovered these flaws, which affected multiple AI browsers, including Perplexity’s Comet. Before being patched, an attacker could exploit them via a legitimate calendar invite, using a prompt injection to force the AI browser to act against its user.
“These issues do not target a single application bug,” Stav Cohen, senior AI security researcher at Zenity Labs, wrote in a blog published Tuesday. “They exploit the execution model and trust boundaries of AI agents, allowing attacker controlled content to trigger autonomous behavior across connected tools and workflows.”
Prompt injection and AI hijacking attacks work because many agentic browsers can’t differentiate between instructions given by users and any outside content they ingest. Essentially, any webpage or email the browser encounters, if phrased the right way, could be interpreted as a straightforward prompt instruction.
By seeding the calendar invite with malicious prompts, the browser can be directed to access local file systems, browse directories, open and read files, and exfiltrate data to a third-party server. No malware or special access is required, only that the user accept the invite so the browser performs “each step as part of what it believes is a legitimate task delegated by the user.”
“Comet follows its normal execution model and operates within its intended capabilities,” Cohen wrote. “The agent is persuaded that what the user actually asked for is what the attacker desires.”
The potential damage doesn’t stop there. Another vulnerability allowed an attacker to use similar indirect prompting techniques to have Comet take over a user’s password manager. If a user is already signed in to the service, the agentic browser also has full access, and can silently change settings and passwords or extract secrets while the user receives “benign” outputs.
According to Zenity, the vulnerabilities were reported to Perplexity last year, with a fix issued in February 2026.
Prompt injection attacks remain one of the biggest ongoing challenges to integrating AI into organizations’ technology stacks, because eliminating these flaws entirely may be impossible. : OpenAI said in December that such vulnerabilities are “unlikely to ever” be fully solved in agentic browsers, though the company said the overall dangers could be reduced through automated attack discovery, adversarial training and new “system level safeguards.”
Cohen notes that with traditional browsers, local file access and other sensitive tasks can only be obtained with explicit user permission. But agentic browsers have far more autonomy to infer whether that access is necessary to carry out the user’s request, and take action without user input. While researchers used calendar invites to deliver the malicious prompts, the same technique can be deployed through nearly any form of written content.
“Once that decision is delegated, access to sensitive resources depends on the agent’s interpretation of intent rather than on an explicit user action,” he wrote. “At that point, the separation between user intent and agent execution becomes a security-critical concern.”