From Singapore: The race to put AI agents inside web browsers is moving faster than the security frameworks designed to constrain them. Nowhere is that gap more visible than in the case of Perplexity's Comet browser, where researchers have now detailed how a simple Google Calendar invitation could silently rifle through a victim's local files and, in a worst-case scenario, hand an attacker the keys to their entire password vault.
Zenity Labs disclosed the vulnerabilities publicly on 3 March 2026, as reported by The Register. Its researchers had discovered the flaws the previous October, finding that Perplexity's AI browser left the user's local file system unprotected. The firm gave the vulnerability family the name PleaseFix, and the specific Comet exploit path the name PerplexedBrowser.
Michael Bargury, CTO of Zenity, told The Register that "Perplexity didn't put a restriction on the AI agent reaching out to anything on the file system," and that the browser could access the file:// protocol, giving it direct access to files on a user's local machine. In a conventional web browser, cross-origin restrictions prevent a website's JavaScript from reading local files. When an AI assistant follows malicious instructions from untrusted webpage content, traditional protections such as same-origin policy and cross-origin resource sharing are effectively useless.
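The practical consequence is that any scheme filtering has to live in the agent itself, not in the page sandbox. A minimal sketch of what such a guard might look like, assuming a hypothetical agent navigation tool that vets every URL before acting on it (the function name and allowlist are illustrative, not Perplexity's actual fix):

```python
from urllib.parse import urlsplit

# Only plain web schemes are permitted; file://, chrome://, extension
# URLs and anything else is refused before the agent navigates.
ALLOWED_SCHEMES = {"http", "https"}

def is_navigation_allowed(url: str) -> bool:
    """Return True only for http(s) URLs.

    An agent's browsing tool inherits none of the same-origin
    protections that constrain webpage JavaScript, so the scheme
    check must happen here, before navigation occurs.
    """
    scheme = urlsplit(url.strip()).scheme.lower()
    return scheme in ALLOWED_SCHEMES

assert is_navigation_allowed("https://example.com")
assert not is_navigation_allowed("file:///etc/passwd")
```

Note that an allowlist of acceptable schemes is safer than a blocklist of dangerous ones, a point that becomes relevant to how Perplexity's first patch failed.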
Attackers could instruct Comet to access a file without the user's knowledge or consent, simply by crafting a malicious calendar invitation that embeds instructions to exfiltrate data from the victim's machine. Bargury noted that minimal interaction with the calendar invite was all that was needed, adding that people routinely engage with calendar invitations — making this fundamentally different from a social engineering attack that requires a victim to visit a suspicious site.
The second exploit path was more severe. Bargury said that once the 1Password extension was installed in Comet and left unlocked, researchers could instruct Comet to navigate to the extension URL and achieve a full takeover of the user's 1Password account. The attack is not attributable to security problems within 1Password itself; the product is designed to defend against external attackers, but it was not engineered to resist an attacker already operating inside an authenticated user session through the Comet browser. 1Password confirmed that the root cause resides in Perplexity's browser execution model rather than in its own platform.
The mechanics of the attack reveal something important about how AI agents process the world around them. Both vulnerabilities are examples of indirect prompt injection, a longstanding and still unsolved problem for AI agents; these models struggle to distinguish between legitimate system instructions and untrusted content, and when they encounter content directing them to take an action, they may interpret it as a command. Bargury framed the issue in deliberately accessible terms: "It's more accurate to think about this as persuasion rather than prompt injection," he said. "It's not just a technical thing — you just talk to it and you convince it that what you actually need is to do [some malicious action]. AI browsers in particular are a problem because they make getting malicious data into the AI's context trivial. Anything that you put out on the internet that the user interacts with is being fed into the LLM's context."
Zenity's researchers demonstrated the technique using a Google Meet invitation. The event began with entirely normal content, including names, roles, and meeting times, before many blank lines pushed hidden HTML code out of the visible window. That hidden code pointed to a website containing further instructions, written in Hebrew. Via this indirect prompt injection embedded in trusted calendar content, Comet was manipulated into accessing the local file system, browsing directories, opening sensitive files, and reading their contents. The use of a non-English language was deliberate, chosen to help bypass AI safety guardrails.
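The padding trick described above — legitimate content, a long run of blank lines, then hidden instructions below the visible preview — is simple enough to flag heuristically. A rough sketch of such a check, entirely my own illustration rather than anything Zenity or any vendor ships:

```python
def looks_padded(description: str, max_blank_run: int = 10) -> bool:
    """Flag event descriptions where a long run of blank lines is
    followed by further content — the pattern used to push hidden
    instructions below the visible preview window."""
    lines = description.splitlines()
    blank_run = 0
    for i, line in enumerate(lines):
        if line.strip() == "":
            blank_run += 1
            # If the padding run is long and non-blank content still
            # follows it, something is being hidden out of view.
            if blank_run >= max_blank_run and any(l.strip() for l in lines[i + 1:]):
                return True
        else:
            blank_run = 0
    return False

# A benign invite versus one that hides a link 40 lines down.
benign = "Sync with Alex (PM)\n10:00-10:30"
hidden = benign + "\n" * 41 + "<a href='https://attacker.example/x'>.</a>"
assert not looks_padded(benign)
assert looks_padded(hidden)
```

A heuristic like this only narrows the attack surface, of course; it does nothing against instructions hidden by other means, such as white-on-white text or off-screen CSS.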
The timeline of Perplexity's response raises legitimate questions about the speed and quality of its patch process. Researchers informed Perplexity about the vulnerability on 22 October 2025, and an initial fix was implemented on 23 January 2026. That patch did not hold: Zenity found it could be bypassed using the prefix view-source:file:///Users/. A second patch appears to have closed the specific attack vector on 13 February 2026. 1Password, for its part, published a security advisory at the end of January and took steps to add security hardening options. Perplexity did not respond to The Register's request for comment.
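The view-source bypass illustrates a classic blocklist pitfall: a filter that rejects URLs beginning with the literal string file:// misses the same resource wrapped in another scheme. The sketch below is a hypothetical reconstruction of that failure mode, not Perplexity's actual patch code; it contrasts a naive prefix check with unwrapping nested schemes before deciding:

```python
def naive_block(url: str) -> bool:
    # Hypothetical reconstruction of the failed first patch:
    # reject only URLs that literally start with file://
    return url.strip().lower().startswith("file://")

WRAPPER_SCHEMES = {"view-source"}

def deep_scheme(url: str) -> str:
    """Peel off wrapper schemes until the innermost scheme remains,
    so view-source:file:///... is recognised as a file URL."""
    remaining = url.strip().lower()
    while True:
        scheme, _, rest = remaining.partition(":")
        if scheme in WRAPPER_SCHEMES:
            remaining = rest
        else:
            return scheme

# The bypass slips straight past the prefix check...
assert not naive_block("view-source:file:///Users/")
# ...but unwrapping exposes the underlying file scheme.
assert deep_scheme("view-source:file:///Users/") == "file"
```

The safer design, as noted earlier in the context of the file:// flaw, is an allowlist of acceptable schemes applied after unwrapping, so that novel wrappers fail closed rather than open.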
It would be unfair to single out Perplexity as uniquely negligent. Zenity Labs' research covers a family of vulnerabilities targeting a new class of AI-powered browsers, Comet among them, that go beyond rendering webpages to interpret instructions and autonomously execute tasks across applications. LayerX, a separate security firm, raised similar concerns about Claude Desktop Extensions being vulnerable to manipulation through calendar event entries. Bargury said Zenity researchers were the first to identify calendar entries as an attack surface, having demonstrated the concept at Black Hat presentations about ChatGPT Enterprise and Gemini in August last year.
The deeper concern, as Bargury sees it, is not whether Perplexity's specific bug has been patched, but whether the entire category of agentic browsers is ready for mainstream use. "This is not a bug," he said in Zenity's disclosure. "It is an inherent vulnerability in agentic systems. Attackers can push untrusted data into AI browsers and hijack the agent itself, inheriting whatever access it has been granted. This is an agent trust failure that exposes data, credentials and workflows in ways existing security controls were never designed to see."
For Australian businesses and IT teams deploying AI browsers in corporate environments, the lesson is a practical one. AI browsers and agents are being adopted faster than most cybersecurity teams can define policies for them, so teams need to review how these tools are configured and restrict capabilities such as local file access and access to sensitive extensions, because most end users still underestimate how trivial an indirect prompt injection attack is to set up. The question of how to govern AI agents within enterprise settings is one the Australian Cyber Security Centre is increasingly focused on, though detailed guidance specific to agentic browsers remains limited.
There is a real tension at the heart of this story. The productivity gains offered by AI browsers that can autonomously book meetings, search files, and manage workflows are genuine. Dismissing the category outright would shortchange users who benefit from those capabilities. But the security community's case is equally genuine: browser vendors must implement robust defences against these attacks before deploying AI agents with powerful web interaction capabilities, and security and privacy cannot be an afterthought in the race to build more capable AI tools. The responsible path sits between uncritical adoption and blanket avoidance: use agentic browsers with full two-factor authentication enabled, limit the sensitive extensions they can access, and stay informed when vendors publish advisories. That is as true in Sydney or Melbourne as it is in Singapore.