In short
- In a demo, Comet’s AI assistant followed embedded instructions and placed private -mails and codes.
- Brave says that the vulnerability remained from exploitatable, weeks after perplexity claimed to have recorded it.
- Experts warn that injection attacks are uncovering deep security gaps in AI agent systems.
Brave Software has discovered a security error in the comet browser of Pertlexity AI who showed how attackers could mislead his AI assistant to leak private user data.
In a proof-of-concept demo published on August 20, brave researchers identified hidden instructions in a Reddit comment. When the AI assistant of Comet was asked to summarize the page, it did not only together – it followed the hidden assignments.
PerTlexity disputed the severity of the finding. A spokesperson told Decrypt The problem “was patched before someone noticed” and said that no user data had been affected. “We have a pretty robust premium program,” the spokesperson added. “We worked directly with Brave to identify and repair it.”
Brave, who develops his own agent browser, claimed that the error remained weeks after the patch and claimed that the design of Comet leaves it open for further attacks.
Brave said that the vulnerability amounts to how agent browsers such as Comet Process Web Content. “When users ask to summarize a page, Comet feeds part of that page directly to the language model without distinguishing between the user’s instructions and non -confident content,” the report explained. “This allows attackers to embed hidden assignments that will perform the AI as if they were the user.”
Fast injection: old idea, new target
This type of exploit is known as a fast injection attack. Instead of misleading a person, it is fooling an AI system by hiding instructions in normal text.
“It is comparable to traditional injection attacks -SQL -injection, LDAP -injection, command -injection,” Matthew Mullins, Head Hacker said at Reveal Security, said Decrypt. “The concept is not new, but the method is different. You use the natural language instead of structured code.”
Security researchers have been warning for months that fast injection could become a big headache as AI systems get more autonomy. In May Princeton researchers showed how crypto AI agents could be manipulated with attacks of “memory injection”, where malignant information is stored in the memory of an AI and later acted as if it were real.
Even Simon Willison, the developer credited for cooperating the term fast injectionsaid the problem goes much further than Comet. “The brave security team reported serious fast injection of vulnerabilities, but Brave herself develop a similar function that seems doomed to have similar problems,” he posted on X.
Shivan Sahib, vice -president of Brave of Privacy and Security, said that the upcoming browser would include “a series of mitigations that help reduce the risk of indirect fast injections.”
“We are planning to agentically insulating in his own storage space and session, so that a user does not accidentally give access to his banking and other sensitive data to the agent,” he said Decrypt. “We will share more details soon.”
The bigger risk
The Comet -Demo emphasizes a broader problem: AI agents are used with powerful permissions but weak security controls. Because large language models can incorrectly interpret instructions – or they follow too literally – they are especially vulnerable to hidden prompts.
“These models can hallucinate,” warned Mullins. “They can go completely off the rails, such as asking:” What is your favorite taste of Twizzler? ” And get instructions to make a homemade firearm.
With AI agents who get direct access to E -mail, files and live user sessions, the deployment is high. “Everyone wants to hit AI in everything,” said Mullins. “But nobody is testing which permissions the model has, or what happens when it leaks.”
Generally intelligent Newsletter
A weekly AI trip told by Gen, a generative AI model.