AI Browsers Could Leave Users Penniless: A Prompt Injection Warning

AI Browsers Could Leave Users Penniless: A Prompt Injection Warning

As Artificial Intelligence (AI) browsers continue to gain traction, concerns are growing about the potential dangers of something called "prompt injection." Large language models (LLMs), such as those that power popular AI chatbots like ChatGPT, Claude, and Gemini, are designed to follow "prompts," which are the instructions and questions that people provide when looking up information or getting help with a topic.

In a chatbot, the questions you ask the AI are the "prompts." However, AI models aren't great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like "don't write ransomware") from the types of requests that come from users. This lack of clarity can lead to security vulnerabilities.

To showcase the risks, web browser developer Brave recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. What they found was alarming, as they wrote in a blog this week: "As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged-in sessions—such as banking, healthcare, and other critical websites—the risks multiply."

Prompt injection is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data to nudge or outright force an AI into doing something it wasn't meant to do. This can be done using language, not code, making it a sophisticated and stealthy attack method.

Attackers don't need to break into servers or look for traditional software bugs; they just need to be clever with words. For an AI browser, part of the input is the content of the sites it visits. So, it's possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

We need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users, such as answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

However, more recently, we're seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention.

This means they can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish. For example, when you tell your agentic browser, "Find the cheapest flight to Paris next month and book it," the browser will do all the research, compare prices, fill out passenger details, and complete the booking without any extra steps or manual effort—provided it has all the necessary details of course.

Are you seeing the potential dangers of prompt injections here? What if my agentic browser gets new details while visiting a website? I can imagine criminals setting up a website with extremely competitive pricing just to attract visitors, but the real goal is to extract the payment information which the agentic browser needs to make purchases on my behalf.

You could end up paying for someone else's vacation to France. During their research, Brave found that Perplexity's Comet has some vulnerabilities which "underline the security challenges faced by agentic AI implementations in browsers."

The vulnerabilities allow an attack based on indirect prompt injection, which means the malicious instructions are embedded in external content (like a website, or a PDF) that the browser AI assistant processes as part of fulfilling the user's request. There are various ways of hiding that malicious content from a casual inspection.

Brave uses the example of white text on a white background which AI browsers have no problem reading and a human would not see without closer inspection. To quote a user on X: "You can literally get prompt injected and your bank account drained by doomscrolling on reddit"

To prevent this type of prompt injection, it is imperative that agentic browsers understand the difference between user-provided instructions and web content processed to fulfill the instructions and treat them accordingly. Perplexity has attempted twice to fix the vulnerability reported by Brave, but it still hasn't fully mitigated this kind of attack as of the time of this reporting.

While it's always tempting to use the latest gadgets, comes with a certain amount of risk. To limit those risks when using agentic browsers you should:

  • Stay informed about potential threats and security vulnerabilities
  • Keep your devices and software up-to-date
  • Use reputable security software to protect against malware
  • Be cautious when using agentic browsers, especially for sensitive tasks

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.