# Be Careful With AI Browsers: A Malicious Image Could Hack Them

As we come to rely on artificial intelligence (AI) for more and more tasks, a new threat has emerged that could compromise our online safety. Recent security research shows that AI-powered browsers can be turned against their users, in some cases merely by analyzing an image on the web.

On the same day OpenAI introduced its ChatGPT Atlas browser, Brave Software published details on how to trick AI browsers into carrying out malicious instructions.

The potential flaw is another prompt injection attack, in which a hacker secretly feeds malicious instructions to an AI chatbot, which might then load a dangerous website or read the user's email. Brave, which develops the privacy-focused Brave browser, has been warning about the trade-offs of embedding automated AI agents into such software.

"In our attack, we were able to hide prompt injection instructions in images using a faint light blue text on a yellow background. This means that the malicious instructions are effectively hidden from the user," Brave Software wrote.

If Perplexity's Comet browser is asked to analyze such an image, it will read the hidden malicious instructions and may execute them. Brave built an attack demo around a malicious image, which appears to have successfully tricked Comet into carrying out at least some of the hidden commands, including looking up the user's email and visiting a hacker-controlled website.

Brave also discovered a similar prompt injection attack against the Fellou browser, triggered when the software was merely told to navigate to a hacker-controlled website.

The attack demo shows that Fellou reads the hidden instructions on the site and executes them, including opening the user’s email inbox and then passing the subject line of the most recent email to a hacker-controlled website.

"While Fellou browser demonstrated some resistance to hidden instruction attacks, it still treats visible web page content as trusted input to its LLM [large language model]. Surprisingly, we found that simply asking the browser to go to a website causes the browser to send the website’s content to their LLM,” Brave says.

The good news is that the user can apparently intervene and stop the attack, since the browser's activity is fairly visible while the AI processes the task. Still, Brave argues the research underscores how “indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers.”

"The scariest aspect of these security flaws is that an AI assistant can act with the user’s authenticated privileges,” Brave added in a tweet.

“An agentic browser hijacked by a malicious site can access a user’s banking, work email, or other sensitive accounts.” In response, the company is calling on AI browser makers to implement additional safeguards to prevent potential hacks. This includes “explicit consent from users for agentic browsing actions like opening sites or reading emails,” which OpenAI and Microsoft are already doing to some extent with their own AI implementations.
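What such a consent step might look like in practice: the sketch below is purely hypothetical (not any vendor's actual API) and simply gates every sensitive agentic action behind an explicit user confirmation.

```python
# A minimal sketch of the kind of consent gate Brave recommends: the agent
# must get explicit user approval before any sensitive action. The action
# names and structure are hypothetical, not any real browser's interface.
SENSITIVE_ACTIONS = {"open_site", "read_email", "send_email"}

def run_action(action: str, target: str) -> None:
    if action in SENSITIVE_ACTIONS:
        answer = input(f"Agent wants to {action} on {target!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            print("Action blocked by user.")
            return
    print(f"Executing {action} on {target!r}...")

run_action("read_email", "inbox")
```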

Brave reported the flaws to Perplexity and Fellou. Fellou didn’t immediately respond to a request for comment.

But Perplexity tells PCMag: "We worked closely with Brave on this issue through our active bug bounty program (the flaw is patched, unreproducible, and was never exploited by any user)." Still, Perplexity is pushing back on the alarmism from Brave. "We've been dismayed to see how they mischaracterize that work in public. Nonetheless, we encourage visibility for all security conversations as the AI age introduces ever more variables and attack points," Perplexity said.

“We're the leaders in security research for AI assistants,” Perplexity concludes.