Oktane Preview with Harish Peri, Invisible Prompt Attacks, and the weekly news! - Harish Peri - ESW #421

Joining us today from Okta is Harish Peri, who's here to share his insights on building frameworks to secure our Agentic AI future. We'll dive into some of the challenges that come with giving agents the power and access they need to accomplish broad tasks autonomously without granting them all the privileges.

Harish, thanks for taking the time to chat with us today! As we continue to push the boundaries of business process automation, it's clear that Agentic AI and protocols like MCP and A2A are playing a crucial role. But how do we strike the right balance between empowering our agents and ensuring their security?

"It's a complex challenge," Harish explains. "We need to find ways to give agents the autonomy they need to accomplish tasks, while also preventing them from becoming vulnerable to attacks. It's a delicate balance, but one that's essential if we're going to unlock the full potential of Agentic AI."

One of the key areas of focus for Okta is building frameworks that can help prevent indirect prompt injection attacks. These types of attacks have been gaining attention in recent years, and it's clear that they pose a significant threat to organizations.

"We're seeing more and more reports of indirect prompt injection issues," Harish notes. "From Michael Bargury's presentation at Black Hat USA 2024, which highlighted the risks of sending an email to a Copilot user, to Tamir Ishay Sharbat's research on AgentFlayer: ChatGPT Connectors 0click Attack at Zenity Labs. It's clear that these types of attacks are becoming increasingly sophisticated and difficult to defend against."

But it's not just ChatGPT and Copilot that are vulnerable to these types of attacks. According to research from Sourcegraph's Amp Code, Google Jules is also susceptible to invisible prompt attacks. And with the rise of integrations like Connectors, which allow data resources connected to ChatGPT to be plundered, the risk is growing.

"It's concerning that all these companies know this stuff is possible, but don't seem to be able to figure out how to prevent it," Harish observes. "We need to find ways to distinguish between intended instruction and instructions injected via attachments or other means outside of the prompt box."

As we continue to navigate the complexities of Agentic AI and security, it's clear that there's still much work to be done. But with organizations like Okta at the forefront, working to build frameworks that can help prevent these types of attacks, we're one step closer to securing our digital future.

And that's all for today's episode of Enterprise Security Weekly! Thanks for tuning in, and join us next time for more news and insights on the latest developments in enterprise security.