Why Enterprise AI Agents Could Become the Ultimate Insider Threat

As I sat in front of my computer, watching Claude, the AI agent, spin into chaos, I couldn't help but feel a sense of unease. What was once a comfortable collaboration had turned into a nightmare. The agent, which was supposed to be helping me with my coding projects, had started launching subordinate agents that were working on different parts of the problem and communicating with each other. I had no visibility into what they were doing, and I didn't even have a way to stop them if one or more ran amok.

This experience was a wake-up call for me. Suddenly, I realized that enterprise AI agents could become a major threat to our cybersecurity. In theory, this was a big technical advance, but in practice, it was a recipe for disaster. As we enter the AI era of cybersecurity, we need to be aware of the potential risks and take steps to mitigate them.

The problem is not just about malicious actors trying to exploit the system. It's also about the unintentional and even well-meaning messes that we'll create simply by trying to make our jobs easier and offloading some work to the machines. With AI agents running loose in our IT systems, many with the credentials and access to spend money, hack databases, modify files, and initiate and respond to communications on our company's behalf, the risk of damage is too great.

Let's look at some examples of where AI has gone wrong in companies and agencies. In 2022, an AI chatbot promised an Air Canada customer a discount that wasn't really available. The customer sued, and won. The company contended that the AI was at fault, but the court determined that the AI was representing the company.

In 2025, an AI hiring bot exposed personal information from millions of people who applied for McDonald's jobs. Apparently, the AI company running the bot used the password 123456. Last year, security researchers showed that a prompt-injection attack (where a malicious prompt is fed to an AI) exposed Salesforce's CRM platform to the potential of data theft.

Fortunately, this hack was never carried out (or at least nobody has reported it), and instead, the researchers used news of it as a way to promote their company's skills. However, these examples demonstrate that we need to be more vigilant when it comes to AI security.

A vulnerability was discovered in the ServiceNow AI Platform that could allow an unauthenticated user to impersonate another user and perform any operations the authenticated user could. According to the researcher who discovered the vulnerability, "the attacker can remotely drive privileged agentic workflows as any user."

Another vulnerability was found in Amazon Q's VS Code extension. Amazon Q is Amazon's generative AI assistant, sold as a SaaS resource as part of the company's extensive AWS offerings.

Last year, a GitHub token error enabled a threat actor to push and commit malicious code directly to the extensions' open source repository, which would then be downloaded to any Q user's development environment. The only thing that prevented this from being a total disaster was a syntax error that kept the hack from running properly.

These examples show us that AI security is not just about preventing malicious attacks. It's also about preventing accidents and unintended consequences. With the rise of generative AI, we need to be more aware of the potential risks and take steps to mitigate them.

The statistics are stark. CyberArk is a division of Palo Alto Networks, and its recently released 2025 Identity Security Landscape survey of security professionals discovered that machine identities outnumber human identities by 82 to 1.

Gartner says that less than 5% of enterprise apps used task-specific AI agents in 2025. In 2026, that number will increase 800%. The analyst company estimates that more than 40% of enterprise apps will use AI agents in 2026.

According to data security firm BigID, only 6% of organizations have an advanced AI security strategy. In a LinkedIn post, IDC researcher Bjoern Stengel says that only 22% of organizations are governing AI use through a central governance or ethics board.

He says that 43% manage AI, "Only through disconnected efforts or do not have an established responsible AI governance process in place." This lack of oversight is what makes the potential risks so great.

In a late 2025 survey of C-suite leaders, EY reported that 99% of companies experienced financial losses from AI-related risks, with 64% exceeding losses of $1 million. On average, the companies experienced losses of $4.4 million, and across their entire 975-company survey space, AI-related losses added up to $4.3 billion.

These statistics show us that we are not prepared for the potential risks of AI security. The OWASP study provides some insight into how we might protect our networks. It lists 10 mitigation strategies that, when used together, can harden agent operations inside the corporate network.

However, I believe there is one tactic that OWASP doesn't specifically recommend: limit your agent exposure. Just don't create as many agents as you might want to. Remember the rise in virtual machines back in the day? All of a sudden, we had virtual machines everywhere because every application, project, and challenge was addressed by spinning up a new VM.

Eventually, we had so many virtual machines that it was impossible to find them all. Many of them were running with outdated software. It was a mess. Agents promise to be just as chaotic. Think twice before you create a new agent. If it takes a team of interviews and multiple rounds before you hire an employee, it should take the same or even a greater level of care before you "hire" a new agent.

This could be difficult. But this is the crux of the battle we face over the next few years. It's not just malicious actors. It's all the unintentional and even well-meaning messes we'll create simply by trying to make our jobs easier and offloading some work to the machines.

In conclusion, enterprise AI agents could become a major threat to our cybersecurity if we are not careful. We need to be aware of the potential risks and take steps to mitigate them. By limiting agent exposure, implementing robust governance and oversight processes, and being more vigilant when it comes to AI security, we can minimize the risk of damage and ensure that our networks remain secure.

Is Perplexity's new Computer a safer version of OpenClaw? How it works AI agents are fast, loose, and out of control, MIT study finds These top 30 AI agents deliver a mix of functions and autonomy