**AI Has Made Hacking Cheap. That Changes Everything for Business**

Welcome to Eye on AI, where we explore the latest developments in artificial intelligence and their impact on our world. In this edition, I'm focusing on a pressing concern: how AI is making sophisticated cyberattacks cheap and widely accessible.

Two months ago, I quoted a security leader who described the current moment as "grim," as businesses struggle to secure systems in a world where AI agents are no longer just answering questions, but acting autonomously. This week, I spoke with Gal Nagli, head of threat exposure at $32 billion cloud security startup Wiz, and Omer Nevo, cofounder and CTO at Irregular, a Sequoia-backed AI security lab that works with OpenAI, Anthropic, and Google DeepMind.

Wiz and Irregular recently completed a joint study on the true economics of AI-driven cyberattacks. They found that AI-powered hacking is becoming incredibly cheap. In their tests, AI agents completed sophisticated offensive security challenges for under $50 in LLM costs — tasks that would typically cost close to $100,000 if carried out by human researchers paid to find flaws before criminals do.

According to Nevo, even seasoned professionals with deep experience in both AI and cybersecurity have been surprised by what AI can now do. "We're seeing more and more that models are able to solve challenges that are genuine expert level, even for offensive cybersecurity professionals," he said. This is a particular problem now, because in many organizations, non-tech professionals, such as those in marketing or design, are bringing applications to life using accessible coding tools like Anthropic's Claude Code and OpenAI's Codex.

These individuals may not be engineers, but they're developing new applications without knowing the security risks involved. "They don't know anything about security, they just develop new applications by themselves, and they use sensitive data exposed to the public Internet, and then they are super easy to exploit," Nagli explained.

The research suggests that the cat-and-mouse game of cybersecurity is no longer constrained by cost. Criminals no longer need to carefully choose their targets if an AI agent can probe and exploit systems for just a few dollars. Under more realistic conditions, the researchers did see performance drop and costs double. But the larger takeaway remains: attacks are getting cheaper and faster to launch.

Most companies are still defending themselves as if every serious attack requires expensive human labor. "If we reach the point where AI is able to conduct sophisticated attacks, and it's able to do that at scale, suddenly a lot more people will be exposed, and that means that [even at] smaller organizations people will need to have considerably better awareness of cybersecurity than they have today," Nevo said.

At the same time, this raises the question: "Are we helping defenders utilize AI fast enough to be able to keep up with what offensive actors are already doing?" With AI on the rise, companies must adapt quickly or risk being left behind. Here's more AI news:

**More AI News**