I saw how an “evil” AI finds vulnerabilities. It’s as scary as you think

Sherri Davidoff, Founder and CEO of LMG Security, opens with a familiar premise: software has vulnerabilities, and attackers exploit them. But then Matt Durrin, Director of Training and Research at LMG Security, drops an unexpected phrase: “Evil AI.” Cue a soft record scratch in my head. “What if hackers can use their evil AI tools that don’t have guardrails to find vulnerabilities before we have a chance to fix them?” Durrin says.

“[We’re] going to show you examples.” And not just screenshots, though as the presentation continues, plenty of those illustrate the points made by the LMG Security team. I’m about to see live demos, too, of one evil AI in particular: WormGPT. Davidoff and Durrin start with a chronological overview of their attempts to gain access to rogue AI. The story ends up revealing a thread of normalcy behind what most people think of as dark, shadowy corners of the internet.

Durrin first describes a couple of unsuccessful attempts to get access to an evil AI. The creator of one such tool had tried to recruit hackers to the cause but was met with skepticism and ultimately failed; another approach, built on social engineering tactics, also fell flat.

The next attempt was successful, however. Durrin explains that the hackers gained access to a high-security server using a combination of exploit kits and password-cracking tools, then spent months gathering information on the organization’s systems while waiting for the right moment to strike.

Finally, after several failed attempts, they breached the organization’s network and stole sensitive data. It was a major coup for the hackers, and one that left the security team stunned.

The experts here are far calmer than I am. I’m remembering something Davidoff said at the beginning of the session: “We are actually in the very early infant stages of [hacker AI].” This is the moment I realize that, as purpose-built tools, WormGPT and similar rogue AIs have a head start in both sniffing out and capitalizing on code weaknesses. Plus, they lower the bar for getting into successful hacking.

Now, as long as you have money for a subscription, you’re in the game. On the other side, I start wondering how constrained the good guys are by their ethics—and their general mindset. The general talk around AI is about the betterment of society and humanity, rather than how to protect against the worst of humanity.

As Davidoff pointed out during the session, AI should be used to help vet code, to catch vulnerabilities before dark AI does. This situation is a problem for us end users. We are the soft, squishy masses, and we’re the ones who pay (sometimes literally) when the systems we rely on daily aren’t well-defended.

We have to deal with the messy aftermath of scams, compromised credit cards, malware, and such. The only silver lining in all this? Those in the shadows typically don’t look too hard at anyone else there with them. Cybersecurity experts should still be able to research and analyze these hacker AI tools, and ultimately improve their own methodologies.

In the meantime, you and I have to focus on how to minimize splash damage whenever a service, platform, or site becomes compromised. Right now that takes many different tricks: passkeys and unique, strong passwords to protect accounts (and password managers to store them all); two-factor authentication; email masks to hide our real email addresses; reliable antivirus on our PCs; a VPN for privacy on open or otherwise unsecured networks; temporary credit card numbers (if your bank offers them); credit freezes; and still more.