Report Claims 'The Era of AI Hacking Has Arrived' - But Is It Really?

The Future of Cybersecurity: A Digital Game of "Rock 'Em Sock 'Em Robots"

A recent NBC report likens the future of cybersecurity to a digital version of "Rock 'Em Sock 'Em Robots," with offensive- and defensive-minded AI pitted against each other in an ongoing arms race. According to the report, hackers of all stripes, including cybercriminals, spies, researchers, and corporate defenders, have started incorporating AI tools into their work.

LLMs like ChatGPT have become remarkably adept at following plain-language instructions, whether that means translating them into computer code or identifying and summarizing documents. This has proven useful in a range of scenarios, such as the North Korean tech worker scheme, in which AI is used to create the resumes and social media accounts that trick Western tech companies into hiring the workers.

Google vice president of security engineering Heather Adkins told NBC that she hasn't seen anything novel from AI, and that it's "just kind of doing what we already know how to do." Researchers and companies assure us, however, that AI will advance far beyond its current limits. But will it really?

Xbow, a startup, developed an AI tool that climbed to the top of HackerOne's U.S. leaderboard in June. The achievement has sparked interest across the industry, but similar AI tools also produce a significant amount of "slop" that wastes security researchers' time and resources.

Daniel Stenberg, lead developer of the open source curl project, which practically every internet-connected device relies on, has repeatedly bemoaned the time wasted on bogus vulnerability reports generated by AI. He notes that in 2025, roughly 20% of all submissions to the project's security reporting program have been irrelevant, while only about 5% have turned out to be genuine vulnerabilities.

The phenomenon is not unique to Stenberg's project. Many open source maintainers are struggling with similar problems, just without the same degree of visibility. That raises questions about how effective AI really is in cybersecurity, particularly when it comes to finding vulnerabilities on its own.

Russian hackers have started embedding AI in malware used against Ukraine, with the malicious code automatically searching for sensitive files to send back to Moscow. How much of an impact this has had is unclear, however, and it's important to separate hype from reality.

So, can we declare that the era of AI hacking has arrived? It seems premature, especially since AI is mostly being used as a force multiplier rather than a fully automated solution. Or is this just another side effect of the broader interest in AI, fueled by exorbitant spending and geopolitical conflicts?

The Verdict: A Cautionary Tale

As we navigate the complex landscape of AI-driven cybersecurity, it's essential to maintain a critical perspective. While AI has proven useful in certain scenarios, its limitations and potential pitfalls should not be ignored.

Ultimately, the future of cybersecurity will likely involve a continued cat-and-mouse game between offensive- and defensive-minded AI. But let's take a step back and assess the situation before declaring that the era of AI hacking has arrived.
