From Vibe Coding To Vibe Hacking — AI In A Hoodie
A new era of cyber threats is emerging, one that leverages the power of artificial intelligence (AI) to create sophisticated and autonomous attacks. But how advanced are these AI-powered cyberattacks, and can we expect to see "vibe hacking" become a major threat in the near future?
From what we know so far, AI has already been weaponized by threat actors in various ways. AI-powered phone calls have been used to target Gmail users, and 51% of all spam is now reported to be generated by AI. Deepfakes have also become a cybersecurity issue, with attackers using them to spread misinformation and create convincing fake content.
But what about vibe hacking? Vibe coding, a practice that has gained popularity in recent times, involves prompting large language models (LLMs) in natural language to generate code, with the human steering rather than writing it. While vibe coding makes life easier for developers by delegating some programming tasks to AI, it doesn't mean the programmer can simply sit back and relax. Vibe hacking applies the same approach to offensive ends: prompting an LLM to produce exploits, malware, or attack tooling instead of legitimate software.
However, the question remains: how close are we to fully autonomous attacks using vibe hacking? According to Michele Campobasso, a senior security researcher at Forescout, there is "no clear evidence of real threat actors" doing this. Campobasso argues that most reports link LLM use to tasks where language matters more than code, such as phishing, influence operations, contextualizing vulnerabilities, or generating boilerplate malware components.
Campobasso's latest analysis tested over 50 AI models against four test cases drawn from industry-standard datasets and cybersecurity wargames. The results were informative: attackers still cannot rely on a single tool to cover the full exploitation pipeline, and the LLMs produced inconsistent results with high failure rates.
"Even when models completed exploit development tasks," Campobasso said, "they required substantial user guidance." This suggests that vibe hacking is still in its early stages and requires significant human intervention to be effective.
However, the threat of vibe hacking cannot be ignored. As LLMs continue to improve, it's possible that we'll see more sophisticated attacks emerge. Defenders should start preparing now by understanding that the fundamentals of cybersecurity remain unchanged: an AI-generated exploit is still just an exploit, and it can be detected, blocked, or mitigated by patching.
"The age of vibe hacking is approaching," Campobasso stated, "although not as fast as the vibe coding phenomenon would imply." But with the right knowledge and preparation, defenders can stay ahead of these emerging threats.
Conclusion
While vibe hacking may seem like a distant threat, it's essential to take its potential seriously. As AI-powered cyberattacks continue to evolve, defenders must be prepared to adapt and improve their defenses. By understanding the strengths and limitations of LLMs, we can stay one step ahead of the emerging threat of vibe hacking.