**AI-Powered Cyberattack Kits: "Just a Matter of Time" Warns Google Exec**
As the threat landscape continues to evolve, cybersecurity experts are sounding the alarm on the potential for AI-powered cyberattack kits to become a reality. Heather Adkins, Vice President of Security Engineering at Google, warned that it's only a matter of time before such tools become available to malicious actors.
In an interview with the Google Cloud Security podcast, Adkins noted that while we may not see full-fledged end-to-end AI-powered toolkits just yet, cybercriminals are already leveraging AI for small tasks, such as grammar and spell-checking in phishing emails. "It's just a matter of time before somebody puts all of these things together, end-to-end," she said.
Adkins said her greatest concern is an AI-enabled attack that could be prompted to hack any given company and come back with root access within a week. She warned of a "slow ramp" over the next six to 18 months as attackers continue to refine their techniques.
The Google Threat Intelligence Group (GTIG) has been monitoring the development of AI-powered attacks and notes that malware families are already using Large Language Models (LLMs) to generate commands for stealing victim data. Sandra Joyce, VP at GTIG, added that China, Iran, and North Korea are all abusing AI tools to aid different stages of their respective attacks.
Anton Chuvakin, Security Advisor at Google's office of the CISO, echoed Adkins' concerns, noting that the real threat lies in the democratization of threats. "To me, the more serious threat isn't the APT (Advanced Persistent Threat), it's the Metasploit moment," he said.
Chuvakin is referring to the 20-year-old exploit framework, Metasploit, which was originally designed as a legitimate pentesting tool but soon fell into the wrong hands. He worries that a similar fate could befall AI-powered toolkits if they become easily accessible to malicious actors.
The worst-case scenario, Adkins said, would resemble a Morris worm-type event, with autonomously executing ransomware encrypting computers en masse. Alternatively, it could play out like the Conficker worm, which caused widespread panic but little actual damage.
While LLMs are still struggling with basic tasks such as discerning right from wrong, experts fear that when or if they do become more sophisticated, attackers may gain an even greater first-mover advantage over defenders. In a post-AI era, the definition of success for cybersecurity professionals may shift from preventing attacks to minimizing damage and duration.
To combat this threat, Adkins suggested that AI-enabled defenses should be able to shut down instances on which malicious activity is detected without causing reliability problems for the rest of the fleet. She emphasized the need for real-time decision-making and disruption capabilities to counter AI attackers, which may be less resilient than their human counterparts.
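Adkins did not describe a specific implementation, but the idea of automatically quarantining a compromised instance while protecting availability can be sketched roughly as follows. This is a hypothetical illustration, not Google's design; the `AutoResponder` class, its `min_healthy` threshold, and the fleet model are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class Instance:
    name: str
    quarantined: bool = False


class AutoResponder:
    """Hypothetical containment logic: shut down only the instance flagged
    as malicious, and refuse to act if doing so would threaten reliability."""

    def __init__(self, fleet, min_healthy=2):
        self.fleet = fleet
        # Never drop below this many live instances; escalate to a human instead.
        self.min_healthy = min_healthy

    def handle_detection(self, name):
        live = [i for i in self.fleet if not i.quarantined]
        target = next((i for i in live if i.name == name), None)
        if target is None:
            return False  # already quarantined, or unknown instance
        if len(live) - 1 < self.min_healthy:
            return False  # automatic shutdown would risk an outage
        target.quarantined = True
        return True


fleet = [Instance("web-1"), Instance("web-2"), Instance("web-3")]
responder = AutoResponder(fleet, min_healthy=2)
print(responder.handle_detection("web-1"))  # True: two healthy instances remain
print(responder.handle_detection("web-2"))  # False: would leave the fleet below minimum
```

The reliability check is the point Adkins raised: a defense that reflexively kills anything suspicious can itself become a denial-of-service vector, so automated disruption needs a floor below which it defers to a human.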
As the threat landscape continues to evolve, it's clear that cybersecurity professionals must adapt and prepare for a "really different world" where AI-powered cyberattack kits are a reality. With the potential for significant damage and disruption on the horizon, experts urge everyone to start thinking about this new challenge and preparing for it.
**Related articles:**
* Google Cloud Security Podcast: Heather Adkins on AI-powered cyberattack kits
* Google Threat Intelligence Group (GTIG) overview
© 2023 The Register. All rights reserved.