Don’t Believe the Hype: Learn How Cybercriminals Are Actually Using AI

AI is transforming cybersecurity on both sides of the cyber battlefield, with threat actors using it to increase the volume, velocity, and sophistication of their attacks. However, not every sensational claim about AI-powered cyberattacks reflects reality. As defenders, we must separate the signal from the noise when it comes to AI-enabled cybercrime.

We're at a critical juncture: understanding how attackers use AI today, and predicting where they'll take it next, will help us defend against and anticipate these operations. Next week at the RSA Conference (RSAC) 2025 in San Francisco, I'll be speaking on a panel alongside experts from UC Berkeley's Center for Long-Term Cybersecurity (CLTC), the Berkeley Risk and Security Lab (BRSL), and Nanyang Technological University in Singapore to discuss AI-enabled cybercrime in depth.

The session will combine practice, policy, and academic perspectives to cut through the hype around AI-enabled cybercrime. Fortinet looks forward to contributing as a part of the panel and continuing its commitment to the related initiative, AI-Enabled Cybercrime: Exploring Risks, Building Awareness, and Guiding Policy Responses.

Understanding the Different Types and Applications of AI

Conversation about AI is everywhere, from marketing hype to media reporting. Beyond separating that hype from reality, organizational leaders need to understand the different types and applications of AI, including generative AI, agentic AI, and weaponized AI, as they work to evolve and improve their security and networking strategies.

Generative AI uses machine learning models to create new content, such as images or text. Agentic AI refers to autonomous systems that can make decisions and take actions in pursuit of their objectives. Weaponized AI, by contrast, involves applying AI to malicious ends, such as carrying out cyberattacks.

The Most Critical Gap in AI-Enabled Cybercrime

Dr. Gil Baram, non-resident research scholar with UC Berkeley, notes that "The most critical gap in AI-enabled cybercrime isn't technical—it's human." This underscores the need for analysts and policymakers to train for uncertainty, question machine-generated insights, and stay alert to deception.

How Attackers Are Using AI Today

AI is lowering the barrier to entry for cybercrime, giving individuals with little to no experience in coding or hacking tools the ability to craft malicious code with minimal effort. Novice and skilled attackers alike are using AI today in a variety of ways, including:

  • Crafting malicious code with ease
  • Utilizing AI-powered deepfakes for social engineering attacks
  • Accessing AI-fueled Cybercrime-as-a-Service (CaaS) offerings, such as reconnaissance services

Future Scenarios and Defense Efforts

We'll break down theoretical cases of AI weaponization, discuss what we're seeing today, and explore the feasibility of future scenarios. We'll also offer insights on where and how CISOs, CTOs, and their teams should focus their defense efforts.

Join Us at RSAC 2025

If you're headed to San Francisco for RSAC, join us to hear public-, private-, and academic-sector perspectives on AI-driven cybercrime. You'll get a pragmatic assessment of the role AI plays in cybercrime, a look at future AI developments, and a clearer understanding of their implications for the cybersecurity community.

Session Name: AI-Enabled Cybercrime: Separating Hype from Reality

Date and Time: Thursday, May 1, 10:50 a.m. PT

Moderator: Leah Walker, Berkeley Risk and Security Lab