How Cybercriminals are Weaponizing AI and What CISOs Should Do About It
In a recent case tracked by Flashpoint, a finance worker at a global firm joined a video call that seemed normal. By the end of it, $25 million was gone. Everyone on the call except the employee was a deepfake. Criminals had used AI-powered cybercrime tactics to impersonate executives convincingly enough to get the payment approved.
The Top Observed Malicious LLMs Mentioned on Telegram

Threat actors are building LLMs specifically for fraud and cybercrime. These are trained on stolen credentials, scam scripts, and hacking guides.
AI-Powered Cybercrime: Deepfakes and Custom Models
Underground communities have created a market for jailbreak prompts, which are inputs that bypass safety restrictions in popular AI models. These are sold in collections, often with Discord support or tiered pricing.
Some prompts are built for phishing, impersonation, or bank fraud. Deepfake services are also becoming more accessible. Vendors offer synthetic video and voice kits bundled with fake documents.
What Makes These Tools Harder to Track
Developers gather user feedback from underground forums and chats, then refine their models. In some cases, they release improved versions within days.
This feedback loop improves performance and expands use cases over time, making it harder for defenders to keep up with the pace of AI-powered cybercrime.
How Defenders Are Responding With AI
Security teams are using AI to keep up with the pace of AI-powered cybercrime, scanning large volumes of data to surface threats earlier.
AI helps scan massive amounts of threat data, surface patterns, and prioritize investigations. For example, analysts used AI to uncover a threat actor’s alternate Telegram channels, saving significant manual effort.
Using AI to Uncover Connections Between Fake Personas
AI can help uncover connections between fake personas, even when their names and avatars are different. It flags when a new tactic starts gaining traction on forums or social media.
Best Practices for Using AI in Cybersecurity
Researchers suggests beginning with repetitive tasks like log review, translation, or entity tagging.
Build confidence in the AI’s accuracy and keep human analysts in control. Have checkpoints to review results, and make sure your team can override or adjust what the tool suggests.
The Limitations of AI in Cybersecurity
“This diffuse environment, rich in vernacular and slang, poses a hurdle for LLMs that are typically trained on more generic or public internet data,” Ian Gray, VP of Cyber Threat Intelligence at Flashpoint, told Help Net Security.
The problem goes deeper than just slang. Threat actors often communicate across multiple niche platforms, each with its own shorthand and tone.
Human Oversight is Crucial
“Despite the utility of AI as a tool for sifting through vast amounts of information, it still requires a human analyst in the loop,” said Gray. “This human oversight is crucial not only for validating the data’s meaning but also for actively helping to train the AI, ensuring that the information is being properly contextualized and understood by the LLM.”
Validating the AI Tool's Core Integrity
Gray also notes that defenders must look beyond surface functionality. “Beyond interpretation challenges, defenders must validate the AI tool’s core integrity and security. This involves assessing its resilience against potential adversarial attacks, such as data poisoning, where malicious data could be injected to corrupt its training or manipulate its outputs.”
Explainability and Retraining are Key
“To build trust and facilitate effective use, AI tools should offer a degree of explainability, allowing human analysts to understand the rationale behind the AI’s conclusions. This transparency is vital for validating its insights and correcting any misinterpretations, especially when dealing with the complexities of underground jargon.”
Moreover, given the constantly evolving threat landscape, mechanisms to detect ‘model drift,’ where the AI’s performance degrades over time due to changes in data patterns, are essential. Regular retraining of the model with updated and curated threat intelligence is paramount to maintaining its effectiveness.
A Symbiotic Relationship Between Human Expertise and Artificial Intelligence
In Gray’s view, “Ultimately, the successful and secure adoption of AI tools for threat intelligence hinges on a symbiotic relationship between human expertise and artificial intelligence. The AI can efficiently process and identify patterns in immense datasets, but the human analyst provides the contextual understanding, linguistic nuance, and critical validation necessary to transform raw data into actionable intelligence.”
Tracking the Lifecycle of Illicit AI Tools
Watching underground forums and chat platforms is important, but it only goes so far. As threat actors shift their tactics, defenders need better ways to keep up with how illicit AI tools are being built, sold, and refined.
“Beyond monitoring chat platforms and underground forums, additional mechanisms and governance strategies are needed to track and respond to the lifecycle of illicit AI tools, especially with the evolution of ‘prompt engineering-as-a-service’ and malicious LLM development,” said Gray.
Understanding How Adversaries Build and Sell These Tools
Anticipating New Iterations and Capabilities
“Flashpoint analysts have observed these models being refined using underground forum posts, breach dumps, and Telegram logs,” Gray said. “Understanding these feedback loops, where users submit failed prompt attempts back to developers for improved performance and expanded outputs, is critical. This insight can help defenders anticipate new iterations and capabilities of these malicious tools.”
Conclusion
Cybercriminals are building AI-powered cybercrime tactics using LLMs specifically designed for fraud and cybercrime.
The key to keeping up with this pace is a symbiotic relationship between human expertise and artificial intelligence. By leveraging AI, defenders can process and identify patterns in immense datasets, but human analysts must provide contextual understanding, linguistic nuance, and critical validation to transform raw data into actionable intelligence.