# The Rise of Dark AI: A New Frontier in Cyber Threats

The world of cybersecurity is constantly evolving, with new threats emerging every day. One of the most significant concerns among experts is the rise of Dark AI, a new frontier in cyber threats that is being weaponized by nation-state actors and other hacking groups.

### The Commodification of Malicious AI Tools

Cybersecurity experts warn that Dark AI has become increasingly accessible, with malicious AI tools being sold on marketplaces on the dark web. These tools can be used to generate phishing messages, create imposter websites, write malicious code, and produce deepfakes in just seconds. Deepfake audio and video technologies, which previously required significant resources and expertise, are now easily available.

"The rise of Dark AI is not a distant threat—it is already here," said Apeksha Kaushik, Principal Analyst at Gartner. "We are witnessing a rapid shift from theoretical misuse to AI-as-a-service models, where easily accessible text-to-speech tools enable attackers to gather information and impersonate trusted users."

### The Evolution of Cyber Attacks

Dark AI has enabled cybercriminals to launch sophisticated attacks with minimal resources. Previously, a cybercriminal could craft and send only 10 phishing emails, each customized for an individual target, in 10 days, with only a fractional success rate. With Dark AI, the same criminal can launch thousands of personalised phishing emails simultaneously, each tailored to a specific individual based on their digital footprint.

"AI enables single actors to execute enterprise-level attacks with minimal resources and advanced cybercrime capabilities," said Ankit Sharma, Senior Director and Head - Solutions Engineering at Cyble. "Traditional attack methods were static. They couldn’t be modified after the launch button was pressed. However, AI-powered attacks have now altered this narrative. They can adapt based on their target’s responses and change tactics mid-way to increase the chances of a successful hit."

### The Threat Landscape

Kaspersky experts are now observing a darker trend – nation-state actors leveraging Large Language Models (LLMs) in their campaigns. Dark AI refers to the local or remote deployment of unrestricted LLMs within a comprehensive framework or chatbot system, used for malicious, unethical, or unauthorised purposes.

"These systems operate outside standard safety, compliance, or governance controls, often enabling capabilities such as deception, manipulation, cyberattacks, or data abuse without oversight," said Sergey Lozhkin of Kaspersky's Global Research & Analysis Team (GReAT).

### The Consequences

The rise of Dark AI has significant consequences for companies that fail to recognize this evolving threat and do not integrate disinformation security into their broader risk management strategies. According to Apeksha Kaushik, about 80% of companies without robust AI risk mitigation strategies may face catastrophic outcomes, including litigation, reputational damage to leadership, and long-term brand impairment.

### The Response

Enterprises must take proactive steps to address this evolving threat. "We've been tracking such incidents with our detection models that can flag LLM-generated phishing campaigns and malware designed to evade conventional signatures," said Swapna Bapat, Vice-President and Managing Director (India and SAARC) of Palo Alto Networks.
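Commercial detection models of the kind described above are proprietary, but the underlying idea of scoring messages against phishing indicators can be sketched with a toy heuristic. The keyword lists, the markdown-style link check, and the threshold below are all hypothetical illustrations, not any vendor's actual method:

```python
import re

# Toy phishing-risk scorer -- an illustrative sketch only, not a real
# detection model. All keyword lists and rules here are hypothetical.

URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expires"}
CREDENTIAL_WORDS = {"password", "login", "ssn", "account number"}

def phishing_score(text: str) -> int:
    """Return a crude risk score: +1 per matched indicator category."""
    lower = text.lower()
    score = 0
    if any(w in lower for w in URGENCY_WORDS):
        score += 1  # pressure/urgency language
    if any(w in lower for w in CREDENTIAL_WORDS):
        score += 1  # asks for credentials or sensitive identifiers
    # Link text that names one domain while the actual URL points elsewhere.
    for label, href in re.findall(r'\[([^\]]+)\]\((https?://[^)]+)\)', text):
        if label.startswith("http") and label.split("/")[2] not in href:
            score += 1
    return score

msg = ("URGENT: your account is suspended. Verify your password at "
       "[http://bank.com](http://evil.example)")
print(phishing_score(msg))  # → 3, high enough to flag for human review
```

In practice, LLM-generated phishing defeats exactly this kind of static keyword rule by varying wording per target, which is why the detection models quoted above rely on learned signals rather than fixed lists; the sketch only conveys the scoring shape.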

Dark AI is no longer a hypothetical risk but an active threat vector, and Palo Alto Networks' Unit 42 research has shown just how versatile and dangerous these capabilities can be. "In one example, we 'jailbroke' the open-source model DeepSeek using multiple techniques to bypass its built-in safeguards," said Swapna Bapat.

The rise of Dark AI marks a lasting shift in the threat landscape, and organisations that delay a proactive response risk being outpaced by attackers who have already embraced it.