How Clickfix and AI are helping hackers break into your systems at an alarming rate
Cybercriminals are shifting their techniques to focus on the human element, with Clickfix social engineering and AI abuse becoming even more popular. Mimecast's latest Global Threat Intelligence Report, which analyzed trillions of signals and tracked threat activity from January to September 2025, highlights two trends that point to this shift in tactics.
Many cybersecurity companies and tech giants, including Microsoft, are alerting users to Clickfix, a social engineering technique being adopted by threat actors worldwide. Clickfix sidesteps traditional anti-phishing defenses by luring victims into granting initial access to a network or system themselves, eliminating the need for malware to establish that foothold.
Victims are shown fake error messages, alerts about seemingly minor technical issues, and other dubious lures, such as supposedly free ways to install licensed software, alongside a simple step-by-step guide. Unfortunately, these "guides" direct users to launch PowerShell and paste in commands that download malicious payloads, including information stealers and ransomware.
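For defenders, one practical way to spot this pattern is to check where the pasted commands end up. Many Clickfix campaigns tell the victim to paste a command into the Windows Run dialog, whose history is stored in the RunMRU registry key. The sketch below is an illustrative hunting heuristic, not something taken from Mimecast's report: it reads that history and flags entries that look like PowerShell or mshta download commands.

```python
# clickfix_hunt.py -- illustrative sketch, not from the Mimecast report. Windows-only.
# Scans the Run-dialog history (RunMRU) for command patterns commonly
# pasted by Clickfix victims, such as encoded or downloading PowerShell.
import re
import winreg

SUSPICIOUS = re.compile(
    r"(powershell|pwsh|mshta|cmd\s*/c|curl|bitsadmin|-enc|iex|downloadstring)",
    re.IGNORECASE,
)

RUNMRU = r"Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU"

def scan_runmru():
    """Yield Run-dialog entries that match suspicious command patterns."""
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUNMRU)
    except FileNotFoundError:
        return  # No Run history recorded for this user.
    with key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break  # No more values under the key.
            index += 1
            if name != "MRUList" and SUSPICIOUS.search(str(value)):
                yield name, value

if __name__ == "__main__":
    for name, value in scan_runmru():
        print(f"[!] Suspicious Run entry {name}: {value}")
```

This only covers the Run-dialog variant and the current user profile; fleet-wide hunting is normally done through EDR telemetry rather than a standalone script.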
Mimecast says that Clickfix rates surged by 500% in the first half of 2025, accounting for around 8% of all attacks. According to Hiwot Mendahun, Mimecast Threat Research Engineer, threat actors are adopting Clickfix as a means of initial access, and the company believes "it will continue to be used as a means to download infostealers, ransomware, remote access trojans (RATs), and custom malware."
"The use of RMM [Remote Monitoring and Management] tools to enable initial access in the same way is also a vector we continue to see an increase in, with campaigns really focusing on the social engineering aspect," Mendahun added.
New wave of AI-powered BEC scams
Artificial intelligence (AI) is also increasingly being adopted in phishing and Business Email Compromise (BEC) scams. While impersonating employees or high-profile executives in these scams is nothing new, AI is being employed in ways that make email chains look far more convincing, and not just for drafting the initial phishing email.
Mimecast says that AI is being used to generate full conversation chains that impersonate multiple people, including vendors, executives, and third parties. For example, during the reconnaissance phase, an attacker may find financial information and reports, HR data, and payroll information that could be used in AI-generated email threads.
AI is then used to fabricate a conversation between vendors, employees, and high-profile figures, typically with a sense of urgency -- such as a request to pay an invoice immediately. Recent BEC attack vectors focus on fake invoice payments, bank account detail changes, payroll updates, and wire transfers.
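Automated screening can still catch many of these threads before anyone replies. The sketch below is an illustrative heuristic rather than a Mimecast product feature: it flags two classic BEC tells, a familiar display name paired with an outside sending domain, and a Reply-To address that quietly diverges from the sender. The trusted domain and watched names are placeholder assumptions.

```python
# bec_header_check.py -- illustrative heuristic, not from the Mimecast report.
from email import message_from_string
from email.utils import parseaddr

TRUSTED_DOMAIN = "example.com"                 # placeholder: your organization's domain
WATCHED_NAMES = {"jane doe", "finance team"}   # placeholder: names attackers often spoof

def bec_indicators(raw_message: str) -> list[str]:
    """Return human-readable findings for two common BEC header anomalies."""
    msg = message_from_string(raw_message)
    findings = []

    display_name, from_addr = parseaddr(msg.get("From", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower() if "@" in from_addr else ""

    # Indicator 1: a watched display name sent from an untrusted domain.
    if display_name.strip().lower() in WATCHED_NAMES and from_domain != TRUSTED_DOMAIN:
        findings.append(f"Display name '{display_name}' used with outside domain '{from_domain}'")

    # Indicator 2: replies silently routed somewhere other than the sender.
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if reply_addr and reply_addr.lower() != from_addr.lower():
        findings.append(f"Reply-To '{reply_addr}' does not match From '{from_addr}'")

    return findings

if __name__ == "__main__":
    sample = (
        "From: Jane Doe <jane.doe@lookalike-example.net>\n"
        "Reply-To: payments@another-domain.biz\n"
        "Subject: Urgent: wire transfer needed today\n\n"
        "Please process the attached invoice immediately.\n"
    )
    for finding in bec_indicators(sample):
        print("[!]", finding)
```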
The team believes that as AI abuse ramps up with the use of deepfake voice and video content, these scams will become increasingly difficult to detect. And as AI tools are readily available, more cybercriminals will be able to enter the field.
"The use of AI in these campaigns specifically gives threat actors the ability to really mass-produce a more targeted thread using automation and potentially altering content to help bypass content-based detection," Mendahun said. "Outside of the automated emails, we do see the use of deep voice and videos in BEC campaigns, which enhance the success rate for large fraudulent transactions to be made."
The sectors most at risk of impersonation and social engineering attacks
According to Mimecast, education, IT, telecommunications, legal, and real estate organizations are most at risk of impersonation and social engineering-based attacks, "as these sectors often have direct access to high-value targets, handle sensitive financial transactions, and manage confidential client information."
Reducing the risk of a successful intrusion
To reduce the risk of a successful intrusion, consider the following:
- Educate employees on social engineering tactics and phishing attacks.
- Implement robust security measures, such as multi-factor authentication and encryption.
- Regularly update software and systems to ensure you have the latest security patches.
- Use anti-phishing tools and software that can detect and block malicious emails.
- Monitor network traffic and system logs for suspicious activity (see the sketch after this list).
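To make the last item concrete, here is a minimal sketch of what monitoring logs for suspicious activity can look like in practice. It scans a plain-text process-creation log for command lines resembling Clickfix follow-on activity, such as encoded or download-and-execute PowerShell. The file name and patterns are illustrative assumptions; in production this job usually belongs to a SIEM or EDR rather than an ad hoc script.

```python
# log_scan.py -- minimal illustration of the "monitor system logs" item above.
# The log path and patterns are illustrative assumptions, not report guidance.
import re
from pathlib import Path

SUSPICIOUS_PATTERNS = [
    r"powershell[^\n]*-enc(odedcommand)?\s",   # encoded PowerShell
    r"downloadstring|downloadfile",            # in-memory or staged downloads
    r"invoke-expression|\biex\b",              # executing fetched content
    r"mshta\s+https?://",                      # HTA-based droppers
]
SUSPICIOUS = re.compile("|".join(SUSPICIOUS_PATTERNS), re.IGNORECASE)

def scan_log(path: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that match a suspicious pattern."""
    hits = []
    for lineno, line in enumerate(Path(path).read_text(errors="ignore").splitlines(), 1):
        if SUSPICIOUS.search(line):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for lineno, line in scan_log("process_creation.log"):  # assumed log export
        print(f"[!] line {lineno}: {line}")
```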
Stay protected from AI-powered scams
To stay protected from AI-powered scams, keep an eye out for the following:
- Deepfake voice and video content in email attachments or links.
- Unsolicited emails that claim to be from a trusted source.
- Urgent requests for payment or sensitive information (see the sketch after this list).
- Suspicious pop-ups or alerts on your computer or mobile device.
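For the urgent-requests warning in particular, even a crude keyword filter can route risky messages to a human for a second look. The toy sketch below scores a message for urgency language combined with payment requests; the keyword lists are illustrative assumptions rather than anything taken from Mimecast's report.

```python
# urgency_filter.py -- toy illustration of the "urgent requests" warning above.
# Keyword lists are illustrative assumptions, not from the Mimecast report.
import re

URGENCY = re.compile(r"\b(urgent|immediately|asap|today|right away|overdue)\b", re.IGNORECASE)
PAYMENT = re.compile(r"\b(invoice|wire transfer|bank details|payroll|payment|gift cards?)\b", re.IGNORECASE)

def needs_review(subject: str, body: str) -> bool:
    """Flag messages that combine urgency language with a payment request."""
    text = f"{subject}\n{body}"
    return bool(URGENCY.search(text)) and bool(PAYMENT.search(text))

if __name__ == "__main__":
    print(needs_review(
        "Re: Invoice 2215",
        "Please process this wire transfer immediately; the vendor is waiting.",
    ))  # True: urgency language plus a payment request
```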