Security Blog
Hackers’ New Partner: Weaponized AI for Cyber Attacks! HKCERT Exposes Six Emerging AI-assisted Attacks
In recent years, artificial intelligence (AI) technology has advanced rapidly. Large language models (LLMs) and generative models have been widely applied in writing, reasoning, and generating images and videos. However, the Hong Kong Computer Emergency Response Team Coordination Centre (HKCERT) warns that hackers are also weaponizing AI for various cyberattacks, making defence much more difficult.
The latest analysis reveals six AI-assisted attack methods, urging businesses and the public to stay alert. The recent rise of Agentic AI has transformed AI from being just a chatbot into a more powerful tool capable of directly operating computer systems. This means that complex attacks that previously required team collaboration can now be executed by a single hacker commanding multiple AI agents.
According to a threat intelligence report released by AI company Anthropic in August 2025, Agentic AI has evolved from merely providing suggestions to becoming an active participant capable of executing attacks. Researchers have named this new type of attack “Vibe-hacking”, noting that a cybercrime group used it to carry out infiltration and data extortion attacks on more than a dozen organisations.
The group used AI to complete the entire process — from reconnaissance, infiltration, ransomware development, file theft, and content analysis to drafting ransom notes. The emergence of this type of attack further lowers the barrier for hackers to carry out cyberattacks. It is predicted that in the future, organisations will face more frequent and highly sophisticated attacks driven by AI under the direction of hackers.
Furthermore, since the rise of Agentic AI, major vendors have begun integrating this capability into browsers. Users can issue direct commands via a chat interface, such as booking a restaurant or buying daily necessities. The AI-powered browser will then carry out web searches and, based on the results, make decisions and perform actions such as completing purchases.
AI company Perplexity’s Agentic AI browser Comet was recently found to be vulnerable to a technique where hidden, invisible text embedded in a webpage could serve as instructions to the AI, indirectly injecting commands. Without the user's knowledge, the AI might perform extra actions — such as opening an email inbox to retrieve a verification code and uploading it to another site.
Since Agentic AI actions are considered equivalent to user actions, traditional cross-site attack protections cannot block them. Users' personal data may be silently and invisibly leaked while commands are being executed in the browser.
AI Cracking CAPTCHAs: A New Threat to Traditional Website Security
HKCERT recommends that organisations and users must remain vigilant and continue to enhance their security awareness to counter increasingly sophisticated attacks. While Agentic AI still requires additional security controls at the application level to prevent it from performing unauthorised actions.
However, as Agentic AI browsers are still an emerging technology, HKCERT recommends that when using Agentic AI browsers to handle sensitive data or transactions, users should review the operational steps involved, or avoid linking email accounts, personal information, or credit card details to the browser.
CAPTCHAs may consist of distorted alphanumeric characters or require selecting specific images from multiple pictures. In the past, hackers had to write their own algorithms to bypass them, which involved high execution costs. Now, with AI systems equipped with image analysis, cracking CAPTCHAs has become much easier.
This means that hackers can write programs to have AI help them automatically bypass traditional CAPTCHAs, quickly go ahead to the next stage of attack, and make the CAPTCHA’s function of protecting the website virtually useless. HKCERT recommends that for websites still using traditional CAPTCHAs, administrators should consider upgrading to interactive CAPTCHAs or behaviour-based verification to enhance security and reduce the risk of automated attacks.
DDoS Attacks in the AI Era: A New Level of Complexity
Hackers often search online for various login pages and attempt brute-force attacks to obtain credentials for infiltration. With AI help, parts of the process — from finding login pages to executing brute-force attacks — can be automated.
When multiple AI programs run simultaneously, they can scan dozens or even hundreds of websites at once. Earlier, a cybersecurity researcher had developed and released tools that could use AI to aid in penetration attacks. It is believed that hackers will soon follow suit and develop more efficient attack tools.
Like AI cracking CAPTCHAs, this reveals that traditional website security will face greater defensive challenges in the AI era. HKCERT recommends that website administrators should strengthen security checks, enforce strict password policies (such as multi-factor authentication), and regularly review system logs to analyse suspicious activities and patch vulnerabilities early to prevent exploitation.
DDoS was mainly a brute-force attack driven by overwhelming network traffic, preventing the target’s services from accepting requests and paralyzing the target network. In the AI era, however, attackers have more advanced tools, such as using AI to crack website CAPTCHAs and to automatically scan and attack web pages.
Agentic AI can also monitor attacks in real time, automatically switching to other weak link in response to defensive strategies like rate limiting. They can also mimic human user behaviour to bypass traditional defences, implying that conventional approaches may no longer suffice.
AI Ransomware: A New Threat to Data Security
Cybersecurity developers understand the idea of fighting fire with fire. They will train AI models with traffic data, knowledge of past attacks, and threat intelligence to analyze real-time network traffic, respond to attacks, and automatically adjust defensive strategies.
More importantly, AI models can continuously collect network traffic as reference data for fine-tuning, making the system more accurate over long-term operation. Recently, a university research team developed an AI ransomware as a prototype , naming it PromptLock..
This research suggests that in the future, hackers may no longer need to write ransomware separately for different platforms — instead, infected devices connect to a large language model in real time, using preset prompts to generate attack code on the spot and execute it.
The study demonstrates that the mode of operation of AI ransomware need no longer be confined to executing prewritten code. Instead, it may automatically customise itself based on the target organisation’s system architecture, network environment, and security controls, to maximise its destructive impact — significantly increasing both the difficulty of defense and the potential scale of damage.
The Hidden Dangers of Third-Party Risk: Lessons from the Recent Data Breaches
Ai-generated fake courier company webpage (Source: Proofpoint)
AI technology brings convenience to society, but it also provides new tools for cybercriminals. From cracking CAPTCHAs to automating penetration attacks, and even full-process Agentic AI attacks, threats are constantly evolving.
Humans alone can hardly keep up with the rapidly changing, complex attacks of the AI era; human expertise augmented by AI will be the emerging trend in countering hackers.
HKCERT has also introduced AI tools to assist in detecting phishing websites. In August 2025, HKCERT used AI to conduct 3.5 billion scans and, in total, discovered several suspicious websites.
This shows that businesses and users must enhance their security awareness and technology in parallel to counter the dangers posed by weaponised AI. The public should never download or run files or programs from unknown sources. The public should also install antivirus software or cybersecurity applications and update them regularly.