North Korean Hackers Use ChatGPT to Forge Deepfake ID
A suspected North Korean state-sponsored hacking group used ChatGPT to create a deepfake of a military ID document as part of a sophisticated phishing attack. The group, identified by cybersecurity researchers as Kimsuky, used the AI tool to craft a fake draft of a South Korean military identification card to make its phishing attempt appear more credible.
The phishing email was sent from an address ending in .mil.kr, impersonating a South Korean military domain, and linked to malware capable of extracting data from recipients' devices. Rather than attaching a genuine ID image, the attackers used ChatGPT to generate a realistic-looking forgery that lent the message an air of legitimacy.
The Role of Kimsuky
Kimsuky is a suspected North Korean state-sponsored cyber-espionage unit that has been linked to other spying efforts against South Korean targets. According to the US Department of Homeland Security, Kimsuky "is most likely tasked by the North Korean regime with a global intelligence-gathering mission." This latest attack highlights the group's sophistication and its willingness to use emerging technologies such as AI in its operations.
The Use of ChatGPT
Genians, a South Korean cybersecurity firm, discovered that Kimsuky used ChatGPT to create the deepfake ID document. While investigating the fake identification document, the researchers experimented with ChatGPT themselves and found that it initially refused to generate an ID. By rewording the prompt, however, they were able to bypass the restriction and produce a realistic-looking image.
The Impact of the Attack
Targets of the phishing campaign included South Korean journalists, researchers, and human rights activists focused on North Korea. It was not immediately clear how many victims were breached, though the attackers may have gained access to sensitive information.
The Trend of North Korean Hackers Using AI
This latest attack reflects a broader trend of suspected North Korean operatives deploying AI in their intelligence-gathering work. In recent months, researchers have found that North Korean hackers used Anthropic's Claude Code tool to get hired and work remotely for US Fortune 500 tech companies. OpenAI has also banned suspected North Korean accounts that used its service to create fraudulent résumés, cover letters, and social media posts.
The Future of AI in Cybercrime
The trend shows that attackers can leverage emerging AI throughout the hacking process, including attack scenario planning, malware development, tool building, and impersonation of job recruiters. As AI continues to evolve, more sophisticated cyberattacks using the technology are likely to follow.
Consequences for North Korea
American officials have alleged that North Korea is engaged in a long-running effort to use cyberattacks, cryptocurrency theft, and IT contractors to gather information on behalf of the government in Pyongyang. These tactics also generate funds that help the regime evade international sanctions and finance its nuclear weapons programs.