North Korean Hackers Used ChatGPT to Help Forge Deepfake ID
A suspected North Korean state-sponsored hacking group has been caught using the artificial intelligence chatbot ChatGPT to create a deepfake of a military ID document, according to cybersecurity researchers. The attack, carried out by a group dubbed Kimsuky, targeted South Korea and used the chatbot to craft a fake draft of a South Korean military identification card.
Instead of including a real image of the ID, the email linked to malware capable of extracting data from recipients' devices, according to research published Sunday by Genians, a South Korean cybersecurity firm. Kimsuky, a suspected North Korea-sponsored cyber-espionage unit, has previously been linked to other spying efforts against South Korean targets.
The US Department of Homeland Security said in a 2020 advisory that Kimsuky "is most likely tasked by the North Korean regime with a global intelligence-gathering mission." This is not the first time suspected North Korean operatives have been caught using AI as part of their intelligence-gathering work. In August, Anthropic reported that North Korean hackers had used its Claude Code tool to get hired and work remotely for US Fortune 500 tech companies.
The hackers used the tool to build elaborate fake identities, pass coding assessments, and deliver technical work once hired. OpenAI representatives did not immediately respond to a request for comment. In February, the company said it had banned suspected North Korean accounts that used its service to create fraudulent résumés, cover letters, and social media posts aimed at recruiting people to aid their schemes.
The trend shows that attackers can leverage emerging AI throughout the hacking process, including attack-scenario planning, malware development, tool building, and impersonating job recruiters, said Mun Chong-hyun, a director at Genians. Phishing targets in this latest campaign included South Korean journalists and researchers, as well as human rights activists focused on North Korea.
The phishing email was sent from an address ending in .mli.kr, mimicking South Korea's military domain, .mil.kr. Exactly how many victims were breached wasn't immediately clear. Genians researchers experimented with ChatGPT while investigating the fake identification document. Because reproducing government IDs is illegal in South Korea, ChatGPT initially refused the request to create one, but altering the prompt allowed the researchers to bypass the restriction.
American officials have alleged that North Korea runs a long-running campaign of cyberattacks, cryptocurrency theft, and fraudulent IT contracting to gather intelligence for the government in Pyongyang. Those tactics also generate funds that help the regime evade international sanctions and develop its nuclear weapons programs, according to the US government.
Photograph: The OpenAI virtual assistant logo on a laptop computer in Riga, Latvia, on August 16, 2024. Andrey Rudakov/Bloomberg