# This North Korean Phishing Attack Used ChatGPT's Image Generation
Cybersecurity researchers are warning of a new tactic from Kimsuky, North Korea's notorious state-sponsored hacking group. According to a recent report, the group leveraged the image generation capabilities of OpenAI's ChatGPT to support a sophisticated phishing attack.
## The Phishing Scam
In July, security vendor Genians uncovered a suspicious email campaign aimed at a South Korean defense-related institution. The emails were crafted to appear as if they came from an authentic military official, complete with a .zip attachment whose filename contained the recipient's real name, albeit partially masked. This detail was designed to lend credibility to the email and pique the interest of potential targets.
The true intention behind the email was far more insidious. The .zip file contained a malicious shortcut that tricked the recipient into manually running a PowerShell command on their PC. That command silently connected the computer to a hacker-controlled server, which then downloaded and installed malware capable of acting as a backdoor.
That wasn't all: the same phishing email also included a fake government military ID image, designed to convince victims that nothing unusual had occurred. According to Genians' investigation, this image was generated with OpenAI's older GPT-4o model; its metadata referenced "GPT-4o*OpenAI API" and ChatGPT.
## The AI Jailbreak
Genians concluded that the Kimsuky group likely used a workaround, or "jailbreak," to bypass OpenAI's safeguards, which are meant to stop its chatbots from generating images of government IDs. The hackers appear to have framed their requests as mock-up designs for legitimate purposes, essentially turning the chatbot's own tools against it.
## The Consequences
This brazen attack highlights the ongoing cat-and-mouse game between cyber threat actors and tech giants like OpenAI. While OpenAI has implemented various safeguards to prevent state-sponsored hackers from exploiting its technology, the Kimsuky group seems to have found a way to circumvent these measures.
As North Korea continues to explore AI technologies, including real-time deepfakes, for malicious purposes, experts warn that even seemingly innocuous tools can be turned against us in the hands of nefarious actors.
## The Verdict
The use of ChatGPT's image generation in this phishing attack is a sobering reminder of the evolving threat landscape. Defenders will need to stay vigilant and adapt to these emerging tactics, and OpenAI must remain proactive in closing the loopholes that make its technology attractive to state-sponsored attackers.
**Disclosure:** Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging infringement on copyrights related to training and operating AI systems.