Microsoft Disrupts Global Cybercrime Ring Abusing Azure OpenAI Service
In a significant victory for law enforcement and cybersecurity efforts, Microsoft has disrupted a global cybercrime ring that was abusing its Azure OpenAI service to create harmful content. The company exposed four individuals behind an elaborate scheme using unauthorized access to generate illicit synthetic imagery.
The masterminds behind the operation were Arian Yadegarnia aka "Fiz" of Iran, Alan Krysiak aka "Drago" of the United Kingdom, Ricky Yuen aka "cg-dot" of Hong Kong, China, and Phát Phùng Tấn aka "Asakuri" of Vietnam. These individuals are members of a global cybercrime ring tracked as Storm-2139 by Microsoft.
The defendants exploited customer credentials scraped from publicly available sources to unlawfully access certain generative AI services, including Microsoft's Azure OpenAI Service. They then modified the AI services, resold access, and provided guides for generating non-consensual intimate images of celebrities. This activity was prohibited under the terms of use for Microsoft's generative AI services and required deliberate efforts to bypass the company's safeguards.
In December 2024, Microsoft's Digital Crimes Unit (DCU) filed a lawsuit in the Eastern District of Virginia against 10 unidentified individuals and began investigating the operation. The researchers identified three main roles within the group: creators, providers, and users. Creators developed the tools used to abuse AI services, providers modified and supplied those tools to end-users, and users then employed them to generate illicit synthetic content.
Microsoft's legal action disrupted the group's operations by seizing key infrastructure, which sparked internal conflict and even prompted doxing attempts against Microsoft's counsel. Members speculated on one another's identities, traded blame, and leaked information, highlighting the lawsuit's impact. Emails from suspected actors attempting to shift blame further demonstrated the extent of their involvement.
The case demonstrates the power of legal action in dismantling cybercrime networks. Microsoft takes the misuse of AI technology seriously, recognizing the serious and lasting harm abusive imagery inflicts on victims.
"Going after malicious actors requires persistence and ongoing vigilance," concludes the announcement. "By unmasking these individuals and highlighting their malicious activities, Microsoft aims to set a precedent in the fight against AI technology misuse." As Microsoft continues to take proactive measures to prevent such abuses, it serves as a reminder of the importance of cybersecurity awareness and responsible AI use.