**Meta Disrupts Covert Operations Spreading Propaganda from Iran, China, and Romania**

In a significant move to combat misinformation and propaganda on its platforms, Meta has announced the disruption of three covert influence operations originating from China, Iran, and Romania. The social media giant said it removed networks of fake accounts that were used to spread propaganda and manipulate public discourse on Facebook, Instagram, and other platforms.

The first operation was run by a China-based network that targeted Myanmar, Taiwan, and Japan. Meta reported that the network used fake accounts, some with AI-generated photos, to post content in local languages. Its propaganda included support for Myanmar's junta, criticism of Japan's ties with the U.S., and corruption allegations in Taiwan. The operation had gained approximately 7,800 Page followers, 25 Group members, and 700 Instagram followers before it was disrupted.

The Iranian operation, by contrast, targeted Azeri-speaking audiences and relied on accounts posing as female journalists and pro-Palestine activists across Meta's platforms, YouTube, X, and standalone websites. The accounts used hashtags like #palestine and #gaza to push spammy content across multiple platforms, and they tried to appear credible by posting about global events alongside anti-U.S. rhetoric. Meta noted that its automated systems had already disabled some of the fake accounts.

The Iranian operation attracted 44,000 Page followers and 63,000 Instagram followers, and spent about $70 on ads before it was removed. Unlike previously observed Iranian operations, this network offered little credible content, focusing instead on spreading misinformation.

**Romania-Based Operation Targeted Users in Romania**

The third operation originated in Romania. Meta removed 658 Facebook accounts, 14 Pages, and 2 Instagram accounts for violating its coordinated inauthentic behavior policy. The fake accounts posed as locals posting about sports, travel, and local news to appear credible across various platforms, and they also engaged with posts by politicians and news outlets.

The campaign gained 18,300 Page followers on Facebook and 40 on Instagram, and spent around $177,000 on ads before being disrupted. Meta reported that its automated systems had detected and disabled some of the fake accounts.

**Threat Indicators Revealed**

Meta has published an Adversarial Threat Report detailing the threat indicators for these campaigns. The report is part of a broader effort to collate and organize threat data according to the Online Operations Kill Chain framework, which Meta uses to analyze many different types of malicious online operations. By sharing this information with the open-source community, Meta aims to enable further research into related activity across the web.

The report includes threat indicators for each campaign, providing valuable insights into the tactics used by these actors to spread propaganda and manipulate discourse on social media platforms.
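For researchers who want to build on these disclosures, one practical use of the published indicators is to cross-reference them against their own telemetry. The sketch below is a minimal, hypothetical Python example: it assumes the indicators have been extracted into a simple CSV with `indicator_type` and `indicator_value` columns (an illustrative layout, not a format defined by Meta's report) and checks a local list of observed domains and account handles for overlaps.

```python
# Hypothetical sketch: cross-referencing published threat indicators
# against a local dataset of observed domains and account handles.
# The CSV layout (indicator_type, indicator_value) is an assumption for
# illustration only; Meta's report does not prescribe this format.
import csv
from collections import defaultdict


def load_indicators(path: str) -> dict[str, set[str]]:
    """Group indicator values (domains, URLs, account handles) by type."""
    indicators = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            indicators[row["indicator_type"]].add(row["indicator_value"].lower())
    return indicators


def match_observations(indicators: dict[str, set[str]], observed: list[dict]) -> list[dict]:
    """Return observations whose value appears in the shared indicator set."""
    hits = []
    for item in observed:
        if item["value"].lower() in indicators.get(item["type"], set()):
            hits.append(item)
    return hits


if __name__ == "__main__":
    iocs = load_indicators("meta_threat_indicators.csv")  # hypothetical export
    observed = [
        {"type": "domain", "value": "example-news-site.com"},   # placeholder data
        {"type": "account", "value": "fake_journalist_01"},     # placeholder data
    ]
    for hit in match_observations(iocs, observed):
        print(f"Possible overlap with a reported operation: {hit['type']} {hit['value']}")
```

This kind of simple overlap check is how shared indicators are typically operationalized: matches are not proof of coordination on their own, but they flag accounts and infrastructure worth closer investigation.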

**A Growing Concern for Social Media**

These disruptions highlight the growing challenge social media platforms face in curbing the spread of propaganda and misinformation. As Meta continues to work toward a safer online environment, its efforts show how companies are taking steps to protect users from malicious actors.

Stay informed about the latest developments in cybersecurity by following me on Twitter: @securityaffairs