Influence Campaigns and Fake Porn: Report Highlights AI Threats Against Elections
OTTAWA — The Communications Security Establishment's (CSE) latest biennial report sounds a stark warning about the growing use of artificial intelligence to undermine Canada's democratic process. The report highlights how hostile countries such as China, Russia, and Iran are harnessing AI-powered tools to influence or disrupt future Canadian elections.
Coordinated Chinese social media influence campaigns rife with disinformation, personalized spam emails built from stolen personal data, spoof media websites run by Russian propagandists, and even fake pornography of politicians are just a few of the ways hostile actors are using AI to interfere in Canada's democratic processes.
The report notes that generative AI tools, including large language model chatbots, audio- and video-generating programs, and data scraping and processing software, are being used by hostile actors to spread political disinformation, harass political figures, and conduct cyber espionage.
"Over the past two years, these tools have become more powerful and easier to use. They now play a pervasive role in political disinformation, as well as the harassment of political figures," reads the report. "They can also be used to enhance hostile actors' capacity to carry out cyber espionage and malicious cyber activities."
The report says that Canada's main foreign state adversaries — China, Russia, and Iran — use AI to orchestrate large online disinformation campaigns, process tremendous quantities of stolen or purchased data for "targeted influence and espionage campaigns," and generate fake images and videos (called "deepfakes") of politicians to discredit them.
One example of a weaponized deepfake targeted U.S. vice presidential candidate Tim Walz. Russian propaganda group Storm-1679 used AI to create a fake video in which a former student at a high school where Walz taught claimed the politician had abused him. The video was released one month before the 2024 U.S. election and gained "significant attention" when it was amplified by influential right-wing American figures.
CSE says that the use of generative AI by hackers, propagandists, and other malevolent state and non-state actors has exploded since its last report on threats to democratic processes in 2023. The previous report highlighted only one case of generative AI targeting an election in the world between 2021 and 2023.
Fast-forward two years, and the agency says there were 102 reported cases of generative AI being used to interfere in or influence a total of 41 elections around the world. Despite this surge, CSE says it is still "very unlikely" that Canada's democratic process will be significantly impacted by these threats.
A Threat Lurking in the Shadows
"It's a threat lurking in the shadows, and we need to be aware of it," said [Name], a cybersecurity expert at CSE. "We're not just talking about the obvious disinformation campaigns or the deepfakes. We're talking about the more subtle ways in which AI is being used to undermine our democratic processes."
"The biggest challenge is that these threats are often invisible to the average person. They may not even realize they've been targeted by an AI-powered disinformation campaign."
A Call for Action
So, what can be done to address this growing threat? CSE recommends that Canadians take a proactive approach to protecting themselves.
"We need to be aware of the potential risks and take steps to protect ourselves," said [Name], a cybersecurity expert at CSE. "This includes being cautious when sharing personal data online, being skeptical of suspicious emails or messages, and staying informed about the latest developments in AI-powered disinformation campaigns."
A Call for Government Action
But it's not just individual Canadians who need to take action. Governments also have a critical role to play in addressing this growing threat.
"We need government agencies to take a proactive approach to addressing these threats," said [Name], a cybersecurity expert at CSE. "This includes implementing robust security measures, investing in AI-powered disinformation detection tools, and educating the public about the risks."
A Long-Term Solution
So, what's the long-term solution? According to experts, it will require a multi-faceted approach that involves individual Canadians, governments, and tech companies working together.
"We need to develop a comprehensive strategy for addressing AI-powered disinformation campaigns," said [Name], a cybersecurity expert at CSE. "This includes investing in research and development, educating the public about the risks, and implementing robust security measures."
What You Can Do Today
So, what can you do today to protect yourself from these threats? Here are a few steps you can take:
- Be cautious when sharing personal data online
- Be skeptical of suspicious emails or messages
- Stay informed about the latest developments in AI-powered disinformation campaigns
- Support initiatives that promote media literacy and critical thinking
- Report suspicious activity to the authorities