# The AI Threat: What Australians Fear Most About AI-Related Crimes
A recent report by the Australian Institute of Criminology has shed light on the growing concerns of Australians regarding AI-related crimes. The study found that more than half of Australian adults are worried about AI being used to cause them harm, and almost as many fear falling victim to a crime enabled by the technology. In this article, we'll delve into the key findings of the report and explore what Australians fear most about AI-related crimes.
The study, which surveyed over 16,000 Australians, revealed that the majority of respondents had concerns about platforms that used AI to track their location, access their devices or accounts, or impersonate or deceive them. AI-generated deepfake content was also a strong concern for more than three in ten people. The report's findings suggest that Australians are becoming increasingly aware of the potential risks associated with AI and its widespread use.
The study found that almost three in four Australians used at least one AI app in the past 12 months, and most adults used at least three platforms in that time. This suggests that AI is deeply integrated into everyday life for many Australians. However, despite this familiarity, concerns about AI-related crimes persist. The report highlights the need for awareness and education about AI misuse and its potential consequences.
The Australian Institute of Criminology framed its research around two questions: how common respondents believed specific "illegal or unethical" misuses of AI were in Australia, and how worried they were about themselves or a family member falling victim to those misuses. Respondents rated AI location tracking tools that monitor a person's whereabouts and behaviour as the most common misuse in Australia, followed by AI-generated deepfake videos of public figures speaking or acting badly, created to manipulate public opinion.
The study also found that many Australians were concerned about AI being used to impersonate someone in order to trick others into handing over money or information. Some 41% of people said they believed AI impersonations for the purpose of revenge pornography were common, and almost 30% believed people frequently used AI to pose as a child in order to groom another child.
Australians who frequently used three or more AI apps tended to report less concern about the platforms being used to cause harm. Even so, concerns about AI-related crimes persisted across the board, and the report emphasized that awareness and education are essential in addressing them.
Dr Andrew Childs, a criminology lecturer at Griffith University, noted that AI was rapidly becoming normalized in the everyday lives of ordinary Australians. "People are using it as part of productivity tools in their work, image generation for fun, helping them plan their personal lives and learn new skills," he said.
Abhinav Dhall, an associate professor in data science at Monash University, attributed the increasing use of AI to corporate investment and marketing efforts that highlighted its potential benefits. "There is a lot of marketing and communication happening regarding what these new tools could do," he said. The barrier to entry was low, making it easy for users to try out AI-powered tools.
The report's findings have significant implications for the Australian cybersecurity landscape. Although over 80% of respondents reported a moderate to very high level of knowledge and ability with digital technologies, the rise of malicious AI tools and their potential misuse still raises concerns about the spread of harmful misinformation.
The study also revealed divides along age, gender, and parental status in Australians' concerns about AI. Australian men were more likely than women to believe that AI was commonly misused, and to say they would likely fall victim to an AI-related crime. Respondents aged 35 to 49 and those aged 50 and over were the most likely to believe AI was commonly misused.
The report concluded that Australians are increasingly using AI for entertainment, productivity, and everyday assistance, yet concerns about its misuse and potential consequences persist. As AI continues to integrate into daily life, the Australian Institute of Criminology's findings underline the need for awareness and education efforts, tailored to demographics and user behaviour, to mitigate the risks associated with AI-related crimes.