Fighting AI with AI: Finance Firms Prevented $5 Million in Fraud - But at What Cost?
Artificial intelligence (AI) is often associated with unprecedented productivity gains, enhanced creativity, and even superintelligence. However, for a small but consequential segment of the population - scammers and fraudsters - AI has become an equally powerful tool for swindling others out of their hard-earned money.
In recent years, the proliferation of advanced generative AI tools has made it easier for these nefarious individuals to carry out their schemes. A notable example is a finance employee at a firm based in Hong Kong, who wired $25 million to fraudsters after being instructed to do so on a video call with what he believed to be company executives - only to discover later that they were AI-generated deepfakes.
Another recent incident saw an unknown party use AI to imitate the voice of US Secretary of State Marco Rubio on calls that went out to a handful of government officials, including a member of Congress. Such cases highlight the need for financial services companies to not only detect but also prevent such fraudulent activities.
A Turning Point: AI-Powered Fraud Prevention
Counterintuitively, AI is also being deployed by financial services companies to prevent fraud. In a recent survey conducted by Mastercard and Financial Times Longitude, 42% of issuers and 26% of acquirers reported that AI tools have saved them more than $5 million in attempted fraud over the past two years.
Most of these organizations have begun using AI-powered techniques to enhance their digital security in conjunction with traditional methods like two-factor authentication and end-to-end encryption. The most commonly cited technique was anomaly detection - an automated alarm that flags unusual requests. Other use-cases included scanning for vulnerabilities in cybersecurity systems, predictive threat modeling, "ethical hacking" (another form of searching for system vulnerabilities), and employee upskilling.
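Of these techniques, anomaly detection is the easiest to illustrate. As a rough, hypothetical sketch (not the method any surveyed firm actually uses), the core idea reduces to flagging a transaction that deviates sharply from an account's historical pattern - here using a simple z-score over past transaction amounts:

```python
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Flag `amount` if it lies more than `threshold` standard
    deviations from the mean of the account's past amounts."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # No variation in history: anything different is unusual.
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# Hypothetical account history (transaction amounts in dollars).
history = [120.0, 95.5, 130.0, 110.25, 101.0, 125.75]

is_anomalous(history, 118.0)    # within the usual range -> False
is_anomalous(history, 25000.0)  # far outside it -> True
```

Production systems use far richer features (merchant, location, timing, device fingerprint) and learned models rather than a single statistic, but the principle - an automated alarm triggered by requests that do not fit the established pattern - is the same.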
The survey also found that the vast majority of respondents (83%) agreed that AI has significantly reduced the time needed for fraud investigation and resolution. Moreover, 90% expect their financial losses to grow unless they expand their use of AI for fraud prevention in the coming years.
Barriers to Adoption
Despite these promising findings, several barriers keep financial services companies from adopting fraud-preventing AI tools at scale. Chief among them is technical complexity: integrating new AI systems with the software and data already deployed across an organization.
Another significant concern is the rapid pace at which fraud tactics themselves evolve - many fear they will quickly outpace any AI-powered defense. Staying ahead of these threats will require financial services companies to invest continually in their AI-powered security measures.
In conclusion, while AI has become an effective tool for scammers and fraudsters, its adoption by financial services companies can help prevent significant losses. However, the technical complexities and evolving nature of fraud tactics pose significant challenges that need to be addressed.