Deepfake detectors are slowly coming of age, at a time of dire need

DEF CON While AI was on everyone's lips at this week's trio of security conferences in Sin City – BSides, Black Hat, and DEF CON – plenty of people were using the F-word too: fraud. The plummeting cost of using AI, the increasing sophistication of deepfakes, and the normalization of electronic communications mean we're likely facing a massive amount of machine-learning mayhem.

Deloitte estimates deepfake fraud will cost the US up to $40 billion by 2027, but everyone we've spoken to thinks that's an underestimate. Sam Altman's claim last month that "AI has fully defeated most of the ways that people authenticate currently, other than passwords" has ruffled some feathers in the security industry, where various vendors sell software they claim can still spot the fakes.

But others are more cautious about its capabilities. Karthik Tadinada, who spent over a decade monitoring fraud for the UK's biggest banks while at Featurespace, said the deepfake detection technology he has encountered manages about a 90 percent accuracy rate, both at spotting crime and at eliminating false positives.

"The economics of people generating these things versus what you can detect and deal with, well actually that 10 percent is still big enough for profit," said Tadinada, who notes the costs of generating ID are only going to fall further. Video impersonation predates AI, and Tadinada recounted cases where security teams had spotted fakers in high-quality silicone masks, but said that machine learning has turbocharged this.

He and fellow speaker Martyn Higson, also ex-Featurespace, demonstrated how easily the face of British Prime Minister Keir Starmer could be overlaid onto Higson's body, along with a pretty good mimicry of his voice, all using just a MacBook Pro. It wasn't good enough to fool detection technology – AI tends to puff out the jowls and stiffen the appearance of human faces – but it would certainly be good enough for propaganda or misinformation.

This was demonstrated this week when journalist Chris Cuomo posted a deepfake video of US Representative Alexandria Ocasio-Cortez (D-NY) apparently accusing actress Sydney Sweeney of "Nazi propaganda," before pulling it and apologizing.

Mike Raggo, red team leader at media monitoring biz Silent Signals, agreed that the quality of video fakes has improved drastically.

But new techniques going mainstream might make such fakes easier to detect, he said – and he has skin in the game. Silent Signals developed a free Python-based tool, dubbed Fake Image Forensic Examiner v1.1, for the launch of OpenAI's GPT-5 last week. It takes an uploaded video and samples frames one at a time to look for signs of manipulation, such as blurring around the edges of objects, comparing the first, middle, and last frames for background anomalies.
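For a flavor of how that kind of frame sampling works, here's a minimal Python sketch – not Silent Signals' actual code – using OpenCV, a hypothetical input file, and purely illustrative heuristics:

```python
# A toy version of the frame-sampling approach described above.
# Assumes OpenCV (pip install opencv-python); "suspect.mp4" is hypothetical.
import cv2
import numpy as np

def grab_frame(cap, index):
    """Seek to a frame index and return it as a grayscale image."""
    cap.set(cv2.CAP_PROP_POS_FRAMES, index)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"could not read frame {index}")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def edge_sharpness(gray):
    """Variance of the Laplacian: unusually low values can indicate
    the telltale blurring around the edges of manipulated objects."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture("suspect.mp4")  # hypothetical input file
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
samples = {name: grab_frame(cap, idx) for name, idx in
           [("first", 0), ("middle", total // 2), ("last", total - 1)]}
cap.release()

# Per-frame edge sharpness: a sudden drop in one sample is suspicious.
for name, gray in samples.items():
    print(f"{name:6s} frame edge sharpness: {edge_sharpness(gray):.1f}")

# Background consistency: mean absolute pixel difference between the
# first and last frames. A mostly static scene should drift very little.
drift = np.abs(samples["first"].astype(int) - samples["last"].astype(int))
print(f"first/last background drift: {drift.mean():.1f}")
```

A production tool would sample far more frames and lean on trained models rather than single-number heuristics, but the skeleton – seek, sample, compare – is the same.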

In addition, examining the metadata is key. Video manipulation tools, both commercial and open source, typically leave telltale traces in the metadata, and a good detection engine must be able to search for them.
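Again purely as illustration, a metadata check along those lines might shell out to FFmpeg's ffprobe, which dumps container and stream tags as JSON. The list of telltale strings below is invented for the example, not a vetted signature set:

```python
# A minimal metadata-inspection sketch. Assumes ffprobe (part of FFmpeg)
# is on the PATH; SUSPICIOUS is an illustrative list, not real signatures.
import json
import subprocess

SUSPICIOUS = ["lavf", "premiere", "after effects", "capcut"]

def probe_metadata(path):
    """Dump container and stream metadata as JSON via ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)

def flag_traces(meta):
    """Return tag values that contain a known editing-tool string."""
    hits = []
    for section in [meta.get("format", {})] + meta.get("streams", []):
        for key, value in section.get("tags", {}).items():
            if any(s in str(value).lower() for s in SUSPICIOUS):
                hits.append(f"{key}={value}")
    return hits

for hit in flag_traces(probe_metadata("suspect.mp4")):  # hypothetical file
    print("possible manipulation trace:", hit)
```

Real detection engines go deeper – encoder version strings, re-encode artifacts, missing camera tags – but even a crude pass like this catches tools careless enough to sign their work.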