There have been so many deepfake videos produced recently that they’re all starting to blend together—not just the people in the videos, but the videos themselves. And while watching deepfake technology get better and better is a source of ample consternation, the good news is that there are AI tools that can detect these fakes, like the one in the video below. This kind of tech had better catch on fast, though, because, according to these data, determining what’s real and what’s fake online is no longer the province of lowly humans.
While you’ve probably seen plenty of deepfake videos (like this one of Tom Cruise’s face on Bill Hader’s or this one of Nick Offerman’s face on every one of Full House’s characters), there isn’t usually much talk about how to defend against this reality-distorting trend. But the technology to detect deepfakes has, thankfully, been advancing alongside the ability to make them, as Károly Zsolnai-Fehér discusses.
Zsolnai-Fehér makes a few critical points in the video as he demonstrates how the technology works, including how compression can hide deepfake artifacts in videos uploaded online (artifacts, in this context, are errors in a video that betray the fact that it’s fake rather than real), as well as how bad people are at identifying those artifacts.
In fact, the data from the paper Zsolnai-Fehér discusses in the clip—which is titled “FaceForensics++: Learning to Detect Manipulated Facial Images” and is available to read here—show that only about 69% of people asked to spot deepfakes in raw video could do so, and only about 59% could when the videos were low quality.
Two Minute Papers
The AI engineered for this study can distinguish between what’s a deepfake and what isn’t in this example. Can you?
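To get a feel for why low-quality video trips up both humans and detectors, here is a purely illustrative Python sketch (not from the FaceForensics++ paper, and not how the actual AI works): lossy compression is approximated by coarse quantization, and a deepfake’s blending artifact by a tiny high-frequency perturbation on a smooth signal. Quantization wipes out the perturbation, leaving far less of a trace for anyone to detect.

```python
# Purely illustrative sketch (not from the FaceForensics++ paper): lossy
# compression is approximated by coarse quantization, and a deepfake's
# blending artifact by a small high-frequency perturbation on a smooth signal.

def quantize(signal, step):
    """Crude stand-in for lossy compression: snap values to a coarse grid."""
    return [round(x / step) * step for x in signal]

def artifact_energy(a, b):
    """Total absolute difference between two signals -- the trace a
    detector (human or AI) could pick up on."""
    return sum(abs(x - y) for x, y in zip(a, b))

# A smooth "real" signal, and a "fake" one carrying a tiny alternating artifact.
real = [i / 10 for i in range(100)]
fake = [x + (0.04 if i % 2 else -0.04) for i, x in enumerate(real)]

raw_gap = artifact_energy(real, fake)
compressed_gap = artifact_energy(quantize(real, 0.5), quantize(fake, 0.5))

print(f"artifact energy in raw signal:       {raw_gap:.2f}")
print(f"artifact energy after 'compression': {compressed_gap:.2f}")
```

In this toy setup the artifact is plainly measurable in the raw signal but vanishes entirely after coarse quantization, which loosely mirrors why detection rates drop on low-quality uploads.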
And while six or seven out of 10 people spotting a fake may not sound like a horrible track record, fooling 30 to 40% of viewers into thinking something is real when it’s not is undoubtedly a huge problem. Plus, deepfake technology is only getting better.
The AI used to detect deepfakes, on the other hand, could identify them in raw video over 99% of the time, and 81% of the time in low-quality videos. With a discrepancy like that, it’s no wonder Zsolnai-Fehér ends the video by saying that “now it is of utmost importance that we let the people know about the existence of these [AI detection] techniques.” If this kind of technology doesn’t become as ubiquitous (and as well trained on data) as deepfakes themselves, AI may not be able to save us from itself.
What do you think of this paper and this video? And who do you think will win the arms race between AI that creates deepfakes and AI that detects them? Let us know your thoughts in the comments!
Featured Image: Two Minute Papers