Using Algorithms to Guard Against Deepfakes

The rise of deepfakes has shaken one of the most fundamental ways we distinguish fact from fiction: seeing things happen with our very own eyes. Not too long ago, if you watched something happen you could be more or less sure it was real, but that really isn’t the case anymore. With deepfakes, you could be looking at something like a video of Donald Trump talking that looks perfectly real to the human eye, even though it was made by algorithms designed to mimic human facial expressions, movements and speech.

If you are worried about how deepfakes can impact the world, a concern that can be quite serious if you check out this video of Barack Obama that is completely fake but looks startlingly realistic, it helps to understand how the algorithm works. Essentially, you feed as many images and videos of the subject as possible into the program, and the algorithm learns to generate videos of its own by compositing the various facial expressions it has detected. The program then uses recordings of the person speaking to create a synthetic version of their voice that, once again, sounds very real indeed.
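To make that a little more concrete, here is a rough sketch, in PyTorch, of the shared-encoder, two-decoder setup that many face-swap deepfakes are built on. The layer sizes, image resolution and the random tensors standing in for real face crops are illustrative assumptions, not any particular tool’s implementation.

```python
# Rough sketch of the shared-encoder / per-identity-decoder idea behind many
# face-swap deepfakes. Layer sizes and the random tensors used as stand-ins
# for aligned face crops are illustrative assumptions only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
loss_fn = nn.L1Loss()
optim = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)

# Stand-ins for batches of aligned 64x64 face crops of person A and person B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(5):  # a real training run takes many thousands of steps
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    optim.zero_grad()
    loss.backward()
    optim.step()

# The "swap": encode a face of person A, then render it with person B's decoder,
# so B's face takes on A's expression and pose.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The key trick is that one encoder learns expressions and pose common to both people, while each decoder learns to paint only its own person’s face, which is why decoding A’s expression with B’s decoder produces the swap.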

One way researchers are attempting to guard against the malicious use of deepfakes is by figuring out the flaws in the algorithm. While deepfake videos do look real, they are still made by a computer program, and no matter how well that program does its job, the technology just isn’t advanced enough to be truly flawless. Understanding those flaws could be the key to figuring out which videos are deepfakes and which are real.


One of the initial techniques was analyzing how the subject of a video blinked. Early deepfake faces blinked noticeably less often than real people do, so an unusually low blink rate was a reliable tell. However, the newest iteration of the technology has corrected for this error, so you can no longer rely on how often the person in a video blinks to decide whether they are real or created using deepfake technology.
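To give a sense of what that check looked like in practice, here is a sketch of a blink counter built around the eye aspect ratio, a measure commonly used for this kind of analysis. It assumes dlib’s 68-point landmark model; the model file path, the video path and the threshold value are placeholders rather than settings from any specific research tool.

```python
# Sketch of the blink-rate check: compute the eye aspect ratio (EAR) per frame
# and count how often it dips below a threshold. The landmark indices follow
# dlib's 68-point model; file paths and the threshold are assumptions.
import cv2
import dlib
import numpy as np

EAR_THRESHOLD = 0.21                 # below this, the eye is treated as closed (tunable)
RIGHT_EYE = list(range(36, 42))      # landmark indices in the 68-point scheme
LEFT_EYE = list(range(42, 48))

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local file

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops toward zero when the eye closes."""
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(video_path):
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)], dtype=float)
            ear = (eye_aspect_ratio(pts[LEFT_EYE]) + eye_aspect_ratio(pts[RIGHT_EYE])) / 2.0
            if ear < EAR_THRESHOLD and not eye_closed:
                blinks += 1          # a new dip below the threshold counts as one blink
                eye_closed = True
            elif ear >= EAR_THRESHOLD:
                eye_closed = False
    cap.release()
    return blinks, frames

blinks, frames = count_blinks("suspect_clip.mp4")   # hypothetical input video
print(f"{blinks} blinks over {frames} frames")       # an unusually low rate was the red flag
```

The idea was simply that a real person blinks every few seconds, so a clip several minutes long with almost no dips in the eye aspect ratio deserved suspicion; as noted above, newer deepfakes now blink convincingly enough that this signal alone is no longer sufficient.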

Researchers are still developing other techniques, and as deepfake technology progresses, so too will the technology used to keep it in check. One current line of work looks into the misalignment of certain facial features in deepfakes, with the aim of building algorithms that make it far less likely that people will be fooled.
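What a misalignment check might look like is easiest to show with a small, purely illustrative sketch rather than any specific published detector: align each frame’s facial landmarks to a reference face and compare how well the inner features fit versus the face outline, since a swapped inner face often sits slightly inconsistently inside the original head. Everything here, including the stand-in landmark data, is an assumption for illustration.

```python
# Illustrative "feature misalignment" check (not a specific published method):
# align each frame's 68-point landmarks to a canonical reference face with a
# similarity transform, then compare the residual error of the inner features
# (brows, eyes, nose, mouth) against the face outline.
import numpy as np

OUTLINE = list(range(0, 17))     # jawline landmarks in the 68-point scheme
INNER = list(range(17, 68))      # brows, eyes, nose, mouth

def similarity_align(src, dst):
    """Map src onto dst with a least-squares similarity transform (Procrustes)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    scale = np.linalg.norm(dst_c) / np.linalg.norm(src_c)   # rough scale estimate
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    rot = u @ vt
    if np.linalg.det(rot) < 0:                               # avoid a reflection
        vt[-1] *= -1
        rot = u @ vt
    return scale * src_c @ rot + dst.mean(0)

def misalignment_score(frames_landmarks, reference):
    """Average ratio of inner-feature residual to outline residual across frames."""
    ratios = []
    for pts in frames_landmarks:
        aligned = similarity_align(reference, pts)
        inner_err = np.linalg.norm(aligned[INNER] - pts[INNER], axis=1).mean()
        outline_err = np.linalg.norm(aligned[OUTLINE] - pts[OUTLINE], axis=1).mean()
        ratios.append(inner_err / max(outline_err, 1e-6))
    return float(np.mean(ratios))

# Stand-in data: real landmarks would come from a detector run on each video frame.
reference = np.random.rand(68, 2) * 100
frames_landmarks = [reference + np.random.randn(68, 2) for _ in range(30)]
print("misalignment score:", misalignment_score(frames_landmarks, reference))
```

A score that stays near 1 suggests the inner face moves consistently with the rest of the head, while a noticeably higher or jumpier score hints that the inner face and the outline were not captured together, which is one of the tells this kind of research is trying to exploit.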

The world is becoming harder to make sense of as simulations of reality grow indistinguishable from the real thing. Detection algorithms like these should help make it a safer place by giving people a way to tell what is real and what isn’t.


