A new AI-powered tool can spot deepfakes through corneal reflection

Synthetic media that replaces one person in a picture or video with someone else is known as a deepfake. Deepfakes are often put to malicious use, such as spreading misinformation or inserting someone's face into explicit videos.

A new artificial intelligence tool has been created that can detect such deepfakes by analyzing the reflections in the eyes. The tool was built by a group of researchers at the University at Buffalo and proved 94 percent effective at spotting deepfakes. As lead author Siwei Lyu explains, the cornea of the eye is almost like a perfect semisphere.

This part of the eye is highly reflective, so light arriving from any source will show up in the corneal reflection. And since both eyes see essentially the same scene, both eyes should show the same reflection as well. This detail is usually overlooked when analyzing a face, as Lyu noted during his testimony before Congress. Lyu was joined by two co-authors, Shu Hu and Yuezun Li, both PhD researchers affiliated with the University at Buffalo.

What someone sees is reflected in their eyes as well. If a picture or video is genuine, the reflections in the two eyes will have matching features, including shape and color. Media generated by AI, however, usually lacks this consistency in shape or color, because the forgery is assembled by blending numerous source pictures.

This is where the tool comes in, catching that tiny inconsistency. To test its accuracy, the researchers used both fake and genuine images from different sources, favoring portrait pictures in which the subject looks straight at the camera. The tool works in three steps: first the eyes are located and analyzed; then the eyeballs are examined and the light reflected in each is mapped; finally, the tool compares the shapes and the intensity of the two reflections.
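The comparison in that last step can be sketched in a few lines of code. This is a simplified illustration, not the researchers' actual implementation: the eye crops are stood in for by small grayscale arrays, the brightness threshold is an assumed value, and the similarity score used here is a plain intersection-over-union of the two highlight masks.

```python
# Illustrative sketch: compare the specular highlights in the two eyes.
# A genuine photo should show near-identical highlights; a GAN-generated
# face often shows mismatched ones. All names and values here are
# assumptions for demonstration, not the tool's real parameters.
import numpy as np

def highlight_mask(eye_crop, thresh=0.8):
    """Binary mask of the bright specular highlight in a grayscale eye crop."""
    return eye_crop >= thresh

def reflection_iou(left_eye, right_eye, thresh=0.8):
    """Intersection-over-union of the two eyes' highlight masks.
    Scores near 1.0 suggest consistent reflections; low scores flag a fake."""
    a = highlight_mask(left_eye, thresh)
    b = highlight_mask(right_eye, thresh)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # no highlight detected in either eye
    return float(np.logical_and(a, b).sum() / union)

# Toy example: a genuine pair shares the same highlight spot...
real_left = np.zeros((8, 8)); real_left[2:4, 2:4] = 1.0
real_right = real_left.copy()
# ...while a synthesized pair has the spots in different places.
fake_right = np.zeros((8, 8)); fake_right[5:7, 5:7] = 1.0

print(reflection_iou(real_left, real_right))  # 1.0 -> consistent
print(reflection_iou(real_left, fake_right))  # 0.0 -> inconsistent
```

In practice the eye regions would be cropped automatically with a face-landmark detector before any comparison like this could run.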

Though the tool is highly accurate, it still has some limitations. If one of the two eyes is not visible, the tool becomes useless. It also compares the reflections pixel by pixel, rather than interpreting the combined image reflected on the cornea. Lyu also notes that deepfake content often gives its subject an unnatural blinking rate.

Lyu added that identifying such content is necessary because misinformation can spread like wildfire, and the consequences can turn violent in a matter of time. He further noted that deepfakes are most often used to put people into pornographic content, which can have a severe impact on a victim's mental health. Beyond this, deepfakes are also used against politicians, maligning them by linking them to things they have never done.


