New Breakthrough DIVID Tool Can Detect Fake Generative AI Videos

A new tool is making headlines for its ability to detect videos made with generative AI.

The news matters because frauds and scams built on AI-generated content are on the rise, and tech experts have struggled to tell what's real from what's fake, especially when it comes to video.

At the start of this year, an employee at a multinational firm sent millions of dollars to fraudsters after mistaking their instructions for a genuine request from the company's CFO. In reality, the demand came from threat actors who had disguised themselves as senior members of the organization.

The elaborate scheme cost the firm $25 million, simply because the detection systems currently in use gave the employee no way to tell that the demand was fake.

With incidents like this in mind, researchers at Columbia Engineering have been making waves with the launch of a new tool dubbed DIVID, which aims to prevent such scams by detecting fake AI-generated content.

The Diffusion-Generated Video Detector works by analyzing the video itself, without needing access to the inner workings of the generative models that produced it.

The tool improves on previous methods of generative AI video detection, enabling identification that may not have been possible in the past.

Earlier methods focused on videos made with older models such as GANs, which pair two neural networks: one generates fake content, while the other evaluates it, learning to distinguish real from fake.

Through that feedback loop, both networks improve until the output looks highly realistic. Today's AI detection tools search for telltale signs of that process, such as bizarre pixels, unnatural movements, and frame inconsistencies that you won't find in real footage.
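To make the "frame inconsistency" signal concrete, here is a toy sketch, not DIVID's actual method or any real detector's code: each frame is represented as a flat list of grayscale pixel values, and an abrupt jump in the mean absolute difference between consecutive frames is flagged as suspicious. The threshold is a hypothetical tuning parameter.

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flag_inconsistent_frames(frames, threshold=50.0):
    """Return indices of frames whose change from the previous frame
    exceeds the threshold (a hypothetical tuning parameter)."""
    flagged = []
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            flagged.append(i)
    return flagged

# A smooth 4-frame clip with one abrupt, glitchy frame at index 2;
# both the jump into it and the jump out of it get flagged.
clip = [
    [10, 10, 10, 10],
    [12, 12, 12, 12],
    [200, 5, 180, 0],
    [14, 14, 14, 14],
]
print(flag_inconsistent_frames(clip))  # → [2, 3]
```

Real detectors, of course, work on full-resolution color frames and use learned features rather than a raw pixel-difference threshold; this only illustrates the kind of temporal signal they look for.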

We already have a few AI video tools from the likes of OpenAI and others that rely on a different approach for video creation. The diffusion model creates pictures by gradually turning random noise into clear images that look strikingly realistic.

When it comes to videos, these models refine every frame in a coordinated way, producing smoother transitions, better quality, and more realistic results. That extra sophistication makes it a much bigger hurdle for experts to separate real from fake.

We've already heard about the launch of Raidar from the same team, a tool that lets researchers detect text produced with AI by analyzing the content directly. There was no need for access to LLMs such as OpenAI's GPT-4, among others.

Raidar simply uses an LLM to rewrite the text and then counts the number of edits the system makes. Many changes suggest the text was written by a human; few changes suggest it was produced by a machine.
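The rewrite-and-count idea can be sketched in a few lines. This is a rough illustration of the principle, not Raidar's implementation: the LLM rewrite step is stubbed out with a fixed string, and the edit threshold is a hypothetical tuning parameter.

```python
import difflib

def count_edits(original, rewritten):
    """Count the inserted, deleted, or replaced spans between two strings."""
    matcher = difflib.SequenceMatcher(None, original, rewritten)
    return sum(1 for op, *_ in matcher.get_opcodes() if op != "equal")

def likely_machine_generated(original, rewritten, threshold=3):
    """Few edits mean the rewriter left the text largely alone, hinting
    it already reads like LLM output (threshold is hypothetical)."""
    return count_edits(original, rewritten) < threshold

# In the real system, `llm_rewrite` would come from prompting an LLM to
# polish the draft; a rough human draft tends to attract many edits.
human_draft = "me and him was going to the shop real quick yesterday"
llm_rewrite = "He and I went to the shop quickly yesterday."
print(count_edits(human_draft, llm_rewrite))
```

Unchanged text scores zero edits, so `likely_machine_generated(text, text)` returns `True`; the heavily corrected human draft above scores well past the threshold.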

Seeing that approach expand into the world of video is major news, and from what we can see so far, it should help distinguish real footage from inauthentic video produced by diffusion models.

The concept has already caught the attention of many in the tech world, in part because the researchers have open-sourced their code and datasets. The method was presented a few days ago at this year's Computer Vision and Pattern Recognition (CVPR) conference in Seattle, and it has been gaining attention for the right reasons ever since.
