New Report Explains Why Even AI Can’t Protect Us from Deepfakes

A new study argues that by relying on artificial intelligence to fight deepfakes, we may be throwing ourselves under the bus. Relying on technology means shifting more data and power to private organizations, which makes the issue even harder to handle. Studies have shown that most people are worried about deepfakes because there is no quick technical fix that addresses the underlying structural inequalities. Previously, deepfakes were considered an elite problem: celebrities, politicians, and other public figures were the main targets. These figures had proper teams handling their PR and personal image, and those teams knew how to deal with deepfakes using money and advanced technology. With the spread of social media, however, children and ordinary people are also falling prey to deepfakes, and the worst part is that they are poorly equipped to address or resolve these problems.

According to most people, relying on technology alone to detect deepfakes is not a reliable approach. One of the biggest reasons is mistrust: combinations of media companies and technology firms are not perceived as trustworthy when it comes to detection, because audiences feel that media companies manipulate data to suit their own preferences.


Tech giants like Facebook and Google are publishing datasets to help researchers build new models for detecting fake videos and images. Startups like Truepic are using advanced technologies, including AI and blockchain, to help detect fake images and deepfakes. Beyond that, research programs like MediFor are now enjoying their best years, as they can detect deepfakes by analyzing videos at the pixel level.


Photo: Alexandra Robinson / Getty Images

Read next: Elon Musk just warned about the swarms of bots on social media platforms, and it’s something we surely need to take a look at
