AI-Based Voice Cloning Is Giving Rise To Another Big Security Scam

With a handful of cases and around $17 million already stolen, the CEO of a company known for detecting voice fraud is warning the world about an emerging security threat. Cybercriminals have started cloning people’s voices with AI-powered software, which means even your voice is no longer fully secure.

Vijay Balasubramaniyan, CEO of Pindrop, presented at RSA a dozen cases the company has personally investigated, showing how fraudsters “deepfake” someone’s voice to carry out their scams. According to him, if you’re a CEO or manager with a lot of your own content up on YouTube, fraudsters can use that footage to synthesize your voice with AI-based software and put you in danger.

Executed well, the trick can also build on business email compromise attacks, in which scammers impersonate a senior company executive through fake emails. Hearing what sounds like the CEO’s voice on the phone can fool lower-level employees to the point that they feel obliged to follow orders and approve the large money transfers the fraudsters request.

Only five minutes of recorded audio is enough to create a fairly realistic clone, and with five hours or more the software can produce a voice that humans struggle to tell apart from the real thing. Nonetheless, the deepfaking threat is still small compared with phone-call scams that involve identity theft.

Balasubramaniyan also demoed an internal system his company developed to synthesize the voices of public figures. For added effect, the software deepfaked President Donald Trump’s voice.

The company used Trump’s past recordings to simulate his voice, and creating the replica took less than a minute. The example of the US President’s voice also raises concerns about how deepfakes could spread misinformation to fool the public.



The only bright side, for now, is that computer scientists have started working on ways to detect deepfakes. Pindrop has gone a step further, building an AI-powered algorithm that can distinguish human speech from deepfake audio tracks: it checks how the spoken words would actually be pronounced by a real human and then matches the recorded voice against those human speech patterns.
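Pindrop has not published the internals of its detector, but the general idea of classifying audio by its acoustic features can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not Pindrop’s algorithm): it assumes you have folders of labeled WAV clips in hypothetical "real/" and "fake/" directories, extracts MFCC features with librosa, and trains a simple scikit-learn classifier to separate genuine speech from synthesized speech.

```python
# Minimal sketch of a deepfake-audio classifier (illustrative only,
# not Pindrop's algorithm). Assumes labeled WAV files in hypothetical
# "real/" and "fake/" folders.
import glob

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def extract_features(path):
    # Average MFCCs over time: a crude summary of how the speech is voiced.
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)


X, y = [], []
for label, folder in enumerate(["real", "fake"]):  # 0 = human, 1 = deepfake
    for path in glob.glob(f"{folder}/*.wav"):
        X.append(extract_features(path))
        y.append(label)

# Hold out 20% of the clips to estimate accuracy on unseen audio.
X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

A production system would rely on far richer features and models, but the core idea, learning the statistical differences between genuine and synthesized speech, is the same.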

All in all, the looming threat of audio deepfakes will soon force users to be more careful about uploading their voices to the internet.


