Cybercriminals Don't Need Sophisticated 'Artificial Intelligence Powered Hacking Tools' To Get The Job Done - At Least For Now

Artificial intelligence is the one term that dominates not only the tech news cycle with every passing day but our lifestyles as well. It is changing the way people interact with each other, simplifying everyday tasks, and even giving businesses a much-needed boost. But alongside all of this good, it is often assumed that AI has also become the most effective tool for hackers.

There is no doubt that many would agree with the widespread assumption that AI-based techniques can collect, resell, or commit fraud with stolen data more effectively. But when it comes to actually testing AI's usefulness for the job, you will be surprised!

AI is still not “intelligent”

First and foremost, the biggest obstacle to AI-powered hacking lies in AI's lack of actual intelligence. If we dig into what AI really is right now, it is data science: massive sets of information used to train machines. Machine learning takes a lot of time and consumes an enormous amount of data, and even then the outcome remains stuck at limited, binary actions.

Compare that ability with what hackers actually demand: machine learning tools that can act on their own, especially the ability to create or adapt in response to whatever they encounter once deployed. With the current state of the technology, it is nearly impossible that most hackers would have enough data to build such self-adjusting models.

The most common example of threat actors using machine learning models has been bypassing CAPTCHA challenges. But once the CAPTCHA's weakness is patched with a test of more genuine intelligence, such as identifying objects in an image, AI starts to fall short.


This is because the threat actor's model needs to be trained on a set of categorized images, giving it basic knowledge of what a car, storefront, street sign, or other random item is and how each is incorporated into the image as a whole. No doubt, this would require a more advanced level of training on partial images, more data resources, data science expertise, and patience. Until then, hackers stick to the simpler CAPTCHA crackers, where they win some and lose many.
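For a sense of what "training on a set of categorized images" actually involves, here is a minimal sketch of supervised image classification in PyTorch. The tiny network, the public CIFAR-10 dataset (a stand-in for labeled CAPTCHA-style images), and the training settings are illustrative assumptions, not anything attributed to real attackers:

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# CIFAR-10: 60,000 labeled 32x32 images across 10 classes (car, truck, etc.).
# Even this "toy" scale hints at the data appetite of supervised training.
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=T.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# A deliberately small convolutional classifier.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),            # 10 object classes
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One pass over 50,000 labeled images; real systems need many epochs,
# far larger models, and far more (and messier) data.
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Even this toy example leans on a curated, pre-labeled dataset of 50,000 images. Assembling an equivalent corpus of labeled, partial CAPTCHA imagery is exactly the expensive, patience-testing work described above.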

What can AI really hack?

In 2018, a study titled “The Malicious Use of Artificial Intelligence” pointed us to some well-known examples of AI hacking tools, which turned out to be the hard work of funded researchers actively striving to turn AI into a true weapon. One of them was IBM’s evasive hacking tool from last year, malware that can lie dormant on large numbers of connected devices across a corporate network until it becomes impossible for security pros to beat the attack. The other big technique, from a team of researchers in Israel, revolved around spoofing medical imagery so that scans show problems that are not really there.

You can also expect to see malicious applications in the near future that are specifically designed to disrupt legitimate machine learning models, and they might even become available for purchase on dark web networks. (However, why would anyone with that many resources for developing malicious AI think about making money from cybercrime, when selling the software would be a great alternative in the first place?)
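To make "disrupting legitimate machine learning models" concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a long-published academic technique for generating adversarial inputs. The pretrained ResNet model, the random stand-in image, and the epsilon value are assumptions chosen for illustration, not tooling attributed to any actor in the report:

```python
import torch
import torch.nn.functional as F
import torchvision

# A standard pretrained classifier plays the "legitimate" model.
weights = torchvision.models.ResNet18_Weights.DEFAULT
model = torchvision.models.resnet18(weights=weights)
model.eval()

def fgsm_perturb(image, label, epsilon=0.03):
    # FGSM (Goodfellow et al., 2015): nudge every pixel a tiny step
    # in the direction that most increases the model's loss.
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

image = torch.rand(1, 3, 224, 224)      # stand-in for a real input image
label = model(image).argmax(dim=1)      # the model's original prediction
adv = fgsm_perturb(image, label)

# The near-identical adversarial image often gets a different label.
print("original:", model(image).argmax(dim=1).item(),
      "adversarial:", model(adv).argmax(dim=1).item())
```

The unsettling part, and the reason researchers study it, is how small the perturbation is: a change of a few percent per pixel can be enough to flip a confident prediction.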


Moreover, the 2018 report offered another piece of anecdotal evidence of malicious AI, stating that phishing attacks could be the best early use of this hypothetical breed of malicious machine learning. Attackers could simply set their target, and the program would extract public social media data, online activity, and any available private information to develop an effective message and attack method.

But then again, such spear phishing and malware propagation require a large attack surface, and they would not be cost-effective for hackers compared with getting the job done through manual labor and existing methods. Hence, AI still can't hack anything in a way that would be a groundbreaking achievement for cybercriminals.

AI is just not important

To put it into simple words, machine learning is an extremely complicated solution for taking over accounts or infecting systems. You might not believe it, but somewhere, someone already has your information (e.g. an email address, your Facebook username, or an old password). Over time this gets bundled into a profile of you for the hacker, which is then sold to buyers who try your credentials against every banking, food delivery, gaming, email, or other service they want, a tactic known as credential stuffing. In the worst case, they can even try to get into the software you use at work just to hijack corporate systems. That's how far these operations have already evolved.


On the other hand, despite all of these threats, average internet users still have no idea how to change bad passwords, stop clicking malicious links, identify phishing emails, or avoid insecure websites.

So, while hackers are getting the job done with simpler methods and great efficiency, AI just isn't necessary for them.


