As AI systems continue to grow, so too does the threat landscape targeting them. The National Institute of Standards and Technology (NIST) recently released a report warning that malicious actors are using adversarial machine learning to circumvent these systems, and its findings suggest the threat is likely to grow in the near future.
One such attack is data poisoning, in which malicious actors sabotage the data used to train AI models; because it scales so easily, it requires hardly any financial resources. Backdoor attacks are similarly dangerous: they plant hidden triggers in the training data that attackers can later use to induce misclassifications on demand, as in the sketch below.
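To make the idea concrete, here is a minimal Python sketch of backdoor data poisoning. The trigger pattern, poison rate, target label, and stand-in dataset are all illustrative assumptions for this article, not a recipe taken from the NIST report:

```python
# Minimal backdoor-poisoning sketch (illustrative, not from the NIST report).
import numpy as np

def poison(images, labels, target_label, rate=0.05, rng=None):
    """Stamp a small bright patch (the 'trigger') onto a fraction of the
    training images and relabel them as the attacker's target class."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 trigger in the bottom-right corner
    labels[idx] = target_label    # flipped labels teach the model the shortcut
    return images, labels

# Stand-in for a real image dataset; shapes and labels are hypothetical.
X = np.random.default_rng(1).random((1000, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=1000)
X_poisoned, y_poisoned = poison(X, y, target_label=7)
```

A model trained on the poisoned set behaves normally on clean inputs, but any image carrying the trigger is steered toward the attacker's chosen class, which is what makes this class of attack so cheap and so hard to spot.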
Both attacks can be extremely difficult to ward off, and they are just two of the many AI-based threats that can compromise a wide range of systems in the long run. Many of the remaining risks have to do with privacy.
Through a technique called membership inference, malicious actors can determine whether a particular piece of data was used to train a given AI model. Here too, there is no consensus on how systems can be protected from such incursions, which casts some doubt on AI's ability to transform industries in the way many are expecting.
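A simple confidence-threshold test illustrates why membership inference works. The sketch below deliberately overfits a model and shows that training samples receive systematically higher confidence than unseen ones; the dataset, model, and threshold are illustrative assumptions, and real attacks (such as shadow-model approaches) are considerably more involved:

```python
# Toy membership-inference demo via a confidence threshold (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, flip_y=0.1, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit: an unpruned tree memorizes its training set.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """Predicted probability the model assigns to each sample's true label."""
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

# Members (training samples) tend to receive higher confidence than
# non-members, so even a bare threshold leaks membership information.
threshold = 0.9
member_rate = np.mean(true_label_confidence(model, X_train, y_train) >= threshold)
outsider_rate = np.mean(true_label_confidence(model, X_out, y_out) >= threshold)
print(f"flagged as members: train={member_rate:.2f}, held-out={outsider_rate:.2f}")
```

The gap between the two rates is the leak: an attacker who can query the model's confidence can guess, better than chance, whether a record was part of its training data.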
Because the technology is still at such a nascent stage, a deep understanding of where the threats might lie is essential. Even so, many companies investing in AI systems aren't doing much to mitigate the risk of attack. A reactive approach will only let malicious actors gain a foothold, which is why researchers are urging decision makers to use the report to adopt a proactive one.
Photo: Digital Information World - AIgen