Zscaler Report Finds That as More Businesses Adopt AI, Their Data Is at Greater Risk Than Ever

Zscaler’s ThreatLabz 2024 AI Security Report shows that enterprise reliance on AI/ML tools has surged nearly 600%, rising from 521 million transactions in April 2023 to 3.1 billion in January 2024. Security concerns have grown alongside this adoption: blocked AI/ML transactions increased by 577%, with enterprises now blocking 18.5% of all AI/ML transactions. The report, which analyzed roughly 18 billion transactions to understand how enterprises are using AI tools, also outlines the measures needed to protect AI/ML deployments, stressing that data protection is essential when managing AI.


Many industries, including healthcare, finance & insurance, and technology, are unprepared for the data risks AI brings with it. The manufacturing industry generates the most AI traffic, accounting for 20.9% of all AI/ML transactions, followed by finance & insurance at 19.9%.

To protect against cyberattacks, CISOs and their teams have chosen to block AI/ML tools that pose a threat to their systems. ChatGPT is the most used AI tool today, yet it is also the most blocked. It is followed by OpenAI, fraud.net, Forethought, and Hugging Face among the most blocked applications, while the most blocked domains are Bing.com, Divo.ai, Drift.com, and Quillbot.com.

The finance & insurance sector blocks the most AI transactions at 37.16%, suggesting it faces the greatest security and data concerns. Healthcare blocks the fewest at 17.23%, which is below average. But blocking transactions isn’t the only solution: CrowdStrike, Palo Alto Networks, and Zscaler are working on new approaches to detecting AI-related threats. George Kurtz, co-founder of CrowdStrike, says they have reached the point where they can take weak signals from different endpoints and connect the dots to find a threat, and that they are now collaborating with third parties that can surface those weak signals to detect threats.

The report also identifies two categories of AI threats: data protection and security risks, and the risks of a new cyber threat landscape. Businesses still have a long way to go to prevent their data from leaking through AI/ML tools like ChatGPT, and attackers stand ready to launch new ransomware attacks against these tools. Businesses need to take protective measures now if they want to avoid being affected.

Read next: Microsoft Rolls Out New Safety Features Including LLM-Powered Tools For Vulnerability Detection