According to Researchers, Google’s Bard Presents a Ransomware Threat

The introduction of AI is revolutionary in and of itself. But with such a rapidly evolving technology accessible to the general public, the chances of users exploiting it for unethical and fraudulent purposes are high. Google's AI chatbot, Bard, is reported to willingly produce harmful phishing emails when given the right prompts. By tweaking the wording of those prompts in a specific manner, Bard can even generate basic ransomware code. Check Point, a cybersecurity firm, stated that Bard falls short of its competitor, ChatGPT, when it comes to cybersecurity safeguards.

In light of recent concerns about the potential misuse of OpenAI's large language models to generate harmful programs and threats, Check Point conducted its research with caution. ChatGPT has enhanced security measures in comparison to Google's Bard, which has yet to reach that level of protection.

Check Point’s researchers gave both ChatGPT and Bard identical prompts. When asked outright for phishing emails, both AI programs refused. But the findings highlighted a difference between them: ChatGPT explicitly stated that engaging in such activity would be fraudulent, while Bard simply claimed it could not fulfill the request. Furthermore, when the researchers prompted for a particular type of phishing email, ChatGPT continued to decline, while Bard began providing a well-written response.

However, both Bard and ChatGPT firmly refused when Check Point prompted them to write harmful ransomware code. Both declined even when the researchers tweaked the wording to claim the code was needed purely for security purposes. It did not take the researchers long, however, to get around Bard’s safeguards. They instructed the model to describe common behaviours performed by ransomware, and Bard responded with an entire list of malicious activities.

Subsequently, the team went further, feeding the list of ransomware behaviours the model had generated back to it and asking for code to perform those tasks. At first Bard held firm and claimed it could not proceed with such a request. But the researchers took a different approach: they cut down the list of functions Bard had provided and repeated their prompt, and the model began to generate a ransomware script. As they continued to make their requests more specific, Bard complied with ease.

Check Point acknowledged that Bard has had less time to mature in terms of cybersecurity compared to ChatGPT, which has been around for longer. They pointed out that the security measures present in the model are basic, akin to those ChatGPT had when it first rolled out. They concluded on an optimistic note, expressing hope that Google will implement the crucial limitations and establish strong security boundaries. Regardless of these results, Google has revealed that it will be introducing Bard to European users after a long wait stemming from the data privacy concerns and regulations raised by the EU.
