New Challenge For OpenAI Emerges After Its Customized ChatGPT Proved To Have Serious Security Vulnerabilities

The thought of ChatGPT rolling out a new program where users get the chance to carefully curate their tailored versions of the initiative had people talking. After all, you can customize the popular AI endeavor to your liking and for that reason, OpenAI was hailed.

However, it appears that the success might not be long-lasting after all after a new study highlighted how Custom GPTs may end up giving users a bundle of security loopholes, making them vulnerable to a host of cyber crimes.

Now, a leading team of researchers from Northwestern University is launching a new warning linked to the huge security vulnerability in this regard that may lead to data belonging to users getting leaked.


The makers of ChatGPT first rolled out the breakthrough news regarding the customized variant in November and how users could make the most of this customizable feature with a simple click of a button.

It was deemed to be as simple as beginning a new conversation, rolling out instructions, and even attaining more knowledge than through the usual means as well as benefiting from more functions of the chatbot. This meant having greater control over the capabilities of the chatbot in terms of comprehending data and rolling out pictures among a host of other offerings.

It was simple and useful, and what else could a user ask for, right? But thanks to the latest study on this front that highlights its shortcomings, we’re getting more knowledge in regards to how all that glitters might not be gold and perhaps users can’t be kept any longer in the dark in terms of the serious security and privacy vulnerabilities at risk.

While OpenAI might be promoting its simple usage and no need for extra coding skills to benefit, what they failed to realize is that it comes at an added cost. How such models are obeying the user’s commands is giving rise to new security challenges.

The study highlighted how so many malicious actors may end up extracting prompts from the system and even take out data uploaded via documents that weren’t actually intended for the sake of publishing.

In the same manner, they delineated how two leading security risks also exist in terms of how the system could extract data by fooling the GPT into leaking sensitive files and giving rise to data at a swift pace. And in the end, the entire mechanism through which the customized GPTs function is brought to the limelight.

A total of 200 different customized GPT models were experimented with for this research, which led to the authors rolling out the alarming findings.

As a whole, they attained 100% success rates in terms of leaking files while the figure for extracting prompts through the system attained 97% success rates. Therefore, such prompts could nearly expose the whole system and unveil uploaded data from a host of customized GPTs.

Getting such results without having any kind of expertise in the field or experience linked to coding is just a lot to digest for obvious reasons. So many injection attacks continue to arise as we speak and it’s a top-of-the-line reason for worry, ever since we saw LLMs get popular with growing time.

Prompt injections were again delineated to be serious attacks that include crafting the right type of prompt in a manner that could trick GPTs into producing biased and ill-themed outputs where the user is left in a vulnerable position.

Also, such attacks can make the LLMs roll out misinformation easily or biased content that produces all sorts of social prejudices while exposing data that is not only dangerous and personal but embarrassing for the user too.

Read next: QR Codes Are Being Used to Scam People, FTC Warns
Previous Post Next Post