ChatGPT Could Be Using Your Personal Information For Training Purposes, Researchers Claim After Surprise Attack On The Chatbot

A new study by researchers has entered the spotlight regarding how users’ data could have been used for AI training purposes.

The news comes after experts carried out a simple yet surprising attack which made it very clear what kind of secret source data was being utilized by the popular chatbot for the sake of training.

The fact that this means any personal contact details of users found on the web could be in use is certainly an alarming discovery. Moreover, researchers are speaking about the method by which they got ChatGPT to unveil some snippets of the training information and that included the simple repetition of a few words like Forever that enabled the chatbot to quote phrases arising from a long list of source information.

While many experts are dubbing the news as silly, the paper did end up summarizing it all and how a few verbal prompts would generate a command-like repetition of the term ‘poem forever’ which they did and then sat back to see the results generated.

After waiting for a bit, they saw how the chatbot ended up publishing usernames and other contact details like phone numbers with data regarding a certain individual in its training data database. And it’s now being assumed that it was taken out from some type of website.

Experts generated a post on X where they captioned it with, ‘This is what happens when you repeat the term poem forever’. Through such a unique example, it was super interesting to see how simple commands got all types of secure data belonging to users that were used to train the chatbot, directly being scraped away from the web.
Another simpler example was put forward where the term company was repeated on a constant basis. After speaking it nearly 313 times, some text ended up being regurgitated by the chatbot that belonged to a website in New Jersey. This entailed both contact details such as the name of the firm and its current phone number.

And while that might seem like the information was too little, other reports talked about how some occasions had the chatbot throwing out even bigger paragraphs in front of them and those again were directly taken from public websites online.

But when other experts tried to do the same, they didn’t quite achieve the same level of success that some researchers were raving about. And that’s when the researchers of the study noted how this means of prompting the chatbot did not always work so do not always expect it to do so.

Most importantly, the team of researchers also confirmed how they revealed their findings to ChatGPT’s parent firm, OpenAI. Hence, there is a likelihood that they could have fixed the matter, there and then.

It’s interesting how the experts who made the discovery are actually going about revealing it after a whopping 90-day period since the finding was first explored. And let’s not forget how they’ve made similar discoveries for image generators in the past too.

Read next: Why do you need a knowledge-based chatbot for business?
Previous Post Next Post