Romantic AI Chatbots Fail At Protecting User Privacy, New Mozilla Survey Confirms

A new study by Mozilla has taken a close look at the inner workings of leading romantic AI chatbots.

While most of us might assume that such romantic chatbots are after one thing, the findings of this study prove otherwise: it's not your heart but your data that they might be after.

Yes, you can bid farewell to the idea of user data remaining private and secure. The investigation covered the likes of CrushOn.AI, EVA AI, Talkie Soulful AI, and even Romantic AI, and found that protecting user privacy is simply not a serious concern for these apps.

In reality, none of the 11 romantic AI chatbots reviewed operates transparently, so part of the problem is that you might not even be aware of what's happening with your data in the first place.

What safeguards are in place, whether you're chatting with a human or a machine, and whether your conversations are used to train AI models or for something else entirely: there's little way for users to know what's actually going on.


Additionally, these chatbots collect plenty of data and handle it insecurely. According to the study's findings, close to 73% of the chatbots published no information about how security vulnerabilities are managed. On top of that, close to 64% have yet to publish clear information on whether any kind of encryption is used, while 45% allowed the weakest passwords imaginable, even ones consisting of a single digit.

Interestingly, nearly all of the platforms could sell user data or share it online. Just one vowed not to take part in such activities: EVA AI Chat Bot & Soulmate. And close to 50% of these apps won't let users delete their data, even after they stop using the service.

Mozilla noted how one chatbot in particular, Romantic AI, states in its policy that communication through the app stays within the software itself, which sounds reassuring on paper.
Yet many of these companies would share data with the government or local law enforcement agencies without ever needing a court order.

One of them, Romantic AI, was found to send out around 24,000 trackers during just the first minute of use. Meanwhile, Mozilla also observed that some of these AI bots could engage with topics deemed harmful.

Mozilla recommended that anyone using romantic AI chatbots should set strong passwords, request that the service delete their personal information, and avoid sharing personal data or sensitive credentials in conversations.

So as you can see, romantic AI chatbots can be deceiving, and it's up to you to decide how to use them without having your privacy and security compromised. After all, no technology should be trusted blindly in this regard.
