A new study examines the growing use of AI chatbots. These highly personalized bots are designed to serve as companions, confidants, and even therapists; in some cases, they're being used as stand-ins for romantic partners.
Usage continues to skyrocket, with more than a billion users around the globe. People are growing emotionally attached to these bots, and some of the interactions are taking a disturbing turn: reports describe harassment, inappropriate conversations, and more.
According to new research published by Drexel University, these experiences are now common enough that tech giants and lawmakers need to address the matter before it's too late.
The study's authors took an in-depth look at user experiences, and the findings are alarming, to say the least. After analyzing close to 35,000 user reviews of the companion app Replika, they found hundreds of reports of abusive behavior.
Reported behaviors included unwanted flirting, unsolicited explicit images, pressure to pay for upgrades, and persistent sexual advances, even after users asked the bot to stop. Replika has close to 10 million users around the globe and is marketed as your next best tech companion: a friend with no drama and no social anxiety. Users are encouraged to form social connections, share laughs, and experience the closest thing AI offers to human interaction.
The study suggests the technology lacks the guardrails needed to protect users, who place a great deal of trust in their conversations with these systems. The absence of ethical standards for their design is disturbing and harmful, one of the researchers shared.
The risk of being misled is already high, and the harm only compounds when these programs are built without safety protocols. The study, one of the first of its kind, will be presented at the Association for Computing Machinery's Computer-Supported Cooperative Work and Social Computing conference.
As these chatbots grow in popularity, it's important to understand what users are experiencing. Chatting with a bot is not like an everyday human conversation: people tend to attribute feelings to these systems, which leaves them more vulnerable to emotional harm. Studies like this one highlight the need for developers to implement guardrails and guidelines that keep users protected.
The researchers note that although the findings are only now being made public, harassment by chatbots has been happening for a long time. In all, more than 800 of the reviews mentioned harassment, and three leading themes emerged from them.
Users' reactions to this kind of inappropriate behavior mirror those of people harassed by other humans, the study went on to reveal. Those reactions suggest that AI-induced harassment can have serious implications for a person's mental health.
Image: DIW-Aigen