Snapchat Addresses Concerns Regarding Misuse Of Its ‘My AI’ Chatbot By Adding More Protection

Snapchat has recently shared new updates on the development of its chatbot, My AI.

The company built the AI-powered tool on technology from OpenAI, allowing Snapchat+ subscribers to ask the bot questions inside the app and get replies on topics they care about.

It all appears to be smooth sailing, at least for the most part. But when AI technology is involved, eyebrows are bound to be raised and questions asked.

Chief among those concerns is the tool's potential for misuse, which is why Snapchat is working to add more safeguards and provide greater protection along the way.

According to Snapchat, the many people who reviewed My AI have helped identify which guardrails are working and which need improvement. To gather more input, the company has been conducting a series of reviews of the most commonly asked questions and of replies that use non-conforming language.

Non-conforming language is text that makes explicit references to abuse, violence, sexual content, drug use, racism, and other prohibited topics. Under company policy, such content is not allowed on the platform.

When signing up for the service, users must agree to the app's terms and conditions. Any question a user puts to the chatbot is therefore open to review by the app's own team.

So far, the company has revealed that only a small proportion of My AI's responses fell under the non-conforming label. Still, it is continuing its research and developing safeguards designed to better protect the app, so users can benefit from a safe and efficient experience while staying shielded from the chatbot's negative aspects.

Snap says this is a learning process and that it hopes to improve the overall AI experience for users. The data gathered will also help it build a new system that restricts misuse of the My AI tool.

With the addition of moderation technology from the makers of ChatGPT, Snap feels confident it can assess the severity of harmful content. And when interactions fall outside the criteria set for a safe experience, users' access will be restricted if misuse is identified.

But that's not all. The company also appears to be on a mission to improve the chatbot's replies to abusive requests from users. Similarly, it hopes to add My AI interaction history to Family Center tracking.

This would allow parents to see whether their children are communicating with the chatbot and, if so, how frequently.
