Facebook's New Chatbot Has Already Started Providing Concerning Responses!

One of the main problems with Facebook chatbots is that a conversation can stop making sense entirely once it enters the "open domain," i.e. a conversation covering a wide range of topics. However, Facebook researchers recently claimed to have made progress on improving open-domain chatbots.

With these improvements, conversations should feel more consistent, empathetic, and specific, and the bot should also recall earlier parts of a conversation. To accomplish these goals, Facebook introduced a new chatbot game designed to provide researchers with high-signal data from ongoing conversations rather than static language data.

Facebook doesn't want chatbots that fall back on safe, generic responses; it believes chatbots should be able to give bold, even spicy answers.

The above-mentioned game was tested with three different conversations, and it went off-track in every case, producing some highly intriguing responses. According to Vice, in the first conversation a simple discussion about pop music led the bot to eventually respond that "together, we are going to make America great again, by getting rid of fake news." In another conversation, the bot refused to give a clear answer when asked whether CEO Mark Zuckerberg had ever killed someone.

Although Facebook didn't comment on the new bot, its description states that users interacting with the bot are actually training it, so its responses are conditioned on that interaction.

The purpose of the "Beat the Bot" game (currently only available within the US) is for Facebook users to compete with the bot and, in doing so, help train it. Users are paired up and assigned roles, in character with which they must communicate. Each message a user sends receives two replies: one from the paired human partner and one from the bot. The user must then decide which of the two responses they find better.

The responses provided by the first user are also sent to the second user and if the second user chooses their response over the bot’s, the first user gets some Facebook points. In return, Facebook gets its hands on data that it can use to train the bots.
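Facebook hasn't published the format of the data this game produces, but conceptually each round yields a pairwise preference record: a prompt, the bot's reply, the human partner's reply, and which one the judging user picked. A minimal sketch of such a log (all class and field names here are hypothetical, not Facebook's actual schema) might look like this:

```python
from dataclasses import dataclass, field


@dataclass
class PreferenceRecord:
    """One round of the game: two candidate replies and a human judgment."""
    prompt: str            # the message the judging user sent
    bot_reply: str         # the chatbot's candidate response
    human_reply: str       # the paired user's candidate response
    human_preferred: bool  # True if the judge picked the human's reply


@dataclass
class PreferenceLog:
    """Accumulates preference records for later use as training data."""
    records: list = field(default_factory=list)

    def add(self, prompt: str, bot_reply: str,
            human_reply: str, human_preferred: bool) -> None:
        self.records.append(
            PreferenceRecord(prompt, bot_reply, human_reply, human_preferred))

    def bot_win_rate(self) -> float:
        """Fraction of rounds where the judge preferred the bot's reply."""
        if not self.records:
            return 0.0
        wins = sum(1 for r in self.records if not r.human_preferred)
        return wins / len(self.records)


log = PreferenceLog()
log.add("What music do you like?", "I enjoy pop!", "Mostly jazz.", True)
log.add("Seen any good films?", "I loved the new sci-fi one.", "Not lately.", False)
print(log.bot_win_rate())  # → 0.5
```

Data shaped like this is what makes the game "high-signal": every record carries an explicit human judgment rather than raw, unlabeled chat text.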


It should be common knowledge by now that a good chatbot can't be programmed easily, and well-established companies shouldn't be rolling out unsupervised bots to experiment on their users. That approach has already led to controversies such as Microsoft's "Tay" AI turning into a Nazi, and the new Facebook bot has likewise shown some concerning signs.

It remains to be seen whether Facebook's new approach to bots proves to be a game changer (in a positive way) once the company collects enough data to complete their training. Only time will tell. For now, it is best for both us and Facebook to keep a close eye on such developments.

