Twitter Updates Problematic Reply Detection Algorithm

You might have noticed over the past year or so that Twitter often sends you a prompt if you reply to a tweet with foul language or anything else that might go against its community guidelines. This is largely because Twitter is trying to clean up the platform, but many users have complained that these prompts and warnings are unclear. The prompt is essentially a suggestion to revise your reply so that it no longer contains abusive language, with the option to revise it, delete the reply, or ignore the prompt entirely.

Twitter has now updated these warnings so that users get more information about which aspect of their tweet was considered harmful. The algorithm will also take into account the context of previous tweets from the account. Many users have complained that the algorithm ignores context, so this change is something most people would welcome.
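To make the idea concrete, here is a minimal, hypothetical sketch of how a reply-nudge check of this kind might work. The word list, scoring function, thresholds, and context discounts below are all illustrative assumptions for the sake of the example; the article does not describe Twitter's actual model.

    # Hypothetical sketch of a reply-nudge check. The classifier, thresholds,
    # and context signals are illustrative assumptions, not Twitter's real system.
    from dataclasses import dataclass, field


    @dataclass
    class ReplyContext:
        reply_text: str
        prior_tweets: list[str] = field(default_factory=list)  # earlier tweets in the conversation
        author_follows_target: bool = False                     # crude stand-in for relationship context


    # Placeholder word list standing in for a real toxicity classifier.
    HARMFUL_TERMS = {"idiot", "trash", "stupid"}


    def score_toxicity(text: str) -> float:
        """Toy scorer: fraction of words that match the harmful-term list."""
        words = text.lower().split()
        if not words:
            return 0.0
        return sum(w.strip(".,!?") in HARMFUL_TERMS for w in words) / len(words)


    def build_prompt(ctx: ReplyContext, threshold: float = 0.15) -> dict | None:
        """Return a prompt naming the flagged terms, or None if the reply passes.

        Context lowers the effective score: replies between accounts that already
        interact, or that match the tone of the conversation, are nudged less often.
        """
        score = score_toxicity(ctx.reply_text)
        if ctx.author_follows_target:
            score *= 0.5  # assumed discount for an existing relationship
        if any(score_toxicity(t) > threshold for t in ctx.prior_tweets):
            score *= 0.7  # assumed discount when the conversation already uses similar language
        if score <= threshold:
            return None
        flagged = [w for w in ctx.reply_text.lower().split() if w.strip(".,!?") in HARMFUL_TERMS]
        return {
            "message": f"Your reply may be harmful because of: {', '.join(flagged)}",
            "options": ["revise", "delete", "send anyway"],
        }


    if __name__ == "__main__":
        ctx = ReplyContext(reply_text="You are an idiot and your take is trash")
        print(build_prompt(ctx))

The point of the sketch is simply that the prompt now carries an explanation of what was flagged, and that signals about the surrounding conversation can soften the decision rather than judging the reply in isolation.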

This should also help curb the spread of misinformation, which has become increasingly common across social media platforms. Some people are critical of the initiative, saying that it is a threat to free speech and that Twitter is trying to impose its own morals and values on others, while others see it as a necessary part of the platform and point out that there is no obligation to edit a tweet that triggers a prompt. There is still a lot of work to be done on this algorithm, but it is a clear sign of positive change in the near future.



Read next: Twitter is bringing new labels for the accounts belonging to government officials and state-affiliated media for five countries