Is Artificial Intelligence Ready to Replace Humans in the Facebook Content Moderation Challenge?

Facebook has found itself in hot water numerous times for allowing offensive content on its platform. No matter how hard the social media giant works to tackle the issue, moderating billions of comments and posts is no easy task for the people in charge. Artificial intelligence has been touted as a solution, but AI won't be taking over completely anytime soon, as it still requires a lot of work.

Firstly, the right data must be found to train the AI algorithms. Secondly, programs capable of identifying hate speech or offensive content must be developed. Finally, it is important to stay ahead of violators, who keep devising new ways to beat the system.

In the past, Facebook depended on its users to report questionable content so that moderators could review it and proceed accordingly. Over the years, however, Facebook has turned its attention to developing algorithms that tackle such content automatically. Improvements are being made in the AI department, but the technology is still immature.

The system is still in the learning phase. Every time it encounters a new piece of training data, it moves up the learning ladder. Without appropriate training data, the system cannot learn to distinguish acceptable content from unacceptable content.

According to Facebook’s Chief Technology Officer Mike Schroepfer, who recently spoke to the Financial Times, data sets composed of countless examples will be fed to the system for learning purposes. This will in turn help the algorithms spot both clearly offensive and borderline offensive content.

It is believed that, in addition to datasets with numerous learning examples, Facebook will train the AI on content uploaded and shared by its users. For photos and memes, dedicated datasets can be created, since people are well aware that altering an original piece of content can help it slip past detection.
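To make the "learning from labeled examples" idea concrete, here is a minimal sketch of a toy text scorer trained on a handful of labeled posts. This is an illustration of the general technique only, not Facebook's system; every phrase, label, and function name below is invented for the example.

```python
from collections import Counter

# Toy labeled examples, as the article describes: each post is tagged
# either "violating" or "benign". All phrases here are made up.
LABELED_EXAMPLES = [
    ("we should hurt them all", "violating"),
    ("those people deserve to suffer", "violating"),
    ("have a great day everyone", "benign"),
    ("congrats on the new job", "benign"),
]

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"violating": Counter(), "benign": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Crude 'violating' score: the fraction of words seen more often
    in violating examples than in benign ones."""
    words = text.lower().split()
    hits = sum(
        1 for w in words
        if counts["violating"][w] > counts["benign"][w]
    )
    return hits / len(words) if words else 0.0

model = train(LABELED_EXAMPLES)
print(score(model, "they deserve to suffer"))   # → 0.75
print(score(model, "congrats on the new job"))  # → 0.0
```

Real moderation models are vastly larger and use learned representations rather than word counts, but the core loop is the same: more labeled examples means better separation between acceptable and unacceptable content, which is why the training data matters so much.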

Language also plays an important role. Facebook is used by people across the globe, so the many languages they speak must be considered when training the AI. Facebook received backlash last year for its slow response in taking down groups that incited violence, notably in Myanmar.

Now, Facebook is translating watchwords to and from various languages.
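The translated-watchwords approach can be sketched as keeping per-language lists of flagged terms and checking each post against the list for its language. This is a hypothetical illustration only; the terms, language codes, and function below are placeholders, not Facebook's actual lists or code.

```python
# Hypothetical per-language watchword lists (placeholders only).
WATCHWORDS = {
    "en": {"attack", "exterminate"},
    "es": {"atacar", "exterminar"},
}

def flag_post(text, language):
    """Return any watchwords for the given language found in a post."""
    words = set(text.lower().split())
    return words & WATCHWORDS.get(language, set())

print(flag_post("They plan to attack tomorrow", "en"))  # → {'attack'}
print(flag_post("Buenos días a todos", "es"))           # → set()
```

A simple keyword match like this is easy to evade with misspellings or slang, which is part of why, as the article notes, such systems still struggle with "grey area" content.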

When it comes to “grey area” content, especially harassment or hate speech that requires an understanding of current slang and terminology, AI still has a long way to go. Facebook is also in the process of rolling out an independent content review system that will let users appeal any content decision.

It should be noted that it is nearly impossible for existing algorithms to detect subtle violations and judge whether content is acceptable. One possible way around this obstacle is to study a user’s behavior on the platform and act on that signal.

Facebook claims that AI will always need humans to label and review borderline offensive content. Moreover, the man-versus-machine mentality should be dropped here: according to Mr. Schroepfer, the system is “human augmented.”

A few researchers have pointed out a flaw in Facebook’s approach. In their view, the focus should be on the news feed algorithms and how they present content to users. According to a director at the Harvard Kennedy School, these algorithms are designed to show users content that interests them, in order to keep them engaged, and this algorithmic design then slowly pushes them toward extreme content.

Photo: Soeren Stache/Getty Images
