A new analysis suggests that Facebook has failed to keep its promises of efficient content moderation

The Wall Street Journal recently reported that Facebook has been slow to enforce its policies for removing hateful and harmful content from its platform, despite its repeated promises to do better. It seems that either Facebook does not care about the magnitude of this misinformation and does not want to stop it, or it is simply unable to build better policies and systems to address the issue.

Facebook only starts addressing the issue when it faces intense external pressure, such as the recent #StopHateForProfit campaign and the accompanying advertiser spending boycott, or earlier incriminating incidents. The company tightens the reins on content moderation for a little while after a serious backlash, but the effort never lasts long, and soon another story surfaces that incriminates Facebook all over again.

Facebook has faced various challenges in recent years, but 2020 has been especially tough, with a flood of misinformation around the coronavirus pandemic, heated discussion and disinformation about climate change, and now transparency and security concerns around the upcoming US general elections. Facebook has a great deal of content to moderate, and its AI models and machine-learning systems are not working as efficiently as they should, focusing mainly on removing the content most likely to go viral.

The Wall Street Journal recently put this to the test. In September, it reported 276 toxic posts to Facebook, posts involving violence, hatred, and disinformation that would be dangerous if they went viral. Facebook's content moderation systems took down only 32 of the 276. When the Wall Street Journal inquired about the rest, Facebook confirmed that 50% of the remaining posts should have been removed immediately. It did remove that half within 24 hours, but that was still neither quick nor efficient enough, and many other posts from the same set were not removed for two weeks. This shows that Facebook's content moderation policies and systems are still lacking, despite criticism from every direction.

Other users can also report content that violates the platform's guidelines, but it remains questionable whether the company will pay heed to those reports.

When the Wall Street Journal’s analysis surfaced, Facebook spokesperson Sarah Pollack said it did not reflect the overall accuracy of the company’s post-review systems. She also said that Facebook has come to rely heavily on AI systems amid the coronavirus pandemic. But that only implies Facebook’s AI systems are not working efficiently either; they should have been removing more posts, not fewer. Pollack’s reasoning does not really add up.

The Wall Street Journal reported in August on Facebook’s refusal to enforce and improve its hate speech policies after an incident in India, but it looks like Facebook has not learned its lesson, and it seems unlikely to mend its ways anytime soon.