AI Content Moderation Systems Are Causing Social Media Users More Trouble Than Ease

Can social media users and the companies governing these platforms rely on artificial intelligence for content moderation? Until recently it sounded like a fancy and promising idea, mostly because of how Facebook’s CEO Mark Zuckerberg sold it during hearings and conferences, but after a string of complaints about the new systems, AI now looks more like an enemy.

Before we cite examples, pause and ask: can machines really be expected to understand human culture, especially when the matters at hand involve power dynamics, race relations, politics, and economics? On top of that, because all of these platforms operate globally, the systems must also account for widely varying cultural norms.

AI content moderation is something social media companies have dreamed of for years, not least because of the misery the job inflicts on human moderators. Almost all of them aim to keep unwanted content off their platforms with automated filtering that human moderators can lean on to make their work easier.

Eventually, the companies got the chance to try their luck with algorithms when their moderators were forced to stay home during the COVID-19 pandemic. Unfortunately, within a few months, users began to badly miss manual content moderation.

Take a recent incident at Facebook, where a glitch in the content moderation system left many ordinary users and business owners with blocked accounts. Almost all of them received notifications that they had violated the Community Standards, and in some cases the flagged posts were five years old. Only after the affected users submitted reviews, and roughly 24 hours of being blocked, did Facebook reach out with an apology and admit the glitch.

The pattern continued in Syria, where accounts of campaigners and journalists, for whom social media is the only medium to report potential war crimes, were shut down overnight with no right to appeal.

Some news agencies also saw their articles and coronavirus health information removed from the platforms right after the machines took over.

On the other hand, plenty of questionable posts stayed up. In France, for instance, campaigners against racism and anti-Semitism recorded a 40 percent increase in hate speech on Twitter, yet only 12 percent of those posts were removed.

Posts depicting child exploitation and self-harm also stayed up because there were no humans available during the pandemic to make the tough calls; removals of such posts fell by 40 percent in the second quarter of 2020.

What’s Wrong With AI?

AI and machine learning in content moderation mainly assist by pushing suspected content to human moderators, but they also have the authority to remove a lot of unwanted content from the platforms on their own. The way they do it is where the trouble begins.

The system relies on visual recognition for broad categories of content described by terms like “human nudity” or “guns”, and it will always be prone to mistakes because it works by matching content against an index of banned items, an index that is itself created by humans.

The process is designed this way so that the most obvious material gets caught first: videos from terrorist organizations, content promoting child abuse, and copyrighted material.

In all of these cases, the content is first identified by humans and then “hashed” into a unique number so that copies can be matched quickly. The technology can be called reliable, but it still misfires. A good example is YouTube’s Content ID system, which has in the past flagged uploads of white noise and birdsong as copyright infringement.
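To make that matching step concrete, here is a minimal sketch of how hash-based filtering can work in principle. It uses a plain cryptographic hash purely for illustration; real platforms use perceptual hashes (such as PhotoDNA or PDQ) that still match after resizing or re-encoding, and the sample index and byte strings below are hypothetical.

```python
import hashlib

# Hypothetical index of banned items: hashes supplied by human reviewers.
banned_index = {
    hashlib.sha256(b"known-terrorist-video-bytes").hexdigest(),
    hashlib.sha256(b"known-copyrighted-track-bytes").hexdigest(),
}

def is_banned(upload_bytes: bytes) -> bool:
    """Hash the upload and check it against the human-curated index."""
    digest = hashlib.sha256(upload_bytes).hexdigest()
    return digest in banned_index

# An exact copy of an indexed item is caught...
print(is_banned(b"known-terrorist-video-bytes"))   # True
# ...but novel or slightly altered content slips through, which is why
# index matching alone cannot make contextual judgements.
print(is_banned(b"slightly-edited-video-bytes"))   # False
```

The design choice is deliberate: matching against a curated index is fast and precise for known material, but it says nothing about content the index has never seen.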

The situation gets worse with content that even humans cannot easily classify. An algorithm may recognize nudity, but can it tell that a breastfeeding photo carries a different meaning? And what about decisions in still more context-dependent cases such as harassment, fake news, and misinformation?

For cases like these there is no objective standard for the machine to apply, and no consideration of a person’s background or personal ethos.

Filters are also not enough to sort out hate speech, parody, or news reporting of controversial events, since all of this content depends heavily on cultural context and other extrinsic information.

What’s The Future of AI in Content Moderation?

There is no doubt that AI as a field keeps advancing, but despite all the progress, effective moderation will still require substantial human involvement: the material that needs to be filtered depends heavily on context, and machine learning cannot yet go that deep.

To do better, automated content moderation systems need a large and diverse set of examples of what offending content looks like to an average user on the platform, labeled in line with the companies’ Community Standards. Building and training on such datasets will, of course, take years.
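As a rough illustration of what “training on labeled examples” means in practice, here is a minimal sketch of a text classifier built with scikit-learn. The toy posts and labels are hypothetical placeholders; production systems train far larger models on millions of reviewer-labeled examples.

```python
# A minimal sketch of training a moderation classifier on labeled examples,
# assuming scikit-learn is installed. The toy dataset is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each post is paired with a human-assigned label ("ok" or "violating"),
# mirroring how reviewer decisions become training data.
posts = [
    "Lovely sunset at the beach today",
    "Buy followers cheap, click this link now",
    "Happy birthday to my best friend!",
    "I will hurt you if you post that again",
]
labels = ["ok", "violating", "ok", "violating"]

# Bag-of-words features plus a simple linear model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# New posts get a predicted label and a confidence score; anything with low
# confidence would realistically be routed to a human reviewer.
new_post = ["Congrats on the new job!"]
print(model.predict(new_post), model.predict_proba(new_post).max())
```

The sketch also shows why diversity in the training data matters: a model trained on a narrow slice of posts will confidently mislabel everything that looks different from what it has seen.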

For now, people like Mark Zuckerberg are using AI tools as a shield to cover for their companies’ mistakes.
