Facebook's Artificial Intelligence to Take the Place of Its Content Moderators

Facebook continuously faces privacy and security issues, and users' exposure to sensitive content on the platform only rubs salt in the wound. The platform is notorious for the spread of misinformation and offensive content that can send the wrong message. Even though these are serious allegations, Facebook says it removes as much harrowing content as it can.

Facebook employs thousands of content moderators all over the world, alongside an always-on artificial intelligence service that detects offensive content, and it is steadily shifting more of that work to the machines.

Now, most of the content moderation on Facebook is handled by machine-learning systems. The moderators no longer have to review every piece of content themselves; instead, artificial intelligence does much of the work for them.
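As a rough illustration of that division of labor, the workflow can be sketched as a triage step: the model scores each post, clear-cut cases are handled automatically, and only borderline content goes to human moderators. The thresholds and function names below are hypothetical, not Facebook's real values.

```python
# Hypothetical triage sketch: route content based on a model's
# violation score. Thresholds are made up for illustration.
AUTO_REMOVE = 0.95   # assumed cutoff for automatic removal
AUTO_ALLOW = 0.05    # assumed cutoff for automatic approval

def triage(violation_score: float) -> str:
    """Decide what happens to a post given its predicted score."""
    if violation_score >= AUTO_REMOVE:
        return "remove"        # clearly violating: removed automatically
    if violation_score <= AUTO_ALLOW:
        return "allow"         # clearly benign: published without review
    return "human_review"      # uncertain: queued for a moderator

print(triage(0.99))  # remove
print(triage(0.01))  # allow
print(triage(0.50))  # human_review
```

Only the middle band reaches a person, which is why moderators end up reviewing far less content overall.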

Facebook claims that it detects 98% of terrorist photos and videos before users can even see them. That shows how far Facebook has come in content moderation, much appreciated!

Currently, Facebook is training its machine-learning systems to identify dangerous objects in videos and label them. It uses neural networks that recognize objects based on their features and behaviors and attach a confidence percentage to each label.
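To make the "label with a confidence percentage" idea concrete, here is a minimal sketch of how raw classifier scores might be turned into labels. The scores, threshold, and label set are invented stand-ins, not output from any real Facebook model.

```python
# Hypothetical sketch: convert raw neural-network scores into labels
# with confidence percentages, flagging a (made-up) dangerous set.
DANGEROUS_LABELS = {"weapon", "fire"}  # assumed label set

def label_objects(scores, threshold=0.80):
    """Keep labels whose confidence meets the threshold and mark
    any that fall in the dangerous set."""
    results = []
    for label, confidence in scores.items():
        if confidence >= threshold:
            results.append({
                "label": label,
                "confidence_pct": round(confidence * 100, 1),
                "dangerous": label in DANGEROUS_LABELS,
            })
    return results

# Example frame scores (fabricated for illustration):
frame_scores = {"weapon": 0.93, "backpack": 0.88, "tree": 0.42}
print(label_objects(frame_scores))
```

A real system would run such scoring over many frames per video; the thresholding logic, though, looks broadly like this.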

Right now, Facebook is training these networks on a large variety of pre-labeled videos. The networks can interpret the whole scene in a frame and raise a flag when something looks problematic.

When problematic behavior is predicted in a video, image, or other content, Facebook sends the data to human moderators for review. If a moderator confirms the violation, Facebook creates a hash, a digital fingerprint of the content, that allows it to automatically remove matching copies if a user re-uploads the video. Facebook can also share these hashes with other social media platforms so they can take the same content down.
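The hash-matching step can be sketched in a few lines. Note this is a simplification: a real system would use a perceptual hash that survives re-encoding and cropping, whereas the SHA-256 stand-in below only matches byte-identical re-uploads. The function names are hypothetical.

```python
import hashlib

# Simplified sketch of hash-based removal. SHA-256 is a stand-in for
# the perceptual hashes real systems use.
banned_hashes = set()

def fingerprint(media_bytes: bytes) -> str:
    """Digital fingerprint of a piece of media."""
    return hashlib.sha256(media_bytes).hexdigest()

def ban(media_bytes: bytes) -> None:
    """Called after a human moderator confirms a violation."""
    banned_hashes.add(fingerprint(media_bytes))

def is_banned(upload_bytes: bytes) -> bool:
    """Checked on every new upload before it goes live."""
    return fingerprint(upload_bytes) in banned_hashes

video = b"confirmed violating video bytes"
ban(video)
print(is_banned(video))         # True: the re-upload is blocked
print(is_banned(b"cat video"))  # False: unrelated content passes
```

Sharing the `banned_hashes` set with other platforms, rather than the media itself, is what lets them block the same content without ever hosting it.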

This is a wise step by Facebook, but the company is still struggling to automate a machine's understanding of language, meaning, and nuance. Because of that limitation, Facebook depends heavily on human moderators to review harassment and bullying on the platform. AI systems cannot identify much of this content as of now, but they might in the future.
