Facebook Is Implementing A New Policy To Crack Down On Harmful Movements On The Platform

Facebook is expanding its rulebook for managing “coordinated inauthentic behavior” on the platform, broadening the policy to cover harmful movements that don’t involve online impersonation.

The old policy Facebook had in place for coordinated inauthentic behavior (CIB) was limited, as reported by Reuters. In a nutshell, the action the policy allowed for wasn’t aggressive enough for larger movements on the platform; banning individual accounts or posts only goes so far. To be fair to the social network, a wave of misinformation is very different from the sort of harmful action certain extremist groups can come up with, and its developers were simply not prepared. Case in point: #StopTheSteal.

The 2021 US Capitol riots were a coordinated attack, with much of that coordination occurring over platforms such as Facebook. While Facebook is certainly not free of racists and the like, not even its own developers were prepared for the amount of violence such users were planning to incite. When the horrific attack occurred, everyone was taken aback, and many fingers were pointed at the social network for shirking its duty of online moderation. Early on, Facebook denied that any planning had occurred on its platform, blaming smaller platforms for the incident. However, as further evidence cropped up of planning taking place on the platform under the hashtag #StopTheSteal, it became an unavoidable fact: Facebook had a big, if unwilling, hand in the planning of a major attack on US democracy.

With the announcement of its new CIB policy, Facebook seems willing to accept its part in the US Capitol riots, and wishes not to repeat that mistake. Now, for the policy itself: how does it work? The new policy allows moderators to go after larger groups instead of just banning individuals; the platform will now directly dismantle entire networks. An example the company brought up in a blog post is the action taken against the conspiratorial Querdenken anti-vaccination movement. The German movement, relying on both authentic and fake accounts, propagated misinformation on a vast scale on the platform. In response, many of its major pages and groups were removed, accounts were banned, and linked websites were blocked.

Reuters cited two examples of how Facebook could use its new policy once it is widely implemented across the platform. The first, much like what happened with Querdenken, would be to dismantle large sources of misinformation. The second would be to stop troll networks in their tracks as they attempt to carry out coordinated online harassment and mass-reporting campaigns aimed at getting accounts banned.
