Facebook Updates Community on Q4 2020 Enforcement and AI Development

Facebook has recently published its Community Standards Enforcement Report for Q4 2020, taking an in-depth look back at policy enforcement across both the social network and Instagram. The company has also simultaneously published a post on the ongoing development of its hate-speech-detecting AI.

Jumping straight into the enforcement report, Facebook opens proceedings by boasting that the prevalence of hate speech on the platform fell from 0.10-0.11% to 0.07-0.08% across the last quarter of 2020. The percentage represents views of hateful content out of every 10,000 views of general content, across both Facebook and Instagram; 0.07-0.08% works out to roughly 7 or 8 views of hate speech per 10,000. Violent and graphic content fell from 0.07% to 0.05%, and adult nudity dropped from 0.05-0.06% to 0.03-0.04%. Overall, these numbers, while general in nature, seem to depict solid progress for the platforms. But more on that later.
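To make the prevalence metric concrete, here is a minimal sketch (the function name is ours, not anything from Facebook's methodology) converting a reported prevalence percentage into the "views per 10,000 views" framing used above:

```python
def views_per_10k(prevalence_percent: float) -> float:
    """Convert a reported prevalence percentage into views per 10,000 content views.

    A prevalence of 0.07% means 0.07 out of every 100 views, i.e. 7 out of
    every 10,000 views, were views of violating content.
    """
    return prevalence_percent / 100 * 10_000

# Q4 2020 hate speech prevalence of 0.07-0.08% translates to roughly
# 7 to 8 views of hate speech per 10,000 content views.
print(views_per_10k(0.07))
print(views_per_10k(0.08))
```

The metric measures how often violating content is *seen*, not how many pieces exist, which is why Facebook favors it as a headline number.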

Further elaborating on its anti-hate-speech protocols, Facebook goes on to discuss improvement in areas that had earlier proved troublesome. Proactive detection of bullying and harassment rose by 23 percentage points (from 26% to 49%) between Q3 and Q4 on the Facebook platform, while Instagram showed a slightly larger gain of 25 points (from 55% to 80%). The tech conglomerate credits advancements in AI and machine learning algorithms as the driving force behind these improvements across both social media hubs.
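It is worth distinguishing the percentage-point gain from the relative improvement, since the two are easily conflated. A quick sketch using the figures above (the helper names are ours, purely for illustration):

```python
def point_gain(old_rate: float, new_rate: float) -> float:
    """Absolute change in a rate, in percentage points."""
    return new_rate - old_rate

def relative_gain(old_rate: float, new_rate: float) -> float:
    """Relative change in a rate, as a percentage of the old rate."""
    return (new_rate - old_rate) / old_rate * 100

# Facebook's proactive bullying/harassment detection: 26% -> 49%.
print(point_gain(26, 49))             # 23 percentage points
print(round(relative_gain(26, 49)))   # an ~88% relative jump
```

Framed relatively, Facebook's 23-point gain from a low base is actually the more dramatic improvement, nearly doubling its proactive detection rate, whereas Instagram's 25-point gain from 55% is a smaller relative step.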

A particular source of trouble in 2020 for all online communities was the COVID-19 pandemic; more specifically, the grim balancing act of policing hate speech while protecting one's employees in the face of economic downturn. Facebook actively calls the pandemic out as an impediment to its online policing goals, attempting to manage user expectations by opening up about the struggles such an unexpected situation brought about for everyone, especially since the pandemic itself brought a new slew of online hate speech and misinformation.

Overall, Facebook's team took action on 6.3 million pieces of bullying and harassment content, up from Q3's 3.5 million. It further cracked down on 6.4 million pieces of organized hate content on the platform, up from 3 million. Expanding the AI's training to Spanish, Portuguese, and Arabic led to action on 26.9 million pieces of hate speech content, up from 22.1 million in Q3. Finally, potentially triggering content concerning self-harm and suicide saw action on 2.5 million pieces, increasing from the previous quarter's 1.3 million.

But we've been talking about Facebook's AI for quite a while now. Considering how central it is to hate speech detection, let's take a closer look at its development, shall we? A Newsroom post by Facebook's CTO, Mike Schroepfer, makes much of the strides being taken in developing the machine learning technology. He especially lauds the teams that have added more contextual recognition of hate speech, since many policy infractions go undetected because examples vary so much between posts. Schroepfer cites Facebook's own aforementioned data as proof that these developments have properly taken root on the social media platform. He concludes by noting that the AI needs to further incorporate cultural differences as well, since many instances of purported hate speech, e.g. cursing and name calling, are not interpreted the same way across borders.

This author, for one, regards Facebook's brandishing of its self-provided data with grave mistrust. Its AI, while certainly better than the average fare, is still highly suspect, prone to flagging the innocent along with the guilty, and has ultimately proven ineffective where it mattered most. Despite these improvements, multiple news outlets have reported on how Facebook was implicated in the US Capitol riots, with neo-Nazis, white supremacists, and far-right insurrectionists actively planning an attack on the heart of American democracy on the platform. That is a horrifying and glaring set of circumstances the AI ultimately missed, whether through an inability to recognize such outlandish planning or through developer incompetence.

The COVID-19 pandemic itself marks a source of failure for the social network, where anti-vaccination rhetoric and hate speech remain prevalent despite all the blocking and flagging the numbers depict. Then again, the company itself is partly responsible for tripping up its own technology, actively refusing to remove notorious anti-vaxxer Robert F. Kennedy Jr. from the platform, and taking days before choosing to only temporarily ban Donald Trump as well. There's only so much machine learning can do when its developers are so lax and indifferent to active change.

Read next: Facebook’s Broken Content Moderation Systems Repeatedly Causing Widespread Bans of Harmless Content