Meta’s Automated Moderation Raises Serious Concerns With Its Oversight Board After Controversial Instagram Post Is Left Up

Meta’s reliance on automated moderation to handle a wide array of controversial decisions across its apps is being questioned by its own Oversight Board.

The news comes after the tech giant was criticized for leaving a Holocaust denial post up on Instagram, a decision that many found shocking, including the board itself. Under the firm’s own policies, such denial is classified as hate speech.

The post in question showed a SpongeBob SquarePants character presenting supposed ‘true facts’ about the Holocaust, and many were appalled that it remained online despite being false and misrepresenting the historical record.

Many users reported that the post has kept resurfacing over the past several years, and the fact that it has been circulating since 2020 with Meta taking no action has raised questions.

Meta’s systems repeatedly determined that the post did not violate company rules, and the reports against it were closed through automated means.

In May of last year, a user appealed Meta’s decision to leave the offensive post on the app, questioning why Instagram was allowing it. That appeal was also closed automatically, this time under the COVID-19 automation policies Meta had in place, and that is when the case was taken to Meta’s Oversight Board.

In its assessment of Holocaust denial content across Meta’s apps, the board found that the well-known Squidward meme format was repeatedly being used to spread antisemitic narratives. It also noted that some users were deliberately working to avoid detection so they could keep spreading denial content.

They did so through tactics such as using alternate spellings of certain terms and wrapping the denial in cartoons and jokes.

The board added that it is very concerned about Meta’s handling of the matter, particularly that the company was still justifying the use of these automation policies as recently as May of last year, and it hopes Meta will bring them to an end.

In the board’s view, this automation is no longer effective or useful and should be retired for that reason. It also flagged that human reviewers cannot label offending content specifically as Holocaust denial; such posts are instead filtered into the broader bucket of hate speech.

The board asked for more data on this front, noting that more granular labeling would help keep hate speech enforcement a top priority, especially as Meta relies heavily on AI technology to moderate content.

The Oversight Board now wants Meta to take the technical steps needed to measure how accurately it enforces its rules against Holocaust denial content, which entails collecting more granular data.

The board also asked Meta to confirm transparently whether it has ended all of the automation policies introduced at the start of the pandemic, and it issued recommendations on the technical steps the company should consider to make sure enforcement against Holocaust denial can be measured accurately.

That, the board added, means gathering far more granular information, something it argues is in Meta’s own best interests.

When approached for comment, Meta told Engadget it had issued a formal response through its transparency reporting. The company says the offending content has now been removed and acknowledged that leaving it up appears to have been an error.

At the same time, it vowed to get to the bottom of what went wrong and said it would now carry out a comprehensive review, taking other content with parallel context into consideration.

If the company finds that more stringent action is needed, it says it will take it immediately. For now, it intends to review the matter in detail and issue further updates.

Controversy surrounds Meta as the Oversight Board criticizes the automated handling of Holocaust denial posts on Instagram.
Photo: Digital Information World - AIgen/HumanEdited
