Meta Releases ‘Community Standards Enforcement’ Report That Highlights Its Efforts to Combat Abuse and Misuse Across Its Apps

With the first quarter of this year over and done with, Meta has put out its latest Community Standards Enforcement Report of 2022.

The new report details the practices the company employs to combat abuse and misuse across its applications. It also aims to address critics’ concerns about how the company actually detects and removes such content.

Thankfully, the efforts appear to be working in Meta’s favor, as the company’s latest stats reveal. When it comes to bullying content, for instance, the company says its proactive detection rate has risen from 58% to nearly 67%, a gain it credits to enhanced detection technology.

The company calls this a major achievement, and we agree: popular apps like Instagram can negatively impact the younger generation, and in-app abuse or negative feedback from friends can cause serious mental harm.

Let’s not forget the surge in both self-harm and suicide-related incidents across Instagram over the last two quarters. Improved detection technology could therefore work in many people’s favor.

Moving on, Meta’s report notes a rise in spam content removals, which climbed from 1.2 billion to nearly 1.8 billion this year. Much of this spam came from a small number of users who continuously produced large volumes of posts that violated the company’s policies.

Likewise, the tech giant is taking more action against derogatory posts, including extremist content and posts designed to do nothing but spread hate. Those figures rose from the previous quarter across the company’s leading apps, Facebook and Instagram.

In addition, Meta confirms that views of terrorism-related content are extremely rare. It attributes this to strict moderation, under which such links and content are removed early enough that most users never see them.

To be more specific, the report highlighted that, on average, only about 5 out of every 10,000 content views involved terrorism-related material or other content that violated the company’s policy.

The report also outlined the removal of a massive 1.6 billion bot and spam accounts in Q1 of this year, and Meta continues to fight fake profiles at a steady rate. The company estimates that fake accounts made up no more than 5% of its users for the quarter.

Interestingly, that figure aligns with the fake-account estimate given by Twitter. Twitter, of course, is a trending topic amid Elon Musk’s current $44 billion acquisition deal, but we’ll save that for later.

For many, it wouldn’t be wrong to say that a 5% fake-account rate may soon prove to be the industry norm, as experts believe it isn’t possible to determine what the true figures are.

Following in Twitter’s footsteps, Meta hopes to carry out more sampling to get as accurate a picture as possible of bot and fake-account rates.

Whatever the case may be, Meta’s figures are not in question: a recent audit conducted by EY confirmed that they were fairly and accurately stated.

On top of that, Meta is on a roll, having gone the extra mile to get its metrics verified and its data-tracking methods independently evaluated. That should keep critics happy, for the time being at least.

