New Report Highlights Dark Side Of Twitter And Why It Fails At Moderating Self-Harm Content

Twitter is best known as a social media platform where users engage with each other through tweets, but new research is highlighting a darker side of the app.

While the company’s terms of service explicitly ban posts that glorify self-harm, researchers have found that the platform does little to enforce them.

The researchers claim that Twitter may be quick to announce bans on such content but tends to look the other way when it comes to actually moderating it.

Researchers from the Network Contagion Research Institute (NCRI) estimate that hundreds of thousands of users continually violate these terms while Twitter pays little heed. As evidence of the company’s negligence, they show that hashtags related to self-harm have been growing prolifically since last October.

Notably, Twitter was made aware of the problem, and of how poorly it was moderating such content, last October. That is when a UK-based charity alerted a regulator to issues with the app’s algorithm and its recommendation system.

Research by 5Rights showed that when test accounts with child-aged avatars searched for terms like ‘self-harm’, the app’s algorithm steered them toward accounts sharing images and video clips of people cutting themselves.

Twitter responded at the time, telling the Financial Times that promoting, encouraging, or glorifying suicide or self-harm was firmly against company policy, and insisting that keeping users safe and free from violence of all kinds was a top priority.

The company even vowed to take strict action against anyone involved in violence or its glorification of any type.

But a growing body of evidence from the past few months shows that Twitter has done little to nothing to combat the issue, and that even when it did try, its efforts fell short.

Moreover, the NCRI report found that even accounts with small followings were able to get away with promoting such content. Researchers also found that the number of users searching for these explicit terms in hashtags has doubled since last fall.

Mentions of these terms have also risen by nearly 500% on the app, despite Twitter having been alerted to the matter long ago.

To put this in perspective: last October there were about 3,000 such posts, and by July of this year the figure had climbed to nearly 30,000, roughly a tenfold increase. Twitter’s only answer remains that it is trying to combat what it calls a serious concern.

So why is there so much neglect, and what can the app do to improve its moderation?

NCRI researchers point to several reasons why the platform fails to take proper moderation measures.

For starters, users are intentionally evasive: they communicate in coded terms that Twitter’s systems may not recognize, and some claim the blood in their pictures is fake, which can prevent content removal. Twitter also appears to focus its moderation efforts on political content, which upsets communities more broadly and gets reported far more often, making it hard to overlook, while self-harm communities slip under the radar.

The issue is a serious one, and researchers warn that if it continues unchecked, it could fuel serious disorders and, ultimately, severe or even fatal injuries.

