New Study Says Misinformation Can Be Tackled If Users Are Given Control Of Moderation

The misinformation crisis is one that many digital platforms have to deal with on a daily basis. It has been troubling the tech world for years, and it is what prompted a new study from MIT.

The institution built an experimental platform that put users in the driving seat for content moderation. That is very different from how most social media platforms operate, where users sit in the passenger seat while algorithms or human fact-checkers take center stage and flag false or misleading content.

With that power in users' hands, the researchers observed how people actually used it to avoid misinformation on the app. Participants assessed the accuracy of posts themselves, and the posts that appeared in feeds were then filtered based on those assessments.
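To make that mechanism concrete, here is a minimal Python sketch of how crowdsourced accuracy assessments could drive feed filtering. It is not the study's actual platform; the Post structure, the 0-to-1 rating scale, and the 0.5 threshold are all hypothetical illustrations.

```python
# Minimal sketch (not the study's real code): filter a feed using
# crowdsourced accuracy assessments. Names and thresholds are hypothetical.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Post:
    post_id: int
    text: str
    assessments: list[float] = field(default_factory=list)  # 0.0 = false, 1.0 = accurate

def record_assessment(post: Post, score: float) -> None:
    """Store one user's accuracy rating for a post."""
    post.assessments.append(score)

def filtered_feed(posts: list[Post], min_accuracy: float = 0.5) -> list[Post]:
    """Keep posts whose average crowd rating meets the threshold.
    Posts with no assessments yet are shown by default."""
    return [
        post for post in posts
        if not post.assessments or mean(post.assessments) >= min_accuracy
    ]

# Example: one post rated mostly accurate, one rated mostly false.
feed = [Post(1, "Verified report"), Post(2, "Dubious claim")]
for s in (0.9, 0.8):
    record_assessment(feed[0], s)
for s in (0.1, 0.2):
    record_assessment(feed[1], s)
print([p.post_id for p in filtered_feed(feed)])  # -> [1]
```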

The study found that this kind of user-led assessment can be an effective way to flag posts that misinform the masses.

Interestingly, many users valued the chance to assess posts themselves and to see the assessment process laid out in a structured way. Participants also put the content filters to use in different ways.

While some used the filters to block misinformation, others used them to seek out articles.

This points to a more decentralized approach to moderation that could lead to more reliable content online. It also challenges the assumption, made in earlier research, that users cannot tell accurate content from false content.

In fact, participants did a more than decent job of scrutinizing content and helping one another.

Platforms do not currently support these kinds of efforts, and the researchers behind the MIT study argue that should change: the time has come to put users in charge.

Online misinformation is without doubt a worldwide problem, but the current methods for removing misinforming material have major downsides. Platforms rely on algorithms to sift through posts, and that approach creates plenty of tension on its own.

The core issue is that people often react to misinforming content without realizing what they are doing, and the apps count those reactions as engagement. The posts are then shown even more widely, creating an even bigger problem than before.
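As a rough illustration of that feedback loop, the sketch below (with hypothetical weights and post data, not any platform's real ranking code) shows how a purely engagement-based ranking surfaces a false post simply because people reacted to it.

```python
# Hypothetical sketch of the feedback loop described above: every reaction,
# including outraged replies to a false post, counts as engagement, so an
# engagement-ranked feed pushes that post higher.
posts = [
    {"id": 1, "text": "Accurate news item", "reactions": 40, "shares": 5},
    {"id": 2, "text": "False but outrage-inducing claim", "reactions": 300, "shares": 60},
]

def engagement_score(post: dict) -> int:
    # The ranking has no notion of accuracy; it only counts interactions.
    return post["reactions"] + 2 * post["shares"]

ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # -> [2, 1]: the false post rises to the top
```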

A total of 192 people took part in the study's survey, recruited through Facebook and a mailing list, to gauge whether users would actually value such features.

The results surprised many, including the researchers leading the study. Users turned out to be highly aware of the content around them, and they tracked and reported it as well, a clear sign that users can take the driving seat in helping to curb this issue.

