Study Shows Fact-Checking Can Influence Social Media Recommendation Algorithms

Research conducted at Cornell University reveals that the more people are encouraged to engage actively, and in particular critically, with the news they encounter, the more likely they are to significantly cut back on circulating misinformation.

The researchers pointed to the example of a rumored terrorism incident at a Spanish supermarket in 2017. Reddit users who came across the news were unaware that the stories were completely fabricated, sourced from various tabloids and boosted by the platform's recommender algorithms.

J. Nathan Matias, an assistant professor of communication, ran a study on a subreddit with roughly fourteen million subscribers. He found that prompting users to fact-check could move the needle on the algorithm: fabricated stories ranked lower on Reddit after suspicious users fact-checked them.

In the study, published in Scientific Reports, Matias said the key takeaway was that individuals need not see themselves as chained to algorithms and technology platforms.

For the experiment with users of the worldnews subreddit, the researchers programmed software focused on platforms and webpages notorious for posting tampered news. The program monitored each time a subreddit user posted a hyperlink to such a site for discussion.

The program then assigned each discussion to one of three conditions.

In the first, users were repeatedly sent a notification prompting readers to verify the claims made in the stories and to comment with links to debunking evidence.

In the second, Redditors were sent messages encouraging them to down-vote such articles.

The third was a control group that received no prompt.
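The assignment step described above resembles a standard randomized experiment. The sketch below is a hypothetical illustration of how such a three-arm assignment might be coded; the condition names and function names are assumptions for illustration, not the study's actual software.

```python
import random

# Hypothetical labels for the three experimental arms described in the article.
CONDITIONS = ["fact_check_prompt", "downvote_prompt", "control"]


def assign_condition(rng: random.Random) -> str:
    """Randomly place a newly detected tabloid-link discussion into one arm."""
    return rng.choice(CONDITIONS)


def run_assignment(n_discussions: int, seed: int = 0) -> dict:
    """Assign n discussions to conditions and tally the counts per arm."""
    rng = random.Random(seed)  # seeded for reproducibility
    counts = {c: 0 for c in CONDITIONS}
    for _ in range(n_discussions):
        counts[assign_condition(rng)] += 1
    return counts
```

With roughly a thousand discussions, each arm ends up with a comparable share, which is what makes the later ranking comparison between conditions meaningful.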

Matias initially expected the fact-checking condition to flop: the more Redditors interacted with the fabricated articles, the more likely the algorithm would treat that engagement as positive reinforcement and rank the stories above average.

Matias also expressed unease that persistently spreading misinformation could become permanently entrenched in people's minds. He further questioned whether popularity algorithms can discern what's accurate from what's not, even after stories are checked for authenticity, since they may prioritize engagement over accuracy when presenting content to a wider audience.

However, the results proved otherwise.

After collecting 1,104 story discussions from December 2016 to February 2017, he observed that simply encouraging participants to fact-check made a story drop an average of 25 places in the rankings, even without the down-voting prompt. That pushed the articles further down the Reddit home page, greatly reducing the chances of readers coming across them.

Although the experiment used a small group of participants, Matias viewed the research as evidence that readers have real influence over the news they consume. He further stressed the sense of power people have when it comes to refusing false information.

Matias was optimistic that people have the power to make news algorithms react appropriately and accordingly, but only if they unite to push for such change. He also called the experiment's results a powerful example, one that could contribute important data for further scientific work.
