Research Sheds Light on Regrettable Video Recommendations by YouTube

Despite being one of the most widely used video-sharing platforms, YouTube has been found to have serious problems with its recommendation algorithm and the suggestions it serves.

The problem was brought to light by a crowdsourced study from the Mozilla Foundation, which examined how often YouTube recommends videos that users regret watching. The study was built around Mozilla's RegretsReporter, a browser extension that lets users flag and report YouTube videos they deem regrettable.

The most striking finding was that the recommendation algorithm regularly suggested videos that breached YouTube's own content policies, videos that users found not just disturbing but offensive and regrettable.

Even though YouTube has regularly made adjustments to its algorithm, the results have barely changed, and the statistics published by the Mozilla Foundation undermine YouTube's claim to be the best platform for sharing and viewing video.

The study found that nearly four thousand videos were flagged as regrettable by users in more than ninety countries. More troubling still, non-English speakers reported about 60% more regrettable content than English speakers did, a finding that sits uneasily with the diverse, global algorithm YouTube prides itself on.

Brandi Geurkink, Mozilla's Senior Manager of Advocacy, said in a statement that the algorithm causes real harm and has fanned the flames of misinformation. She went on to say that these findings are just the tip of the iceberg, a small sample of the ocean of misinformation and regrettable videos that YouTube suggests. She hopes the results will push the public and lawmakers to demand transparency around YouTube's recommendation AI.

Although these videos perform well and garner more views than other videos of a similar sort, they are often unrelated to what the viewer was watching and frequently violate YouTube's own policies, Geurkink noted at the close of her statement.

On July 7, 2021, a YouTube spokesperson responded to the allegations in a statement to TNW, saying that the platform draws on some 80 billion pieces of feedback to recommend around 200 million videos a day on the homepage alone, and that only about 1% of the videos it recommends qualify as borderline content, figures he argued undercut the study's claims.

YouTube signaled its eagerness to keep improving the algorithm through its openness to external research like Mozilla's, but the question remains: how well can AI predict what viewers actually want, and is there a limit to what YouTube's AI-driven suggestions can achieve?

