Facebook’s Toolkit to Combat Algorithm Bias Is Not Very Effective, According to Experts

Over the past half decade or so, one of Facebook’s primary goals has been to fight the various controversies that have made the public mistrust the platform as a whole. From the use of Facebook data in the Cambridge Analytica scandal to widespread allegations that Facebook’s algorithms are biased against people of color and other disadvantaged minorities, the company has had a great deal of cleanup work to do with regard to its image. The unfortunate fact is that the social media platform has failed to make the kinds of changes that people would ideally have wanted to see.

This problem extends to Facebook’s subsidiaries as well, with one particular controversy involving Instagram accounts belonging to Black people being 50% more likely to be disabled than accounts belonging to white people. In an attempt to mitigate the fallout from such controversies, Facebook established a Responsible AI (RAI) team. One of this team’s main projects is a toolkit that Facebook calls Fairness Flow, which attempts to measure the impact that the platform’s algorithms have on different groups of people.
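Facebook has not published Fairness Flow’s internals, but tools of this kind typically work by comparing a model’s error rates across demographic groups. The Python sketch below is a minimal, hypothetical illustration of that idea; the group labels, toy data, and tolerance threshold are invented for the example, and this is not Facebook’s actual implementation:

```python
# Hypothetical sketch of the kind of check a fairness toolkit performs:
# compare a classifier's false positive rate across demographic groups.
# The data, group names, and 10% tolerance are invented for illustration.

from collections import defaultdict

def false_positive_rates(records):
    """Compute the false positive rate (FPR) per group.

    Each record is (group, predicted_positive, actually_positive).
    FPR = false positives / all actual negatives, within each group.
    """
    fp = defaultdict(int)         # predicted positive but actually negative
    negatives = defaultdict(int)  # all actual negatives
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

def flag_disparities(rates, tolerance=0.10):
    """Flag groups whose FPR deviates from the mean FPR by more than `tolerance`."""
    mean_rate = sum(rates.values()) / len(rates)
    return {g: r for g, r in rates.items() if abs(r - mean_rate) > tolerance}

if __name__ == "__main__":
    # Toy data: (group, model said "disable account", account actually broke rules)
    records = [
        ("group_a", True, False), ("group_a", False, False),
        ("group_a", True, True),  ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False),
        ("group_b", True, True),  ("group_b", False, False),
    ]
    rates = false_positive_rates(records)
    print("FPR per group:", rates)           # group_a ~0.33, group_b ~0.67
    print("Flagged disparities:", flag_disparities(rates))
```

A check like this can only surface a disparity in the numbers; as the experts quoted below point out, it does nothing by itself to explain or correct the bias.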

Facebook suggests that its attempts are well intentioned, but there is a clear reason for the company to take matters into its own hands. The alternative would be increased regulation of the platform, which could seriously hamper its future growth, and Facebook wants regulators to feel that enough is happening internally to stave off any rules they might want to put in place.

Since Facebook has an incentive to keep oversight in house without really doing much about the problem, it should come as no surprise that many experts consider the Fairness Flow toolkit largely ineffective. According to experts from MIT, Queen Mary University of London, and a variety of other reputable institutions, the toolkit essentially produces statistical measurements that confirm what people already know.

Facebook needs to undertake widespread structural changes if it actually wants to fix the problem at hand. Trying to steer public policy in its own favor is simply not sustainable, and this toolkit seems to be just another of Facebook’s surface-level solutions that do little to repair the damage the company has caused.

For one thing, Facebook’s engineers are not required to use Fairness Flow in any way, shape, or form. It is an optional tool, and even if it reveals biases, the company’s engineers are under no obligation to act on that information and rectify them.

Facebook is quickly heading toward a situation where people stop trusting it entirely, which could result in a massive shakeup of the social media landscape as other companies step up and take a more commanding role. It will be interesting to see where things go from here as more experts weigh in on Facebook’s attempts and find them lacking.


Photo: Jeff Chiu/AP

H/T: VB.
