Google to Introduce Two New Tools to End Bias in Artificial Intelligence

Artificial Intelligence has recently come into focus because of flaws in how its systems are built. Human input is required to train machines, and the training resources, whether actual people or stand-in dummy data, shape what the resulting system learns.

Major technology companies often rely disproportionately on white males to train AI, which leads to biases in the resulting algorithms. At I/O 2019, AI monolith Google showcased its efforts to eliminate these biases.

After years of work, the company introduced one of its methods, known as Testing with Concept Activation Vectors (TCAV). Google CEO Sundar Pichai gave a brief introduction to it at the annual developer conference.



With this analysis tool, developers can see how an AI interprets samples, how its decisions can be evaluated, and which notions the AI associates with those samples. A neural network studies a picture by analyzing groups of pixels; TCAV then probes deeper to reveal which human-understandable concepts the network is leaning on most heavily.

TCAV can only analyze and detect issues; developers still have to fix the problems in their AI themselves rather than depend on TCAV to correct them on their behalf. Used properly, though, it is an important step forward.
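To make the idea concrete, here is a minimal sketch of the core TCAV technique, not Google's released tcav library: a linear classifier is trained to separate a concept's examples from random examples in a layer's activation space, its normal vector becomes the concept activation vector, and the score is the fraction of inputs whose prediction is positively sensitive to that direction. The inputs `concept_acts`, `random_acts`, and `layer_grads` are hypothetical arrays that would come from the model being inspected.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Train a linear classifier that separates concept examples from random
    examples in a layer's activation space; its normal is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(layer_grads, cav):
    """layer_grads: gradients of the class prediction w.r.t. the layer's
    activations, one row per input. The score is the fraction of inputs
    whose sensitivity along the concept direction is positive."""
    sensitivities = layer_grads @ cav
    return float((sensitivities > 0).mean())
```

A score near 1.0 would suggest the network's prediction for that class is consistently pushed up by the concept (say, "striped" for "zebra"), while a score near 0.5 suggests the concept plays little role.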


Another development aimed at decreasing bias is federated learning. It is applied in Gboard, which can suggest jargon and uncommon words to other users once some users start typing them.

Google ships a master word-database model to phones, and whenever an uncommon word is typed in Gboard, it is added to that device's local training data. The whole sentence never leaves the phone; only the resulting model update is sent back to Google, not the text itself.
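As a rough illustration of the principle rather than Gboard's actual pipeline, the sketch below uses a toy linear model to show federated averaging: each device computes a weight update from its own data, and only those updates, never the data, are aggregated on the server. The functions and data here are illustrative assumptions, not Google's implementation.

```python
import numpy as np

def local_update(global_weights, local_x, local_y, lr=0.1):
    """Hypothetical on-device step: one gradient step of linear regression on
    the user's own data. Only the weight delta leaves the phone."""
    preds = local_x @ global_weights
    grad = local_x.T @ (preds - local_y) / len(local_y)
    return -lr * grad  # a weight delta, not the raw typing data

def federated_average(global_weights, client_deltas):
    """Server step: combine the deltas from many devices into one update."""
    return global_weights + np.mean(client_deltas, axis=0)

# One simulated round with two devices holding their own (x, y) data:
w = np.zeros(3)
dev_a = local_update(w, np.random.randn(5, 3), np.random.randn(5))
dev_b = local_update(w, np.random.randn(5, 3), np.random.randn(5))
w = federated_average(w, [dev_a, dev_b])
```

The design point is privacy by construction: the server only ever sees aggregated numerical updates, so the model can still learn new vocabulary without individual sentences being uploaded.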

Gathering photo negatives from users in a similar way can also be useful for developers who need mass data for sampling.

Much more effort is still needed to overcome potential biases in AI, as it is coming into more and more common use, including by police departments and other government authorities.
