Researchers find that curators introduce bias when building the datasets used to train AI models for generative art

Art is a complex craft of creation, drawing on many different media, tools, and forms of expression. It can take an artist years to fully evolve as a 'creator,' a term we now use very casually, largely because of the many applications in today's digitalized world that let anyone produce art with the help of AI-based algorithms. Now, researchers from Fujitsu have published a new study examining the role of the curators who assemble the datasets used to train the AI models behind these generative-art applications.

For their research, they surveyed academic papers, applications that help users create generative art with AI, and various online platforms. They selected models for their study on the basis of well-known art movements: the Renaissance, Cubism, Futurism, Impressionism, Expressionism, Post-Impressionism, and Romanticism.

They drew examples from varied genres, including landscapes, war scenes, paintings, illustrations, and portraits, and studied the work of artists such as Clementine Hunter, Mary Cassatt, Van Gogh, Gustave Doré, and Gino Severini, all of whom are known for introducing influential new patterns into art.

When they compared real artworks with versions 'reproduced' by Artificial Intelligence in applications and platforms such as DeepArt and GoArt, they found large discrepancies. For example, a Cubist artwork might be rendered in the style of Futurism, or a piece painted in a realist style by the original artist might come out looking like Expressionism.

So, the researchers looked into the cause of these mix-ups and imbalances, and it turned out that one important factor to blame is bias, in several different forms.

First off, the researchers deduced that the datasets used to train generative-art models depend heavily on the curators' personal choices, likes, and dislikes.

Secondly, some apps were found to have been trained on around 45,000 Renaissance portraits depicting only white-skinned people. As a result, the apps' AI automatically generates art featuring only white-toned people, and that is the root cause of the skin-tone bias.

There were also many discrepancies in the labeling process: the annotation of the datasets was found to be quite faulty, and the annotators' cultural background, beliefs, and other preferences were reflected in the labels they created. All of that inadvertently shaped the art these models generate for lay users, who would never even know how far it strays from the originals.
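The kind of dataset skew the researchers describe is straightforward to surface with a simple label count. The sketch below is purely illustrative, using made-up label names and counts (the 45,000 figure from the study, plus a hypothetical minority class) rather than any real dataset:

```python
from collections import Counter

# Hypothetical label list for a curated portrait dataset; in a real audit
# these would be read from the dataset's annotation files.
labels = (
    ["renaissance_portrait_light_skin"] * 45000
    + ["renaissance_portrait_dark_skin"] * 500
)

counts = Counter(labels)
total = sum(counts.values())

# Print each label's share of the dataset; a distribution this lopsided
# is the sort of imbalance that produces the skin-tone bias described above.
for label, n in counts.most_common():
    print(f"{label}: {n} ({n / total:.1%})")
```

A routine check like this, run before training, would flag the imbalance long before it showed up as biased output from the model.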
