Researchers have found that the use of emojis can decrease the chances of abusive posts being tracked down

The internet has grown enormously over the past few decades, largely because of the convenience it brings to our lives. It has put almost everything just a click away, reducing effort and making everyday tasks faster. In recent years we have become heavily dependent on it, and living without it now seems difficult. But while the internet and its related products have made life easier, they also come with drawbacks, and among the most serious are cyberbullying and abuse.

A great many internet users, including minorities, religious groups, women and children, have faced abuse online. Social media platforms are among the most common venues for abusive posts and messages aimed at all sorts of people. The networks understand this well, which is why most of them are working to minimize abuse on their platforms, and they have managed to limit it to some extent. Completely stopping abuse at such a scale, however, is close to impossible.

Recently, a report from the Oxford Internet Institute found that abusive online posts are becoming less likely to be identified by artificial intelligence when they contain emojis. According to the report, most algorithms used to track down hateful posts work well on text-only content but struggle badly with posts that include emojis. As a result, many harmful, abusive comments paired with emojis have remained up on platforms. One example came after the Euro 2020 final, when England players received racist remarks for losing; the algorithms failed to detect the abuse because almost all of the messages contained a monkey emoji.

A large reason these systems fail to catch such abuse is that they are trained on databases that consist mostly of plain text and rarely include emojis. When the algorithms then come across an abusive post containing emojis, they tend to classify it as acceptable.
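To see why, consider a minimal, hypothetical sketch in Python using scikit-learn. The training sentences, labels and pipeline below are invented for illustration and are far simpler than any real moderation system, but they show the failure mode the report describes: a classifier trained only on plain text has no feature for the emoji, so it judges the post by its harmless surrounding words.

```python
# Toy abuse classifier trained on text-only examples (all data invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you are a wonderful player",     # acceptable
    "you are so talented",            # acceptable
    "you are a disgusting ape",       # abusive
    "nobody wants you here, monkey",  # abusive
]
train_labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = abusive

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The default word tokenizer drops the emoji entirely, so only the
# benign words "you" and "are" reach the model.
print(model.predict(["you are a 🐒"]))  # -> [0], scored as acceptable
```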

A recent analysis also found that Instagram accounts posting racist content alongside emojis were roughly three times less likely to be banned than accounts posting racist content without them.

To tackle the problem, researchers at Oxford built a new database of around 4,000 sentences containing different emojis and used it to train an artificial-intelligence model. The model was then tested against various kinds of hateful comments, including those targeting minorities, religious groups and the LGBTQ community.
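As a rough illustration of what such retraining changes, the hypothetical sketch below extends the toy pipeline from earlier: the training data now includes emoji-bearing examples, and the tokenizer is told to keep emojis as tokens. The researchers' actual models are far more sophisticated, but the principle is the same: a classifier can only learn from symbols it actually sees during training.

```python
# Toy retraining on emoji-aware data (all examples invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you are a wonderful player",   # acceptable
    "great goal tonight 👏",        # acceptable
    "you are a disgusting ape",     # abusive
    "get out of this country 🐒",   # abusive: emoji as slur substitute
]
train_labels = [0, 0, 1, 1]

# \w\w+ keeps ordinary words; the added range keeps characters from
# the main Unicode emoji blocks as tokens instead of discarding them.
vectorizer = TfidfVectorizer(token_pattern=r"\w\w+|[\U0001F000-\U0001FAFF]")
model = make_pipeline(vectorizer, LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["you are a 🐒"]))  # -> [1], now flagged as abusive
```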

When Google's Perspective API was tested against the researchers' database, it proved only about 14 percent effective at catching the abuse. The researchers' own model improved on that detection rate by roughly 30 to 80 percent.
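For developers who want to run that kind of test themselves, the sketch below scores a post with Perspective API. The endpoint and request shape follow Google's publicly documented comments:analyze method; the API key is a placeholder, and the example comment is invented.

```python
# Score a comment's toxicity with Google's Perspective API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: use your own Perspective API key
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

payload = {
    "comment": {"text": "you are a 🐒"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload)
response.raise_for_status()
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.2f}")  # a low score means the emoji slipped past
```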

After publishing their report, the researchers shared their database online for other developers and companies to use.

