Microsoft's LinkedIn Is Curbing Inappropriate Profiles and Content on the Platform via AI Technology

As technology and social media become more accessible to everyone, problems become inevitable. Research suggests social media carries more abusive and damaging content than other kinds of sites, which is why giants like Facebook, Instagram, Twitter, and Pinterest use AI-based models and machine learning to detect and remove such content. Following in their footsteps, LinkedIn has just highlighted how its AI-based models detect and filter objectionable content that does not meet the platform's standards. Owned by Microsoft, LinkedIn has over 660 million users, a scale at which it must contend with spam, inappropriate content, and bullying. To address this, the social network relies on AI-based models to detect not only inappropriate profiles and spam but also content promoting illegal services, advertising schemes, and scams.

Previously, to filter out inappropriate content, LinkedIn relied on a blocklist: words and phrases that violated its terms of service and community guidelines were added to the list, and matching content or accounts were flagged for removal. The main issue was that maintaining this blocklist was quite difficult, as it required a lot of manual effort, and a poorly maintained list could cause real damage later. Some words also have several different meanings, and restricting one of them can make it hard for users to communicate legitimately. In short, the key is a filtering process that understands context first and only then flags a possible issue, as the sketch below illustrates.
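
LinkedIn has not shared its filter code, but a minimal Python sketch of this kind of keyword blocklist, with hypothetical entries and a hypothetical helper name, shows both how the approach works and why it struggles with ambiguous words:

```python
# Minimal sketch of a blocklist filter of the kind described above.
# The entries and the helper name are illustrative assumptions, not
# LinkedIn's actual list or implementation.
import re

BLOCKLIST = {"free followers", "escort"}  # hypothetical entries

def violates_blocklist(text: str) -> bool:
    """Return True if the text contains any blocklisted phrase."""
    lowered = text.lower()
    return any(
        re.search(rf"\b{re.escape(phrase)}\b", lowered) is not None
        for phrase in BLOCKLIST
    )

print(violates_blocklist("Get FREE FOLLOWERS now!"))         # True: spam
print(violates_blocklist("Security escort for executives"))  # True, but benign:
# the filter matches the word without understanding its context, which is
# exactly the limitation a context-aware model is meant to address.
```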

To solve this issue, LinkedIn has now adopted a machine learning approach built around a Convolutional Neural Network (CNN), a class of model best known for image analysis but also well suited to text classification. The model takes the type of content on an account into consideration and labels profiles as appropriate or inappropriate. The filter was trained by spotting inappropriate content already present in LinkedIn's database, labeling it, and feeding it to the AI model so it could learn what inappropriate content looks like. To reduce bias, the team also identified problematic words drawn from inappropriate accounts and fed those to the filter to ensure it can detect problems right away. A sketch of this kind of classifier follows.
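
LinkedIn has not published its model architecture or training data; the following PyTorch sketch of a text CNN classifier, with hypothetical hyperparameters and toy inputs, illustrates the general technique of sliding convolutional filters over word embeddings to classify text as appropriate or inappropriate.

```python
# Minimal sketch of a text CNN content classifier like the one described
# above. Architecture, hyperparameters, and data are illustrative
# assumptions, not LinkedIn's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_filters=64,
                 kernel_sizes=(2, 3, 4), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One 1-D convolution per kernel size: each filter slides over
        # word positions, so the model learns phrases in context rather
        # than reacting to isolated blocklisted words.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        features = []
        for conv in self.convs:
            h = F.relu(conv(x))                    # (batch, filters, seq_len - k + 1)
            features.append(h.max(dim=2).values)   # global max pool per filter
        # Concatenated phrase features -> "appropriate" vs. "inappropriate" logits
        return self.fc(torch.cat(features, dim=1))

model = TextCNN(vocab_size=10_000)
batch = torch.randint(1, 10_000, (8, 50))  # 8 toy profiles, 50 tokens each
print(model(batch).shape)                  # torch.Size([8, 2])
```

Under these assumptions, such a classifier would be trained with an ordinary cross-entropy loss on the labeled examples described above; the key design choice is global max pooling, which reduces profile text of any length to a fixed-size vector of its strongest phrase signals before classification.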

Photo: JasonDoiy / Getty Images
