LinkedIn has been on a ‘cleaning spree’, and for good reason

Recently, LinkedIn released its Transparency Report, claiming that its automated defenses detected a huge number of fake accounts in 2019 and blocked 93% of them.

According to the report, fake account registration attempts were made throughout the year, especially in the first six months of 2019.

In the last six months of 2019, LinkedIn’s proactive tools stopped 7.8 million fake accounts from being created, its safety teams caught another 3.4 million, and 85,600 more were restricted after being reported by members.

During this period, usage of the professional network saw a definite surge.

With increased usage, a rise in spam and scam attacks was to be expected as well. But LinkedIn takes pride in the advanced technology and tools that stopped these scamming and spamming attempts, taking the accounts down before they could harm the community and its members.

Alongside the fake accounts came a rise in harmful content.

LinkedIn took down 500 cases of hateful and derogatory posts or comments, 15,635 cases of harassment, 9,337 cases of content with adult themes, 1,839 cases of violent and graphic content, and, worst of all, 167 cases of content promoting child abuse and exploitation.

While LinkedIn’s own tools are impressive, members also played a key role in clearing the junk from the platform.

Members of the community filed 11,564 copyright takedown requests covering a total of 290,170 pieces of content. After reviewing these requests, LinkedIn pulled down 290,145 of the reported items!

All these efforts are important steps toward a safe and secure environment on the platform. Cybercrime and harassment are a part of social media, and no one is completely immune to them. But it is good to know that these platforms are at least trying to keep such activity in check as much as possible.

The report contains many more details about LinkedIn’s proactive role and its system for clearing junk from the platform. It also notes that new tools and features are rolling out to give members an improved user experience.

LinkedIn also acknowledged the role of members who reported unwanted content and said it hopes they will continue providing such feedback in the future.

LinkedIn is not the only platform taking action against harmful content, either.

Recently, YouTube also announced several improved policies to limit harassment and cybercrime as much as possible. It has even introduced features through which creators can limit the reach of their videos to minimize the chances of harassment, along with a review option that lets creators assess a video before it goes live on their channel.

Creators on YouTube will also be able to manually review all comments under their videos and remove any that fall into the abusive or harassing category.

So, it is a relief to know that tech giants like LinkedIn and YouTube are working to provide a healthy and safe environment for their millions of users.
