Facebook Comes Out With "One Strike" Policy To Protect Its Live Streaming Service From Misuse

Remember how the Christchurch mass shooting in New Zealand was broadcast via Facebook's live-streaming service? It was a serious blow to the social media giant, especially since the company had been actively pushing more users to go live. Facebook has now come out with a strict policy update aimed at ensuring that no one gets to abuse the rules.

To be more precise, it is called the “one strike” rule, which dictates that anyone who violates the most serious policies even once will be restricted from using Live for a set period of time - likely 30 days. In fact, while announcing the rule, Facebook’s VP of integrity Guy Rosen openly stated that this restriction will also apply to anyone who shares a link to a statement from a terrorist group.

That’s not all, either. Facebook also plans to implement additional restrictions, like limiting violators’ ability to take out ads on the platform. Beyond that, Facebook may even ban such users outright, depending on how serious the violation of its “dangerous individuals and organizations” policy was.

However, this doesn’t seem like a perfect solution. Facebook relies on artificial intelligence (AI) to detect and counter violent content. Over time, questions have been raised about the AI system’s ability to handle non-English languages, and the concern has proven real. The detection system not only failed miserably in Myanmar but also couldn’t cope with the aftermath of Christchurch.

The stream was reported to Facebook 12 minutes after it had ended, and moderators failed to block 20% of the videos related to the live stream. Yet Facebook once again spun the situation, claiming that its AI and human teams had it under control.

One might see this as a response to that failure, but Facebook has recently said it will invest $7.5 million in new research partnerships with leading academics from institutions like The University of Maryland, Cornell University and The University of California, Berkeley, specifically to improve its image and video analysis technology. The company will also continue to expand its research partnerships to combat deepfakes.


Facebook’s announcement came less than a day after leaders from around the world, including New Zealand Prime Minister Jacinda Ardern, met with leading tech companies to sign a pledge to increase their efforts and refine their techniques for fighting toxic content. Still, this issue needs to be solved as soon as possible.

The new rules aren’t merely a response to the Christchurch Call. Toxic content is a growing concern for multiple countries, and everyone is looking forward to actions like these for greater safety.


