Top Social Media Apps Are Using AI To Remove Traces Of War Crimes And Human Rights Abuse, New Research Shows

AI is now being used in so many different sectors that it has become hard to keep track of where it might be doing more harm than good.

A recent report by the BBC sheds light on a harrowing possibility: war crimes and human rights abuses may be taking place while the evidence of them vanishes without a trace.

Leading platforms are being accused of using AI to delete graphic footage, potentially destroying evidence that could support prosecutions and hold wrongdoers accountable. And once material is removed without being archived, it is nearly impossible to recover.

But top social media firms like YouTube and Meta, Facebook's parent company, disagree, insisting the removals serve users' best interests. However, members of Meta's Oversight Board argue that such overly cautious moderation is hazardous in its own right and is doing more harm than good.

Most platforms claim they grant exemptions for graphic footage when showing it serves the public interest. The BBC, however, disputes this: when it tested uploading such material, the content was deleted in no time.

One such example was footage from Russia's war against Ukraine, which was removed almost immediately after being posted.

Remember, AI is designed to remove harmful content, but when it comes to content moderation, many automated systems cannot identify who is violating what, or whether a crime is actually taking place. That is a huge disadvantage.

We've seen many top reporters cover the war in Ukraine and the shocking crimes against humanity taking place there on a routine basis, including men, women, and children being shot on the spot and dying helplessly.

Plenty of wartime images are violent in nature, but most automated moderation systems lack the ability to recognize which ones document actual violations.

Those affected by these removals say they are mortified by how readily platforms delete such content and leave no trace of the truth. They call the practice unfair and a violation of their human rights, and they are demanding greater awareness of the issue.

Read next: 99% of Hate Speech Produced By Blue Users On Twitter Fails To Be Removed, New Study Claims