Meta’s Oversight Board Examines Two Controversial Incidents Involving Explicit AI Deepfakes Of Real Women

Meta’s Oversight Board has opened a new investigation into two incidents in which women were reportedly targeted by explicit AI-generated deepfakes.

The Board is examining the company’s relevant policies, including how it has handled AI-generated images depicting women nude or engaged in explicit acts.

The review follows reports of two incidents in which such AI-generated nude images were published on social media, including Meta’s leading apps, Facebook and Instagram.

The matter first came to light in a Facebook group known for sharing AI-generated content, where a post depicted an AI image simulating a real woman being groped by an unknown man. The post was soon taken down.

The Board has declined to reveal the identity of the woman depicted, in order to protect her privacy and shield her from harassment, though the case has drawn considerable negative attention.

The offending picture was added to a media-matching database so that it would be detected automatically if published again elsewhere on Meta’s platforms. When a user later posted the same AI-generated explicit image, it was removed; the user appealed the removal to Meta without success and then escalated the appeal directly to the Board.

On Instagram, an account known for publishing AI-generated pictures of women of Indian descent shared nude deepfakes of a woman shown in various public settings in the country. A user reported the post, but the report was closed because no one at the company reviewed it, and a subsequent appeal was likewise rejected.

The Board said it flagged the matter to Meta, which only then removed the deepfake, citing the company’s bullying and harassment policy.

Taken together, the two incidents suggest that the company’s policies may be enforced inconsistently when it comes to AI-generated deepfakes of real women. It is also unclear why Facebook’s parent company promptly removed the images in one case while the images of the woman in India received little attention.

Controversy has also grown over the fact that one picture was added to the company’s detection database while the other apparently was not.

This is why the Oversight Board is under greater pressure than ever to investigate the matter in detail, assess where the company’s policies went wrong, and determine whether those policies are being enforced properly.

It is already controversial that AI-generated or manipulated pictures circulate across Meta’s top apps with so few checks and balances. The matter is all the more concerning because this kind of content is ordinarily prohibited on the platforms, which raises the question of why it slipped through.

Meta has also been criticized in many other cases where spammy content continues to go viral on its apps, including the AI-generated post dubbed ‘Shrimp Jesus’.

In February of this year, the company announced that it would work toward labeling all AI-generated content.

Image: DIW-Aigen
