Top Regulator Accuses Tech Giants Apple And Microsoft Of Not Doing Enough To Stop Child Exploitation

Child abuse and exploitation through online content is a problem that many tech giants have failed to deal with adequately. Now, a top regulator from Australia is accusing both Apple and Microsoft of not doing enough to prevent it.

The regulator, Australia's eSafety Commissioner, demanded information from tech companies about the methods they use to curb the problem, aiming to see how much was actually being done to stop child exploitation through content on online platforms. The answer: very little.

Both Apple and Microsoft were accused of failing to screen for such harmful content before allowing it to be stored on their services, including iCloud and OneDrive.

The two companies were also in the spotlight for failing to use any kind of technology or software to detect child abuse taking place over their live video services, such as Microsoft's Skype and Apple's FaceTime. The report was an eye-opener for the world.

But the tech giants are speaking out on the matter. Both say they are committed to ending such abuse as threats multiply, and that they will keep working to protect children from bad actors whose techniques for evading detection systems grow ever more sophisticated.

Reports like these put real pressure on leading tech companies to do more. The demands also extended to Meta Platforms, owner of some of the world's most widely used apps, and to Snap, maker of Snapchat. Both were asked for more detail on how they protect children who are vulnerable to such serious risks in today's day and age.

The responses from the big tech companies are revealing in themselves. As the commissioner put it, they expose the inadequacy and inconsistency of the industry's safety practices.

For companies this influential and wealthy to turn a blind eye to a matter this important says a lot, the regulator explained. It also raises the question of whether such companies should even be allowed to operate in the first place.

Even more alarming was how widely removal times for such content varied, from two minutes on some apps to as long as four days on others. That, too, is an alarm bell about the reality of the situation.


Source: eSafety
