Google Flags Parents’ Accounts For Abuse After They Upload Images Of Their Sick Kids Without Clothing

Critics are raising concerns over Google’s AI after one controversial incident opened many people’s eyes.

The case involved a father who uploaded images of his child, who had an infection in the groin region. To assist doctors in diagnosing the condition, the dad followed a nurse’s instructions ahead of an online consultation.

But the shock came when Google flagged his account for potential child abuse. The situation quickly got out of hand, with the case being reported to the authorities and the police as well.

This led to a full investigation, as the New York Times recently reported: the dad’s account was shut down and a report was filed. But was that really necessary? We’re not sure, but one thing that has opened many people’s eyes is how Google’s system lacks the ability to tell innocent images from abusive ones.

Labeling innocent pictures as ‘child abuse’, whether they sit on personal phones or in cloud storage, has left many questions in people’s minds.

Just last year, similar concerns arose about how blurring the lines of what should stay private could have harmful consequences, when tech giant Apple went public with its Child Safety plan.

Under this plan, Apple would scan pictures locally on Apple devices right before they were uploaded to iCloud. The pictures were then matched against a database of known CSAM (child sexual abuse material). Whenever a sufficient number of matches was reached, the content would be reviewed by a human moderator.

If the review confirmed that the content was CSAM, the account holder would be suspended, and the account removed altogether if the material was proven illegal.
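To make the threshold idea concrete, here is a minimal Python sketch of that kind of flow. It is not Apple’s actual code: `image_fingerprint`, `MATCH_THRESHOLD`, and the hash database are all placeholders standing in for Apple’s NeuralHash system and its review pipeline.

```python
import hashlib

# Illustrative sketch only -- not Apple's actual NeuralHash / CSAM-detection code.
# A real system uses a perceptual hash that tolerates resizing and re-encoding;
# SHA-256 below is just a stand-in so the example runs.

MATCH_THRESHOLD = 30  # hypothetical: number of matches before human review

def image_fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hashing step (assumption)."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_before_upload(photos: list[bytes], known_hashes: set[str]) -> str:
    """Count on-device matches against the known-image database and decide
    whether the account gets escalated to a human moderator."""
    matches = sum(1 for p in photos if image_fingerprint(p) in known_hashes)
    if matches >= MATCH_THRESHOLD:
        return "escalate_to_human_review"  # a moderator confirms before any suspension
    return "upload_normally"
```

The design point being debated was exactly this: the matching runs on the user’s own device before upload, which critics saw as scanning private photo libraries by default.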

But as you might expect, the move stirred plenty of controversy. The EFF, a non-profit, blasted Apple, arguing the system could open a new door into users’ private lives and would leave all images on iCloud less protected rather than better protected.

As a result, Apple was left with little choice but to put the scanning part of the plan on hold. But with the launch of iOS 15.2 came another optional feature for child accounts, limited to family sharing plans.

When a parent opts in on the child’s account, the Messages app analyzes image attachments on the device to check for any form of nudity, while the messages themselves stay protected by end-to-end encryption.

Any flagged image is blurred out and accompanied by a warning, along with a list of online-safety resources.
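Conceptually, the flow looks something like the hedged sketch below. None of these names come from Apple’s APIs; the classifier, the opt-in flag, and the warning text are assumptions meant only to show that the analysis and blurring happen on the device, before anything leaves the encrypted conversation.

```python
from dataclasses import dataclass

# Illustrative sketch only -- not Apple's Communication Safety implementation.

@dataclass
class IncomingImage:
    data: bytes
    blurred: bool = False
    warning: str | None = None

def looks_like_nudity(data: bytes) -> bool:
    """Placeholder for an on-device ML classifier (assumption)."""
    return False

def handle_attachment(img: IncomingImage, parent_opted_in: bool) -> IncomingImage:
    """Blur flagged images and attach a warning plus safety resources,
    but only when the parent has enabled the feature on the child account."""
    if parent_opted_in and looks_like_nudity(img.data):
        img.blurred = True
        img.warning = "This photo may be sensitive. Here are some online-safety resources."
    return img
```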

The incident that sparked this whole debate took place during the pandemic, when a dad named Mark took pictures of his sick son’s infected groin region. A nurse had asked him to do so ahead of a proper doctor’s consultation. In the end, the doctor diagnosed the condition and prescribed antibiotics, and that was that.

But to Mark’s surprise, just two days later Google notified him that his account had been locked because harmful content isn’t allowed, calling it a major violation of its policies.

The father ended up losing access to his Google photos, his accounts, and even his phone number.

While protecting young kids from abuse is pivotal, critics are now debating whether scanning all of a user’s images without good reason should be allowed at all.


Read next: Google has just fended off one of the largest cyberattacks ever made on a company