Meta Faces Backlash Over Mysterious Instagram Bans and Facebook Selfie Checks

For weeks now, Instagram users have reported sudden account bans with no clear explanation or response from the platform. In many of these cases, individuals insist they have not violated any rules, yet they find themselves locked out with little recourse and no reply after submitting appeals. The volume of complaints, especially on Reddit and X, has grown enough to catch wider attention.

Many of the affected users on Reddit describe being stuck in an unresolved process. Some say they uploaded identity documents or submitted multiple appeals, only to be met with silence. Several note that they received no warning before losing access and have not been given a reason for their suspension.

The concern for some users goes beyond lost access to personal profiles. Small businesses and independent creators are also being affected, with some stating that the ban has cut them off from their main marketing platform and customer base. As a result, users are increasingly pointing to automated systems as the likely cause, though Meta has not confirmed any technical details or issued a public statement.

Moderation errors are not unusual on large platforms, especially when artificial intelligence is involved, but the volume of cases and the absence of human oversight in the appeal process have left users frustrated. In some instances, people claim they were banned for serious policy violations they insist they did not commit, including content related to child exploitation, a label that carries severe reputational consequences even when wrongly applied.

Online, calls for accountability are increasing. A petition calling for Instagram to review its moderation approach has gained several thousand signatures. Some users have discussed the possibility of legal action, citing emotional and financial consequences from the account losses.

This incident follows a broader pattern among major tech platforms, where the reliance on AI moderation is being scrutinised. Earlier this year, Pinterest acknowledged an internal error that led to mass suspensions but declined to say whether artificial intelligence was responsible. That company eventually restored affected accounts after admitting fault, but gave few details about the underlying issue.

While Instagram remains under pressure, Meta is also facing a growing number of complaints from long-time Facebook users who say they have been locked out due to the platform's facial recognition checks. According to several comments shared on Digital Information World's post, the process now requires some users to submit a video selfie to confirm their identity. This requirement appears to be triggered in a variety of cases, some after years of activity with no prior issues.

Dozens of individuals have described being asked to submit facial video verification without being told why. Many of them do not own smartphones or webcams and say they are now permanently locked out. Some, including elderly users who only use Facebook to stay in touch with family, expressed confusion and concern about how the change was introduced. Others worry about how the video data might be stored or used, particularly in relation to AI training or personal profiling.

One user explained they had been on Facebook for over a decade and had never posted anything questionable. They were looking at the Marketplace when they were suddenly logged out, then told to verify their identity with a facial video. Another user said they had not used Facebook for years but attempted to log in, only to find the old account inaccessible. After trying to sign up again, they were subjected to puzzles and then a facial scan prompt.

Some individuals speculated that their technical setups might have triggered the system’s suspicion. One mentioned running Linux without geolocation libraries, setting their birthday to 1905, and using privacy-focused browsers that block tracking. Others questioned whether low activity levels or old accounts might be seen by the system as potential bot behaviour.

The consistency across many of these reports suggests that the video selfie check is not just reserved for new sign-ups or flagged activity. It seems to be part of a broader verification push that is catching regular users in the net. Several pointed out that they were not warned about the change and were offered no other way to prove their identity if they lacked a camera or chose not to upload a recording.

Some worry that the system may be biased or discriminatory in how it flags accounts, especially in the absence of transparency from Meta. One user questioned whether certain political views, inactive posting habits, or refusal to participate in advertising were factors that influenced the verification demand. Others said they felt like they were being gradually pushed off the platform, not for breaking rules, but for not aligning with how Facebook wants users to behave or interact.

Despite the volume of complaints, Meta has not publicly addressed the facial recognition issue or explained the criteria triggering the video selfie requirement. For now, affected users remain locked out with no clear timeline for resolution.


Image: DIW-Aigen


1 Comment

  1. They must fix this issue. My account was temporarily deactivated for personal reasons! When I attempted to reactivate it, I was met with a notice saying I had violated community guidelines. I didn't receive a single email warning that my account was at risk or that it had been deleted. On any platform nowadays, users are notified by email when something happens to their account, whether it's a policy violation or a deletion. That is how I know the ban so many are facing is completely unfounded and a massive automated glitch. They need to make a public statement and apology like Pinterest did.
