Metaverse Prioritizing Growth Over User Safety, Leading To A Rise In Reported Dangerous Incidents

Internet safety experts are taking action after observing widespread online hate in the Metaverse, and are raising their voices about harassment and safety in the virtual world.

This news has become a trending topic in the technology world. In October, Facebook co-founder Mark Zuckerberg endorsed his vision of the metaverse and rebranded the company as Meta. The Metaverse is, broadly, a virtual world in which companies are heavily investing while promoting the benefits of virtual spaces. Their general aim is to develop a virtual space where people can interact and do anything they can imagine. The company's core social network, however, is still called Facebook.

Researchers analyzed 12 hours of recorded activity on VRChat and identified many instances that are very harmful to kids in that environment. The cases included racism, abusive language, and sexual content, often in the presence of minors. The researchers worked with the Center for Countering Digital Hate (CCDH), an organization that seeks to disrupt online hate, harassment, and misinformation. The organization collected the data and compiled a report logging 100 incidents in total, and shared some of the evidence with NBC.

Meta and many other companies are planning to advance their virtual worlds and seek to capitalize, especially around community, using updated technology. But the CCDH is uneasy, worried that Meta is focusing on growth rather than the safety of its users.

Imran Ahmed, the CEO of the CCDH, said, "I'm afraid that it's incredibly dangerous to have kids in that environment." He added that, as a parent, he is very afraid and nervous about letting Mark Zuckerberg's algorithms babysit his kids.

Ahmed criticized the company for the lack of consequences for crimes and misconduct. The CCDH was able to report only about half of the logged incidents to Meta, because in the other cases it could not identify the offending user in order to report them. When experts raised these concerns, Meta did not respond to any statements or questions related to these reporting issues.

Meta's current safety features are limited to muting or blocking abusive users, or transferring oneself to a safe zone away from them. Moving a user to another location may give them a break from their surroundings, but it does nothing to curb the spread of hate or reduce crime in the virtual world, and researchers said that filing a report was a tedious process. In a recent incident, Nina Jane Patel described being virtually harassed in Meta's Horizon Venues. She said she was unable to activate the blocking feature in time and, in the end, left Venues.

These reports led Meta to change the default personal boundary setting in its spaces, which means avatars must stay about 4 feet apart from each other at all times.
