
Metaverse virtual worlds lack adequate safety precautions, critics say

 Internet safety experts are raising alarms about harassment and safety in the metaverse as companies invest heavily in and promote the benefits of virtual spaces. 

The metaverse, generally thought of as virtual worlds in which people can interact, has become one of the hottest topics in tech in recent months, boosted by Facebook co-founder Mark Zuckerberg’s vision for the metaverse as a place where people can do “almost anything you can imagine.” In October, Facebook rebranded as Meta (though Facebook is still the name of the company’s core social network).

A variety of virtual spaces already exist, and it’s in those worlds that experts are already seeing signs of trouble.

Researchers with the Center for Countering Digital Hate (CCDH), a nonprofit that analyzes and seeks to disrupt online hate and misinformation, spent nearly 12 hours recording activity on VRChat, a virtual world platform accessed on Meta’s Oculus headset. The group logged an average of one infringement every seven minutes, including instances of sexual content, racism, abuse, hate, homophobia and misogyny, often with minors present.

The organization shared its data logs and some of the recordings with NBC News, depicting more than 100 incidents in total.

Meta and many other companies are seeking to capitalize on these new worlds, specifically around creativity, community and commerce, using immersive technology (often a headset worn to simulate a first-person field of view). But CCDH and other critics are worried that, similar to the company’s past, Meta is prioritizing growth over the safety of its users.

“I’m afraid that it’s incredibly dangerous to have kids in that environment,” Imran Ahmed, the CEO of CCDH, said. “Honestly speaking, I’d be very nervous as a parent about having Mark Zuckerberg’s algorithms babysit my kids.”

Ahmed specifically highlighted issues with reporting, noting that CCDH was able to flag only about half of its logged incidents to Meta. He criticized the company for a lack of traceability, meaning the ability to identify a user in order to report them, and for the lack of consequences for wrongdoing.

Meta did not respond to questions about these reporting issues. 

Meta’s current safety features include the ability to mute and block people, or to transfer to a Safe Zone to give a user a break from their surroundings. When a report is submitted, it includes a recording of the last few minutes of the user’s experience as evidence. CCDH researchers said this made quickly filing a report a tedious process.

These features also appear to overlook users who may not be able to enable the safety precautions quickly and easily while they are experiencing abuse.


