
According to Facebook (Meta), the metaverse will be censored

Already laying the groundwork.


The Facebook executive spearheading the company’s virtual reality efforts hopes to create virtual worlds with “almost Disney levels of safety” but has also acknowledged that moderation “at any meaningful scale is almost impossible.”

Facebook’s parent company Meta is working on creating virtual reality worlds where people will socialize, work, game, and even shop using 3D avatars of themselves.

In an internal memo obtained by the Financial Times, Andrew Bosworth, who will be spearheading Meta’s $10 billion “metaverse” project, warned that virtual reality could be a “toxic environment,” especially for minorities and women.

The memo notes that censoring and moderating content and behavior could be a major challenge, especially given the company’s poor record of policing “harmful” content.

“The psychological impact on humans is much greater,” said Kavya Pearlman, chief executive of the XR Safety Initiative, a non-profit focused on developing safety standards for VR, augmented, and mixed reality. She explained that users would remember what happened to them in the metaverse as if it had happened in real life.

Bosworth outlined a plan the company could use to tackle the issue, but experts have noted that policing behavior in a virtual reality setting would require enormous resources and might not even be possible.

“Facebook can’t moderate their existing platform. How can they moderate one that is enormously more complex and dynamic?” said John Egan, chief executive of forecasting group L’Atelier BNP Paribas.

Foley Hoag’s technology lawyer Brittan Heller said that policing images, video, and text is different from policing a virtual world.

“In 3D, it’s not content that you’re trying to govern, it’s behavior,” she said. “They’re going to have to build a whole new type of moderation system.”

According to a safety video published for Horizon Worlds, Facebook’s virtual reality social game, moderation in a virtual world could involve continuously recording interactions and storing them locally on users’ VR headsets. If a user is reported, their recorded interactions are sent to human reviewers for assessment.
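To make that mechanism concrete, here is a minimal Python sketch of what such report-triggered review could look like: a rolling buffer that keeps only a recent window of interactions on the device, and exports that window only when a report is filed. The class name, method names, and five-minute window are assumptions for illustration, not details confirmed by Facebook’s video.

```python
from collections import deque
import time

class RollingInteractionLog:
    """Hypothetical sketch of report-triggered moderation: keep only a
    recent window of interactions on the headset, and export it to human
    reviewers only when a report is filed. Names and the window size are
    illustrative assumptions, not Meta's implementation."""

    def __init__(self, window_seconds: int = 300):
        self.window_seconds = window_seconds
        self.buffer: deque = deque()  # (timestamp, event) pairs, oldest first

    def record(self, event: str) -> None:
        # Continuously append events, evicting anything older than the
        # rolling window so recordings stay local and short-lived.
        now = time.time()
        self.buffer.append((now, event))
        while self.buffer and now - self.buffer[0][0] > self.window_seconds:
            self.buffer.popleft()

    def export_for_review(self) -> list:
        # Called only when another user files a report: snapshot the
        # recent window so it can be sent to human reviewers.
        return list(self.buffer)

# Usage: record continuously; export only if someone files a report.
log = RollingInteractionLog()
log.record("avatar_42 said: hello")
evidence = log.export_for_review()  # sent to reviewers only on a report
```

The design trade-off is that nothing leaves the device by default, but every user is nonetheless being recorded at all times so that evidence exists if a report arrives.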

In the memo, Bosworth suggested that Meta should rely on Facebook’s existing community guidelines. Additionally, because users would have a single account with Meta, they could be blocked across all platforms if they violate guidelines.

“The theory here has to be that we can move the culture so that in the long term we aren’t actually having to take those enforcement actions too often,” he added.
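The single-account model Bosworth describes makes that kind of enforcement mechanically straightforward: every service consults the same account-level ban list, so one violation can propagate everywhere. The sketch below is a hypothetical illustration of that idea; the names and functions are invented, not Meta’s API.

```python
# Hypothetical sketch of account-level enforcement across services.
banned_accounts: set = set()

def enforce_violation(account_id: str) -> None:
    # One guideline violation anywhere bans the account everywhere.
    banned_accounts.add(account_id)

def can_access(account_id: str, platform: str) -> bool:
    # Facebook, Instagram, Horizon Worlds, etc. would all consult the
    # same account-level ban list rather than per-platform records.
    return account_id not in banned_accounts

enforce_violation("user-1234")              # violation in Horizon Worlds...
print(can_access("user-1234", "facebook"))  # ...False: blocked on Facebook too
```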

Facebook currently relies heavily on AI for content moderation, and AI will likely be used to monitor virtual worlds as well. A spokesperson told the Financial Times that the company was “exploring how best to use AI” in Horizon Worlds.

The spokesperson said the company was looking into how best to keep people “safe” in the metaverse.

“This won’t be the job of any one company alone. It will require collaboration across industry and with experts, governments and regulators to get it right,” the company said.
