Future-Proofing the Metaverse

February 24, 2022 | Expert Insights

A cohesive society needs a shared foundation of values, a common reality, and a broadly similar understanding of the world. But critics have accused social media platforms like Facebook of using algorithms that actively work against such a shared foundation. When discussing the Metaverse, therefore, the pertinent question is: who will get to augment reality?


During a virtual demonstration of Sensorium Envision, a metaverse environment, attendees could chat with some of the virtual personas. One of these bots (or virtual personas), named David, spewed misinformation about the COVID-19 virus and related vaccines, alleging that the vaccines could be more dangerous than the disease itself. Although this was just a single example, it shows how easily people can be exposed to misinformation or disinformation in the Metaverse.

Tech companies have not had the best track record when it comes to policing such misinformation, disinformation, and hateful content. The debate surrounding this issue was recently reignited when leaked internal Facebook documents showed that its algorithms spread harmful information. If Facebook cannot effectively tackle this issue on its current social media platforms, is it equipped to do so on a more complex network such as the Metaverse?

After Mark Zuckerberg announced that Facebook would transition into a metaverse company, the Washington Post reported that this was not only a business change but also a political one. The goal is to distance the company from antitrust issues, political extremism, and privacy concerns. However, issues such as privacy, misinformation, and anti-competitive practices can also arise in the context of the Metaverse. Policymakers should not assume that these issues will disappear simply because of new technological developments.

Andrew Bosworth, currently the head of Meta Reality Labs and soon to be Meta's CTO, said, "Individual humans are the ones who choose to believe or not believe a thing; they are the ones who choose to share or not share a thing". Bosworth suggests that disinformation and misinformation are not a Facebook problem but an individual one. Pretending that the company's hands are clean is disingenuous and suggests that the company itself lacks the motivation to fight the issue.

According to Karen Kornbluh, director of the German Marshall Fund's Digital Innovation and Democracy Initiative, the Metaverse will make it easier for extremists to recruit people into their ranks and perpetrate violence. Tech companies rely mostly on Artificial Intelligence (AI) to moderate content on their platforms, and such AI will need to be trained to detect inappropriate content in the Metaverse. How much problematic content circulates will depend on how these digital environments are designed: if most interactions happen in smaller private spheres, malicious content may not spread at the rate it does on social media.

Control over the Metaverse's physical infrastructure could lead to global confrontations. Countries with capabilities in hardware, computer networks, and payment tools will have significant international leverage. China and Taiwan seem to have an important role in the infrastructure of the Metaverse due to their respective investments in the Digital Silk Road initiative and the semiconductor ecosystem.

Management of personal data could also be a problem in the future. Today, a handful of tech companies have access to large amounts of consumer data, which they store and sell. In the Metaverse, the amount of available data will be significantly greater and can be monetised to a higher degree. Control of data also allows for control of the market: companies capture users' data and then deny competitors any access to it. The U.S. Federal Trade Commission describes this as a 'buy or bury' strategy.

Privacy concerns also extend to the ability of corporations and governments to use facial recognition technology or monitor individuals. Recently, Frances Haugen, a former Facebook employee, explained how the Metaverse could significantly affect a person's mental health: "When you go into the Metaverse, your avatar is a little more handsome or pretty than yourself. You have better clothes than we have in reality. The apartment is more stylish, more calm. And you take your headset off, and you go to brush your teeth at the end of the night. And maybe you just don't like yourself in the mirror as much." Despite these concerns being flagged, Facebook has not yet publicly explained how it plans to address them.


Facebook has argued that the virtual worlds of the Metaverse will not be owned by Meta alone; they will also be owned by a collection of engineers, creators, and tech companies. Regulators worldwide can work with these innovators to start discussing policies that ensure the Metaverse is safe. Facebook has also said that it is meeting with human rights groups, governments, and think tanks to discuss standards and protocols for its virtual world. On top of this, it has invested $50 million in tackling the concerns that many have pointed out.


  • Issues such as misinformation have already found their way into the Metaverse, even though the technology is still in its early stages. Policymakers will have to proactively adopt measures that ensure it is a safe environment.
  • The nature of the problems emanating from the Metaverse will depend heavily on how it is owned. If we end up in a world of multiple decentralised metaverses, new issues may crop up.