Social work in metaverse: addressing tech policy gaps for racial and mental health equity

Siva Mathiyazhagan, SAFELab, Columbia University, United States
Minahil Salam, SAFELab, Columbia University, United States
Henry A. Willis, SAFELab, Columbia University, United States
Desmond U. Patton, SAFELab, Columbia University, United States

PUBLISHED ON: 16 Feb 2022

Metaverse

The Metaverse combines emerging technologies such as artificial intelligence (AI), extended reality (XR), and blockchain (Metz, 2021). These technologies will create a virtual world for social connection, entertainment, games, fitness, work, education, and commerce as a digital economy (Snider & Molina, 2021). The major platform companies behind the Metaverse plan for it to become a virtual marketplace, which will require the collection and potentially exploitative use of personal data (Wheeler, 2021). Existing technologies and algorithms have been shown to be biased and to cause digital harm to vulnerable communities (Pilkington, 2019). As a community of social work scientists and mental health practitioners, we are concerned that the Metaverse may amplify such social risks; the addictive nature of algorithms, for example, can exacerbate mental health problems in virtual spaces (Karim et al., 2020; McCluskey, 2022). In light of this, there is an urgent need for social work interventions in the virtual world: social work in the Metaverse would pave a path towards racial and mental health equity and help prevent digital harm. Deploying the Metaverse under the present unregulated technological and social conditions is highly likely to amplify existing social inequalities and mental health challenges in the virtual universe. Addressing the gaps in US federal technology policy with social work principles is therefore an urgent priority. Doing so would prevent an exacerbation of existing social and mental health concerns, and the Metaverse might even be made to benefit racial-ethnic minority communities.

Digital harm

The European Commission’s (2020) White Paper on Artificial Intelligence posits that “the use of algorithms can perpetuate or even stimulate racial bias”. In the US, researchers such as Safiya Umoja Noble, Ruha Benjamin, and Joy Buolamwini argue that biased algorithms perpetuate racism both on- and offline. Racially biased algorithms have denied people access to housing, jobs, and other welfare programmes (Sisson, 2019). Biased algorithms are particularly harmful to those with low digital literacy skills, especially historically marginalised communities (Tynes et al., 2020), with lower levels of critical digital literacy being shaped by the digital divide and systemic racism at large (Jacoby, 2021). If the Metaverse is built on existing biased data sets, with conventional engineers who lack the social and environmental context needed to train its algorithms to operate in a safe and inclusive virtual world, it will disproportionately surveil communities of colour (Crockford, 2020). No federal technology policy in the US mandates algorithmic accountability to address digital harm. In this context, the Metaverse could threaten safe and inclusive societies unless agencies mandate the effective implementation of inclusive policy standards throughout the cycle of data integration (Tsai, 2022).

Mental health crisis

Systematic reviews reveal that social media platforms play a significant role in aggravating mental health issues and increasing levels of anxiety and depression among their users (Karim et al., 2020). Addictive algorithms are linked with social media addiction and depression (Seymour, 2019). Social media use has become a public health crisis that requires immediate action by the US federal government (Bouygues, 2021). The addictive algorithms of 2-D social media have already been shown to cause severe mental health challenges for users, particularly teen girls (Seymour, 2019; Berthon et al., 2019). Avatar-based 3-D social media, integrated with education, health, and other activities in a virtual 3-D world, would likely use even more addictive algorithms and create higher levels of mental health risk for users. Since no policy regulations exist to prevent addictive algorithms and protect users’ mental health online, a community-centred, holistic approach to the Metaverse is required to prevent possible racial and mental health harm to vulnerable groups across the world, and it has to begin with US federal tech policy regulation.

The Metaverse needs social work

Social workers play a vital role in supporting big tech companies’ aims to address inequalities and mental health challenges. As a mental health field, social work promotes reflexivity, meaningful community participation, and ethical practices with diverse communities through transdisciplinary collaborations and partnerships in which those who are most marginalised are centred (Tambe & Rice, 2018; Mathiyazhagan, 2021). Early experiments by social work scientists on algorithmic social interventions highlighted the value and promise of using AI technology to combat the world’s most pressing social problems (Tambe & Rice, 2018). Desmond Patton and his team identified opportunities for social work contributions to data science to address human rights issues (Patton, 2020; Mathiyazhagan et al., 2021); in their research, social workers contribute contextual qualitative inputs to social media research (Patton et al., 2020). Cogburn and colleagues led the development of mixed reality tools to understand and educate about racial inequality (Cogburn et al., 2018). Social work schools have also started to offer minors in technology and in power, race, oppression, and privilege (PROP) so that social work students can become familiar with technology and social justice (Columbia School of Social Work, 2021). Social work competencies, ethical principles, and community-centred approaches to emerging tech will amplify human rights and social justice in the Metaverse in real-time practice (Patton, Mathiyazhagan, & Landau, 2021). Diverse teams of social workers and tech developers could work together throughout the development process to create safe, inclusive, and just technologies that minimise bias and harm.
US tech policies (i.e., the Algorithmic Accountability bill, the DEEP FAKES Accountability bill, and the Executive Order on Maintaining American Leadership in AI) should adopt social work principles that require tech companies to appoint social workers and mental health stakeholders for the effective, contextual implementation of ethics. This would help minimise possible harm and bias and prevent addictive algorithms.

Glocal call for tech policy

The United States Constitution and other internet legislation do not guarantee the right to privacy as a fundamental human right, unlike European Union data protection law (Gilman & Green, 2018). Current technology policy and legislative regulation are not up to date with these technological advancements (Harrison, 2020). The US has adopted no global policy framework on the ethics of AI; no longer a member of the United Nations Educational, Scientific and Cultural Organization (UNESCO), it did not adopt the recent global framework on the ethics of AI (AHEG, 2020). Recently, the US Congress reintroduced the Algorithmic Accountability Act of 2022 (117th Congress, 2022). Although the bill provides scope for external stakeholder participation, including representatives of and advocates for impacted groups, it makes no mention of preventing addictive algorithms. It is time for a glocal emphasis on comprehensive federal legislation in the US, grounded in social work principles, to regulate emerging technologies and ensure algorithmic equity through a community-centred, holistic approach to product development, deployment, and the use of data in AI, XR, and blockchain. Federal tech policy gaps in the US must be addressed before the Metaverse hits the market, to prevent possible harm to vulnerable communities across the globe.

References

Ad Hoc Expert Group (AHEG) for the Preparation of a Draft Text of a Recommendation on the Ethics of Artificial Intelligence. (2020). Outcome document: First draft of the Recommendation on the Ethics of Artificial Intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000373434

Algorithmic Accountability Act of 2022, H.R. 6580, 117th Congress, 2d Session (2022). https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf

Berthon, P., Pitt, L., & Campbell, C. (2019). Addictive de-vices: A public policy analysis of sources and solutions to digital addiction. Journal of Public Policy & Marketing, 38(4), 451–468. https://doi.org/10.1177/0743915619859852

Bouygues, H. L. (2021, July 20). Social media is a public health crisis. U.S. News & World Report. https://www.usnews.com/news/health-news/articles/2021-07-20/social-media-is-a-public-health-crisis

Cogburn, C. D., Bailenson, J., Ogle, E., Asher, T., & Nichols, T. (2018). 1000 cut journey. ACM SIGGRAPH 2018 Virtual, Augmented, and Mixed Reality, 1–1. https://doi.org/10.1145/3226552.3226575

Crockford, K. (2020). How is Face Recognition Surveillance Technology Racist? American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist/

European Commission. (2020). On Artificial Intelligence—A European approach to excellence and trust (White Paper COM(2020) 65 final). European Commission. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

Gilman, M. E., & Green, R. (2018). The Surveillance Gap: The Harms of Extreme Privacy and Data Marginalization. NYU Review of Law and Social Change, 42(253). https://ssrn.com/abstract=3172948

Harrison, D. (2020, October 22). Civil Rights Violations in the Face of Technological Change. The Aspen Institute. https://www.aspeninstitute.org/blog-posts/civil-rights-violations-in-the-face-of-technological-change/

Jacoby, K. (2021, November 24). Facebook fed posts with violence and nudity to people with low digital literacy. USA Today. https://www.usatoday.com/story/tech/2021/11/23/facebook-posts-violence-nudity-algorithm/6240462001/?gnt-cfr=1

Karim, F., Oyewande, A., Abdalla, L. F., Chaudhry Ehsanullah, R., & Khan, S. (2020). Social Media Use and Its Connection to Mental Health: A Systematic Review. Cureus. https://doi.org/10.7759/cureus.8627

Mathiyazhagan, S. (2021). Field Practice, Emerging Technologies, and Human Rights: The Emergence of Tech Social Workers. Journal of Human Rights and Social Work. https://doi.org/10.1007/s41134-021-00190-0

Mathiyazhagan, S., Kleiner, S., & Patton, D. (2021). To Make Good Policy on AI, Talk to Social Workers. Tech Policy Press. https://techpolicy.press/to-make-good-policy-on-ai-talk-to-social-workers/

McCluskey, M. (2022, January 4). How Addictive Social Media Algorithms Could Finally Face a Reckoning in 2022. TIME. https://time.com/6127981/addictive-algorithms-2022-facebook-instagram/

Metz, C. (2021, December 30). Everybody into the metaverse! Virtual reality beckons big tech. The New York Times. https://www.nytimes.com/2021/12/30/technology/metaverse-virtual-reality-big-tech.html

Patton, D. U. (2020). Social work thinking for UX and AI design. ACM Digital Library. https://dl.acm.org/doi/10.1145/3380535

Patton, D. U., Blandfort, P., Frey, W. R., Schifanella, R., McGregor, K., & Chang, S.-F. U. (2020). VATAS: An open-source web platform for visual and textual analysis of social media. Journal of the Society for Social Work and Research, 11(1). https://doi.org/10.1086/707667

Patton, D. U., Mathiyazhagan, S., & Landau, A. Y. (2021). Meet them where they are: Social work informed considerations for youth inclusion in AI Violence Prevention Systems · algorithmic rights and protections for children. Works in Progress. https://wip.mitpress.mit.edu/pub/meet-them-where-they-are/release/1

Pilkington, E. (2019). Digital Dystopia: How algorithms punish the poor. The Guardian. https://www.theguardian.com/technology/2019/oct/14/automating-poverty-algorithms-punish-poor

Seymour, R. (2019). The machine always wins: What drives our addiction to social media. The Guardian. https://www.theguardian.com/technology/2019/aug/23/social-media-addiction-gambling

Sisson, P. (2019, December 17). Housing discrimination goes high tech. Curbed. https://archive.curbed.com/2019/12/17/21026311/mortgage-apartment-housing-algorithm-discrimination

Snider, M., & Molina, B. (2021, November 10). Everyone wants to own the metaverse including Facebook and Microsoft. But what exactly is it? USA Today. https://www.usatoday.com/story/tech/2021/11/10/metaverse-what-is-it-explained-facebook-microsoft-meta-vr/6337635001/

Student handbook 2021-2022. (2022). Columbia University School of Social Work. https://socialwork.columbia.edu/wp-content/uploads/student-handbook.pdf

Tambe, M., & Rice, E. (Eds.). (2018). Artificial Intelligence and Social Work (1st ed.). Cambridge University Press. https://doi.org/10.1017/9781108669016

Tsai, T. (2022). Building an inclusive metaverse starts now. Here’s how. World Economic Forum. https://www.weforum.org/agenda/2022/01/building-inclusive-metaverse-must-start-now/

Tynes, B. M., Stewart, A., Hamilton, M., & Willis, H. A. (2020). From Google searches to Russian disinformation: Adolescent critical race digital literacy needs and skills. International Journal of Multicultural Education. https://eric.ed.gov/?id=EJ1299564

United Nations. (2021, November 25). 193 countries adopt first-ever global agreement on the ethics of Artificial Intelligence. UN News. https://news.un.org/en/story/2021/11/1106612

What’s a Columbia School of Social Work Graduate doing at Google? The Columbia School of Social Work. (2016, April 14). Columbia School of Social Work. https://socialwork.columbia.edu/news/whats-a-columbia-school-of-social-work-graduate-doing-at-google/

Wheeler, T. (2021, September 30). The metachallenges of the metaverse. The Brookings Institute. https://www.brookings.edu/blog/techtank/2021/09/30/the-metachallenges-of-the-metaverse/
