Critical questions for Facebook’s virtual reality: data, power and the metaverse

Ben Egliston, Digital Media Research Centre, Queensland University of Technology, Australia
Marcus Carter, Digital Cultures, University of Sydney, Australia

PUBLISHED ON: 20 Dec 2021 DOI: 10.14763/2021.4.1610

Abstract

Virtual Reality (VR) represents an emerging class of spatial computing technology reliant upon the capture and processing of data about the user (such as their body and its interface with the hardware) and their surrounding environment. Much like digital media more generally, there are growing concerns about who stands to benefit from VR as a data-intensive form of technology, and where its potential data-borne harms may lie. Drawing from critical data studies, we examine the case of Facebook’s Oculus VR—a market-leading VR technology, central to the company’s metaverse ambitions. Through this case, we argue that VR as a data-intensive device is not one of unalloyed benefit, but one fraught with power inequity—one that has the potential to exacerbate wealth inequity, institute algorithmic bias, and bring about new forms of digital exclusion. We contend that policy to date has had limited engagement with VR, and that regulatory intervention will be needed as VR becomes more widely adopted in society.
Citation & publishing information
Received: January 28, 2021 Reviewed: June 7, 2021 Published: December 20, 2021
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Virtual reality, Critical Data Studies, Sensors, Power, Data, Social media
Citation: Egliston, B. & Carter, M. (2021). Critical questions for Facebook’s virtual reality: data, power and the metaverse. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1610

Introduction

Virtual Reality (VR) has seen a great resurgence in the last decade. Emerging out of mid-century military technoscience, and later developed in a range of consumer-oriented contexts throughout the 1980s and 1990s (Chesher, 1994), VR is a form of head-mounted computing device that creates digital or simulated experiences via auditory, visual and haptic feedback. As Rob Shields puts it, such devices typically promise “a sense of phenomenological presence or immersion in the [virtual] environment” (2003, p. 54). As well known as this promise, however, is the medium’s Sisyphean struggle (and indeed failure) to live up to it. Computer scientist and VR pioneer Jaron Lanier suggests that the medium failed to materialise the visions of the 1980s and 1990s because of limitations in computing power, which could not at the time deliver on the promises made by VR boosters. VR was, as Lanier puts it, ‘stuck in a waiting room for Moore’s law’ (Lanier, 2017 cited in Evans, 2018, p. 31).

VR’s recent prominence can certainly be attributed to technological advancement (see Evans, 2018). But more than just an advancement in computing power, the increasing concentration of financial capital within the VR industry (and toward spatial computing more generally—such as geolocative media in smartphones) has been key in its transformation and materialisation today (see Egliston and Carter, 2020). VR, and mixed reality spectrum technologies like augmented reality, have become a site of intense focus for ‘big tech’. Companies like Google, HTC, and Microsoft have invested heavily in the mixed reality space over the last decade (see e.g., Microsoft’s HoloLens, or Google’s AR development software, ARCore). We see these investments most significantly at Facebook, rebranded as ‘Meta’ in October 2021, in the context of its ‘metaverse’ ambitions. Facebook/Meta has committed heavily to the development of VR following its 2014 acquisition of VR company Oculus, spending “at least” US$ 10 billion in 2021 on creating AR and VR hardware, software, and content (Kastrenakes & Heath, 2021). Today, VR is adopted across spheres ranging from health (Bell et al., 2020), to education (Carter et al., 2020), to training police forces (Garcia et al., 2019), spaces in which it is commonly understood to have positive benefits for its capacity to provide “psychologically real” simulations (Bailenson, 2018b).

In the context of what we observe as an emerging tradition of ‘critical VR studies’ (Bollmer, 2017; Evans, 2018; Golding, 2019; Saker & Frith, 2019; Harley, 2019; LaRocco, 2020; Egliston & Carter, 2020; Wallis & Ross, 2020; Irom, 2018; Nakamura, 2020; Roquet, forthcoming)—which broadly offer critical, sociotechnical perspectives on VR histories, current applications, and future imaginaries—this article focuses on how current and emerging forms of VR technology may cause harm.

Specifically, this article focuses on questions of inequity and harm that arise from VR as a sensing device, reliant upon the capture of data to do with the physical space around the user, and the space of the user’s body. Unlike previous ‘critical’ VR work—which is largely drawn out of debates in media and cultural studies, and (particularly critical feminist) Science and Technology Studies—we draw from work in Critical Data Studies (CDS) (Iliadis & Russo, 2016). Emerging in response to the so-called ‘data revolution’, critical data studies takes a sociotechnical perspective to think about data’s politics and power. It recognises that data is not ‘objective’ by virtue of its scale, scope, or the speed with which it can be collected and processed (Kitchin, 2014). Rather, it is always captured, processed and analysed in accordance with some kind of aim or ideology (whether conscious or not). In this way, as Ruppert et al. write, data “generate new forms of power relations and politics” (2017, p. 2), things that benefit some more than others. The project of CDS, and related sub-fields like data justice, is then critical and prescriptive: identifying data-borne harms and thinking about how we might productively, and more equitably, address them, such as through regulatory or policy reform (as is the goal of CDS-informed work in this journal, see Couldry & Mejias, 2019).

We examine VR’s currently existing and potential harm through the case of the Facebook/Meta-owned company Oculus VR. Oculus, while not representative of the entire VR hardware market, currently captures a dominant 61% market share (Feltham, 2021), ahead of competitors Sony, HTC, and Valve. As Oculus is largely synonymous with contemporary VR, we believe it provides a useful case study for thinking through the different forms of data-related harm associated with VR in a more general sense. While Oculus is helpful in developing a general critique of the VR medium and its data politics, we also believe it offers a noteworthy case study due to its intersection with Facebook and the attendant political economy concerns, and the company’s significant October 2021 rebrand to being “metaverse first” (Heath, 2021).

In our discussion of harms, we focus both on infrastructural and unseen harms, to do with data’s extraction and expropriation, and on those that are more visible and felt at the level of the interface. We focus on three main data-borne harms emerging from the use of Oculus VR hardware (informed by a wider review of developments and literature to do with virtual reality, see Carter and Egliston, 2020). These are: 1) the platform capitalist accumulation of spatial data from Oculus devices by Facebook; 2) the rollout of Oculus devices in workplaces as a means to datafy and evaluate workplace performance; and 3) the exclusionary potentials of Oculus’ capacity to ‘sense’—particularly with respect to disabled bodies. Rather than a definitive or exhaustive statement about Oculus’ potential for harm, these three issues are taken as entry points for further grounded and critical work on the data politics of VR technologies, focusing on the varied range of harms and injustices that they may bring to bear.

In approaching these three examples through the lens of CDS, we ask: how does Oculus’ VR—as a data-rich sensing device—enact power? We understand power straightforwardly as the capacity for individuals and collectives to exercise agency (a capacity that is by no means afforded to all). In particular, we are interested—much like the project of CDS—in how power is enhanced, enabled, or restricted through the capture, visualisation and analysis of data traces of the VR user. In this sense, we find particular resonance and inspiration in D’Ignazio and Klein’s intersectional feminist approach to critically understanding the data-power nexus (through what they term ‘data feminism’). Such an approach foregrounds how data power is at once about the control of data flows, but also about the way that controlling these data flows constructs (and exacerbates) forms of “structural privilege and structural oppression, in which some groups experience … advantages … and other groups experience disadvantages …” (D’Ignazio & Klein, 2020, p. 24).

Taken together, we focus on how Oculus’ VR design and use as a data-rich technology is shaped by social, cultural and political dynamics, and likewise, how it might fold back into and shape life. Certainly, feminist STS has offered insight along these lines into VR for some time—arguing that the virtual can never be disentangled from the politics of the social—with particular attention to gender and race (see e.g., Green, 1999; Sophia, 1992; Balsamo, 1996). As Nicola Green adroitly puts it, to “become virtual” is “not simply to use a computing system as a tool, nor is it to access a wholly ‘other’ space and become digital. Rather, it is a process of making connections between programmed and nonprogrammed spaces in specific locales, and power-laden social, cultural and economic relationships” (1999, pp. 410-411). This refrain of feminist STS is echoed in more recent media and cultural studies scholarship on contemporary VR—such as the work of Lisa Nakamura (2020) and Daniel Harley (2019). While our approach follows these lines of critique in its attention to the virtual as technological, but also social and cultural, where it diverges is in its specific attention to the data generated and processed by VR systems as the means through which structures of power are enacted and supported.

We proceed with a brief introduction to Oculus, and then provide an overview of how recent forms of Oculus’ VR operate as a digital sensor. From there we outline three emerging sociotechnical issues with Oculus’ VR, as well as related policy interventions. We conclude by recognising the need for forms of policy and regulation that are attuned to the sociotechnicality of VR and its medium specific harms.

A brief introduction to Oculus

Like many other Silicon Valley start-ups, the story of Oculus follows a familiar arc. If we are to take posts on the VR forum ‘Meant to Be Seen 3D’ as a starting point, Oculus was first conceived in 2009 as a project of then-17-year-old Palmer Luckey. The first iteration of the device—a yellow fiberglass helmet—was supposedly cobbled together in Luckey’s parents’ garage. Several years later, in 2012, Luckey famously funded the further development of the device through a US$ 2.5 million Kickstarter campaign. It was at this point that Oculus was being framed by Luckey—and in much of the growing attention shown to Oculus by the tech press—as a device that would transform gaming, by virtue of the prototype’s ostensible ability to deliver on VR’s long-promised affordance of immersion. As Luckey described Oculus on its ‘Rift’ system Kickstarter page, it was a device “by gamers for gamers” (see Levy, 2018, n.p.)—and it received endorsements from game industry veterans like id Software’s John Carmack (who would later serve as Oculus’ CTO), Valve co-founder and president Gabe Newell, and Valve programmer (and now Oculus Chief Scientist) Michael Abrash. While some large game companies, such as Valve, were already experimenting with VR at the time, following Oculus’ rise to prominence VR was once again taken seriously by major players in the videogame industry, having essentially been abandoned since the industry’s failed experiments with the medium in the early-to-mid 1990s.

Despite Luckey having only developed a prototype headset for the Oculus Rift, the company was purchased by Facebook for US$ 2 billion in 2014—one of a series of high-profile acquisitions for a post-IPO Facebook. In the period immediately following the acquisition, Oculus existed as a relatively autonomous subsidiary of Facebook. However, Facebook would soon draw Oculus into the company’s broader social software platform. As we see from Facebook’s discourse to both shareholders and the public, virtual and increasingly mixed reality is central to how Facebook publicly imagines the platform’s future—for instance, through the more direct incorporation of Facebook’s suite of social software (e.g., Messenger, Facebook) into Oculus hardware (notably, now requiring Oculus users to have a Facebook account to use the device). As we write elsewhere (Egliston & Carter, 2020), echoing Zuckerberg’s well-known contention that Facebook is a ‘social infrastructure’ (Hoffmann et al., 2016), Zuckerberg labels Oculus a ‘social computing platform’. Committed to this vision of a VR infrastructure, the Quest 2, the most recent ‘mobile’ Oculus system, is sold at a loss-leading price point, likely with the goal of achieving market saturation. In 2020, Facebook launched Facebook Reality Labs (previously Oculus Research, founded in 2014)—a mixed reality research and development group (Freedman, 2020) into which Facebook has reportedly invested US$ 10 billion in 2021 alone.

It is here that Abrash suggests Facebook will make field-defining advancements in VR/MR—comparable in impact to Douglas Engelbart and the Augmentation Research Centre’s development of the graphical user interface in the 1960s and 1970s, an innovation now used in much of modern computing. This goal is in line with Facebook’s longstanding interest in (at least discursively) framing itself as infrastructure (Plantin et al., 2018), a goal that underpinned the company’s 2021 ‘metaverse’ rebrand.

Overview: virtual reality as digital sensor

VR is a medium centrally reliant on data. Given the embodied nature of VR (Bailenson, 2018b, p. 31; Chesher, 1994; Golding, 2019), it necessarily captures and processes a large volume of data about the user’s body. Data about the body is how VR generates images perceptible to and manipulable by the human user; it is the means through which the continuous movement of human bodies is turned into a format legible to the VR system.

These mechanisms of data tracking have always been central both to how VR has worked and to how it has been imagined. Much early VR focused on tracking movements of the head in order to provide a sense of perspective within computer-generated environments. Prototypical VR, developed by Ivan Sutherland in the late 1960s within the laboratory of military technoscience, was reliant upon sensing head position through ultrasonic frequencies transmitted from the headset and picked up by sensors located above the device. Scholars in media studies have been attentive to the data subtending the forms of commercial virtual reality that emerged in the 1980s and 1990s (Hillis, 1999; Coyne, 1994), focusing on the mediation of vision. As VR developer Mark Pesce (2020) notes, his invention of the ‘sourceless orientation sensor’ for SEGA’s VR system in the early 1990s registered the device’s movement by tracking changes in its orientation and elevation against the Earth’s magnetic field. In this early stage of VR development, few systems progressed beyond interfaces that mediated vision and perspective through movements of the head—with the exception of technologies like VPL’s DataGlove, which tracked and graphically rendered hand gestures and movements.

More recent studies have begun to focus on the data-intensive capabilities of modern VR technologies, which incorporate new forms of algorithmic sensemaking. As Bailenson writes, “In 2018, commercial systems typically track body movements 90 times per second to display the scene appropriately, and high-end systems record 18 types of movements across the head and hands. Consequently, spending 20 minutes in a VR simulation leaves just under 2 million unique recordings of body language” (2018a, n.p.). Forms of physical or physiological biometric identification techniques which could theoretically extend from VR include gaze analysis, voice recognition, and facial recognition (XRSI, 2020). While data yielded by VR devices is often framed as anonymised, recent scholarship in computer science and Human-Computer Interaction has suggested that under certain machine learning conditions such data is re-identifiable. Miller et al. (2020) and Pfeuffer et al. (2019) suggest that contemporary VR systems, including the most popular headsets offered by Oculus (Egliston & Carter, 2021), have the capacity to track personally identifiable biometric data from users. Miller et al.’s study, for instance, highlights that participants in a 511-person sample could be correctly identified from five minutes of VR tracking data (with all personally identifiable information stripped) by a machine learning algorithm with 95.3% accuracy.
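
Bailenson’s figure can be verified directly: 90 samples per second × 60 seconds × 20 minutes gives 108,000 frames, and 108,000 frames × 18 tracked movement types gives 1,944,000 recordings, i.e. “just under 2 million”. To make the re-identification finding concrete, the sketch below shows the general class of technique: a standard classifier trained on summary statistics of motion data, with all explicit identifiers removed, learning to recognise individual users. The data and features here are synthetic and invented; this is not Miller et al.’s pipeline, only an illustration of why motion data can behave like a fingerprint.

```python
# Minimal sketch of motion-based re-identification on synthetic data.
# Not Miller et al.'s pipeline; it only illustrates the general class
# of technique: a classifier learning users' "motion signatures".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, sessions = 50, 20

# Hypothetical per-session features: mean head height, sway amplitude,
# hand speed. Each user has a stable signature plus session noise.
signatures = rng.normal(size=(n_users, 3))
X = np.vstack([sig + rng.normal(scale=0.3, size=(sessions, 3))
               for sig in signatures])
y = np.repeat(np.arange(n_users), sessions)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# No name or ID is in the input; the body itself does the identifying.
print(f"re-identification accuracy: {clf.score(X_test, y_test):.1%}")
```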

Beyond the user’s body, VR increasingly operates through sensing the environment surrounding the user. VR systems make sense of the space around the user—locating the user within space, tracking their movement through space, and bounding the virtual from the physical. Systems like the Rift CV1 (on the market from 2016 to 2019) used optical sensors (external to the device itself, placed around the area of use) for rotational and positional tracking (with the sensors and Rift tethered to a PC). The sensors tracked the position of the user’s head and hands by picking up on infrared light emitted from the Rift’s controllers and head-mounted display. Advances in ‘mobile’ VR (that is, VR where all computing functions are built into the headset itself, see Saker & Frith, 2020)—such as the Quest—are instead reliant upon algorithmic odometry, particularly upon Facebook’s ‘Insight’ stack: an ‘assemblage’ (McCosker & Wilken, 2020; Mackenzie & Munster, 2019) of sensors that enables the device to algorithmically construct a map of the environment around itself, such that the software can track the position and movement of the device through space.
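
For intuition on what ‘algorithmic odometry’ involves, the sketch below shows its simplest ingredient: dead reckoning, which integrates inertial measurements into a running position estimate. All numbers are invented, and this is not Facebook’s Insight stack; Insight fuses this kind of inertial estimate with camera-based mapping precisely because pure integration drifts.

```python
# Toy dead reckoning: integrate inertial measurements into a position
# estimate. A crude stand-in for one ingredient of odometry; systems
# like Insight correct this drift with camera-based mapping.
import numpy as np

DT = 1 / 100  # assumed 100 Hz inertial sampling rate

def integrate(accels: np.ndarray) -> np.ndarray:
    """Integrate acceleration samples (n, 3) twice to get position."""
    velocity = np.cumsum(accels * DT, axis=0)
    position = np.cumsum(velocity * DT, axis=0)
    return position[-1]

# Simulate a user stepping forward: 0.5 s of acceleration, then rest.
samples = np.zeros((200, 3))
samples[:50, 0] = 1.0  # 1 m/s^2 along x for the first 50 samples

print(integrate(samples))  # small errors here compound into drift
```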

In short, more immersion requires more data. Oculus’ devices—with their built-in sensors, outward-facing cameras, and computer algorithms to generate virtual environments and register body movements—process considerable volumes of spatial data, by which we mean data to do with the body and its spacings and timings as it acts upon, or responds to, events in the virtual environment, as well as data to do with the physical environment surrounding the user.

Emerging harms

In what follows we identify and explain several emerging, data-borne harms relating to contemporary VR technology. We suggest that these cases constitute harm in that they violate the autonomy of data subjects, through the capture of, and profit from, data, but also in that they materialise unjust social relations.

Platform power and data extraction

Facebook’s acquisition of Oculus, and its use of the spatial data collected through Oculus devices, falls neatly into critical conversations about digital platforms (see e.g., Srnicek, 2017; van Dijck et al., 2019). The throughline of this critical platform scholarship is that technology companies today typically provide some kind of technology and/or service—often in a loss-leading way (and generally as an exercise in expanding into, and eventually controlling, new markets)—while generating revenue through data- or surveillance-centred business models. As Srnicek writes (2017, p. 49), there are several different kinds of platforms. Facebook specifically would be characterised as an advertising platform—one that generates revenue by brokering transactions between advertisers and users of its social software. This is enabled by Facebook’s capture of user data—tracking and monitoring users’ social activity, creating a vast lake of data that it uses to automate exchanges between advertisers and users through its ad network (Srinivasan, 2018). While we are yet to see the full implications of Oculus for Facebook as an advertising company (with, currently, ads being served inside certain Oculus games based on Facebook data rather than VR-specific data), public-facing documentation from Oculus (such as software licence agreements, see Egliston & Carter, 2021) suggests that it may operate to further empower Facebook’s advertising arm. A patent filed in 2017 by Facebook likewise points toward an advertising-based future for VR.

At its Oculus Connect developer conferences in 2018 and 2019, Facebook compared its investment in VR to its investment in developing for mobile computing. The comparison here, we argue, might be taken in two ways. First, VR will become a widely adopted computing format, as the mobile phone did. Second, and perhaps more of a motivation for the company, Facebook’s early integration into mobile media (Goggin, 2014) was a key mechanism in strengthening its place in the advertising-based data market today. Data from mobile media provided additional data points for the company to connect Facebook users with advertisements through its ad service. It is not hard to imagine how the data collected via the use of VR—about the body (such as hand size, as specified in the ‘Supplemental Oculus Data Policy’) and about the user’s home (via its Insight sensing system)—could benefit the company’s data-driven ad network, particularly when marshalled with other data the company collects through its other software. The company put this more explicitly prior to the Facebook Connect conference in September 2020, updating its Oculus data policy to read:

Information about your environment, physical movements and dimensions when you use an XR device. For example, when you set up the Oculus Guardian system to alert you when you approach a boundary, we receive information about the playing area that you have defined; and when you enable the hand tracking feature, we collect technical information such as your estimated hand size and hand movement data to enable this feature (Supplemental Oculus data policy, 2020, n.p.).
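
To make the categories named in this policy concrete, the following is a purely hypothetical illustration of what one such structured record might contain. Every field name and value below is invented rather than drawn from Oculus documentation.

```python
# Hypothetical illustration only: all field names and values are
# invented, modelled on the categories the Supplemental Oculus Data
# Policy names (Guardian play area, hand size, hand movement).
session_record = {
    "guardian_boundary_m": [(0.0, 0.0), (2.5, 0.0), (2.5, 2.0), (0.0, 2.0)],
    "play_area_m2": 5.0,                # derived room dimensions
    "estimated_hand_size_cm": 18.4,     # collected for hand tracking
    "hand_tracking_rate_hz": 60,        # assumed sampling rate
    "head_pose_samples": 108_000,       # ~20 minutes at 90 Hz
}
```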

As Facebook executives like CEO Mark Zuckerberg, and Oculus executives such as Chief Scientist Michael Abrash, have noted, the company has clear ambitions to integrate more sophisticated forms of biometric tracking: hand tracking, facial expression and eye tracking (which, as Zuckerberg notes, are contingent upon hardware advances)1, and brain-computer interfaces.

Despite its supposed commitment to responsible innovation, Facebook Reality Labs and Oculus have little to say about their extractive data practices, focusing instead on questions of privacy. In this framing, data is never a resource extracted by Facebook for profit, but something the company is actively protecting from the threats of rogue actors (see Egliston & Carter, 2020, p. 12). This is of course not to say that privacy is not a genuine concern. Rather, it is to say that VR privacy is framed in highly individualised terms, rather than in terms of the large-scale data extractivism driving the advertising business from which Facebook makes most of its money.

As we have written elsewhere, in exploring individuals’ perceptions of the prospect of a Facebook-backed VR future, concerns about data extraction are paramount (Egliston & Carter, 2020). Chiming with what Nissenbaum (2004) would call a ‘contextual’ approach to privacy, individuals did not expect a right to be free of tracking (many celebrated the various affordances of the device reliant upon tracking the body), but they were concerned with an inability to control an appropriate flow of personal information—an inability to manage what was disclosed to Facebook. These concerns, we found, emerged from folk perceptions of platforms and surveillance, a generalised surveillance anxiety, and Facebook’s widely publicised unscrupulous data practices (e.g., the Facebook/Cambridge Analytica case).

The regulation of XR technologies like VR falls largely under broader data regulations like the GDPR (with Facebook including GDPR compliance statements as addenda to its privacy policies). It is worth noting, however, some of the limitations of this regulation when it comes to Oculus and VR. Firstly, a general criticism: as a range of scholars in critical legal studies (Viljoen, 2020) and media studies (Couldry & Yu, 2018) have argued, the “preventative force” (Couldry & Yu, 2018, p. 4474) of the GDPR is reliant upon user consent. If users consent to using a particular product or service and provide explicit consent for the kinds of data tracking outlined in its terms of service agreement, data tracking (even of granular biometric data) is permitted (see GDPR Article 9). The Oculus software licence—which, like many software licences, takes the form of a clickwrap agreement, a mechanism for quickly moving users into consumption (see Obar & Oeldorf-Hirsch, 2018)—is often vague in specifying data uses. While Facebook is clear that Oculus products have the potential to track biometric (e.g., hand size) or other forms of spatial data (e.g., room dimensions), what is lacking is a clear statement about both current and potential future uses. This is particularly problematic in that Facebook claims in its software licences that any data is de-identified upon collection, yet documentation of Facebook’s internal testing of Oculus suggests that re-identification is possible (see Bye, 2020).

Beyond the GDPR, there has been recent promise in the application of competition law, particularly in Germany, in regulating Oculus with respect to data. At the time of writing, we are in the midst of a noteworthy initial case of state regulation of Oculus (and VR more generally), with Germany’s Bundeskartellamt (national competition regulator) examining whether Oculus breaches German data coupling laws in requiring an Oculus account to be connected with a Facebook account (Robertson, 2020)—a move following almost immediately from antitrust charges filed against Facebook by the FTC in the United States, and from a 2019 investigation by the Bundeskartellamt into Facebook’s internal data sharing practices.2 As a result, Facebook pulled Oculus products from sale in Germany. Antitrust is typically focused on consumer welfare and market health, running parallel to, rather than in tandem with, approaches focused on the social and political aspects of market actors. Even so, the application of antitrust law to limit the interoperability of Oculus and other arms of Facebook may be a productive move in limiting the kinds of profiles that Facebook can create of Oculus users (and the further uneven distributions of wealth that emerge from the company’s extractive practices, particularly as they extend into the more sensitive realm of VR data).

Automated decision making: quantifying the qualitative

The data generated through VR is also being used to monitor performance, particularly within the context of work. Oculus is increasingly taking the form of an enterprise technology—framed as a means for collaboration, simulation, and enhanced efficiency (for a more in-depth overview of Facebook’s business partnerships programme, see Egliston & Carter, 2021). One example, exemplifying the data-borne harms of Oculus VR in such a context, is the use of Oculus hardware within workplace training and hiring—specifically, in service sector jobs like retail. In 2018, Oculus announced a partnership between American retail corporation Walmart and the ‘Immersive Learning’ company STRIVR for workplace training and evaluation. STRIVR—a company that began by creating VR software to train athletes—is now one of the largest VR EdTech and enterprise training companies globally (claiming over six million training sessions deployed, principally via its partnership with Walmart).3 STRIVR purportedly offers Walmart the opportunity to simulate events that would be difficult to emulate in physical training scenarios (like a Black Friday shopping crowd), to let staff learn how to use new technology before it is installed, and to deliver soft-skills training in areas such as customer service, empathy and dealing with difficult conversations.

As STRIVR note in a summary of one of their key patents4, the programme “automatically clusters learners into groups [based] on sensing data, which can include head, hand, and eye movements, as well as physiological data”—data that can supposedly tell us something (following processing by STRIVR) due to its granularity, echoing the kinds of epistemologies of data-driven empiricism common in data discourses, identified by Kitchin (2014). As we write elsewhere (Carter & Egliston, 2021) of Talespin—a similar VR training company, also partnered with Oculus, and which focuses on corporate white-collar training—data is effectively volunteered by users due to the supposed representational verisimilitude of the simulations—how they seem ‘realistic’ through visual, auditory and haptic registers. For prospective clients, this ‘realness’ is not simply to make the experience more compelling and edifying. It is to encourage use and engagement, to “make training for ‘soft skills’ measurable” (Talespin, n.d., n.p.)—or, as one Walmart executive writes of STRIVR, a process of measurement that helps “remove subjectivity and unconscious bias from the selection [hiring] process…a people-led, tech empowered way of working” (Holler, 2019, n.p.).
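
To ground what ‘clustering learners on sensing data’ might mean in practice, the sketch below applies a standard clustering algorithm to invented per-learner features. It illustrates the general technique the patent summary describes, not STRIVR’s actual system; every feature name is an assumption.

```python
# Minimal sketch: cluster trainees by summary features of VR sensing
# data. Features and data are invented; this illustrates the technique
# named in the patent summary, not STRIVR's actual system.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical per-trainee features: head turn rate, gaze dwell time
# on the customer, hand hesitation, and a physiological signal.
features = rng.normal(size=(200, 4))

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

# These cluster labels are what then feeds evaluation: behavioural
# traces become a de facto performance category for each worker.
print(np.bincount(labels))
```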

The potential for harm—we suggest—lies in the way that these technologies further threaten to institute regimes of datafied governance in the workplace, regimes that are already widely adopted and that have been widely problematised in terms of fairness and bias (see Kim, 2017). These technologies, beyond offering logistical benefits, are framed as offering enhanced insight and decision making to employers, particularly through their ability to capture a range of physiological data about employees based on their participation in VR simulations. In this sense, the phenomenology of decision-making shifts from being reflexive and situated (see Dreyfus, 2004) to the decision maker being imbued with informational flows, taken into perceptual experience (see Ihde, 1990). This is of course not to infelicitously romanticise ‘human’ decision-making as something free of bias5. Rather it is to say that the distribution of agency through automated decision-making technologies not only creates a ‘mess’ of human and technical agents, but also new and complicated questions to do with inequality in decision-making.

Much like algorithmic decision-making in hiring processes more generally, the datafication of workplace decision-making (particularly to do with hiring and promotion) is an area fraught with debate and critique (Kim, 2017; Raghavan et al., 2020; Sanchez-Monedero et al., 2020). Given the nascency of VR-based, data-driven tools for evaluating workplace performance, we lack clear evidence about how these tools work and how they evaluate users. For instance, one of STRIVR’s primary evaluation tools is verbal analytics—a technique that has been recognised more generally as biased toward native English speakers (Harwell, 2018). If we are to take seriously the biometric nature of VR, we might also consider the new forms of bias that these tools enable in revealing information previously protected under antidiscrimination law (e.g., protected characteristics like disability).6 Such a future is troubling in the specific context of Walmart’s partnership with STRIVR/Oculus, but also in the potential of a wider rollout of VR training across the service sector, which commonly relies upon (increasingly technologically mediated) forms of worker coercion and control, limiting the bargaining power of employees through underemployment and restrictive social security (Wood, 2020). Despite companies such as STRIVR expressing sensitivity to issues of bias, what is lacking is—at bare minimum—an appropriate best-practice method for evaluating transparency; a demand of much recent advocacy in scholarship on fairness (Raghavan et al., 2020) and of recent initiatives such as the US-based Civil Rights Principles for Hiring Assessment Technologies.7

Within the context of the EU, protections against data-informed decision-making are laid out in Art. 22 of the GDPR, pertaining to ‘Automated individual decision-making, including profiling’. Art. 22 specifies that decision-making cannot be based solely on data-driven, automated processing (that is, automated processing may inform, but not solely determine, a decision). Yet crucially, automated decision-making’s harms relating to inequality in the workplace often arise from how data is used by human subjects to mediate decision-making (see Mateescu & Nguyen, 2019). Certain interests and purposes are imposed on and justified by data within certain institutional contexts. Auditing measures (as have been proposed concerning digital work tools more generally, see Raghavan et al., 2020) could provide a solution here. Such measures would need to take stock of the materially specific affordances of the technology, and the specific contexts in which it is being used. Auditing measures and transparency could be developed building on the strong focus placed on data access and retention in the GDPR. One simple form such an audit could take is sketched below.
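
The sketch applies the adverse impact (‘four-fifths’) ratio used in US employment guidance to hypothetical pass/fail outcomes from an automated VR assessment. The groups, numbers and threshold are illustrative assumptions, not a complete audit regime.

```python
# Minimal sketch of one audit metric: the adverse impact ("four-fifths")
# ratio across groups in automated pass/fail outcomes. Data is invented.
from collections import defaultdict

def adverse_impact_ratios(outcomes):
    """outcomes: iterable of (group, passed) pairs from the tool."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        total[group] += 1
        passed[group] += int(ok)
    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    # Ratios below 0.8 are the conventional red flag for adverse impact.
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes from a VR verbal-analytics assessment:
results = ([("native_speaker", True)] * 80 + [("native_speaker", False)] * 20
           + [("non_native", True)] * 50 + [("non_native", False)] * 50)
print(adverse_impact_ratios(results))  # non_native: 0.625 -- below 0.8
```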

Inclusion and access: VR as disabling interface

We have focused thus far on how data is captured and used, in one way or another, to sort, track and profile. We turn now to what and who does not get tracked, and what this exclusion might mean in the context of a growing effort on Facebook’s part to integrate VR (and XR more generally) into the infrastructure of society—particularly in light of recent claims by Facebook that XR technologies will serve as the infrastructure for a ‘metaverse’ encompassing all aspects of social life. Evergreen today, then, is Anne Balsamo’s question of “who will have access to virtual reality…to the networks that serve the infrastructure of the emerging information society?” (1996, p. 132).

To be sure, one of the most celebrated affordances of VR technologies is the sensation of embodiment: the sense of having a body that exists within a digitally rendered space. To produce this sensation, VR technologies render the user’s locomotion as machine-readable data. To give a basic example, the Oculus Rift senses movement through tracking the position of the head- and hand-worn devices (or, specifically, by tracking the position of invisible LEDs on these devices). The illusion of spatial coherence is reliant upon tracking the devices’ movements—inscribed as X, Y and Z coordinates and rotations—across six degrees of freedom. It is upon the promise of this sensation of immersion and immediacy (see Bolter & Grusin, 1999), of frictionless interface between body and technology, that Facebook has invested billions of dollars in VR research and development through Facebook Reality Labs. The relationship between VR and embodiment was further highlighted in a recent blog post by Oculus outlining its proprietary sensing system ‘Insight’ for the Rift and Quest headsets. We see that the company has gone to painstaking lengths to reduce undesirable visual errors—such as image stuttering or jittering—errors which can cause nausea in even the most experienced VR user.
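
At its core, the LED-based tracking described above is a perspective-n-point problem: given the known 3D layout of the LEDs on the device and their detected 2D positions in a sensor image, solve for the device’s rotation and translation. The sketch below uses OpenCV’s general-purpose solver to show the shape of that computation; the LED layout, camera intrinsics and detections are all invented, and Oculus’ actual tracker is proprietary.

```python
# Minimal sketch of LED constellation tracking as a perspective-n-point
# problem. LED layout, camera intrinsics and detections are invented;
# Oculus' actual tracking system is proprietary.
import numpy as np
import cv2

# Known 3D positions of (hypothetical) LEDs on the headset, in metres.
led_model = np.array([
    [-0.08, 0.03, 0.0], [0.08, 0.03, 0.0],
    [-0.08, -0.03, 0.0], [0.08, -0.03, 0.0],
    [0.0, 0.0, 0.04],
], dtype=np.float64)

# Their detected 2D pixel positions in one sensor frame (invented).
detections = np.array([
    [300.0, 240.0], [420.0, 238.0],
    [302.0, 310.0], [418.0, 308.0],
    [360.0, 270.0],
], dtype=np.float64)

# Pinhole camera intrinsics (assumed values), no lens distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(led_model, detections, K, None)
# rvec/tvec together are the six degrees of freedom: the headset's
# orientation and position relative to the sensor.
print(ok, rvec.ravel(), tvec.ravel())
```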

Yet despite this ambition for high-fidelity body tracking, not all bodies are machine readable. While VR operates in large part through its capacity to sense the body, the kinds of embodiments that are sensed and registered as such are based upon ableist conceptions of ‘normate’ bodies. For Garland-Thomson (1997), the ‘normate’ body is a socially constructed, ideal image of the body—one that, at least in the context of North America, is white, able-bodied, heterosexual and male—and one that accrues power and authority if approximated. For Davis (1995), such constructions of normalcy are exclusionary, rendering disability a ‘problem’. Little has been written on VR and disability. Early feminist literature on VR, while not coming from the perspective of disability studies, suggested that VR follows the path of Western ocularcentrism (rather than incorporating a wider range of multisensory experiences, see Murray & Sixsmith, 1999, p. 321)—one that assumes particular modes of bodily engagement. While some in disability studies are hopeful about the promise of VR’s multisensory aesthetics for more sensorially inclusive media experiences (see Paterson, 2017, p. 1551), more recent work in HCI has productively identified how contemporary VR systems instantiate ableism. Drawing on surveys with disabled VR users, Gerling and Spiel (2021) argue that VR assumes a ‘corporeal standard’—an ‘ideal’ (non-disabled) body—and, moreover, generally fails to accommodate the disabled body in its design.

Facebook’s Oculus suite of technologies is a good example of how VR is designed around the normate body’s proprioceptive and kinetic capacities.8 Considering disability—and the limited extent to which it is incorporated into Oculus’ VR design—highlights Oculus’ normate view of the body. A report on research commissioned by ILMxLAB (Wong et al., 2017) on their Star Wars-themed Oculus VR games suggested that Oculus VR is often far from the natural or intuitive technology of its imagination—encoded instead with ableist and exclusionary values, which limit the capacities of people with disabilities. A more recent example is the lack of control over avatar height with the Quest. Through the Quest’s Insight system—and its ability to track position and orientation—the device situates the user as accurately as possible in the virtual environment. This means that if you crouch in real life, your avatar crouches in the game. But for wheelchair users and people with limited mobility, such an approach makes many Quest games unplayable. Interfaces are often designed to be within reach only of the standing user, and the sitting user’s view is rendered at crotch-height of the virtual non-player characters. Seated mode, where it does exist, is designed for the comfort of the normate body, rather than the inclusion of those for whom sitting is a necessity. In short, the kinds of bodies and mobilities that are rendered machine-readable are based on (ableist) design-level assumptions about what bodies are and what they can do.
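
One simple design response to the avatar-height problem, sketched below as a hypothetical, is to treat virtual standing height as a user-calibrated setting rather than a direct readout of the tracked head: a constant offset preserves relative movement while letting a seated user occupy a standing-equivalent eye level. All names and numbers are invented for illustration and do not reflect any Oculus API.

```python
# Hypothetical accessibility option: decouple virtual eye level from
# physically tracked height. Illustrative only; names and numbers are
# invented, not an Oculus API.
from dataclasses import dataclass

@dataclass
class HeadPose:
    x: float
    y: float  # tracked eye height above the floor, in metres
    z: float

def height_offset(baseline_eye_height: float,
                  preferred_eye_height: float) -> float:
    """Calibrated once: a constant added to every tracked height."""
    return preferred_eye_height - baseline_eye_height

def apply_offset(pose: HeadPose, offset: float) -> HeadPose:
    # Relative movement (leaning, reaching, crouching) is preserved;
    # only the resting level shifts, so seated play maps to standing
    # reach instead of "crotch-height" rendering.
    return HeadPose(pose.x, pose.y + offset, pose.z)

offset = height_offset(baseline_eye_height=1.25, preferred_eye_height=1.65)
print(apply_offset(HeadPose(0.0, 1.25, 0.0), offset).y)  # 1.65
```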

While the post-acquisition Oculus has been discursively framed as a more mundane, everyday technology—something that can be integrated into everyday life, leaving behind VR’s reputation as a niche technology for computing enthusiasts—the case of disability is a key example of how Oculus still very much embodies the same libertarian, identity-free and disembodied fantasy of VR that we see in Luckey’s original conception of Oculus (see Harley, 2019)—a fantasy against which many feminist STS perspectives on VR have pushed back. If digital technologies and platforms—like Oculus—are to become central to interaction and participation in contemporary societies (as Facebook Reality Labs imagines), then the stakes of exclusion are significant.

At the EU policy level, recent regulations such as the European Accessibility Act promise to enforce accessibility standards across computing technology broadly (while the Act does not mention VR specifically, VR will likely fall under it). Universal accessibility standards, as Costanza-Chock (2020) writes, however, run the risk of flattening out difference, minimising the lived and idiosyncratic experiences of certain people and groups. What will be needed, they argue, is a “highly specific, intentional custom design that takes multiple standpoints into account” (2020, p. 230)—something that will be achieved through authentic engagement with community, a “coalitional politics” between designers, policymakers and disabled people. Such involvement will be urgent in that the “tacit and experiential knowledge of community members is sure to produce ideas, approaches and innovations that a nonmember of the community would be extremely unlikely to come up with” (Costanza-Chock, 2020, p. 94).

Conclusions

While Oculus is framed as a data-driven technology positively impacting various aspects of life, we must remain clear-eyed about its potential to amplify or reinforce unequal power relations. Through our three case studies, understood through the perspective of CDS, we have highlighted how approaches to the governance of a metaverse future grounded in Oculus technology will require a sociotechnical perspective—one that considers how VR embodies particular logics, aims and values, and will have widely differential effects beyond the positive, utopian characterisations of immersion, empathy and solutionism commonly found in marketing discourse or tech reportage. Rather than outright rejecting VR, we advocate for steps that reconfigure the kinds of harmful sociotechnical arrangements that have led to many of the issues observed in the three case studies above—steps which centre fairness and equity. Drawing from the above case studies and our discussion of existing regulation and policy, we conclude with two takeaways for VR users, developers, policymakers and data activists.

1. VR data is sensitive and we need appropriate safeguards on its use

VR like that of Oculus requires sensitive data in order to enable its most basic operations—data which falls under the GDPR’s Art. 4 definition of biometric data, that is, “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data”. While Oculus has indicated in its licence agreement that this data is currently non-identifiable, it is urgent to ensure that this remains the case. To some degree, this is protected by GDPR mechanisms, which militate against data uses that exceed the purposes for which consent was sought. Oculus software licence agreements, however, much like software licences more broadly, are strategically worded to provide maximum capacity for data extraction (see Egliston & Carter, 2021).

Lessons for the regulation of VR as a data-rich device could be learnt from a range of recent data policy developments. For example, in the United States, the Federal Trade Commission (FTC) has recently focused on the data involved in facial recognition algorithms. In cases where algorithms are trained on data obtained without consent (such as that scraped from social media images), the FTC has ordered these algorithms be shut down. Such an approach provides one way of regulating Facebook’s potential misuse and re-identification of VR data. The European regulatory context also provides an instructive case in the form of the aforementioned German Bundeskartellamt investigation into the data sharing between Facebook’s platforms, including Oculus, and its finding that Facebook’s merging and collecting of data across its platforms constitutes an “abuse of a dominant [market] position”. The focus on limiting platform power here is not dissimilar to recent calls (by the now FTC chair Lina Khan) to place “prophylactic limits” on vertical integration within large tech companies (Khan, 2017). Attempts to restrict Facebook’s ability to commercialise its user data are arguably more important than ever as Facebook enters the VR space, with VR representing a new paradigm of sensitive data about the body and the built environment. Such efforts would have the potential to mitigate some of the harms described in our first case study of data extraction.

Such measures are not to outright limit the way that VR senses the body, but rather to place limits on how this data might be capitalised upon or used in ways that create harm. Indeed, there are many productive uses of VR within spaces like art, activism and education, which are contingent upon the affordances of VR. For instance, Marcus Carter and colleagues’ own work (Carter et al., 2020) on environmental conservation education in Australian zoos is reliant upon the use of Oculus Go headsets (Oculus’ first iteration of mobile VR headset) due to their affordance of mobility.

2. Ensure that advances in VR appropriately consider social and technical questions with respect to actual contexts of use

Key here will be recognising that the plurality of contexts in which VR is currently used, and the many more in which it is imagined, means that there are many meanings of ‘user’ (employee, student, etc.). If policy is to take diverse uses and users into account, it could productively benefit from engagement with those who use (and are potentially harmed by) these technologies (as highlighted in the cases on work and access/exclusion). It is in this sense that our case study of disability, with its focus on coalitions between people with disabilities, designers, and policymakers, is instructive for broader conversations about governance. Notably, principles of responsible research and innovation—focusing on embedding social benefit and moral responsibility into technological development, and which have as a central tenet the incorporation of public or lay perceptions and experiences of technology—have been central to European Commission funded innovation and research (e.g., in 2013 under the Europe 2020 research and innovation funding policy structure). While such principles have been superficially appropriated by Facebook Reality Labs in its own corporate responsible innovation policies (see Applin & Flick, 2021)—which provide little sense of how non-expert perspectives will be incorporated—responsible innovation that centres the voices of users, particularly those who have the potential to be adversely affected by VR’s use, may be productive in mitigating harm (see Stilgoe et al., 2013).

References

A candid conversation with Facebook’s AR/VR privacy policy manager: New potentials for community feedback (#958). (n.d.). https://voicesofvr.com/958-a-candid-conversation-with-facebooks-arvr-privacy-policy-manager-new-potentials-for-community-feedback/

Applin, S. A., & Flick, C. (2021). Facebook’s Project Aria indicates problems for responsible innovation when broadly deploying AR and other pervasive technology in the Commons. Journal of Responsible Technology, 5, 100010. https://doi.org/10.1016/j.jrt.2021.100010

Bailenson, J. (2018a). Experience on demand: What virtual reality is, how it works, and what it can do (First edition). W. W. Norton & Company, Inc.

Bailenson, J. (2018b). Protecting Nonverbal Data Tracked in Virtual Reality. JAMA Pediatrics, 172(10), 905. https://doi.org/10.1001/jamapediatrics.2018.1909

Balsamo, A. M. (1996). Technologies of the gendered body: Reading cyborg women. Duke University Press.

Bell, I. H., Nicholas, J., Alvarez-Jimenez, M., Thompson, A., & Valmaggia, L. (2020). Virtual reality as a clinical tool in mental health research and practice. Dialogues in Clinical Neuroscience, 22(2), 169. https://doi.org/10.31887/DCNS.2020.22.2/lvalmaggia

Bollmer, G. (2017). Empathy machines. Media International Australia, 165(1), 63–76. https://doi.org/10.1177/1329878X17726794

Bolter, J. D., & Grusin, R. (1999). Remediation: Understanding new media. The MIT Press.

Bundeskartellamt. (2019, February 7). Bundeskartellamt prohibits Facebook from combining data from different sources. https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2019/07_02_2019_Facebook.html

Bye, K. (2020, October 16). So @StanfordVR published research on how easily motion data can identify someone. Twitter. https://twitter.com/kentbye/status/1317212859852484608

Carter, M., & Egliston, B. (2020). Ethical Implications of Emerging Mixed Reality Technologies. https://ses.library.usyd.edu.au/handle/2123/22485

Carter, M., & Egliston, B. (2021). What are the risks of virtual reality data? Learning analytics, algorithmic bias and a fantasy of perfect data. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448211012794

Carter, M., Webber, S., Rawson, S., Smith, W., Purdam, J., & McLeod, E. (2020). Virtual Reality in the Zoo: A Qualitative Evaluation of a Stereoscopic Virtual Reality Video Encounter with Little Penguins (Eudyptula minor). Journal of Zoo and Aquarium Research, 8(4), 239–245. https://doi.org/10.19227/jzar.v8i4.500

Chesher, C. (1994). Colonizing virtual reality: Construction of the discourse of virtual reality. Cultronix, 1(1), 1–27.

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.

Couldry, N., & Mejias, U. A. (2019). Making data colonialism liveable: How might data’s social order be regulated? Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1411

Couldry, N., & Yu, J. (2018). Deconstructing datafication’s brave new world. New Media & Society, 20(12), 4473–4491. https://doi.org/10.1177/1461444818775968

Coyne, R. (1994). Heidegger and Virtual Reality: The Implications of Heidegger’s Thinking for Computer Representations. Leonardo, 27(1), 65. https://doi.org/10.2307/1575952

Davis, L. J. (1995). Enforcing normalcy: Disability, deafness, and the body. Verso.

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.

Dreyfus, S. E. (2004). The Five-Stage Model of Adult Skill Acquisition. Bulletin of Science, Technology & Society, 24(3), 177–181. https://doi.org/10.1177/0270467604264992

Egliston, B., & Carter, M. (2020). Oculus imaginaries: The promises and perils of Facebook’s virtual reality. New Media & Society. Advance online publication. https://doi.org/10.1177/1461444820960411

Egliston, B., & Carter, M. (2021). Examining visions of surveillance in Oculus’ data and privacy policies, 2014–2020. Media International Australia. Advance online publication. https://doi.org/10.1177/1329878X211041670

Evans, L. (2018). The re-emergence of virtual reality. Routledge.

Feltham, J. (2021). Oculus Quest 2 Grows To 35% Of SteamVR Headset Usage. UploadVR. https://uploadvr.com/steam-hardware-survey-october-2021/

Freedman, D. H. (2020, December 23). Facebook’s plan to dominate Virtual Reality – and to turn us into ‘data cattle.’ Newsweek. https://www.newsweek.com/facebooks-plan-dominate-virtual-reality-turn-us-data-cattle-1556805

Garcia, E. T., Ware, S. G., & Baker, L. J. (2019, April). Measuring presence and performance in a virtual reality police use of force training simulation prototype. Proceedings of the 32nd International Conference of the Florida Artificial Intelligence Research Society (FLAIRS-32). https://www.semanticscholar.org/paper/Measuring-Presence-and-Performance-in-a-Virtual-Use-Garcia-Ware/1eac16b05ed8f87d77fe8711fc9f67eb43b1bad2

Garland-Thomson, R. (1997). Extraordinary bodies: Figuring physical disability in American culture and literature. Columbia University Press.

Gerling, K., & Spiel, K. (2021). A critical examination of virtual reality technology in the context of the machine body. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). ACM.

Goggin, G. (2014). Facebook’s mobile career. New Media & Society, 16(7), 1068–1086. https://doi.org/10.1177/1461444814543996

Golding, D. (2019). Far from paradise: The body, the apparatus and the image of contemporary virtual reality. Convergence, 25(2), 340–353. https://doi.org/10.1177/1354856517738171

Green, N. (1999). Disrupting the field: Virtual reality technologies and “multisited” ethnographic methods. American Behavioral Scientist, 43(3), 409–421. https://doi.org/10.1177/00027649921955344

Haiven, M. (2013). Walmart, financialization, and the cultural politics of securitization. Cultural Politics, 9(3), 239–262. https://doi.org/10.1215/17432197-2346964

Harley, D. (2020). Palmer Luckey and the rise of contemporary virtual reality. Convergence: The International Journal of Research into New Media Technologies, 26(5–6), 1144–1158. https://doi.org/10.1177/1354856519860237

Harwell, D. (2018, July 19). The Accent Gap. The Washington Post. https://www.washingtonpost.com/graphics/2018/business/alexa-does-not-understand-your-accent/

Heath, A. (2021, October 28). Mark Zuckerberg on why Facebook is rebranding to Meta. The Verge. https://www.theverge.com/22749919/mark-zuckerberg-facebook-meta-company-rebrand

Hillis, K. (1999). Digital sensations: Space, identity, and embodiment in virtual reality. University of Minnesota Press.

Hoffmann, A., Proferes, N., & Zimmer, M. (2016). ‘Making the world more open and connected’: Mark Zuckerberg and the discursive construction of Facebook and its users. New Media & Society, 20(1), 199–218. https://doi.org/10.1177/1461444816660784

Holler, D. (2019, October 30). Customers are Changing. Jobs are Changing. At Walmart, the Future of Work is Bright. Walmart Newsroom. https://corporate.walmart.com/newsroom/2019/10/30/customers-are-changing-jobs-are-changing-at-walmart-the-future-of-work-is-bright

Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.

Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716674238

Irom, B. (2018). Virtual reality and the Syrian refugee camps: Humanitarian communication and the politics of empathy. International Journal Of Communication, 12, 4269–4291.

Kastrenakes, J., & Heath, A. (2021, October 25). Facebook is spending at least $10 billion this year on its metaverse division. The Verge. https://www.theverge.com/2021/10/25/22745381/facebook-reality-labs-10-billion-metaverse

Khan, L. M. (2017). Amazon’s antitrust paradox. Yale Law Journal, 126(3), 710–805.

Kim, P. (2017). Data-driven discrimination at work. William & Mary Law Review, 58(3), 857–936.

Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1). https://doi.org/10.1177/2053951714528481

LaRocco, M. (2020). Developing the ‘best practices’ of virtual reality design: Industry standards at the frontier of emerging media. Journal of Visual Culture, 19(1), 96–111. https://doi.org/10.1177/1470412920906255

Levy, S. (2018, June 11). Inside Palmer Luckey’s bid to build a border wall. WIRED. https://www.wired.com/story/palmer-luckey-anduril-border-wall/

MacKenzie, A., & Munster, A. (2019). Platform Seeing: Image Ensembles and Their Invisualities. Theory, Culture & Society, 36(5), 3–22. https://doi.org/10.1177/0263276419847508

McCosker, A., & Wilken, R. (2020). Automating vision: The social impact of the new camera consciousness. Routledge/Taylor & Francis Group.

Miller, M. R., Herrera, F., Jun, H., Landay, J. A., & Bailenson, J. N. (2020). Personal identifiability of user tracking data during observation of 360-degree VR video. Scientific Reports, 10(1), 17404. https://doi.org/10.1038/s41598-020-74486-y

Murray, C., & Sixsmith, J. (1999). The corporeal body in virtual reality. Ethos: Journal of the Society for Psychological Anthropology, 27(3), 315–343.

Nakamura, L. (2020). Feeling good about feeling bad: Virtuous virtual reality and the automation of racial empathy. Journal of Visual Culture, 19(1), 47–64.

Nissenbaum, H. F. (2004). Privacy as Contextual Integrity. Washington Law Review, 79(119), 101–139.

Obar, J. A., & Oeldorf-Hirsch, A. (2018). The clickwrap: A political economic mechanism for manufacturing consent on social media. Social Media + Society, 4(3). https://doi.org/10.1177/2056305118784770

Paterson, M. (2017). On haptic media and the possibilities of a more inclusive interactivity. New Media & Society, 19(10), 1541–1562.

Pesce, M. (2021). Augmented reality: Unboxing tech’s next big thing. Polity Press.

Pfeuffer, K., Geiger, M. J., Prange, S., Mecke, L., Buschek, D., & Alt, F. (2019). Behavioural Biometrics in VR: Identifying People from Body Motion and Relations in Virtual Reality. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3290605.3300340

Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2018). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society, 20(1), 293–310. https://doi.org/10.1177/1461444816661553

Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 469–481. https://doi.org/10.1145/3351095.3372828

Robertson, A. (2020, December 10). German Regulators are investigating Facebook for its controversial Oculus account rules. The Verge. https://www.theverge.com/2020/12/10/22167509/germany-fco-investigation-facebook-oculus-account-requirements-competition

Roquet, P. (2021). The immersive enclosure: Virtual reality in Japan. Columbia University Press.

Ruppert, E., Isin, E., & Bigo, D. (2017). Data politics. Big Data & Society, 4(2).

Saker, M., & Frith, J. (2019). From hybrid space to dislocated space: Mobile virtual reality and a third stage of mobile media theory. New Media & Society, 21(1), 214–228. https://doi.org/10.1177/1461444818792407

Saker, M., & Frith, J. (2020). Coextensive space: Virtual reality and the developing relationship between the body, the digital and physical space. Media, Culture & Society, 42(7–8), 1427–1442. https://doi.org/10.1177/0163443720932498

Sanchez-Monedero, J., Dencik, L., & Edwards, L. (2020). What does it mean to solve the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. ArXiv:1910.06144 [Cs]. http://arxiv.org/abs/1910.06144

Shields, R. (2003). The virtual. Routledge.

Sophia, Z. (1992). Virtual corporeality: A feminist view. Australian Feminist Studies, 7(15), 11–24. https://doi.org/10.1080/08164649.1992.9994641

Srinivasan, D. (2018). The antitrust case against Facebook. Berkeley Business Law Journal, 16(1). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3247362

Srnicek, N. (2017). Platform capitalism. Polity Press.

Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. https://doi.org/10.1016/j.respol.2013.05.008

van Dijck, J., Nieborg, D., & Poell, T. (2019). Reframing platform power. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1414

Viljoen, S. (2020). Democratic Data: A Relational Theory For Data Governance. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3727562

Wallis, K., & Ross, M. (2021). Fourth VR: Indigenous virtual reality practice. Convergence: The International Journal of Research into New Media Technologies, 27(2), 313–329. https://doi.org/10.1177/1354856520943083

Whittaker, M., Alper, M., Bennett, C., Hendren, S., Kaziunas, L., Mills, M., Morris, M., Rankin, J., Rogers, E., Salas, M., & Myers West, S. (2019). Disability, bias, and AI [Report]. AI Now Institute. https://ainowinstitute.org/disabilitybiasai-2019.pdf

Wong, A., Gills, H., & Peck, B. (2017). VR accessibility: Survey for people with disabilities. Disability Visibility. https://drive.google.com/file/d/0B0VwTVwReMqLMFIzdzVVaVdaTFk/view?resourcekey=0-3X6VtbtDNvcbResutE_EVA

Wood, A. J. (2020). Despotism on demand: How power operates in the flexible workplace. Cornell University Press.

XRSI. (2020). The XRSI privacy framework. https://xrsi.org/publication/the-xrsi-privacy-framework

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile books.

Footnotes

1. Specifically, CPU cooling. The Quest 2—the most recent iteration of the Oculus suite of VR devices—runs an overclocked processor (that is, running at speeds higher than those certified by the processor’s manufacturer) in order for the device to power its algorithmic odometry stack. To further integrate features such as facial expression and eye tracking would likely cause device overheating and breakdown.

2. As the Bundeskartellamt’s press release reads: “[t]he extent to which Facebook collects, merges and uses data in user accounts constitutes an abuse of a dominant position” (Bundeskartellamt, 2019, n.p.).

3. With Walmart, STRIVR offers 45 training modules and 17,000 Oculus Go headsets across 4,700 locations.

4. US 10,586,469 B2, granted 10 March 2020.

5. Indeed, Walmart specifically has a long and well documented history of discrimination surrounding promotion of its service workers (see Dukes vs. Walmart Stores, 2011), union-busting, and a predatory culture of worker control and coercion based on exploiting precarity and insecurity (see Haiven, 2013).

6. This has been noted within the context of AI resulting in the involuntary disclosure of disabilities to employers (Whittaker et al., 2019: 20).

7. Concerns mount even further once we consider the potential relationship between VR technology and actors like the private health insurance sector, which in the US has a unique and highly problematic coupling with one’s employment. A 2016 report by KPMG’s global strategy group, focused on the insurance sector, points to the potential value of VR and AR technology for actors in this industry. Of note, the report suggests that AR and VR companies could form “cross-sector partnerships”—that is, with the insurance sector—“leveraging their customer networks and detailed customer behavior data” (2016, p. 10). Indeed, such a claim is entirely feasible: we already see large tech companies like Amazon (with their biometric, wearable fitness device Halo) partnering with John Hancock Financial, a US life insurance company.

8. Certainly, as critics have noted throughout the company’s history, this has rung true beyond disability. In 2014, danah boyd wrote that the Oculus Rift’s testing on men meant that the headset’s workings privileged the proprioceptive capacities of men. This bias meant that the device was not calibrated for women, resulting in increased feelings of nausea for those bodies rendered ‘abnormal’ by Oculus’ development processes.
