Transnational materialities

José van Dijck, Utrecht University, Netherlands
Bernhard Rieder, New Media and Digital Culture, University of Amsterdam, Netherlands, B.Rieder@uva.nl

PUBLISHED ON: 30 Jun 2019 DOI: 10.14763/2019.2.1418

Abstract

This special issue of Internet Policy Review is the second to bring together the best policy-oriented papers presented at the annual conference of the Association of Internet Researchers (AoIR). The conference in Montréal, in October 2018, was organised around the theme of "Transnational materialities". As explained in this editorial, the contributions map the larger debate on internet governance research in terms of perspectives rather than disciplines. The eleven papers in this issue span a wide range of topics, including normative perspectives on how platforms shape democracy, conceptual perspectives on how to think about platform power, and social and legal views on data-driven governance.
Citation & publishing information
Received: March 1, 2019 Reviewed: May 29, 2019 Published: June 30, 2019
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Association of Internet Researchers, Platform governance, Data governance, Algorithmic governance
Citation: van Dijck, J. & Rieder, B. (2019). The recursivity of internet governance research. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1418

The recursivity of internet governance research

Introduction

Technological visionary Stewart Brand once remarked that “[o]nce a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road” (1987, p. 9). About forty years after the somewhat muddled invention of the internet and right after the 30th birthday of the web, it seems that these technologies have quite thoroughly rolled over contemporary societies. But instead of simply shaping our societies from the outside, the internet’s “message” – to speak with McLuhan – has become increasingly difficult to read. While the mythos of cyberspace as a new frontier has long faded, common terms like “internet culture” or even “online shopping” signal that there is some kind of elsewhere in the clouds behind our screens. But the stories about election tampering, privacy breaches, hate speech, or algorithmic bias that dominate the headlines are just one reminder that issues still commonly prefixed with “digital”, “internet”, “online”, or similar terms have fully arrived at the centre of collective life. Elsewhere is everywhere. Trends like datafication or platformisation have seeped deeply into the fabric of societies, and when scholars discuss questions of internet governance or platform governance, they know all too well that their findings and arguments pertain to social and cultural organisation in ways that go far beyond the regulation of yet another business sector.

It therefore comes as no surprise not only that the subject areas covered by conferences like the one the Association of Internet Researchers has organised every year since 2000 keep proliferating, but also that the stakes have grown in proportion. As technologies push deeper into public and private spheres, they encounter not only appropriations and resistances, but complex forms of negotiation that evolve as effects become more clearly visible. Steamroller and road, to stick with Brand’s metaphor, blend into a myriad of relations operating at different scales: locally, nationally, supra-nationally, and globally.

The centrality of the internet in general and online platforms in particular means that the number and variety of actors seeking to gain economic or political advantages continues to grow, pulling matters of governance to the forefront. While the papers assembled in this special issue do not fall into the scope of “classic” internet governance research focused on governing bodies such as ICANN or the W3C and the ways they make and implement decisions, they indeed highlight the many instances of shaping and steering that follow from the penetration of digital technologies into the social fabric. The term “governance” raises two sets of questions: how societies are governed by technologies and how these technologies should be governed in return (cf. Gillespie, 2018). These questions are complicated by the fact that technologies and services are deeply caught up in local circumstances: massive platforms like Facebook or YouTube host billions of users and are home to a vast diversity of topics and practices; data collection and decision-making involving computational mechanisms have become common practices in many different processes in business and government—processes that raise different questions and potentially require different kinds of policy response. Global infrastructures reconfigure local practices, but these local practices complicate global solutions to the ensuing problems.

This knotty constellation poses significant challenges to both the descriptive side of governance research concerned with analysis of the status quo and the prescriptive side that involves thinking about policy and, in extremis, regulation. The papers assembled here do not neatly fit into this distinction, however. Instead, they highlight the complicated interdependence between is and ought, to speak with Hume, and indicate a need for recursive dialogue between different perspectives that goes beyond individual contributions. In this sense, this special issue maps the larger field of debate emerging around governance research in terms of perspectives or entry points rather than disciplines or clearly demarcated problem areas. Three clusters emerge:

First, a normative perspective that testifies to and responds to the destabilisation of normativity that characterises contemporary societies, which are challenged on several levels at the same time. This involves an examination of the possibilities and underpinnings of critique: how can we evaluate our current governance and political perspectives in normative terms and thereby lay the ground for thinking about adaptations or alternative arrangements?

Second, a conceptual perspective concerned with the intellectual apparatus we use to address our current situation and render it intelligible. The authors in this group indeed argue that conceptual reconfigurations are necessary to capture many of the emerging fault lines, such as the need for transnational policy-making and the complex relationships between groups of stakeholders.

Third, an empirical perspective that asks how these more abstract concerns can be connected with understanding and evidence of actual practices and effects, and how these affect the lived realities of individuals and social groups. The diversity of situations indeed challenges and complicates theoretical discussion, but also plays a crucial role in shedding light on situations that may be opaque and counterintuitive.

We will discuss each of these perspectives in greater detail below, but suffice it to say that an adequate understanding of contemporary societies depends on their recursive interrelation: normative engagement serves as moral grounding, conceptual work sharpens our analytical grids, and empirical evidence connects us to the actual realities of lived lives. Internet researchers are tasked with the responsibility to advance along all three lines, to increase our knowledge of the world we live in and to open pathways for policy responses that are up to the considerable challenges we face.

Normative perspectives: governing the data-subject

Research into the governance of platform-based, data-fuelled, and algorithmically driven societies is obviously informed by economic and political theories. Over the past few years, several scholars have critically interrogated orthodox economic and political models, such as capitalism and liberal democracy, to find out whether they still apply to societies where offline activities—private or public—are increasingly scarce (Zuboff, 2019; Jacobs and Mazzucato, 2016; Mayer-Schönberger and Ramge, 2018). Wavering between “surveillance capitalism” and “algocracy”, markets can be seen adapting to the advent of data as a new resource and of predictive analytics as significant tools that turn users into “data-subjects”. But the study of data-subjects cannot easily be delineated as the study of “citizens” or “consumers” fitting the contextual parameters of “democracies” and “markets”. Normative perspectives cover economic and political principles but also pertain to moral principles—norms and values; the study of data-subjects, in other words, also involves the fundamental rights of human beings participating in “democracies” and “markets”.

Norms and principles are often invisible, hidden in the ideological folds of a social fabric woven together by an invisible technological apparatus that barely leaves a trace of its imprint. It is important to lay bare the normative perspectives by which the internet is governed; it is equally important to articulate and discuss the normative perspectives on the basis of which the internet should be governed—what we called above the complicated interdependence between is and ought. Contributing perspectives from sociology, political economy, and philosophy, the authors of the first three articles in this special issue each highlight a different aspect of “governing the data-subject”: as an economic resource, as a citizen in a democracy, and as an autonomous individual. All three papers take a broader view of data-subjects as the centre of data practices and try to rethink the normative frameworks by which they are governed.

Nick Couldry and Ulises Mejias propose the political-historical perspective of “data colonialism” to dissect the new social order that has been the result of rapid datafication linked to extractive capitalism. Data colonialism, they argue, is about more than capitalism; it is “human life itself that is being appropriated … as part of a reconstruction of the very spaces of social experience.” Colonialism should thus not be understood metaphorically, and neither should data simply be seen as the “new oil”; data colonialism is a new phase in the history of colonialist expansion—a phase that is characterised by a massive transformation of humanity’s socio-legal and economic order through the appropriation of human life itself by means of data extraction. The data-subject emerging from this perspective is at once personal and relational. Data are not “personal” in the sense that they are “about” our individual selves, but they emerge as constructions of data points—“data doubles”—out of a myriad of data sets. Hence, privacy is important for individuals and collectives: data doubles are projections of the social and thus contribute to reshaping social realities. Couldry and Mejias conclude that existing legal approaches and policy frameworks are profoundly inadequate when it comes to governing datafied societies. Instead, they propose a radical reframing of regulatory discourse that calls into question the direction and rationale of a social order resting on exploitative data extraction.

Starting from the rapid shift from broadly optimistic attitudes concerning the relationship between digitalisation and democracy to broadly negative ones, Jeanette Hofmann argues that the fundamental relationship between media and democratic life should be (re)considered in greater conceptual depth to form a starting point for a critical perspective on governance. Instead of merely describing the “effect” or “influence” of media, she makes a distinction between medium and form that highlights the “alterability” of technologies and the normatively charged struggles over architecture and design that ensue. This perspective allows for a reading of the internet’s history through the lens of shifting and competing ideological models, through “different modes of social coordination and political regulation, which became inscribed as operational principles and standards into the network architecture and as such again subject of political interpretation”. While concepts like “connective action” (Bennett and Segerberg, 2012) emphasise the distributed character of the internet, Hofmann argues that the contemporary emergence of digital platforms still lacks a clear appreciation of its consequences for democratic agency. Only a deeper conceptual understanding of the treacherous waters of mediated democracy would allow for a programmatic appropriation of alterability and the realisation of “unrealised alternatives”.

Daniel Susser, Beate Roessler, and Helen Nissenbaum move from broad conceptualisations of digital societies to a more fine-grained level of analysis that deals with a phenomenon that is often mentioned when discussing potential harms but is rarely examined in greater depth: the notion of (online) manipulation. Starting from the specific possibilities for steering and controlling that digital platforms incorporate, they argue that core liberal values—autonomy in particular—are under threat when cognitive biases and data profiles can easily be exploited through mechanisms that often remain hidden. But the gist and merit of this paper lie not so much in highlighting these increasingly well-known phenomena as in submitting them to a normative assessment that connects to existing policy discussions, proposing concrete measures for “preventing and mitigating manipulative online practices”. The authors thus invest precisely in what we mean by recursivity: the connection between descriptive and prescriptive modes as well as the tighter coupling between academic research and government policy.

Conceptual perspectives: digital governance between policy-making and politics

Gravitating between what is and what ought are conceptual perspectives on internet governance: what needs to be done to get us from current (inadequate) legal and policy frameworks to frames that work? The papers in this section critically assess foundational notions such as markets, consumers, companies, stakeholders, agreements, and contracts—notions on which many of our governance structures rest but which have become porous, to say the least. If “classic” governance structures no longer seem to apply to a platform-based, data-fuelled, and algorithmically driven society, how can they be reconceptualised? Such reframing and retooling exercises inevitably raise questions of policy-making and political manoeuvring. Not everything that can be theoretically reconceived is politically conceivable. A useful political reality check is to compare different national governance frameworks and show how policy-making for the internet is an intensely (geo)political affair. The conceptual perspectives in this section range from the very broad to the very specific: they interrogate the foundations of platform power and how power is distributed between state, market, and civil society actors (Van Dijck et al.; Gorwa); they compare (trans)national initiatives of data governance (Meese et al.) and probe the geopolitical implications of compliance with regulatory standards (Meese et al.; Tusikov); and finally, they study how the digital rendering of consumer-facing contracts can be both a threat and an opportunity (Cornelius).

José van Dijck, Thomas Poell, and David Nieborg probe the very assumptions underlying recent decisions by the European Commission to impose substantial fines upon Alphabet-Google for anti-competitive behaviour. They argue that the concepts of consumer welfare, internet companies, and markets—concepts in which many regulatory frameworks are anchored—no longer suffice to capture the complex interrelational and dynamic nature of online activities. Instead, they propose expansive concepts such as citizen well-being, an integrated platform ecosystem, and societal platform infrastructures to inform policy-making efforts. But more than a theoretical proposal, their “reframing power” exercise hints at the need for recursive internet governance research. Researchers should help policy-makers in defining the dynamics of platform power by providing a set of analytical tools that help explain the complex relationships between platforms and their responsible actors. Armed with detailed insights from national and comparative case studies, policy-makers and politicians can help articulate regulatory principles at the EU level.

Conceptual rethinking is obviously not restricted to formal regulatory frameworks, but also extends into informal governance arrangements. Robert Gorwa, in his contribution to this special issue, reviews the growing number of non-binding governance initiatives that have been proposed by platform companies over the past few years, partly in response to mounting societal concerns over the moderation of user-generated content. The question “who is responsible for a fair, open, democratic digital society across jurisdictions?” is picked up not just by (transnational) bodies like the EU, but by a variety of actors in multi-stakeholder organisations. Companies like Facebook and Google seek out provisional alliances to create “oversight bodies” and other forms of informal governance. However, as Gorwa shows, the power relationships in the “governance triangle” of companies, states, and civil society actors in these informal arrangements remain unbalanced because civil society actors are notoriously underrepresented. The pressing issue is responsibility rather than liability: we are all responsible for a fair, open, and democratic society, but “we” is not an easy-to-define collective concept. Detailed analyses of big platform companies’ “spheres of influence” through informal arrangements—in conjunction with in-depth analyses of formal regulatory toolboxes, as suggested in the previous article—are needed to map the complex power relationships between actors with varying degrees of power. Once again, recursivity is the magic word: researchers inform policy-makers who inform researchers.

James Meese, Punit Jagasia, and James Arvanitakis, in their article “Citizen or consumer?”, continue the reframing exercise of this section by comparing data access rights in the EU and Australia. They ask whether the two regions’ regulatory frameworks—the General Data Protection Regulation (GDPR) versus the Consumer Data Right (CDR)—are grounded in different ideological concepts of citizen versus consumer. The authors show the deep interpenetration of policy-making and politics. In Europe, this results in the GDPR’s strong emphasis on protecting fundamental rights of citizens, such as privacy and data protection, against (ab)use by companies and governments. In Australia, the CDR betrays clear signs of a neoliberal approach which grants individual rights in the context of markets. This concrete comparison between Europe’s and Australia’s regulatory efforts on data protection signals the importance of including ideological and (geo)political premises in a conceptual approach to governance. Across the globe, we are witnessing a clash between market-oriented approaches and approaches that start from the fundamental rights and freedoms of citizens. Whereas the GDPR, in the eyes of some Europeans, does not go far enough in the latter direction, for Australians it would mean straying considerably from the former.

A second comparative perspective is provided by Natasha Tusikov, who closely examines the effects of US regulation on China’s internet governance in the area of intellectual property rights protection. A detailed analysis of policy and regulatory documents illuminates the power choreography between American private actors, American state regulators, and Chinese platform companies: the US state exerts coercive power on Chinese actors to comply with American standards, as illustrated by Alibaba adopting US-drafted rules to prohibit the sale of counterfeit products via its Taobao marketplace. Tusikov’s careful reconstruction of the “compliance-plus” process demonstrates that US dominance in transnational platform governance continues a long history of setting rules and standards to benefit its own economic interests and those of its industry actors. Such analysis of the reciprocal fine-tuning between regulation, policy-making, and politics is extremely relevant for understanding the recent trade war between the US and China, a clash between two giants seeking to secure their economic, political, and national security interests through internet governance. The world of geopolitics is no longer external to issues of internet governance; on the contrary, disputes concerning internet governance are at the core of geopolitical conflicts.

Kristin Cornelius’s contribution, finally, approaches the intersection of technology and governance from a very different angle. Looking at the explosive proliferation of “consumer-facing standard form contracts” such as Terms of Service – contracts we constantly submit to yet hardly ever read – she argues that the “digital form” of these documents merits closer attention. Taking a conceptual perspective grounded in information science and document engineering, she shows not only how the technical form that implements a legal relationship has a normative dimension in the sense that it structures power relations, but also that this technicity is an opportunity: emphasising elements such as standardisation, stabilisation, and machine-readability would not necessarily change the content of these (zombie) contracts, but would allow for different forms of social embedding that keep them from coming back to haunt the users they apply to. Looking at contracts as documents with specific material forms, instead of limiting them to their abstract legal meaning, shows how crucial conceptual frames have become for making sense of a situation where technical principles shake established lines of reasoning.

Empirical perspectives: data uses and algorithmic governance in everyday practices

The last section of this special issue brings us from the higher spheres of politics and policy-making to the concrete everyday practices in which “data subjects” play a central role. The three papers in this section scrutinise empirical cases concerning actual data uses which, in turn, serve to inform researchers and policy-makers intent on reshaping internet governance. Whether adopting the notion of “citizens” or “consumers”, these articles ground their research perspectives in empirical observations and interrogations of data subjects—the way they are steered by algorithms and how they respond to certain manipulations of online behaviour. Moreover, all three papers seek to tie concrete, empirical research to normative and conceptual perspectives: from what is to what ought and what could be. Whether the cases concern “citizen scoring” practices at the local level (Dencik et al.), revolts against YouTube’s algorithmic advertising and moderation practices (Kumar), or the broader question of how to study real-world effects of algorithmic governance in different areas of everyday life (Latzer and Festic)—they all come back to the recursivity of research: how to make sense of current algorithmic and data practices in light of the wider political and economic transformations of internet governance?

Lina Dencik, Joanna Redden, Arne Hintz, and Harry Warne provide an insightful analysis of data analytics uses in UK public services. The authors draw on a large number of databases and interviews to investigate what they call “citizen scoring practices”: the categorisation and assessment of data (e.g., financial, welfare, health, and school attendance data) to predict citizen behaviour at both the individual and the population level. Significantly, Dencik et al. show how the interpretation of data analytics is the result of negotiation between the various stakeholders in data-driven governance, from the private companies that provide the data analytics tools to the public sector workers who handle them. While the use of data analytics in public service environments is steadily increasing, there appears to be no shared understanding of what constitutes appropriate technologies and standards. And yet, such a “golden view” seems to inform the various data-driven analytics practices at the local level. One important goal of this paper is to understand the heterogeneity of local data-based practices against the backdrop of a regulatory vacuum and, quite often, an austerity-driven policy regime. Hopefully, studies like this one provide a much-needed empirical basis for articulating policies that address broader concerns of data use with regard to discrimination, stigmatisation, and profiling.

The article by Sangeet Kumar moves to a very different arena, one where data-driven governance has been at the centre from the very beginning: analysing the so-called “Adpocalypse”, an advertiser revolt against YouTube in 2017, he shows how decisions concerning the monetisation of videos have complemented practices such as content moderation or deplatforming as instruments of governance. More subtle in nature, they may nonetheless have a large effect on the overall composition of the platform by steering money flows away from conflictual yet important subjects, transforming YouTube—and the web more broadly—from “a plural, free and heterogenous space” into a “sanitised, family-friendly and scrubbed version” of itself. The paper ends with a call for wider stakeholder participation to place the inevitable decisions on rules and modes of governance on a broader footing. Given the outsized role YouTube has come to play in the emerging “hybrid media system” (Chadwick, 2013), one could rightfully ask whether platforms of this size should be treated as “public utilities”, as Van Dijck et al. suggest in their conceptual reframing.

While Michael Latzer and Noemi Festic’s contribution does not rely on empirical research itself, it is very much concerned with the question of how empirical evidence on complicated and far-reaching concepts like algorithmic governance can be collected in the first place. While theoretical models proliferate and efforts towards algorithmic accountability gain traction, how the various mechanisms for selection, ranking, and recommendation that users regularly encounter are actually integrated into the practices of everyday life remains elusive. Qualitative studies have given us some idea concerning effects and imaginaries on the user side, but “generalisable statements at the population level” are severely lacking. Such a broad ground-level view is, however, essential for informed policy choices. The authors therefore propose a programmatic framework and mixed-methods approach for studying the actual consequences of algorithmic governance on concrete user practices, in the hope of filling a research gap that continues to blur the picture, despite the heightened attention the topic has recently received. The methodologies used to produce empirical insights thus constitute yet another area where internet researchers have a crucial social role to play, despite the significant challenges they face.

Conclusion

It may still be too early to omit the terms “digital”, “online”, or “internet” as meaningful adjectives when discussing the transformation of societies in which data, algorithms, and platforms play a central and crucial role. Obviously, we are no longer restricted by a predominantly technological discourse when discussing the internet and its governance—like we were in the 1990s when most researchers saw the steamroller coming, but did not quite know how to gauge its power and envision its implications. And, perhaps on a hopeful note, we have not yet become part of the “road” which the steamroller threatens to flatten. However, it takes a conscious and protracted effort for researchers to understand the “internet” and the “digital” as transformative forces before they become part of the road we walk on. And that is what makes the recursivity of governance and policy research so relevant at precisely this moment in time.

When studying the effects of data-informed practices first-hand, internet researchers can detect patterns in how society is governed by platforms; in turn, their insights and conceptual probes might inform regulators and policy-makers to adjust and tweak existing policies. There is a clear knowledge gap, an asymmetry of information that affects not only researchers as they study complicated actor constellations and powerful companies, but also democratic institutions themselves. Governments may be able to wield considerable power in specific situations, in particular around market competition, but they are nonetheless increasingly dependent on the multifaceted input of a wide range of disciplines. Internet researchers may be rightfully sceptical about engaging with institutions that are clearly imperfect; but our current situation requires that we accept our responsibilities as knowledge producers and push the insights we develop beyond the boundaries of our disciplines and institutions. The recursive nature of normative, conceptual, and empirical approaches hopefully encourages collectives of researchers and policy-makers to cooperate in governance design.

References

Bennett, W. L., & Segerberg, A. (2012). The Logic of Connective Action. Information, Communication & Society, 15(5), 739–768. doi:10.1080/1369118X.2012.670661

Brand, S. (1987). The Media Lab: Inventing the Future at MIT. New York: Viking Penguin.

Chadwick, A. (2013). The Hybrid Media System. Oxford; New York: Oxford University Press.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.

Jacobs, M., & Mazzucato, M. (2016). Rethinking Capitalism: Economics and Policy for Sustainable and Inclusive Growth. London: Wiley.

Mayer-Schönberger, V., & Ramge, T. (2018). Reinventing Capitalism in the Age of Big Data. New York: Basic Books.

Zuboff, S. (2019). The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.