A guideline for understanding and measuring algorithmic governance in everyday life

Michael Latzer, Department of Communication and Media Research (IKMZ), University of Zurich, Switzerland, m.latzer@ikmz.uzh.ch
Noemi Festic, Department of Communication and Media Research (IKMZ), University of Zurich, Switzerland, n.festic@ikmz.uzh.ch

PUBLISHED ON: 30 Jun 2019 DOI: 10.14763/2019.2.1415

Abstract

Algorithmic governance affects individuals’ reality construction and consequently social order in societies. Vague concepts of algorithmic governance and the lack of comprehensive empirical insights into this kind of institutional steering by software from a user perspective may, however, lead to unrealistic risk assessments and premature policy conclusions. Therefore, this paper offers a theoretical model to measure the significance of algorithmic governance and an empirical mixed-methods approach to test it in different life domains. Applying this guideline should lead to a more nuanced understanding of the actual significance of algorithmic governance, thus contributing to an empirically better-informed risk assessment and governance of algorithms.
Citation & publishing information
Received: March 1, 2019 Reviewed: May 29, 2019 Published: June 30, 2019
Licence: Creative Commons Attribution 3.0 Germany
Funding: This project has received funding from the Swiss National Science Foundation (SNF)
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Algorithmic governance, Policymaking, Reality construction, Everyday life, Mixed-methods
Citation: Latzer, M. & Festic, N. (2019). A guideline for understanding and measuring algorithmic governance in everyday life. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1415

This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

The growing use, importance and embeddedness of internet-related algorithms in various life domains is widely acknowledged. Academic and public debates focus on a spectrum of implications in everyday life, caused by internet-based applications that apply automated algorithmic selection (AS) for, among other things, searches, recommendations, scorings or forecasts (Latzer, Hollnbuchner, Just, & Saurwein, 2016; Willson, 2017). These discussions are often combined with reflections on growing automation in general and the impact of artificial intelligence (e.g., machine learning) in particular (Larus et al., 2018). Questions emerge as to how to analytically grasp and assess the consequences of the diffusion of algorithmic selections in modern societies, which some observers characterise as algocracies (Aneesh, 2009) in an algorithmic age (Danaher et al., 2017), marked by growing relevance of informatics and statistics in the governance of societies.

In this paper we provide a guideline for answering these questions. We (1) take a governance perspective and propose understanding the influence of automated algorithmic selections on daily practices and routines as a form of institutional steering (governance) by technology (software). This institutional approach is combined with practice-related concepts of everyday life, in particular of the daily social and mediated constructions of realities, and embraces the implications of algorithmic governance in selected life domains. Based on these combined approaches, and on a review of empirical algorithmic-governance literature that identifies research gaps, we (2) develop a theoretical model that includes five variables to measure the actual significance of algorithmic governance in everyday life from a user perspective. To examine these variables for different life domains, an innovative empirical mixed-methods approach is proposed, which includes qualitative user interviews, an online survey and user tracking.

Results from applying the proposed guideline should contribute to a more nuanced understanding of the significance of algorithmic governance in everyday life and provide empirically informed input for improved risk assessments and policies regarding the governance of algorithms. Accordingly, applying this guideline should help both academics and practitioners to conduct policy analyses and assist them in their policy-making.

A nuanced understanding of algorithmic governance in everyday life

In the fast-growing academic and non-academic literature on algorithms, their implications for daily life are summarised using a variety of sometimes misleading and only vaguely defined terms, ranging from algocracy and algorithmic selection to algorithmic regulation and algorithmic decision-making. In the following, a nuanced understanding of “algorithmic governance” is developed from an institutional perspective that can form the basis for policy analyses and policy-making.

Governance can be understood as institutional steering (Schneider & Kenis, 1996), marked by the horizontal and vertical extension of traditional government (Engel, 2001). Governance by algorithms, also referred to as algorithmic governance, captures the intentional and unintentional steering effects of algorithmic-selection systems in everyday life. Such systems are part of internet-based applications and services, applied by private actors / commercial platforms (e.g., music recommender systems) and political actors (e.g., predictive policing). They include both institutional steering with and by algorithms in societies, i.e., as tools or as (semi-) autonomous agents, either in new or already established commercial and political governance systems. Our understanding of algorithmic governance in everyday life overlaps with Yeung’s (2018) algorithmic regulation. But algorithmic governance in everyday life goes far beyond ‘intentional attempts to manage risk or alter behaviour in order to achieve some pre-specified goal’, and refers not only to ‘regulatory governance systems that utilise algorithmic decision making’ (Yeung, 2018, p. 3). Unintentional effects of automated algorithmic selections are a major part of algorithmic governance and call for special attention in policy analyses and policy-making.

Danaher et al. (2017) use the terms algorithmic governance and algocracy largely synonymously, referring to the intertwined trends of (1) growing reliance on algorithms in traditional corporate and bureaucratic decision-making systems, and (2) the outsourcing of decision-making authority to algorithm-based decision-making systems. In accordance with Aneesh (2009) and Danaher (2016), we do not understand algocracy as the final stage of technological singularity ‘when humans transcend biology’, as foreseen by Google’s director of engineering Ray Kurzweil (2005), but rather as a kind of governance system where algorithms govern (i.e., shape, enable and constrain activities) either as intentionless tools of human agents or as non-human agents equipped with a certain autonomy. 1 Together, and also as part of other kinds of (traditional) governance systems (e.g., legal systems, self-regulations, cultural norms and traditions), they co-govern societies. The relative importance of algorithmic selections in daily routines and their overall effect on social order in societies, however, remain open research questions. Empirically assessing the significance of algorithmic governance is particularly important since accurate assessments of the role of algorithms (e.g., degree of automation and autonomy) and associated risks are a prerequisite for the development of adequate public policies.

Different aspects of algorithmic governance have received attention from various disciplines, leading to a large but fragmented body of research. A comprehensive empirical assessment of the significance of algorithmic selection in daily life requires both concepts of algorithmic selection and of everyday life that can be operationalised. This article commences with a working definition of algorithmic selection as the automated assignment of relevance to certain selected pieces of information and a focus on internet-based applications that build on algorithmic selection as the basic unit of analysis (Latzer et al., 2016).
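To make this working definition more tangible, the following minimal Python sketch illustrates algorithmic selection as the automated assignment of relevance to selected pieces of information. The items, weights and scoring rule are illustrative assumptions and not part of the cited framework.

```python
# Minimal illustration of algorithmic selection: the automated assignment of
# relevance to pieces of information, followed by a ranked selection.
# Items, weights and the scoring rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    keywords: set
    popularity: float  # e.g., a normalised usage signal between 0 and 1

def relevance(item: Item, query_terms: set, user_interests: set) -> float:
    """Toy scoring rule ('logic + control'): keyword overlap with the query
    and with the user profile, plus a small popularity bonus."""
    query_match = len(item.keywords & query_terms)
    interest_match = len(item.keywords & user_interests)
    return 2.0 * query_match + 1.0 * interest_match + 0.5 * item.popularity

def select(items, query_terms, user_interests, k=2):
    """Return the k items with the highest assigned relevance."""
    ranked = sorted(items, key=lambda i: relevance(i, query_terms, user_interests),
                    reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    catalogue = [
        Item("Local election results", {"politics", "election"}, 0.9),
        Item("New fitness tracker review", {"fitness", "wearables"}, 0.6),
        Item("Jazz playlist of the week", {"music", "jazz"}, 0.4),
    ]
    for item in select(catalogue, query_terms={"election"}, user_interests={"politics"}):
        print(item.title)
```

Even in this toy form, the selection both enables (it surfaces a few items) and constrains (it hides the rest), which is the sense in which such applications are treated as governing in the following sections.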

Algorithmic selection applications as units of analysis

The emerging field of critical algorithm studies can roughly be grouped into studies that centre on (single) algorithms per se as their unit of analysis, and those that focus on the socio-technical context of AS applications. Studies focusing on the algorithm itself show the capabilities of AS and aim to detect an algorithm’s inner workings, typically by reverse engineering the code (Diakopoulos, 2015), experimental settings (Jürgens, Stark, & Magin, 2015), or code review (Sandvig, Hamilton, Karahalios, & Langbort, 2014). Often, however, they are not able to determine the overall social power that algorithms exert, because algorithms are studied in isolation and user perceptions and behaviour are not sufficiently accounted for. Generally, a purely technical definition of algorithms as encoded procedures that transform input data into specific output based on calculations (e.g., Kowalski’s, 1979, ‘algorithm = logic + control’) and the mere uncovering of the workings of an algorithm do not reveal much about the risks of their applications and their social implications. Algorithms remain ‘meaningless machines’ (Gillespie, 2014) or ‘mathematical fiction’ (Constantiou & Kallinikos, 2015) until they are connected to real-world data (Sandvig et al., 2014). This is accounted for in studies on the socio-technical context of AS, where algorithms are viewed as situated artefacts and generative processes embedded in a complex ecosystem (Beer, 2017; Willson, 2017). As such, algorithms are only one component in a broader socio-technical assemblage (Kitchin, 2017), comprising technical (e.g., software) and human (e.g., uses) components (Willson, 2017). By focusing on internet-based applications that build on algorithmic selection as units of analysis and on the societal functions they perform (see Table 1), this article situates itself within the second group of research.

Table 1: Functional typology of AS applications (adapted from Latzer et al., 2016)

Search: General search engines (e.g., Google search, Bing, Baidu); Special search engines (e.g., findmypast.com, Shutterstock, Social Mention); Meta search engines (e.g., Dogpile, Info.com); Semantic search engines (e.g., Yummly); Question and answer services (e.g., Ask.com)

Aggregation: News aggregators (e.g., Google News, nachrichten.de)

Observation/surveillance: Surveillance (e.g., Raytheon’s RIOT); Employee monitoring (e.g., Spector, Sonar, Spytec); General monitoring software (e.g., Webwatcher)

Prognosis/forecast: Predictive policing (e.g., PredPol); Predicting developments: success, diffusion etc. (e.g., Sickweather, scoreAhit)

Filtering: Spam filter (e.g., Norton); Child protection filter (e.g., Net Nanny)

Recommendation: Recommender systems (e.g., Spotify, Netflix)

Scoring: Reputation systems: music, film, and so on (e.g., eBay’s reputation system); News scoring (e.g., reddit, Digg); Credit scoring (e.g., Kreditech); Social scoring (e.g., PeerIndex, Kred)

Content production: Algorithmic journalism (e.g., Quill, Quakebot)

Allocation: Computational advertising (e.g., Google AdSense, Yahoo!, Bing Network); Algorithmic trading (e.g., Quantopian)

The typology in Table 1 demonstrates how broad the scope of AS applications has become. An approach that focuses on socio-technical and functional aspects makes the social, economic and political impact of algorithms accessible to research (Latzer et al., 2016), as well as the power algorithms may have as gatekeepers (Jürgens, Jungherr, & Schoen, 2011), agents (Rammert, 2008), ideologies (Mager, 2012) or institutions (Napoli, 2014). The institutional governance perspective applied in this paper identifies algorithms as norms and rules that affect daily behaviour by limiting activities, influencing choices, and creating new scope for action. They shape how the world is perceived and what realities are constructed. In essence, algorithms co-govern everyday life and impact the daily individual construction of realities—the individual consciousness—and consequently the collective consciousness, which in turn makes them a source and factor of social order, resulting from a shared social reality in a society (Just & Latzer, 2017).

Algorithms co-govern daily life as instruments and actors

The governing role of algorithms needs further analytical specification. As general-purpose technologies (Bresnahan, 2010), algorithms have an impact on a wide range of life domains, and as enabling technologies their impact is contingent on social-use decisions. From a co-evolutionary perspective (Just & Latzer, 2017), algorithmic governance is a complex, interconnected system of distributed agency (Rammert, 2008) between humans and software, a co-evolutionary circle of permanent shaping and being shaped at the same time. Algorithms co-govern what can be found (e.g., algorithmic searches), what is anticipated (e.g., algorithmic forecasts), consumed (e.g., algorithmic recommendations) and seen (e.g., algorithmic filtering), and whether it is considered relevant (e.g., algorithmic scoring) (Just & Latzer, 2017). They thereby contribute to the constitution and mediation of our lives (Beer, 2009). The use of only vaguely defined terms like algorithmic decision-making can be misleading regarding the assessment of social consequences of different kinds of algorithmic governance. Various analytical distinctions should be kept in mind when studying algorithmic governance:

Algorithmic selection applications on the internet differ widely in their degree of automation and autonomy. At one end of the spectrum, algorithms are used as instruments with imposed agency to exert power without any autonomy, with predefined and widely predictable outcomes 2. At the other end, machine-learning algorithms govern with a delegated agency that implies a predefined autonomy, leading to unforeseeable results 3.

To indicate the actual autonomy of algorithmic systems on the internet, a classification similar to that applied to self-driving cars may be helpful, where labels from 1 (low) to 5 (full) mark the degree of automation (Bagloee, Tavana, Asadi, & Oliver, 2016). Literature on automated weapons systems provides another instructive way to categorise the control that remains with humans in automated decision-making systems: humans are classified as being either (1) in-the-loop and fully in control, (2) on-the-loop and able to intervene if deemed necessary, or (3) off-the-loop and without any option to intervene (Citron & Pasquale, 2014); both classifications are illustrated in the schematic sketch below. This distinction proves helpful, for example, when liabilities for algorithmic governance are evaluated. The term automated decision-making algorithms often refers to decisions by algorithms without human involvement (off-the-loop), and has already led to regulatory interventions. The use of automated decision-making systems with significant legal or social effects (e.g., fully automated tax assessments) is restricted, for example, by Article 22(1) of the European General Data Protection Regulation (GDPR), whereas the use of other automated decision-making systems that are based on non-personal data is not restricted (Martini & Nink, 2017).

Algorithmic selections as part of internet-based applications are related to everyday human decisions in different ways. In most of the functional categories listed in Table 1, automated algorithmic selections are applied to augment and enhance everyday human decision-making but not to fully replace it. This is predominantly the case for algorithmic recommendations, filtering and scoring results. Nevertheless, it has to be considered that in many cases (e.g., credit scoring, predictions on recidivism, ranking of job candidates) it becomes increasingly problematic for those responsible to ignore or counteract algorithmic results in their decisions, in particular if these algorithmic outputs are accessible to others or to the public. Accordingly, AS applications that are aimed at enhancing human decisions can de facto evolve into systems where humans merely remain on-the-loop and will only intervene in exceptional cases.

Further, algorithmic selections vary strongly in the scope of their potential consequences (social and economic risks). For instance, there is a significant difference between a simple algorithmic filtering that determines which post from a friend is shown in someone’s social media feed and a far more consequential algorithmic scoring of someone’s creditworthiness. Accounting for the case-specific scope and context of algorithmic selections is therefore highly relevant for appropriate policy conclusions. For instance, two technologically identical algorithms, one applied to recommending books and the other to recommending medical treatments, call for very different policies due to the disparity of risks of these automated algorithmic selections.

Algorithmic (co-)governance results in opportunities and risks. The advantages of algorithmic governance such as efficiency gains, speed, scalability and adaptability are compromised by risks ranging from bias, manipulation and privacy violations, to social discrimination, heteronomy and the abuse of market power (Latzer et al., 2016), or by efficiency-based (inaccurate decisions) and fairness-based objections (unfair decisions) in algorithmic governance (Zarsky, 2016).
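The distinctions outlined above can be summarised schematically. The following Python sketch is a hypothetical illustration only: the automation levels, loop positions and the rule of thumb are assumptions made for exposition, not categories proposed by this article, by the cited literature or by any regulation.

```python
# Schematic sketch of two analytical distinctions discussed above:
# (1) the degree of automation/autonomy of an AS application and
# (2) the remaining human role in the decision loop.
# The categories and the example mapping are illustrative assumptions.

from enum import Enum

class AutomationLevel(Enum):
    # loosely analogous to the 1 (low) to 5 (full) scale used for self-driving cars
    TOOL = 1          # imposed agency, fully predictable output (e.g., alphabetical sorting)
    ASSISTED = 2
    PARTIAL = 3
    HIGH = 4
    AUTONOMOUS = 5    # delegated agency, machine-learned, unforeseeable results

class HumanRole(Enum):
    IN_THE_LOOP = "human confirms every decision"
    ON_THE_LOOP = "human monitors and may intervene"
    OFF_THE_LOOP = "no human intervention possible"

def requires_extra_scrutiny(level: AutomationLevel, role: HumanRole,
                            high_stakes: bool) -> bool:
    """Toy rule of thumb: highly automated, high-stakes selections without a
    human in or on the loop deserve the closest policy attention."""
    return high_stakes and role is HumanRole.OFF_THE_LOOP and level.value >= 4

# Example: a credit-scoring system that decides without human review
print(requires_extra_scrutiny(AutomationLevel.AUTONOMOUS,
                              HumanRole.OFF_THE_LOOP, high_stakes=True))  # True
```

Where an application falls on these two dimensions, together with the scope of its consequences, shapes how closely it warrants regulatory attention.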

In sum, while algorithms are increasingly active as tools and actors in governance regimes that affect many life domains on a daily basis, the relative importance of algorithmic governance is far from clear. The practice-related approach proposed here aids the empirical assessment and understanding of this significance of algorithmic governance.

A practice-related approach to everyday life

Everyday life as a field of research is rooted in various theoretical traditions (Adler, Adler, & Fontana, 1987), among them phenomenological sociology (Schütz, 2016), historical materialism (Heller, 1984) and De Certeau’s (1984) anthropology.

As for the area of inquiry, this paper takes a practice-related approach (Pink, 2012). Since the field lacks comprehensive empirical research that goes beyond individual services, this article suggests studying the significance of algorithmic governance for everyday life in a more inclusive manner. In order to derive an executable research design, however, it is necessary to analytically segment ‘everyday life’. We focus on four domains of everyday life that span central areas of everyday practice: (a) social and political orientation, (b) recreation, (c) commercial transactions, and (d) socialising. This categorisation is derived from a representative, country-wide CATI survey of internet use in Switzerland. While an infinite number of activities can be performed on the internet, a confirmatory factor analysis revealed four distinct internet usage factors that group the most important internet activities for Swiss internet users (see Büchi, Just, and Latzer, 2016 for an overview of the activities for each domain). Therefore, this categorisation lends itself to an analytical distinction between different life domains in which people engage in online activities and use AS applications in particular. It is important to note that these life domains are obviously closely interrelated and do not necessarily represent the categories in which individuals perceive their everyday lives. Although there is no standard conceptual framework for everyday life, Sztompka (2008), for example, points to its various defining traits, such as that everyday life events include relationships with other people, that they are repeated and not unique, have a temporal duration, and often happen non-reflexively, following internalised habits and routines.
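The grouping of internet activities into four usage domains rests on a confirmatory factor analysis of CATI survey data (Büchi, Just, & Latzer, 2016). As a rough, purely illustrative stand-in, the following sketch runs an exploratory factor analysis on simulated activity items with scikit-learn; the item names, the simulated responses and the two-factor structure are assumptions and do not reproduce the original analysis.

```python
# Illustrative sketch: reducing correlated internet-activity items to a few
# usage factors. Exploratory factor analysis on simulated data serves as a
# stand-in for the confirmatory factor analysis on CATI survey data reported
# in Büchi, Just, & Latzer (2016); item names and data are assumptions.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_respondents = 500

# Simulate two latent usage dimensions (e.g., "orientation" and "recreation")
latent = rng.normal(size=(n_respondents, 2))

items = {
    "read_news":       latent[:, 0] * 0.9,
    "search_politics": latent[:, 0] * 0.8,
    "stream_music":    latent[:, 1] * 0.9,
    "play_games":      latent[:, 1] * 0.7,
}
X = np.column_stack([v + rng.normal(scale=0.5, size=n_respondents)
                     for v in items.values()])

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)

# Loadings show which activities cluster on which usage factor
for name, loadings in zip(items, fa.components_.T):
    print(f"{name:16s}", np.round(loadings, 2))
```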

In order to appropriately account for the increasing role of technology, research must go beyond human relationships as one defining characteristic of everyday life. The theory of the social or mediated construction of reality (Berger & Luckmann, 1967; Couldry & Hepp, 2016) is fruitful for the understanding of how social interactions and media technologies shape the perception of the social world. Berger and Luckmann (1967) argue that the social world is constructed through social interactions and underlying processes of reciprocal typification and interpretation of habitualised actions. In this meaningful process, a social world is gradually constructed whose habitualised actions provide orientation, make it possible to predict the actions of others and reduce uncertainty. This leads to an attitude that the world in common is known, a natural attitude of daily life (Schütz & Luckmann, 2003). Accordingly, the resources, interpretations and the common-sense knowledge of routinised practices in everyday life—which increasingly includes AS applications—are seemingly self-evident and remain unquestioned.

This paper particularly aims to expose what is generally left unquestioned and to propose a guideline for the assessment of perceptions and use of AS applications for a wide range of everyday practices in order to better understand their impact, associated risks, and the need for public policies. Willson (2017) emphasises that one of the concerns of studying the everyday is to make the invisible visible and to study the power relations and practices involved. AS applications are seamlessly integrated into the routines of everyday life through domestication (Silverstone, 1994)—the capacity and the process of appropriation—which renders them invisible. Algorithms operate at the level of the ‘technological unconscious’ (Thrift, 2005) in widely unseen and unknown ways (Beer, 2009). Consequently, the study of algorithms aims to reveal the technological unconscious and to understand how AS applications co-govern everyday online and offline activities. AS applications must be investigated in relation to online and offline alternatives to determine the relative significance of algorithmic governance for everyday life, for example by bearing in mind an individual’s media repertoire 4 (Hasebrink & Hepp, 2017). Thus far only a small body of empirical research on AS has emerged with regard to the everyday activities of orientation, recreation, commercial transactions and socialising.

Existing empirical results and research gaps

(a) The significance of algorithmic governance has received the most attention in research on social and political orientation. Search applications and news aggregators are understood as intermediaries (Bui, 2010; Newman, Fletcher, Kalogeropoulos, Levy, & Nielsen, 2018) between traditional mass media and individual news consumption. Empirical research suggests that algorithmic selection will become more important for information retrieval in the future (Newman et al., 2018; Shearer & Matsa, 2018). Accompanying these considerations are fears of personalised echo chambers (Sunstein, 2001) or filter bubbles (Pariser, 2011), leading to fragmented, biased perceptions of society (Dylko, 2016). However, recent empirical studies fail to show a coherent picture: there are clear patterns of algorithmically induced, homogenous opinion networks (Bakshy, Messing, & Adamic, 2015; Del Vicario et al., 2016; Dylko et al., 2017), but other studies indicate more opinion diversity despite algorithmic selection and qualify the risk of echo chambers with empirical evidence (Barbera, Jost, Nagler, Tucker, & Bonneau, 2015; Dubois & Blank, 2018; Fletcher & Nielsen, 2017; Heatherly, Lu, & Lee, 2017; Helberger, Bodo, Zuiderveen Borgesius, Irion, & Bastian, 2017; Zuiderveen Borgesius et al., 2016).

(b) AS applications also increasingly shape daily recreation (i.e., entertainment and fitness). Recommendation applications have been shown to play a predominant role here. The main concerns are diminishing diversity (Nguyen, Hui, Harper, Terveen, & Konstan, 2014), the algorithmic shaping of culture (Beer, 2013; Hallinan & Striphas, 2016) and the social power of algorithms (Rieder, Matamoros-Fernandez, & Coromina, 2018). Again, there is no clear empirical evidence for these concerns; rather, existing studies qualify the associated risks (Nguyen et al., 2014; Nowak, 2016).

Further, wearables—networked devices equipped with sensors—have entered everyday life. Empirical studies investigate the perception, use and modes of self-tracking (Lupton, 2016; Rapp & Cena, 2016), and its social and institutional context (Gilmore, 2015). Such wearables have often been disregarded in critical algorithm studies, although they are an important way in which AS governs the perception of the self (Williamson, 2015) and everyday life in general.

(c) For commercial transactions, there has been a focus on studying recommender systems focusing on the performance of algorithms (Ur Rehman, Hussain, & Hussain, 2013) or the implementation of new features (Hervas-Drane, 2015). Their impact on consumers is mostly studied by evaluating their perceived usefulness (Li & Karahanna, 2015). Furthermore, allocation algorithms in the form of online behavioural advertising have attracted attention (Boerman, Kruikemeier, & Zuiderveen Borgesius, 2017), revealing inconsistent results on users’ perceptions of personalised advertisements (McDonald & Cranor, 2010; Smit, Van Noort, & Voorveld, 2014; Ur, Leon, Cranor, Shay, & Wang, 2012).

(d) For socialising, the research focus is on how algorithms curate user interactions on social networking sites and dating platforms (Bucher, 2012; Hitsch, Hortaçsu & Ariely, 2010). These applications raise concerns like social distortion effects or the question of how social connections are adapting to an algorithmically controlled model (Eslami et al., 2015; Rader, 2017; Rader & Gray, 2015; Van Dijck, 2013). So far, there has been no empirical analysis to confirm the relevance of these risks.

Altogether, research on the impact of algorithmic governance on everyday life has produced a plethora of theoretical considerations and fragmented, application-specific empirical findings. To date there has been no comprehensive and systematic empirical investigation of the various central domains of everyday practices. However, generalising policy implications from studies on individual AS services (e.g., Facebook, Twitter or search engines) should be treated with caution. Moreover, existing studies focus on AS applications in relative isolation. Due to this narrow perspective, they are unable to evaluate the power of algorithmic governance in everyday life. Existing work has mostly taken a top-down approach, disregarding the perspective of users. Studies on user perceptions have predominantly relied on self-reported survey measures. While extensive qualitative studies (e.g., Bucher, 2017) offer the basis for a better scientific understanding of the social effects of AS applications, they do not allow generalisable statements at the population level. There is also a lack of empirical work with data on individuals’ actual internet use. To the best of our knowledge, there is no empirical study on the population level that uses tracking data on both mobile and desktop devices, a prerequisite to gain a comprehensive picture of individual internet use. Finally, there have been very few nationally representative studies on the use and perception of AS (e.g., Araujo et al., 2018; Fischer & Petersen, 2018). These existing empirical results do not provide a sound basis for policy-making in this area.

The following section proposes a methodological design that is suited to filling the research gaps identified above. It is designed with the objectives of providing a better understanding of how algorithms exert their power over people (Diakopoulos, 2015)—which essentially corresponds to our understanding of algorithmic governance—and to offer useful evidence-based insights for public policy deliberations regarding algorithmic governance and the policy choices for the governance of algorithms.

Measuring algorithmic governance from a user perspective

This section develops a theoretical model of the variables intended to measure the significance of algorithmic governance for everyday life and form the basis for theory-driven empirical assessments. We then propose a mixed-methods approach to empirically determine the extent to which AS applications govern daily life, since purely theoretically derived risks may lead to premature policy recommendations.

Theoretical model of the significance of algorithmic governance in everyday life

To empirically grasp the significance of algorithmic governance for everyday life, we develop a theoretical model that accommodates the operationalisation of algorithmic governance and entails five variables that influence the potential and effectiveness of this particular type of governance: usage of AS applications, subjective significance assigned to them, awareness of AS, awareness of associated risks, and practices to cope with these risks.

Figure 1: Theoretical model of variables measuring the significance of algorithmic governance in everyday life.

First, in order to determine the governing potential of AS applications in everyday life, their usage (extent, frequency) must be measured, particularly compared to their online and offline counterparts. Their governing potential is also determined by whether and how these applications have changed people’s behaviour, for instance with regard to individual information seeking, listening to music, gaming, or dating.

Second, the subjective significance people attribute to these applications plays an important role in how AS applications affect everyday life. The substantial substitution of traditional online and offline alternatives by AS applications is a prerequisite if fears of AS-associated risks are to be justified. Assessing the significance that users assign to AS applications makes it possible to determine the accuracy of these theoretical estimations.

Third, it is essential to investigate how aware people are of the fact that algorithms operate in the services they use and of the specific algorithmic modes of operation. Awareness of AS substantially affects the effectiveness and impact of algorithmic governance. A variety of risks is attributed to the use of AS applications (e.g., filter bubbles, diminishing diversity of content), which are often directly associated with the algorithmic modes of operation. Accordingly, without awareness, users cannot accurately assess potential benefits and risks 5.

Fourth, the risks people associate with the AS applications they use constitute a further factor of algorithmic governance. Algorithmic governance per se is a neutral concept, but it can involve risks that lead to stronger governing effects of AS applications, especially when awareness is low.

Fifth, the practices users apply to cope with the risks they perceive as associated with AS applications must be investigated when assessing the extent of algorithmic governance in everyday life. From a user perspective, applying practices that run counter to companies’ strategies is the most viable way to exert agency. Based on De Certeau (1984), algorithmic governance is understood in terms of strategies and tactics: platforms that apply AS postulate their own delimited territory from which they manage power relationships with an exteriority, in this case users. These platforms apply ‘panoptic practices’: they observe, measure, and control, and consequently turn users into measurable types. These panoptic practices allow the platforms to create user classifications based on a user habitus that reflects their social dispositions. Through them, AS applications co-govern users’ constructions of reality by mirroring their social dispositions in the form of scorings, recommendations, search results or advertisements. We consider user practices as tactics that form the counterpart to these corporate strategies. Accordingly, user practices are generally aimed at coping with risks that companies induce through their data collection and analysis strategies. Such practices are discussed as ‘slow computing’ by Fraser and Kitchin (2017), which refers to slowing down internet use and connectivity and to practices directed against data-grabbing infrastructures. They can be seen as complementary to other measures that empower users by governing algorithms, for instance consumer policies that improve the protection of user data (Larsson, 2018).
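As a hypothetical illustration of how these five variables could be operationalised for analysis, the sketch below combines them into a simple descriptive profile per respondent and life domain. The 0-to-1 scales, variable names and unweighted averaging are assumptions made for exposition, not a measurement instrument proposed in this paper.

```python
# Hypothetical operationalisation of the five model variables for one
# respondent and one life domain. The scales (0..1) and the simple unweighted
# combination are illustrative assumptions, not a measure from the article.

from dataclasses import dataclass

@dataclass
class DomainMeasures:
    usage: float                    # share of domain activity done via AS applications
    subjective_significance: float  # self-rated importance of AS applications
    awareness_of_as: float          # awareness that algorithms select/rank content
    awareness_of_risks: float       # awareness of associated risks
    coping_practices: float         # extent of practices to cope with those risks

def governance_profile(m: DomainMeasures) -> dict:
    """Return a simple descriptive profile rather than a single score:
    high usage and significance indicate governing potential, while low
    awareness and few coping practices indicate little countervailing agency."""
    return {
        "governing_potential": round((m.usage + m.subjective_significance) / 2, 2),
        "user_agency": round((m.awareness_of_as + m.awareness_of_risks
                              + m.coping_practices) / 3, 2),
    }

# Example: heavy, highly valued use of recommender systems with low awareness
profile = governance_profile(DomainMeasures(0.8, 0.7, 0.2, 0.3, 0.1))
print(profile)  # {'governing_potential': 0.75, 'user_agency': 0.2}
```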

The mixed-methods approach

Suitable assessments of risks related to AS applications and corresponding policy measures require the empirical measurement of the governance that AS applications exert in users’ everyday lives. To answer the call for taking algorithms’ ‘socio-technical assemblages’ (Kitchin, 2017) into account and investigating how users engage with AS applications in their lives, existing top-down approaches should be complemented by a user-centred perspective (Bucher, 2017).

Therefore, we propose a user-centred, mixed-methods approach to measuring the significance of AS applications, which comprises three research phases. Based on a literature review, (I) semi-structured qualitative interviews are to be conducted for each of the four domains of everyday practice. As these practices (e.g., newsgathering, dating) are not limited to internet use, the significance of AS applications must be considered in relation to alternative online and offline activities. This enlarged and contextualised perspective promises to provide an understanding of individuals’ life worlds and how AS applications are integrated within them. The qualitative interviews can provide in-depth information on individuals’ perceptions, opinions and interpretations regarding AS applications in the four life domains.

These qualitative interviews should form the basis for the quantitative empirical part, which we propose to consist of a representative online survey (II) in combination with representative passive metering (tracking) (III) of internet usage at the population level. The combination of self-reported survey measures and tracked internet use (passive metering) makes it possible to compare the tracked share of AS services used with the self-reports of internet use, which can be systematically biased (Scharkow, 2016) or subject to social desirability effects. Further, the non-transparent, “black-box” nature of algorithms raises questions about users’ awareness of the mechanisms at play. When asking people about their experiences with algorithms, it must be kept in mind that their awareness of the existence of algorithms might be low and their statements could be biased accordingly. Therefore, a measurement of AS use by means of tracking data, in addition to the interview and survey data, is indispensable 6. This could, for instance, be done by installing tracking software that records internet use on the survey respondents’ mobile and desktop devices 7, collecting the websites they visit (URLs), the search terms they use and the time and duration of their visits.
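As a sketch of how such tracking records might be processed and set against self-reports, the following Python example aggregates tracked visits into minutes spent per AS application type. The log format, the domain-to-type mapping and all numbers are illustrative assumptions, and any real implementation would additionally require the privacy safeguards noted in footnote 7.

```python
# Illustrative sketch: aggregating tracked browsing records into time spent on
# AS applications and comparing it with self-reported use. The log format, the
# URL-to-application mapping and the numbers are assumptions for illustration.

from urllib.parse import urlparse
from collections import defaultdict

# Hypothetical mapping of domains to AS application types (cf. Table 1)
AS_DOMAINS = {
    "www.google.com": "search",
    "news.google.com": "aggregation",
    "www.netflix.com": "recommendation",
}

# Tracked records: (URL visited, duration in minutes)
tracked_visits = [
    ("https://www.google.com/search?q=election", 2.0),
    ("https://www.netflix.com/browse", 45.0),
    ("https://example.org/blog", 10.0),  # not an AS application
]

def minutes_per_as_type(visits):
    """Sum tracked minutes per AS application type; ignore other traffic."""
    totals = defaultdict(float)
    for url, minutes in visits:
        host = urlparse(url).netloc
        if host in AS_DOMAINS:
            totals[AS_DOMAINS[host]] += minutes
    return dict(totals)

tracked = minutes_per_as_type(tracked_visits)

# Self-reported minutes for the same period (e.g., from the online survey)
self_reported = {"search": 10.0, "recommendation": 30.0}

# Over-/under-reporting per type, as discussed in Scharkow (2016)
for as_type, reported in self_reported.items():
    gap = reported - tracked.get(as_type, 0.0)
    print(f"{as_type}: self-report deviates by {gap:+.1f} minutes from tracking")
```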

All three methodological approaches lend themselves to the accomplishment of different goals and results, which are summarised in Table 2. Only in its entirety is this mixed-methods approach able to significantly contribute to closing existing research gaps with regard to the empirical understanding of algorithmic governance and the overall significance of AS applications in everyday life.

Table 2: Expected contributions of the three methods to the empirical assessment of algorithmic governance in everyday life

Usage of AS applications
- Qualitative interviews with internet users: not primarily relevant; gather context data on circumstances of use
- Quantitative survey with internet users: determine frequency of use of offline alternatives
- Passive metering of individual internet use: determine frequency of use of online alternatives and AS applications

Subjective significance assigned to AS applications
- Qualitative interviews: find reasons why AS applications are relevant; find out whether and how AS applications have changed behaviour
- Quantitative survey: quantify relevance of AS applications, online and offline alternatives for domains of everyday life
- Passive metering: not primarily relevant

User awareness of AS
- Qualitative interviews: determine interviewees’ understanding of AS applications; use results for an appropriate measure of awareness in the survey
- Quantitative survey: quantitatively determine knowledge about / awareness of algorithms at population level
- Passive metering: not primarily relevant

User awareness of related risks
- Qualitative interviews: expand existing list of risks; understand context to explain, interpret and contextualise survey data
- Quantitative survey: determine perceived importance of risks associated with AS applications
- Passive metering: not primarily relevant

User practices to cope with risks
- Qualitative interviews: find practices that users apply to cope with AS / associated risks
- Quantitative survey: quantitatively determine relevance of strategies by constructing a measure for coping practices

This mixed-methods approach allows for a reassessment of the opportunities and risks of AS applications in the different life domains, which forms the basis for evidence-based public policy and a governance of AS applications aimed at the democratic control of algorithmic power. The guideline that we propose is to be understood as an exemplary research design that has to be adapted to specific research questions 8.

Conclusions

In this paper we propose a guideline to both a theoretical understanding and an empirical measuring of algorithmic governance (= governance by algorithms) in everyday life. We argue that the assessment of algorithmic governance—a form of institutional steering by software—requires a nuanced theoretical understanding that differentiates between (a) different units of analysis, (b) intentional and unintentional governance effects, (c) public and private, human and nonhuman governing actors, (d) degrees of automation and of the remaining role of human actors in decision-making, as well as (e) the kinds of decisions that are taken by algorithms, their different contexts of applications and scopes of risks. Further, such an assessment needs empirical evidence to measure the actual significance of associated, theoretically derived risks of the governance by internet services that apply automated algorithmic selections in everyday life.

Our review of the algorithmic-governance literature illustrates the lack of empirical studies from a user-centred perspective that go beyond single platforms or services. Such limited empirical analyses, in combination with purely theoretical considerations, may lead to the derivation of exaggerated risks and unrealistic policy conclusions. So far, there is no sufficient empirical basis for the severe risks and far-reaching policy suggestions that are occasionally associated with AS applications. Rather, recent attempts to empirically investigate these phenomena have tended to qualify the significance of risks like manipulation, bias, or discrimination.

We propose a mixed-methods, user-centred approach to make the significance of algorithmic governance in everyday life measurable and to provide a basis for more realistic, empirically grounded governance choices. We identified five variables—usage of AS, subjective significance of these services, awareness of AS, awareness of associated risks, and user practices—as relevant dimensions of inquiry to measure the significance of algorithmic governance in everyday life from a user-centred perspective. The mixed-methods approach consists of qualitative interviews, a representative online survey and representative user tracking to empirically grasp the significance of algorithmic governance in four domains of everyday life—social and political orientation, recreation, commercial transactions, and socialising. This selection of life domains is derived from a representative, country-wide survey on internet usage.

Altogether, in the emerging field of critical algorithm studies, where empirical results are limited, contradictory or lacking, the guideline presented here permits a nuanced theoretical understanding of algorithmic governance and a more holistic and accurate measurement of the impact of governance by algorithms in everyday life. This combination of theoretical and evidence-based insights can form a profound basis for policy choices in the governance of algorithms.

References

Adler, P. A., Adler, P., & Fontana, A. (1987). Everyday life sociology. Annual Review of Sociology, 13, 217–235. https://doi.org/10.1146/annurev.so.13.080187.001245

Aneesh, A. (2009). Global labor: Algocratic modes of organization. Sociological Theory, 27(4), 347–370. doi:10.1111/j.1467-9558.2009.01352.x

Araujo, T., de Vreese, C., Helberger, N., Kruikemeier, S., van Weert, J., Bol, N., … Taylor, L. (2018, September 25). Automated decision-making fairness in an AI-driven world [Report]. Amsterdam: Digital Communication Methods Lab, RPA Communication, University of Amsterdam. Retrieved from http://www.digicomlab.eu/wp-content/uploads/2018/09/20180925_ADMbyAI.pdf

Bagloee, S. A., Tavana, M., Asadi, M., & Oliver, T. (2016). Autonomous vehicles. Journal of Modern Transportation, 24(4), 284–303. doi:10.1007/s40534-016-0117-3

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. doi:10.1126/science.aaa1160

Barbera, P., Jost, J. T., Nagler, J., Tucker, J. A., & Bonneau, R. (2015). Tweeting from left to right. Psychological Science, 26(10), 1531–1542. doi:10.1177/0956797615594620

Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985–1002. doi:10.1177/1461444809336551

Beer, D. (2013). Popular culture and new media: The politics of circulation. New York: Palgrave Macmillan. doi:10.1057/9781137270061

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. doi:10.1080/1369118X.2016.1216147

Berger, P. L., & Luckmann, T. (1967). The social construction of reality. London, UK: Allen Lane.

Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online behavioral advertising. Journal of Advertising, 46(3), 363–376. doi:10.1080/00913367.2017.1339368

Bresnahan, T. (2010). General purpose technologies. In B. H. Hall & N. Rosenberg (Eds.), Handbook of the economics of innovation (pp. 761–791). Amsterdam: Elsevier. doi:10.1016/s0169-7218(10)02002-2

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. doi:10.1177/1461444812440159

Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. doi:10.1080/1369118x.2016.1154086

Bui, C. (2010). How online gatekeepers guard our view: News portals’ inclusion and ranking of media and events. Global Media Journal: American Edition, 9(16), 1–41.

Büchi, M., Just, N., & Latzer, M. (2016). Modeling the second-level digital divide. New Media & Society, 18(11), 2703–2722. doi:10.1177/1461444815604154

Citron, D. K., & Pasquale, F. A. (2014). The scored society. Washington Law Review, 89(1), 1–33. Available at http://hdl.handle.net/1773.1/1318

Constantiou, I. D., & Kallinikos, J. (2015). New games, new rules: Big data and the changing context of strategy. Journal of Information Technology, 30(1), 44–57. doi:10.1057/jit.2014.17

Couldry, N., & Hepp, A. (2016). The mediated construction of reality. Cambridge, UK: Polity Press.

Danaher, J. (2016). The Threat of Algocracy: Reality, Resistance, and Accommodation. Philosophy & Technology, 29(3), 245–268. doi:10.1007/s13347-015-0211-1

Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., De Paor, A., … Shankar, K. (2017). Algorithmic governance. Big Data & Society, 4(2), 1–21. doi:10.1177/2053951717726554

De Certeau, M. (1984). The practice of everyday life. Berkley, CA: University of California Press.

Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., … Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences of the United States of America, 113(3), 554–559. doi:10.1073/pnas.1517441113

Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. doi:10.1080/21670811.2014.976411

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. doi:10.1080/1369118x.2018.1428656

Dylko, I. B. (2016). How technology encourages political selective exposure. Communication Theory, 26(4), 389–409. doi:10.1111/comt.12089

Dylko, I. B., Dolgov, I., Hoffman, W., Eckhart, N., Molina, M., & Aaziz, O. (2017). Impact of customizability technology on political polarization. Journal of Information Technology & Politics, 15(1), 19–33. doi:10.1080/19331681.2017.1354243

Engel, C. (2001). A constitutional framework for private governance. German Law Journal, 5(3), 197–236. doi:10.1017/S2071832200012402

Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., … Sandvig, C. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 153–162). Seoul: Human Factors in Computing Systems. doi:10.1145/2702123.2702556

Fischer, S., & Petersen, T. (2018, May). Was Deutschland über Algorithmen weiss und denkt [What Germany knows and thinks about algorithms]. Retrieved from https://www.bertelsmann-stiftung.de/fileadmin/files/BSt/Publikationen/GrauePublikationen/Was_die_Deutschen_ueber_Algorithmen_denken.pdf

Fletcher, R., & Nielsen, R. K. (2017). Are news audiences increasingly fragmented? A Cross-National Comparative Analysis of Cross-Platform News Audience Fragmentation and Duplication. Journal of Communication, 67(4), 476–498. doi:10.1111/jcom.12315

Flick, U. (2009). An introduction to qualitative research (4th edition). London, UK: SAGE.

Fraser, A., & Kitchin, R. (2017). Slow computing [Working Paper No. 36]. Maynooth: The Programmable City, Maynooth University. Retrieved from http://progcity.maynoothuniversity.ie/2017/12/new-paper-slow-computing/

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). Cambridge, MA: The MIT Press. doi:10.7551/mitpress/9780262525374.003.0009

Gilmore, J. N. (2015). Everywear: The quantified self and wearable fitness technologies. New Media & Society, 18(11), 2524–2539. doi:10.1177/1461444815588768

Glaser, B. G., & Strauss, A. L. (2009). The discovery of grounded theory (4th paperback printing). New Brunswick, NJ: Aldine.

Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix prize and the production of algorithmic culture. New Media & Society, 18(1), 117–137. doi:10.1177/1461444814538646

Hasebrink, U., & Hepp, A. (2017). How to research cross-media practices? Investigating media repertoires and media ensembles. Convergence: The International Journal of Research into New Media Technologies, 23(4), 362–377. doi:10.1177/1354856517700384

Heatherly, K. A., Lu, Y., & Lee, J. K. (2017). Filtering out the other side? New Media & Society, 19(8), 1271–1289. doi:10.1177/1461444816634677

Helberger, N., Bodo, B., Zuiderveen Borgesius, F. J., Irion, K., & Bastian, M. B. (2017). Personalised communication. Retrieved from http://personalised-communication.net/the-project

Heller, A. (1984). Everyday life. London, UK: Routledge.

Hervas-Drane, A. (2015). Recommended for you: The effect of word of mouth on sales concentration. International Journal of Research in Marketing, 32(2), 207–218. doi:10.1016/j.ijresmar.2015.02.005

Hitsch, G. J., Hortaçsu, A., & Ariely, D. (2010). Matching and sorting in online dating. The American Economic Review, 100(1), 130–163. doi:10.1257/aer.100.1.130

Jürgens, P., Jungherr, A., & Schoen, H. (2011). Small worlds with a difference: New gatekeepers and the filtering of political information on Twitter. Proceedings of the 3rd International Web Science Conference, 3 (pp. 21–26). Koblenz, GE: Web Science. doi:10.1145/2527031.2527034

Jürgens, P., Stark, B., & Magin, M. (2015). Messung von Personalisierung in computervermittelter Kommunikation [Measuring personalization in computer-mediated communication]. In A. Maireder, J. Ausserhofer, C. Schumann, & M. Taddicken (Eds.), Digitale Methoden in der Kommunikationswissenschaft (pp. 251–270). Berlin, GE: GESIS.

Jürgens, P., Stark, B., & Magin, M. (2019). Two half-truths make a whole? On bias in self-reports and tracking data. Social Science Computer Review. Advance online publication. doi:10.1177/0894439319831643

Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. doi:10.1177/0163443716643157

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. doi:10.1080/1369118x.2016.1154087

Kowalski, R. (1979). Algorithm = logic + control. Communications of the ACM, 22(7), 424–436. doi:10.1145/359131.359136

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking.

Larsson, S. (2018). Algorithmic governance and the need for consumer empowerment in data-driven markets. Internet Policy Review, 7(2). doi:10.14763/2018.2.791

Larus, J., Hankin, C., Carson, S. G., Christen, M., Crafa, S., Grau, O., … Werthner, H. (2018, March 21). When computers decide: European Recommendations on Machine-Learned Automated Decision Making [Technical report]. Zurich; New York: Informatics Europe; ACM. doi:10.1145/3185595 Retrieved from http://www.informatics-europe.org/news/435-ethics_adm.html

Latzer, M., Hollnbuchner, K., Just, N., & Saurwein, F. (2016). The economics of algorithmic selection on the Internet. In J. Bauer & M. Latzer (Eds.), Handbook on the economics of the Internet (pp. 395–425). Cheltenham, UK: Edward Elgar. doi:10.4337/9780857939852.00028

Li, S. S., & Karahanna, E. (2015). Online recommendation systems in a B2C e-commerce context: A review and future directions. Journal of the Association for Information Systems, 16(2), 72–107. doi:10.17705/1jais.00389

Lupton, D. (2016). The diverse domains of quantified selves: self-tracking modes and dataveillance. Economy and Society, 45(1), 101–122. doi:10.1080/03085147.2016.1143726

Mager, A. (2012). Algorithmic ideology: How capitalist society shapes search engines. Information, Communication & Society, 15(5), 769–787. doi:10.1080/1369118x.2012.676056

Martini, M., & Nink, D. (2017). Wenn Maschinen entscheiden… – vollautomatisierte Verwaltungsverfahren und der Persönlichkeitsschutz [When machines decide… – fully automated administrative proceedings and protection of personality]. Neue Zeitschrift für Verwaltungsrecht – Extra, 10(36), 1–14.

McDonald, A. M., & Cranor, L. F. (2010). Beliefs and behaviors: Internet users’ understanding of behavioral advertising. TPRC 2010. Retrieved from http://ssrn.com/abstract=1989092

Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360. doi:10.1111/comt.12039

Newman, N., Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2018). Reuters Institute digital news report 2018. Retrieved from http://media.digitalnewsreport.org/wp-content/uploads/2018/06/digital-news-report-2018.pdf?x89475

Nguyen, T. T., Hui, P.-M., Harper, F. M., Terveen, L., & Konstan, J. A. (2014). Exploring the filter bubble. Proceedings of the 23rd International Conference on World Wide Web (pp. 677–686). New York: ACM. doi: 10.1145/2566486.2568012

Nowak, R. (2016). The multiplicity of iPod cultures in everyday life: uncovering the performative hybridity of the iconic object. Journal for Cultural Research, 20(2), 189–203. doi:10.1080/14797585.2016.1144384

Pariser, E. (2011). The filter bubble. London, UK: Penguin Books.

Pink, S. (2012). Situating everyday life. Los Angeles, CA: Sage.

Rader, E. (2017). Examining user surprise as a symptom of algorithmic filtering. International Journal of Human-Computer Studies, 98, 72–88. doi:10.1016/j.ijhcs.2016.10.005

Rader, E., & Gray, R. (2015). Understanding user beliefs about algorithmic curation in the Facebook news feed. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 173–182). New York: ACM. doi:10.1145/2702123.2702174

Rammert, W. (2008). Where the action is: Distributed agency between humans, machines, and programs [Working Paper No. TUTS-WP-4-2008]. Berlin: The Technical University of Berlin, Technology Studies. Retrieved from http://www.ts.tu-berlin.de/fileadmin/fg226/TUTS/TUTS_WP_4_2008.pdf

Rapp, A., & Cena, F. (2016). Personal informatics for everyday life. International Journal of Human-Computer Studies, 94, 1–17. doi:10.1016/j.ijhcs.2016.05.006

Rieder, B., Matamoros-Fernandez, A., & Coromina, O. (2018). From ranking algorithms to ‘ranking cultures’. Convergence: The International Journal of research into New Media Technologies, 24(1), 50–68. doi:10.1177/1354856517736982

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Paper presented to “Data and Discrimination: Converting Critical concerns into productive inquiry”, a Preconference at the 64th Annual Meeting of the International Communication Association, Seattle, WA. Available at https://pdfs.semanticscholar.org/b722/7cbd34766655dea10d0437ab10df3a127396.pdf

Scharkow, M. (2016). The accuracy of self-reported internet use: A validation study using client log data. Communication Methods and Measures, 10(1), 13–27. doi:10.1080/19312458.2015.1118446

Schneider, V., & Kenis, P. (1996). Verteilte Kontrolle: Institutionelle Steuerung in modernen Gesellschaften [Spread control: Institutional management in modern societies].  In V. Schneider & P. Kenis (Eds.), Organisation und Netzwerk. Institutionelle Steuerung in Wirtschaft und Politik (pp. 9–43). Frankfurt am Main: Campus. doi:10.5771/9783845205694-169

Schütz, A. (2016). Der sinnhafte Aufbau der sozialen Welt [The meaningful construction of the social world] (7th ed.). Frankfurt am Main: Suhrkamp.

Schütz, A., & Luckmann, T. (2003). Strukturen der Lebenswelt [Structures of the lifeworld]. Stuttgart: UVK.

Shearer, E., & Matsa, K. E. (2018, September 10). News use across social media platforms 2018. Retrieved from http://www.journalism.org/wp-content/uploads/sites/8/2018/09/PJ_2018.09.10_social-media-news_FINAL.pdf

Silverstone, R. (1994). Television and everyday life. London, UK: Routledge.

Smit, E. G., Van Noort, G., & Voorveld, H. A. M. (2014). Understanding online behavioral advertising. Computers in Human Behavior, 32, 15–22. doi:10.1016/j.chb.2013.11.008

Sunstein, C. R. (2001). Echo chambers: Bush v. Gore, impeachment, and beyond. Princeton, NJ: Princeton University Press.

Sztompka, P. (2008). The focus on everyday life: A new turn in sociology. European Review, 16(1), 1–15. doi:10.1017/S1062798708000045

Thrift, N. J. (2005). Knowing capitalism. London, UK: Sage.

Ur, B., Leon, P. G., Cranor, L. F., Shay, R., & Wang, Y. (2012). Smart, Useful, Scary, Creepy: Perceptions of Online Behavioral Advertising. Proceedings of the Eighth Symposium on Usable Privacy and Security, Washington, DC. doi: 10.1145/2335356.2335362

Ur Rehman, Z., Hussain, F. K., & Hussain, O. K. (2013). Frequency-based similarity measure for multimedia recommender systems. Multimedia Systems, 19(2), 95–102. doi:10.1007/s00530-012-0281-1

Van Dijck, J. (2013). The culture of connectivity. Oxford, UK: Oxford University Press.

Williamson, B. (2015). Algorithmic skin: Health-tracking technologies, personal analytics and the biopedagogies of digitized health and physical education. Sport, Education and Society, 20(1), 133–151. doi:10.1080/13573322.2014.962494

Willson, M. (2017). Algorithms (and the) everyday. Information, Communication & Society, 20(1), 137–150. doi:10.1080/1369118X.2016.1200645

Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12, 505–523. doi:10.1111/rego.12158

Zarsky, T. (2016). The trouble with algorithmic decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making. Science, Technology & Human Values, 41(1), 118–132. doi: 10.1177/0162243915605575

Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1), 1–16. doi:10.14763/2016.1.401

Acknowledgements:

The authors would like to thank Natascha Just and two reviewers for their valuable comments on an earlier draft of this article.

Footnotes

1. This notion is related to Rammert’s (2008) concept of “distributed agency between humans, machines, and programs”.

2. e.g., simple alphabetical sorting.

3. e.g., personalised recommender systems in e-commerce using reinforcement learning.

4. Consideration of individuals’ entire media repertoires, comprising online and offline sources, is vital because, for instance, the effects of using AS services like Facebook for news purposes vary with the person’s use of other news channels or other (offline) sources.

5. Awareness is not to be misunderstood as knowledge of specific algorithmic modes of operation here. Our model suggests that, for instance, without being aware that Google search results are personalised, individuals cannot grasp the concept of filter bubbles. They are therefore unable to understand this risk and, possibly, to adapt their behaviour accordingly.

6. Tracking data can also be subject to different biases (e.g., self-selection biases), which must be considered when applying these novel methods (see e.g., Jürgens, Stark, & Magin, 2019).

7. When tracking individuals’ internet use, it is vital to be very mindful of potential effects on participants’ privacy. Specific study designs have to be approved by the responsible ethics committee, and defining measures to protect individuals’ privacy is crucial.

8. This guideline – combining the proposed theoretical model and mixed-methods research design – has already been applied by the authors in Switzerland. Results from qualitative internet user interviews and a representative online survey, combined with internet use tracking on mobile and desktop devices for a representative sample of the Swiss population, are forthcoming.
