From access and transparency to refusal: Three responses to algorithmic governance

Alexandra James, Australian Research Centre in Sex, Health and Society, La Trobe University, Melbourne, Australia, alexandra.james@latrobe.edu.au
Danielle Hynes, School of the Arts and Media, University of New South Wales, Sydney, Australia
Andrew Whelan, Faculty of Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, Australia
Tanja Dreher, School of the Arts and Media, University of New South Wales, Sydney, Australia
Justine Humphry, Faculty of Arts and Social Sciences, University of Sydney, Sydney, Australia


Abstract

In this paper, we identify three responses evident in the dialogue regarding the emergence and development of data-driven algorithmic governance. The first two responses, ones of access, inclusion and transparency, seek to remedy harms produced by the deployment of advanced digital technologies in public sector service provision. However, with a limited interest in contextualising these technologies relative to the social relations in which they are designed and deployed, these responses ultimately risk misidentifying the sources of harm, thereby reinforcing injustices. The third response, one of data justice, abolition and refusal, seeks to address the limitations of attempts to achieve social justice through prioritising digital access, inclusion and transparency. Offering a more transformative response to algorithmic governance, this third response focuses on fundamental questions regarding the deeply unequal power relations, structural inequalities and racism embedded in algorithmic systems, providing a critical repertoire of options for contesting and reconfiguring these relations. While the three responses do not constitute a chronology, we conclude with a discussion of the resurgent interest in refusal as a framework and method to intervene in harmful data-driven algorithmic systems. In so doing, we offer suggestions for how to collectively and institutionally operationalise refusal, alongside abolitionism and data justice, particularly in the area of social welfare provision, and to imagine and bring about alternative social, political and economic systems and relations that are radically transformative.
Citation & publishing information
Published: May 17, 2023
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Algorithmic governance, Data justice, Critical Data Studies, Data governance, Algorithmic accountability
Citation: James, A., Hynes, D., Whelan, A., Dreher, T., & Humphry, J. (2023). From access and transparency to refusal: Three responses to algorithmic governance. Internet Policy Review, 12(2). https://doi.org/10.14763/2023.2.1691

Introduction

The significance of datafication in governance predates digital infrastructure (Didier, 2020; Porter, 2020). Nonetheless, new digital technologies and techniques are facilitating this process at scale and speed. National governments all over the world have, at increasing rates, turned to networked communication infrastructure as a key mechanism to deliver services, engage and inform citizenry and manage administrative operations. Sophisticated, complex data-driven technologies used by governments hinge on, and contribute to, an expanded internet, enabling increasingly ubiquitous datafication and automation across an evolving set of spaces, domains, objects and dimensions of our lives: cities, homes, education, the workplace, finance, mobility and transit, health, policing, welfare and more. These internet and data-driven processes also reshape social relations, notably those between citizens, the state and corporate powers (Dencik & Kaun, 2020). As such, these processes constitute a crucial site of discussion and debate. It is the site at which ideals and processes of democratic administration are being reconfigured by algorithmic governance under the conditions of ubiquitous datafication of social life.

The application of advanced technologies by governments occurs in pre-existing structures and settings, which are shaped by the relations between existing technologies, social and economic arrangements, and the distributions of social power (Dencik & Kaun, 2020). The deployment of these technologies has been commonly justified by claims for the capacity of technology to solve social inequalities, increase access, generate efficiencies and cost savings and mitigate procedural injustice (James & Whelan, 2022). These claims mobilise various arguments, for example: that technology is required to ameliorate or offset other social risks (even if the risk is of being “left behind”); that operations of the technology are or will be fairly managed and comprehensible; that people will have input into or control over how their data and interactions with the technology will be handled; that the technology can be considered acceptable if certain conditions are met; and that the technology is, on balance, a net benefit, and so on.

In this paper, we describe and engage critically with two key theoretical and policy responses which have informed analyses of the development of algorithmic governance over recent decades: access and inclusion, and transparency. We then turn to data justice, and a resurgent interest in “refusal” as a longstanding response receiving increasing interest within abolitionist and data justice approaches. Refusal presents opportunities for renegotiating and revising the technologically mediated relationships between powerful state and corporate actors and citizens. This work builds on and continues the discussion of other writing in critical technology studies which examines the range of political, social, and theoretical outcomes resulting from the deployment of algorithmic governance.

In a comparable move, Frank Pasquale (2015) identified "waves" of algorithmic accountability, with the first wave oriented towards improving algorithmic systems (what we analyse as the responses of access and inclusion and of transparency), while the second asks whether these algorithmic systems are required at all (Ganesh & Moss, 2022), which we analyse here as the response of refusal. The following discussion refers to "responses" as a means to structure and analyse the political, social and theoretical implications of algorithmic governance, and to foreground how they are characterised by specific alignments to the systems they intend to challenge or change. We do not mean to imply that these responses are neatly distinct or sequential in their development. Rather, we seek to illustrate how these responses variously extend, intersect, overlap with or contest each other, and the dominant institutional structures which give rise to algorithmic governance. Unlike the concept of "waves", these responses do not emerge chronologically; we understand their relationship as a dialogue rather than a chronology. They can be understood as continuing developments in the current and ongoing complex move to (and against) algorithmic governance. They are responses both to the opportunities of technology itself, and to the outcomes of technologies, particularly those deployed within public sector services. This framing facilitates our argument for assessing the responses in terms of their promise for advancing strategies centred on goals of justice and civic intervention. It allows us to trace how schools of thought are developed, accepted and challenged while working in dialogue and, ultimately, informing the lived experience of people subject to the various outcomes of algorithmic technologies. The sequence of our discussion, from access and inclusion, through transparency, and on to refusal, is intended to highlight movement away from responses working largely within the prevailing logic of algorithmic governance, and towards responses that contest or refuse that logic altogether.

This critical analysis is born of our collaboration and enabled by theoretical perspectives in our respective disciplines: sociology, gender studies, media, communication and politics. Through this collaboration, we draw on a range of scholarly tools and empirical illustrations to explore how these responses engage with the practice and policy of algorithmic governance. Our interdisciplinary perspective provides insights into the varied social and political outcomes produced by responses to algorithmic governance. Our primary audiences are those who research technology, policy, and social welfare, especially as activist practices, or in support of activists. As the potential of algorithmic applications continues to be developed, scholars, developers and policy makers will benefit from attending to how well-intentioned responses can be limited, and how adverse impacts on the populations most exposed to the governmental use of networked technologies can be reduced. In the analysis below, we identify the contributions and limitations of several such responses, addressing the implications of ubiquitous datafication and algorithmic governance.

We argue two main points. The first is that responses to algorithmic governance that draw on traditional public administration values are not well formulated for addressing the various harms produced by algorithmic governance, or for expressing the plurality and heterogeneity possible in conceptualising it. The second, related point is that the response of data justice and refusal holds more promise than access, inclusion and transparency. The latter responses, which have had the most significant policy impact thus far, do not always contextualise the technologies they describe effectively, and begin with the goal of spreading digital benefits while downplaying the potential for digital harms. Too frequently they situate the issues around ‘the technology’ or its implications, without sufficient attention to the social, cultural and economic contexts which give rise to and normalise these technologies.

The paper is set out as follows. It begins by outlining access and inclusion, and then moves on to transparency. These are described as responses that have facilitated technological deployment while often overlooking or downplaying the power relations that shape technology design and implementation. As such, they tend to underestimate the potential of algorithmic data use to perpetuate harm or produce new harms. Following this, we examine data justice, abolition and the resurgent interest in refusal, as a more radical response that seeks to fundamentally transform the existing structures which have shaped citizen-government interactions. Finally, we present a discussion of some of the considerations necessary for the practical application of refusal in response to algorithmic governance, attending to possible limitations, such as those which have beset other responses grounded in established public administration values in the face of datafication. Refusal, in our understanding, operates as a response that opens up alternatives rather than as an end point. Throughout, we substantiate our account by drawing on conceptual literature and our own empirical research and key examples across several domains where the technology is consequential (respectively, network access for underserved urban communities, housing, housing policy and social welfare).

The public administration values informing responses to algorithmic governance

“Algorithmic governance” is a contested phrase used across disciplines in different ways, and often without elaborating on what was originally a computer science term: “algorithm” (Katzenbach & Ulbricht, 2019). In this computer science context, the term algorithm refers to “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2014, p. 167). Similarly, others focus on the problem-solving nature of an algorithm as a procedure formulated according to a given set of rules (Roughgarden, 2017). Within the social sciences, however, a more holistic understanding of algorithms has been adopted, viewing them as elements in an assemblage of technological and human factors having political, economic and social outcomes (Latzer & Festic, 2019; Lowrie, 2018). In this paper, we refer to the increasing and accelerating trend towards algorithmic governance, which results in new modes of social ordering as a consequence of automated decision-making (Katzenbach & Ulbricht, 2019).
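
To make this computer science sense of the term concrete, the following minimal sketch (in Python, with an invented rule and threshold of our own devising, not drawn from any system discussed in this paper) illustrates an “encoded procedure” transforming input data into a desired output on the basis of specified calculations, of the kind that might sit inside an automated compliance or eligibility check:

    # A toy algorithm in the computer science sense: an encoded procedure that
    # maps input data to an output via specified calculations.
    # The rule and threshold are hypothetical and purely illustrative.
    def flag_discrepancy(declared_income: float, reported_income: float,
                         threshold: float = 100.0) -> dict:
        discrepancy = reported_income - declared_income    # specified calculation
        return {
            "flagged": discrepancy > threshold,            # the "desired output": a decision
            "discrepancy": round(discrepancy, 2),
        }

    print(flag_discrepancy(declared_income=18000.0, reported_income=19650.0))
    # {'flagged': True, 'discrepancy': 1650.0}

Framed at this level, the procedure appears neutral and mechanical; the more holistic social science reading above attends instead to the assemblage of technological and human factors within which such procedures are designed and deployed, and to their political, economic and social outcomes.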

Technological advancements, notably in machine learning and artificial intelligence, have facilitated the deployment of algorithmic decision-making, given the capacity to access and process vast quantities of data (Sætra, 2020). We use the term algorithmic governance to refer to the social and political processes through which algorithms, and the data they rely upon, are used within institutional government settings to assist in decision-making and service delivery (Danaher, 2016). We emphasise the importance of context to highlight how algorithms, and the social imagining of them, spur specific outcomes and policies. These in turn contribute to subsequent social and technological developments, resulting in further social outcomes and policy positions. This looping of algorithmically induced effects across multiple domains resonates with descriptions of algorithmic governance as the “intersection of digitalisation, datafication and governance through technology” (Gritsenko & Wood, 2022, p. 46).

Algorithmic governance has been closely examined in research on digital media and communication and in computer science, with some contributions from political science and policy analysis perspectives (Gritsenko & Wood, 2022; Katzenbach & Ulbricht, 2019). It is instructive to locate predominant policy responses regarding algorithmic governance, in particular with regard to access and inclusion and transparency, relative to traditional values of public administration, which generally include orientations to public service, duty, integrity and trustworthiness, accountability, prudence and probity, confidentiality, procedural consistency and impartiality, respect, responsiveness, formal equality before the law and so on. A number of these traditional values are emphasised in conversations around the impact of technology, including transparency, fairness, and equality of access (Bannister & Connolly, 2014). These values predate the technological advances we refer to here (Dencik & Kaun, 2020), with a history embedded in colonial rule (Kirk-Greene, 1999). However, permutations of public administration values have been observed on a global basis in the discourses accompanying the deployment of algorithmic governance. While these values have most notably been packaged as “ethics” (James & Whelan, 2022), access, inclusion and transparency, as we argue below, constitute some of the most widely adopted policy responses to algorithmic governance within this framing. Public administration values tend to maintain existing social and economic structures (Mascio et al., 2020) and, as a result, the responses of access, inclusion and transparency can be seen to reproduce, rather than transform, the broader social systems in which algorithmic governance operates; ultimately producing similar inequitable outcomes, as demonstrated below.

Access and inclusion

Algorithmic operations depend on access to and inclusion in data systems, raising questions about how access as a value is enlisted in data-driven governance projects, and the extent to which the discourse of access and inclusion elides or obfuscates existing power relations and hierarchies. In arguments in favour of digital communication technology as a means to positive social outcomes, the first response is often one of access and inclusion: extending network reach to marginalised and excluded groups (e.g., Chohan & Hu, 2020; Fuchs & Horak, 2008; Hardill & O’Sullivan, 2018).

Access equity has been at the heart of efforts, dating back to the 1990s, to address disparities between technology “haves” and “have nots”, framed in terms of a “digital divide”. Digital inclusion efforts recognise a broad set of equity goals, including ability and affordability. It is widely acknowledged that those lacking in access and the means to afford the internet and other communication technologies face exclusion, marginalisation and additional challenges in navigating an increasingly digitised society and government (Humphry, 2019; van Dijk, 2020).

Despite this, access and inclusion do not in and of themselves resolve the structural problems that digital exclusion is patterned after and exacerbates: entrenched forms of social exclusion from employment, education, secure housing, healthcare, transportation and so on. That poor, underserved and marginalised communities frequently experience technology as a harmful extension of disciplinary systems is often overlooked (Eubanks, 2018; Ragnedda, 2020). Indeed, through the morally appealing response of access and inclusion, vulnerable populations, in practice, come to have their data and privacy exploited and used to fuel processes of algorithmic governance, as in the case of LinkNYC discussed below.

Improvements in digital access are often part of the offer and rationale for international development, smart urban initiatives and welfare service reform. For example, when the proposal to install the LinkNYC network of smart kiosks in New York City, replacing the city’s payphone network, was announced, the project, led by the private consortium CityBridge, was promoted as offering free, high-speed wi-fi internet access to lower-income NYC neighbourhoods that lacked the connectivity of wealthier ones (New York City Office of the Mayor, 2014). Similarly, the Australian government’s Digital Transformation Strategy (2018-2025) promised to deliver ‘services that are simple, personalised and available wherever you need them’ with multiple channels for access, even offering a virtual smart assistant called Alex to answer taxation enquiries (Digital Transformation Agency, 2021).

Digital transformation also involves a reduction in non-digital alternatives, on the back of decades of infrastructural gaps, welfare cuts and public administration efficiency dividends (Ball et al., 2022). This leads to single-point-of-service failures (notable in crises such as the COVID-19 pandemic) and digital dependency. Additionally, access to and inclusion in these systems entails exposure to increased monitoring, social sorting and surveillance of populations (Alston, 2019; Dencik & Kaun, 2020; Humphry, 2020; Humphry et al., 2022). These contradictions around the social justice goals of access and inclusion are also illustrated in the international development context, where technological connectivity is equated to a “bridge” leading to social and economic progress, and a “development shortcut”, which may instead lock developing countries into dependency on the digital capitalism of the West (Wade, 2002). Access, in these contexts, is more accurately read as access to “emerging markets” for powerful corporate interests (Bhagat & Roderick, 2020; Oyedemi, 2021). Robust critiques of “digital divide” discourses have pointed out the reductive nature of assessments of the unequal distribution of global wealth and resources, and the futility of playing “technological catch up” (Parayil, 2005). Data systems can also be used for “ceding and skewing power” (Heeks & Shekar, 2019) and for reproducing “extractive practices of historical colonialism” through new forms of “data colonialism” (Couldry & Mejias, 2018). Pro-equity data initiatives in the global South, such as “community mapping” with digital platforms, rely on extracting information from communities, while largely excluding them from decision-making and value chain processes (Heeks & Shekar, 2019).

Similarly, smart urban infrastructures in the global North, rolled out in public-private partnerships that offer free Wi-Fi and other connectivity services, simultaneously operate to access populations and subject them to the new logic of datafication. The other side of the connectivity coin is the development of new networks of data value. Green (2020) claims that the real goal of the LinkNYC kiosks was to collect and monetise user data gathered from sensing devices, including mobile devices that connected to the Wi-Fi network. For those groups who have little choice but to use these services, inclusion comes with a host of new data-related risks. Unhoused people, for example, have historically been subject to higher levels of policing (Wacquant, 2009), and smart urban objects can be co-opted into enacting regressive policies to identify, monitor and displace rough sleepers (Davis, 1990). In the United Kingdom, similar smart kiosks have been installed with a call-blocking algorithm to prevent their use for “anti-social” behaviour, meaning criminalised drug-related activities (Wray, 2019). Smart kiosks and other urban devices such as smart poles fitted with cameras, Wi-Fi and sensors (see, for example, the Northern Territory’s ‘Switching on Darwin’ scheme and San Diego’s ‘smart streetlights’) can increase the visibility of groups and even entire neighbourhoods through their interactions and data traces. The effects of these technologies can be particularly pernicious for communities with fewer resources and skills to protect themselves, and less recourse for contesting new methods of data-enabled surveillance and predictive policing (Smith, 2020; Strover et al., 2021).

Related to the increased risk of visibility and targeting is the issue of racial bias. Browne (2015) asks how algorithms, facial recognition, airport scanners, credit scores and welfare cards function as additions to the already extensive apparatus of existing racialised technologies. These technologies build on and interact with existing forms of governance shaped by and reproducing structural racism, and they increasingly interact with each other. Surveillance technologies and data capture are organised to particular ends, and powerful interests shape and leverage their design and affordances. Their normalisation also draws on rhetorical or discursive technologies, such as that of access and inclusion. Benjamin (2019) and other critical race and technology scholars critique the idea of inclusion in unjust systems. In a response to Buolamwini’s quest for “full spectrum inclusion to counter bias”, and the work of the Algorithmic Justice League, Benjamin asks: “while inclusion and accuracy are worthy goals in the abstract, given the encoding of long-standing racism in discriminatory design, what does it mean to be included, and hence more accurately identifiable, in an unjust set of social relations?” (2019, p. 124).

Insofar as concerns are raised about these developments, they are commonly framed in terms of individual rights to privacy, missing the scale and depth of the issues involved. It is not merely that “opting out” becomes untenable. The issues involved are fundamentally social rather than individual, raising questions around the appropriate relations between state and commercial providers, the allocation and availability of public space, and the starkly uneven distribution of rights to space, data, money, security and other resources. Economically disenfranchised groups are not only denied meaningful access and participation, but also colonised by data processes (Couldry & Mejias, 2018) designed to extract various forms of value from their use of “public” infrastructure which may not best serve their own interests. The terms of access, and of what exactly is accessed, are not subject to collaborative negotiation. Access is generally angled “up”: something is made available to those who otherwise “lack” it. Access is, however, bidirectional, and access “down” for powerful players is politically consequential.

The problems identified here concern the ease with which access as a progressive goal, and thus the response of access and inclusion, can reinforce existing social inequalities and impose the ideal of the agential citizen competing for exclusive access to the capacities and resources that others lack. Spandler has identified the same problem with the response of inclusion in mental health policy, which creates “an obsession with the choices and responsibilities of the individual rather than the constraining context in which they live” (2007, p. 4).

The blind spot at the centre of this pretence of horizontal neutrality exacerbates existing inequities precisely by treating everybody the same. Laudable-sounding goals such as access and inclusion are difficult to question and scrutinise because of their seemingly obvious universal benefit, and are insidious for that very reason. The naturalisation of this kind of technological progressivism, combined with the concrete issues associated with these technologies (notably around surveillance, privacy and the commodification of data), is often accompanied by calls for more transparency in technology design, operation and governance. Transparency functions as a foil to access and inclusion: both a safeguard against exploitative inclusion and the means of assuring that access goals are being met.

Transparency

Transparency is another key public service value, a means to effective administration that has come to represent an end-in-itself in democratic governance. Moore (2018) has documented how the last four decades have seen an increased emphasis on transparency, with governments providing open access and evidence to citizens on a host of administrative matters. Such transparency in government “performance” relies on and is mobilised through techniques of classification, measurement, evaluation and ranking, which bring into view that which is to be rendered transparent and acted upon. Quantificatory mechanisms generate data that can be represented numerically, so that “value for money” can be demonstrated. These numerical representations render an objectively knowable world: the data seems to “stand for itself”, inviting further intervention and improvement. Acting relative to those representations ensures responsible and good governance, and is such governance (Hansen, 2015). Transparency is a tautological value which justifies itself: it is good to see that it is good to see. New technologies have not only rendered this nominal transparency more achievable than ever before; the deployment of advanced technologies of “seeing”, and the systems within which they operate, is itself also subject to calls for transparency.

The response of transparency recurs in conversations about the efficacy and ethics of computational techniques applied in the public and private sectors. The lack of transparency associated with proprietary or “black boxed” operations of advanced technologies, such as machine learning, has drawn criticism from academics, activists and human rights advocates (Pasquale, 2015). In both public and private organisations, opacity in algorithmic design and the workings of AI systems has resulted in biased output, ranging from racial bias in automated judicial decisions (Angwin et al., 2016), to gender discrimination in hiring decisions (Dastin, 2018). These failings have seen the concept of transparency re-emphasised, and reconfigured, resulting in calls for ‘explainable’ ethical technologies (for example, Walsh et al., 2019).

This may involve informing individuals about the collection of their data, their use of or interaction with an AI system or the extent to which they are subject to an algorithmic decision (Walsh et al., 2019). For example, the EU General Data Protection Regulation introduces a right to explanation, allowing individuals to obtain “meaningful information about the logic involved” when subject to automated decision-making with legal or similarly significant effects (Regulation 679/2016). This kind of right is intended to ensure that individuals are aware of and can appeal decisions arising from algorithmic processes. It is framed as a continuation of traditional public administration norms, such as the duty to give reasons (Oswald, 2018). Explainability as transparency is considered essential for garnering public trust in the deployment of AI (see Walsh et al., 2019, p. 10). In such instances, transparency is not positioned as an end-in-itself, but as a means to enhance public acceptance of the rollout of future technologies.

However, just as with the response of access and inclusion, new configurations of technological and administrative transparency discount the practical and structural limitations forestalling equitable outcomes. For instance, informing individuals about the collection and use of data does not ensure consent to the implications associated with data use, retention, aggregation or sale. Machine learning utilising big data can produce algorithmically generated outcomes which, as administrative decisions, are neither readily explicable nor rendered by humans who could explain them (Robbins, 2019). In terms of their range of action, citizens subject to algorithmic governance are in an asymmetric relation to powerful state and corporate actors (Dencik & Kaun, 2020). Where the capacity to comprehend what it means to be subject to AI outputs cannot be assured, the limits to acting on this information and contesting algorithmic outputs may render conventional notions of transparency trivial and incongruous.

These issues can be observed in the Australian example of ‘Robo-debt’, the popular name given to the Federal Government’s Income Compliance Program (Services Australia, 2022), an automated system to identify and recover welfare overpayments. No credible claim to transparency was made in this instance, though transparency was arguably demanded of welfare recipients through the “reverse onus”: the requirement to disprove the debt was placed on the recipient (as opposed to the government proving the existence of the debt). As with access and inclusion, the response of transparency is bidirectional. The use of algorithmic techniques to (inaccurately) average the “debts” recipients owed to the government became a scandal in the national press, going through several Ombudsman and Senate inquiries before eventually being ruled unlawful (Park & Humphry, 2019; Whelan, 2020). Welfare recipients, many financially dependent on the agency demanding repayment, had limited means to contest the outcome of their purported debts, and were required to do so through cumbersome online portals. Irrespective of the extent to which citizens are informed of algorithm use, notions of consent are complicated both by the limited capacity to adequately explain what it means to be subjected to an algorithm, and by limits on opting out, particularly from government services and those corporate services without which it is almost impossible to function.
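
To indicate why this averaging was inaccurate, the following simplified sketch (in Python, using invented figures and a hypothetical income-free threshold, not the actual program rules) shows how spreading an annual income evenly across fortnights can manufacture an apparent overpayment for a person with fluctuating earnings who was paid correctly:

    # Simplified, hypothetical sketch of the income-averaging problem. All figures
    # and the threshold are invented for illustration and do not reproduce the
    # actual rules of the Income Compliance Program.

    # A casual worker who earned $26,000, but only in the second half of the year.
    actual_fortnightly_income = [0.0] * 13 + [2000.0] * 13
    annual_income = sum(actual_fortnightly_income)      # 26000.0
    averaged_fortnightly_income = annual_income / 26    # 1000.0

    income_free_threshold = 300.0  # hypothetical cut-off above which payments reduce

    # Averaging attributes income above the threshold to every fortnight, including
    # the thirteen fortnights in which nothing was earned and payments were correct.
    apparent_overpaid_fortnights = sum(
        1 for actual in actual_fortnightly_income
        if averaged_fortnightly_income > income_free_threshold
        and actual <= income_free_threshold
    )
    print(apparent_overpaid_fortnights)  # 13 fortnights flagged despite no overpayment

The apparent “debt” in this sketch is an artefact of how the calculation is specified rather than of what was actually earned in any given fortnight.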

Codified principles of transparency can also be observed in the uptake of public sector algorithmic impact assessments in, for example, the UK and Canada. Used to evaluate trade-offs with a view to mitigating adverse outcomes (Moss et al., 2020), algorithmic impact assessments, adopted by analogy with previous approaches to environmental protection (Kingsman et al., 2022; Selbst, 2021), are described as allowing for a form of public oversight and accountability (Selbst, 2021; Moss et al., 2020). Arguably, the response of transparency underpins the formulation and uptake of algorithmic impact assessments. This can be seen, for example, in the inclusion of such impact assessments within the UK government’s Algorithmic Transparency Standard (Kingsman et al., 2022). While there are different forms of algorithmic impact assessments, those used in the public sector can be understood as premised on notions of transparency as a result of traditional modes of public administration operations. As Selbst (2021) puts it: “because transparency, and specifically notice and comment frameworks, are part of the regulation that is usually applied to the public sector in the United States, it is perhaps not surprising that these proposals tend to focus on the public sector, rather than the private sector” (p. 141). Potential limitations of algorithmic impact assessments have already been noted by scholars, including the overlooking of impacted groups (see, for example, Kingsman et al., 2022; Selbst, 2021). This can also be understood in terms of the broader problem of values arising from and articulating new public management political goals.

The contemporary response of transparency, as evidenced in AI ethics codes (James & Whelan, 2022) and calls for algorithmic impact assessments, is unlikely to lead to equitable outcomes for those at the mercy of algorithmic governance. This is in part because of how venerable public administration ethics are mobilised and diluted. To ensure the reality of equitable technological outcomes, it is necessary to consider transparency relative to the structural and power imbalances constraining genuine consent and contestation. Rather than resolving the issues of technological opacity, the response of transparency administers these issues through legalistic procedures of notification and review. The preoccupation with technological transparency, and the procedural management of opacity, both obfuscate and compound another significant non-technological form of opacity: the contractual relations through which these technologies are developed and implemented.

This can be observed in the development of “smart cities”, where urban technology is extensively deployed to collect city data and manage services through enhanced network connectivity, as introduced in the previous example of LinkNYC. Many authors in this field critique smart cities and smart urbanism as commercially led and dominated by corporate actors (Kitchin, 2014; McFarlane & Söderström, 2017). One element of this is the increasing reliance on and encouragement of public-private partnerships (PPPs) by governments to access and implement the necessary technology (Luque-Ayala & Marvin, 2015; Shelton et al., 2015). For example, in November 2018, the Australian federal Minister for Cities, Urban Infrastructure and Population announced the outcomes of Round Two of the Smart Cities and Suburbs Program, allocating $21 million of the $50 million fund to 32 projects across the country (Department of Infrastructure, Regional Development and Cities, 2018). To be eligible for a grant, applicants had to fund at least 50 percent of the project through non-Commonwealth sources. Whilst this funding could come from other levels of government or NGOs, multiple collaborators were encouraged, increasing the likelihood of public-private partnerships.

Partnerships between governments and private companies increase state reliance on private actors while transferring public funds to the private sector, and further subjecting urban infrastructure to corporate power. This corporate dependency and neoliberal political economy is characteristic of the marketisation of public services (Kitchin, 2014). Public-private partnerships can also increase the opacity of government processes, as proprietary algorithms increasingly form a crucial element of smart city technology and often cannot be scrutinised (Pasquale, 2015).

Paradoxically, as the technologies of transparency and transparent governance have developed, the increasingly automated mechanisms by which data is generated and acted upon become more complex and obscure (James & Whelan, 2022). This tends to undermine efforts at democratic data governance, in that the intersection of ‘black boxed’ machine learning and large-scale datasets leads to a situation where transparency becomes increasingly remote: the end is hijacked by the means. Transparency as a regime of political value and an administrative end-in-itself becomes less plausible, and its disciplinary functions become more evident (Hoffman, 2020). In the next section, we examine how the response of refusal departs from those underpinned by traditional public administration values, such as access and inclusion and transparency.

Data justice, abolitionism and the response of refusal

An emergent response to algorithmic governance, arising in the context of discussions of data justice (Dencik et al., 2018; Taylor, 2017) and abolition (Benjamin, 2019; Milner, 2019), is “critical refusal” (Barabas, 2022; Cifor et al., 2019; Gangadharan, 2020; Garcia et al., 2020). Approaches foregrounding access and inclusion (as with LinkNYC), or transparency (as with the proliferation of ethical AI strategies), generally take the position that better governance of technologies is the appropriate solution to data-driven harms. This leaves unchallenged the basic premises on which these technologies are rolled out. Data justice, abolitionism and refusal seek to widen the debate: foregrounding questions of political economy and structural racism, and taking embodied difference and social justice as starting points (Benjamin, 2019; Costanza-Chock, 2020; Dencik et al., 2018; Hoffman, 2020).

To situate data in the context of existing social structures and systems, data justice researchers and activists ask: whose interests are being advanced by rapid processes of datafication via digital media? What are the implications of shifting functions previously the preserve of governments and state agencies to public-private partnerships (PPPs) or multinational corporations? What are the labour conditions and the possibilities for oversight and regulation under “surveillance capitalism” or “data capitalism”? What might be the possibilities for collective or cooperative alternatives? And at what points are we able to refuse data technologies altogether?

A central tenet of data justice involves beginning with “impacted communities and social groups” to trace injustice, oppression and domination. Though digital media and processes of datafication have rapidly proliferated, the impacts, harms and benefits are highly uneven in their distribution (Costanza-Chock, 2020; Dencik et al., 2018; Eubanks, 2018). Questions around data justice are grounded within the context of pre-existing struggles against domination and oppression (Dencik et al., 2018). This is vital in the context of “ubiquitous surveillance” (Andrejevic, 2011), in which individuals and societies are “colonised by data” (Couldry & Mejias, 2018), because technological and data systems so often exacerbate existing inequalities and forms of oppression, and because it is highly restrictive to bracket these technologies from the social and political contexts which give rise to them. Data justice research approaches datafication as an issue of social, political and economic justice, rather than as a matter of individual privacy, or of procedural norms – which have only ever been selectively applied at the best of times (Dencik et al., 2018). An important aim is to sidestep epistemic data-centrism in studying and understanding the implications of datafication. In this vein, Dencik and Kaun (2020) argue for understanding the datafication of the welfare state as a political rather than technological development. They place values of social solidarity and social mobility as contradictory to the values of individualisation encouraged by algorithmic governance, which privilege individual responsibility over that of the collective. For instance, datafied systems emphasise correlation over causation, which contributes to the view of social problems as individual failings, and the side-lining of structural causes. Dencik and Kaun argue that now is a moment “to rearticulate the role and importance of organising welfare based on solidarity, universal access, and equality” (2020, p. 5). In moving debates around social justice and algorithmic governance away from discussions of privacy, access and inclusion, transparency and other responses seeking to refine rather than transform existing systems, data justice advances alternate strategies and modalities of intervention.

A range of interventions can be identified here, including investigative journalism that exposes the (unintended) harms of data-driven governance systems (Ganesh & Moss, 2022), “reverse ratings” such as citizen scoring projects that rate judges on the severity of their sentencing (Barabas, 2022), research engagement interventions to develop Public Interest Tech (at UC Berkeley) and Digital Public Infrastructure (at Berkman Klein Center, Harvard University), and modifications developed within Big Tech in response to crises and exposure of algorithmic harms. Ganesh and Moss (2022) draw attention to how Big Tech’s increasing interventions to address algorithmic harms serve to reinscribe their own influence and narrow the frame of response to managing fairness and bias. In contrast, New Luddism (Ganesh & Moss, 2022) or Neo-Luddism draws on the long history of Luddite refusal and seeks to greatly expand the frame of reference:

A neo-Luddite movement would understand no technology is sacred in itself, but rather any technology is worthwhile only insofar as it benefits society. It would confront the harms done by digital capitalism and seek to address them by giving people more power over the technological systems that structure their lives (Sadowski, 2021).

This intervention highlights the importance of refusal: the capacity to say “no” to the application and development of data-driven systems (Benjamin, 2019; Cifor et al., 2019). Refusal is a political strategy, and a constant, in that we can always interrogate where and how the right to refuse can be exercised. There is a long history of radical refusal in Indigenous (Coulthard, 2014; Duarte, 2017; Simpson, 2017), anarchist (Fessenden, 2019; Malatesta, 2020) and Marxist struggles (Pizzolato, 2017; Stevenson, 2018). Benjamin’s (2019) work on refusal and solidarity, for example, draws on a long history of abolitionist thought and activism, recent developments in critical race theory, and contemporary movements responding to systemic racism, surveillance and the over-policing of Black communities in the USA. When applied to data practices, critical refusal is understood as “an informed practice of ‘talking back’ [...] a generative concept for challenging harmful data practices, while simultaneously negotiating and developing alternative actions” (Garcia et al., 2020, p. 93). As Ganesh and Moss explain, rather than improving or reforming current socio-technical systems, “refusal alerts us to how we must consider the quality and experiences of life under conditions beholden to algorithmic logics” (2022, p. 93), and declines the power of Big Tech over social, biological and interior life.

While refusal has a long history, in this paper we focus on the resurgent scholarly interest in the politics of refusal as a response to ubiquitous datafication, evidenced by a proliferation of recent publications exemplified by The Refusal Conference convened at Berkeley in 2019 with the strapline: “At this conference we lean in to the idea that sometimes making a more just or equitable society means refusing certain technologies or applications of technology” (Algorithmic Fairness and Opacity Group, n.d.). As the strapline makes clear, the politics of refusal begins not simply from the challenge of making algorithmic governance fairer, but rather from the fundamental question of how to build a more just society, and what might be the place of technologies within it. In practice, refusal takes many forms. Here we briefly sketch examples of two modes of organised refusal in practice: “ground up” (Ganesh & Moss, 2022) community organising in the example of #BlockSidewalk in Toronto, and individual and collective refusals by computational practitioners (Barabas, 2022), including from within Google.

The community response to Sidewalk Toronto can be seen as an example of “ground up” refusal in practice. This project, launched in 2017 by Sidewalk Labs, a sister company of Google, planned to turn a dockside region of Toronto into “the ‘smartest’ of smart cities” (Bernholz, 2020, p. 108). Sidewalk Toronto was to be a “test bed for urban technologies” (Mattern, 2022, p. 46) and proposed an “open digital ecosystem” that would encourage urban innovation focused around mobility, sustainability and housing (Sidewalk Labs, n.d.). One concrete element underpinning this open digital ecosystem was the planned installation of “standardised mounts”: digital connection points installed in public spaces that would serve as a mount for various devices such as traffic counters, air quality monitoring and bicycle counters. Sidewalk Labs claimed that standardising these would “reduce the cost of deploying digital innovations” (Sidewalk Labs, 2019, p. 382). The data generated from these devices “would be made publicly accessible (on a non-discriminatory basis), enabling companies, community members, and other third parties to use it as a foundation to build new tools” (Sidewalk Labs, 2019, p. 378), directly encouraging the private use of public data in pursuit of innovation.

Sidewalk Toronto went through a number of years of consultation, during which a group of around 30 Torontonians banded together under the #BlockSidewalk campaign. This group grew to include many organisations and over 1,000 Torontonians working together in the pursuit of a collective expression of refusal (#BlockSidewalk, n.d.). A central element of the campaign, as stated by one #BlockSidewalk activist, was to “put No on the table” (O’Kane, 2019). Concerns around transparency, privacy and data ownership were present within the campaign against Sidewalk Toronto. However, even if these concerns had been adequately assuaged, fundamental questions about the shape of social relations and civic life would have remained unaddressed. If the project were completely transparent and accessible, it would still not be a vision of the city the residents of Toronto desired, and would not address core concerns around the increasing encroachment of for-profit technology companies on civic life (Mann et al., 2020). As a June 2019 media release from #BlockSidewalk states, “this is as much about privatization and corporate control as it is about privacy” (#BlockSidewalk, 2019). Within this context, refusal offers a more transformative response that challenges the basic premise on which technologies are rolled out. The collective refusal enacted by #BlockSidewalk is generative, not only saying no, but also creating space for alternative futures and alternative means of collaboratively envisioning them. After over a year of community campaigning against the project, Sidewalk Labs scrapped its plans and withdrew from Toronto in May 2020, citing pandemic-related business concerns (Bernholz, 2020). After Sidewalk was successfully blocked from its ambitions in Toronto, the #BlockSidewalk campaign called on supporters to continue efforts to “maintain public control over our communities and over decisions regarding how and when to use technologies in them” (#BlockSidewalk, 2020), pursuing a vision of urban civic life that is not dominated by private interests.

While marginalised communities subject to algorithmic governance may have a limited capacity or autonomy to refuse these systems, tech workers and “computational professionals” (Barabas, 2022) have demonstrated possibilities and increasing commitment to organised practices of refusal. Barabas uses the term “computational practitioners” to refer to “a wide range of actors in academia, industry and government who are engaged in data-centred discourse, research, and design” and argues that refusal is a particularly useful concept for computational practitioners involved in data ethics projects, commonly referred to as FAccT ML “Fair, Accountable, and Transparent Machine Learning” (Barabas, 2022, p. 3). Barabas draws on Benjamin’s concept of “second-hand refusal” (Barabas, 2022, p. 8) to explore refusal practiced by these powerful institutional actors – including computer scientists and technology designers – who often occupy privileged positions within organisations. For example, the Google Walkout involved thousands of employees staging protests to demand the company end lucrative military, immigration and policing contracts (Lerman, 2019). Interdisciplinary groups of scholars have demanded that academic journals refuse to publish studies that fuel the “tech to prison pipeline” (Barabas, 2022, p. 5). Student groups such as NoTechForTyrants mobilise students to refuse Big Tech recruitment campaigns, and the Harvard Alliance Against Campus Cops have organised petitions to cancel computational courses that treat marginalised communities as “laboratories for experimentation” (Barabas, 2022, p. 5).

Further examples of refusal put into practice can be found where local governments abandon data-driven technologies, and in the work of the Design Justice Network. In a research project identifying and analysing algorithmic governance projects that have been cancelled or halted, Redden and colleagues found that reasons for cancelling included concerns about negative effects and bias, and observed that a recurring factor was “a failure to consult with the public and particularly with those who will be most affected by the use of these automated and predictive systems before implementing them” (Marsh, 2020). In contrast, Costanza-Chock (2020) engages with the work of the Design Justice Network and advocates for Design Justice: a data justice approach that prioritises maximum participation of the most impacted in every aspect of technology and policy design, including the possibility to say no and refuse. The work is animated by restorative conceptions of justice, seeking to transform rather than simply ameliorate relations of domination and oppression. In some ways these forms of data justice hearken back to older ideals and models of collective decision-making: participatory budgeting, policy co-production, autogestion, industrial democracy and so on.

In developing a toolkit of ‘abolitionist tools for the new Jim Code’, Ruha Benjamin (2019) reminds readers that calls for abolition are never simply about ending harmful systems; they are also about envisaging new ones. This dual movement is evident in the Feminist Data Manifest-No (Cifor et al., 2019), described as “a declaration of refusal and commitment”: refusing harmful data regimes and committing to more just data futures. Refusal is intended to enable solidarities across interlocking struggles (Benjamin, 2019), paired with a commitment “to centring creative and collective forms of life, living, and worldmaking that exceed the neoliberal logics and resist the market-driven forces to commodify human experience” (Cifor et al., 2019). The capacity to refuse is crucial: “we commit to ‘no’ being a real option in all online interactions with data-driven products and platforms and to enacting a new type of data regime that knits the ‘no’ into its fabric”.

Data justice, abolition and refusal can be distinguished from responses of access and inclusion, and transparency. Rather than looking to get “in” to the mechanism to ensure it is inclusive, or that its doings are comprehensible, abolition and data justice expand the site of contestation: locating datafication in a sociohistorical context, asking questions about how datafication comes to be normalised and how it can instead be defamiliarised and rejected. Rather than isolating the technology and seeking means to accommodate it in the existing social field, refusal as a principle of abolitionism and data justice implies beginning from an understanding that the existing social field is already arbitrarily unjust, that technology is socially shaped and that the legitimacy of technology and technological progressivism is not predetermined elsewhere. Abolition and data justice also open questions of strategy, insofar as the two previously discussed responses seek to work within existing dynamics, while refusal must remain “excessive” or ungovernable to continue its interrogation and negation.

Refusal as practice

All of the responses presented here are underpinned by social justice ideals. They seek in different ways to mitigate existing social inequalities, or the inequalities brought about by technological governance, or to maximise the benefits enabled by technological governance for social good. When practically applied, however, such responses can also result in social harms, reinforcing or generating new exclusions, inequalities and injustices. This occurs especially where responses are largely benign in a politically liberal sense, and thereby amenable to co-optation.

As with access and inclusion and transparency, the concept of refusal has a history that predates its application as a response to algorithmic governance, but it has re-emerged as a contemporary iteration of data justice. It is worth considering how resurgent refusal might navigate some of the limitations and shortcomings of transparency and inclusion. Refusal is not a policy goal or a request for adequate opt-out clauses. Although it may advocate for these, it does not rest with them, because it does not cede control; rather, refusal is a way to think about an ongoing and dynamic space from which to contest dominant regimes. Rather than asking for refusal to be instituted as a tick box (as with the ‘None of the Above’ or NOTA ballot option), we can ask how refusal can be practiced across distinctive domains: by end users, by programmers and people working in the tech industries, by welfare recipients, by people involved in automated and digital service delivery and so on. Refusal is about enhancing the space for democratic participation regarding how everyday life is mediated and how the digital traces of that mediation are handled. From the abolitionist perspective, refusal is about disrupting and reinventing existing systems and power relations and creating the space to imagine and bring about a more liveable world.

The authors of the Manifest-No acknowledge that “not everyone can safely refuse or opt out without consequence or further harm” (Cifor et al., 2019), and this requires particular attention when applied to citizen-government interaction. While there is, unquestionably, an imbalance of power between individuals and corporate actors, this imbalance is heightened further in relations between citizens and the government. Those reliant on government services are not afforded many concrete opportunities to refuse. This can be demonstrated by Robo-debt, where refusal to respond only led to further disciplinary action or harm, for example, debt recovery by garnishing income or by private debt collectors (Chisnall, 2020). While it was possible to contest the imposition of the debt, funds were automatically recouped from former claimants even while their appeal was under consideration. It was ‘impossible to refuse’ the recovery of the assumed debt, even when the debt had not been proved and calculated lawfully. In this way, refusal, like digital disconnection strategies, may be imperative but nonetheless limited by uneven distributions of agency and power.

Refusal thus has two important implications. The first is the identification and evaluation of the distributed capacity to act on an objection. Refusal incorporates the objection and the ongoing assertion of the right to refuse in practice. Refusal does not entail success. As with the other responses, refusal is also bidirectional. It is possible to refuse, but to be unwillingly compelled nonetheless, and it is important to be explicit about naming this compulsion when it occurs. Refusal is never completely enclosed: it always remains possible to refuse that enclosure. Refusal is thus open-ended in the same way that access and inclusion and transparency are, although refusal looks “out” rather than “in”.

The second implication concerns how refusal must be iterated socially: the second part of “refusal and commitment”. While refusal might begin with small critical gestures, it does not unfold within the parameters of the individualised neoliberal frame. We could characterise “Facebook suicide” (Karppi, 2011) as a type of refusal, but refusal in the sense we are describing is not reducible to voluntaristic forms of exit. In some senses, refusal can require presence to effect change. Refusal is not absence.

Refusal thus invites and builds forms of collective recognition and solidarity. Refusal entails joint action, both horizontally and vertically, in terms of building solidarity within and across institutions and social fields. Refusal contests existing power dynamics and rejects existing forms of hierarchy. Refusal is not about asking for permission. The efficacy and range of refusal can be demonstrated by the diversity of contexts in which it has been sustained. Refusal is the strategic and political warrant for the strike (Tronti, 1980), for work refusal (Frayne, 2015), for pacifist and nonviolent resistance (Sharp & Finkelstein, 1973; Mazali, 2004) and for various forms of feminist politics (Ambrosch, 2016; Ferreira da Silva, 2018; Honig, 2021).

As such, refusal of ubiquitous datafication can take many forms, as discussed above. A number of actions occurring on a global scale provide evidence of these possibilities, ranging from the collective forms of refusal seen among participants in the ‘Our Data Bodies’ project with racialised communities in the USA (Gangadharan, 2020), to refusal undertaken by UK local governments that have discontinued or abandoned their use of algorithmic decision-making tools (Marsh, 2020). Similarly, grassroots campaigns have spurred some local governments to suspend the use of facial recognition software (e.g. Conger et al., 2019), and strikes by Swedish workers resulted in the abandonment of plans to automate decision-making processes for social benefits (Dencik & Kaun, 2020, p. 4). In the research field, groups focused on increasing diversity in AI have announced that they will no longer accept funding from Google and have called on academics to refuse to review papers submitted to machine learning conferences sponsored by Google (Johnson, 2020).

While these actions are not explicitly linked to the philosophical underpinnings of refusal, they nonetheless demonstrate how technological refusal can be undertaken by collective and institutional actors. However, we have yet to find widespread collective refusals and solidarities of this kind in the areas of social welfare provision and the algorithmic governance of marginalised communities that form the focus of this paper. Given the structural power imbalance of citizen-government relations noted above, this may be unsurprising. Nevertheless, emergent strategies and struggles of collective and “second-hand” refusal developing in commercial and research contexts and among anti-racist social movements may point to possibilities for successful action.

Formulating refusal effectively involves guarding against the possibility of co-optation (Barabas, 2022; Ganesh & Moss, 2022). Given the inequitable trajectory of the responses of transparency and access and inclusion, and even of ethics (James & Whelan, 2022), it is possible to imagine a future in which refusal is politically neutralised, stripped of its capacity to generate change, and productive of its own social harms. Powerful commercial and state actors could weaponise the language of refusal and regulate the capacity to refuse, dismissing dissent on the grounds that, in such a context, everyone could simply “just say no”.

Conclusion

In this paper we have discussed three responses to novel technologies that produce socially undesirable outcomes. While these responses are properly understood as a dialogue rather than a chronology, we have structured our argument to end on refusal, partly because of its intellectual and political promise, and partly because of its contemporary valence and resonance. Unlike responses that seek to alter governance from within given social and economic structures, and in accordance with traditional public administration values, refusal involves contesting these structures. Refusal extends beyond technological refusal, becoming an ongoing and iterative rejection of inequity, neoliberal governance techniques and prevailing algorithmic exploitation. We have sought to build on new and emerging scholarship that proposes refusal, to point to its sometimes overlooked antecedents, and to suggest starting points for further discussion that can help ensure specific strategies and contexts of refusal remain generative.

As a position like refusal becomes operationalised, some of its current political intentions may become diluted or redirected. This is a feature of all political-analytical strategies. Like the other responses described here, refusal has a long and varied history, and there is no one right way to practice it. Despite the potential for such responses to eventually reproduce the inequalities they seek to remedy, refusal offers a powerful and compelling alternative: to reconfigure networked relations and empower citizenry, transforming not just technological systems, but the social relations that give rise to them and in which they are imposed.

References

Alston, P. (2019). Report of the Special Rapporteur on extreme poverty and human rights (A/74/48037; pp. 1–20). UN General Assembly. http://www.ohchr.org/Documents/Issues/Poverty/A_74_48037_AdvanceUneditedVersion.docx

Ambrosch, G. (2016). ‘Refusing to be a man’: Gender, feminism and queer identity in the punk culture. Punk & Post Punk, 5(3), 247–264. https://doi.org/10.1386/punk.5.3.247_1

Andrejevic, M. B. (2011). Surveillance and alienation in the online economy. Surveillance & Society, 8(3), 278–287. https://doi.org/10.24908/ss.v8i3.4164

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Ball, S., Considine, M., Lewis, J. M., McGann, M., Nguyen, P., & O’Sullivan, S. (2022). The digital governance of welfare to work. Industry report on interviews with international experts (pp. 1–23) [Report]. University of Melbourne, UNSW & La Trobe University. https://arts.unimelb.edu.au/__data/assets/pdf_file/0009/4079232/Digital_Governance_Industry_Report_One.pdf

Bannister, F., & Connolly, R. (2014). ICT, public values and transformative government: A framework and programme for research. Government Information Quarterly, 31(1), 119–128. https://doi.org/10.1016/j.giq.2013.06.002

Barabas, C. (2022). Refusal in data ethics: Re-imagining the code beneath the code of computation in the carceral state. Engaging Science, Technology, and Society, 8(2). https://doi.org/10.17351/ests2022.1233

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.

Bernholz, L. (2020). Purpose-built digital associations. In L. Bernholz, H. Landemore, & R. Reich (Eds.), Digital technology and democratic theory (pp. 90–112). The University of Chicago Press.

Bhagat, A., & Roderick, L. (2020). Banking on refugees: Racialized expropriation in the fintech era. Environment and Planning A: Economy and Space, 52(8), 1498–1515. https://doi.org/10.1177/0308518X20904070

BlockSidewalk. (n.d.). About us [Campaign]. #BlockSidewalk. https://www.blocksidewalk.ca/about

BlockSidewalk. (2019). Media releases [Campaign]. #BlockSidewalk. https://www.blocksidewalk.ca/media

BlockSidewalk. (2020). We’ve #BlockedSidewalk. It’s time to build the waterfront we need [Campaign]. #BlockSidewalk. https://www.blocksidewalk.ca/quayside2_0

Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke University Press. https://read.dukeupress.edu/books/book/147/

Chisnall, M. (2020). Digital slavery, time for abolition? Policy Studies, 41(5), 488–506. https://doi.org/10.1080/01442872.2020.1724926

Chohan, S. R., & Hu, G. (2022). Strengthening digital inclusion through e-government: Cohesive ICT training programs to intensify digital competency. Information Technology for Development, 28(1), 16–38. https://doi.org/10.1080/02681102.2020.1841713

Cifor, M., Garcia, P., Cowan, T. L., Rault, J., Sutherland, T., Chan, A., Rode, J., Hoffman, A. L., Salehi, N., & Nakamura, L. (2019). Feminist data manifest-no [Declaration]. https://www.manifestno.com/

Conger, K., Fausset, R., & Kovaleski, S. F. (2019, May 14). San Francisco bans facial recognition technology. The New York Times. https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press. https://doi.org/10.7551/mitpress/12255.001.0001

Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632

Coulthard, G. S. (2014). Red skin, white masks: Rejecting the colonial politics of recognition. University of Minnesota Press.

Danaher, J. (2016). The threat of algocracy: Reality, resistance and accommodation. Philosophy & Technology, 29(3), 245–268. https://doi.org/10.1007/s13347-015-0211-1

Dastin, J. (2018, October 11). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

Davis, M. (1990). City of quartz: Excavating the future in Los Angeles (Paperback edition). Verso.

Dencik, L., Jansen, F., & Metcalfe, P. (2018). A conceptual framework for approaching social justice in an age of datafication. DATAJUSTICE project. https://datajusticeproject.net/2018/08/30/a-conceptual-framework-for-approaching-social-justice-in-an-age-of-datafication/

Dencik, L., & Kaun, A. (2020). Datafication and the welfare state. Global Perspectives, 1(1), 12912. https://doi.org/10.1525/gp.2020.12912

Department of Infrastructure, Regional Development and Cities. (2018). $21 Million for smart cities projects across the country [Press release]. Australian Government. https://minister.infrastructure.gov.au/tudge/media-release/21-million-smart-cities-projects-across-country

Didier, E. (2020). America by the numbers: Quantification, democracy, and the birth of national statistics. The MIT Press.

Digital Transformation Agency. (2021). Digital Transformation Strategy 2018-2025. Ministerial Forward. Australian Government. https://www.dta.gov.au/digital-transformation-strategy/digital-transformation-strategy-2018-2025/ministerial-foreword

Duarte, M. E. (2017). Network sovereignty: Building the internet across Indian Country. University of Washington Press.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor (First edition). St. Martin’s Press.

Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), Document 32016R0679. http://data.europa.eu/eli/reg/2016/679/oj

Algorithmic Fairness and Opacity Group (AFOG). (n.d.). The Refusal Conference. AFOG Berkeley. https://afog.berkeley.edu/programs/the-refusal-conference#overview

Ferreira da Silva, D. (2018). Hacking the subject: Black feminism and refusal beyond the limits of critique. PhiloSOPHIA, 8(1), 19–41. https://doi.org/10.1353/phi.2018.0001

Fessenden, S. G. (2019). Drawing the contours of ethnography: Ethnographic refusal and anarchistic consent in fieldwork and writing. Collaborative Anthropologies, 11(2), 92–109. https://doi.org/10.1353/cla.2019.0003

Frayne, D. (2015). The refusal of work: The theory and practice of resistance to work. Bloomsbury Academic.

Fuchs, C., & Horak, E. (2008). Africa and the digital divide. Telematics and Informatics, 25(2), 99–116. https://doi.org/10.1016/j.tele.2006.06.004

Ganesh, M. I., & Moss, E. (2022). Resistance and refusal to algorithmic harms: Varieties of ‘knowledge projects’. Media International Australia, 183(1), 90–106. https://doi.org/10.1177/1329878X221076288

Gangadharan, S. P. (2020). Digital exclusion: A politics of refusal. In H. Landemore, R. Reich, & L. Bernholz (Eds.), Digital technology and democratic theory (pp. 113–140). University of Chicago Press.

Garcia, P., Sutherland, T., Cifor, M., Chan, A. S., Klein, L., D’Ignazio, C., & Salehi, N. (2020). No: Critical refusal as feminist data practice. Conference Companion Publication of the 2020 on Computer Supported Cooperative Work and Social Computing. https://doi.org/10.1145/3406865.3419014

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–193). The MIT Press. https://doi.org/10.7551/mitpress/9780262525374.001.0001

Green, B. (2020). The smart enough city: Putting technology in its place to reclaim our urban future (First MIT Press paperback edition). The MIT Press.

Gritsenko, D., & Wood, M. (2022). Algorithmic governance: A modes of governance approach. Regulation & Governance, 16(1), 45–62. https://doi.org/10.1111/rego.12367

Hansen, H. K. (2015). Numerical operations, transparency illusions and the datafication of governance. European Journal of Social Theory, 18(2), 203–220. https://doi.org/10.1177/1368431014555260

Hardill, I., & O’Sullivan, R. (2018). E-government: Accessing public services online: Implications for citizenship. Local Economy: The Journal of the Local Economy Policy Unit, 33(1), 3–9. https://doi.org/10.1177/0269094217753090

Heeks, R., & Shekhar, S. (2019). Datafication, development and marginalised urban communities: An applied data justice framework. Information, Communication & Society, 22(7), 992–1011. https://doi.org/10.1080/1369118X.2019.1599039

Hoffmann, A. L. (2021). Terms of inclusion: Data, discourse, violence. New Media & Society, 23(12), 3539–3556. https://doi.org/10.1177/1461444820958725

Honig, B. (2021). A feminist theory of refusal. Harvard University Press. https://doi.org/10.2307/j.ctv1jpf62p

Humphry, J. (2019). ‘Digital First’: Homelessness and data use in an online service environment. Communication Research and Practice, 5(2), 172–187. https://doi.org/10.1080/22041451.2019.1601418

Humphry, J. (2020). ‘Second class’ access: Homelessness and the digital materialization of class. In E. Polson, L. Schofield Clark, & R. Gajjala (Eds.), The Routledge companion to media and class (pp. 242–252). Routledge.

Humphry, J., Maalsen, S., Gangneux, J., Chesher, C., Hanchard, M., Joss, S., Merrington, P., & Wessels, B. (2022). The design and public imaginaries of smart street furniture. In S. Flynn (Ed.), Equality in the city: Imaginaries of the smart future (pp. 127–148).

James, A., & Whelan, A. (2022). ‘Ethical’ artificial intelligence in the welfare state: Discourse and discrepancy in Australian social services. Critical Social Policy, 42(1), 22–42. https://doi.org/10.1177/0261018320985463

Johnson, K. (2020, December 7). Researchers are starting to refuse to review Google AI papers. VentureBeat. https://venturebeat.com/2020/12/07/researchers-are-starting-to-refuse-to-review-google-ai-papers/

Karppi, T. (2011). Digital suicide and the biopolitics of leaving Facebook. Transformations: Journal of Media and Culture, 2011(20), 1–18.

Katzenbach, C., & Ulbricht, L. (2019). Algorithmic governance. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1424

Kingsman, N., Kazim, E., Chaudhry, A., Hilliard, A., Koshiyama, A., Polle, R., Pavey, G., & Mohammed, U. (2022). Public sector AI transparency standard: UK Government seeks to lead by example. Discover Artificial Intelligence, 2(2). https://doi.org/10.1007/s44163-022-00018-4

Kirk-Greene, A. (1999). Public administration and the colonial administrator. Public Administration and Development, 19(5), 507–519.

Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1–14. https://doi.org/10.1007/s10708-013-9516-8

Latzer, M., & Festic, N. (2019). A guideline for understanding and measuring algorithmic governance in everyday life. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1415

Lerman, R. (2019, August 16). Google employees call for pledge not to work with ICE. ABC News. https://abcnews.go.com/Technology/wireStory/google-employees-call-pledge-work-ice-65012148

Lowrie, I. (2018). Algorithms and automation: An introduction. Cultural Anthropology, 33(3), 349–359. https://doi.org/10.14506/ca33.3.01

Luque-Ayala, A., & Marvin, S. (2015). Developing a critical understanding of smart urbanism? Urban Studies, 52(12), 2105–2116. https://doi.org/10.1177/0042098015577319

Malatesta, E. (2020). An anarchist program (V. Richards, Trans.). In D. Turcato (Ed.), The method of freedom: An Errico Malatesta reader (pp. 279–293). AK Press.

Mann, M., Mitchell, P., Foth, M., & Anastasiu, I. (2020). #BlockSidewalk to Barcelona: Technological sovereignty and the social license to operate smart cities. Journal of the Association for Information Science and Technology, 71(9), 1103–1115. https://doi.org/10.1002/asi.24387

Marsh, S. (2020, August 24). Councils scrapping use of algorithms in benefit and welfare decisions. The Guardian. https://www.theguardian.com/society/2020/aug/24/councils-scrapping-algorithms-benefit-welfare-decisions-concerns-bias

Mascio, F. D., Natalini, A., & Cacciatore, F. (2020). Public administration and creeping crises: Insights from COVID-19 pandemic in Italy. The American Review of Public Administration, 50(6–7), 621–627. https://doi.org/10.1177/0275074020941735

Mattern, S. (2022). Sidewalks of concrete and code. In S. Sharma & R. Singh (Eds.), Re-understanding media: Feminist extensions of Marshall McLuhan (pp. 36–50). Duke University Press.

Mazali, R. (2004). Acts of refusal. An interview with Rela Mazali. Middle East Report, 231, 22–25. https://doi.org/10.2307/1559432

McFarlane, C., & Söderström, O. (2017). On alternative smart cities: From a technology-intensive to a knowledge-intensive smart urbanism. City, 21(3–4), 312–328. https://doi.org/10.1080/13604813.2017.1327166

Milner, Y. (2019, July 8). Abolish big data. Medium. https://medium.com/@YESHICAN/abolish-big-data-ad0871579a41

Moore, S. (2018). Towards a sociology of institutional transparency: Openness, deception and the problem of public trust. Sociology, 52(2), 416–430. https://doi.org/10.1177/0038038516686530

Moss, E., Watkins, E., Metcalf, J., & Elish, M. (2020). Governing with algorithmic impact assessments: Six observations. Proceedings of the AAAI / ACM Conference on Artificial Intelligence, Ethics, and Society. https://doi.org/10.2139/ssrn.3584818

NYC Office of the Mayor. (2014, December 8). Support pours in for LinkNYC. Official Website of the City of New York. https://www1.nyc.gov/office-of-the-mayor/news/944-14/support-pours-for-linknyc

O’Kane, J. (2019, November 24). Opponents of Sidewalk Labs get advice from German tech protesters. The Globe and Mail. https://www.theglobeandmail.com/business/article-opponents-of-sidewalk-labs-get-advice-from-german-tech-protesters/

Oswald, M. (2018). Algorithm-assisted decision-making in the public sector: Framing the issues using administrative law rules governing discretionary power. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170359. https://doi.org/10.1098/rsta.2017.0359

Oyedemi, T. D. (2021). Digital coloniality and ‘next billion users’: The political economy of Google Station in Nigeria. Information, Communication & Society, 24(3), 329–343. https://doi.org/10.1080/1369118X.2020.1804982

Parayil, G. (2005). The digital divide and increasing returns: Contradictions of informational capitalism. The Information Society, 21(1), 41–51. https://doi.org/10.1080/01972240590895900

Park, S., & Humphry, J. (2019). Exclusion by design: Intersections of social, digital and data exclusion. Information, Communication & Society, 22(7), 934–953. https://doi.org/10.1080/1369118X.2019.1606266

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061

Pizzolato, N. (2017). A new revolutionary practice: Operaisti and the ‘refusal of work’ in 1970’s Italy. Estudos Históricos (Rio de Janeiro), 30(61), 449–464. https://doi.org/10.1590/s2178-14942017000200008

Porter, T. M. (2020). The rise of statistical thinking, 1820-1900 (New edition). Princeton University Press.

Ragnedda, M. (2020). Enhancing digital equity: Connecting the digital underclass. Palgrave Macmillan.

Robbins, S. (2019). A misdirected principle with a catch: Explicability for AI. Minds and Machines, 29(4), 495–514. https://doi.org/10.1007/s11023-019-09509-3

Roughgarden, T. (2017). Algorithms illuminated. Part 1: The basics (First edition). Soundlikeyourself Publishing.

Sadowski, J. (2021, August 9). I’m a Luddite. You should be one too. The Conversation. https://theconversation.com/im-a-luddite-you-should-be-one-too-163172

Sætra, H. S. (2020). A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government. Technology in Society, 62, 101283. https://doi.org/10.1016/j.techsoc.2020.101283

Selbst, A. D. (2021). An institutional view of algorithmic impact assessments. Harvard Journal of Law & Technology, 35(1), 117–191.

Services Australia. (2022). Class action settlement. Australian Government. https://www.servicesaustralia.gov.au/information-for-people-who-got-class-action-settlement-notice?context=60271

Sharp, G., & Finkelstein, M. (1973). The methods of nonviolent action. Porter Sargent.

Shelton, T., Zook, M., & Wiig, A. (2015). The ‘actually existing smart city’. Cambridge Journal of Regions, Economy and Society, 8(1), 13–25. https://doi.org/10.1093/cjres/rsu026

Sidewalk Labs. (n.d.). Sidewalk Toronto. Sidewalk Labs. https://www.sidewalklabs.com/toronto

Sidewalk Labs. (2019). Master innovation and development plan. Sidewalk Labs. https://www.sidewalklabs.com/toronto

Simpson, A. (2017). The ruse of consent and the anatomy of ‘refusal’: Cases from indigenous North America and Australia. Postcolonial Studies, 20(1), 18–33. https://doi.org/10.1080/13688790.2017.1334283

Smith, G. J. (2020). The politics of algorithmic governance in the black box city. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720933989

Spandler, H. (2007). From social exclusion to inclusion? A critique of the inclusion imperative in mental health. Medical Sociology Online, 2(2), 3–16.

Stevenson, N. (2018). The return of radical humanism in Marxism and Anarchism? The art of refusal, resistance and humility. In S. Çoban (Ed.), Media, ideology and hegemony (pp. 41–58).

Strover, S., Esteva, M., Cao, T., & Park, S. (2021). Public policy meets public surveillance. Selected Papers of AoIR 2021: The 22nd Annual Conference of the Association of Internet Researchers. https://journals.uic.edu/ojs/index.php/spir/article/view/12247/10437

Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717736335

Tronti, M. (1980). The strategy of refusal. Semiotext(e), 3(3), 28–34.

van Dijk, J. (2020). The digital divide. Polity Press.

Wacquant, L. J. D. (2009). Prisons of poverty (Expanded ed). University of Minnesota Press.

Wade, R. H. (2002). Bridging the digital divide: New route to development or new form of dependency? Global Governance: A Review of Multilateralism and International Organizations, 8(4), 443–466. https://doi.org/10.1163/19426720-00804005

Walsh, T., Levy, N., Bell, G., Elliott, A., Maclaurin, J., Mareels, I. M. Y., & Wood, F. M. (2019). The effective and ethical development of artificial intelligence: An opportunity to improve our wellbeing (pp. 1–233) [Report]. Australian Council of Learned Academies (ACOLA).

Whelan, A. (2020). “Ask for more time”: Big data chronopolitics in the Australian welfare bureaucracy. Critical Sociology, 46(6), 867–880. https://doi.org/10.1177/0896920519866004

Wray, S. (2019, April 16). InLinkUK rolls out call-blocking algorithm to prevent kiosks being used for crime. Smart Cities World. https://www.smartcitiesworld.net/news/news/inlinkuk-rolls-out-call-blocking-algorithm-to-prevent-kiosks-being-used-for-crime--4082