Data justice

Lina Dencik, School of Journalism, Media and Culture, Cardiff University, United Kingdom, dencikl@cardiff.ac.uk
Javier Sanchez-Monedero, University of Córdoba, Spain

PUBLISHED ON: 14 Jan 2022 DOI: 10.14763/2022.1.1615

Abstract

Data justice has emerged as a key framework for engaging with the intersection of datafication and society in a way that privileges an explicit concern with social justice. Engaging with justice concerns in the analysis of information and communication systems is not in itself new, but the concept of data justice has been used to denote a shift in understanding of what is at stake with datafication beyond digital rights. In this essay, we trace the lineage and outline some of the different traditions and approaches through which the concept is currently finding expression. We argue that in doing so, we are confronted with tensions that denote a politics of data justice both in terms of what is at stake with datafication and what might be suitable responses.
Citation & publishing information
Received: October 4, 2021 Reviewed: December 12, 2021 Published: January 14, 2022
Licence: Creative Commons Attribution 3.0 Germany
Funding: The research for this article was supported by the ERC Starting Grant DATAJUSTICE (grant no. 759903) under the Horizon 2020 research and innovation programme.
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Social justice, Datafication, Digital society, Algorithms, Data, Access to justice
Citation: Dencik, L. & Sanchez-Monedero, J. (2022). Data justice. Internet Policy Review, 11(1). https://doi.org/10.14763/2022.1.1615

This article belongs to Concepts of the digital society, a special section of Internet Policy Review guest-edited by Christian Katzenbach and Thomas Christian Bächle.

Introduction

The growing reliance on data-driven technologies across social life—what is commonly referred to as datafication—is widely seen to propel transformations across areas of science, government, business and civil society. These transformations are often simultaneously touted as enhancing forms of efficiency and better decision-making at the same time as presenting significant societal challenges. Data justice has emerged as a key framework for engaging with such challenges in a way that privileges an explicit concern for social justice. Privileging social justice concerns in the analysis of information and communication systems is not in itself new, but the concept of data justice has been used to pave the way for a shift in understanding of what is at stake with datafication beyond digital rights. In particular, Dencik et al. (2019: 875) argue that we have seen the concept of data justice ‘used to denote an analysis of data that pays particular attention to structural inequality, highlighting the unevenness of implications and experiences of data across different groups and communities in society.’ In this brief essay, we look at how this focus has manifested across different traditions and disciplines and point to a continued politics of data justice that illustrates the unsettled nature of this concept. We argue that how we understand the ‘grammar’ of justice (Fraser, 2008) will do much to inform what we mean by data justice, both in terms of what is at stake with datafication and what might be suitable responses.

Data justice in context

The concept of data justice draws from a range of long-standing traditions that have concerned themselves with the social justice implications of the nature of information and communication systems, ranging from debates on ethics and human rights to the orientation of activism and social movements. While these earlier discussions provide foundational insights, data justice has predominantly emerged in the dual context of the growing focus on so-called big data (and the more recent iterations of machine learning and artificial intelligence), and the perceived limitations in how such developments have been framed and approached. In particular, the revelations from the Snowden leaks, first published in 2013, pushed the societal significance of ‘big data’ into a more mainstream and public view (Lyon, 2015) but often in terms of a simple binary between enhanced efficiency and (state)security on the one hand and concerns with surveillance and privacy on the other (Hintz, Dencik, & Wahl-Jorgensen, 2018). This provided notable impetus for engaging with the implications of emerging technologies, which included the mainstreaming of privacy-enhancing technologies and encryption as well as the significant prominence of digital rights and anti-surveillance campaigning in the public realm, but it also privileged particular responses that struggled to account for the implications of datafication in relation to broader social justice agendas (Dencik, Hintz, & Cable, 2016).

As Andrejevic (2015) has outlined, the nature of the surveillance programmes revealed in the Snowden leaks is intimately linked to a model of economics and state-corporate interests in detecting and predicting patterns, profiling and categorising populations rather than individual people. Data-centric information systems are instrumental as systems of control, not just by increasing the potential for monitoring, but as sorting mechanisms (Gandy, 1993). Data justice debates tend to treat the question of how these sorting mechanisms work, and how they relate to historical contexts, social structures and dominant agendas, not simply as a matter of individual privacy, but as one of justice.

This focus is significant because although it is clear that how we make sense of the social world is central for how we also make claims about it, systems of communication and information infrastructures have tended to be neglected in prevalent theories of justice, often in favour of a focus on political institutions and moral ethics, from Aristotle through to Rawls (Bruhn Jensen, 2021). Whilst such a focus continues to be important for ideas of justice, the nature of institutions and the parameters for moral ethics are increasingly bound up with the nature of our information and communication systems. To speak of data justice is thus to recognise not only how data, its collection and use, increasingly impacts on society, but also that datafication is enabled by particular forms of political and economic organisation that advance a normative vision of how social issues should be understood and resolved. That is, data is both a matter in and of justice; datafication embodies not only processes and outcomes of (in)justice, but also its own justifications.

In this sense, data justice as a concept and focus speaks closely to the sorts of concerns that inform critical data studies and related fields in that it seeks to examine data issues in the context of existing power dynamics, ideology and social practices, rather than as technical developments in the interactions between information systems and users (boyd and Crawford, 2012; Van Dijck, 2014; Kitchin & Lauriault, 2014). The premise is that developments in data cannot be considered separately from social justice concerns and agendas, but need to be integrated as part of them (Dencik, Hintz, & Cable, 2016). However, what this means as an approach is varied, and we have seen a range of different perspectives engage with data justice, often across disciplines and traditions. Whilst these different approaches unite around a need to foreground justice in understandings of data, or to foreground data in understandings of justice, as we shall see they also elicit areas of tension in the meaning of data justice in important ways. As Fraser (2008) has argued, despite the many theories of justice that inform the architecture of institutions and laws to uphold justice, we rarely share a common ‘grammar’ of justice, such as the three ‘nodes’ of the what (ontology), the who (scope) and the how (procedure) of justice. This condition of ‘abnormal justice’, she argues, is apparent with disruptive developments such as globalisation that highlight conflicts over what we want to make claims to, when we make claims to justice, who those claims apply to, and the processes through which they may be realised.

Datafication is often touted as a form of disruption, but only rarely in the context of justice. Drawing on Fraser’s notion of abnormal justice can be fruitful for elucidating this relationship (Cinnamon, 2017; Dencik, Jansen, & Metcalfe, 2018). For example, as Couldry (2019) has argued, datafication significantly shapes what comes to count as social knowledge and the very terms upon which we come to reason about values as choice is automated and regulated by what legal scholar Karen Yeung (2017) describes as the ‘hypernudge’. At the same time, our understanding of data itself is not clearly defined and so when we want to make justice claims about it, it is unclear whether this is about its distribution as a good or resource, the inferences made from it and how people come to be recognised, or the nature of how it is generated and attributed meaning. Similarly, the nature of data flows has dislocated any clear relationship between the loci of decision-making and the subject of such decision-making as well as any bounded polity of who can make claims to data justice. As Andrejevic (2014) has argued, datafication brings about particular social stratifications between different data classes whilst the notion of any individual data subject struggles to account for how data about an individual is bound up with population-level effects (Viljoen, 2020). Finally, the criteria or procedure through which disputes about the ‘what’ and ‘who’ of data justice should be resolved continues to be a source of tension. At one level, Pasquale (2017) has argued that we are moving from territorial sovereignty to ‘functional sovereignty’ in which technology companies increasingly take on governance functions and disrupt procedures for how decision-making might be challenged or held to account. At the same time, it is unclear what institutions should be the arbiters of justice claims about data, whether traditional avenues such as governments or courts are still adequate, and what role there is for computational or design mechanisms to uphold justice claims.

Approaches to data justice

There are therefore notable tensions around the what, who and how of data justice that speak to a particular politics around how to engage with the broader implications of datafication for society. This is perhaps unsurprising considering the inherently trans-disciplinary nature of datafication, and the many stakeholders that shape its development. However, it also points to the way different interests and perspectives manifest in not only the analysis of societal implications but also responses to them.

In policy and data governance debates, for example, several on-going concerns about digital rights became elevated in the aftermath of the Snowden leaks and, with a renewed focus on big data, were translated into regulation. Most notable in Europe was the development of the General Data Protection Regulation (GDPR), adopted in 2018 on the premise that individuals should be able to claim some rights with regards to information collected about their person, and that collecting such information requires some form of consent. Although broad in its conception of data protection, questions remain about both its scope and enforceability. Perhaps in part as a response, much attention and many resources have been dedicated to advancing ‘data ethics’ (and its most recent iteration as ‘AI ethics’) as alternative and complementary frameworks. This field has engaged a range of different streams of thought and practice, some of which continue a long-standing tradition of computer ethics while changing the level of abstraction of ethical enquiry from an information-centric to a data-centric one (Floridi & Taddeo, 2016). That is, the focus shifts from how to treat information as the input and output of computing to how people access, analyse and manage data: the concern is not with any specific technology, but with what digital technology manipulates.

Data ethics foregrounds key challenges with datafication, including transparency, bias and accountability, but has also been criticised for containing such challenges within individualistic moral assessments or as procedural safeguards that do little to challenge existing power structures (D’Ignazio & Klein, 2019; Taylor & Dencik, 2020). There is, however, a long-standing connection between ethics and justice. For example, in her engagement with data justice, Taylor (2017) puts forward a framework for determining ethical paths through a datafying world that can underpin data governance. This framework considers three central pillars—(in)visibility; (dis)engagement with technology; and antidiscrimination—that can form the basis of international data justice. These pillars collectively inform fairness in the way people are made visible, represented and treated as a result of their production of digital data. Importantly, they take into account the novelty and complexity of the ways in which data systems can discriminate, discipline and control. This builds on work on information justice put forward by Johnson (2016) in which he outlines how data systems have a disciplinary function because the way data is collected and structured constitutes a form of normative coercion. The task, therefore, is to make this politics of data technologies explicit and to consider both the right to be seen and represented as well as the right to withdraw from a database. In this sense, Taylor’s framework for data justice accounts for both the positive and negative potential of new data technologies to facilitate human flourishing (Taylor, 2017).

More recently, we have seen some of the pillars outlined in Taylor’s framework for data justice migrate into discussions on data governance that seek to broaden the scope of what such governance entails. A prominent focus has been on data stewardship, for example through the establishment of ‘data trusts’ that would provide a legal mechanism to ‘empower’ data subjects to ‘take the reins’ of personal data by introducing an independent intermediary between data subjects and data collectors (Delacroix & Lawrence, 2019). A related but different take on control over data has been expressed in terms of ‘data commons’ that enable people to share their data for specific purposes or social benefit (Grossman et al., 2016; Morozov, 2018; Nesta, 2021). The premise is that data is a public good and that people should have some say in what data is collected, how it is used and who benefits. Viljoen (2020) has articulated some of these ideas within a framework she describes as ‘democratic data governance’ that shifts the lens away from a focus on the handling and processing of data towards the institutional reforms needed to facilitate democratic participation in determining the population-level effects of datafication.

These governance debates have also been significant for changing the perception of computer scientists and engineers and their role within society (Connolly, 2020). However, it is not always clear how, for example, the proliferation of guidelines for ethical and responsible AI and automation has actually translated into practice, and how data justice concerns might be addressed. In their review, Jobin et al. (2019) identify justice as a principle in the advancement of data-driven technology that is predominantly expressed in terms of fairness and the monitoring and mitigation of so-called algorithmic ‘bias’, which is often equated with discrimination (Balayn & Gürses, 2021). Discrimination by algorithms is predominantly understood as the result of existing discrimination patterns present in the training data (using demographic categories such as gender, age, ethnicity, or disability), but more comprehensive engagements with this issue also consider biases introduced via assumptions in labels or biases brought about in particular contexts of use (Hallensleben et al., 2020). Less common is the reference to justice in terms of diversity and the possibility of understanding and challenging algorithmic decisions, although some frameworks do address such principles with reference to human rights (Fjeld et al., 2020).

The translation of social justice into fairness understood in computational terms has paved the way for different principles to guide the development of data-driven technologies. In some respects, it extends the longer-standing tradition of ‘privacy-by-design’ in computer science towards a commitment to ‘fairness-by-design’. However, as Gürses et al. (2015) have pointed out, the abstract nature of privacy can lead to very different systems depending on which particular privacy design patterns and privacy-enhancing technologies are chosen. With a notion such as fairness, there are even fewer shared criteria for what it might mean for computational systems, and for what the guiding principles of fairness actually are (Friedler et al., 2021). Moreover, as the community of computer scientists and engineers dedicated to establishing such fairness criteria has grown, especially through a focus on ‘de-biasing’ and algorithmic discrimination, prominent questions have been asked about the limits of this interpretation of data justice and the legitimacy of technologists to define and be the arbiters of justice claims (Gangadharan & Niklas, 2019).
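The absence of shared criteria can be made concrete. Two widely used computational definitions of group fairness, demographic parity (equal rates of positive predictions across groups) and equal opportunity (equal true positive rates across groups), can give opposite verdicts on the same classifier. The sketch below is a minimal illustration on invented outcomes; the groups and numbers are hypothetical and not drawn from any system discussed here:

```python
# Illustrative only: (true_label, predicted_label) pairs for two
# hypothetical groups assessed by the same classifier.
group_a = [(1, 1), (1, 1), (0, 0), (0, 0)]
group_b = [(1, 1), (1, 0), (0, 1), (0, 0)]

def positive_rate(pairs):
    """Share of individuals receiving a positive prediction."""
    return sum(pred for _, pred in pairs) / len(pairs)

def true_positive_rate(pairs):
    """Share of truly positive individuals predicted positive."""
    positives = [pred for label, pred in pairs if label == 1]
    return sum(positives) / len(positives)

# Demographic parity compares overall positive prediction rates.
dp_gap = positive_rate(group_a) - positive_rate(group_b)

# Equal opportunity compares true positive rates.
eo_gap = true_positive_rate(group_a) - true_positive_rate(group_b)

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00: 'fair'
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.50: 'unfair'
```

Here positive predictions are distributed evenly across both groups, so the classifier satisfies demographic parity, yet it misses half of the truly qualified members of group B. A system can thus be certified ‘fair’ under one criterion and discriminatory under another, which is one reason the choice of criterion is itself a political rather than a purely technical question.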

Justice as a value is conditional on a range of principles that go beyond bias and that cannot be limited to the technical components of a system. As outlined in the framework Algorithmic Ecology (Stop LAPD Spying Coalition and Free Radicals, 2020, n.p.), an ‘algorithm is designed to operationalize the ideologies of the institutions of power to produce intended community impact’. As such, a value of justice applies not only to the many abstraction layers in which a system operates but also to how justice is experienced. In this sense, the universal scope often assumed in computational definitions of fairness, partly in order to accommodate population-level optimisation, falls short of accounting for the way systems are often used to target specific groups. Furthermore, principles need to be incorporated not just into the system, but into the design process itself and into the role and relation of technologists towards other stakeholders (Costanza-Chock, 2020). Such understandings invite more holistic views of computer science and software engineering methodologies as decidedly socio-technical (Connolly, 2020; Selbst et al., 2019).

Calls have therefore been made to focus justice concerns in computer science less on the input and output data and more explicitly on the connection between the optimisation process and the real-world task (Hooker, 2021; Lipton, 2018). Whilst the optimisation task of a system can be more or less explicit, the issue of misalignment between optimisation tasks and performance metrics, on the one hand, and real-world problems, on the other, is gaining traction within the field. It points to the limitations of fairness claims made without an understanding of the effects of data collection, designer world views, and embedded values (Friedler et al., 2021). As McQuillan (2019) has argued, the optimisation process tends to implement societal structures and logics and to secure the ‘institution in the loop’ in any system. At a technical level, such structures and logics can be challenged by moving from process optimisation to community well-being (Musikanski et al., 2020) or by counter-optimising a system to protect impacted communities that might be harmed by institutional optimisation logics (Kulynych et al., 2020).
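A familiar instance of such misalignment is optimising for a headline metric, such as accuracy, that is only loosely coupled to the real-world task. On an imbalanced problem, a degenerate model that never flags the rare class can score highly while doing nothing useful; the numbers below are synthetic and purely illustrative:

```python
# Illustrative only: 100 cases, 5 of which belong to the rare class
# the system is nominally meant to detect.
labels = [1] * 5 + [0] * 95

# A degenerate model that maximises accuracy by always predicting 0.
predictions = [0] * len(labels)

# The optimisation metric looks excellent...
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# ...while the real-world task (finding the rare cases) is not solved.
recall = sum(p == y == 1 for p, y in zip(predictions, labels)) / sum(labels)

print(f"accuracy: {accuracy:.2f}")  # 0.95
print(f"recall:   {recall:.2f}")    # 0.00
```

The high accuracy reflects the class distribution rather than any capacity to perform the task, which is why critiques of misalignment insist on interrogating what the optimisation target actually measures and for whom.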

These resistance strategies play an important role in how we might think of data justice in terms of political and social mobilisation. They point to the importance of situating technological developments in their social, economic, political and cultural context and of considering data issues in relation to historical struggles for justice around issues such as equality, oppression and domination (Dencik, Jansen & Metcalfe, 2018). That is, data justice as a way to inform mobilisation needs to be directed at system-level critique, in which the parameters of the debate do not begin and end with the technology itself but with how datafication features in on-going negotiations of social relations and power dynamics within society. On this reading, the asymmetries between different data classes point to the entrenchment of social stratifications and the growing concentration of power in private hands, whilst shifting decision-making away from the public realm. Issues of ‘bias’ or discrimination in data-driven tools are not bugs in the system, but rather a structural feature informed by the historical social sorting of populations based on stigmatisation, marginalisation and exclusion. And the operationalism of data systems speaks to a prevalent rationality that has long dominated many parts of the world in terms of privileging individualism, market logics and bureaucratic control (Gandy, 1993; Fourcade & Healy, 2017; Benjamin, 2019; Andrejevic, 2019).

Approached from the perspective of political and social mobilisation, data justice draws from critical traditions in media studies that have been oriented toward ‘media justice’, which have explicitly sought to situate media as a social justice issue. The aim is not necessarily to focus on media reform per se, but to bring together media scholars and activists and social justice scholars and activists as a way to identify synergies between the two fields and advance a better understanding of the role of media and communication in struggles for social justice (Jansen, 2011). In particular, the media justice frame has sought to privilege the insights and experiences of historically marginalised communities and the long tradition of social justice activism around the world to inform media reform debates. As such, a key contribution of the media justice approach is to draw attention to whose voices are heard and what concerns are foregrounded in efforts towards media and social change. It highlights how the nature of media systems is intricately linked to social justice struggles, calling for different media representations and alternative ownership and governance structures in addressing injustices. Moreover, it calls for different movements and groups, across communication rights and socio-economic rights, to unite and find common ground.

Similarly, mobilisation under a data justice frame starts with a recognition that the burdens of datafication overwhelmingly fall on resource-poor and marginalised groups in society (Eubanks, 2018; Benjamin, 2019; Metcalfe & Dencik, 2020). This is important, as it cuts through the all-too-comfortable narrative that emerged out of the emphasis on mass data collection, particularly prominent in the aftermath of the Snowden leaks, that suggests we are all equally implicated in the datafied society. Instead, data justice debates have to contend with the way the development, advancement and impact of datafication is contingent upon deep historical social and economic inequalities, both domestically and globally. As a starting point, this shifts the focus of what voices need to be centred in any understanding of what is at stake with datafication and challenges the current constitution of the decision-making table as to how datafication can and should be negotiated. As an approach, it explicitly undermines the assertion that the technology industry should be able to dictate the scope of problems and solutions, let alone that a decision on what constitutes ‘fairness’ should be confined to what can be computationally determined. Perhaps more contentiously, it also asserts the need to move mobilisation on data beyond the domain of communication and digital rights groups.

Instead, Gangadharan and Niklas (2019) argue that there is a need to ‘decentre’ technology in data justice debates, and situate technology within systemic forms of oppression in which the harms that emerge from data-driven systems are articulated by those who are predominantly impacted and those who have a history of struggle against such oppression. That is, the concern with data needs to be part of an integrated social justice agenda, one in which definitions of problems and solutions may not actually be about data. As Hoffmann (2019) has argued, we cannot afford to continuously fail to address the logics that produce advantaged and disadvantaged subjects and the underlying structural conditions against which we come to understand data harms and injustice. In taking such an approach, we are invited to turn our attention to focus on what function datafication—as a discourse and practice—serves in different contexts, the social and political organisation that enables it, and who benefits.

Relevance of data justice

Importantly, therefore, the intersection between data and justice encompasses more than just technological questions and instead forces us to ask how society should be organised and what role technology might play in it. We see this also in the way that data justice debates are being shaped by activism and campaigning. The Center for Media Justice in the United States, for example, has created a Data Justice Lab dedicated to thinking through ways to bridge research, data, and movement work relating to issues like surveillance, carceral tools, internet rights, and censorship. The Detroit Digital Justice Coalition has worked with local residents to identify harms that emerge through the collection of data by public institutions, situating these in the context of on-going criminalisation and surveillance of low-income communities, people of colour and other targeted groups. In some instances, these activities have foregrounded a politics of refusal (Gangadharan, 2019) that advances an abolitionist agenda, as articulated by groups such as the Stop LAPD Spying Coalition and the Data for Black Lives initiative. Here, the focus is not to make technologies more efficient, but rather to recognise how technology has meaning and impact in relation to the inequalities manifest in capitalist exploitation and a history of state violence. The call is to divest resources from oppressive data systems and to ‘abolish big data’ that is used to measure and profile people, and instead to reinvest in communities (Benjamin, 2019; Crooks, 2019).

In Europe, meanwhile, we have seen a growing mobilisation around social and economic rights in the context of datafication that has been particularly evident in the use of strategic litigation amongst non-governmental organisations against algorithmic systems and platforms. In the area of welfare, for example, coalitions between welfare and digital rights groups have successfully challenged the use of some algorithmic systems, such as SyRI in the Netherlands and an algorithm used by the Department for Work and Pensions to target disabled people in the UK (Toh, 2020; Savage, 2021). Similarly, in the context of the labour movement, there is growing engagement with the intersection of data and workers’ rights that stretches beyond the issue of potential job losses in the face of automation and also considers the quality of work and the position of labour in relation to capital in datafied societies (De Stefano, 2018; Moore et al., 2017). This includes, for example, establishing workers’ data rights, as suggested by UNI Global Union, or the ‘right to disconnect’, which is the subject of significant union campaigns across Europe. Indeed, calls for ‘data justice unionism’ that would seek to explicitly connect digital rights with socio-economic rights and to build coalitions across social movements might provide an avenue through which the labour movement can play a role in connecting transformations relating to datafication in work to broader questions of society (Dencik, 2021).

In the context of environmentalism, the Environmental Data & Governance Initiative (EDGI) has preserved vulnerable scientific data in the aftermath of the US election of Trump in 2016, and in the process developed an ‘environmental data justice’ framework that considers the politics, generation, ownership and uses of environmental data (Vera et al., 2019). Similar concerns inform an increasing emphasis on ‘sovereignty’ in relation to data, particularly amongst indigenous communities, evident in the agenda set out by the growing Indigenous Data Sovereignty movement made up of a network of alliances and groups around the world that asserts that indigenous peoples need to be decision-makers around how data about them is collected and used. This orientation builds on long-standing struggles over the on-going extraction and exploitation of indigenous peoples and their knowledge systems, customs and territories (Kukutai & Taylor, 2016).

These different actions and struggles unite around a need to tackle the actual conditions that lead to experiences of injustice as they exist on the ground rather than necessarily pouring efforts into appealing to ideal formations of data and technology in contemporary society. Moreover, mobilisation in this sense is nurtured through solidarity, the aim of which is not simply the creation of just institutions that enact justice ‘from above’ but the manifestation of justice within and through social relations as they currently exist (Cohen, 2008). Holding on to the possibility of solidarity in determining how society should be organised and the role of technology within it has never been more relevant (Fenton et al., 2020). As Gandy (2020) has argued, such political mobilisation is precisely what is needed but also what is directly under threat with the advancement of datafication. As behaviours and activities are abstracted and reduced for the purposes of optimisation, people’s shared experiences, and with that their political capability, are undermined as algorithmically-defined groups come to dictate the basis of social positioning. A call for data justice is therefore also a call for the continued relevance of social relations through which people can identify with each other and through which mobilisation for struggles can be formed.

Conclusion

The concept of data justice borrows from many long-standing traditions, but it is also relatively nascent in its advancement and use. Although it has emerged out of pressing issues that arise from contemporary developments in digital technologies, it has found expression in many diverse areas and fields. These expressions are not always aligned and speak to different interpretations of the ontology of data justice, who it applies to, and how it should be upheld. That is, they are expressions of the struggle over not only ideal formations of justice but the very grammar of justice that datafication disrupts. This is important as it alerts us to a politics of data justice that is currently played out across disciplines and practices. In this sense, we might say that the meaning of data justice is still up for grabs, and as with justice in general will continue to be interpreted and shaped by different interests and perspectives. However, in its current formation it holds significance for shifting our understanding of what is at stake with datafication and what might be possible responses. In particular, it alerts us to the need to consider issues of data not as siloed and abstracted technical issues, but as an embedded part of how we might think of social justice. As datafication continues to advance in different iterations, and under different modes of crisis, this need has never been more relevant.

References

Andrejevic, M. (2014). The Big Data Divide. International Journal of Communication, 8, 1673–1689.

Andrejevic, M. (2015). Digital citizenship and the surveillance society [Keynote plenary]. Surveillance and Citizenship conference.

Andrejevic, M. (2019). Automated media. Routledge, Taylor & Francis Group.

Balayn, A., & Gürses, S. (2021). Beyond Debiasing: Regulating AI and its inequalities [Report]. EDRI. https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.

boyd, danah, & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Cinnamon, J. (2017). Social Injustice in Surveillance Capitalism. Surveillance & Society, 15(5), 609–625. https://doi.org/10.24908/ss.v15i5.6433

Cohen, G. A. (2008). Rescuing justice and equality. Harvard University Press.

Connolly, R. (2020). Why computing belongs within the social sciences. Communications of the ACM, 63(8), 54–59. https://doi.org/10.1145/3383444

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.

Couldry, N. (2018, May 24). Keynote at Applying the Capabilities Approach to Media and Communications. ICA 2018 Preconference, Prague. https://cdn.ymaws.com/www.icahdq.org/resource/resmgr/conference/2018/preconferences/pc_capabilities.pdf

Crooks, R. (2019, March 22). What we mean when we say #AbolishBigData2019 [Medium Post]. Roderic Crooks. https://medium.com/@rncrooks/what-we-mean-when-we-say-abolishbigdata2019-d030799ab22e

De Stefano, V. (2018). “Negotiating the algorithm”: Automation, artificial intelligence and labour protection (Employment Working Paper No. 246). International Labour Organization.

Delacroix, S., & Lawrence, N. D. (2019). Bottom-up data Trusts: Disturbing the ‘one size fits all’ approach to data governance. International Data Privacy Law, ipz014. https://doi.org/10.1093/idpl/ipz014

Dencik, L. (2018). Surveillance Realism and the Politics of Imagination: Is There No Alternative? Krisis: Journal for Contemporary Philosophy, 1, 31–43.

Dencik, L. (2021). Towards Data Justice Unionism? A Labour Perspective on AI Governance. In P. Verdegem (Ed.), AI for Everyone? Critical Perspectives (pp. 267–284). University of Westminster Press. https://doi.org/10.16997/book55.o

Dencik, L., Hintz, A., & Cable, J. (2016). Towards data justice? The ambiguity of anti-surveillance resistance in political activism. Big Data & Society, 3(2), 2053951716679678. https://doi.org/10.1177/2053951716679678

Dencik, L., Hintz, A., Redden, J., & Treré, E. (2019). Exploring Data Justice: Conceptions, Applications and Directions. Information, Communication & Society, 22(7), 873–881. https://doi.org/10.1080/1369118X.2019.1606268

Dencik, L., Jansen, F., & Metcalfe, P. (2018). A conceptual framework for approaching social justice in an age of datafication. https://datajusticeproject.net/2018/08/30/a-conceptual-framework-for-approaching-social-justice-in-an-age-of-datafication/

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor (First Edition). St. Martin’s Press.

Fenton, N., Freedman, D., Schlosberg, J., & Dencik, L. (2020). The media manifesto. Polity.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3518482

Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A, 374(2083). https://doi.org/10.1098/rsta.2016.0360

Fourcade, M., & Healy, K. (2016). Seeing like a market. Socio-Economic Review, mww033. https://doi.org/10.1093/ser/mww033

Fraser, N. (2008). Abnormal Justice. Critical Inquiry, 34(3), 393–422.

Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2021). The (Im)possibility of fairness: Different value systems require different mechanisms for fair decision making. Communications of the ACM, 64(4), 136–143. https://doi.org/10.1145/3433949

Gandy, O. H. (2020). The Panoptic Sort: A Political Economy of Personal Information (2nd ed.). Oxford University Press. https://doi.org/10.1093/oso/9780197579411.001.0001

Gangadharan, S. P. (2019). What Do Just Data Governance Strategies Need in the 21st Century? [Keynote]. Data Power, Bremen, Germany. https://www.youtube.com/watch?v=Qjhdfr6Di3k

Gangadharan, S. P., & Niklas, J. (2019). Decentering technology in discourse on discrimination. Information, Communication & Society, 22(7), 882–899. https://doi.org/10.1080/1369118X.2019.1593484

Grossman, R. L., Heath, A., Murphy, M., Patterson, M., & Wells, W. (2016). A Case for Data Commons: Towards Data Science as a Service. ArXiv:1604.02608 [Cs]. http://arxiv.org/abs/1604.02608

Gürses, S., Troncoso, C., & Diaz, C. (2015). Engineering Privacy by Design Reloaded. Amsterdam Privacy Conference, 21. http://carmelatroncoso.com/papers/Gurses-APC15.pdf

Hallensleben, S., Hustedt, C., Fetic, L., Fleischer, T., Grünke, P., Hagendorff, T., Hauer, M., Hauschke, A., Heesen, J., Herrmann, M., Hillerbrand, R., Hubig, C., Kaminski, A., Krafft, T., Loh, W., Otto, P., & Puntschuh, M. (2020). From Principles to Practice: An interdisciplinary framework to operationalise AI ethics [Report]. AIEI Group. https://www.bertelsmann-stiftung.de/fileadmin/files/BSt/Publikationen/GrauePublikationen/WKIO_2020_final.pdf

Hintz, A., Dencik, L., & Wahl-Jorgensen, K. (2019). Digital citizenship in a datafied society. Polity Press.

Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912

Hooker, S. (2021). Moving beyond “algorithmic bias is a data problem.” Patterns, 2(4), 100241. https://doi.org/10.1016/j.patter.2021.100241

Jansen, S. C., Pooley, J., & Taub-Pervizpour, L. (Eds.). (2011). Media and social justice (1st ed.). Palgrave Macmillan.

Jensen, K. B. (2021). A theory of communication and justice. Routledge.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Johnson, J. (2016). The question of information justice. Communications of the ACM, 59(3), 27–29. https://doi.org/10.1145/2879878

Kukutai, T., & Taylor, J. (Eds.). (2016). Indigenous Data Sovereignty: Toward an Agenda (Vol. 38). ANU Press. https://www.jstor.org/stable/j.ctt1q1crgf

Kulynych, B., Overdorf, R., Troncoso, C., & Gürses, S. (2020). POTs: Protective optimization technologies. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 177–188. https://doi.org/10.1145/3351095.3372853

Lange, S., & Santarius, T. (2020). Smart green world? Making digitalization work for sustainability. Routledge, Taylor and Francis Group.

Lipton, Z. C. (2018). The Mythos of Model Interpretability. ACM Queue, 16(3). https://queue.acm.org/detail.cfm?id=3241340

Lyon, D. (2015). Surveillance after Snowden. Polity Press.

McQuillan, D. (2019, June 7). AI Realism and structural alternatives. Danmcquillan.Io. http://danmcquillan.io/ai_realism.html

Metcalfe, P., & Dencik, L. (2019). The politics of big borders: Data (in)justice and the governance of refugees. First Monday, 24(4). https://firstmonday.org/ojs/index.php/fm/article/view/9934/7749

Moore, P. V., Upchurch, M., & Whittaker, X. (Eds.). (2018). Humans and Machines at Work: Monitoring, Surveillance and Automation in Contemporary Capitalism. Springer International Publishing. https://doi.org/10.1007/978-3-319-58232-0

Morozov, E. (2018, March 31). After the Facebook scandal it’s time to base the digital economy on public v private ownership of data. The Guardian. https://www.theguardian.com/technology/2018/mar/31/big-data-lie-exposed-simply-blaming-facebook-wont-fix-reclaim-private-information

Musikanski, L., Rakova, B., Bradbury, J., Phillips, R., & Manson, M. (2020). Artificial Intelligence and Community Well-being: A Proposal for an Emerging Area of Research. International Journal of Community Well-Being, 3(1), 39–55. https://doi.org/10.1007/s42413-019-00054-6

Nesta. (2021). Unlocking the value of data as a commons. Nesta. https://www.nesta.org.uk/feature/four-future-scenarios-personal-data-economy-2035/unlocking-the-value-in-data-as-a-commons/

Pasquale, F. (2017, December 6). From Territorial to Functional Sovereignty: The Case of Amazon. Law and Political Economy. https://lpeproject.org/blog/from-territorial-to-functional-sovereignty-the-case-of-amazon/

Savage, M. (2021, November 21). DWP urged to reveal algorithm that ‘targets’ disabled for benefit fraud. The Guardian. https://www.theguardian.com/society/2021/nov/21/dwp-urged-to-reveal-algorithm-that-targets-disabled-for-benefit

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598

Stop LAPD Spying Coalition & Free Radicals. (2020, March 2). Algorithmic Ecology: An Abolitionist Tool for Organizing Against Algorithms. Free Radicals. https://freerads.org/2020/03/02/the-algorithmic-ecology-an-abolitionist-tool-for-organizing-against-algorithms/

Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2), 2053951717736335. https://doi.org/10.1177/2053951717736335

Taylor, L., & Dencik, L. (2020). Constructing commercial data ethics. Technology and Regulation, 1–10. https://doi.org/10.26116/techreg.2020.001

Toh, A. (2020, February 6). Dutch Ruling a Victory for Rights of the Poor. Human Rights Watch. https://www.hrw.org/news/2020/02/06/dutch-ruling-victory-rights-poor

Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208. https://doi.org/10.24908/ss.v12i2.4776

Vera, L. A., Walker, D., Murphy, M., Mansfield, B., Siad, L. M., Ogden, J., & EDGI. (2019). When data justice and environmental justice meet: Formulating a response to extractive logic through environmental data justice. Information, Communication & Society, 22(7), 1012–1028. https://doi.org/10.1080/1369118X.2019.1596293

Viljoen, S. (2020). Democratic Data: A Relational Theory For Data Governance. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3727562

Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713
