The ‘golden view’: data-driven governance in the scoring society

Lina Dencik, Cardiff School of Journalism, Media and Cultural Studies, Cardiff University, United Kingdom, dencikl@cardiff.ac.uk
Joanna Redden, School of Journalism, Media and Culture, Cardiff University, United Kingdom, ReddenJ@cardiff.ac.uk
Arne Hintz, Cardiff School of Journalism, Media and Cultural Studies, Cardiff University, United Kingdom, hintza@cardiff.ac.uk
Harry Warne, School of Journalism, Media and Culture, Cardiff University, United Kingdom, warneh2@cardiff.ac.uk

PUBLISHED ON: 30 Jun 2019 DOI: 10.14763/2019.2.1413

Abstract

Drawing on the first comprehensive investigation into the uses of data analytics in UK public services, this article outlines developments and practices surrounding the upsurge in data-driven forms of what we term ‘citizen scoring’. This refers to the use of data analytics in government for the purposes of categorisation, assessment and prediction at both individual and population level. Combining Freedom of Information requests and semi-structured interviews with public sector workers and civil society organisations, we detail the practices surrounding these developments and the nature of concerns expressed by different stakeholder groups as a way to elicit the heterogeneity, tensions and negotiations that shape the contemporary landscape of data-driven governance. Described by practitioners as a way to achieve a ‘golden view’ of populations, we argue that data systems need to be situated in this context in order to understand the wider politics of such a ‘view’ and the implications this has for state-citizen relations in the scoring society.
Citation & publishing information
Received: March 25, 2019 Reviewed: May 29, 2019 Published: June 30, 2019
Licence: Creative Commons Attribution 3.0 Germany
Funding: The research for this article has been supported by a grant from the Open Society Foundations.
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Datafication, Data governance, Data scores, Public sector, Citizenship
Citation: Dencik, L., Redden, J., Hintz, A., & Warne, H. (2019). The ‘golden view’: data-driven governance in the scoring society. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1413

This paper is part of Transnational materialities, a special issue of Internet Policy Review guest-edited by José van Dijck and Bernhard Rieder.

Introduction

Questions about how data is generated, collected and used have taken hold of the public imagination in recent years, not least in relation to government. While the collection of data about populations has always been central to practices of governance, the digital era has placed increased emphasis on the politics of data in state-citizen relations and contemporary power dynamics. In part a continuation of long-standing processes of bureaucratisation, the turn to data-centric practices in government across Western democracies emerges out of a significant moment in the securitisation of politics, the shrinking of the public sector, and the rise of corporate power. In the case of the United Kingdom, this is particularly brought to bear through an ongoing austerity agenda since the financial crisis of 2008. Data analytics, in this context, is increasingly viewed and sold as a means to target and deliver public services more efficiently and to better understand social problems (Beer, 2018).

As government has entered this space, adopting the processes, logics and technologies of the private sector, major questions arise about the nature of contemporary governance and the socio-technical shaping of citizenship. Of particular concern is how new and often obscure systems of categorisation, risk assessment, social sorting and prediction may influence funding and resource decisions and access to services, intensify surveillance, and determine citizen status or worth. The proliferation of data sharing arrangements among government agencies is raising concerns about who is accessing citizen data, the potential for highly personal profiling, function creep and misuse. At the same time, the black-boxed nature of big data processes, the dominant myths about data systems as objective and neutral, and the inability of most people to understand these processes make interrogating government data analytics systems difficult for researchers and near impossible for citizens without adequate resources (Pasquale, 2015; O’Neil, 2016; Kitchin, 2017).

Moreover, the empirical basis for a more thorough understanding of these dynamics remains thin, as the implementation of data analytics in public services is only just emerging. In this article we therefore contribute an overview of developments in data analytics in public services in the particular case of the UK. Drawing on research carried out for the one-year project ‘Data Scores as Governance’, the article provides the first integrated analysis of the use of such systems in the UK and of the often polarised views and approaches among stakeholders. In mapping this emerging field, we explore the way these data systems are situated and used in practice, engaging with the myriad negotiations and challenges that emerge in this context.

The article identifies an upsurge in data-driven forms of what we term ‘citizen scoring’: the use of data analytics in government for the purposes of categorisation, assessment and prediction at both individual and population level. It demonstrates citizen scoring as a situated practice that emerges from an amalgamation of actors, imaginaries and political and economic forces that together shape and contest what was described in our research as a desired ‘golden view’ of citizens. The article thus highlights the heterogeneity of data practices, and points to the need for a nuanced understanding of the contingency of data systems on significant contextual factors, moving us beyond an engagement with the technologies themselves towards a wider politics of their development, deployment, implementation and use as part of understanding the nature of citizenship in an emerging ‘scoring society’.

From data to data scores

The growing collection of data across social life, what has been described as the ‘datafication’ of society (Mayer-Schönberger & Cukier, 2013), is now a prominent feature of politics, economics and culture. At once celebrated for driving a ‘new industrial revolution’ (Hellerstein, 2008), the technical ability to turn increasing amounts of social activity and human behaviour into data points that can be collected and analysed has simultaneously advanced a power dynamic in need of investigation and critique. The trend to put phenomena in a quantified format that can be tabulated and analysed requires both the right set of tools and a desire to quantify and record. Premised on the notion that it is possible to infer probabilities by feeding systems substantial quantities of data on which to base predictions, data science has taken hold across both private and public sectors, as well as civil society, constituting effectively, according to Van Dijck (2014), a new paradigm based on a particular set of (highly contested) assumptions. Not only is there an assumption that (objective) data flows through neutral technological channels, but also that there is “a self-evident relationship between data and people, subsequently interpreting aggregated data to predict individual behaviour” (Van Dijck, 2014, p. 199). It is, moreover, argues McQuillan (2017), a paradigm rooted in a belief akin to Neo-Platonism, in which a hidden mathematical order is perceived to be ontologically superior to the one available to our everyday senses.

In this context, citizen scoring emerges as emblematic of the logics and functions that accompany this wider datafication of society, particularly as it relates to the governance of citizens. We use it as a term to connote the typical practices of data analytics in public services to do with the categorisation and segmentation, and sometimes rating and ranking, of populations according to a variety of interoperable data sets, with the goal of allocating resources and services accordingly. In some instances this involves types of risk assessments and the identification of particular characteristics in individuals as a way to predict their behaviour. Data-driven scores and classifications that combine data from different sources towards calculating risks or outcomes are emerging as a prime means for such categorisations. We are predominantly familiar with these practices in the financial sector, most notably in the form of the credit score, which increasingly relies on an array of digital transactions to inform predictions about the financial responsibility of individuals (Citron & Pasquale, 2014). A wider range of consumer scores are now being applied across different economic sectors (Dixon & Gellman, 2014). Sources of data for such scores may include, for example, an analysis of people’s mobile phone use, or the creditworthiness of their social media friends. People’s social activities are thus increasingly incorporated into particular commercial assessments, which points to a growing integration of social and transactional data sets (McCann et al., 2018). This practice builds on established experiences in the marketing industry and, more recently, the platform economy, where consumption patterns are predicted based on a variety of social, cultural, health and other data.

Whilst perhaps more normalised in financial and commercial industries, the use of data-driven scores has also reached governmental and public services. Much recent attention has focused on the ‘social credit score’ being developed in China, for example, which aims to integrate the rating of citizens’ financial creditworthiness with a wide range of social and consumer behaviour to assess people’s overall trustworthiness and allow, or deny, services accordingly (Ly & Luo, 2018). The Chinese social credit score is distinct in many ways, but it demonstrates possible implications of the algorithmic mediation of daily life and therefore offers interesting pointers for investigating the use of data analytics in the public sector of other countries (Fullerton, 2018; Jefferson, 2018). In particular, it provides a stark illustration of how practices of consumer scoring have migrated into citizenship debates, pointing to the actuarial logics underpinning citizen scoring more broadly (Poon, 2016; McQuillan, 2019).

In her study of the uses of data and automated processing in the United States, Eubanks (2018) points to the rise of a ‘regime of data analytics’ in public services, detailing, for example, uses of automated welfare eligibility systems and predictive risk models in child protection akin to the kinds of assessments and categorisations we associate with citizen scoring. Automated ‘decision support systems’, such as risk scores, have also been considered and implemented elsewhere, with mixed results. Australia’s automated debt recovery system, now popularly referred to as ‘robo-debt’, was introduced to identify those with overpaid benefits and seek repayment. The system has caused scandal because of its errors and its impact on marginalised communities, and has been widely criticised as unethical as well as illegal (Carney, 2018). In New Zealand, the government shelved its plans to introduce predictive risk assessments in child welfare services following public critique (Gillingham, 2019). In Europe, the Netherlands has introduced an automated system to try to detect benefit fraud, France has automated traffic offence processing, and Italy is using automation to allocate health treatments (AlgorithmWatch, 2019). The widely referenced ProPublica investigation into the use of algorithmic processes in US criminal justice systems highlighted the prevalence of risk assessment tools that produce ‘risk scores’ estimating defendants’ likelihood of re-offending to inform sentencing (Angwin et al., 2016). Similar investigations in the UK have pointed to Durham Constabulary's Harm Assessment Risk Tool (HART) and its categorisation of risk for defendants to inform custody decisions (Big Brother Watch, 2018a). In border control, data-driven profiling based on a cross-set of aggregated data is increasingly used for ‘vetting’ the ‘threat’ of migrants and refugees, producing what has been referred to as a ‘terrorist credit score’ (Crawford, 2016).

Although increasing attention is being paid to developments relating to this kind of citizen scoring, little is known about the uses of new data systems, particularly at the local government level where public services are predominantly provided. Prominent calls have been made to increase transparency about the use of algorithmic decision-making in government across different national contexts. The government of New Zealand recently responded to this by providing an overview of operational algorithms as part of an ‘algorithm assessment’ report (Stats NZ, 2018). Following public pressure, New York City set up an Automated Decision Systems Task Force to increase transparency and review how the City uses algorithms (Kirchner, 2017). In the UK, an inquiry into algorithms in decision-making in 2018 led to a recommendation from the House of Commons Select Committee on Science and Technology to produce a list of algorithms in local and central government (Science and Technology Committee, 2018). At the time of writing, such a list is still not available. Moreover, we lack analysis of how these systems are implemented and used in practice, what changes in governance occur, how trade-offs are negotiated, and how these relate to the questions and concerns expressed by different stakeholder groups across society. It is only through such an analysis that we can engage with the actual implications of the turn to data-driven technologies, understood in context and in relation to other social practices and historical trends, as a way to politicise their development, deployment, implementation and use as sites of struggle (Christin, 2017; Dencik, 2019). We therefore now turn our attention to detailing developments and practices pertaining to citizen scoring in the UK.

Method

In order to investigate the uses of data-driven scoring systems in public services, we combined a number of different methods that would provide us with insights into general tendencies as well as particular practices. For this article, we draw predominantly on two data sets that form part of a larger project on citizen scoring: 1) 423 Freedom of Information (FOI) requests to local authorities and partner agencies in the UK asking for the names and uses of data systems in public services; and 2) 27 semi-structured interviews with public sector workers (17) and civil society groups (10) discussing the implementation and uses of data-driven systems, and the key advantages, challenges and concerns associated with such systems in public services.

Our interviews are structured around six case studies that explore different kinds of data system applications in different parts of the UK, covering areas including benefit fraud, child welfare, health, and policing across the north and south of England. For these case studies, we submitted further, more targeted Freedom of Information requests and carried out semi-structured interviews with public sector workers, seeking to speak with people involved in the development, management and use of the data systems in order to include a range of perspectives. The six case studies are:

  1. Bristol’s Integrated Analytical Hub
  2. Kent’s Integrated Dataset
  3. Camden’s Resident Index
  4. Hackney’s Early Help Profiling System
  5. Manchester’s Research & Intelligence Database
  6. Avon & Somerset Police’s Qlik Sense

In order to further engage with the implications of using data analytics in public services and the practice of citizen scoring, we also interviewed a range of civil society groups, sampled according to their role as public service stakeholders and their familiarity with service users and other impacted communities. These included diverse orientations pertaining to digital rights, welfare rights, and citizen participation (see table 1). All interviews were carried out between May and November 2018, in person, via online video or by phone, and lasted between 30 and 60 minutes.

Table 1: Sample of civil society groups

Organisation | Orientation
Big Brother Watch | Civil liberties
British Association of Social Workers (BASW) | Professional association
Citizens Advice Bureau | Advice & advocacy
Defend Council Housing | Housing activism
Disabled People Against the Cuts (DPAC) | Disability activism
Involve | Public engagement
Liberty | Human rights
Netpol | Police watchdog
Open Rights Group (ORG) | Digital rights
Independent activist | Welfare rights

Finally, for our analysis we also draw on discussions that took place during two project-dedicated workshops with stakeholders from across the public sector, civil society and academia working in the area of data and public services. Both these workshops took place in 2018, one in April and one in November, in London, UK.

The study constitutes the most comprehensive analysis of citizen scoring in UK public services to date, and the first to combine a map of developments with stakeholder perspectives from across the public sector and civil society. We now outline some key developments and their implications based on our findings.

Predicting and scoring

Citizen scoring relies on predictive analytics, but not all uses of predictive analytics lead to citizen scoring. Given the lack of information available about where and how predictive analytics is being used, we began by producing a list of all the instances we could identify through manually analysing the responses to our FOI requests. At the time of writing, 328 responses had been received out of 423 requests; the others were either blocked, delayed, or not yet responded to. From this exercise, we identified 53 councils using predictive analytics (table 2). Whilst we did not send FOI requests to individual police forces, we were able to complement our list with further information from research conducted by the non-governmental organisation Liberty, which identified 14 UK police forces making use of predictive analytics based on 90 FOI requests (table 3).

Table 2: Predictive analytics systems in UK public services

Council | System
Argyll and Bute Council | Sentiment Metrics social media sentiment analysis
Birmingham City Council | Business Objects (SAP)
Birmingham City Council | Tableau
Blaby District Council | Mosaic (Experian)
Bournemouth Borough Council | AccsMap
Bournemouth Borough Council | Arcady
Bournemouth Borough Council | Mova
Bournemouth Borough Council | Scoot
Bournemouth Borough Council | SocialSignIn
Bournemouth Borough Council | Stratos
Bournemouth Borough Council | Tableau
City of Bradford Metropolitan District Council | CapitaONE
City of Bradford Metropolitan District Council | Liquidlogic Children's Social Care System (LCS)
London Borough of Brent | Risk Based Verification
Brighton and Hove City Council | [Name not specified]
Brighton and Hove City Council | ArcGIS
Brighton and Hove City Council | Business Objects (SAP)
Brighton and Hove City Council | Predictive Analytics (SAP)
Bristol City Council | Think Family
Carlisle City Council | Housing Benefit System (Capita)
Carlisle City Council | Risk Based Verification (Xantura)
Ceredigion County Council | Daffodil
Ceredigion County Council | Local Development Plan
Ceredigion County Council | POPGROUP (Edge Analytics)
Charnwood Borough Council | Abritas Shortlisting
Charnwood Borough Council | QL Rent Arrears Progression
Chiltern District Council | Risk Based Verification
City of York Council | [Name not specified]
Copeland Borough Council | GIS (Geographical Information Systems)
London Borough of Croydon | Business Objects (SAP)
Dacorum Borough Council | Risk Based Verification (CallCredit)
Derbyshire Dales District Council | M3PP (Northgate Public Services)
Dudley Metropolitan Borough Council | [Name not specified]
Dudley Metropolitan Borough Council | [Name not specified]
Dudley Metropolitan Borough Council | [Name not specified]
Dudley Metropolitan Borough Council | Business Objects (SAP)
Dudley Metropolitan Borough Council | Single Person Discount Review (TransUnion/CallCredit), provided by external service provider Civica
London Borough of Ealing | Risk Based Verification (Coactiva)
East Hampshire District Council | Dynamics (Microsoft)
East Hampshire District Council | Experian Public Sector Profiler
East Riding of Yorkshire Council | Risk Based Verification (Xantura & Northgate PS Ltd)
Erewash Borough Council | Risk Based Verification
Folkestone & Hythe District Council | Risk Based Verification (Xantura)
Fylde Borough Council | Risk Based Verification (TransUnion/CallCredit)
Greater Manchester Combined Authority | [Name not specified]
Hertfordshire County Council | [Name not specified]
Hertfordshire County Council | [Name not specified]
Hertfordshire County Council | Mosaic (Experian)
Hull City Council | Risk Based Verification
Huntingdonshire District Council | Risk Based Verification (CallCredit)
Inverclyde Council | Scottish Patients at Risk of Readmission and Admission (SPARRA) (Health & Social Care Partnership)
Ipswich Borough Council | Risk Based Verification (CallCredit)
London Borough of Islington | Holistix (Quality Education Systems (QES))
Kent County Council | ACORN (CACI)
Kent County Council | Kent Integrated Dataset (KID)
Kent County Council | Mosaic (Experian)
Leeds City Council | FFT Aspire
Liverpool City Council | [Name not specified]
London Borough of Southwark | Student council tax discount review, with National Fraud Authority Initiative & Fujitsu
Medway Council | National Child Measurement Programme
Medway Council | NHS Health Checks programme
Milton Keynes Council | NHS Health Check software
Northamptonshire County Council | CapitaOne Admissions
Northamptonshire County Council | Fischer Family Trust Aspire
Northamptonshire County Council | Youth Offender Group Reconviction Scale (YOGRS)
Nottinghamshire County Council | Mosaic (Experian)
Purbeck District Council | Risk Based Verification (Xantura)
Rotherham Metropolitan Borough Council | Rentsense (Mobysoft)
Royal Borough of Windsor and Maidenhead | Risk Based Verification
South Bucks District Council | Risk Based Verification
Suffolk County Council | Connect Measure
Sunderland City Council | Risk Based Verification (CallCredit)
London Borough of Waltham Forest | Looker
London Borough of Waltham Forest | SageMaker (Amazon Web Services)
West Lothian Council | Risk Based Verification (CallCredit)
Weymouth and Portland Borough Council | Risk Based Verification (Xantura)
Wigan Metropolitan Borough Council | Risk stratification models
Worcester City Council | Risk Based Verification (Capita)
London Borough of Hackney | Early Help Profiling System
London Borough of Tower Hamlets | Children's Safeguarding Profiling Model
London Borough of Newham | Children's Safeguarding Profiling Model

Table 3: Use of predictive policing programmes in the UK (source: Liberty, 2019)

Police Force | Predictive mapping programmes | Individual risk assessment programmes
Avon and Somerset | X | X
Cheshire | X |
Durham | | X
Dyfed Powys | X (in development) |
Greater Manchester Police | X |
Kent | X |
Lancashire | X |
Merseyside | X |
The Met | X |
Norfolk | X |
Northamptonshire | X |
Warwickshire and West Mercia | X (in development) |
West Midlands | X | X
West Yorkshire | X |

In analysing the general FOI requests, we found a varied landscape across local authorities in the UK in terms of both understanding and implementation of data systems in public services. The range of responses we were provided with indicates that there is as yet no common understanding of what constitutes data analytics within local government, let alone the use of data for practices such as prediction, risk assessment, categorisation, profiling or scoring. It is therefore very difficult to collate a comprehensive list of predictive analytics systems. The map of predictive analytics uses illustrates the diverse uses of data by different councils and partner agencies. At the same time, in combination with our case study research, we can identify a number of key players and central trends. In particular, as we go on to outline below, the turn to data analytics is happening in a context of funding cuts and, whilst some systems are being developed in-house, we see a few prominent private companies emerging as suppliers of data systems, alongside a push towards collecting, sharing and integrating data across agencies with a view to carrying out risk assessments and profiling at individual and population level.

Austerity and public-private partnerships

Across our case studies, interview participants pointed to the need to implement new systems for data sharing and analysis in order to contend with the financial realities of an austerity agenda. Local authorities in England have had their funding from central government cut by up to 60% since 2010 (Davies, Boutaud, Sheffield, & Youle, 2019). A developer who worked to implement Qlik Sense in the Avon & Somerset Constabulary said of the turn to more data systems: “it’s viewed very much as a critical enabler, a strategic imperative for any… organisation that’s facing cuts” (Avon & Somerset Police developer). In engaging with this context, whilst some councils develop systems in-house, our research identified a number of prominent companies involved and different kinds of public-private relationships, ongoing or emerging, with the turn to data-driven systems.

We can see from the list of predictive analytics that a few private companies have established themselves as prominent suppliers of predictive algorithms. Xantura and CallCredit, for example, provide data sharing and analytics to several public sector clients across the UK, particularly in the area of risk assessments. On its website (www.xantura.com), Xantura lists its key areas of focus as “improving outcomes for vulnerable groups, protecting the public purse and, helping clients build ‘smarter’ business processes”. Its systems relate to areas such as the Troubled Families programme (a government-initiated reform of social services launched in 2012), fraud and error detection, and children’s safeguarding. Its Early Help Profiling System (EHPS), used for example in Hackney, one of our case studies, “translates data on families into risk profiles, sending monthly written reports to council workers with the 20 families in most urgent need of support,” according to its website. In addition, it provides a Risk Based Verification (RBV) system for the automated detection of “fraud and error”, which applies different levels of checks to benefit claims according to the risk associated with those claims, determining the level of verification required (Department for Work and Pensions, 2011). CallCredit, meanwhile, is a major consumer credit reporting agency (since acquired by TransUnion) that, similar to Xantura, offers a Risk Based Verification service to councils processing Housing and Council Tax benefit claims. CallCredit also provides a demographic profiling tool similar to Mosaic, the geodemographic segmentation system provided by Experian, which our research highlights is widely used across local authorities and partner agencies for a range of purposes. Most controversially, Mosaic was found to be used to inform the risk assessment of defendants as part of Durham Constabulary's Harm Assessment Risk Tool (HART).
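To make the general logic of Risk Based Verification concrete, the sketch below shows, in Python, the shape of such a system as described in the DWP guidance: a claim’s risk score determines the intensity of the checks applied. This is a minimal illustration only; the scoring models used by suppliers such as Xantura and CallCredit are proprietary, and all names, thresholds and tiers here are invented.

```python
# Illustrative sketch of a risk-based verification flow: a claim's risk
# score determines the level of checking applied. All names, thresholds
# and check tiers are hypothetical; actual RBV products are proprietary
# and their scoring models are not public.

from dataclasses import dataclass

@dataclass
class BenefitClaim:
    claim_id: str
    risk_score: float  # assumed output of a proprietary model, 0 to 1

def verification_level(claim: BenefitClaim) -> str:
    """Map a risk score onto a tier of checks (thresholds are invented)."""
    if claim.risk_score >= 0.7:
        return "enhanced"   # e.g., documentary evidence plus interview
    if claim.risk_score >= 0.3:
        return "standard"   # e.g., documentary evidence only
    return "light-touch"    # e.g., automated checks only

claims = [
    BenefitClaim("C-001", 0.82),
    BenefitClaim("C-002", 0.41),
    BenefitClaim("C-003", 0.12),
]
for c in claims:
    print(c.claim_id, verification_level(c))
```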

Policing has become a prominent area of predictive analytics. The research carried out by Liberty indicates that predictive policing programmes fall predominantly into two areas: 1) predictive mapping programmes and 2) individual risk assessment programmes. Most forces using predictive policing programmes engage in forms of mapping: programmes that evaluate police data about past crimes to identify high-risk ‘hot spots’ on a map. These are supplied by a range of private companies, including HunchLab, IBM, Microsoft, Hitachi, and Palantir. A few forces also engage in individual risk assessment programmes, which predict how people will behave, including whether they are likely to commit, or become victims of, certain crimes (Couchman, 2019).
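As a rough illustration of the simplest form of predictive mapping, the sketch below bins past incident locations into grid cells and flags the densest cells as ‘hot spots’. Commercial systems use far more sophisticated statistical models; the coordinates, cell size and threshold here are all invented for illustration.

```python
# Minimal illustration of grid-based 'hot spot' mapping: past incident
# locations are binned into grid cells and cells with many incidents are
# flagged. Real predictive mapping products use more elaborate models;
# this only conveys the basic idea. All coordinates are invented.

from collections import Counter

CELL_SIZE = 0.01  # grid resolution in degrees (hypothetical)

def cell(lat: float, lon: float) -> tuple:
    """Assign a coordinate to a grid cell by simple binning."""
    return (int(lat / CELL_SIZE), int(lon / CELL_SIZE))

past_incidents = [
    (51.4545, -2.5879), (51.4547, -2.5881), (51.4549, -2.5875),
    (51.4800, -2.6000),
]

counts = Counter(cell(lat, lon) for lat, lon in past_incidents)
hot_spots = [c for c, n in counts.most_common() if n >= 3]
print("Flagged cells:", hot_spots)  # the cluster of three incidents
```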

From data warehouses to risk assessments

Our FOI requests point to different applications of data analysis across contexts, and our case studies demonstrate the distinct nature of developments in different local authorities. No standard procedures are in place for how data systems are implemented, discussed and audited. Instead, uses of data systems are approached very differently, with some data sharing leading to the creation of individual risk scores, whilst in other contexts this is not practised and databases serve predominantly as verification tools or to provide population-level analytics. This indicates that whilst it is broadly accepted that public service planning requires data and analytics, there is no shared understanding amongst local authorities as to what is appropriate to do with such technologies.

Despite differences in application, data sharing between agencies and different parts of the council is a prominent trend, described as the creation of “data warehouses” or “data lakes” that seek to get “the golden view” (Camden Council manager) of citizens. This refers essentially to integrated databases that gather information about residents and their interactions with public services, across areas such as housing, education, social services, and sometimes also health and policing. In the case of Bristol’s Integrated Analytical Hub, for example, the Think Family database, which is used for services pertaining to child welfare, integrates 35 different social issue data sets, including school attendance, criminal records, unemployment, domestic abuse and mental health problems in the family. These are similar to the data sets used elsewhere, including the Early Help Profiling System developed by Xantura that is used in Hackney, and the iBase system that is part of Manchester’s Research & Intelligence Database. In the case of Camden’s Resident Index, which is used for benefit fraud detection, the data sources include council tax and benefits, housing, electoral registration, libraries and parking permits data, in addition to adult and children’s social services and school information. Avon & Somerset Police have sought to connect internal data sets, as well as some data sets from other agencies in Bristol Council, to provide integrated assessments and evaluations through the self-service analytics software Qlik Sense. Kent’s Integrated Dataset (KID), on the other hand, brings together data from 250 local health and social care provider organisations as well as Fire and Rescue Service data to support planning and commissioning decisions. In integrating data sets from across agencies and different parts of the council, managers see a potential for targeting resources more effectively and being better positioned to respond to primary need. One manager described it as a need “to have a more strategic understanding of the city” (Bristol Council manager).

The creation of this ‘golden view’ of citizens, as one of our research participants described it, takes several forms and plays out in a broad range of data applications. We use it here as a metaphor to understand data systems as part of a desire to have both additional and more integrated information about populations, as well as more granular information about citizens, that forms the basis of prediction and can drive actions taken. In our case studies, some of these applications involve population-level analytics and network analysis, and do not involve the production of ‘scores’ as such, but rather a map of general trends and connections. In other cases, scoring can take several forms: in some instances it is predominantly a matching score used for identity verification (the probability that records in different data sets refer to the same person), whilst in others it is based on a risk assessment relating to individuals or populations, indicating either a percentage score, a particular ‘risk threshold’ that triggers an alert when passed (based on a combination of risk factors), or a ranking of individuals ‘at risk’ from high to low within a specific ward.
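A minimal sketch of the latter two forms, assuming invented risk factors and weights: a weighted combination of factors is compared against a threshold to trigger an alert, and the same scores produce a high-to-low ranking. Real systems derive weights from statistical models trained on historical data rather than fixing them by hand.

```python
# Hypothetical sketch of two scoring forms described above: a weighted
# combination of risk factors compared against a threshold to trigger an
# alert, and a high-to-low ranking within a cohort. The factors, weights
# and threshold are all invented for illustration.

WEIGHTS = {"school_absence": 0.4, "prior_referrals": 0.35, "housing_instability": 0.25}
ALERT_THRESHOLD = 0.6

def risk_score(factors: dict) -> float:
    """Combine scaled risk factors (0-1 each) into a single score."""
    return sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS)

cohort = {
    "person_a": {"school_absence": 1.0, "prior_referrals": 0.5},
    "person_b": {"housing_instability": 1.0},
    "person_c": {"school_absence": 1.0, "prior_referrals": 1.0, "housing_instability": 0.5},
}

scored = {pid: risk_score(f) for pid, f in cohort.items()}
ranking = sorted(scored.items(), key=lambda kv: kv[1], reverse=True)  # high to low
alerts = [pid for pid, s in scored.items() if s >= ALERT_THRESHOLD]
print("Ranking:", ranking)   # person_c (0.875) first
print("Alerts:", alerts)     # only person_c crosses the threshold
```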

With Camden’s Resident Index, for example, citizen scoring predominantly concerns identity verification used to indicate the risk of fraud. This may include household views that show the records of the different people associated with an address, with different levels of verification assigned to data points, such as data from council tenancy registration or from accessing a library service. The view provides the possibility to detect fraud such as “school admissions where people are applying for school places from places they don’t live in, or people are illegally subletting their council tax properties, or people retaining accessible transport benefits when they no longer live in the borough” (Camden Council developer). Whilst the model does not lead to any final decision or action, the project manager noted that “it helps the service whittle down the likely cases to investigate” (Camden Council developer).
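A toy sketch of such a ‘matching score’ for identity verification: fields from records in two data sets are compared, producing a weighted similarity that can be read as confidence that they refer to the same person. Production systems use probabilistic record linkage rather than this simple measure; the weights and example records here are invented.

```python
# Illustrative sketch of a 'matching score' for identity verification:
# comparing fields across records from two data sets and producing a
# similarity read as confidence they refer to the same person. Field
# weights and records are invented; real systems use probabilistic
# record linkage rather than this toy measure.

from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec1: dict, rec2: dict) -> float:
    """Weighted similarity across shared fields (weights are hypothetical)."""
    weights = {"name": 0.5, "dob": 0.3, "address": 0.2}
    return sum(w * field_similarity(rec1[f], rec2[f]) for f, w in weights.items())

council_tax = {"name": "Jane Smith", "dob": "1980-04-02", "address": "1 High St"}
library = {"name": "J. Smith", "dob": "1980-04-02", "address": "1 High Street"}

# Prints a value between 0 and 1; higher means more likely the same person
print(round(match_score(council_tax, library), 2))
```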

Meanwhile, councils such as Bristol and Hackney, and police forces such as Avon & Somerset Constabulary, have developed or contracted systems that are concerned with identifying risks and vulnerability amongst individuals and households. Prominent uses of citizen scoring in this respect exist in areas such as child welfare and policing, where vulnerability and risk are calculated through the combination of extensive data sets that identify characteristics and behaviours of previous victims and offenders in order to flag individuals with similar characteristics. These scores and reports are provided to frontline workers as intelligence to help indicate who might need further attention. Bristol’s Think Family database, for example, includes data on all children and young people within the local authority, each of whom is given a score to indicate the likelihood that they may become a victim of some form of exploitation. The Qlik Sense system adopted by the Avon & Somerset Police ranks all offenders and victims of crime, categorising them as high, medium or low risk for either re-offending or becoming a victim of crime, alongside the harm that an offender carries (e.g., grievous bodily harm or threats to kill).

In outlining developments, we therefore see the varied applications of data systems, the significance of contextual factors, such as policy agendas relating to austerity, for the turn to data-driven technologies in public services, and the further intertwining of government and business spheres. As we discuss further below, this is significant for the ability to engage citizens in consultations and advance public transparency, as well as for the position of public sector workers in negotiating these systems: the less government agencies invest in developing their own internal capabilities, the more they become locked in and reliant on external expertise (Garrido et al., 2018). Moreover, with extensive data collection and sharing, and an onus on prediction and risk assessment as central features of data systems in public services, concerns about the implications for citizen rights and the impact of such decision-making on different groups and communities have become prominent. We now turn to outline some of these negotiations and tensions.

Negotiations and tensions

There are ongoing tensions emerging as local authorities try to respond to the problems facing communities with fewer resources, driving a need to be ‘smarter’ and more efficient. Some of these tensions are prominent amongst public sector workers as they are confronted with different challenges pertaining to the practices of citizen scoring, but we also see a discrepancy in the nature of the issues raised by different stakeholder groups. In this section we outline some key themes emerging from our research interviews with regard to the transformations and implications of implementing data systems in public services.

Citizen rights and harms

The extent of data collection, who gets to see it, and the lack of transparency around its uses were raised as prominent concerns amongst civil society groups, but are tackled very differently by different councils. Although the EU’s General Data Protection Regulation (GDPR) addresses some aspects of data sharing and use, detailed requirements are still unclear and many parts of public service provision are exempt from such regulation (Big Brother Watch, 2018b). In the context of this regulatory vacuum, there was therefore a recognition among interviewees from local authorities that they were balancing, or engaging in a trade-off between, privacy rights and the rights of vulnerable individuals to protection and care. Indeed, the drive to enable a ‘golden view’ of citizens by linking up all available data sets comes in part from perceived failings of agencies to adequately share and act on information in order to respond to needs and risks, marked by high-profile cases such as the deaths of Baby P, Victoria Climbié, and Fiona Pilkington following instances of long-term abuse. Yet interpretations of what this means for data practices are varied.

The KID was developed to provide population-level health planning rather than to aid decision-making related to specific individuals. This also means that practitioners only have access to pseudonymised data. On the other hand, Manchester’s Research and Intelligence Database is designed to make it easier for frontline workers to share and access identifiable data about those receiving support services, to develop a fuller picture of these individuals and the networks around them. With Hackney’s Early Help Profiling System, identifiable data is sent to case workers once the automated system deems a certain risk threshold to have been crossed. Similarly, we found that different councils have different ideas about whether and how citizen consent is required. For example, Manchester seeks consent from service users whose data is contained in the system, whereas Hackney council does not, as it is thought this would compromise the effectiveness of the system. Finding ways to communicate data practices to citizens and uphold genuine ‘informed consent’ was generally seen, across both public sector and civil society, as a prominent challenge.
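The KID’s reliance on pseudonymised data points to a widely used technique, sketched below under the assumption of a keyed-hash approach: direct identifiers are replaced with a stable pseudonym so that records about the same person can still be linked across data sets without exposing identity. How the KID actually pseudonymises its data is not documented here; this is a generic illustration.

```python
# Generic pseudonymisation sketch: replace a direct identifier with a
# keyed hash so records about the same person can still be linked across
# data sets without exposing the identifier itself. This is illustrative
# only; the actual method used in the Kent Integrated Dataset may differ.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical

def pseudonymise(identifier: str) -> str:
    """Derive a stable pseudonym from an identifier using HMAC-SHA256."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# The same input always yields the same pseudonym, enabling linkage
print(pseudonymise("943 476 5919"))
```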

Less discussed and addressed by local authorities are issues of ‘bias’ or harms, particularly how the use of these new systems might negatively affect people’s lives. Such concerns have become particularly prominent as data processes often sit behind a veneer of technological ‘objectivity’. There is recognition across our case study interviews that bias can be embedded in models and that harms can occur, and this recognition is evident in how these systems are discussed. For example, Xantura developers explicitly stated that they develop their systems not to be ‘punitive’ but to enable early intervention, and that they regularly monitor and check for biases in their model. However, engaging with unanticipated harms that may arise from using such models is less salient amongst developers and managers working to implement data systems. In contrast, this was one of the critiques raised most often by civil society groups. They highlighted concerns about the ways in which a data lens particularly targets those on the margins and how these systems impact citizen rights and opportunities differently. As one interviewee noted, “it’s not something that the bulk of the population will ever encounter. It’s something you only encounter when you are part of a risk group, a risk population” (Netpol). Such concerns are echoed in research carried out in other countries, which has highlighted how systems like these, which disproportionately draw on and use data about people who make use of social services, are biased through the over-representation of a particular part of the population. The variables being used can in practice be proxies for poverty, for example when the length of time someone has been on benefits is a variable influencing risk assessments (see also Gillingham & Graham, 2017; Eubanks, 2018).

Related to this, several of our civil society interviewees raised the issue of stigmatisation as a central feature of citizen scoring, highlighting how the creation of data warehouses, risk assessments and predictions can in itself be harmful: “Because of this kind of quantification and categorisation approach that data analytics actually demands and the use of ever more sensitive data, there are people who will feel sidelined, maligned, judged, stereotyped” (Big Brother Watch). Further, none of the case studies we analysed included a means for people who had been scored to know their score, how it was generated, or how to interrogate it. This inability to see or ‘talk back’ was seen as having significant democratic implications in terms of due process, and can lead to differential treatment and opportunity, given that someone may unknowingly be affected by a score. Indeed, transparency about how data is used and processed, and for what purpose, was noted as “the first step” (Netpol) towards mitigating harms that may emerge from the implementation of data systems and the kinds of interventions acted upon them, not least in the context of the speed with which data systems are being deployed, often with limited consultation and impact assessment: “I think the issue is that things are being introduced so quickly and without adequate oversight and without adequate testing for things like bias” (Liberty). Moreover, this information asymmetry also speaks to the way the pursuit of a ‘golden view’ situates citizens in relation to their social context, through the practice of labelling, sorting and scoring: “You think that you’re normal working class, maybe a poor family and suddenly you are being classed as a risk in some way. It’s a fundamental question, what right do you have to label people based on something” (Open Rights Group).

Engaging with civil society concerns and assessments of the implications for impacted communities is especially pertinent as there is an underlying assumption in the implementation of data systems in public services that information will lead to action. The perceived value of these scoring systems lies in part in their ability to incorporate ‘real-time’ data that provides a profile and assessment of individuals and households on a continuous basis, also informing an escalation of risk. For example, this is a key part of the scoring for offenders: “once you’re measuring risk in an automated way, you can then measure the escalation risk. So if someone’s offending behaviour changes over the last week or two or even overnight, the model will then show you that and it’ll push it up the list” (Avon & Somerset Police developer). This, in turn, serves to advance a logic around early intervention and pre-emptive measures, or what was referred to as “targeted interventions” (Bristol Council manager) in the context of “preventative proactive work”: “capturing more risk” through automated risk assessments will require engagement with individuals whose risk would not usually be considered high enough, calling for a “light touch” that engages with people on an ongoing basis (Bristol Council developer). However, we found that those interviewed often could not tell us how the data systems introduced led to concrete measures. Without comprehensive evaluation of how these new data arrangements are, or are not, affecting action, engagement and resources, these claims remain unproven. The argument that these systems make it easier for frontline staff to access and share information and assess risk is made with little, if any, evidence about how this affects resource allocation or actions taken.
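The escalation logic described by the developer can be sketched as a simple comparison of successive scoring runs, with the largest week-on-week increases pushed to the top of the list. All scores and identifiers below are invented for illustration.

```python
# Sketch of the 'escalation' logic described in the quote above: compare
# this week's automated risk scores with last week's and surface the
# largest increases at the top of the list. Scores and IDs are invented.

last_week = {"offender_1": 0.42, "offender_2": 0.75, "offender_3": 0.30}
this_week = {"offender_1": 0.68, "offender_2": 0.74, "offender_3": 0.31}

escalation = {pid: this_week[pid] - last_week.get(pid, 0.0) for pid in this_week}

# Rank by the size of the increase, largest first
watch_list = sorted(escalation.items(), key=lambda kv: kv[1], reverse=True)
print(watch_list)  # offender_1 rises to the top with +0.26
```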

At the same time, experiences amongst service users and communities point to the need to engage more comprehensively with the way data systems relate to different activities that might lead to a range of harms and to feelings of being targeted. This requires a re-evaluation of the assumptions that authorities and the state are necessarily benign, and that technologies are neutral. Whilst harmful outcomes relating to data collection and use might not be intentional, such evaluations point to the need to consider how data has the potential to facilitate punitive measures. Yet what kinds of impact would need to be assessed, and how evaluations of actions taken on citizen scores would be carried out, remain difficult questions, as there is no clear line of accountability for any one system that is distributed across different people and uses. Moreover, councils pointed to a lack of resources for pursuing any comprehensive evaluations or impact assessments of the transformations in practices and provision that come with the implementation of new data systems.

Professional authority and operational logics

This question of how to evaluate or assess impact gains further pertinence because the tensions and negotiations surrounding the harms and rights infringements that may arise with the use of data systems in public services are simultaneously playing out in a context of changing practices and organisational transformations that set different understandings and activities at odds. In the building of a culture of data collection, we found a concern amongst both civil society groups and frontline staff about a fundamental re-orientation of professional practices and routines, of relationships, and of the kinds of information deemed valuable in delivering public services. In determining a family’s needs, for example, a member of a professional association for social workers noted that “the systems are set up for social workers to collect data as performance management,” pointing to a concern that this “can divert the social worker from being able to understand the case because the sort of data that they’re collecting, they might be lost in there, the complexities of the case” (Godfred Boahen, BASW).

In the prominent application of data systems for the purposes of identifying and measuring risk, such as the widespread use of Risk Based Verification systems, we are also confronted with a general shift within public administration towards risk management as a new ‘paradigm’ of operations (Yeung, 2018). The way in which this shifts authority away from public sector workers themselves towards computational outputs was a recognised tension across our case studies, frequently addressed through an explicit emphasis on professional judgment as the central pillar of any decision-making, regardless of the implementation of data systems. One manager put it this way: “it’s not computer says yes or no, it’s computer provides advice and guidance to the professional who adds the professional judgment in order to make better decisions about resource allocation” (Bristol Council manager). This was echoed elsewhere, with developers working with Hackney Council, for example, stressing that the goal is “not to replace professional staff but to support them by giving them the information they need to do their job better,” and Avon & Somerset Police inspectors pointing out that “it is just a tool” and not “the be-all-and-end-all”.

Emphasising the continued value of professional judgment as the ultimate ‘decision-maker’ has been key to advancing the implementation of data analytics within public services in the face of what several interviewees recognised as an element of hostility towards technology amongst frontline staff. This resistance was often reduced or dismissed by managers and developers as professional conservatism or a lack of technical skills. One described how “confidence around technology is low” (Avon & Somerset Police developer) and another pointed to a historical scepticism towards alternative approaches to knowledge: “There’s been a strongly held view that the only people who should tell you something about them is children and families themselves” (Bristol Council manager).

Maintaining a prominent rhetoric around the importance of professional knowledge and domain-specific expertise is also a way to contend with what are perceived to be not just cultural challenges within the organisation, but also technical challenges that limit the so-called ‘accuracy’ of systems. In interviews, developers of data systems pointed to continued issues of data quality within public services, with some data sets riddled with errors, for example “with people giving wrong names, wrong date of birth, things like that” (Bristol Council developer). High error rates mean that practitioners find it important to be able to interrogate scores. As a coordinator within Avon & Somerset Police noted: “if someone has got a particularly high score, we will look at what’s given them the high score and drill in to make sure the data’s correct but it isn’t always. For example, it might be a data quality issue where someone is identified as high risk because they were previously linked to a murder or attempted murder and actually they were eliminated from that murder” (Avon & Somerset Police co-ordinator). This has spurred managers to call for increased “data literacy” training dedicated to enhancing people’s “ability to engage, interpret and argue data and pla[ce] data at the centre point of how people make decisions” (Avon & Somerset Police manager).

At the same time, we see a frustration amongst frontline staff with the ways in which professional judgment is continuously confined within limited parameters as data systems come to set the terms of engagement with citizens. One frontline police officer complained, “there will still be people who say…[following the technology] is what we must do” (Avon & Somerset Police inspector). This tension was also recognised by some of the developers: “we can’t control what people do off the back of [the data system]… It might force them into activity they wouldn’t otherwise do” (Bristol Council developer). In part, this speaks to the challenge of what is also referred to as ‘automation bias’ (Cummings, 2004), in which people attribute higher value to technological outputs, sometimes trusting these more than their own judgments and decision-making. However, it also points to a broader challenge with regard to how professionals are positioned in relation to data systems, not least in a context of austerity and cuts to services. In our workshop discussions, participants’ experiences indicated how the implementation of data-driven technologies advances a push towards the rationalisation of ‘messy’ lives through the reductionism and functionalism that are fundamental features of the information processing of data-driven scoring systems, undermining the holistic assessments that are hallmarks of good judgment (Pasquale, 2019). That is, the crude categorisations that data systems rely on in order to provide analyses and scores are unable to account for the rich contextual, domain-specific knowledge that professionals consider central to appropriate decision-making. This is significant in several respects. In the case of Bristol’s Integrated Analytical Hub, for example, developers noted that data-driven risk assessments can only take account of risk factors such as school attendance, records of domestic abuse, etc., but cannot account for insulating ‘positive’ factors such as other types of social engagement or wider family networks, which rely on contextual knowledge and unstructured information. Furthermore, whilst there are attempts to aggregate data to identify broader social issues that shape opportunities and challenges for families and individuals, household-level and individual-level data relies on attaching risk factors to individual characteristics and behaviour, and might therefore divert focus away from structural causes, such as issues of inequality, poverty or racism.

As such, we see how, at the level of management and development of data systems in public services, challenges are predominantly seen as either technical or cultural in nature. Issues pertaining to data quality or organisational scepticism towards technology are current obstacles, but of a kind that can eventually be overcome through ‘better’ data practices that ultimately fit a shift towards data-driven governance. This understanding of challenges marks a significant discrepancy with the more fundamental concerns expressed by stakeholder groups from both civil society and frontline staff. Here we see a concern with social and political issues that speak to tensions at the core of what the ‘golden view’ of citizens might mean, in terms of different harms, rights, and the potential for enacting agency both as service users and as professionals. Moreover, as we will go on to discuss further below, these tensions point to more rudimentary questions about the way data-driven systems might transform state-citizen relations and understandings of both people and social issues.

Transformations in governance: deconstructing the ‘golden view’

In outlining developments and pointing to the complex amalgamation of political and economic forces, private and public actors, interpretive and regulatory vacuums, and prominent tensions and differences amongst stakeholders that makes up the turn to data-driven governance, we see a broader politics of such a turn emerge. Whilst our case study research points to the fact that no decision is currently made solely on the basis of these data-driven scores, the implementation of such systems is shaping the terms upon which citizens are engaged with and constructed in the context of public services. These systems are part of a move towards more integrated and granular information about populations, a perceived need that is now seen as attainable with the advent of data-driven technologies. Moreover, using data to categorise and classify behaviours and characteristics is seen as a way to target resources in the face of significant cuts in public sector spending, with a view to predicting and pre-empting activities and outcomes so as to advance more proactive forms of engagement with citizens.

At the same time, we have seen that in many instances these systems are being bought in from private suppliers that develop various off-the-shelf tools and applications, which can be deployed and repurposed within different parts of local government and the public sector, particularly for identification and risk assessments in the areas of benefit fraud, child welfare and policing. In this context, the turn to data-driven technologies raises concerns across different stakeholders not just about the lack of transparency and the likelihood of errors and bias in the design and uses of these systems, but also about a more fundamental shift in what constitutes, or is privileged as, social knowledge, the kinds of actions that might be taken on such knowledge, and the way in which this positions citizens as subjects of governance. Whilst the notion of a ‘golden view’ of populations, as expressed by management, suggests an advanced, more comprehensive engagement with the needs of the city or borough and the people living within it, the tensions and negotiations we have seen amongst professionals and civil society groups as systems are implemented illustrate the politicised nature of this ‘view’ in practice.

Concerns point to the implications of ‘seeing’ people through data within this context, and the abstracted and reduced understanding this may lead to when relied upon at the expense of other types of knowledge. In conjunction with the deskilling and disempowerment of professionals as the use of data systems grows, issues raised by stakeholders speak to a perceived danger that the messiness of people and lived experiences is necessarily sidelined or ignored in favour of the algorithmic processing of information. The extent to which digitised systems can be used in ways that reduce what is ‘knowable’ and hide complexity while appearing objective and neutral is a repeated finding across research investigating the ‘modernization’ of public services (Gillingham, 2011; Munro, 2010; White et al., 2009; Bartlett & Tkacz, 2017). Furthermore, in mapping what data systems are used for, we see how these technologies advance an onus on risk management as the dominant operative logic of public services. As Amoore (2013) has argued, with the turn to algorithmic decision-making in governance, authority and expertise are transferred to calculative devices seeking to capture risk over and above other forms of expertise. As such, citizens are positioned within this ‘golden view’ not as participants or co-creators, but primarily as (potential) risks, unable to engage with or challenge decisions that govern their lives.

Moreover, concerns with targeting and stigmatisation, particularly of marginalised and poor groups in society, highlight the way these systems attribute risk factors to individuals’ behaviour and characteristics, shifting the burden of responsibility for social ills onto individuals rather than collective solutions. When the focus is on individuals, predicting the risk of committing crime through data-driven profiling, for instance, is comfortably presented as a ‘solution’ for tackling rising crime levels whilst doing little to engage with any underlying causes of crime (Andrejevic, 2017). Similarly, when child welfare is assessed in the context of individual households, emphasis falls on seeing it primarily as an outcome of family history and behaviour. The worry is that what gets measured is the impact of school absences but not of school cuts; the impact of benefit claims but not of precarious work. In other words, these systems, in their emphasis on correlation over causation, can individualise social problems by directing attention away from their structural causes (Keddell, 2015).

In deconstructing the ‘golden view’ of citizens that data systems afford, we therefore need to consider how state-citizen relations, and indeed the substance of public services, are configured within such a view. This goes beyond the questions of error, bias and data discrimination that have received increased attention. Instead, it requires us to consider the implementation of data systems as a distinctly political process, engaging with the politics of data at different and interconnected scales (Ruppert et al., 2017). At one level, the ‘golden view’ signifies an interpretation of state-citizen relations marked by data-rich societies subject to austerity (McQuillan, 2018), in which governments turn to increased data sharing and analysis as a coping mechanism for a reduction in resources. This requires a critical interrogation of the premise of these systems, and of the interests and agendas their implementation seeks to serve. Moreover, citizen scoring, and the use of data for the purposes of categorisation, segmentation and profiling, is embedded within a particular understanding of the relationship between people and data and, with that, a particular type of social knowledge and value system (Van Dijck, 2014; Kitchin, 2014). In effect, data scores come to order the contours of citizenship, shaping the deserving and undeserving, the risky and the vulnerable, and, ultimately, the terms upon which access to and participation in society might occur.

Conclusion

The introduction of predictive analytics, scoring systems, intelligent databases and data warehouses into local government is a rapidly emerging feature of datafication. An increasing emphasis on data use in UK government has led to a proliferation of data systems being implemented, and to significant experimentation with algorithmic processes designed to provide new insights and extract value through different kinds of analytics. For public services, these systems are said to offer an opportunity to allocate resources and respond to needs more effectively. However, little is known about the kinds of systems in place, how and where they are used, and what practitioners and stakeholders think about these developments. This is especially challenging given what we have identified as both a regulatory and an interpretive vacuum: a lack of shared understandings not only of what constitutes data-driven decision-making and the algorithmic processing of information, but also of what it is appropriate to do with such systems. Through FOI requests, interviews and workshops we have sought to map uses and detail the different kinds of data systems being implemented, as well as the benefits and concerns identified by practitioners and civil society experts.

Our findings demonstrate the heterogeneity of data systems and their uses, and the contingency of their implementation on both local and broader societal factors. The turn to scoring systems and predictive analytics is being fuelled by an austerity context in which local councils have faced substantial cuts. While these technologies are presented as ‘smart’ and effective solutions for better service provision, they are being introduced in a context of service reduction. Further, we can observe a strong reliance on commercial systems, which pose additional challenges to transparency and incorporate a wider set of (transactional, social, etc.) data on people into public sector decision-making. In this setting, shifts in organisational practices and logics that implicate the role of professional judgment and the extent to which data systems come to guide decision-making have led to prominent concerns amongst stakeholder groups in civil society that are not necessarily considered within local authorities and partner agencies. These concerns go beyond questions of transparency, bias and discrimination, and point to broader worries about targeting and stigmatisation, and about how people come to be ‘seen’ and engaged with as citizens and service users.

The desire to rely on data collection and analysis to create a ‘golden view’ of populations serves as a pertinent metaphor: it encapsulates not only the perception of what becomes possible with increased data sharing, but also a particular conception of state-citizen relations. The negotiations that emerge when looking at the implementation, deployment and uses of data systems in practice point to the contestations over what comes to constitute social knowledge in such a view, the individualisation of risk and responsibility, the differential treatment this can introduce, and the inability of citizens to know about, engage with or challenge such assessments. These negotiations will continue to play a key part in understanding the transformations in governance emerging with the scoring society, and need to form a prominent part of discussions about what is at stake as data systems come to govern more and more aspects of our lives.

References

AlgorithmWatch. (2019). Automating Society: Taking Stock of Automated Decision-Making in the EU [Report]. Berlin: AlgorithmWatch. Retrieved from https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf

Amoore, L. (2013). The Politics of Possibility: Risk and Security Beyond Probability. Durham; London: Duke University Press.

Andrejevic, M. (2017). To pre-empt a thief. International Journal of Communication, 11, 879–896. Retrieved from https://ijoc.org/index.php/ijoc/article/view/6308

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bartlett, J., & Tkacz, N. (2017). Governance by Dashboard [Policy Paper]. London: Demos. Retrieved from https://www.demos.co.uk/wp-content/uploads/2017/04/Demos-Governance-by-Dashboard.pdf

Beer, D. (2018). Envisioning the power of data analytics. Information, Communication & Society, 21(3), 465–479. doi:10.1080/1369118X.2017.1289232

Big Brother Watch. (2018, April). A closer look at Experian big data and artificial intelligence in Durham Police [Blog post]. Retrieved from Big Brother Watch website: https://bigbrotherwatch.org.uk/2018/04/a-closer-look-at-experian-big-data-and-artificial-intelligence-in-durham-police/

Brown, W. (2015). Undoing the Demos: Neoliberalism’s Stealth Revolution. Cambridge, MA: The MIT Press.

Carney, T. (2018). Robo-debt illegality: The seven veils of failed guarantees of the rule of law? Alternative Law Journal. doi:10.1177/1037969X18815913

Cheney-Lippold, J. (2017). We Are Data. New York: New York University Press.

Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2). doi:10.1177/2053951717718855

Citron, D. K., & Pasquale, F. (2014). The Scored Society: Due Process for Automated Predictions. Washington Law Review, 89(1). Retrieved from https://digitalcommons.law.uw.edu/wlr/vol89/iss1/2

Couchman, H. (2019). Policing by Machine: Predictive Policing and the Threat to Our Rights [Report]. London: Liberty. Retrieved from https://www.libertyhumanrights.org.uk/sites/default/files/LIB%2011%20Predictive%20Policing%20Report%20WEB.pdf

Crawford, K. (2016, May 2). Know your terrorist credit score! Talk presented at the re:publica, Berlin. Retrieved from https://16.re-publica.de/en/16/session/know-your-terrorist-credit-score

Cummings, M. L. (2004). Automation Bias in Intelligent Time Critical Decision Support Systems. Presented at the AIAA 1st Intelligent Systems Technical Conference, Chicago. doi:10.2514/6.2004-6313. Retrieved from https://web.archive.org/web/20141101113133/http://web.mit.edu/aeroastro/labs/halab/papers/CummingsAIAAbias.pdf

Davies, G., Boutaud, C., Sheffield, H., & Youle, E. (2019, March 4). Revealed: The thousands of public spaces lost to the council funding crisis. The Bureau of Investigative Journalism. Retrieved from https://www.thebureauinvestigates.com/stories/2019-03-04/sold-from-under-you

Dencik, L. (2019). Situating practices in datafication – from above and below. In H. C. Stephansen & E. Treré (Eds.), Citizen Media and Practice. London; New York: Routledge.

Department for Work and Pensions. (2011, November 9). Housing Benefit and Council Tax Benefit Circular HB/CTB S11/2011. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/633018/s11-2011.pdf

Dixon, P., & Gellman, R. (2014). The Scoring of America: How Secret Consumer Scores Threaten Your Privacy and Your Future [Report]. Lake Oswego: World Privacy Forum. Retrieved from http://www.worldprivacyforum.org/wp-content/uploads/2014/04/WPF_Scoring_of_America_April2014_fs.pdf

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St Martin’s Press.

Fullerton, J. (2018, March 24). China’s ‘social credit’ system bans millions from travelling. The Telegraph. Retrieved from https://www.telegraph.co.uk/news/2018/03/24/chinas-social-credit-system-bans-millions-travelling/

Garrido, S., Allard, M. C., Béland, J., Caccamo, E., Reigeluth, T., & Agaisse, J-P. (2018). Literature Review: Ethical issues and social acceptability of IoT in the Smart City [Final Report No. 1]. Montreal: CIRAIG. Retrieved from http://ville.montreal.qc.ca/pls/portal/docs/page/prt_vdm_fr/media/documents/ido_vi_revue_litt_final_en.pdf

Gillingham, P. (2011). Decision making tools and the development of expertise in child protection practitioners: Are we “just breeding workers who are good at ticking boxes”? Child and Family Social Work, 16(4), 412–421. doi:10.1111/j.1365-2206.2011.00756.x

Gillingham, P., & Graham, T. (2017). Big data in social welfare: the development of a critical perspective on social work's latest “electronic turn.” Australian Social Work, 70(2), 135–147. doi:10.1080/0312407X.2015.1134606

Hellerstein, J. (2008, November 19). The Commoditization of Massive Data Analysis. Radar. Retrieved from http://strata.oreilly.com/2008/11/the-commoditization-of-massive.html

Jefferson, E. (2018, April 24). No, China isn’t Black Mirror – social credit scores are more complex and sinister than that. New Statesman. Retrieved from https://www.newstatesman.com/world/asia/2018/04/no-china-isn-t-black-mirror-social-credit-scores-are-more-complex-and-sinister

Keddell, E. (2015). The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: Child abuse prevention or neo-liberal tool? Critical Social Policy, 35(1), 69–88. doi:10.1177/0261018314543224

Kirchner, L. (2017, December 18). New York City Moves to Create Accountability for Algorithms. ProPublica. Retrieved from https://www.propublica.org/article/new-york-city-moves-to-create-accountability-for-algorithms

Kitchin, R. (2014). The data revolution. Big data, open data, data infrastructures & their consequences. London: Sage.

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. doi:10.1080/1369118X.2016.1154087

Lv, A., & Luo, T. (2018). Asymmetrical Power Between Internet Giants and Users in China. International Journal of Communication, 12, 3877–3895. Retrieved from https://ijoc.org/index.php/ijoc/article/view/8543

Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work and Think. London: John Murray.

McCann, D., Hall, M., & Warin, R. (2018). Controlled by calculations?: Power and accountability in the digital economy [Report]. London: New Economics Foundation. Retrieved from https://neweconomics.org/2018/06/controlled-by-calculations

McQuillan, D. (2017). Data Science as Machinic Neoplatonism. Philosophy & Technology, 31(2), 253–272. doi:10.1007/s13347-017-0273-3

McQuillan, D. (2018, October 13). Rethinking AI through the politics of 1968. Opendemocracy. Retrieved from https://www.opendemocracy.net/digitaliberties/dan-mcquillan/rethinking-ai-through-politics-of-1968

McQuillan, D. (2019, June 7). AI Realism and Structural Alternatives. Talk presented at the Data Justice Lab, Cardiff. Retrieved from http://danmcquillan.io/ai_realism.html

Munro, E. (2010, October 1). The Munro review of child protection. Part one: A systems analysis. London: Department for Education. Retrieved from https://www.gov.uk/government/publications/munro-review-of-child-protection-part-1-a-systems-analysis

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Penguin.

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Pasquale, F. (2019). Professional Judgment in an Era of Artificial Intelligence and Machine Learning. boundary 2, 46(1).

Poon, M. (2016). Corporate Capitalism and the Growing Power of Big Data: Review Essay. Science, Technology & Human Values, 41(6), 1088–1108. doi:10.1177/0162243916650491

Qlik. (2017, January 11). UK Police Force Visualizes Incident and Operations Data to Fight Crime Faster and Improve Public Safety [Press release]. Retrieved from https://www.qlik.com/us/company/press-room/press-releases/0111-police-force-visualizes-incident-operations-data-fight-crime-faster-improve-public-safety

Ruppert, E., Isin, E., & Bigo, D. (2017). Data politics. Big Data & Society, 4(2). doi:10.1177/2053951717717749

Science and Technology Committee, House of Commons. (2018). Algorithms in decision-making [Report No. 4]. Retrieved from https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/351.pdf

Stats NZ. (2018). Algorithm assessment report [Report]. Wellington: Government Information Services. Retrieved from https://data.govt.nz/use-data/analyse-data/government-algorithm-transparency

Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208. doi:10.24908/ss.v12i2.4776

White, S., Broadhurst, K., Wastell, D., Peckover, S., Hall, C., & Pithouse, A. (2009). Whither practice-near research in the modernization programme? Policy blunders in children’s services. Journal of Social Work Practice, 23(4), 401–411. doi:10.1080/02650530903374945

Footnotes

1. Full details of the methodology, including an outline of FOI responses can be found here: https://datajustice.files.wordpress.com/2018/12/data-scores-as-governance-project-report2.pdf

2. For an interactive map of these systems, see: https://data-scores.org/overviews/predictive-analytics

3. For details of this research, see the full report here: https://www.libertyhumanrights.org.uk/sites/default/files/LIB%2011%20Predictive%20Policing%20Report%20WEB.pdf

4. See also the submission from Big Brother Watch to the UN Special Rapporteur on extreme poverty and human rights: https://bigbrotherwatch.org.uk/wp-content/uploads/2018/11/BIG-BROTHER-WATCH-SUBMISSION-TO-THE-UN-SPECIAL-RAPPORTEUR-ON-EXTREME-POVERTY-AND-HUMAN-RIGHTS-AHEAD-OF-UK-VISIT-NOVEMBER-2018.pdf
