Making sense of data ethics. The powers behind the data ethics debate in European policymaking

Gry Hasselbalch, Department of Information Studies, University of Copenhagen, Denmark


Abstract

This article offers an analytical investigation of the different actors and forces that mould definitions of “data ethics” in European policy-making. It details how data ethics public policy initiatives took shape in the context of the European data protection regulatory reform, and addresses the general uncertainty that exists regarding their role and function. The paper also presents an analytical framework for an action-oriented “data ethics of power” that aims to elucidate the power relations of the ‘Big Data Society’, arguing that we should recognise data ethics policy initiatives as open-ended spaces of negotiation in which different interest groups seek to guide the cultural definition of “data ethics”, and in which complex power relations are exercised via cultural positioning.
Citation & publishing information
Received: November 21, 2018 Reviewed: January 28, 2019 Published: June 13, 2019
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The author has declared that no competing interests exist that have influenced the text.
Keywords: Data ethics, Data protection, Privacy, European policymaking, Governance
Citation: Hasselbalch, G. (2019). Making sense of data ethics. The powers behind the data ethics debate in European policymaking. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1401

Introduction

January 2018: The tweet hovered over my head: “Where are the ethicists?” I was on a panel in Brussels about data ethics, and this was not the first time such a panel or initiative had been questioned. The proper foundation was lacking, the right expertise was not included: the ethicists were missing, the humanists were missing, the legal experts were missing. The outcomes and requirements of these initiatives were unclear. Would they water down the law? I understood the critiques, though. How could we talk about data ethics when a law had just been passed following a lengthy negotiation process on this very topic? What was the function of these discussions? If we were not there to acknowledge a consensus, that is, the legal solution, what then was the point?

In the slipstream of sweeping data protection law reform in Europe, discussions regarding data ethics have gained traction in European public policy-making. Numerous data ethics public policy initiatives have been created, moving beyond issues of mere compliance with data protection law to focus increasingly on the ethics of big data, especially concerning private companies’ and public institutions’ handling of personal data in digital forms. Reception in public discourse has been mixed. Although gaining significant public attention and interest, these data ethics policy initiatives have also been depicted as governmental “toothless wonders” (e.g., Hill, 24 November 2017) and a waste of resources, and have been criticised for drawing attention away from public institutions’ mishandling of citizens’ data (e.g., the op-ed by Ingeniøren’s managing panel, 16 March 2018) and for potential “ethics washing” (Wagner, 2018), with critics questioning the expertise and interests involved in the initiatives, as well as their normative ethics frameworks.

This article constitutes an analytical investigation of the various dimensions and actors that shape definitions of data ethics in European policy-making. Specifically, I explore the role and function of European data ethics policy initiatives and present an argument regarding how and why they took shape in the context of a European data protection regulatory reform. The explicit use of the term “ethics” calls for a philosophical framework; the term “data” for a contemporary perspective on the critical role of information in a digitalised society; and the policy context for consensus-making and problem solving. Together, these views on the role of the data ethics policy initiatives are highly pertinent. Taken separately, however, each provides only a partial, kaleidoscopic insight into their role and function. For example, a moral philosophical view of data ethics initiatives (in public policy-making as well as in private industry) might not be attentive to the embedded interests and power relations; a pursuit of actionable policy results may overlook their function as spaces of negotiation and positioning; while viewing data ethics initiatives as something radically new in the age of big data can lose sight of their place in, and relation to, history and governance in general.

In my analysis, I therefore adopt an interdisciplinary approach that draws on methods and theories from different subfields within applied ethics, political science, sociology, culture and infrastructure/STS studies. A central thesis of this article is that we should perceive data ethics policy initiatives as open-ended spaces of negotiation embedded in complex socio-technical dynamics, which respond to multifaceted governance challenges that extend over time. Thus, we should not view data ethics policy initiatives as solutions in their own right. They do not replace legal frameworks such as the European General Data Protection Regulation (GDPR). Rather, they complement existing law and may inspire, guide and even set in motion political, economic and educational processes that could foster an ethical “design” of the big data age, covering everything from the introduction of new laws, the implementation of policies and practices in organisations and companies, and the development of new engineering standards, to awareness campaigns among citizens and educational initiatives.

In the following, I first outline a cross-disciplinary conceptualisation of data ethics, presenting what I define as an analytical framework for a data ethics of power. I then describe the data ethics public policy focus in the context of the GDPR. I recognise that ethics discussions are implicit in legislative processes. Nevertheless, in this article I do not specifically focus on the regulation’s negotiation process as such, but rather on policymakers’ explicit use of the term “data ethics”, and especially on the emergence of formal data ethics policy initiatives (for instance, committees, working groups, stated objectives and results), many of which followed the adoption of the GDPR. I subsequently move on to an analysis of data ethics as described in public policy reports, statements, interviews and events in the period 2015–2018. In conclusion, I take a step back and review the definition of data ethics. Today, data ethics is an idea, concept and method used in policy-making, but one with no shared definition. While a more aligned conceptualisation of data ethics might provide a guiding step towards a collective vision for action in law, business and society in general, an argument that runs through this article is that no definition of data ethics in this space is neutral of values and politics. Therefore, we must position ourselves within a context-specific type of ethical action.

This article is informed by a study that I am conducting on data ethics in governance and technology development in the period 2017-2020. In that study and in this article, I use an ethnographically informed approach based on active and embedded participation in various data protection/internet governance policy events, working groups and initiatives. Qualitative embedded research entails the immersion of the researcher in the field of study as an active and engaged member in order to achieve thorough knowledge and understanding (Bourdieu, 1997; Bourdieu & Wacquant, 1992; Goffman, 1974; Ingold, 2000; Wong, 2009). Thus, essential to my understanding of the underlying dimensions of this article’s topic is my active participation in the internet governance policy community. For example, I was part of the Danish government’s data ethics expert committee (2018) and am part of the European Commission’s Artificial Intelligence High Level Expert Group (2018-2020). I am also the founder of the non-profit organisation DataEthics.eu, which is active in the field.

In this article, I also draw on ideas, concepts and opinions generated in interaction with nine active players (decision-makers, policy advisors and civil servants) who contributed to my understanding of the policy-making dynamics by sharing their experiences with data ethics in European policy-making (see further in references). The interviewees were informed about the study and that they would not be represented by name and institution in any publications, as I wanted their accounts to be minimally influenced by institutional interests and requirements.

Section 1: What is data ethics? A data ethics of power

In this section I introduce the emerging field of data ethics as the cross-disciplinary study of the distribution of societal powers in the socio-technical systems that form the fabric of the “Big Data Society”. Based on theories, practices and methods within applied ethics, legal studies and cultural studies, social and political sciences, as well as a movement within policy and business, I present an analytical framework for a “data ethics of power”.

As a point of departure, I define a data ethics of power as an action-oriented analytical framework concerned with making visible the power relations embedded in the “Big Data Society” and the conditions of their negotiation and distribution, in order to point to design, business, policy, social and cultural processes that support a human-centric distribution of power. In a previous book (Hasselbalch & Tranberg, 2016), we described data ethics as a social movement of change and action: “Across the globe, we’re seeing a data ethics paradigm shift take the shape of a social movement, a cultural shift and a technological and legal development that increasingly places the human at the centre” (p. 10). Thus, data ethics can be viewed as a proactive agenda concerned with shifting societal power relations and with balancing the powers embedded in the Big Data Society. This shift is evidenced in legal developments (such as the GDPR negotiation process) and in new citizen privacy concerns and practices, such as the rise in the use of ad blockers and privacy-enhancing services. In particular, new types of businesses are emerging that go beyond mere compliance with data protection legislation by incorporating data ethical values in their collection and processing of data, as well as in their general innovation practices, technology development, branding and business policies.

Here, I use the notion of “Big Data Society” to reflectively position data ethics in the context of a recent data (re)evolution of the “Information Society”, enabled by computer technologies and dictated by a transformation of all things (and people) into data formats (“datafication”) in order to “quantify the world” (Mayer-Schönberger & Cukier, 2013, p. 79), to organise society and to predict risks. I suggest that this is not an arbitrary evolution; it can also be viewed as an expression of negotiations between different ontological views on the status of the human being and the role of science and technology. As the realisation of a prevailing ideology of modernist scientific practices to command nature and living things, the critical infrastructures of the Big Data Society may very well be described as modernity embodied in a “lived reality” (Edwards, 2002, p. 191) of control and order. From this viewpoint, a data ethics of power can be described as a type of post-modernist, or in essence vitalist, call for a specific kind of “ethical action” (Frohmann, 2007, p. 63) to free the living/human being from the constraints of the practices of control embedded in the technological infrastructures of modernity, which at the same time reduce the value of the human being. It is valuable here to understand current calls for data ethical action as an extension of the philosopher Henri Bergson’s vitalist arguments, at the turn of the last century, against the scientific rational intellect that provides no room for, or special status to, the living (Bergson, 1988, 1998). In a similar ethical framework, Gilles Deleuze, who was also greatly inspired by Bergson (Deleuze, 1988), later described over-coded “Societies of Control” (Deleuze, 1992), which reduce people (“dividuals”) to a code marking their access and locking their bodies in specific positions (p. 5). More recently, Spiekermann et al. (2017), in their Anti-Transhumanist Manifesto, directly oppose a vision of the human as merely an information object, no different from other information objects (that is, non-human informational things), which they describe as “an expression of the desire to control through calculation. Their approach is limited to reducing the world to data-based patterns suited for mechanical manipulation” (p. 2).

However, a data ethics of power should also be viewed as a direct response to the power dynamics embedded in, and distributed via, our very present and immediate experiences of a “Liquid Surveillance Society” (Lyon, 2010). Surveillance studies scholar David Lyon (2014) envisions an “ethics of Big Data practices” (p. 10) to renegotiate what is increasingly exposed as an unequal distribution of power in the technological big data infrastructures. Within this framework, we not only pay conventional attention to the state as the primary power actor (of surveillance), but also include new stakeholders that gain power through the accumulation of, and access to, big data. For example, in the analytical framework of a data ethics of power, changing power dynamics are progressively more addressed in light of the information asymmetry between individuals and the big data companies that collect and process data in digital networks (Pasquale, 2015; Powles, 2015–2018; Zuboff, 9 September 2014, 5 March 2016, 2019).

Beyond this fundamental theoretical framing, a data ethics of power can be explored in an interdisciplinary field addressing the distribution of power in the Big Data Society in diverse ways.

For instance, in a computer ethics perspective, power distributions are approached as ethical dilemmas or as implications of the very design and practical application of computer technologies. Indeed, technologies are never neutral; they embody moral values and norms (Flanagan, Howe, & Nissenbaum, 2008), and hence power relations can be identified by analysing how technologies are designed in ethical or ethically problematic ways. Information science scholars Batya Friedman and Helen Nissenbaum (1996) have illustrated different types of bias embedded in existing computer systems used for tasks such as flight reservations and the assignment of medical graduates to their first jobs, and have presented a framework for addressing such issues in the design of computer systems. From this perspective, we can also describe data ethics as what the philosophy and technology scholar Philip Brey terms a “Disclosive Computer Ethics”, identifying moral issues such as “privacy, democracy, distributive justice, and autonomy” (Brey, 2000, p. 12) in opaque information technologies. Phrased differently, a data ethics of power presupposes that technology has “politics”, or embedded “arrangements of power and authority” (Winner, 1980, p. 123). Case studies of specific data processing software and its use can be defined as data ethics case studies of power, notably the “Machine Bias” study (Angwin et al., 2016), which exposed discrimination embedded in data processing software used in the United States criminal justice system, and Cathy O’Neil’s (2016) analysis of the social implications of the math behind big data decision-making in everything from getting insurance and credit to getting and holding a job.

Nevertheless, data systems are increasingly ingrained in society in multiple forms (from apps to robotics) and have limitless and wide-ranging ethical implications (from price differentiation to social scoring), necessitating that we look beyond design and computer technology as such. Data ethics as a recent designation represents what the philosophers Luciano Floridi and Mariarosaria Taddeo (2016, p. 3) describe as a primarily semantic shift within the computer and information ethics philosophical tradition, from a concern with the ethical implications of the “hardware” to a concern with data and data science practices. However, when we look beyond applied ethics in the field of philosophy to a data ethics of power, our theorisation of the Big Data Society becomes more than just semantic. The conceptualisation of a data ethics of power can also be explored in a legal framework, as an aspect of the rule of law and the protection of citizens’ rights in an evolving Big Data Society. Here, redefining the concept of privacy (Cohen, 2013; Solove, 2008) in a legal studies framework addresses the ethical implications of new data practices and configurations that challenge existing laws, and thereby the balancing of powers in a democratic society. As legal scholars Neil M. Richards and Jonathan King (2014) argue: “Existing privacy protections focused on managing personally identifying information are not enough when secondary uses of big data sets can reverse engineer past, present, and even future breaches of privacy, confidentiality, and identity” (p. 393). Importantly, these authors define big data “socially, rather than technically, in terms of the broader societal impact they will have” (Richards & King, 2014, p. 394), providing a more inclusive analysis of a “big data ethics” (p. 393) and thus pointing to the ethical implications of the empowerment of institutions that possess big data capabilities at the expense of “individual identity” (p. 395).

In the policy, business and technology field, the ethical implications of the power of data and data technologies are framed as an issue of growing data asymmetry between big data institutions and citizens that is rooted in the very design of data technologies. For example, the conceptual framework of the “Personal Data Store Movement” (Hasselbalch & Tranberg, 27 September 2016) is described by the non-profit association MyData Global Movement as one in which “[i]ndividuals are empowered actors, not passive targets, in the management of their personal lives both online and offline – they have the right and practical means to manage their data and privacy” (Poikola, Kuikkaniemi, & Honko, 2018). In this evolving business and technology field, the emphasis is on moving beyond mere legal data protection compliance by implementing values and ethical principles such as transparency, accountability and privacy by design (Hasselbalch & Tranberg, 2016), and ethical implications are mitigated by values-based approaches to the design of technology. One example is the IEEE P7000 series of ethics and AI standards, which seeks to develop ethics-by-design standards and guiding principles for the development of artificial intelligence (AI). A values-based design approach is also revisited in recent policy documents, such as section 5.2, “Embedded values in technology – ethical-by-design”, of the European Parliament’s “Resolution on Artificial Intelligence and Robotics” adopted in February 2019.

A key framework for data ethics is the human-centric approach that we increasingly see included within ethics guidelines and policy documents. For example, the European Parliament’s (2019, V.) resolution states that “whereas AI and robotics should be developed and deployed in a human-centred approach with the aim of supporting humans at work and at home…”. The EC High Level Expert Group on Artificial Intelligence’s draft ethics guidelines also stress how the human-centric approach to AI is one that “strives to ensure that human values are always the primary consideration” (working document, 18 December 2018, p. iv), and directly associate it with the balance of power in democratic societies: “political power is human centric and bounded. AI systems must not interfere with democratic processes” (p. 7). The human-centric approach in European policy-making is framed in a European fundamental rights framework (as, for example, extensively described in the European Commission’s AI High Level Expert Group’s draft ethics guidelines) and/or with an emphasis on the human being’s interests prevailing over “the sole interests of society or science” (article 2, “Oviedo Convention”). Practical examples of the human-centric approach can also be found in technology and business developments that aim to preserve the specific qualities of humans in the development of information processing technologies. Examples include the Human in the Loop (HITL) approach to the design of AI, the International Organization for Standardization (ISO) standards on human-centred design (HCD), and the Personal Data Store Movement, which is defined as “A Nordic Model for human-centered personal data management and processing” (Poikola et al., 2018).

Section 2: European data ethics policy initiatives in context

Policy debates that specifically address ethics in the context of technological developments have been ongoing in Europe since the 1990s. These debates have increasingly sought to harmonise national laws and approaches in order to preserve a European value framework in the context of rapid technological progress. For instance, the Council of Europe’s “Oviedo Convention” was motivated by what de Wachter (1997, p. 14) describes as “[t]he feeling that the traditional values of Europe were threatened by rapid and revolutionary developments in biology and medicine”. Data ethics per se gained momentum in pan-European politics in the final years of the negotiation of the GDPR, through the establishment of a number of initiatives referring directly to data and/or digital ethics. Thus, the European Data Protection Supervisor’s (EDPS) Digital Ethics Advisory Group (2018, p. 5) describes its work as being carried out against “a growing interest in ethical issues, both in the public and in the private spheres and the imminent entry into force of the General Data Protection Regulation (GDPR) in May 2018”.

Examining the differences in scope and in the stakeholders involved in, respectively, the development of the 1995 Data Protection Directive and the negotiation of the GDPR, which began with the European Commission’s proposal in 2012, provides some insight into the evolution of the data ethics focus. The 1995 Directive was developed by a European working party of privacy experts and national data protection commissioners in a process that excluded business stakeholders (Heisenberg, 2005). By contrast, the group of actors influencing and participating in the GDPR process progressively expanded, with new stakeholders comprising consumer and civil liberty organisations as well as American industry representatives and policymakers. The GDPR was generally described as one of the most lobbied EU regulations (Warman, 8 February 2012). At the same time, the public increasingly scrutinised the ethical implications of a big data era, with numerous news stories published on data leaks and hacks, algorithmic discrimination and data-based voter manipulation.

Several specific provisions of the GDPR were discussed inside and outside the walls of European institutions. For example, the “right to erasure” proposed in 2012 was heavily debated by industry and civil society organisations, especially in Europe and the USA, and was frequently described in the media as a value choice between privacy and freedom of expression. In 2013, the transfer of data to third countries (including those covered by the EU-US Safe Harbour agreement) engendered a wider public debate between certain EU parliamentarians and US politicians regarding mass surveillance and the role of large US technology companies. Another example was the discussion of an age limit of 16 for young people’s consent to data processing. This called civil society advocates into action (Carr, 13 December 2015) and led to new alliances with US technology companies regarding young people’s right to “educational and social opportunities” (Richardson, 10 December 2015). A last-minute decision made it possible for member states to lower the age limit to 13.

These intertwined debates and negotiations illustrate how the data protection field was transformed within a global information technology infrastructure. It took shape as a negotiation of competing interests and values between economic entities, EU institutions, civil society organisations, businesses and third-country national interests. We can also see a causal link between these spaces of negotiation of rights, values and responsibilities, with their creation of new alliances, and the emergence of data ethics policy initiatives in European policy-making. In the years following the first communication of the reform, data protection debates were extended, with the concept of data ethics increasingly included in meeting agendas, public policy debates, reports and guidelines. Following the adoption of the GDPR, the list of European member states and institutions with established data or digital ethics initiatives and objectives grew rapidly. Examples included the UK government’s announcement of a £9 million Centre for Data Ethics and Innovation with the stated aim to “advise government and regulators on the implications of new data-driven technologies, including AI” (Digital Charter, 2018). The Danish government appointed a data ethics expert committee in March 2018 with a direct economic incentive: to create data ethics recommendations for Danish industry and to turn responsible data sharing into a competitive advantage for the country (Danish Business Authority, 12 March 2018). Several member states’ existing and newly established expert and advisory groups and committees began to include ethics objectives in their work. For example, the Italian government established an AI Task Force in April 2017, which published its first white paper in 2018 (AI Task Force/Italy, 2018) with an explicit section on ethics. The European Commission’s communication on an AI strategy, published in April 2018, also included the establishment of an AI High Level Expert Group, whose responsibilities included publishing ethics guidelines for AI in Europe the following year.

Section 3: Data ethics - policy vacuums

“I’m pretty convinced that the ethical dimension of data protection and privacy protection is going to become a lot more important in the years to come” (in ‘t Veld, 2017). These words of a European parliamentarian in a public debate in 2017 referred to the evolution of policy debates regarding data protection and privacy. You can discuss legal data protection provisions, she claimed, but then there is “a kind of narrow grey area where you have to make an ethical consideration and you say what is more important” (in ‘t Veld, 2017). What did she mean by her use of the term “ethics” in this context?

In an essay entitled “What is computer ethics?” (1985), the moral philosophy scholar James H. Moor described the branch of applied ethics that studies the ethical implications of computer technologies. Writing only a few years after Acorn, the first IBM personal computer, was introduced to the mass market, Moor was interested in computer technologies per se (what is special about computers), as well as in the policies required in specific situations where computers alter the state of affairs and create something new. But he also predicted a more general societal revolution (Moor, 1985, p. 268) due to the introduction of computers, one that would “leave us with policy and conceptual vacuums” (p. 272). Policy vacuums, he argued, would present core problems and challenges, revealing “conceptual muddles” (p. 266), uncertainties and the emergence of new values and alternative policies (p. 267).

If we view data ethics policy initiatives according to Moor’s framework, they can be described as moments of sense-making and negotiation created in response to the policy vacuums that arise when situations and settings are amended by computerised systems. In an interview conducted at the Internet Governance Forum (IGF) in 2017, a Dutch parliamentarian described how, in 2013, policy-makers in her country rushed to tackle the transformations instigated by digital technologies that were going “very wrong” (Interview, IGF 2017). In response, she proposed the establishment of a national commission to consider the ethical challenges of the digital society: “it’s very hard to get the debate out of the trenches, you know, so that people stop saying, ‘well this is my position and this is my position’, but to just sit back and look at what is happening at the moment, which is going to be so huge, so incredible, we have no idea what is going to happen with our society and we need people to think about what to do about all of this, not in the sense you know, ‘I don’t want it’, but more in the sense, ‘are there boundaries?’ ‘Do we have to set limits to all of these possibilities that will occur in the coming years?’” Similarly, in another interview conducted at the same event, a representative of a European country involved in the information policy of the Committee of Ministers of the Council of Europe discussed how the results of the evolution of the Information Society included “violations”, “abuses” and a recognition of the internet’s influence on the economy. She concluded: “We need to slow down a little bit and to think about where we are going”.

In reviewing descriptions of data ethics initiatives, we can note an implicit acknowledgement of the limits of data protection law in capturing all of the ethical implications of a rapidly evolving information and data infrastructure. Data ethics thus becomes a means to make sense of emerging problems and challenges and to evaluate various policies and solutions. For example, a 2015 EDPS report states: “In today’s digital environment, adherence to the law is not enough; we have to consider the ethical dimension of data processing” (p. 4). It continues by describing how different EU law principles (such as data minimisation and the concepts of sensitive personal data and consent) are challenged by big data business models and methods.

The policy vacuums described in such reports and statements highlight the uncertainties and questions that exist regarding the governance of a socio-technical information infrastructure that increasingly shapes not only personal, but also social, cultural and economic activities.

In the same year that Moor’s essay was published, communications scholar Joshua Meyrowitz’s No Sense of Place (1985) portrayed the emergence of “information systems” that modify our physical settings via new types of access to information, thereby restructuring our social relations by transforming situations. As Meyrowitz (1985, p. 37) argued, “[w]e need to look at the larger, more inclusive notion of ‘patterns of information’”, illustrating how our information realities have real qualities that shape our social and physical realities. Accordingly, European policymakers emphasise the real qualities of information and data. They see digital data processes as meaningful components of social power dynamics. Information society policy-making thus becomes an issue of the distribution of resources and of social and economic power, as the EU Commissioner for Competition stated at a DataEthics.eu event on data as power in Copenhagen in 2016: “I’m very glad to have the chance to talk with you about how we can deal with the power that data can give” (Vestager, 9 September 2016). Thus, data ethics policy debates have moved beyond the negotiation of a legal data protection framework, increasingly involving a general focus on information society policy-making, in which different sectoral policy areas are intertwined. As the Commissioner elaborated at the same event: “So competition is important. It keeps the pressure on companies to give people what they want. And that includes security and privacy. But we can’t expect competition enforcement to solve all our privacy problems. Our first line of defence will always be rules that are designed specifically to guarantee our privacy”.

Section 4: Data ethics - culture and values

According to Moor, the policy vacuums that emerge when existing policies clash with technological evolution force us to “discover and make explicit what our value preferences are” (1985, p. 267). He proposes that the computer-induced societal revolution will occur in two stages, marked by the questions that we ask. In the first, the “Introduction Stage”, we ask functional questions: how well does a given technology function for its purpose? In the second, the “Permeation Stage”, when institutions and activities are transformed, Moor argues that we will begin to ask questions regarding the nature and value of things (p. 271). Such second-stage questions are echoed in the European policy debate of 2017, as one Member of the European Parliament (MEP) who was heavily involved in the GDPR negotiation process argued in a public debate: “[this is] not any more a technical issue, it’s a real life long important learning experience” (Albrecht, 2017), or as another MEP claimed in the same debate: “The GDPR is not only a legislative piece, it’s like a textbook, which is teaching us how to understand ourselves in this data world and how to understand what are the responsibilities of others and what are the rules which is governing in this world” (Lauristin, 2017).

Consequently, the technicalities of new data protection legislation are transformed into a general discussion about the norms and values of a big data age. Philip Brey describes values as “idealized qualities or conditions in the world that people find good”, ideals that we can work towards realising (2010, p. 46). However, values are not just personal ideals; they are also culturally situated. The cultural theorist Raymond Williams (1958, p. 6) famously defined culture as a “shape”, a set of purposes and common meanings expressed “in institutions, and in arts and learning”, which emerge in a social space of “active debate and amendment under the pressures of experience, contact and discovery”. Culture is thus traditional as well as creative, consisting of prescribed dominant meanings and their negotiation (Williams, 1958). Similarly, the anthropologist James Clifford (1997) replaced the metaphor of “roots” (an image of the original, authentic and fixed cultural entity) with that of “routes”: intervals of negotiation and translation between the fixed cultural points of accepted meaning. Values are advanced in groups with shared interests and culture, but they exist in spaces of constant negotiation. In an interview conducted at the IGF 2017, a policy advisor to an MEP, asked about the role of values in the GDPR negotiations, described privacy as a value shared by a group of individuals involved in the reform process: “I think a group of core players shared that value (…) all the way from people who wrote the proposal at the Commission, to the Commissioner in charge to the rapporteur from the EU Parliament, they all (…) to some extent shared this value, and I think that they managed to create a compromise closer to their value than to others”. He also explained how discussions about values emerge in processes of negotiation between diverse and at times contradictory interests: “the moment you see a conflict of interest, that is when you start looking at the values (…) normally it would be a discussion about different values (…) an assessment of how much one value should go before another value (…) so some people might say that freedom of information might be a bigger value or the right to privacy might be a bigger value”.

Accordingly, ethics in practice, or what Brey refers to as “the act of valuing something, or finding it valuable (…) to find it good in some way” (2010, p. 46), is in essence never merely a subjective practice, but neither is it a purely objective construct. If we investigate the meaning of data ethics and ethical action in European data protection policy-making, we can see the points of negotiation. That is, if we look at what happens in the “intervals” between established value systems and their renegotiation in new contexts, we discover clashes of values and negotiation, as well as the contours of cultural positioning.

Section 5: Data ethics - power and positioning

Philosophy and media studies scholar Charles Ess (2014) has illustrated how culture plays a central role in shaping our ethical thinking about digital technologies. For instance, he argues that people in Western societies place ethical emphasis on “the individual as the primary agent of ethical reflection and action, especially as reinforced by Western notions of individual rights” (p. 196). Such cultural positioning in a global landscape can also be identified in the European data ethics policy debate. An example is the way in which one participant in the 2017 MEP debate discussed above described the GDPR with reference to the direct lived experiences of specific European historical events: “It is all about human dignity and privacy. It is all about the conception of personality which is really embedded in our culture, the European culture (...) it came from the general declaration of human rights. But there is a very very tragic history behind war, fascism, communism and totalitarian societies and that is a lesson we have learned in order to understand why privacy is important” (Lauristin, 2017).

Values such as human dignity and privacy are formally recognised in frameworks of European fundamental rights and data protection law, and, conscious of their institutionalised roots in the European legal framework, European decision-makers will reference them when asked about the values of “their” data ethics. Awareness of data ethics thus becomes a cultural endeavour, transferring European cultural values into technological development. As stated in an EDPS report from 2015: “The EU in particular now has a ‘critical window’ before mass adoption of these technologies to build the values into digital structures which will define our society” (p. 13).

When exploring European data ethics policy initiatives as spaces of value negotiation, a specific cultural arrangement emerges. In this context, policy and decision-makers position themselves against a perceived threat to a specifically European set of values and ethics, a threat that is pervasive, opaque and embedded in technology. In particular, a concern with a new opponent to state power emerges. In an interview conducted in 2018 at an institution in Europe, a project officer reflected on her previous work in a European country’s parliament and government, where concerns with the alternative form of power that the internet represents had surfaced. The internet is the place where discussions are held and decisions are made, she said, before recalling the policy debates concerning “GAFA” (the acronym for the four giant technology companies Google, Apple, Facebook and Amazon). Such a clash in values has been directly addressed by European policymakers in public speeches and debates, increasingly naming the technology industry stakeholders they deem responsible. The embedded values of technology innovation are a “wrecking ball”, aiming not simply to “play with the way society is organised but instead to demolish the existing order and build something new in its place”, argued the then President of the European Parliament in a speech in 2016 (Schulz, 2016). Values and ethics are hence directly connected with a type of cultural power that is built into technological systems. As the Director for Fundamental Rights and Union Citizenship at the European Commission’s DG Justice claimed in a 2017 public debate: “the challenge of ethics is not in the first place with the individual, the data subject; the challenge is with the controllers, which have power, they have power over people, they have power over data, and what are their ethics? What are the ethics they instil in their staff? In house compliance ethics? Ethics of engineers?” (Nemitz, 2017).

Section 6: Data ethics - spaces of negotiation

When dealing with the development of technical systems, we are inclined towards points of closure and stabilisation (Bijker et al., 1987) that will guide the governance, control and risk mitigation of the systems. Relatedly, we can understand data ethics policy initiatives as end results with the objective “to formulate and support morally good solutions (e.g., right conducts or right values)” (Floridi & Taddeo, 2016, p. 1), emphasising algorithms (or technologies) that may not be “ethically neutral” (Mittelstadt et al., 2016, p. 4). That is to say, we can understand them as solutions to the ethical problems raised within the very design of technologies, the data processing activities of the algorithms, or the collection and dissemination of data. However, I would like to address data ethics policy initiatives in their contexts of interest and value negotiation. For instance, where does morality begin and end in a socio-technical infrastructure that extends across jurisdictions and continents, cultural value systems and societal sectors?

The technical does indeed, in its very design, represent forms of order, as the political theorist Langdon Winner reminded us (1980, p. 123). That is, it is “political” and thus has ethical implications when it creates by design “wonderful breakthroughs by some social interests and crushing setbacks by others” (Winner, 1980, p. 125). To provide an example, the Facebook APIs that facilitated the mass collection of the user data later reused and processed by Cambridge Analytica were specifically designed to track users and share data en masse with third parties, hence directly enabling the mass collection, storage and processing of data. However, these design issues of the technical are also “inextricably bound up into an organic whole” with economic, social, political and cultural problems (Callon, 1987, p. 84). An analysis of data ethics as it is evolving in the European policy sphere demonstrates the complexity of governance challenges arising from the infrastructure of the information age being “shaped by multiple agents with competing interests and capacities, engaged in an indefinite set of distributed interactions over extended periods of time” (Harvey et al., 2017, p. 26). Governance in this era is, as highlighted by internet governance scholars Jeanette Hofmann et al., a “heterogeneous process of ordering without a clear beginning or endpoint” (2016, p. 1412). It consists of actors engaged in “fields of struggle” (Pohle et al., 2016) of meaning-making and competing interpretations of policy issues that are “continuously produced, negotiated and reshaped by the interaction between the field and its actors” (p. 4). I propose that we also explore, as essential components of our data ethics endeavours, the complex dynamics of the ways in which powers are distributed and how interests are met in spaces of negotiation.

Evidently, we must also recognise data ethics policy initiatives as components of a general infrastructural development’s rhythm, rather than as self-contained ethical solutions and isolated events. We can understand them as the kind of negotiation posts that repeatedly occur throughout the course of a technological system’s development (Bijker et al., 1987), and as segments of a process of standardisation and consensus-building within a complex general technological evolution of our societies that “contain messy, complex, problem-solving components” (Hughes, 1987, p. 51). The technological systems of modernity are like the architecture of mundane buildings. They reside, as Edwards (2002, p. 185) claims, in a “naturalised background, ordinary as trees, daylight, and dirt”. Silently they represent, constitute and are constituted by both our material and imagined modern societies and the distribution of power within them. They remain unnoticed until they fail (Edwards, 2002). But when they do fail, we see them in all their complexity. An example is the US intelligence officers’ PowerPoint presentations detailing the “PRISM” programme, leaked by Edward Snowden in 2013 (The Guardian, 2013), which provide a detailed map of an information and data infrastructure characterised by intricate interconnections between a state organisation of mass surveillance, laws, jurisdictions and budgets, and the technical design of the world wide web and social media platforms. The technological infrastructures are indeed like communal buildings, with doors that we never give a second thought until the day we find one of them locked.

Conclusion

October 2018: “These are just tools!” one person exclaimed. We were at a working group meeting where an issue with using Google Docs for the practical work of the group was raised and discussed at length. While some argued for an official position on the use of the online service, mainly with reference to what they described as Google’s insufficient compliance with European data protection law, others saw the discussion as a waste of time. Why spend valuable work time on this issue?

What is data ethics? Currently, the reply is shrill, formally framed in countless statements, documents and mission statements from a multitude of sources, including governments, intergovernmental organisations, consultancy firms, companies, non-governmental organisations, independent experts and academics. But it also emerges when least expected, in “non-allocated” moments of discussion. Information technologies, which today permeate every aspect of our lives, from micro-level work settings to macro-level economics and politics, are increasingly discussed as “ethical problems” (Introna, 2005, p. 76) that must be solved. Their pervasiveness sparks moments of ethical thinking, negotiated in terms of moral principles, values and ideal conditions (Brey, 2010). In allocated or unallocated spaces of negotiation, moments of pause and sense-making (Moor, 1985), we discuss the values (Flanagan et al., 2008) and politics (Winner, 1980) of the business practices, cultures and legal jurisdictions that shape them. These spaces of negotiation encompass very concrete discussions regarding specific information technology tools, but increasingly they also evolve into reflections concerning general challenges to established legal frameworks, individuals’ agency and human rights, as well as questions regarding the general evolution of society. As one Danish minister said at the launch of a national data ethics expert group: “This is about what society we want” (Holst, 11 March 2018).

In this article, I have explored data ethics in the context of a European data protection legal reform. In particular, I have sought to answer the question: “What is data ethics?” with the assumption that the answer will shape how we perceive the role and function of data ethics policy initiatives. Based on a review of policy documents, reports and press material, alongside analysis of the ways in which policymakers and civil servants make sense of data ethics, I propose that we recognise these initiatives as open-ended spaces of negotiation and cultural positioning.

This approach to ethics might be criticised as futile in the context of policy and action. However, I propose that understanding data ethics policy initiatives as spaces of negotiation does not prevent action. Rather, it forces us to make apparent our point of departure: the social and cultural values and interests that shape our ethical action. We can thus create the potential for a more transparent negotiation of ethical action in the “Big Data Era”, enabling us to acknowledge the macro-level data ethics spaces of negotiation that are currently emerging not only in Europe but globally.

This article’s analytical investigation of European data ethics policy initiatives as spaces of cultural value negotiations has revealed a set of actionable thematic areas. It has illustrated a clash of values and an emerging concern with the alternative forms of power and control embedded in our technological environment, which exert pressure on people, and on individuals in particular. Here, a data ethics of power that takes its point of departure in Gilles Deleuze’s description of computerised Societies of Control (1992) enables us to think about the ethical action that is necessary today. Ethical action could, for example, concern the empowerment of individuals to challenge the laws and norms of opaque algorithmic computer networks, as we have noted in debates on the right to explanation and the accountability and interpretability of algorithms. Ethical action may also strive towards ideals of freedom in order to break away from coding, to become indiscernible to the “Weapons of Math Destruction” (O’Neil, 2016) that increasingly define, shape and limit us as individuals, as seen for instance in the digital self-defence movement (Heuer & Tranberg, 2013). Data ethics missions such as these are rooted in deeply personal experiences of living in coded networks, but they are also based on growing social and political movements and sentiments (Hasselbalch & Tranberg, 2016).

Much remains to be explored and developed regarding the power dynamics embedded in the evolving data ethics debate, not only in policy-making, but also in business, technology and public discourse in general. This article seeks to open up a more inclusive and holistic discussion of data ethics in order to advance investigation and understanding of the ways in which values are negotiated, rights and authority are distributed, and conflicts are resolved.

Acknowledgements

Clara; Francesco Lapenta for the many informed discussions regarding the sociology of data ethics; Jens-Erik Mai for insightful comments on the drafts of this article; The team at DataEthics.eu for inspiration.

References

Albrecht, J. P. (2017, January 26). MEP debate: The regulation is here! What now? [Video file]. Retrieved from https://www.youtube.com/watch?v=28EtlacwsdE

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bergson, H. (1988). Matter and Memory (N. M. Paul & W. S. Palmer, Trans.) New York: Zone Books.

Bergson, H. (1998). Creative Evolution (A. Mitchell, Trans.). Mineola, NY: Dover Publications.

Bijker, W. E., Hughes, T. P., & Pinch, T. (1987). General introduction. In W. E. Bijker, T. P. Hughes, & T. Pinch. (Eds.), The Social Construction of Technological Systems (pp. 1-7). Cambridge, MA: MIT Press.

Brey, P. (2000). Disclosive computer ethics. Computer and Society, 30(4), 10-16. doi:10.1145/572260.572264

Brey, P. (2010). Values in technology and disclosive ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 41-58). Cambridge: Cambridge University Press.

Bourdieu, P. (1997). Outline of a Theory of Practice. Cambridge: Cambridge University Press.

Bourdieu, P., & Wacquant, L. (1992). An Invitation to Reflexive Sociology. Cambridge: Polity Press.

Callon, M. (1987). Society in the making: the study of technology as a tool for sociological analysis. In W. E. Bijker, T. P. Hughes, & T. Pinch (Eds.), The Social Construction of Technological Systems (pp. 83-103). Cambridge, MA: MIT Press.

Carr, J. (2015, December 13). Should I laugh, cry or emigrate? [Blog post]. Retrieved from Desiderata https://johnc1912.wordpress.com/2015/12/13/should-i-laugh-cry-or-emigrate/

Clifford, J. (1997). Routes: Travel and Translation in the Late Twentieth Century. Cambridge: Harvard University Press.

Cohen, J. E. (2013). What privacy is for. Harvard Law Review, 126(7). Retrieved from https://harvardlawreview.org/2013/05/what-privacy-is-for/

Danish Business Authority. (2018, March 12). The Danish government appoints new expert group on data ethics [Press release]. Retrieved from https://eng.em.dk/news/2018/marts/the-danish-government-appoints-new-expert-group-on-data-ethics

Deleuze, G. (1988). Bergsonism (H. Tomlinson, Trans.). New York: Zone Books.

Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7. Retrieved from http://www.jstor.org/stable/778828

Edwards, P. (2002). Infrastructure and modernity: scales of force, time, and social organization in the history of sociotechnical systems. In Misa, T. J., Brey, P., & A. Feenberg (Eds.), Modernity and Technology (pp. 185-225). Cambridge, MA: MIT Press.

Ess, C. M. (2014). Digital Media Ethics. Cambridge, UK: Polity Press.

Flanagan, M., Howe, D. C., & Nissenbaum, H. (2008). Embodying values in technology – theory and practice. In J. van den Hoven, & J. Weckert (Eds.), Information Technology and Moral Philosophy (pp. 322-353). Cambridge, UK: Cambridge University Press.

Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083). doi:10.1098/rsta.2016.0360

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330-347. doi:10.1145/230538.230561

Frohmann, B. (2007). Foucault, Deleuze, and the ethics of digital networks. In R. Capurro, J. Frühbauer, & T. Hausmanninger (Eds.), Localizing the Internet. Ethical Aspects in Intercultural Perspective (pp. 57-68). Munich: Fink.

Goffman, E. (1974). Frame Analysis: An Essay on the Organization of Experience. Boston, MA: Northeastern University Press.

Harvey, P., Jensen, C. B., & Morita, A. (2017). Introduction: infrastructural complications. In P. Harvey, C. B. Jensen, & A. Morita (Eds.), Infrastructures and Social Complexity: A Companion (pp. 1-22). London: Routledge.

Hasselbalch, G., & Tranberg, P. (2016, December 1). The free space for data monopolies in Europe is shrinking [Blog post]. Retrieved from Opendemocracy.net https://www.opendemocracy.net/gry-hasselbalch-pernille-tranberg/free-space-for-data-monopolies-in-europe-is-shrinking

Hasselbalch, G., & Tranberg, P. (2016, September 27). Personal data stores want to give individuals power over their data [Blog post]. Retrieved from DataEthics.eu https://dataethics.eu/personal-data-stores-will-give-individual-power-their-data/

Hasselbalch, G., & Tranberg, P. (2016). Data Ethics. The New Competitive Advantage. Copenhagen: Publishare.

Heisenberg, D. (2005). Negotiating Privacy: The European Union, The United States and Personal Data Protection. Boulder, CO: Lynne Rienner Publishers.

Heuer, S., & Tranberg, P. (2013). Fake It! Your Guide to Digital Self-Defense. Copenhagen: Berlingske Media Forlag.

Hill, R. (2017, November 24). Another toothless wonder? Why the UK.gov’s data ethics centre needs clout. The Register. Retrieved from https://www.theregister.co.uk/2017/11/24/another_toothless_wonder_why_the_ukgovs_data_ethics_centre_needs_some_clout/

Hofmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: finding the governance in Internet governance. New Media & Society, 19(9), 1406-1423. doi:10.1177/1461444816639975

Holst, H. K. (2018, March 11). Regeringen vil lovgive om dataetik: det handler om, hvilket samfund vi ønsker [The government will legislate on data ethics: it is about what society we want]. Berlingske. Retrieved from https://www.berlingske.dk/politik/regeringen-vil-lovgive-om-dataetik-det-handler-om-hvilket-samfund-vi-oensker

Hughes, T. P. (1987). The evolution of large technological systems. In W. E. Bijker, T. P. Hughes, & T. Pinch (Eds.), The Social Construction of Technological Systems (pp. 51-82). Cambridge, MA: MIT Press.

Ingold, T. (2000). The Perception of the Environment: Essays in Livelihood, Dwelling and Skill. London: Routledge.

Introna, L. D. (2005). Disclosive ethics and information technology: disclosing facial recognition systems. Ethics and Information Technology, 7(2), 75-86. doi:10.1007/s10676-005-4583-2

Ingeniøren. (2018, March 16). Start nu med at overholde loven Brian Mikkelsen [Now start complying with the law, Brian Mikkelsen]. Version 2. Retrieved from https://www.version2.dk/artikel/leder-start-nu-med-at-overholde-loven-brian-mikkelsen-1084631

in ’t Veld, S. (2017, January 26). European Privacy Platform [video file]. Retrieved from https://www.youtube.com/watch?v=8_5cdvGMM-U

Lauristin, M. (2017, January 26). MEP debate: The regulation is here! What now? [video file]. Retrieved from https://www.youtube.com/watch?v=28EtlacwsdE

Lyon, D. (2010). Liquid surveillance: the contribution of Zygmunt Bauman to surveillance studies. International Political Sociology, 4(4), 325-338. doi:10.1111/j.1749-5687.2010.00109.x

Lyon, D. (2014). Surveillance, Snowden, and big data: capacities, consequences, critique. Big Data & Society, 1(2). doi:10.1177/2053951714541861

Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work and Think. London: John Murray.

Meyrowitz, J. (1985). No Sense of Place: The Impact of the Electronic Media on Social Behavior. Oxford: Oxford University Press.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data & Society, 3(2), 1-21. doi:10.1177/2053951716679679

Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266-275. doi:10.1111/j.1467-9973.1985.tb00173.x

Nemitz, P. (2017, January 26). European Privacy Platform [video file]. Retrieved from https://www.youtube.com/watch?v=8_5cdvGMM-U

O’Neil, C. (2016). Weapons of Math Destruction. New York: Penguin Books.

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

Poikola, A., Kuikkaniemi, K., & Honko, H. (2018). MyData – A Nordic Model for human-centered personal data management and processing [White paper]. Helsinki: Open Knowledge Finland. Retrieved from https://www.lvm.fi/documents/20181/859937/MyData-nordic-model/2e9b4eb0-68d7-463b-9460-821493449a63?version=1.0

Pohle, J., Hösl, M., & Kniep, R. (2016). Analysing internet policy as a field of struggle. Internet Policy Review, 5(3). doi:10.14763/2016.3.412

Powles, J. (2015–2018). Julia Powles [Profile]. The Guardian. Retrieved from https://www.theguardian.com/profile/julia-powles

Richards, N. M., & King, J. H. (2014). Big data ethics. Wake Forest Law Review, 49, 393-432.

Richardson, J. (2015, December 10). European General Data Protection Regulation draft: the debate. Retrieved from Medium https://medium.com/@janicerichardson/european-general-data-protection-regulation-draft-the-debate-8360e9ef5c1

Schultz, M. (2016, March 3). Technological totalitarianism, politics and democracy [video file]. Retrieved from https://www.youtube.com/watch?v=We5DylG4szM

Solove, D. J. (2008). Understanding Privacy. Cambridge, MA: Harvard University Press.

Spiekermann, S., Hampson, P., Ess, C. M., Hoff, J., Coeckelbergh, M., & Franckis, G. (2017). The Ghost of Transhumanism & the Sentience of Existence. Retrieved from The Privacy Surgeon http://privacysurgeon.org/blog/wp-content/uploads/2017/07/Human-manifesto_26_short-1.pdf

The Guardian. (2013, November 1). NSA Prism Programme Slides. The Guardian. Retrieved from https://www.theguardian.com/world/interactive/2013/nov/01/prism-slides-nsa-document

Vestager, M. (2016, September 9). Making Data Work for Us. Retrieved from European Commission https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/making-data-work-us_en; video available at https://vimeo.com/183481796

de Wachter, M. A. M. (1997). The European Convention on Bioethics. Hastings Center Report, 27(1), 13-23. Retrieved from https://onlinelibrary.wiley.com/doi/full/10.1002/j.1552-146X.1997.tb00015.x

Wagner, B. (2018). Ethics as an escape from regulation: from ethics-washing to ethics-shopping? In M. Hildebrandt (Ed.), Being Profiled: Cogitas Ergo Sum. Amsterdam: Amsterdam University Press. Retrieved from https://www.privacylab.at/wp-content/uploads/2018/07/Ben_Wagner_Ethics-as-an-Escape-from-Regulation_2018_BW9.pdf

Warman, M. (2012, February 8). EU Privacy regulations subject to ‘unprecedented lobbying’. The Telegraph. Retrieved from https://www.telegraph.co.uk/technology/news/9070019/EU-Privacy-regulations-subject-to-unprecedented-lobbying.html

Williams, R. (1993). Culture is ordinary. In A. Gray, & J. McGuigan (Eds.), Studying Culture: An Introductory Reader (pp. 5-14). London: Edward Arnold.

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121-136. Retrieved from https://www.jstor.org/stable/20024652

Wong, S. (2009). Tales from the frontline: the experiences of early childhood practitioners working with an ‘embedded’ research team. Evaluation and Program Planning, 32(2), 99-108. doi:10.1016/j.evalprogplan.2008.10.003

Zuboff, S. (2014, September 9). A digital declaration. Frankfurter Allgemeine. Retrieved from http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshan-zuboff-on-big-data-as-surveillance-capitalism-13152525.html

Zuboff, S. (2016, March 5). The secrets of surveillance capitalism. Frankfurter Allgemeine. Retrieved from http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London; New York: Profile Books; Public Affairs.

Policy documents and reports

AI Task Force & Agency for Digital Italy. (2018). Artificial Intelligence at the service of the citizen [White paper]. Retrieved from https://libro-bianco-ia.readthedocs.io/en/latest/

Council of Europe. (1997). Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine. (The “Oviedo Convention”) Treaty No.164. Retrieved from https://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/164

Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of such Data. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A31995L0046

EC High-Level Expert Group. (2018). Draft Ethics Guidelines for Trustworthy AI. Working document, 18 December 2018 (the final document had not yet been published at the time of writing). Retrieved from https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai

European Commission. (2012, January 25). Proposal for a Regulation of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of such Data (General Data Protection Regulation). Retrieved from http://www.europarl.europa.eu/registre/docs_autres_institutions/commission_europeenne/com/2012/0011/COM_COM(2012)0011_EN.pdf

European Commission. (2018). Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions - Coordinated Plan on Artificial Intelligence (COM(2018) 795 final). Retrieved from https://ec.europa.eu/digital-single-market/en/news/coordinated-plan-artificial-intelligence

European Parliament. (2019, February 12). European Parliament Resolution of 12 February 2019 on a Comprehensive European Industrial Policy on Artificial Intelligence and Robotics (2018/2088(INI)). Retrieved from http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2019-0081+0+DOC+PDF+V0//EN

European Union Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1528874672298&uri=CELEX%3A32016R0679

Gov.uk. (2018, January 25). Digital Charter. Retrieved from https://www.gov.uk/government/publications/digital-charter/digital-charter

European Data Protection Supervisor (EDPS). (2015). Towards a New Digital Ethics: Data, Dignity and Technology. Retrieved from https://edps.europa.eu/sites/edp/files/publication/15-09-11_data_ethics_en.pdf

European Data Protection Supervisor (EDPS). Ethics Advisory Group. (2018). Towards a Digital Ethics. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf

Footnotes

1. By “European” I do not refer only to the European Union (EU), but to a constantly negotiated cultural context; thus, for example, I do not exclude organisations such as the Council of Europe or instruments such as the European Convention on Human Rights.

2. Interviews informing the article (anonymised; all audio-recorded except one, which is based on written notes; four are quoted directly in the article): two policy advisors; four European institution officers; one data protection commissioner; one representative of a European country to the Committee of Ministers of the Council of Europe; and one European parliamentarian.

3. I am vice-chair of the IEEE P7006 standard working group on personal data AI agents.

4. I was one of the 12 appointed members of this committee (2018).

5. I was one of the 52 appointed members of this group (2018-2020).
