What rights matter? Examining the place of social rights in the EU’s artificial intelligence policy debate

References to ‘European values’ are often rooted in some perception of a commitment to particular rights that uphold certain principles about democracy and the relationship between state, market and citizens. Whilst rarely translated into consistent policy frameworks or activities, the formulation of new policy areas, such as artificial intelligence (AI), provides a window into what priorities, interests and concerns currently shape the European project. In this paper, we explore these questions in relation to the recent AI policy debate in the European Union, with a particular focus on the place of social rights as a historically pertinent but neglected aspect of policy debates on technology. By examining submissions to the recent public consultation on the White Paper on AI Strategy, we argue that social rights occupy a marginal position in the EU’s policy debates on emerging technologies, in favour of human rights issues such as individual privacy and non-discrimination that are often translated into design solutions or procedural safeguards, and a commitment to market creation. This is important as systems such as AI play an increasingly important role in questions of redistribution and economic inequality that relate to social rights. As such, the AI policy debate both exposes and advances new normative conflicts over the meaning of rights as a central component of any attachment to ‘European values’.

This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk and Stefania Milan.


Introduction
The entrenchment and establishment of particular rights has from the outset been part of the advancement of the European project and how the European Union (EU) has defined itself. References to 'European values' are often rooted in an understanding of this commitment to rights seen to uphold certain principles about democracy and the relationship between market, state and citizens. Although the notion that Europe is premised on a set of exceptional values is contentious, Foret and Calligaro argue that European values can be understood as those 'values enshrined in the treaties and asserted by European institutions in their discourses' (2018, p. 2). These treaties and institutional discourses do not always translate into consistent policy agendas and geopolitical activity, but they provide a window into what is considered valuable and for whom. This is particularly relevant in new policy areas, such as emerging technologies, where concrete conceptualisations of different fundamental rights are still being formulated. In these circumstances, we are provided with an opportunity to explore policy debates as indications of the priorities and concerns that make up the European integration project as it is shaped by different strategic interests and self-understandings.
In this paper we approach the question of what rights matter in EU policy debates by looking at the discourses of different stakeholders in the policy debate surrounding AI. We do so with a particular focus on the place of social rights as a growing, but historically neglected, aspect of the governance discourse surrounding emerging technologies. Rights-based approaches in the governance of technologies, especially optimisation technologies 1, have tended to prioritise human rights understood in terms of individual privacy, non-discrimination and procedural safeguards pertaining to consent and transparency as significant entry-points for regulation (Gangadharan, 2019). Whilst these are important areas for engaging with technology, ample research demonstrates how impacts on social and economic rights, such as the right to work, social security, healthcare, or education, constitute a crucial component of the societal tensions surrounding developments in AI (Alston, 2019). Yet despite these rights being important for the European project, they have received marginal attention in AI policy and governance debates.

1. We follow Kulynych et al. (2020) in describing a set of different data- and algorithm-driven technologies as 'optimisation technologies' that are 'developed to capture and manipulate behavior and environments for the extraction of value' (p. 1) and operate within an optimisation logic that prioritises technological performance and cost minimisation.
As a way to uncover how social rights are understood in the EU's policy debate on AI, we use the public consultation on the White Paper for AI Strategy as a case study for examining concerns and priorities amongst different stakeholder groups. The engagement with the White Paper for AI Strategy is an important discussion in this regard, as it forms part of a discourse on AI that from the outset positioned policy concerns in relation to the protection of fundamental rights and so-called 'European values'. We start by outlining the historical and theoretical context out of which social rights emerge, situating them in relation to the broader pursuit of the 'European social model' following World War II and the subsequent creation and integration of the EU. We then go on to discuss some of the ways social rights intersect with optimisation technologies, and the role of rights-based approaches in concerns about data justice, particularly in areas such as employment and social welfare. Against this backdrop we outline the emergence of AI policy in the EU as an introduction to our study of the submissions to the public consultation on the White Paper on AI Strategy from key stakeholder groups, including civil society, public authorities and business associations. In analysing the dominant themes of these submissions, we argue that social rights are relatively muted within the AI policy debate despite the profound significance AI policy has for the articulation of resource distribution and economic inequality. Whilst concerns about social rights manifest themselves in discourses pertaining to public services and employment, they do so predominantly in a procedural context that emphasises fair data collection or rights to redress rather than in material or distributive terms. Moreover, as an indication of what actually informs 'European values', social rights are marginalised in favour of geopolitical concerns about the single market, regional competition and technological innovation.

Social rights and the European project
Although they are sometimes perceived as elusive, social rights have a firm role in the broader discussion on the evolution of citizenship, most famously perhaps in T. H. Marshall's three dimensions of civil, political and social citizenship (Marshall, 1950). At the same time, social rights are steeped in ambiguity of both a political and legal nature that relates, in part, to the division of rights into different categories that we see play out particularly at the international level (Ssenyonjo, 2009). Post-World War II international human rights regimes, for example, adopted separate treaties for civil and political rights (such as freedom of religion, the right to assembly or to privacy) and economic, social and cultural rights (including the right to work, health or social security). 2 While both categories of rights are part of international human rights, civil rights have dominated the discourse and practice of human rights, often becoming their synonym and leaving social rights in the position of 'poor stepsister' (Alston, 2005).
One of the key differences between these categories of rights revolves around state-citizen relations. In the case of civil rights, the emphasis is on issues of individual freedom, especially from state interference, whereas the implementation of social rights often requires state intervention, incurring budgetary expenses and limiting private property or economic freedom (Eide, 2001). Furthermore, the legal structure of social rights has been considered ill-defined, making their judicial assessment harder to carry out (Langford, 2009). Social rights also intersect with other categories of rights in some circumstances. For example, in the constitutional practice of some countries, the right to life is used to confirm the protection of access to medical care or medicines. On the other hand, conflicts can arise between rights, especially when the individual freedom vs. state intervention binary comes into play (Toebes, 1999).
While the scope of social rights might be contested, for the purposes of this article we follow Marshall's early understanding of social citizenship to include '(t)he right to share to the full in the social heritage and to live the life of a civilized being according to the standards prevailing in the society' and the 'universal right to real income which is not proportionate to the market value of the claimant' (Marshall, 1950, pp. 11 and 47). In other words, social rights are strongly connected to public services, fair working conditions, equality, and a guarantee of social protection delivered through universal systems and wealth redistribution measures such as minimum income or progressive taxation (Katrougalos, 2007; Moyn, 2018). Legal jurisprudence in the field of international social rights indicates that such rights are structurally complex and consist of various obligations on the state, such as the guarantee of non-discrimination and procedural standards, but most of all ensuring the availability of public services and welfare (e.g. UN CESR, 2008).
In Europe, the emergence of the modern welfare state has been associated with a strong commitment to social rights since the 19th century (Esping-Andersen, 1990). Furthermore, the dual crises of the global recession and the Second World War ushered in a widespread consensus around the need for state institutions to play a permanent role in mitigating the harms of the market economy through social reforms that ensured social protection, access to employment and decent care (Judt, 2007; Dodo, 2015).

2. In this article we use shorter terms for each group of rights: for civil and political rights, civil rights; and for economic, social and cultural rights, social rights.
A commitment to social rights has been a prominent feature of what has characterised the European model, but the advancement of a European integration project was always primarily oriented towards the formation of a common market (Maduro, 1999; Kenner, 2003; Garben, 2020). In this context, social rights were presented as a component of eligibility rules that would allow for migration within the EU and the standardisation of national social security systems, constitutive of a 'market making' rather than a 'market breaking' imperative (Katrougalos, 2007; Maduro, 1998). Far from a traditional welfare model with comprehensive social redistribution mechanisms, the EU's social legislation set up minimal common standards (Demertzis, 2011). Social rights eventually became inscribed in the European Charter of Fundamental Rights under the category described by Menendez (2003) as 'rights to solidarity'. Most recently, the UK's exit from the EU (Brexit) marked the adoption of the so-called European Pillar of Social Rights, a non-binding instrument that proclaims different rights related to equal opportunities, access to the labour market, fair working conditions, social protection and inclusion (Plomien, 2018). We now turn to the relationship of such rights with emerging technologies.

Optimisation technologies and social rights
Whilst there is widespread recognition that the rapid development and deployment of data-centric technologies has significant transformative implications, the question as to what these are and how they should be addressed is still a point of contention. Initial concerns were oriented towards the mass collection of data and tended to focus on issues of surveillance and privacy, prominent in public debate particularly in the immediate aftermath of the Snowden leaks in 2013 (Hintz et al., 2018). These events made clear the limitations of existing legislation and fed into a long-standing discussion about the need for further protection of privacy and personal data and better oversight in the handling and processing of data by both corporate and state actors (Lyon, 2015).

The focus on privacy has been particularly dominant in relation to optimisation technologies, but there has also been a growing emphasis on issues such as harmful profiling, automated sorting, and biases embedded in data and algorithms that lead to forms of discrimination (Gandy, 1992). Both privacy and non-discrimination have become significant organisational concepts for policy debates on optimisation technologies. Yet in assessing the transformative potentials of such technologies, both privacy and non-discrimination policies also have limitations (Mann & Matzner, 2019; Schermer, 2011).
In part, the way these priorities have been operationalised has been critiqued for lending itself to design solutions that seek remedies in efforts such as 'privacy-by-design' or bias mitigation that, although useful, rarely address the contextual nature of technologies or their operative logics (Powles, 2018; Hoffmann, 2019).

One of the most prominent themes in this regard is the growing orientation towards the so-called 'future of work', which has often focused on anxieties about the automation of work, potential mass job losses, wage reductions or global workplace restructuring (Arntz et al., 2016; Frey & Osborne, 2017). These discussions have provided impetus for new policy initiatives focused on redistribution and income guarantees, such as a universal basic income, public services or new wage policies (Standing, 2016; Portes et al., 2017; McGaughey, 2018). At the same time, debates about the impact of emerging technologies on actual job quality and the position of workers are also a growing focus, such as the impact of algorithmic management or increased workplace surveillance (Stefano, 2018; Wood, 2021).
The focus on the precarity at the intersection of optimisation technologies and work has also informed debates on the future of the welfare state more broadly.
This question encompasses not only ways to secure workers' rights or income guarantees, but increasingly focuses on the ways in which data infrastructures are shaping public services, including eligibility checks, risk assessments, and profiling (Dencik & Kaun, 2020; AlgorithmWatch, 2019; Eubanks, 2018). In his report to the General Assembly, the UN Special Rapporteur on extreme poverty and human rights, Philip Alston, describes these developments as the advent of the 'digital welfare state' that is already a reality or is emerging in many countries across the globe. In these states, 'systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish' (Alston, 2019, n.p.). Such systems have frequently been implemented in a context of spending cuts, reductions in services and new behavioural requirements, whilst at the same time being perceived as void of policy implications, which exempts them from much scrutiny or public debate (Alston, 2019).
These different areas of concern point to the relevance of social rights in the context of datafication and the advent of optimisation technologies, even if they are rarely directly addressed. While privacy and data protection across work and welfare have been part of this debate, social rights, as a constructive frame, have seldom been a dominant focus. It remains unclear how they can effectively be translated into policy debates and shape legislative agendas in relation to data infrastructures and emerging technologies. As a way to explore this further, we now turn to the recent policy debate on AI in the EU.

The case of European artificial intelligence policy
Over the last few years, the EU has been actively engaging in a range of policy initiatives focused on the development of AI within Europe, including investments and financial policies, regulation of AI systems, international cooperation and other activities. Importantly, European AI policy should be seen as part of a larger ecosystem of institutional and legal interventions regarding communications and digital technologies that has a long history, dating back to the early 1970s (Mărcuț, 2017). It is not the intention to detail these here, but it is worth noting that the interest in AI started to gain traction in 2017 and 2018 with the adoption of the first communications of the European Commission and resolutions of the European Parliament on AI (see Niklas & Dencik, 2020). These initiatives form part of what Calo (2017) refers to as 'AI policy', a distinctive area of policymaking that addresses different challenges tied to AI and similar technologies, including justice and equity, safety and certification, privacy and power dynamics, taxation or displacement of labour.
Within Europe, we see that AI policy plays out along the lines of what Jasanoff (2009) describes as the dualistic nature of liberal state interventions in technology and innovation informed, on the one hand, by a principle of public funding in research that grants significant autonomy to scientists, whilst on the other hand, recognising a need for regulatory intervention before new products enter the market. This dynamic is evident, for example, in discussions concerning tensions between the need for binding legislation and business-preferred ethical principles and soft guidelines (Wagner, 2019).
Among the documents that make up European AI policy is the White Paper, published in February 2020 as part of the five-year strategy Shaping Europe's Digital Future. White papers initiate debates in a particular area, contain ideas for particular actions (sometimes outlining possible options) and are used for consultations with stakeholders and institutions before legislative proposals are formulated (Overy, 2009). The scope of the White Paper on AI is broad and covers legislative, financial, educational and scientific activities. It is an outline of a broad strategy containing goals and concrete action plans, together with an estimated time for their implementation. It is not the aim to provide a comprehensive review of the White Paper here, but it is worth highlighting a few noteworthy aspects that inform our analysis.
AI is defined through its main components: algorithms and data. The two pillars of the European strategy are the so-called 'ecosystems' of excellence and trust. The ecosystem of excellence includes strategies for funding and economic growth, research support and creating incentives for the adoption of AI systems by the public and private sectors. The ecosystem of trust focuses on risks that AI systems create for fundamental rights, product safety and liability in what is considered a risk-based approach. Such an approach entails an assessment of 'high'- and 'low'-risk applications that should inform interventions and requirements, e.g. obligations to keep records of data, quality requirements for training models and transparency rules for consumers. The White Paper also makes suggestions for voluntary labelling schemes, conformity assessments and new governance structures that involve cooperation between national authorities.
The articulation of rights in the White Paper primarily concerns privacy, personal data protection, consumer rights and non-discrimination. The emphasis on non-discrimination distinguishes the AI policy from many existing policy discourses on rights and technology that have prioritised privacy and personal data, leaving discrimination issues aside (Mann & Matzner, 2019). It is important to note that discrimination in the White Paper is primarily interpreted as a problem of bias, data quality and specific technological architecture. The paper also notes that AI systems can support 'the democratic process and social rights', but there are no further mentions of such rights except rare references to healthcare, public services or employment. For example, the White Paper refers to discrimination 'in access to employment', 'the rejection of an application for social security benefits' or the use of AI systems to 'improve healthcare'.
Whilst the White Paper serves as an illustration of regulatory approaches to AI and a proposed institutional framework for research and innovation in this area, it is also indicative of a wider set of discourses that are part of asserting the meaning of the European project and how the EU seeks to define itself. As Jasanoff (2007, p. 92) notes in relation to the EU's biotechnology policy, policies on technology 'became a site of interpretive politics, in which important elements of European identity were debated along with the goals and strategies of European research'. Similarly, the White Paper on AI Strategy makes frequent references to notions such as 'European values', 'European data' and 'digital sovereignty' that denote a close connection between narrower regulatory and funding initiatives and a broader articulation of the EU's geopolitics and vision for the relationship between European institutions and citizens. This is the case not least in its positioning as an alternative to the 'surveillance capitalism' of the US and the 'technological authoritarianism' of China (European Commission, 2020a). In this sense, the White Paper reveals a certain set of priorities. Yet in order to understand the AI policy debate in broader terms it is important to engage with the different stakeholder interests and concerns that shape this debate. As a way to further explore how social rights feature in the AI policy debate, we therefore now go on to examine stakeholder perspectives with regards to the White Paper.

Methods
In order to examine the place of social rights in the EU's AI policy debate, we conducted a qualitative content analysis of documents submitted to the public consultation on the White Paper on AI Strategy (European Commission, 2020d). The process of public consultations in the European Union invites various social actors, such as non-governmental organisations, trade unions, enterprises and academics, to participate in the policy or regulatory process. These consultations are intended to make policy-making more democratic and sensitive to the voices of civil society, and to increase the legitimacy of new political decisions (Rasmussen & Toshkov, 2013). However, they have also been accused of prioritising the involvement of particular groups of actors and requiring specific expertise, which places limitations on their results (Persson, 2007). They are also bound by particular structures, such as online consultations that often use standardised questionnaires, shaping the extent of problem-definition and inclusivity (Quittkat, 2011). This is a significant aspect to consider in the analysis of any public consultation process and is reflected in some of the conclusions we are able to draw.

We conducted a thematic data analysis, following the six steps recommended by Braun and Clarke (2006) and using qualitative data coding software (NVivo). First, we identified prominent concepts and initial findings. Second, based on this first reading of the collected data and previous research on social rights and optimisation technologies, we developed a list of codes that summarise and capture the crucial aspects of the given concepts. Those codes were assigned to particular sentences or larger segments of text. Initial codes were then defined and grouped in a way that helped identify connections between them.
The codes focused on different aspects of the texts: descriptions of particular phenomena, normative statements about the role of technology in society, or recommendations for new laws or budget policies regarding AI. We ended up with a group of codes that were focused on particular problems and represented four areas of interest: a) social rights and policies (access to public services, work and employment, welfare administration), b) human rights and justice (discrimination, privacy, due process, transparency), c) narratives about AI systems (beneficial, critical) and d) approaches to European AI policy (critiques, recommendations, approval). After analysing the materials from each group of actors participating in the consultations, we prepared a summary for that group. Summaries covered the role of human rights in the documents, political recommendations, issues related to social policies, and the general approach to AI. These summaries and the comparisons between them also allowed us to capture significant differences between specific actors participating in the consultations, e.g. between NGOs and companies. Importantly, drawing on the interpretative policy analysis approach, we understand policy debates as a set of discourses constituting a conglomerate of various narratives, frames and understandings, where policy issues such as rights, regulations or institutions are seen as social constructs (Hajer, 1993). In this sense, we also approach rights as discursive and sociological rather than legal phenomena and are less interested in the legal interpretations and normative content of specific rights. We predominantly want to explore how rights and 'rights talk' build political discourses, set up priorities and indicate decisions about values.

Findings
As a way of outlining how social rights feature in the consultation on the EU's White Paper on AI Strategy, we start by briefly outlining the structure of the online questionnaire in the consultation and the results from our search of keywords relating to fundamental rights and policies in the answers to that questionnaire (Tab. 3).
The questionnaire was divided into three sections, with a total of 16 closed-ended questions, 10 open questions and additional space for comments (European Commission, 2020b). Each participant could also provide additional documents such as policy briefs, reports or more elaborated positions. Section one included questions related to the 'ecosystem of excellence' and covered issues such as support for the development and uptake of AI, research excellence, and financing for start-ups. Section two referred to AI regulation and section three raised questions about safety and liability. As part of the latter two sections, there were a limited number of questions pertaining to human rights, which included potential answers such as 'AI may breach fundamental rights' or 'The use of AI may lead to discriminatory outcomes', and one question referred to workers' rights. In this sense, the questionnaire provided limited scope for human rights concerns to be raised and made no overt reference to social rights.
The analysis of responses to the questionnaire (especially the open-ended ones) using keyword searches shows that human rights were still an important part of the consultation. When writing about potential threats and problems, participants noted violations of human rights in general terms, and in particular privacy and non-discrimination. Social and labour rights were very rarely included in the responses. Keyword searches specifically related to social policies demonstrate that mentions of healthcare or education were most prominent, with work less so, and with a significant absence of mentions of social security or protection altogether. Whilst this may illustrate certain priorities, it may also be related to a focus on educational skills and innovation in healthcare related to AI. This initial analysis indicates, in simplified terms, some priorities in the discussion on the White Paper. To further explore the question of the place of social rights in the EU's policy debate on AI, we next draw on our qualitative analysis of submissions and present four central themes that emerged from our analysis. The first theme engages with the privileging of human rights in discussions on AI, whilst the second theme showcases how rights are operationalised in the context of the dual efforts of strategic investment and a risk-based approach. The final two themes focus particularly on how the intersection between social rights and technology is understood in relation to two policy areas: workplace relations and public services.

Human rights as a starting point
References to human rights and fundamental rights were very prominent in the submissions. All NGOs, trade unions and most research institutions and public authorities privileged a concern with human rights, with business organisations engaging with them less extensively (EDRi, 2020, p. 5). While often focused on the issue of biases and data processing, some organisations also explained that technologies may lead to discrimination because they are applied to certain groups, sectors of society or 'problem districts' (NJCM, 2020, p. 9).
In terms of explicit references to social rights, we found these in 15 of the submissions analysed, which engaged with the framework of rights to health, social security or work in the context of the use of AI systems.

Operationalising human rights: from accountability to public investment
The engagement with rights language is not only indicative of normative priorities, but also suggests specific policy initiatives (AN, 2020, p. 8). Some of those instruments create direct links with social rights, such as the proposal of a risk assessment that includes 'social discrimination, and impact on working conditions' (UGICT, 2020, p. 10) as a response to the question of how to give human rights more concrete meaning in the development of AI. With regards to business organisations, rights were often operationalised in terms of particular organisational and technical procedures that focus especially on biases. Google, for example, explained how discrimination is addressed within its operations 'from fostering an inclusive workforce that embodies critical and diverse knowledge, to assessing training datasets for potential sources of bias, to training models to remove or correct problematic biases' (Google, 2020, p. 21).
When it comes to investment efforts, human rights concerns were highlighted by NGOs (at least seven) as a necessary inclusion to ensure trust: 'Ecosystem of excellence must include trust' (EDRi in European Commission, 2020e, n.p.). In particular, NGOs, trade unions or research institutes advocated for greater participation or evaluation methods that included fundamental rights, such as the suggestion from EWL that investing in and developing technology should include 'gender budgeting, impact assessments and well-funded monitoring frameworks' (EWL, 2020, p. 4). Beyond these procedural safeguards, some organisations also engaged with the question of how decisions about resource allocation should be made: 'initiatives on research should ensure that the public interest is taken into account and that priorities are not simply set by the private sector but by broader social and environmental policy objectives' (EPSU in European Commission, 2020e, n.p.). Relatedly, some saw public investment as an opportunity to challenge a 'surveillance-based business model' (Amnesty Int., 2020, p. 4) and data monopolies, and made suggestions for 'mandatory nonexclusive licensing of machine-collected data' (industriAll, 2019, p. 5) or 'legislative action to ensure access and use of business to government (B2G) data sharing' (EUROCITIES, n.d., p. 1). These discourses are indicative of a perceived role for the public sector in technological innovation as a way of ensuring fundamental rights.

Employment: from the automation of jobs to algorithmic management
References to social rights in the submissions centred on two main areas: employment and public services. In addition to the restructuring of the labour market, the submissions also focused on the impact of AI on management and working conditions, where rights-based approaches were particularly prominent in questions of data governance, privacy, worker surveillance and algorithmic decision-making.

Public services: providing access to benefits, healthcare and education
The other significant area for engagement with social rights was in relation to AI and public services. A diverse range of actors (business associations, NGOs, research institutions) referred in their submissions to the way automated systems are used by the public sector in areas like social security, healthcare or education.
Only six submissions linked those issues with a language of rights, although they did provide an indication of the normative expectations for AI in those areas, predominantly seeing AI as advancing social rights. For example, in describing the use of AI in public administration, some noted the benefits of AI for ensuring 'health workers spend their limited time in the most productive way' (EPHA, 2019, p. 3), provide 'better, faster and more customised care to patients' (EFPIA, 2020, p. 1), 'support and improve decision making' (REIF, 2020, p. 3) and 'help to inform policy direction and actions' (Government of Ireland, 2020, p. 11; EPSU in European Commission, 2020e, n.p.). As also noted above, these comments speak to the perceived close association between a strong public sector and the safeguarding of social rights.

The place of social rights in the EU's AI policy debate
The White Paper on AI and the submissions to the public consultation provide a useful indication of the different priorities and interests that are shaping the AI policy debate in Europe. When it comes to the question of social rights, it is noteworthy that their place in current AI policy is limited. The White Paper does not lack a 'rights language'; however, the clear priority remains privacy, different transparency safeguards and specific understandings of non-discrimination. It is also in relation to non-discrimination that we see most engagement with social rights.

There are many different ways in which we might explain this limited conversation about social rights in the EU's AI policy debate. First of all, social rights hold an awkward position in European integration, also in relation to the historical trajectory of the welfare state and the broader discussion on European identity and values (Katrougalos, 2007; Dodo, 2012). Whilst a social agenda within the EU has evolved over decades, the clear priority given to market creation fosters a political environment that makes some debates possible and forecloses others. Social rights occupy a controversial place in policy debates, which makes them a less favourable frame for actors that prioritise individual freedoms or lack expertise in areas of social welfare or employment. Furthermore, the character of the policy process on technology prioritises the regulation of risks and the allocation of resources for innovation as main concerns (Jasanoff, 2009). Such a focus favours procedural and budgetary questions rather than, for example, the character of work or the sustainability of public services. It also prioritises certain kinds of actors and language that can engage with these priorities.
With regards to civil society, for example, this means that very often actors with a particular techno-centric focus tend to respond to policy consultations and play an essential role in setting the agenda (Gangadharan & Niklas, 2019). This has played out in a framing of issues that privileges data protection and non-discrimination as the dominant human rights concerns, as evidenced in our analysis. Both issues have become widely recognised as spaces for policy intervention that engage with questions of data processing, algorithmic bias and the transparency of computational models. This specific nature of the discussion on technology policy also undoubtedly influenced the public consultation on the White Paper, which from the very beginning provided limited space for an engagement with social rights.
Moreover, setting priorities in terms of rights discourses is a political matter, and is often associated with a broader economic and political context. This also means that how issues are understood creates certain parameters for the nature of responses. For example, the nature of the discussion on discrimination in AI debates that has tended to favour a focus on data and algorithmic bias has led to concerns about the presence of 'happy talk' on inclusion and diversity (Benjamin, 2019) and the drive towards an atomistic and techno-centric response to automated inequality (Hoffmann, 2019). These outcomes can be the result of many factors including particular corporate involvement, priorities of civil society or specific approaches to the topic in the media. Whilst rights-based approaches in general can be said to always have limitations (see also Hoffmann, 2020), the marginalisation of social rights within the EU's AI policy debate should be seen as a political struggle over the meaning of 'European values' that goes beyond technological policy and touches upon the wider political priorities of the European project.
Nonetheless, social rights remain a relevant component of European integration and continue to be significant for addressing harmful market practices and for informing regulatory mechanisms (Kapczynski, 2019). Even if, as Moyn (2018) argues, human rights have done little to confront material inequality, social rights continue to shape how people access benefits and use healthcare or other public services (Yamin, 2008). Both Yamin and Kapczynski argue that, in contrast to a 'narrow understanding of human rights', social rights play a significant role in confronting matters of political economy, can 'articulate claims to public prerogatives and infrastructures' and reconstruct existing market mechanisms (Kapczynski, 2019). On this reading, social rights are integral to the creation of egalitarian social institutions, which gives them renewed relevance in light of neoliberal marketisation and widespread austerity agendas.

Conclusion
The policy debate on AI within Europe provides significant insights into how 'European values' are being constructed and what priorities are shaping approaches to technology innovation and regulation. Concerns about the turn to data infrastructures across areas of social life have tended to focus on particular human rights issues, such as privacy and, more recently, non-discrimination, which are often translated into design solutions or procedural safeguards. At the same time, funding and intervention in the advancement of technology has been informed by an overarching commitment to the creation of a common market that can compete globally.
These dynamics continue to play out in current AI policy debates. Although the characteristics of the 'European culture of justice' have historically been associated with a social model that contrasts with other parts of the world (most notably the US) through its commitment to employment regulation and access to public services, engagement with social rights in the context of emerging technologies has been limited at best. Despite a growing recognition of the significance of social rights in addressing the impacts of AI advancements, they continue to occupy a marginal and awkward position in the EU's policy debates.
Yet certain openings for a discussion on social rights are emerging, particularly around questions of the future of work (including automation) and the use of optimisation technologies in the public sector, healthcare and education. Often this is bound up with an emphasis on non-discrimination. As we have seen, the increased involvement of trade unions and of NGOs that have not traditionally been prominent in policy discussions on technology has produced an emerging, albeit limited, engagement with social rights concerns in the most recent consultation on the White Paper on AI Strategy, particularly in relation to transformations in work and in public administration. Whilst these concerns speak to the continued relevance of the European social model, they rarely translate into a social rights frame that can effectively be operationalised in relation to AI, relying instead on design solutions or procedural safeguards. Interests in redistribution and equality may therefore need to engage with structural changes that involve the power relations of institutions, political economy and broader forms of governance not easily captured by rights-based approaches. Insisting on such an engagement as part of establishing any 'European values' in relation to technology that claim a commitment to (data) justice will remain a considerable challenge.