Naming something collective does not make it so: algorithmic discrimination and access to justice

Jenni Hakkarainen, Law and Technology, University of Helsinki, Finland

PUBLISHED ON: 07 Dec 2021 DOI: 10.14763/2021.4.1600

Abstract

The article problematises the ability of procedural law to address and correct algorithmic discrimination. It argues that algorithmic discrimination is a collective phenomenon, and that legal protection against it therefore needs to be collective. Legal procedures are technologies and design objects that embed values, which can affect their ability to perform the tasks they are built for. Drawing from science and technology studies (STS) and feminist critiques of law, the article argues that procedural law fails to address algorithmic discrimination, as legal protection is built on data-centrism and individual-centred law. As to the future of new procedural design, it suggests collective redress in the form of ex ante protection as a promising way forward.
Citation & publishing information
Received: October 28, 2020 Reviewed: April 12, 2021 Published: December 7, 2021
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The author has declared that no competing interests exist that have influenced the text.
Keywords: Access to justice, Collective redress, Algorithmic discrimination, Feminism, Gender
Citation: Hakkarainen, J. (2021). Naming something collective does not make it so: algorithmic discrimination and access to justice. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1600

This paper is part of Feminist data protection, a special issue of Internet Policy Review guest-edited by Jens T. Theilen, Andreas Baur, Felix Bieker, Regina Ammicht Quinn, Marit Hansen, and Gloria González Fuster.

Introduction: Technology, discrimination and access to justice

We discriminate. Discrimination can be intentional or unconscious, as human beings have biased attitudes towards other people. These attitudes become integral parts of institutions and their decision-making processes, where they eventually promote inequality more broadly. Human beings are not the only ones who embed and mediate values. Things are also claimed to have politics (Marres, 2012; Winner, 1980). The intended and unintended values of the participants who contribute to the design of a technological artefact are performed through those technologies (Pfaffenberger, 1992). Technologies, therefore, can mediate biased attitudes and inequality.

Recent advances in the field of artificial intelligence (AI) have raised concerns about how data-driven technologies, more specifically automated decision-making tools, mediate discrimination. These concerns have turned into reality, as computer programmes used e.g. in recruiting1 have been found to produce biased decisions. Feminist scholar Cathy O’Neil argues that algorithmic discrimination is most likely to affect minorities, low-income people or people with disabilities through automated social welfare systems and credit scoring (O’Neil, 2016). In addition, discriminatory algorithms have been embedded in predictive policing systems2 and platform economy governance3, to name just two examples.

Despite growing awareness, the current research and institutional responses to discrimination are partly hindered by their inability to recognise the connections between algorithmic discrimination and the long history of research on discrimination, as well as the role that technologies play in postmodern society. For example, non-technology-specific contemporary feminism has argued since the 1960s that discrimination is a matter of collective experience, enabled by societal and legal structures, and that procedural law is not well equipped to provide access to justice in discrimination cases (West, 1988; MacKinnon, 1979). Elsewhere, scholars on access to justice have pointed out the limits of individual litigation in addressing harms that occur not only as an individual harm, but have more far-reaching consequences at the structural level of societies (Cappelletti et al., 1989). Debates in philosophy and the social sciences have long challenged the individualistic assumptions that are embedded in our language, law and societal practices (e.g. Gilbert, 2014; Somek, 2008) and questioned the neutrality of technologies and the hidden, male preferences they distribute (Adam, 1998). Science and technology studies (STS) scholars have further voiced how crucial it is to recognise the roles of human and non-human actors as the primus motor behind technologies, data, and procedures (Jasanoff, 2004; Callon, 1984).

This article is about technologies that discriminate and how algorithmic discrimination can be contested via public enforcement, either in courts or in administrative procedures. Existing legal procedures that address algorithmic discrimination focus strongly on data and individual redress, and are therefore ill-suited to address the problem both as a societal problem and as a personal encounter with injustice. Since algorithms are designed to facilitate and govern collectives, I argue that we need to strengthen collective redress mechanisms. However, the existing collective proceedings, such as class action or representative action, are not sufficient to address discrimination (Lahuerta, 2018; Farkas, 2014), let alone algorithmically-produced inequality. The problems that raise the threshold for accessing the court and accessing justice include procedural rules, which determine whose injustice is heard, institutional arrangements around procedural justice, as well as information and power asymmetries (Hodges, 2019; Lahuerta, 2018; Cappelletti & Garth, 1978).

This article combines three strands of research: access to justice, science and technology studies (STS), and feminist legal studies. Each tradition understands something crucial about discrimination, technology, collectivism, and how to correct injustices. By drawing insight from each discipline to complement those areas where the others fail to recognise or respond to algorithmic discrimination, a new kind of procedural response emerges. This response perceives legal procedures as design objects and builds on the understanding that law, technology and humans, as individuals and collectives, are constantly in a co-constitutive movement. Consequently, this article is neither about data protection nor about material discrimination law, nor even about doctrinal procedural law.

In legal scholarship, discrimination is usually connected to an academic sandbox of constitutional studies, where it finds an especially close ally in human rights law. Recently, algorithmic discrimination has been extensively approached as a data-related issue, as Article 22 of the General Data Protection Regulation (GDPR) regulates automated decision-making and e.g. recitals 71 and 75 address discriminatory outcomes of data processing. Legal research on algorithmisation has focused mostly on the phenomenon through the lenses of data, privacy, and ethics, whereas questions related to procedural safeguards have mainly been discussed in the margins or as parts of the domains mentioned above. Practical solutions for procedural safeguards are still very much underdeveloped, even though the need is recognised (Petkova & Ojanen, 2020; Williams et al., 2018).

The issue of algorithmic discrimination and procedural safeguards has been brought up in several recently published policy papers in the European Union (EU).4 These documents recognise the importance of effective redress and enforcement mechanisms, conducted by official bodies, in addressing both individual and collective harm caused by discriminatory AI tools. One possible solution suggested by the documents is ex ante protection, which is also discussed to some extent in the current AI literature as technological protection by design (Hildebrandt, 2017; Diver, 2020), preventive impact assessment (Mantelero, 2014), or protection via oversight (Koulu, 2020). However, the official documents leave this novel-sounding procedural response largely unspecified, thus making it difficult to imagine what it would mean if implemented in the EU’s procedural toolkit. As I develop my argument about the future of new procedural design, I suggest collective redress in the form of ex ante oversight as a promising way forward.

The next section discusses how collectives are perceived in law, STS, and feminist studies. I will focus on feminist legal studies as developed in the 1970s and 1980s and complement it with insights from more recent literature on data feminism. Sections 2 and 3 draw on STS and show how algorithmic discrimination operates. They argue that procedures are themselves types of technologies and possess all the characteristics of a design object. Section 4 discusses access to justice. Sections 5 and 6 utilise feminist critiques of law and individualism to argue that the current procedural mechanisms are not enough to respond to algorithmic discrimination. Despite the individualistic underpinnings of the modern legal process, I claim that robust collective protection can be constructed within the law. Even though I address the individual/collective dichotomy in this article, I will not be criticising individualism as such, but instead aim to direct attention to the narrow understanding of collectives in procedural law and procedural design, dominant especially in European legal thinking.

Section 1: Collectives in law, technology and feminism

Collectives have always been difficult for legal scholarship. Collectives are discussed e.g. in connection with collective rights, but only a handful of legal scholars seem to consider collective rights or modes of being as normatively feasible concepts (Ingram, 2000; Réaume, 1988). A firm belief in a rational free man who holds all legal rights is dominant, especially in the European and Anglo-American tradition. Some scholars have recently begun to develop a more nuanced conceptualisation of privacy as a right that necessitates group-level privacy (Regan, 2002), a communal approach, and collective protection in order to be effective (Cohen, 2018). So far, however, these voices have remained largely within legal scholarship, as legal norms still operate mainly on an individual level. Reluctance to adopt a collective standpoint has made it difficult to include collective elements in procedural design, as collective procedural mechanisms depart from the traditional framing of procedural law—a firm belief in individual redress and two-way communicative action (Habermas, 2001) in justifying the outcome of a court proceeding. It should be noted that even though collective procedural law remains on the margins in the European system, collective elements play more important roles in the legal systems of many Latin American countries (e.g. Gómez, 2012 and 2016).

Even though collectively aligned vocabulary is a stranger to law, other disciplines have approached the topic with great interest. Various strands of research from the social to the political sciences use collective experience and collectivity to criticise individualism as an element that underpins modern institutions and ideologies, such as neoliberalism (Gilbert, 2014), or to develop alternative theories to explain social and political action (Somek, 2008). Moreover, various disciplines have been engaged in developing theories on how collectives are formed and how they interact and produce knowledge (Gilbert, 2014; Simon, 2015). Actor Network Theory (ANT) claims that the co-constitutive relation between society and its human and non-human components is the primus motor of all action, and therefore also a starting point for any scientific inquiry and institutional process (Latour, 2007; Callon, 1984).

Feminist STS scholar Alison Adam has criticised male-driven AI development and algorithmically produced reality for bypassing women as designers of technology. For Adam, this results in a society where the male way of knowing becomes dominant over other epistemologies (Adam, 1998), such as collective knowledge and identity formation. Hester Baer, a scholar specialised in gender issues, makes a contrary claim, as she suggests that algorithmisation is at its core a collective modality (Baer, 2016). Therefore, institutions and processes which have been built on an individualistic worldview and epistemology fail to properly understand the effects of algorithmisation. STS scholars Tsjalling Swierstra and Arie Rip further claim that collective action is the primary mode for shaping technological infrastructures (Swierstra & Rip, 2007). A collective modality is a state of being in which individuals are no longer isolated, but forced to emerge into something other than themselves (Arendt, 1998). While collective modality can therefore be perceived as an act of violence and power, it also holds great promise, as it enables new forms of collective action and the shaping of the technological infrastructure itself.

In addition to the recent discussion on collective action within STS-inspired data feminism (D’Ignazio & Klein, 2020; Jouët, 2018), collectives and collective action were central topics for feminist legal studies in the 1980s. Katharine Bartlett discusses collectives as physically and emotionally connected communities (Bartlett, 1990). In feminist studies, collective action is commonly understood as action that brings people together in order to reach some mutual goal, e.g. to raise awareness of workplace harassment or to change policies that are considered harmful for gender equality (Schneider, 1986). Collective action emerges from a shared experience and from the desire to bring social change to those areas of law and politics where women’s voices and experiences are not recognised. More recently, legal scholar Julie Cohen has discussed collective responses to technologies of power in the context of technical infrastructures and stressed the importance of collective action in contesting algorithmic power (Cohen, 2019).

It is important to note, however, that not all collective action is institutionalised, and it therefore differs from collective legal action and collective legal protection. By collective legal action I refer to court proceedings that aim to resolve multiple similar injustices in one court proceeding. Collective legal protection is a more holistic concept and refers to institutionalised ways of overseeing and enforcing collective interests, and ensuring collective justice. In other words, community-level action has its limits in affecting societal structures, as it often lacks the regulatory means and power to enforce change, which are at the core of law and legal systems.

In the next section, I will briefly look at how algorithms constitute discrimination. As Riikka Koulu highlights, algorithmisation reveals structural and systemic weaknesses in societal systems, such as the law (Koulu, 2020). Discrimination is not a new phenomenon, nor is the critique against ill-functioning redress mechanisms in tackling discrimination. So how is algorithmic discrimination any different? The main difference, I argue, is that algorithms create collectives that are not yet recognised by law, and that they create a distance that complicates procedural responses to address discrimination.

Section 2: Algorithmic decision-making makes and governs collectives

In algorithmic decision-making, human-generated data is used to make decisions on individuals and groups. Our behaviour, characteristics and actions are turned into population-level data, and our future behaviour and actions are then anticipated according to those data sets. The most obvious example is a recommendation system that decides what ads and news we see while browsing online. In this case, technology decides for us. In addition to deciding for us, data is used to make decisions about us, to predict our future behaviour and life events. For example, a recruiting agency might scan job applicants with a computer programme to figure out whom to call for an interview and whom to exclude.
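
A minimal, hypothetical sketch may help to illustrate this mechanism. The code below is not taken from any system discussed in this article: the features (including a "career gap" acting as a proxy for a protected attribute), the numbers and the naive scoring rule are invented purely to show how a screening score learned from past hiring decisions can carry the bias of those decisions forward.

```python
# Hypothetical sketch: a screening score learned from historical hiring data.
# All features, figures and the scoring rule are invented for illustration.

from collections import defaultdict

# Toy historical data: (features, was_hired). "career_gap" works as a proxy
# for a protected attribute (e.g. parental leave), so past decisions that
# penalised it are baked into the data.
history = [
    ({"degree": 1, "career_gap": 0}, 1),
    ({"degree": 1, "career_gap": 0}, 1),
    ({"degree": 1, "career_gap": 1}, 0),
    ({"degree": 0, "career_gap": 0}, 1),
    ({"degree": 1, "career_gap": 1}, 0),
    ({"degree": 0, "career_gap": 1}, 0),
]

def learn_weights(data):
    """Naive scoring: each feature's weight is the past hire rate among
    applicants who had that feature."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [hired, total]
    for features, hired in data:
        for name, value in features.items():
            if value:
                counts[name][0] += hired
                counts[name][1] += 1
    return {name: hired / total for name, (hired, total) in counts.items()}

def score(applicant, weights):
    """Average weight of the applicant's active features."""
    active = [name for name, value in applicant.items() if value]
    return sum(weights.get(name, 0) for name in active) / len(active)

weights = learn_weights(history)
applicants = {
    "A (no career gap)": {"degree": 1, "career_gap": 0},
    "B (career gap)":    {"degree": 1, "career_gap": 1},
}
for label, features in applicants.items():
    print(label, round(score(features, weights), 2))
# Identical formal qualifications, but B is ranked lower purely because past
# decisions penalised the proxy feature that B carries.
```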

It is common knowledge in most academic disciplines that discrimination is deeply rooted in human behaviour and has long been present in the everyday lives of individuals and groups (Eubanks, 2011; Gandy, 2009; Crenshaw, 1991) and is now present in human-generated data (O’Neil, 2016). However, algorithmisation changes the medium through which inequality is distributed, and it is precisely this change of medium which adds another level of complexity that makes social, political, and legal attempts to address discrimination more difficult. Social and legal systems have so far been text-orientated (Hildebrandt, 2015), vision-orientated (Koivisto, 2020), and reliant on immediate experience. Now these systems are struggling with a medium that changes how legal practices can harness both the potential and the threats that algorithms constitute.

The increasing complexity caused by the technological medium is closely attached to the concept of distance. Data analytics take place behind technological interfaces, adding distance between individuals and the decision-making processes about them (D’Ignazio & Klein, 2020), while other scholars have argued that algorithms simultaneously distance people from their peers while closing the geographical gaps between them (Gillespie, 2018). Even though structural discrimination has always been difficult to identify and take down, digital infrastructures further widen the gap between different actors and events, such as the individual being discriminated against and the reasons for that discrimination. The distance manifests itself in informational asymmetries (Koivisto, 2020; Pasquale, 2015), power asymmetries (Hildebrandt, 2015), and physical distance. Another layer of complexity arises from the practical operations of algorithms, which create collectives that are not yet recognised in material norms, which define the scope of discrimination, or in procedural norms, which in turn define the rules for accessing justice. I will return to the procedural implications that arise from this crisis of recognition and distancing effect in sections 4 and 5.

Discrimination mediated by AI tools does not always fall under the categories that traditionally define prohibited discrimination, most notably ethnicity and gender. In this sense, algorithmic discrimination overlaps with intersectional discrimination, the idea that various features contribute to how and why a person is categorised, faces injustice, and is excluded from society (Mann & Matzner, 2019; Gandy, 2009; Crenshaw, 1991). The more the discriminatory features overlap, intersect and contribute cumulatively to discrimination, the harder it becomes to address it within the dominant anti-discrimination and procedural frameworks. To put it simply, the distancing effect of algorithmic governance creates a barrier to identifying discrimination, as the discrimination results from complex data analytics.

In a recent case, Deliveroo, a well-known food delivery company, used an algorithm to rank its drivers. The algorithm distributed the best work shifts and deliveries to the workers it ranked highest. In December 2020, the Labour Court of Bologna ruled that the ranking algorithm was discriminatory. The court concluded that because the algorithm did not take into account the different reasons why a driver was not working, but produced the ranking purely on the basis of reliability and participation, it breached local labour laws.5 In another recent case, the National Non-Discrimination and Equality Tribunal of Finland investigated a case in which a Finnish-speaking man was refused credit by a credit-scoring algorithm. The tribunal concluded that because the algorithmic system based its decision on a combination of multiple features, such as gender, age and place of residence, it constituted an act of unlawful discrimination.6 On the surface, such analytical tools might seem neutral, but biases can nevertheless affect their decisions and thereby promote inequality on both the individual and collective levels.
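
The ranking logic objected to by the court can be illustrated with a minimal, hypothetical sketch. The code below is not the actual Deliveroo system; the field names and figures are invented. It only reproduces the feature the Bologna court took issue with: a pure reliability/participation score that treats every missed session identically, whatever the reason behind it.

```python
# Hypothetical sketch of a reliability/participation ranking of the kind
# described in the ruling. Names and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Rider:
    name: str
    booked_sessions: int
    attended_sessions: int
    absence_reason: str  # recorded here, but never used by the ranking

def rank_score(r: Rider) -> float:
    # Pure participation metric: attendance ratio only, reason-blind.
    return r.attended_sessions / r.booked_sessions

riders = [
    Rider("rider_1", booked_sessions=20, attended_sessions=20, absence_reason=""),
    Rider("rider_2", booked_sessions=20, attended_sessions=15, absence_reason="sick leave"),
    Rider("rider_3", booked_sessions=20, attended_sessions=15, absence_reason="strike"),
]

# Best shifts go to the highest-ranked riders; the reason for absence never
# enters the calculation, which is the feature the court found discriminatory.
for r in sorted(riders, key=rank_score, reverse=True):
    print(r.name, round(rank_score(r), 2), r.absence_reason or "-")
```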

The constitutive problem is that, in addition to using discriminatory data, algorithmic decision-making is built on making decisions based on similarities and differences. The core function, as pointed out by legal scholar Mireille Hildebrandt, is to produce inequality (Hildebrandt, 2015), as the basic function of analytics is to sort and categorise people and to form collectives based on differences. Technical infrastructures form an architecture in which an individual is simultaneously a unique actor made from billions of data points and an aggregated part of a larger whole, as the individual data points gain meaning only in relation to larger data sets. An individual becomes an ever-changing, dynamic thing that is constantly made, evaluated and pre-determined.
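
A minimal, hypothetical sketch of this sorting function, with invented data and an arbitrary similarity threshold, shows how analytics forms its own collectives: groupings that are dynamic and need not map onto any category recognised by material or procedural law.

```python
# Hypothetical sketch: grouping people by similarity of behavioural data.
# The profile axes, values and threshold are invented for illustration.

import math

profiles = {
    "p1": (0.9, 0.1),  # e.g. (night-time activity, payment delays), invented axes
    "p2": (0.8, 0.2),
    "p3": (0.1, 0.9),
    "p4": (0.2, 0.8),
    "p5": (0.5, 0.5),
}

def group(profiles, threshold=0.3):
    """Single-linkage style grouping: a profile joins the first group that
    contains a member closer than the threshold; otherwise it starts a new one."""
    groups = []
    for name, vec in profiles.items():
        for g in groups:
            if any(math.dist(vec, profiles[m]) < threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

print(group(profiles))
# The resulting collectives are dynamic: add or change one profile and the
# membership shifts, which is why they rarely correspond to the fixed
# categories (gender, ethnicity, consumer) that law recognises.
```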

While one of the core problems of digitalisation and algorithmic discrimination lies in how they enable, make, and manage collectives by creating distance between the physical and the digital self and others (Jouët, 2018), this also provides keys for addressing the phenomenon within legal procedures—or at least a point of departure. If algorithmic discrimination targets collectives and forms a distant and collective threat, should legal protection also be collective? Collective protection is a topic bubbling beneath many of the debates surrounding legal protection in the digital environment, especially in the domain of data protection and data privacy (e.g., Rodota, 1973; Cohen, 2018) and consumer law (Hodges, 2015), but it has not had much effect on discrimination laws (Lahuerta, 2018). As algorithmic discrimination builds on the dynamic formation of individuals and collectives, it is difficult to imagine how the current collective mechanisms could be adapted to address the problem at a collective level.

Next, I will briefly touch on the concept of access to justice, as it provides the normative starting point for all procedural reforms. By drawing inspiration from STS, legal procedures can be approached as design objects. If a legal procedure is perceived as a design object, it forces academics and policymakers to ask these crucial questions: why is the current procedural law incapable of addressing algorithmic discrimination, and what is preventing us from developing better-suited tools for delivering access to justice?

Section 3: Behold, we have a process!

Procedures are powerful tools for justification: establishing community rules, following an industry guideline, or proceeding according to procedural norms gives an illusion of compliance and of proper problem-solving capacity. Legal procedures, which are conducted according to procedural rules defined in legal statutes, share the dual function of procedures in general. Procedures make things visible and make things come into existence (Solum, 2004; Arendt, 1998). Legal procedures bring issues from the private sphere under public scrutiny and make them contestable. On the other hand, because procedures make things visible by reducing complex issues into structured and regulated performances, they also have the capacity to hide things.

It is often forgotten that apparently neutral processes always include a variety of actors exercising power over other participants, as pointed out by Bruno Latour in his study on the procedures that define valid and acceptable scientific research (Latour, 1988). Court proceedings, among other legal procedures, are themselves types of technologies that operate via physical artefacts such as marriage certificates and data management systems. By using the concept of technology here, I wish to underline exactly the materiality of a legal procedure, which differs from the traditional procedural paradigm that mostly understands legal procedures as a combination of an immaterial space where justice is distributed and rules that guide the behaviour of various actors on their way towards that justice. Framing court proceedings as technologies opens up space for interpreting them in line with early feminist scholars in artificial intelligence who emphasised the materiality of action-shaping tools (Adam, 1998; Akrich, 1992), and with a power-sensitive perception of law in postmodern thought (Foucault, 1977).

Legal procedures as technologies are products of deliberate design and shaping, and therefore hold all the characteristics of a design object. Two sets of issues emerge from this observation. First, like other technologies, procedures embed values that can further or constrain their use and design. Second, like other technologies, procedures can be reshaped. Seemingly neutral procedures are always built on pre-existing assumptions and values that shape how procedures, legal court proceedings included, are designed. Even though legal scholars are largely aware of how law is prone to ideological biases and to promoting particular worldviews, they have not been particularly sensitive in detecting the biases and presumptions that are present in legal procedural rules. The assumption seems to be that as long as material norms are properly investigated and fixed, a procedure will take the dispute, interpret the rules, and enforce them in a neutral way. However, just like technologies, legal procedures are not neutral, and they embed values that can have far-reaching effects on access to justice. So, what is access to justice?

Section 4: Accessing justice

Access to justice has multiple meanings in the legal discourse. In its narrow sense, it refers to legal procedures and procedural design—creating and delivering legal services. Consequently, it is every lawyer’s job to mediate between people and the law and to ensure the availability of proper legal redress. In addition to the production of legal services, access to justice is a widely recognised human right established e.g. in article 6 of the European Convention on Human Rights.

The ideal of access to justice was probably voiced most effectively in the EU context during the period from the 1960s to the 1980s and in the Florence project, especially by the procedural law scholars Mauro Cappelletti and Bryant Garth. Their work on access to justice has had far-reaching effects on procedural design in the EU and beyond (Cappelletti & Garth, 1978). Collective redress in the EU is tightly connected with the access to justice research, as it was Cappelletti’s and Garth’s work that inspired the EU to develop and adopt collective mechanisms, especially in consumer disputes. It is worth noting, however, that collective redress mechanisms do not exist in a vacuum, and that collective elements existed in the procedural paradigm before the 1960s (Bosters, 2017; Cappelletti et al., 1982). Outside the EU, Cappelletti’s ideas on the collective enforcement of diffuse rights (Cappelletti et al., 1989) have received more attention and gained practical significance in many Latin American countries in particular.

Despite the great expectations that accompanied the early years of the boom in collective redress, EU member states have been reluctant to adopt and develop collective mechanisms such as class-action models. What’s more, the ability to adapt collective mechanisms to resolve high-impact cases such as discrimination has been systematically investigated within the EU. Meanwhile, scholars, especially those in data protection and discrimination, have begun to call for a broader application of collective mechanisms to address misuse of data and discrimination (Lahuerta, 2018; MacDermott, 2018). Collective mechanisms are considered anomalies in the European procedural paradigm, which has traditionally taken, in the spirit of rational enlightenment, individual dispute and two-way dialogue as its core values (Habermas, 1996).

Hodges has noted how the procedural reforms in the EU and its member states that aim to improve collective mechanisms lack creativity, as the EU tends to circulate old tools in a slightly altered form to address new problems (Hodges, 2019). Cohen argues similarly, as she claims that a path dependency exists between the status quo and the ability to think ahead and to initiate novel reforms (Cohen, 2019). It is worth asking: how much do the discourses focusing on data and the residues of individualism affect procedural responses to algorithmic discrimination? In the worst case, this can lead to a never-ending cycle in which technologies challenge procedural law by adding complexity to already-existing social phenomena, and procedural law answers by introducing small cosmetic changes to already-existing procedures. In other words, due to institutional opposition and ideological constraints, nothing really changes except for fancy new names. Despite the struggles at the institutional level, alternative ways of providing collective protection are emerging at the margins of law. These alternative routes to collective ex ante protection include social collective action and resistance (e.g. Mantelero, 2014) or (semi-)institutionalised preventive measures such as the various impact assessment tools established in the GDPR and further developed in response to human rights threats in AI governance (e.g. Oswald et al., 2018).

Collective action as a form of feminist activism or social action aims to make the experience of injustice public, but also to use action as a tool for political and legal reform. The Florence School approached injustices from an institutionalised perspective, as it recognised the crucial role played by institutionalised procedures in advancing societal goals. Accessing a competent court serves the purpose of maintaining social stability, but also of assuring those who face injustice that their concerns are heard and taken seriously. Legal norms have little effect without procedural law, as it is legal procedures that bring issues from the private sphere under public scrutiny, make them contestable (Arendt, 1998), and make the rights embedded in legal texts real: a law that has an impact (Solum, 2004). I do not wish to undermine the importance or the impact that collective action has on societal structures and for individuals. However, having correct and just procedural tools to tackle societal problems serves a purpose. As it stands, most social theories recognise the institutionalised capacity of law in guiding social change, even though different theories perceive its legitimacy and efficiency differently.

In the next two sections, I will approach access to justice and algorithmic discrimination by drawing insight from STS and feminism. The analysis will draw on feminist legal studies developed in the 1980s, given that they provide a technology-neutral way of analysing the ideological underpinnings of modern law and its operations. Whereas feminist legal studies directs my attention to individualism, STS and data feminisms address the issue of data-centrism. I try to explain why procedural law is not sufficient to address algorithmic discrimination.

Section 5: Access to justice for imagined individuals

Digitalisation is commonly discussed in terms of data governance and data protection for the obvious reason that data is regarded as the most important asset in the digitalised society (Zuboff, 2019). The digitalised society forces us to operate on technical infrastructures (Nissenbaum, 2001), so our daily activities in maintaining friendships, finding information, and using public services cannot be separated from informational capitalism (Cohen, 2019). Consequently, data-centric vocabulary loses sight of the humans behind the data and perceives them as calculable objects (Hildebrandt, 2015). Disregarding the roles that humans play as both constructors of technologies and objects of technological governance comes with a risk of perceiving technology as an abstract phenomenon and discrimination as a mechanical question about abusive data processing. As technology distances people from its operations, the focus shifts from protecting humans to protecting human-generated data.

Many academic disciplines have recently taken a critical position against data-centric approaches to algorithmisation. Scholars in critical algorithm studies and critical data studies have put a lot of effort into shifting the attention of scientific inquiries and policy proposals from data to the humans behind the data (e.g. Jackson, 2014). Many legal scholars have recently adopted this viewpoint and enriched the legal debates around algorithmisation by pointing out how data processing constitutes only a small part of discrimination and the resolution thereof (Eubanks, 2019; Mann & Matzner, 2019; Hacker, 2018).

Approaches that emphasise the GDPR as the focal legal instrument for addressing algorithmisation hide the nuanced ways in which technologies create injustices, and arguably affect how remedial measures are discussed. What follows is that the design of procedural safeguards against algorithmic discrimination follows the data protection regime. Despite the growing criticism, the GDPR includes some preventive compliance mechanisms, which arguably complement the existing legal protection toolkit. These mechanisms include a specialised administrative authority (the data protection authority, DPA) that oversees compliance with the GDPR, as well as impact assessment and by-design requirements. On a concrete level, this means that industry-driven by-design approaches and administrative procedures aimed at protecting personal data are also used to tackle algorithmic discrimination. However, by-design approaches have been criticised for pushing the responsibility for legal protection downstream to the developers and users of technology (Koulu, 2020; Hildebrandt, 2017), and for setting aside reforms of public institutions, such as court proceedings or procedural norms. Moreover, this raises the question of how sensitive the aforementioned tools are in detecting dynamic, even surprising, forms of discrimination.

Another discourse, or set of ideological baggage, that affects procedural responses to algorithmic discrimination is, quite paradoxically, that of individualism—or, as it is often referred to in the social and political sciences, methodological individualism. Methodological individualism is a tendency to take an abstract and idealised man as the only point of reference through which modern society and its institutions and operations are observed (Arrow, 1994). In terms of procedural design, this means that a legal process is designed for an idealised man who is both rational and free, and that legal protection is mediated only via this individual. Fixation on individuals has become part of the procedural ideal, partly as residue from the Enlightenment’s push towards rationality, but also due to the firm belief in rational discourse as a legitimate and necessary way of enforcing legal decisions.

One of the most prominent reactions to individualism in law has been developed within feminist scholarship, which places collectives at the core of social and legal experience and enquiries. Feminist scholars, such as Bartlett and Elizabeth Schneider, have criticised law for being constructed for and by a male way of perceiving the world (Bartlett, 1990; Schneider, 1986, also Bender, 1988; MacKinnon, 1987). The equivalent for methodological individualism is the male voice, which puts all its effort into observing and constructing the social and the legal from an individualistic standpoint. The male voice detaches social phenomena from their contexts and translates them into legal language that is individualistic at its core. Schneider and Bartlett perceive women’s voices as embracing mutual recognition, physical belonging to the world, and collective experience.

Another important point Bartlett makes is that law perceives the world as a construct of monolithic categories (Bartlett, 1990). This reduction of lived life into neat categories, such as the consumer or the woman, penetrates procedural norms, which approach the world from a distance, through imagined categories (Bender, 1988; also Bacchi & Goodwin, 2016), and interpret social injustices in light of an abstract ideal of justice (Isaacs, 2018). Consequently, the social world is reduced to categories and its actors into clearly defined characters. Feminist legal studies have strongly voiced how the law’s capability, tendency or even need to categorise things holds an enormous power. Through the act of categorisation, law determines those who are recognised and those who are left out (Bender, 1988). The act of naming does not occur only at the abstract level of lawmaking, but also in the daily practices of the courts that decide what counts as discrimination, whether a person is discriminated against, or whether a person is included in a collective that is recognised as a procedurally valid entity and worthy of legal protection.

Legal language and procedural design are persistent in their belief that legally defined categories actually correspond to algorithmically created agencies and collectives. This affects access to justice, as belonging to a relevant reference group often constitutes a preliminary step for a person to identify a discriminating act (correspondence to material law and prohibited discrimination). Moreover, membership in a clear and legally defined category or group grants access to the court (correspondence to procedural norms and the capacity to conduct a legal proceeding). Emilios Christodoulidis, a scholar in legal theory, has illustrated how the inability of the court to recognise individuals as parts of an oppressed group often leads to excluding them from court and from legal redress. By placing individual members outside a protected group, the court denies them their voices and fails to recognise their experiences as worthy of justice (Christodoulidis, 2004). Even though strict legal definitions serve the purpose of legal certainty and ensuring the rule of law, it is an error to believe that social categories are not in continuous movement, a situation further complicated by technological development. It is precisely the evaporation of categories and the failure to recognise the dynamic nature of algorithmic discrimination that constitute one of the biggest challenges for procedural law, and especially for procedural responses to algorithmic discrimination.

Most feminist theories engage not only in revealing the power at play in the act of categorising, but also aim to complement the current ways of categorising with more flexible and context-aware methods for determining when someone is similar enough to be included. More radical voices have demanded the replacement of fixed categories altogether with context-dependent ways of naming, including, and excluding (Bartlett, 1990; West, 1988; MacKinnon, 1987). The feminist perception of categories, therefore, is rooted within a different framing—fluid social roles and mutual recognition as a way of being in the world. As the evaporation of categories is now faster due to collective-shaping algorithms, and as algorithmic discrimination is bound to affect unexpected collectives, the question is how to incorporate human-centric, collective, and flexible elements in the law and in legal procedures. Is a class action or representative action enough? How do we integrate flexibility and collectivity in procedural design? In the final section, I will briefly revisit the concept of ex ante oversight and imagine what shapes an ex ante oversight mechanism, designed to tackle algorithmic discrimination, could take. I am not able to provide a detailed description of ex ante oversight, but will instead offer some preliminary sketches of such a mechanism.

Section 6: Collective access to justice and ex ante protection

Ex ante protection is not a uniform concept, and a variety of different techniques such as precaution, risk management and preventive justice have elements that are meant to anticipate possible harm and to mitigate threats or violations of human rights.7 Recently, scholars from different disciplines have demanded stronger participation rights in policy drafting and in the development and design of socio-technical systems which impact particular communities, such as electronic surveillance or social benefit systems (e.g. Katell et al., 2020; Young et al., 2019). Certain rights, such as privacy, have implications on a collective level and are of collective interest (e.g. Regan, 2002) and thus worthy of collective protection (Mantelero, 2016). I argue that by strengthening ex ante mechanisms from administrative oversight to community engagement, more robust collective protection is also reached.

Even though procedural legal protection is mediated via individuals in reactive, case-by-case tailored litigations, law works through generalisation. Legal norms are interpreted in the light of individual events and facts. Yet, at the abstract level, legal decisions change behaviour patterns, such as discriminatory practices, and guide future behaviour. Ex post remedy can therefore stretch its effect and become collective ex ante protection. In addition to court decisions, various administrative bodies play important roles in providing collective access to justice. The DPA oversees practices of data processing and sanctions actors who fail to comply with the GDPR. Equality ombudsmen investigate complaints regarding discrimination and, depending on their competence, can enforce decisions or use softer measures to alter discriminatory practices (e.g. Farkas, 2014).

Ex ante protection differs from traditional court and administrative proceedings, which take place after a possible discriminatory act has occurred and include an element of enforcement. In legal literature and in the policy documents mentioned previously, ex ante protection is currently discussed as a prominent way to harness algorithmic technologies in general. If we do not limit our imagination to official proceedings only, ex ante techniques vary from oversight mechanisms (Diver, 2020; Koulu, 2020) to impact assessment tools (Kaminski & Malgieri, 2021) and documentation obligations, including community-level participation (Katell et al., 2020).

Impact assessment tools and by-design requirements are considered efficient since they push protective measures downstream and aim to minimise the very possibility of discrimination and to prevent discriminatory AI tools from entering society. However, their efficiency can be called into question, not least because they obligate only particular actors and assume that those actors are able to interpret legal norms and possible threats correctly far into an unforeseeable future. Therefore, by-design approaches and impact assessments can constitute only one layer of collective ex ante protection.

As already pointed out above, feminist scholarship has seen collective action as a crucial method to reach feminist goals and a more equal society. Recently, similar forms of collective civic action have emerged to address the problems regarding algorithmic practices, such as surveillance and predictive policing (Van Zoonen, 2021). As Cohen has argued, communal approaches and participation are necessary to ensure public engagement and justification for algorithmic governance, but public institutions also require reforms (Cohen, 2019). Community engagement builds on legitimising the effects of technology through participation and democratic debate, through which individuals are able to affect the design and use of technological devices they are expected to interact with in their daily lives. Private action and by-design approaches do not connect clearly to the access to justice paradigm as they find their natural place in private law and self-regulation, social control, and systems design.

Administrative complaints, which are the dominant way of contestation in data protection and in many national equality laws, connect to administrative law and to a complaint system in which individuals are commonly responsible for initiating complaints. What is more, administrative bodies often lack the regulatory competence to oversee technical practices and enforce decisions efficiently (Hodges, 2019), and due to individual-centric procedural rules, they often lack the competence to initiate collective proceedings, such as class action suits.

The promise of oversight bodies lies in the assumption that they are better equipped to detect algorithmically produced, dynamic discrimination that might escape the perception of the subject of discrimination. This is not to say that those being discriminated against would not be able to feel the effects of discrimination. Rather, algorithmic discrimination adds another layer of complexity by producing dynamic forms of discrimination, or by hiding discrimination behind technological neutrality. Access to justice can draw inspiration from private action and from engaging the public, but these alone are not sufficient, as the prevention of structural discrimination is also of public interest. Administrative oversight, which takes place at a distance, together with e.g. wider representation rights in courts for communities or heterogeneous groups affected by algorithmic systems, would therefore complement other communal approaches to collective action.

In addition to the temporal shift from reactive protection to proactive protection, ex ante suggests a change in the object of legal protection. Because ex ante measures take place before any concrete violation has happened, they provide preventive and abstract protection that stretches its effect to a collective level. This collective protection does not target any particular individuals or particular features, but rather the heterogeneous groups affected by the system in its entirety.

As imagined in this article, ex ante oversight aims at collective protection that changes the dynamics between technological systems and those affected by the algorithmic systems that constitute discrimination. Collective ex ante protection that begins at the community level, builds on self-regulatory components and principles, and is maintained by public authorities keeps the individual act of harm at the core of its operations, but takes the responsibility for detecting biased algorithms away from that individual and places it in the hands of public authorities, as the individual being discriminated against has few means of gaining access to relevant information about algorithms. It does not presuppose a rational and free agent who is capable of recognising a discriminatory event and acting accordingly, as dynamic and structural algorithmic discrimination escapes the logic of other forms of discrimination. This implies that the concrete practices would build on hybrid strategies, combining self-regulation and assessment with system-level oversight by public authorities, regulatory powers granted to those authorities, and a smoother integration of oversight powers and court proceedings.

Since the law necessitates recognition of both individuals and collectives as worthy of accessing justice, the process I am broadly sketching out cannot be entirely collective, as that would mean losing sight of the individual whose access to justice is at the core of legal operations. Neither can it take the individual as the only entry point through which justice is distributed, because case-by-case litigation would continue to repeat the ineffectiveness of existing remedies for discrimination. The object of oversight is therefore not only individual acts of discrimination, but also the technological system itself that enables and manages collectives. The object of ex ante oversight and process would therefore be—another process?

Conclusions

In this article, I suggested that current procedural safeguards are not sufficient to address algorithmic discrimination and to ensure access to justice. Access to justice has its roots in the modern procedural paradigm, and hence concrete procedural tools that aim to address and remove algorithmic discrimination are built on both hidden and visible dependencies within that paradigm. I suggested that procedures are a type of technology, and can be treated as design objects.

I used algorithmisation and automated decision-making as a tool to reveal the structural and ideological weaknesses of the procedural law framework. I have argued that two existing technological discourses affect the EU’s reactions in developing new mechanisms to deal with algorithmic discrimination. First, we are fixated on data and have the tendency to perceive technology and procedures as neutral things that operate outside human biases. Second, procedural remedies focus on individuals and individual redress as the way to access justice. However, law is not individual-centric as such, but has become so through institutional development and design. Future policymaking should acknowledge that even though procedures are individual-centric for a reason, this should not prevent future procedural reforms from adopting new forms of ex ante collective protection.

Discrimination, technology, and collectivity are entwined and cannot be treated separately. Therefore, redesign of collective protection should recognise the individual person behind the data and the particularities of individual encounters with injustice. However, the procedural design should also allow a more rigorous action at a collective level in order to close the distance created by technologies.

I used feminist critique of individualised and inflexible law, which provides a useful point from which to rethink procedural design and algorithmic discrimination as collective and co-constitutive processes. On a more concrete level, I suggested ex ante oversight as a novel and promising way to address algorithmic discrimination collectively. Overall, the regulatory framework already includes some mechanisms that provide ex ante protection and collective protection; collective and proactive protection are direct or side effects of these mechanisms. However, I emphasised the role of public institutions to point out the relevance that public oversight and public enforcement have for preventing structural discrimination. Ex ante oversight could be a hybrid mechanism, located at the crossroads of procedural and administrative law. It would take away from individuals the responsibility to detect discrimination and pursue remedies, placing it not in the hands of industry or technology, but in the hands of public oversight authorities.

References

Adam, A. (1998). Artificial knowing: Gender and the thinking machine. Routledge.

Akrich, M. (1992). The de-scription of technical objects. In W. Bijker & J. Law, Shaping technologies/building society: Studies in sociotechnical change (pp. 205–224). MIT Press.

Arendt, H. (1998). The human condition (2nd ed). University of Chicago Press.

Arrow, K. (1994). Methodological Individualism and Social Knowledge. The American Economic Review, 84(2), 1–9.

Bacchi, C. L., & Goodwin, S. (2016). Poststructural policy analysis: A guide to practice. Palgrave Macmillan.

Baer, H. (2016). Redoing feminism: Digital activism, body politics, and neoliberalism. Feminist Media Studies, 16(1), 17–34. https://doi.org/10.1080/14680777.2015.1093070

Bartlett, K. (1990). Feminist legal methods. Harvard Law Review, 103(4), 829–888. https://doi.org/10.2307/1341478

Bender, L. (1988). A Lawyer’s Primer on Feminist Theory and Tort. Journal of Legal Education, 38(1/2), 3–37.

Bosters, T. (2017). Collective Redress and Private International Law in the EU. T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-186-9

Callon, M. (1984). Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay. The Sociological Review, 32(1_suppl), 196–233. https://doi.org/10.1111/j.1467-954X.1984.tb00113.x

Cappelletti, M., & Garth, B. (1978). Access to Justice: The Newest Wave in the Worldwide Movement to Make Rights Effective. Buffalo Law Review, 27(2), 181–292.

Cappelletti, M., Garth, B., & Trocker, N. (1982). Access to Justice, Variations and Continuity of a World-Wide Movement. Rabels Zeitschrift Für Ausländisches Und Internationales Privatrecht, 46(4), 664–707.

Cappelletti, M., Kollmer, P. J., & Olson, J. M. (1989). The judicial process in comparative perspective. Clarendon Press.

Christodoulidis, E. (2004). The Objection that Cannot Be Heard: Communication and Legitimacy in the Courtroom. In A. Duff (Ed.), The trial on trial. Hart.

Cohen, J. E. (2019a). Between truth and power: The legal constructions of informational capitalism. Oxford University Press.

Cohen, J. E. (2019b). Turning Privacy Inside Out. Theoretical Inquiries in Law, 20(1), 1–31. https://doi.org/10.1515/til-2019-0002

Crenshaw, K. (2018). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory, and Antiracist Politics [1989]. In K. T. Bartlett & R. Kennedy (Eds.), Feminist Legal Theory (1st ed., pp. 57–80). Routledge. https://doi.org/10.4324/9780429500480-5

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.

Diver, L. (2020). Digisprudence: The design of legitimate code [Preprint]. LawArXiv. https://doi.org/10.31228/osf.io/nechu

Eubanks, V. (2011). Digital dead end: Fighting for social justice in the information age. MIT Press.

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor (First Edition). St. Martin’s Press.

Farkas, L. (2014). Collective Actions under European Anti-Discrimination Law. European Anti-Discrimination Law Review, 19. http://ec.europa.eu/justice/discrimination/files/adlr-19-2014-final.pdf

Foucault, M. (1977). Discipline and punish: The birth of the prison (1st American ed). Pantheon Books.

Gandy, O. H. (2016). Coming to terms with chance: Engaging rational discrimination and cumulative disadvantage.

Gilbert, J. (2014). Common ground: Democracy and collectivity in an age of individualism. Pluto Press.

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Gómez, M. A. (2012). Will the birds stay south? The rise of class action and other forms of group litigation across Latin America. University of Miami Inter-American Law Review, 43(3), 481–521.

Gómez, M. A. (2016). Smoke signals from the south: The unanticipated effects of an ‘unsuccessful’ litigation on Brazil’s anti-tobacco war. In D. R. Hensler, C. Hodges, & I. Tzankova (Eds.), Class actions in context: How culture, economics and politics shape collective litigation. Edward Elgar Publishing.

Habermas, J. (2001). Between facts and norms: Contributions to a discourse theory of law and democracy (W. Rehg, Trans.; 1st MIT Press paperback ed., 4th printing). MIT Press.

Hacker, P. (2018). Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies Against Algorithmic Discrimination Under EU law. Common Market Law Review, 55(4), 1143–1185.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Hildebrandt, M. (2017). Saved by Design? The Case of Legal Protection by Design. Nanoethics, 11(3), 307–311. https://doi.org/10.1007/s11569-017-0299-0

Hodges, C. (2015). Mass collective redress: Consumer ADR and regulatory techniques. European Review of Private Law, 23(5), 829-.

Hodges, C. (2019). Collective Redress: The Need for New Technologies. Journal of Consumer Policy, 42, 59–90. https://doi.org/10.1007/s10603-018-9388-x

Ingram, D. (2000). Group rights: Reconciling equality and difference. University Press of Kansas.

Isaacs, T. (2018). What Would a Feminist Theory of Collective Action and Responsibility Look Like? In K. Hess, V. Igneski, & T. Isaacs (Eds.), Collectivity: Ontology, Ethics, and Social Justice. Rowman & Littlefield International.

Jackson, S. J. (2014). Rethinking Repair. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies (pp. 221–240). The MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0011

Jasanoff, S. (Ed.). (2004). States of knowledge: The co-production of science and social order. Routledge.

Jouët, J. (2018). Digital Feminism: Questioning the Renewal of Activism. Journal of Research in Gender Studies, 8(1), 133. https://doi.org/10.22381/JRGS8120187

Kaminski, M., & Malgieri, G. (2021). Algorithmic impact assessments under the GDPR: producing multi-layered explanations. International Data Privacy Law, 11(2), 125–144. https://doi.org/10.1093/idpl/ipaa020

Katell, M., Young, M., Dailey, D., Herman, B., Guetler, V., Tam, A., Binz, C., Raz, D., & Krafft, P. (2020). Toward situated interventions for algorithmic equity: Lessons from the field. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 45–55. https://doi.org/10.1145/3351095.3372874

Koivisto, I. (2020). Thinking Inside the Box: The Promise and Boundaries of Transparency in Automated Decision-Making (Working Paper EUI AEL, 2020/01). Academy of European Law Working Papers. http://hdl.handle.net/1814/67272

Koulu, R. (2020). Human control over automation: EU policy and AI ethics. European Journal of Legal Studies, 1, 9–46. https://doi.org/10.2924/EJLS.2019.019

Lahuerta, S. (2018). Enforcing EU equality law through collective redress: Lagging behind? Common Market Law Review, 55(3), 783–817.

Latour, B. (1988). The pasteurization of France (A. Sheridan & J. Law, Trans.). Harvard University Press.

Latour, B. (2007). Reassembling the social: An introduction to Actor-Network-Theory (1. publ. in pbk). Oxford Univ. Press.

MacDermott, T. (2018). The collective dimension of federal anti-discrimination proceedings in Australia: Shifting the burden from individual litigants. International Journal of Discrimination and the Law, 18(1), 22–39. https://doi.org/10.1177/1358229118759712

MacKinnon, C. A. (1979). Sexual harassment of working women: A case of sex discrimination. Yale University Press.

MacKinnon, C. A. (1987). Feminism unmodified: Discourses on life and law (9. printing). Harvard Univ. Press.

Mann, M., & Matzner, T. (2019). Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination. Big Data & Society, 6(2). https://doi.org/10.1177/2053951719895805

Mantelero, A. (2014). Social Control, Transparency, and Participation in the Big Data World. Journal of Internet Law, 23–29.

Mantelero, A. (2016). Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection. Computer Law & Security Review, 32(2), 238–255. https://doi.org/10.1016/j.clsr.2016.01.014

Marres, N. (2012). Material participation: Technology, the environment and everyday publics. Palgrave Macmillan.

Nissenbaum, H. (2001). Securing trust online: Wisdom or oxymoron? Boston University Law Review, 81(3). https://nissenbaum.tech.cornell.edu/papers/securingtrust.pdf

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). Crown.

Oswald, M., Grace, J., Urwin, S., & Barnes, G. C. (2018). Algorithmic risk assessment policing models: Lessons from the Durham HART model and ‘Experimental’ proportionality. Information & Communications Technology Law, 27(2), 223–250. https://doi.org/10.1080/13600834.2018.1458455

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061

Petkova, B., & Ojanen, T. (Eds.). (2020). Fundamental rights protection online: The future regulation of intermediaries. Edward Elgar Publishing.

Pfaffenberger, B. (1992). Technological Dramas. Science, Technology, & Human Values, 17(3), 282–312. https://doi.org/10.1177/016224399201700302

Réaume, D. (1988). Individuals, Groups, and Rights to Public Goods. The University of Toronto Law Journal, 38(1), 1. https://doi.org/10.2307/825760

Regan, P. M. (2002). Privacy as a Common Good in the Digital World. Information, Communication & Society, 5(3), 382–405. https://doi.org/10.1080/13691180210159328

Schneider, E. (1986). The Dialectic of Rights and Politics: Perspectives from the Women’s Movement. New York University Law Review, 61(4), 589–652.

Simon, J. (n.d.). The entanglement of trust and knowledge on the Web. Ethics and Information Technology, 12(4), 343–355.

Solum, L. (2004). Procedural Justice. Southern California Law Review, 78(1), 181–321.

Somek, A. (2008). Individualism: An essay on the authority of the European Union. Oxford University Press.

Swierstra, T., & Rip, A. (2007). Nano-ethics as NEST-ethics: Patterns of Moral Argumentation About New and Emerging Science and Technology. NanoEthics, 1(1), 3–20. https://doi.org/10.1007/s11569-007-0005-8

Van Zoonen, L. (2021). Performance and Participation in the Panopticon: Instruments for Civic Engagement with Urban Surveillance Technologies. In G. Jacobs, I. Suojanen, K. E. Horton, & P. S. Bayerl (Eds.), International Security Management (pp. 243–254). Springer International Publishing. https://doi.org/10.1007/978-3-030-42523-4_17

West, R. (1988). Jurisprudence and Gender. University of Chicago Law Review, 55(1), 1–72. https://doi.org/10.2307/1599769

Williams, B. A., Brooks, C. F., & Shmargad, Y. (2018). How Algorithms Discriminate Based on Data They Lack: Challenges, Solutions, and Policy Implications. Journal of Information Policy, 8, 78. https://doi.org/10.5325/jinfopoli.8.2018.0078

Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136.

Young, M., Magassa, L., & Friedman, B. (2019). Toward inclusive tech policy design: A method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology, 21(2), 89–103. https://doi.org/10.1007/s10676-019-09497-z

Zuboff, S. (2019). The age of surveillance capitalism: The fight for the future at the new frontier of power. Profile Books.

Footnotes

1. See https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G, page visited 19.5.2021.

2. See https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/, page visited 19.5.2021.

3. See https://www.business-humanrights.org/en/latest-news/italy-court-rules-against-deliveroos-rider-algorithm-citing-discrimination/, page visited 19.5.2021.

4. See e.g., ‘White Paper on Artificial Intelligence’ from the European Commission (COM(2020) 65 final); ‘the Ethical Guidelines for Trustworthy AI’ and ‘Policy and Investment Recommendations for AI’ by the High-Level Expert Group (AI HLEG) set up by the European Commission, and the proposal for harmonised rules on Artificial Intelligence (Artificial Intelligence Act) from the European Parliament and the Council (COM(2021) 206 final, 21.4.2021).

5. See https://ioewec.newsletter.ioe-emp.org/industrial-relations-and-labour-law-february-2021/news/article/italy-bologna-labour-court-held-a-previously-used-algorithm-of-a-platform-company-as-discriminatory, visited 19.5.2021.

6. See https://www.yvtltk.fi/en/index/opinionsanddecisions/decisions.html, visited 19.5.2021.

7. See also the proposal for Artificial Intelligence Act, which proposes ex ante conformity assessment as a regulatory tool for AI systems, e.g. art. 19.