Discrimination grounds and personalised pricing: Consumer perceptions of fairness, norm alignment, legality, and trust in markets

Kimia Heidary, Department of Business Studies, Leiden University, Netherlands
Jean-Pierre van der Rest, Department of Business Studies, Leiden University, Netherlands
Bart Custers, eLaw - Center for Law and Digital Technologies., Leiden University, Netherlands

PUBLISHED ON: 18 Oct 2024 DOI: 10.14763/2024.4.1809

Abstract

This article explores consumer perceptions of different grounds on which online prices can be personalised. We conducted a survey among Dutch consumers (n = 727), presenting them with 25 segmentation bases drawn from legally permissible and legally prohibited grounds. We then ranked these bases and the accompanying consumer perceptions across five dimensions: fairness, alignment with personal norms, alignment with social norms, perceived legality, and trust in markets. We find that while consumer perceptions generally align with what is currently prohibited in law, some “new” grounds, in particular intelligence and physical appearance, elicit negative perceptions similar to those of legally prohibited grounds. This raises questions regarding the further regulation of personalised pricing. We discuss the pros and cons of updating legislation to better reflect (new) ethical and social norms.
Citation & publishing information
Published: October 18, 2024
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Personalised pricing, Consumer profiling, Price discrimination, Norms, Anti-discrimination law
Citation: Heidary, K., van der Rest, J.-P., & Custers, B. (2024). Discrimination grounds and personalised pricing: Consumer perceptions of fairness, norm alignment, legality, and trust in markets. Internet Policy Review, 13(4). https://doi.org/10.14763/2024.4.1809

1. Introduction

Online price discrimination, or personalised pricing, the practice of setting prices based on consumer characteristics and behaviour with the help of AI-based technologies and big data processing, is slowly becoming a reality. The rapidly increasing flow of personal data, combined with technological developments that make it possible to analyse and value these data, increasingly enables companies to personalise prices based on what they (think they) know about consumers. These data may be observed, derived from behaviour, or provided by consumers themselves, and can include a person’s device type or purchasing behaviour, the type of pages they view, social media data, and even the battery percentage of the device used to surf the internet (OECD, 2018a). Especially when these data are combined, companies can build more sophisticated profiles and experiment with pricing accordingly. Personalising prices and differentiating between types of consumers can be quite profitable for companies (Odlyzko, 2003; Shiller, 2014). As the practice continues to evolve, consumer profiles and the underlying technology are expected to become more sophisticated (OECD, 2018a).

Differentiating between consumers and the price they pay is not a new phenomenon. Examples of charging ships different lighthouse fees can already be found in 13th century England (Odlyzko, 2004). From an economics perspective, such practice is viewed in a neutral way. The term “price discrimination” represents a value-free concept, without the normative (often negative) connotations that a common understanding of the word “discrimination” carries in other disciplines such as law (Steppe, 2017). Price discrimination in an economic sense is not always harmful and can even be beneficial, for instance by opening the market to consumers who might not have been able to purchase a product or service under a uniform pricing system (Sauter, 2020). At first glance, economics textbooks present quite innocent and generally accepted ways of distinguishing between consumers. For example, students and the elderly may be offered discounted prices. Loyalty discounts for returning customers are another well-known example. In addition, prices can differ when products do not have a fixed price, such as at a car dealership or a street vendor, where consumers with strong negotiating skills may walk away with better deals than consumers who do not know how overpriced the product is or who cannot hide their interest well enough.

Many forms of price discrimination are much less accepted. Empirical research and anecdotal evidence indicate that discriminatory pricing often meets with resistance from consumers, who generally view the practice as unfair and deem it illegitimate (Poort & Zuiderveen Borgesius, 2019; Turow et al., 2005; Priester et al., 2020). Some examples, although deeply rooted in our daily lives, still raise debates about acceptability and fairness. A typical example is distinguishing between consumers based on gender, with women often getting the short end of the stick and paying a higher price for products and services: the so-called “pink tax” (Department of Consumer Affairs, 2015). Another example is selectively charging higher prices, for instance to consumers who live in a wealthier neighbourhood, or imposing a “fine” on loyal customers (Maxwell & Garbarino, 2010; CMA, 2020). Social norms are an important explanation of why some practices are deemed more (un)fair than others. Consumers who perceive a pricing practice to violate a social norm react negatively towards the company, for instance through lower purchase intentions (Campbell, 1999) and loss of trust (Garbarino & Lee, 2003).

The current European legal framework does not explicitly prohibit online price discrimination, but it does set boundaries on the lawful use and processing of data for personalising prices and on the transparency that must be provided by companies engaging in the practice.1 Several authors have questioned whether the current European legal framework is equipped to address the challenges associated with online price discrimination (Sears, 2021; Barros Vale, 2020; Grochowski et al., 2022). One of these challenges is the discrimination that can occur with the use of AI-based pricing algorithms. Anti-discrimination law and data protection law protect certain “sensitive” characteristics and fundamental choices that cannot be changed, or at least not without high damage to one’s core identity, such as religion, gender, and sexual orientation (Clarke, 2015; Khaitan, 2015). The use of such grounds in personalised pricing would constitute (in)direct discrimination and likely elicit negative reactions from consumers. Even if companies make efforts not to include such sensitive (synthetic) data in their analyses, the technology used can still indirectly lead to the systematic disadvantaging of certain (groups of) consumers (Calders & Žliobaitė, 2013). In other words: “insensitive” (non-legally prohibited) data can show strong associations with sensitive or legally prohibited data (Li, 2022). Moreover, some “newer” grounds or profiles that could be used for personalised pricing, such as intelligence or browsing activity, may exhibit similar immutability and vulnerability, or are at least not within a person’s control, which could warrant further regulatory protection (Wachter, 2022). In order to assess to what extent the current legal framework is aligned with social norms and perceptions surrounding personalised pricing, it is imperative to study existing fairness perceptions regarding both legally prohibited and permissible grounds.

The literature tends to pay little attention to the various grounds for discrimination on the basis of which prices can be personalised. There is work that has explored consumer perceptions of personalised pricing (e.g. Turow et al., 2005; Poort & Zuiderveen Borgesius, 2019), as well as some specific non-prohibited grounds (e.g. location, device type, purchase history) for personalised pricing (Priester et al., 2020; Hufnagel et al., 2022). Yet, a more comprehensive review of perceptions regarding discrimination grounds – legally prohibited or not – is missing. While some grounds and the processing thereof are protected under data protection law and anti-discrimination law, the question remains how well the legally protected grounds align with consumer (un)fairness perceptions, and to what extent the EU legal framework can be leveraged to provide future protection against discrimination and/or exploitation on the basis of these grounds – especially given that new segmentation criteria (i.e. discrimination grounds that are not legally prohibited) could still have (unforeseen) legally discriminatory outcomes. The focus of this article is mostly on the role of anti-discrimination law, as the grounds that we examine flow mostly from this framework. It builds on the existing debate regarding the role that anti-discrimination law can play in addressing the discrimination that can result from the use of pricing algorithms (Tanna & Dunning, 2023; Wachter, 2022). However, we believe that the findings have implications for other fields of law, particularly regarding enforcement.

Therefore, the aim of this research is twofold. First, we provide a comprehensive overview of consumer perceptions regarding known grounds that can be used to personalise prices. Here, we do not only analyse grounds that are already prohibited or declared sensitive by law, but also perceptions regarding “new” grounds (i.e. grounds that have more recently come into play and are not legally prohibited), as they are heavily intertwined (Solove, 2024). Second, using this overview, we examine whether there is a gap between perceptions of discrimination grounds that are legally prohibited and those that are not. Our findings provide input for the discussion on the extent to which current legislation offers appropriate safeguards against personalised pricing and which grounds, if any, should be better protected in the future (e.g. Van der Rest et al., 2020). We explored consumer perspectives by conducting a survey among 727 Dutch consumers.

The paper is structured as follows. In Section 2, we review the current literature on personalised pricing and the current legal framework. In Section 3, we explain the methodology used in the empirical study. Section 4 presents the survey results and Section 5 discusses the implications of the findings of our study, particularly the extent to which there is a gap between the current legal landscape and the perceptions of consumers, and whether the legal framework should be reconsidered, including avenues for future research. In Section 6, we provide conclusions.

2. Literature review

2.1 Online price discrimination

Online price discrimination is generally defined as charging different prices to consumers for the same product, based on inferences drawn from consumer data with the help of pricing algorithms and big data processing (Zuiderveen Borgesius & Poort, 2019). Here, the difference in price cannot be explained by a difference in costs, but rather by the information that the company has about its (prospective) clients (Carroll & Coates, 1999). Price differences due to, for example, higher shipping or delivery costs, or due to insured parties posing a higher risk to an insurer, are not considered price discrimination (Preston McAfee, 2008). The reasoning behind price discrimination is that consumers vary in their willingness to pay for products and services, as they value these differently (OECD, 2016). Companies observe such differences in consumers’ often fluctuating preferences and assess their willingness to pay accordingly. These assessments do not have to be accurate to the penny in order to be considered price discrimination (OECD, 2018a). If there is no discernible difference between consumers in their willingness to pay, it is often not profitable for a company to roll out a discriminatory pricing strategy (Stole, 2007).

When creating and optimising segments, companies can use a plethora of consumer data. These data include, but are not limited to, observed data (e.g. information about device, purchase history), volunteered data (e.g. name, gender), and inferred data (e.g. income, life phase) (OECD, 2018b). Data originally collected for other purposes can be reused for this (Custers & Bachlechner, 2017). Section 2.2 provides an overview of legally prohibited grounds of discrimination on which companies could, in theory, base their analyses.

2.2 Legally prohibited grounds of discrimination

The current European legal landscape does not explicitly prohibit personalised pricing; the only direct mention of personalised pricing can be found in Directive 2019/2161,2 on the basis of which companies are required to disclose the use of automated personalised pricing to consumers. Interestingly, companies are not required to disclose the parameters used for personalised pricing, only that the price has been personalised. Nevertheless, there are two concrete starting points for protected or sensitive grounds for price discrimination: anti-discrimination law and data protection law. We zoom in on the Dutch constitution and its enumeration of discrimination grounds, as it was recently updated to include new grounds of discrimination and, concurrently, these were the grounds that we presented to survey respondents.

The Charter of Fundamental Rights of the European Union (CFEU), which entered into force on 1 December 2009, enshrines various fundamental rights, freedoms, and principles. The principle of equal treatment is protected in Article 21 CFEU, which states that any discrimination shall be prohibited and provides a non-exhaustive enumeration of fourteen discrimination grounds, more than any national constitution of an EU Member State. The discrimination grounds mentioned include sex, race, genetic features, religion, disability, and age. The rationale behind these protected characteristics is to protect both immutable characteristics (i.e. characteristics that were not chosen and cannot be changed, or at least not without high damage to one’s core identity), such as gender and ethnicity, and fundamental choices, such as religion (Clarke, 2015; Khaitan, 2015). Although the scope of the CFEU is formally restricted to EU institutions and to Member States when implementing EU law (Article 51 CFEU), the CJEU has recognised horizontal effects in recent jurisprudence (Muir, 2019).3

The Dutch national constitution provides a similar non-exhaustive enumeration, although with fewer grounds listed. Article 1 of the constitution (Grondwet) states that all persons in the Netherlands shall be treated equally in equal circumstances, and that discrimination on the grounds of religion, belief, political opinion, race or sex, or any other grounds whatsoever shall not be permitted. As of 2023, the Dutch constitution includes two new grounds of discrimination: disability and sexual orientation (Corder, 2023). The Equal Treatment Act (AWGB), which dates from 1994 and constitutes secondary legislation, further elaborates on Article 1 of the constitution. The AWGB prohibits making a distinction on the basis of an exhaustive enumeration of sensitive criteria. The most fundamental objective of this Act is the protection of human dignity, more specifically to promote equal participation in society, without being subjected to discrimination or exclusion on the basis of personal characteristics.4 The exhaustive enumeration in Article 1 prohibits discrimination on the basis of religion, belief, political opinion, race, gender, nationality, sexual orientation, and marital status. In later subordinate legislation, additional grounds were added, including disability or chronic illness, age, and type of employment contract.5 It is important to note that, across European constitutions, there is a clear lack of harmonisation in the discrimination grounds listed and in whether an exhaustive or non-exhaustive enumeration is used (Custers, 2023). For instance, wealth and social status are protected characteristics in 11 out of 27 Member States but are not mentioned in Article 21 CFEU.

In addition to anti-discrimination law, data protection law also sets legal boundaries to the use of certain grounds in personalising prices. Although the GDPR does not directly deal with the bias and discrimination that can flow from the use of AI-based pricing algorithms, it is an important instrument in addressing these concerns, as it provides both ex post legal remedies and ex ante measures aimed at preventing unfair processing of personal data (Ivanova, 2020; Li, 2022). A key objective of the GDPR is to protect fundamental rights of data subjects, among which are the right to privacy and the right to non-discrimination.6 In accordance with Article 5(1)(a), personal data shall be processed lawfully, fairly, and transparently. Although “fairness” is an elusive concept, it at least means that data processing should not create detrimental effects and should not discriminate or exploit consumer vulnerabilities (Clifford & Ausloos, 2018; Tanna & Dunning, 2023). Articles 9 and 10 GDPR prohibit the processing of “sensitive” personal data, including trade union membership, genetic and biometric data, health data, and criminal convictions, provided that the exemptions of Article 9(2) do not apply. This ties in closely with transparency: without transparency, it is highly difficult to identify breaches of, or (non-)compliance with, the legal framework (Tanna & Dunning, 2023). Article 22 GDPR, the right not to be subjected to automated decision-making, could be an important instrument to address decisions that flow from pricing algorithms. However, there is currently disagreement on the application of Article 22 GDPR to personalised pricing, which weakens its potential remedial role (Zuiderveen Borgesius & Poort, 2019; Wong, 2020). Furthermore, affinity-based personalised pricing, where consumers are grouped according to their inferred preferences rather than their personal data, may circumvent the legal frameworks of data protection law and anti-discrimination law (Wachter, 2020; Li, 2022).

2.3 Reconsidering price discrimination grounds

Apart from grounds that are legally protected, there are many conceivable segmentation bases (e.g. loyalty status, intelligence/education level, income) which are not protected under anti-discrimination or data protection law (see Baker, 2001; Tannock, 2008; Maxwell & Garbarino, 2010). Grounds that companies have started to use include, for example, physical appearance (Hern, 2020) and battery level (Natelhoff, 2023). While these grounds are permissible, they can lead to consumer backlash; when consumers deem them unfair and exploitative, there can be a sudden and strong reaction on social media. However, little attention has been paid to the (legal) implications of the plethora of (new) segmentation grounds on which personalised pricing can be based online, beyond a call to broaden the consumer backlash and corporate social responsibility (CSR) literature to include personalised pricing (Van der Rest et al., 2022), and a study that explored why companies are reluctant to use personalised pricing online (Heidary et al., 2022). Given the lack of harmonisation of protected grounds across national constitutions, the risk of unintended or indirect discrimination, and consumer unfairness perceptions, a reconsideration of the current European legal framework nonetheless seems warranted.

First, the level of protection provided against discrimination varies across Member States, both in the number of grounds listed and in the wording of the enumeration – some Member States provide exhaustive enumerations of grounds, while others provide a non-exhaustive enumeration, or even no enumeration at all (Custers, 2023). Although the Dutch constitution was recently updated and two grounds were added to reflect developments in society, updating national constitutions to reflect such changes takes much time and effort. This raises questions regarding the robustness of existing legal frameworks in relation to developments in the technology surrounding personalised pricing. There has been considerable scepticism about whether national constitutions can (and should) keep up with societal developments (Gerards, 2016); this point has also been raised for anti-discrimination law in general (Tanna & Dunning, 2023). The question, then, is whether other modes of regulation are more suitable to deal with the challenges associated with personalised pricing, such as the constantly changing grounds on which a personalised price can be based.

Second, there is a risk of unintended or indirect discrimination. In the case of direct discrimination, a legally prohibited ground is decisive for the unequal treatment, for example, refusing to sell a product to a male customer because of his gender. In the case of indirect discrimination, a seemingly neutral ground leads to a distinction that has the same effect as direct discrimination. This distinction can be either intentional or unintentional (Dinur, 2021; ECHR, 2022). A typical example of indirect discrimination is that of online tutoring company The Princeton Review. The company charged customers different prices based on geographic location, which is not legally prohibited. However, as a result of this pricing strategy, Asian consumers were almost twice as likely to be charged a higher price than non-Asians – discrimination on the basis of ethnicity, a legally protected characteristic (Larson et al., 2015). Regardless of whether such an effect is intentional, direct and indirect discrimination are prohibited in most jurisdictions (Van Bekkum & Zuiderveen Borgesius, 2023). Indirect discrimination can be difficult to prevent, discover, and enforce against. Even if companies do not directly use sensitive data in their analyses, neutral characteristics can still be correlated with sensitive characteristics (Calders & Žliobaitė, 2013; Solove, 2024). For example, companies can infer health data, income, and protected demographics from consumers’ purchasing habits (Duhigg, 2012; Solove, 2024). Furthermore, it is difficult to discover the actual reason for a price fluctuation and to enforce accordingly – if a price fluctuation is even observed at all (Mikians et al., 2012; Hannak et al., 2014). Scholars have proposed developing discrimination-aware data mining tools to counter the difficulties associated with the opacity of the technologies used in data analysis (Berendt & Preibusch, 2017).

Third, the instances of personalised pricing that have come to light, many of which used “new” grounds for personalisation, were met with much consumer backlash (Rosencrance, 2000). When asked about their perception of the practice, consumers report high unfairness perceptions; they deem the practice to be unethical and illegitimate (Turow et al., 2005; Poort & Zuiderveen Borgesius, 2019). Research has shown that consumers deem some grounds to be more unfair than others and that there are nuances in perceptions of fairness. For instance, personalisation based on device type is deemed more unfair than location-based discrimination (Hufnagel et al., 2022). Pricing based on location, in turn, is deemed less fair than purchase history as a discriminatory ground (Priester et al., 2020). However, a complete overview that considers legally prohibited grounds as well as some newer grounds is still missing. What is perceived to be fair may strongly depend on existing societal norms, as well as personal norms (Maxwell & Garbarino, 2010). In general, unfairness perceptions flow from the sense that decisions about an important element of a transaction (i.e. the price) are made with generally low transparency, and on the basis of seemingly arbitrary – or at least largely immutable – characteristics (Miller, 2014). A violation of (perceived) pricing norms has been shown to lead to unfairness perceptions (Maxwell, 2002). Unfairness perceptions, in turn, may have far-reaching consequences, such as a decrease in consumers’ trust in the digital market, as well as less intention to participate in the digital market (Garbarino & Lee, 2003).

It is therefore imperative to explore the current norms and fairness perceptions surrounding personalised pricing, in order to assess the extent to which they align with the existing legal framework for discrimination grounds. These findings add to the discussion on the extent to which legal regulation can address these fairness concerns and reflect existing (and future) norms (Van der Rest et al., 2020; Van der Rest et al., 2022). As some scholars suggest, altering the existing enumeration of sensitive data might be a dead end, as non-sensitive data are so heavily intertwined with sensitive data (Solove, 2024). Solove suggests that (privacy) law must focus on harms and risks rather than on the nature of pre-defined personal data, where “harm involves negative consequences from the collection, use, or transfer of personal data that affect individuals or society […] risk involves the likelihood and gravity of certain harms that have not yet occurred” (2024, p. 1128). While it is complex to assess the degree of harm inflicted by the use of grounds for personalised pricing, assessing consumer perceptions in terms of, for instance, their trust in the digital market can provide insight into the degree of harm that may be inflicted. A low degree of trust could cause existing consumers to withdraw or decrease their participation in the online market (OECD, 2015).

3. Methodology

3.1 Sample

To examine consumers’ perceptions of various grounds of personalised pricing, we conducted a survey via a national online panel representing the Dutch online population, aged 18 years and older. We recruited 957 participants, who were rewarded with €2.25 for taking part. We excluded 204 participants who did not finish the complete survey (n = 102) or did not pass the attention check (n = 102). In addition, we manually excluded 26 respondents who both had a critical z-score7 (above ±3.29 SD) for the time spent on the page on which the grounds were listed and straight-lined their answers. We verified that the participants whom we excluded did not differ significantly from those we retained: using t-tests and chi-square tests for gender, age, education, and income, we found no significant differences between these two groups, meaning that we could continue with our analysis.
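For illustration, this exclusion and verification step can be expressed compactly in code. The sketch below is not the authors’ analysis script; it assumes a pandas DataFrame with hypothetical column names (page_time, age, gender, and one column per rated ground).

```python
# Illustrative sketch of the exclusion procedure (hypothetical column names).
import pandas as pd
from scipy import stats

def flag_exclusions(df: pd.DataFrame, ground_cols: list[str]) -> pd.Series:
    """Flag respondents with BOTH a critical z-score on page time
    (|z| > 3.29) AND straight-lined answers (no variation across grounds)."""
    z_time = (df["page_time"] - df["page_time"].mean()) / df["page_time"].std()
    straight_lined = df[ground_cols].nunique(axis=1) == 1
    return (z_time.abs() > 3.29) & straight_lined

def compare_excluded(df: pd.DataFrame, excluded: pd.Series) -> None:
    """Check that excluded respondents do not differ from retained ones:
    t-test for a continuous variable, chi-square for a categorical one."""
    t, p_age = stats.ttest_ind(df.loc[excluded, "age"], df.loc[~excluded, "age"])
    chi2, p_gender, _, _ = stats.chi2_contingency(pd.crosstab(excluded, df["gender"]))
    print(f"age: t = {t:.2f}, p = {p_age:.3f}; gender: chi2 = {chi2:.2f}, p = {p_gender:.3f}")
```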

Our final sample consisted of 727 participants. The average age was 34.6 years (SD = 7.7) and 62.7% were female. More than half of the participants had attained higher education (40.6% higher vocational training), and 31.3% of all participants indicated a gross household income of more than €50,000 per year.

3.2 Procedure and design

We surveyed consumer perceptions of discrimination grounds used in personalised pricing. The survey was translated from English to Dutch using back-translation8 (Brislin, 1970). The online survey consisted of three parts: (1) demographic questions, (2) discrimination grounds questions, and (3) concluding questions. It took about six to seven minutes to complete the survey.

Regarding the second part, participants were shown an information page where the setup of this part of the survey was explained. The page explained that companies increasingly experiment with prices, which can result in a higher or lower price for different consumers, and that on the next page participants would be shown various grounds on which companies can segment prices. The grounds were introduced with examples. For instance, the ground “religion” was accompanied by the examples “Christian”, “Muslim”, “Jew”, “Atheist”, etc. Participants’ perceptions of these grounds were measured along five dimensions.

As for the grounds, we compiled a (non-exhaustive) list of twenty-five grounds, based on the Dutch and European legal frameworks, anecdotal evidence of personalised pricing, and “new” grounds mentioned in the literature (see Appendix 1).9 We divided these grounds into five groups of five, ensuring a balanced combination of legally prohibited and permissible grounds per group. The rationale behind this choice was that, since participants would report their perceptions on five different dimensions, rating all twenty-five grounds on each dimension would strain their attention span and ability to compare grounds. To avoid response bias, we thus employed a between-subjects design, randomly assigning each participant to one of the five groups. Moreover, considerable effort was devoted to ensuring the comparability of the grouped samples.

Table 1 provides an overview of the twenty-five grounds and five groups. To verify that the randomisation was successful, we tested the groups for differences in demographics, reported internet usage, and online purchase behaviour. We found no significant differences between the groups, indicating that participants were distributed evenly across groups.
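A minimal sketch of such a randomisation balance check, assuming hypothetical column names (group, age, daily_internet_use, gender, education), could look as follows; a one-way ANOVA covers continuous variables and a chi-square test the categorical ones.

```python
# Illustrative randomisation check across the five groups (hypothetical columns).
import pandas as pd
from scipy import stats

def check_balance(df: pd.DataFrame) -> None:
    # Continuous variables: one-way ANOVA across the five groups.
    for col in ["age", "daily_internet_use"]:
        samples = [g[col].dropna() for _, g in df.groupby("group")]
        f, p = stats.f_oneway(*samples)
        print(f"{col}: F = {f:.2f}, p = {p:.3f}")
    # Categorical variables: chi-square test of independence with group.
    for col in ["gender", "education"]:
        chi2, p, _, _ = stats.chi2_contingency(pd.crosstab(df["group"], df[col]))
        print(f"{col}: chi2 = {chi2:.2f}, p = {p:.3f}")
```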

Table 1: Overview of grounds per group. The underlined grounds are legally prohibited either in the European legal framework or in the Dutch constitution. The grounds in italics are legally prohibited in some other national constitutions in the EU. We aimed for a balanced distribution of legally prohibited and legally permitted grounds per group

| Group 1 | Group 2 | Group 3 | Group 4 | Group 5 |
| --- | --- | --- | --- | --- |
| Religion | Gender/Sex | Device type | Purchase history | Intelligence |
| Ideology/Philosophy | Sexual orientation | Location | Social media data | Socioeconomic status |
| Political views | Marital status | Browsing activity | Loyalty status | Photo/Appearance |
| Race/Ethnicity | Impairment/Disability | Student status | Income/Wealth | Battery percentage |
| Nationality | Health data | Criminal record/offences | Age | Browser type |

3.3 Measures

The first part included four demographic measures: age, gender, income, and education level. The five (single-item) dimensions of part two were drawn from previous research: perceived fairness, alignment with personal norms, alignment with social norms, legality, and trust in markets. All items, unless otherwise indicated, used a seven-point Likert scale, with 1 = strongly disagree and 7 = strongly agree (see Appendix 2).

To assess consumer perceptions of fairness, we drew from Poort and Zuiderveen Borgesius (2019) and gave participants the following statement: “The use of this ground for setting prices is fair”, which they had to answer for each of the (five) grounds. For perceived alignment with personal and social norms, we drew inspiration from Garbarino and Maxwell (2010), who studied the effect of the belief that a given pricing practice is a norm of (American) society on, among others, fairness perceptions. To explore to what extent participants thought the use of grounds aligned with their personal norms, we asked them to indicate to what extent they agreed with the following statement: “Personally, I find it acceptable to base prices on this ground”. We assessed participants’ perceived alignment of a ground with existing societal norms with the following statement: “In society, it is considered acceptable to base prices on this ground”. To identify how participants assessed the legality of a discrimination ground, we drew from Turow et al. (2005) and included the following statement: “The use of this ground for setting prices should be legally prohibited”. Finally, to explore to what extent the use of certain grounds would diminish participants’ trust in the digital market, we presented the following statement: “The use of this ground for setting prices would diminish my trust in the market”. The trust measure was added as a dimension to empirically examine a concern raised in the literature, namely that perceptions of unfairness can lead to a loss of trust in both companies and the digital market, which could in turn lead to diminished participation in the digital market (OECD, 2018b; Malgieri, 2023). Trust in sellers has been empirically examined (Grewal et al., 2004; Hufnagel et al., 2022), but trust in markets, to our knowledge, has not.

Participants were asked a series of concluding questions in part three. We measured the overall attitude towards personalised pricing (“It is [acceptable]/[fair]/[reasonable] if an online store charges different prices for consumers based on personal data”). We merged these three items into one scale measuring the overall attitude towards personalised pricing; the scale had a high reliability (Cronbach’s α = 0.93). We measured participants’ opinions on whether consumers with a higher willingness to pay should pay higher prices (“I find it acceptable to charge people with a higher willingness to pay a higher price”). Finally, we asked some general questions about consumers’ daily internet usage, online purchase frequency in the last six months, and the average amount spent on said purchases.
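The scale reliability reported above can be computed with Cronbach’s alpha; a minimal, self-contained sketch is given below. The three item column names are hypothetical, not taken from the authors’ materials.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# e.g. cronbach_alpha(df[["pp_acceptable", "pp_fair", "pp_reasonable"]].to_numpy())
```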

4. Results

Before examining the ranking of grounds along the five different dimensions, we analysed the general attitude of participants regarding personalised pricing. Participants reported a negative attitude towards personalised pricing, with an average score of 2.47 (SD = 1.54). Regarding their opinion on solidarity, i.e. whether it is acceptable to charge people with a higher willingness to pay a higher price, participants on average somewhat disagreed with the statement (M = 3.09, SD = 1.65). Participants who reported a higher income found it less acceptable to charge higher prices to people with a higher willingness to pay, r(755) = -.098, p = .008. On average, in the last six months, participants reported making online purchases on a weekly basis, spending about two to three hours online for non-work-related purposes, and spending between €101 and €300 on online purchases.
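The reported income–acceptability association is a simple bivariate correlation. A minimal sketch with synthetic stand-in data (the real survey data are not reproduced here) could read:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
income = rng.normal(50_000, 15_000, size=727)   # synthetic stand-in values
acceptability = rng.integers(1, 8, size=727)    # 7-point Likert responses

# Pearson correlation between income and the acceptability of charging
# higher prices to consumers with a higher willingness to pay.
r, p = stats.pearsonr(income, acceptability)
print(f"r = {r:.3f}, p = {p:.3f}")
```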

The five dimensions across which we collected participant perceptions resulted in five rankings of the twenty-five grounds. For an overview, see Figure 1 and Table 2.


Figure 1: Overview of perceptions surrounding discrimination grounds (N = 727).
Table 2: Overview of perceptions per ground. The underlined grounds are legally prohibited either in the European legal framework or in the Dutch constitution. The grounds in italics are legally prohibited in some other national constitutions in the EU

| Ground | Fairness | Personal norm | Social norm | Should be prohibited | Loss of trust |
| --- | --- | --- | --- | --- | --- |
| Age | 2.72 | 2.76 | 3.20 | 5.08 | 4.92 |
| Battery percentage | 2.79 | 2.64 | 2.94 | 4.84 | 4.88 |
| Browser type | 2.78 | 2.62 | 3.10 | 5.14 | 4.93 |
| Browsing activity | 3.08 | 3.00 | 3.39 | 4.70 | 4.80 |
| Criminal record/offences | 2.77 | 2.90 | 3.14 | 4.95 | 4.72 |
| Device type | 2.92 | 2.91 | 3.25 | 4.80 | 5.00 |
| Gender/Sex | 2.41 | 2.25 | 2.73 | 5.30 | 5.18 |
| Health data | 2.53 | 2.52 | 2.95 | 5.40 | 5.23 |
| Ideology/Philosophy | 2.83 | 2.78 | 3.07 | 5.08 | 5.11 |
| Impairment/Disability | 2.83 | 2.74 | 2.86 | 5.01 | 4.82 |
| Income/Wealth | 3.45 | 3.36 | 3.69 | 4.32 | 4.51 |
| Intelligence | 2.52 | 2.43 | 2.80 | 5.39 | 5.29 |
| Location | 3.57 | 3.54 | 4.03 | 4.31 | 4.35 |
| Loyalty status | 3.82 | 3.82 | 4.12 | 4.14 | 4.33 |
| Marital status | 2.96 | 2.92 | 3.16 | 5.00 | 5.08 |
| Nationality | 2.41 | 2.36 | 2.76 | 5.45 | 5.30 |
| Photo/Appearance | 2.71 | 2.50 | 3.06 | 5.35 | 5.27 |
| Political views | 2.63 | 2.48 | 2.90 | 5.38 | 5.35 |
| Purchase history | 2.99 | 2.94 | 3.54 | 4.69 | 4.99 |
| Race/Ethnicity | 2.44 | 2.28 | 2.42 | 5.30 | 5.07 |
| Religion | 2.27 | 2.17 | 2.33 | 5.36 | 5.26 |
| Sexual orientation | 2.66 | 2.52 | 2.83 | 5.19 | 5.12 |
| Social media data | 2.88 | 2.83 | 3.48 | 4.83 | 4.82 |
| Socioeconomic status | 3.27 | 3.23 | 3.74 | 4.44 | 4.70 |
| Student status | 3.49 | 3.55 | 3.80 | 4.09 | 4.16 |

First, fairness perceptions of the discrimination grounds revealed that participants rated all grounds as unfair (i.e. all under 4 on the 7-point scale). Out of all grounds, the use of loyalty status for personalised pricing was deemed the least unfair (M = 3.82, SD = 1.70), with location (M = 3.57, SD = 1.69) and student status (M = 3.49, SD = 1.77) in second and third place, respectively. Religion was deemed the most unfair ground (M = 2.27, SD = 1.50), with nationality (M = 2.41, SD = 1.60) and gender (M = 2.41, SD = 1.60) in shared second place and ethnicity (M = 2.44, SD = 1.70) in third place. For the complete ranking, see Figure 2.


Figure 2: Perceived fairness of discrimination grounds (N = 727).

Second, when asked about the alignment of the discrimination grounds with personal norms, a pattern similar to the fairness perceptions was observed: none of the grounds were rated as aligning with personal norms (i.e. all under 4 on the 7-point scale). The grounds that aligned the least with participants’ personal norms were religion (M = 2.17, SD = 1.55), gender (M = 2.25, SD = 1.59), and ethnicity (M = 2.28, SD = 1.57). Loyalty status (M = 3.82, SD = 1.74), student status, and location misaligned the least with personal norms. For the complete ranking, see Figure 3.


Figure 3: Perceived alignment with personal norms of discrimination grounds (N = 727).

Third, the grounds were also ranked in terms of their (perceived) alignment with existing social norms. Here, however, we observed a small but consistent difference with personal norms, t(579) = -7.47, p < .001, d = .31: for every ground, the social norm score was higher than the personal norm score. In other words, for all grounds, participants assumed higher acceptance in society than their own acceptance.
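A paired t-test with a paired-samples effect size of this kind can be sketched as follows; this is an illustrative reconstruction, not the authors’ code, and it assumes two aligned arrays of per-respondent mean scores.

```python
import numpy as np
from scipy import stats

def paired_norm_comparison(personal: np.ndarray, social: np.ndarray) -> None:
    """Paired t-test of personal vs. social norm scores, with Cohen's d
    for paired samples (mean difference divided by SD of the differences)."""
    t, p = stats.ttest_rel(personal, social)
    diff = personal - social
    d = diff.mean() / diff.std(ddof=1)
    print(f"t = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```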

The grounds that were perceived to align the least with societal norms were religion (M = 2.33, SD = 1.45), ethnicity (M = 2.42, SD = 1.53), and gender (M = 2.73, SD = 1.78). Loyalty status (M = 4.12, SD = 1.74), location (M = 4.03, SD = 1.60), and student status (M = 3.8, SD = 1.82) were perceived to align more with societal norms, with the former two scoring just above the midpoint of 4 on the 7-point scale; see Figure 4 for the complete ranking.


Figure 4: Perceived alignment with social norms of discrimination grounds (N = 727).

Fourth, we asked participants to what extent each ground should be legally prohibited. The three grounds that were deemed the most illegitimate were nationality (M = 5.45, SD = 1.60), health data (M = 5.4, SD = 1.59), and intelligence (M = 5.39, SD = 1.72). Student status (M = 4.09, SD = 1.89), loyalty status (M = 4.14, SD = 1.83), and location data (M = 4.31, SD = 1.70) were deemed the least illegitimate, although they still scored above 4 on the 7-point scale. Figure 5 shows the complete ranking.


Figure 5: Perceived illegitimacy of discrimination grounds (N = 727).

Fifth and finally, participants had to indicate to what extent the use of a certain ground would diminish their trust in the market. The use of political views (M = 5.35, SD = 1.54), nationality (M = 5.3, SD = 1.69), and intelligence (M = 5.29, SD = 1.75) were the top three grounds that would lead to a loss of trust. Student status (M = 4.16, SD = 1.81), loyalty status (M = 4.33, SD = 1.73), and location (M = 4.35, SD = 1.58) came out as the three grounds that would lead to the smallest loss of trust; still, all grounds scored above 4 on the 7-point scale, meaning that the use of any ground would diminish trust in markets. For the complete ranking, see Figure 6.


Figure 6: Reported loss of trust in the market in case of use of discrimination grounds (N = 727).

We then compared the legally prohibited grounds to the grounds that are not (yet) legally prohibited. We created two categories, bundling all perceptions for each of the five dimensions, and conducted paired-samples t-tests to compare the average scores of the two categories. For fairness, personal norms, and social norms, the mean score of the legally prohibited grounds was significantly lower than that of the non-prohibited grounds: prohibited grounds were perceived as more unfair (mean difference M = -.43, SD = 1.09), less in alignment with personal norms (M = -.45, SD = 1.10), and less in alignment with social norms (M = -.60, SD = 1.16). Additionally, prohibited grounds were deemed more illegitimate (i.e. less permissible) than non-prohibited grounds (M = .46, SD = 1.21) and were reported to lead to a higher loss of trust (M = .32, SD = 1.16).
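The bundling step can be sketched as follows: per respondent, average the ratings over the legally prohibited grounds they rated and over the non-prohibited grounds, then compare the two averages with a paired t-test. The column names and the two example ground lists below are illustrative assumptions; in the study, the classification follows Table 1.

```python
import pandas as pd
from scipy import stats

# Hypothetical classification of ground columns (see Table 1 for the real one).
PROHIBITED = ["religion", "gender", "race_ethnicity"]
NON_PROHIBITED = ["loyalty_status", "device_type"]

def compare_categories(df: pd.DataFrame) -> None:
    """Per respondent, average ratings within each category, then
    compare the two per-respondent means with a paired t-test."""
    prohibited_mean = df[PROHIBITED].mean(axis=1)
    permitted_mean = df[NON_PROHIBITED].mean(axis=1)
    t, p = stats.ttest_rel(prohibited_mean, permitted_mean)
    diff = prohibited_mean - permitted_mean
    print(f"mean diff = {diff.mean():.2f} (SD = {diff.std(ddof=1):.2f}), "
          f"t = {t:.2f}, p = {p:.3g}")
```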

In addition to categorising grounds based on their current legality, we also distinguished between immutable and (technically) mutable grounds. We selected three unambiguous grounds for each category. We considered ethnicity, sexual orientation, and intelligence to be immutable, and browser type, battery percentage, and device type to be mutable. As for the immutable grounds, ethnicity and sexual orientation are legally protected discrimination grounds, whereas intelligence is not. We found significant differences (p < .001) between the groups: immutable grounds were considered more unfair than mutable grounds (M = 2.53 vs. M = 2.82), less aligned with personal norms (M = 2.40 vs. M = 2.72), and less aligned with social norms (M = 2.68 vs. M = 3.10). Moreover, immutable grounds were deemed to be more illegitimate than mutable grounds (M = 5.30 vs. M = 4.93) and would diminish participants’ trust in the market more (M = 5.16 vs. M = 4.93).

5. Discussion

The aim of this research was to provide a comprehensive overview of consumer perceptions regarding grounds used for personalised pricing, and to examine whether there is a gap between perceptions of grounds that are prohibited and those that are not. Where previous research focused on general (un)fairness perceptions surrounding personalised pricing (Turow et al., 2009; Poort & Zuiderveen Borgesius, 2019) or zoomed in on specific non-legally prohibited grounds (Priester et al., 2020; Hufnagel et al., 2022), our research set out to map perceptions of various grounds across several dimensions. By combining both types of grounds (legally prohibited and not), it is possible to assess the extent to which current legislation aligns with societal perceptions and whether grounds that are not (yet) legally prohibited evoke similar responses as legally prohibited grounds. Our findings provide valuable input for the discussion on the extent to which current legislation offers – or should offer – appropriate safeguards against the challenges associated with personalised pricing.

We find considerable overlap between consumer perceptions of unfairness and of illegitimacy (i.e. which grounds should be legally prohibited). The grounds that participants in our survey consider most unfair are also the grounds that, in their view, most need protection by law, a finding largely consistent with what is already established in anti-discrimination and data protection law. Moreover, among the grounds that are not prohibited, intelligence and physical appearance scored high on unfairness and were reported to align the least with personal and social norms. Personalising prices on browser type and battery percentage also scored relatively high in terms of unfairness and misalignment with personal and social norms.

Perceptions of (social) norm misalignment could have far-reaching negative consequences for companies and the digital market. For the latter, we included the dimension “trust in market” to assess the extent to which the use of a ground would lead to (self-reported) loss of trust among participants. We found that the grounds deemed the least aligned with social norms and the most illegitimate were also accompanied by a higher self-reported loss of trust in the digital market. “Newer” grounds that scored high on norm misalignment, such as intelligence and appearance, showed a similar pattern.

5.1 Policy implications

From a policy perspective, it is interesting to zoom in on the grounds that are not yet legally prohibited, but that were perceived as unfair and in violation of personal and social norms: intelligence, appearance, battery percentage, and browser type. One could argue that these grounds could potentially be included in legislation as prohibited grounds – especially intelligence and appearance.

There are at least two arguments for this. One is the viewpoint that the law is a codification of social norms (Basu, 2002). The law is not a static system, but changes over time, reflecting changes in norms and perceptions in society. If social norms and perceptions change, in this case because new technologies enable new grounds for (price) discrimination, this can be sufficient reason to change the legislation accordingly. Legally prohibiting discrimination grounds that people consider unfair would mean a further alignment between the legal system and social norms and perceptions. This is essentially a fairness argument, which applies most strongly to inherently immutable personal characteristics.

A second argument to consider changing the law takes a more economic perspective: according to our findings, a lack of protection against price discrimination on grounds that are considered unfair would be accompanied by a relatively high reported loss of trust in the market (Cross, 2005). Hence, apart from the unfairness at an individual level, there is a larger economic effect that may provide an argument for the legislator to step in and offer protection through regulation.

When reconsidering the grounds that need protection, the question is which grounds to include. The current lack of harmonisation of grounds across EU Member States shows that there is no shared understanding of fairness, or at least no agreement on which grounds require protection (Custers, 2023). Although the reported loss of trust and the unfairness perceptions associated with newer grounds could justify protecting these grounds – or at least reassessing the current legal framework – the rapid pace at which the technology is developing makes it difficult to predict to what extent legal regulation could keep up. New grounds could emerge that are also perceived as unfair, but companies could also find ways to circumvent prohibited grounds by using proxies or engaging in indirect discrimination, both of which have proven difficult to detect and enforce against (Zuiderveen Borgesius, 2018). Therefore, adding several new prohibited discrimination grounds to the legal framework does not seem to be the way forward according to some authors (Solove, 2024).

Out of all the non-legally prohibited grounds, pricing based on intelligence and appearance received the most negative reactions. Intelligence and appearance are both (technically) immutable, meaning that they are unchangeable, or at least not changeable without significant effort. Discrimination based on intelligence or on appearance is not legally prohibited10 and happens on a daily basis (Tannock, 2008; Liu, 2017). Photos are not only considered biometric data,11 but can also readily reveal all kinds of attributes, such as race, religion, health, and ethnicity. While intelligence is for a large part innate, environmental factors such as a high socioeconomic status can contribute to developing intellectual skills, and vice versa: more intelligent individuals tend to achieve higher levels of education, occupational status, income, and even better health outcomes (Deckers et al., 2017; Bosma et al., 2007). However, since data are so heavily intertwined, it would be difficult to substantiate why exactly these two grounds would need to be added to the existing legal framework (Solove, 2024).

Alternatively, instead of providing justification for adding new prohibited grounds, our findings provide insight into the degree of harm that might be inflicted when specific grounds are used for price personalisation. In our findings, socioeconomic status emerges as a relatively acceptable – or less unacceptable – ground to base prices on. Socioeconomic status and its supposed protection are a point of discussion among EU Member States (Ganty & Benito Sanchez, 2021). Socioeconomic status is not (yet) legally protected in Dutch constitutional law, partly because there is still no consensus regarding whether it is an inherent or immutable characteristic (Tweede Kamer, 2020).12 In the context of personalised pricing, the effect of this pricing strategy on consumer welfare is ambiguous (Elegido, 2011). In some cases, it could be beneficial to distinguish between consumers according to their socioeconomic position, as it might allow certain consumers access to a product or service that they – because of their socioeconomic position – would not have been able to afford otherwise. However, the vulnerabilities that come with being part of a certain socioeconomic class (e.g. a low level of digital literacy) could very well be exploited by companies (Strycharz & Duivenvoorde, 2021). Furthermore, socioeconomic status and social class are heavily intertwined with sensitive data such as political preferences and ethnicity (Gandy, 2009). In line with Solove (2024), we propose that a case-by-case analysis is needed, focusing on how certain data are used in personalised pricing, rather than on the type of data. If such analyses reveal the need to include new discrimination grounds in legislation, this does not necessarily need to be done in formal legislation, such as international treaties, national constitutions, or equal treatment acts. The open-endedness of the listings of discrimination grounds (i.e. non-exhaustive listings that use phrases like “or other characteristics”) in many legal instruments allows judges and courts to qualify new grounds as illegal in particular contexts.

Furthermore, company behaviour that exploits consumer characteristics may be deemed unfair under competition and consumer law (Sauter, 2020; Li et al., 2023; Duivenvoorde, 2023). Nevertheless, online price discrimination remains difficult to detect due to the difficulty of isolating the grounds on which a price difference was based. Considering how the current European legal framework could be leveraged to address changes in the grounds that could be used for online price discrimination is therefore only one part of solving the puzzle. Increased transparency about the underlying mechanisms of a personalised price will further our knowledge about the current state of the art and the (personal) information that can be used to this end. The individual access and information rights in the GDPR could aid in overcoming consumers’ evidentiary deficiencies (Articles 13-15 GDPR; Hacker, 2018). Clearing up the current confusion surrounding the applicability of Article 22 GDPR would be another step towards more robust protection against online price discrimination (Wong, 2020). Furthermore, data protection law can provide ex ante measures to implement more transparency about the grounds used in data processing (Li, 2022).

5.2 Theoretical implications

Our findings are consistent with previous findings and observations that personalised pricing is generally viewed by the public as unfair and illegitimate (Poort & Zuiderveen Borgesius, 2019; Turow et al., 2005; Priester et al., 2020). The differences that we find between grounds are also in line with previous studies that focused on specific non-legally prohibited grounds. For instance, in line with Hufnagel et al. (2022), we find that personalised pricing based on location is deemed less unfair than device type. Furthermore, in line with Priester et al. (2020), we also find that location is deemed less unfair than purchase history. All grounds that we investigated rank in the lower half of the seven-point scale for fairness, personal norms, and social norms, meaning that no ground is considered fair for personalised pricing – only less unfair. A similar pattern is observed for perceived illegitimacy and for the effect that the use of the discrimination grounds has on consumers’ trust in markets. In sum, this confirms that the current perception of online price discrimination among consumers remains negative. Consumers’ unfairness perceptions can have far-reaching consequences, such as a decrease in trust in the market, which has been associated with lower digital participation (Malgieri, 2023).

5.3 Practical implications

For companies currently engaging or looking to engage in personalised pricing, this research provides an overview of how personalised pricing grounds are perceived by consumers. This is not to encourage them to focus their personalised pricing strategies on the grounds that are not yet legally prohibited or that are deemed less unfair, but rather to raise awareness that data are heavily intertwined and that their use could very well result in legally prohibited discrimination. Take loyalty status as an example: our findings showed that this was deemed one of the least unfair grounds and the least misaligned with current social norms. In practice, loyal customers are often offered a discount. Becoming a loyal member, however, often requires purchasing power, which is closely associated with income and socioeconomic status. In addition, signing up for a loyalty programme often entails giving up personal data, some of which might be legally protected. Therefore, caution is warranted when engaging in personalised pricing. Companies seem to be aware of this (Heidary et al., 2022).

5.4 Limitations and future research

In our research, we aimed to map a comprehensive overview of consumer perceptions and social norms surrounding grounds for personalised pricing. However, social norms can change over time and differ between cultures and countries (Maxwell & Garbarino, 2010). Since we asked respondents to report on personal norms and (perceptions of) social norms regarding (sometimes sensitive) discrimination grounds, there is a risk of respondents providing socially desirable answers (i.e. disapproving more strongly of sensitive discrimination grounds). We framed the question as “the use of this ground is fair” rather than “the use of this ground is unfair”, to mitigate any priming or steering effects. While we observed variety in the answers, meaning that not all consumers found the use of sensitive grounds unfair, we would still recommend that future research adds an indirect measure, or uses other, more observational, methods to study these consumer perceptions. Since we conducted our survey among Dutch consumers, we recommend that this research is replicated in other (European) countries and updated as new grounds emerge, to assess to what extent perceptions are universal across countries and time. We believe, however, that this study has a high level of generalisability with regard to the regulatory implications, as these are mainly a matter of European law. Given that national constitutions are also not fully harmonised in their implementation of protection against discrimination, it is imperative to assess the underlying social norms in other countries and the extent to which they align with the existing legal frameworks, to advance the regulatory debate.

Furthermore, in the information given to participants, we provided a definition of personalised pricing (“charging different prices to different consumers based on their personal data, such as personal characteristics or online behaviour”) and explained that this could result in a higher or lower price for certain (groups of) consumers. While we opted for this formulation, it leaves room for consumers to fill in for themselves whether the grounds they were shown would entail a higher or lower price. This neutral formulation could have resulted in different interpretations among consumers. Hence, we propose that future research also includes more specific scenarios, such as those explicitly mentioning higher or lower prices, to investigate whether and to what extent they influence (un)fairness perceptions. Not only are lower prices perceived as fairer, but they are also expected to be a more realistic future manifestation of personalised pricing (Heidary et al., 2022). Furthermore, future research should investigate the elasticity of perceived unfairness to different levels of price changes, against the background of different grounds: this would shed more light on where the boundaries of online price discrimination lie.

The focus of this article was predominantly on the regulatory frameworks of anti-discrimination law and data protection law. There has been increased attention to analysing online price discrimination through the lens of different fields of law, as the practice touches upon various principles: consumer empowerment and protection (consumer law), protection of human dignity and access to the market (anti-discrimination law), assuring a fair and competitive market (competition law), and protection of autonomy and privacy (data protection law). Considering the specific role of AI pricing algorithms, which can lead to uncertainties in the (desired) applicability of the current legal framework, it is imperative to consider other fields of law relevant to online price discrimination.

The current information requirement in Article 6(1)(ea) Consumer Rights Directive does not require companies to disclose the parameters used for personalised pricing, only the fact that the price has been personalised. This relatively low threshold contrasts with more recent initiatives addressing similar technologies under the EU “Digital Single Market Strategy”, which aims to ensure the free movement of goods, people, services and capital, fair competition, and a high level of consumer and personal data protection irrespective of place of residence (EC, 2015). These initiatives include the Digital Services Act (DSA),13 the Digital Markets Act (DMA),14 and the AI Act.15 Although these initiatives do not apply directly to online price discrimination, there is a clear trend towards more stringent transparency requirements for digital technologies, safeguards against manipulation resulting from such technologies, and protection of the internal market against the exploitative use of big data by large platforms.

For example, the DSA prescribes more stringent transparency requirements for practices that use similar data processing techniques, such as recommender systems and personalised advertisements.16 Targeting based on sensitive data as classified in Article 9(1) GDPR is banned under the DSA, on the premise that targeted advertising that profiles consumers based on their interests (i.e. behavioural advertising) can exploit consumer vulnerability and potentially manipulate these consumers.17 Depending on the effectiveness of enforcement of this provision, a similar ban could also be considered for online price discrimination. Moreover, the AI Act imposes more stringent requirements on AI systems that constitute a high risk (i.e. negatively affect safety or fundamental rights). However, the consumer-facing AI systems used for personalised pricing do not seem to fall under the current provisions for “high-risk” systems, even though the associated risks for fundamental rights show considerable overlap with the AI applications marked as high-risk in Annex III AI Act (e.g. Annex III under 5(b) AI Act). As such, requirements such as logging, auditing, and human oversight do not apply to the AI systems underlying personalised pricing, leaving only the limited transparency requirements already in place. Given the need for regulation identified in this paper, it would be worthwhile to reassess whether the risks of online price discrimination require more stringent regulation going forward.

6. Conclusion

Our research sheds light on the current state of consumer perceptions and (perceived) social norms surrounding price personalisation grounds, both legally permissible and prohibited. In the context of personalised pricing, there is support for prohibiting the grounds that are currently protected through anti-discrimination and data protection law. In other words, there is no substantial gap between law in the books and law in action, i.e. between the prohibited grounds and the grounds that people find unfair or unacceptable. The acceptance of grounds is, however, relatively low across the board; it is therefore not a matter of which grounds participants find fair, but which they find less unfair. Moreover, some upcoming grounds (e.g. intelligence and appearance) evoke negative reactions as strong as those evoked by legally protected characteristics.

This raises the question whether new grounds should be included in future legislation. Although extended protection for these new discrimination grounds is worth considering (on the basis of both fairness and market trust arguments), expanding the lists of discrimination grounds in legislation may not be the way to keep up with the rate at which “new” data can be used to personalise prices. An ever-expanding list of discrimination grounds may water down the protection it intends to offer, as it may be hard to enforce and easy to circumvent. Furthermore, the acceptability of many grounds may depend on the context. Hence, before changing legislation, it is important to further investigate specific scenarios and context dependencies; this will reveal where further legal protection is needed. Such protection can be offered by including new discrimination grounds in legislation, but alternatively regulation could focus on the potential risks and harms caused by these practices.

References

Baker, W., Marn, M., & Zawada, C. (2001). Price smarter on the net. Harvard Business Review, 79(2), 122–127. https://hbr.org/2001/02/price-smarter-on-the-net

Barros Vale, S. (2020). The omnibus directive and online price personalization. A mere duty to inform? European Journal of Privacy Law & Technologies, 2, 92–117. https://universitypress.unisob.na.it/ojs/index.php/ejplt/article/view/1263

Basu, K. (2002). Social norms and the law. In P. Newman (Ed.), The new Palgrave dictionary of economics and the law (pp. 1876–1881). Palgrave Macmillan UK. https://doi.org/10.1007/978-1-349-74173-1

Berendt, B., & Preibusch, S. (2017). Toward accountable discrimination-aware data mining: The importance of keeping the human in the loop—and under the looking glass. Big Data, 5(2), 135–152. https://doi.org/10.1089/big.2016.0055

Bosma, H., Van Boxtel, M. P., Kempen, G. I., Van Eijk, J. T., & Jolles, J. (2007). To what extent does IQ ‘explain’ socio-economic variations in function? BMC Public Health, 7. https://doi.org/10.1186/1471-2458-7-179

Brislin, R. W. (1970). Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology, 1(3), 185–216. https://doi.org/10.1177/135910457000100301

Calders, T., & Žliobaitė, I. (2013). Why unbiased computational processes can lead to discriminative decision procedures. In B. H. M. Custers, T. Calders, B. Schermer, & T. Zarsky (Eds.), Discrimination and privacy in the information society. Data mining and profiling in large databases (Vol. 3, pp. 43–57). Springer. https://doi.org/10.1007/978-3-642-30487-3_3

Campbell, M. C. (1999). Perceptions of price unfairness: Antecedents and consequences. Journal of Marketing Research, 36(2), 187–199. https://doi.org/10.2307/3152092

Carroll, K., & Coates, D. (1999). Teaching price discrimination: Some clarification. Southern Economic Journal, 66(2), 466–480. https://doi.org/10.2307/1061156

Clarke, J. A. (2015). Against immutability. The Yale Law Journal, 125(2), 1–102. https://www.yalelawjournal.org/article/against-immutability

Clifford, D., & Ausloos, J. (2018). Data protection and the role of fairness. Yearbook of European Law, 37, 130–187. https://doi.org/10.1093/yel/yey004

Competition and Markets Authority (CMA). (2020). Loyalty penalty update—Progress two years on from the CMA’s super-complaint investigation [Report]. https://www.gov.uk/cma-cases/loyalty-penalty-super-complaint#loyalty-penalty-progress-2-years-on

Corder, M. (2023, January 17). Dutch Senate expands constitutional ban on discrimination. The Seattle Times. https://www.seattletimes.com/nation-world/world/dutch-senate-expands-constitutional-ban-on-discrimination/

Cross, F. B. (2005). Law and trust. Georgetown Law Journal, 93(5), 1457–1546.

Custers, B. (2023). Reconsidering discrimination grounds in the data economy: An EU comparison of national constitutions. Computer Law & Security Review, 50, 1–13. https://doi.org/10.1016/j.clsr.2023.105851

Custers, B., & Bachlechner, D. (2017). Advancing the EU data economy: Conditions for realizing the full potential of data reuse. Information Polity, 22(4), 291–309. https://doi.org/10.3233/IP-170419

Deckers, T., Falk, A., Kosse, F., Pinger, P., & Schildberg-Hörisch, H. (2017). Socio-economic status and inequalities in children’s IQ and economic preferences (IZA Discussion Paper No. 11158). IZA Institute of Labor Economics. https://docs.iza.org/dp11158.pdf

Department of Consumer Affairs. (2015). From cradle to cane: The cost of being a female consumer. A study of gender pricing in New York City (pp. 1–76) [Study]. https://www.nyc.gov/assets/dca/downloads/pdf/partners/Study-of-Gender-Pricing-in-

Dinur, R. (2021). Intentional and unintentional discrimination: What are they and what makes them morally different. Journal of Moral Philosophy, 19(2), 111–138. https://doi.org/10.1163/17455243-20213430

Duhigg, C. (2012, February 16). How companies learn your secrets. The New York Times Magazine. https://www.nytimes.com/2012/02/19/magazine/shopping-

Duivenvoorde, B. (2023). Consumer protection in the age of personalised marketing: Is EU law future-proof? European Papers, 8(2), 631–646. https://doi.org/10.15166/2499-8249/679

Elegido, J. M. (2011). The ethics of price discrimination. Business Ethics Quarterly, 21(4), 633–660. https://doi.org/10.5840/beq201121439

European Court of Human Rights (ECHR). (2022). Guide on Article 14 of the European Convention on Human Rights and on Article 1 of Protocol No. 12 to the Convention (pp. 1–67) [Case law guide]. https://web.archive.org/web/20221028055319/https://www.echr.coe.int/Documents/Guide_Art_14_Art_1_Protocol_12_ENG.pdf

Gandy, O. H. (2009). Coming to terms with chance. Engaging rational discrimination and cumulative disadvantage. Routledge. https://doi.org/10.4324/9781315572758

Ganty, S., & Benito Sanchez, J. C. (2021). Expanding the list of protected grounds within anti-discrimination law in the EU [Report]. Equinet – European Network of Equality Bodies. https://equineteurope.org/wp-content/uploads/2022/03/Expanding-the-List-of-Grounds-in-Non-discrimination-Law_Equinet-Report.pdf

Garbarino, E., & Lee, O. F. (2003). Dynamic pricing in internet retail: Effects on consumer trust. Psychology & Marketing, 20(6), 495–513. https://doi.org/10.1002/mar.10084

Garbarino, E., & Maxwell, S. (2010). Consumer response to norm-breaking pricing events in e-commerce. Journal of Business Research, 63(9–10), 1066–1072. https://doi.org/10.1016/j.jbusres.2008.12.010

Gerards, J. (2016). The irrelevance of the Netherlands Constitution, and the impossibility of changing it. Revue Interdisciplinaire d’études Juridiques, 77(2), 207–233. https://doi.org/10.3917/riej.077.0207

Grewal, D., Hardesty, D. M., & Iyer, G. R. (2004). The effects of buyer identification and purchase timing on consumers’ perceptions of trust, price fairness, and repurchase intentions. Journal of Interactive Marketing, 18(4), 87–100. https://doi.org/10.1002/dir.20024

Grochowski, M., Jabłonowska, A., Lagioia, F., & Sartor, G. (2022). Algorithmic price discrimination and consumer protection: A digital arms race? Technology and Regulation, 36–47. https://doi.org/10.26116/TECHREG.2022.004

Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4), 1143–1185. https://doi.org/10.54648/COLA2018095

Hannak, A., Soeller, G., Lazer, D., Mislove, A., & Wilson, C. (2014). Measuring price discrimination and steering on e-commerce web sites. Proceedings of the 2014 Conference on Internet Measurement Conference, 305–318. https://doi.org/10.1145/2663716.2663744

Heidary, K., Custers, B., Pluut, H., & Van der Rest, J.-P. (2022). A qualitative investigation of company perspectives on online price discrimination. Computer Law & Security Review, 46. https://doi.org/10.1016/j.clsr.2022.105734

Hern, A. (2020, March 17). TikTok ‘tried to filter out videos from ugly, poor or disabled users’. The Guardian. https://www.theguardian.com/technology/2020/mar/17/tiktok-tried-to-filter-out-videos-from-ugly-poor-or-disabled-users

Hufnagel, G., Schwaiger, M., & Weritz, L. (2022). Seeking the perfect price: Consumer responses to personalized price discrimination in e-commerce. Journal of Business Research, 143, 346–365. https://doi.org/10.1016/j.jbusres.2021.10.002

Ivanova, Y. (2020). The data protection impact assessment as a tool to enforce non-discriminatory AI. In L. Antunes, M. Naldi, G. F. Italiano, K. Rannenberg, & P. Drogkaris (Eds.), Privacy Technologies and Policy (Vol. 12121, pp. 3–24). Springer. https://doi.org/10.1007/978-3-030-55196-4_1

Khaitan, T. (2015). A theory of discrimination law. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199656967.001.0001

Larson, J., Mattu, S., & Angwin, J. (2015). Unintended consequences of geographic targeting [Project report]. ProPublica. https://static.propublica.org/projects/princeton-

Li, Z. (2022). Affinity-based algorithmic pricing: A dilemma for EU data protection law. Computer Law & Security Review, 46. https://doi.org/10.1016/j.clsr.2022.105705

Liu, X. (2017). Discrimination and lookism. In K. Lippert-Rasmussen (Ed.), The Routledge handbook of the ethics of discrimination (pp. 276–288). Routledge. https://doi.org/10.4324/9781315681634

Malgieri, G. (2023). In/acceptable marketing and consumers’ privacy expectations: Four tests from EU data protection law. Journal of Consumer Marketing, 40(2), 209–223. https://doi.org/10.1108/JCM-03-2021-4571

Maxwell, S. (2002). Rule-based price fairness and its effect on willingness to purchase. Journal of Economic Psychology, 23(2), 191–212. https://doi.org/10.1016/S0167-4870(02)00063-6

Maxwell, S., & Garbarino, E. (2010). The identification of social norms of price discrimination on the internet. Journal of Product & Brand Management, 19(3), 218–224. https://doi.org/10.1108/10610421011046193

Mikians, J., Gyarmati, L., Erramilli, V., & Laoutaris, N. (2012). Detecting price and search discrimination on the internet. Proceedings of the 11th ACM Workshop on Hot Topics in Networks, 79–84. https://doi.org/10.1145/2390231.2390245

Miller, A. A. (2014). What do we worry about when we worry about price discrimination? The law and ethics of using personal information for pricing. Journal of Technology Law & Policy, 19(1), 41–104. https://scholarship.law.ufl.edu/jtlp/vol19/iss1/2

Muir, E. (2019). The horizontal effects of charter rights given expression to in EU legislation, from Mangold to Bauer. Review of European Administrative Law, 12(2), 185–215. https://doi.org/10.7590/187479819X15840066091312

Natelhoff, Y. (2023, April 10). Différents tarifs pour une même course: Uber ne met pas tout le monde sur un même pied d’égalité [Different fares for the same journey: Uber does not put everyone on the same footing]. Dernière Heure. https://www.dhnet.be/conso/auto-moto/2023/04/10/differents-tarifs-pour-une-meme-course-uber-ne-met-pas-tout-le-monde-sur-un-meme-pied-degalite-CE3CGQL3EVH6HAJN33NRKM36Q4/

Odlyzko, A. (2003). Privacy, economics, and price discrimination on the Internet. Proceedings of the 5th International Conference on Electronic Commerce, 355–366. https://doi.org/10.1145/948005.948051

Odlyzko, A. (2004). The evolution of price discrimination in transportation and its implications for the internet. Review of Network Economics, 3(3). https://doi.org/10.2202/1446-9022.1055

Organisation for Economic Co-operation and Development. (2015). Data-driven innovation: Big data for growth and well-being. OECD Publishing. https://doi.org/10.1787/9789264229358-en

Organisation for Economic Co-operation and Development. (2016). Price Discrimination – Background note by the Secretariat (Report No. DAF/COMP(2016)15). https://one.oecd.org/document/DAF/COMP(2016)15/en/pdf

Organisation for Economic Co-operation and Development. (2018a). Personalised pricing in the digital era – Background note by the Secretariat (Report No. DAF/COMP(2018)13). https://one.oecd.org/document/DAF/COMP(2018)13/en/pdf

Organisation for Economic Co-operation and Development. (2018b). Personalised pricing in the digital era – Note by the Netherlands (Report No. DAF/COMP/WD(2018)124). https://one.oecd.org/document/DAF/COMP/WD(2018)124/en/pdf

Poort, J., & Zuiderveen Borgesius, F. J. (2019). Does everyone have a price? Understanding people’s attitude towards online and offline price discrimination. Internet Policy Review, 8(1). https://doi.org/10.14763/2019.1.1383

Preston McAfee, R. (2008). Price discrimination. In W. D. Collins & J. Angland (Eds.), Issues in competition law and policy (pp. 465–484). ABA Section of Antitrust Law.

Priester, A., Robbert, T., & Roth, S. (2020). A special price just for you: Effects of personalized dynamic pricing on consumer fairness perceptions. Journal of Revenue and Pricing Management, 19(2), 99–112. https://doi.org/10.1057/s41272-019-00224-3

Rosencrance, L. (2000, September 13). Customer outrage prompts Amazon to change price-testing policy. Computerworld. https://www.computerworld.com/article/2597088/customer-outrage-prompts-amazon-to-change-price-testing-policy.html

Sauter, W. (2020). A duty of care to prevent online exploitation of consumers? Digital dominance and special responsibility in EU competition law. Journal of Antitrust Enforcement, 8(2), 406–427. https://doi.org/10.1093/jaenfo/jnz023

Sears, A. M. (2021). The limits of online price discrimination in Europe. Science and Technology Law Review, 21(1), 1–42. https://doi.org/10.7916/STLR.V21I1.5761

Shiller, B. (2014). First degree price discrimination using big data (Working Paper No. 58). https://EconPapers.repec.org/RePEc:brd:wpaper:58

Solove, D. J. (2024). Data is what data does: Regulating use, harm, and risk instead of sensitive data. Northwestern University Law Review, 118(4), 1081–1138. https://doi.org/10.2139/ssrn.4322198

Steppe, R. (2017). Online price discrimination and personal data: A General Data Protection Regulation perspective. Computer Law & Security Review, 33(6), 768–785. https://doi.org/10.1016/j.clsr.2017.05.008

Stole, L. A. (2007). Price discrimination and competition. In M. Armstrong & R. Porter (Eds.), Handbook of industrial organization (Vol. 3, pp. 2221–2299). Elsevier. https://doi.org/10.1016/S1573-448X(06)03034-2

Strycharz, J., & Duivenvoorde, B. (2021). The exploitation of vulnerability through personalised marketing communication: Are consumers protected? Internet Policy Review, 10(4), 1–27. https://doi.org/10.14763/2021.4.1585

Tanna, M., & Dunning, W. (2023). Bias and discrimination in the use of AI in the financial sector. In N. Remolina & A. Gurrea-Martinez (Eds.), Artificial intelligence in finance (pp. 320–349). Edward Elgar Publishing. https://doi.org/10.4337/9781803926179.00025

Tannock, S. (2008). The problem of education-based discrimination. British Journal of Sociology of Education, 29(5), 439–449. https://www.jstor.org/stable/40375368

Turow, J., Feldman, L., & Meltzer, K. (2005). Open to exploitation: America’s shoppers online and offline [Working paper]. Departmental Papers (ASC), University of Pennsylvania. https://repository.upenn.edu/handle/20.500.14332/2000

Tweede Kamer der Staten-Generaal [House of Representatives of the Netherlands]. (2020). Het Voorstel van wet van de leden Bergkamp, Özütok en Van den Hul houdende verklaring dat er grond bestaat een voorstel in overweging te nemen tot verandering in de Grondwet, strekkende tot toevoeging van handicap en seksuele gerichtheid als non-discriminatiegrond (32411) [Proposal for a law by the members Bergkamp, Özütok and Van den Hul declaring that there are grounds to consider a proposal to amend the Constitution to add disability and sexual orientation as grounds for non-discrimination (32411)]. Government of the Netherlands. https://www.eerstekamer.nl/behandeling/20200623/voortzetting_behandeling/docume

Van Bekkum, M., & Zuiderveen Borgesius, F. (2023). Using sensitive data to prevent discrimination by artificial intelligence. Does the GDPR need a new exception? Computer Law & Security Review, 48, 1–12. https://doi.org/10.1016/j.clsr.2022.105770

Van der Rest, J.-P. I., Sears, A. M., Miao, L., & Wang, L. (2020). A note on the future of personalized pricing: Cause for concern. Journal of Revenue and Pricing Management, 19(2), 113–118. https://doi.org/10.1057/s41272-020-00234-6

Van der Rest, J.-P., Sears, A. M., Kuokkanen, H., & Heidary, K. (2022). Algorithmic pricing in hospitality and tourism: Call for research on ethics, consumer backlash and CSR. Journal of Hospitality and Tourism Insights, 5(4), 771–781. https://doi.org/10.1108/JHTI-08-2021-0216

Wachter, S. (2020). Affinity profiling and discrimination by association in online behavioural advertising. Berkeley Technology Law Journal, 35(2), 367–430. https://doi.org/10.15779/Z38JS9H82M

Wachter, S. (2022). The theory of artificial immutability: Protecting algorithmic groups under anti-discrimination law (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2205.01166

Wong, B. (2020). Online personalised pricing as prohibited automated decision-making under Article 22 GDPR: A sceptical view. Information & Communications Technology Law, 30(2), 193–207. https://doi.org/10.1080/13600834.2020.1860460

Zuiderveen Borgesius, F. (2018). Discrimination, artificial intelligence and algorithmic decision-making [Study]. Council of Europe. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73

Zuiderveen Borgesius, F., & Poort, J. (2017). Online price discrimination and EU data privacy law. Journal of Consumer Policy, 40(3), 347–366. https://doi.org/10.1007/s10603-017-9354-z

Appendices

Appendix 1: Explanation of discrimination grounds

Note: The underlined grounds are legally prohibited either in the European legal framework or in the Dutch constitution. The grounds in italics are legally prohibited in some other national constitutions in the EU.

| Ground | Origin | Reference/Field of law | Article | Examples used in survey |
| --- | --- | --- | --- | --- |
| Age | Law | Anti-discrimination law | 21 CFEU; Article 1 Grondwet | “Young”/“old”, age category, etc. |
| Battery percentage | Literature | OECD, 2018a; Natelhoff, 2023 | — | The battery percentage of the device on which the price is shown. |
| Browser type | Literature | Mikians et al., 2012; OECD, 2018a | — | Google Chrome or Safari, cookies, incognito mode, etc. |
| Browsing activity | Literature | OECD, 2018a | — | Browsing history, origin page, time spent on webpages, etc. |
| Criminal record/offences | Law | Data protection law | Article 10 GDPR | Criminal record, criminal offences, allegations, etc. |
| Device type | Literature | OECD, 2018a; Hufnagel et al., 2022 | — | Mobile, PC or tablet, Apple or Windows, etc. |
| Gender/Sex | Law | Anti-discrimination law | 21 CFEU; Article 1 Grondwet; Article 1 AWGB | Female, male, transgender, non-binary, etc. |
| Health data | Law | Data protection law | Article 9 GDPR | Medical record, information about smoking, blood pressure, etc. |
| Ideology/Philosophy | Law | Anti-discrimination law | 21 CFEU; Article 1 Grondwet; Article 1 AWGB | Personal convictions about society, humanity, etc. |
| Impairment/Disability | Law | Anti-discrimination law | 21 CFEU; Article 1 Grondwet | Mental or physical disability, chronic illness, etc. |
| Income/Wealth | Law | Solove, 2024 | — | ‘Rich’/‘poor’, high or low income, etc. |
| Intelligence | Literature | Tannock, 2008 | — | IQ score, highest level of education obtained, etc. |
| Location | Literature | Larson, Mattu & Angwin, 2015; OECD, 2018a; Priester et al., 2020 | — | The country or place in which one resides or is located. |
| Loyalty status | Literature | Maxwell & Garbarino, 2010; OECD, 2018a; CMA, 2020 | — | Loyal customer, new customer, bulk customers, etc. |
| Marital status | Law | Anti-discrimination law | Article 1 AWGB | Married, civil partnership, unmarried, etc. |
| Nationality | Law | Anti-discrimination law | 21 CFEU; Article 1 AWGB | Place of birth, citizenship, etc. |
| Photo/Appearance | Literature | Hern, 2020; Solove, 2024 | — | Conformity with beauty ideals, physical appearance, etc. |
| Political views | Law | Anti-discrimination law; Data protection law | 21 CFEU; Article 1 Grondwet; Article 1 AWGB; Article 9 GDPR | “Left”/“right”, liberal/conservative, etc. |
| Purchase history | Literature | OECD, 2018a | — | Earlier purchases, type of purchases, amount spent, etc. |
| Race/Ethnicity | Law | Anti-discrimination law; Data protection law | 21 CFEU; Article 1 Grondwet; Article 9 GDPR | Skin colour, ancestry, etc. |
| Religion | Law | Anti-discrimination law; Data protection law | 21 CFEU; Article 1 Grondwet; Article 1 AWGB; Article 9 GDPR | Christian, Muslim, Jew, Atheist, etc. |
| Sexual orientation | Law | Anti-discrimination law; Data protection law | 21 CFEU; Article 1 Grondwet; Article 1 AWGB; Article 9 GDPR | Heterosexual, homosexual, bisexual, etc. |
| Social media data | Literature | OECD, 2018a | — | Likes, comments, interactions, etc. |
| Socioeconomic status | Law | Solove, 2024; Ganty & Benito Sanchez, 2021 | — | Position on the social scale, e.g. through education or job position. |
| Student status | Literature | Carroll & Coates, 1999 | — | Enrollment in (higher) educational institution. |

Appendix 2: Survey instrument

Introduction

Thank you for your time and participation in this survey. Your answers contribute to research on personalising online prices. Your participation is highly appreciated. Completing the questionnaire takes approximately 5-6 minutes. Please note that it is easiest to fill in this survey on a computer or laptop.

We kindly request that you answer all questions truthfully. Your data and answers will be treated confidentially. Only the researchers have access to your data. Participation is completely voluntary, and you can stop at any time during the survey. It is also possible to request the deletion of the data provided afterwards by contacting the researchers.

For questions or comments, please contact Kimia Heidary at k.heidary@law.leidenuniv.nl.

By clicking the 'I agree' button, you indicate that you have read the above information, that you are aware that participation is voluntary and that you agree to your data being used for research purposes. If you do not want to participate, you can stop now by closing this page. Thank you again for your cooperation.

▢ I agree to the terms and conditions and would like to participate in the questionnaire.

Section 1 – Demographic information

1. What is your age?

2. What gender do you identify with most?

▢ Male

▢ Female

▢ Non-binary/Other

▢ Prefer not to say

3. In which country do you reside?

[dropdown menu with 206 countries]

4. What is the highest level of formal education you have attained?

▢ Primary school

▢ High school

▢ Higher professional education (HBO)

▢ Bachelor’s Degree

▢ Master’s Degree

▢ Doctorate

▢ Other, namely: ….

5. What is your gross annual income?

▢ Less than €20,000

▢ €20,000 – €49,999

▢ €50,000 – €74,999

▢ €75,000 – €99,999

▢ More than €100,000

▢ Prefer not to say

Section 2 – Scenario

Please read the following text carefully before clicking 'next':

'Companies are increasingly experimenting with their prices online. This also applies to offering personalised prices: charging different prices to different consumers based on their personal data, such as personal characteristics or online behaviour. This may result in a higher or lower price for certain groups or consumers.'

On the next page we present you with a number of grounds on the basis of which companies could personalise the price. We are curious about your opinion on this.

Click 'next' to continue to the questions. Please note that once you click on this, you will not be able to return to the previous page.

[Participants were randomly assigned to one out of five groups and were shown 5 (out of 25) grounds]
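For illustration, this between-subjects assignment can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the authors' materials: the block composition, function names, and seed are ours, and the article does not report which grounds were shown together.

```python
import random

# The 25 segmentation bases from Appendix 1.
GROUNDS = [
    "Age", "Battery percentage", "Browser type", "Browsing activity",
    "Criminal record/offences", "Device type", "Gender/Sex", "Health data",
    "Ideology/Philosophy", "Impairment/Disability", "Income/Wealth",
    "Intelligence", "Location", "Loyalty status", "Marital status",
    "Nationality", "Photo/Appearance", "Political views", "Purchase history",
    "Race/Ethnicity", "Religion", "Sexual orientation", "Social media data",
    "Socioeconomic status", "Student status",
]

# Partition the grounds into five fixed blocks of five (illustrative split);
# each respondent rates every ground in exactly one block.
BLOCKS = [GROUNDS[i:i + 5] for i in range(0, len(GROUNDS), 5)]

def assign_block(rng: random.Random) -> list[str]:
    """Randomly assign a respondent to one of the five blocks."""
    return rng.choice(BLOCKS)

if __name__ == "__main__":
    rng = random.Random(2024)  # seeded only for reproducibility of the example
    print(assign_block(rng))
```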

Section 3 – Questions related to grounds

[All answers on a scale of 1 (‘strongly disagree’) to 7 (‘strongly agree’): 1 = strongly disagree, 2 = disagree, 3 = somewhat disagree, 4 = neither agree nor disagree, 5 = somewhat agree, 6 = agree, 7 = strongly agree]

6. Statement: ‘The use of this ground for setting prices is fair.’

[Indicate answer for each ground]

7. Statement: ‘Personally, I find it acceptable to base prices on this ground.’

[Indicate answer for each ground]

8. Statement: ‘In society, it is considered acceptable to base prices on this ground.’

[Indicate answer for each ground]

9. Statement: ‘The use of this ground for setting prices should be legally prohibited.’

[Indicate answer for each ground]

10. Statement: ‘The use of this ground for setting prices would diminish my trust in the market.’

[Indicate answer for each ground]

Section 4 – General questions

11. Please indicate to what extent you agree or disagree with the following statements:

[Answers on a scale of 1 (‘strongly disagree’) to 7 (‘strongly agree’)]

- It is acceptable if an online store charges different prices for consumers based on personal data.

- It is fair if an online store charges different prices for consumers based on personal data.

- It is reasonable if an online store charges different prices for consumers based on personal data.

Attention check

12. It is very important for the quality of the research that you pay close attention when completing this questionnaire. Please indicate neutral for this statement.

▢ Disagree

▢ Somewhat disagree

▢ Neutral

▢ Somewhat agree

▢ Agree

13. Please indicate to what extent you agree or disagree with the following statement:

‘I find it acceptable to charge people with a higher willingness to pay a higher price.’

[Answers on a scale of 1 (‘strongly disagree’) to 7 (‘strongly agree’)]

14. How often have you made an online purchase on average in the last 6 months? Choose the answer that comes closest.

▢ Daily

▢ Weekly

▢ Monthly

▢ Less than monthly

▢ Never

15. What is your average daily (non-work-related) internet use in hours?

▢ Less than one hour

▢ 1 to 2 hours

▢ 2 to 4 hours

▢ 4 to 6 hours

▢ More than 6 hours

16. How much money have you spent in total on online purchases in the past 6 months?

▢ €0

▢ €1 – €50

▢ €51 – €100

▢ €101 – €300

▢ €301 – €500

▢ €501 – €1000

▢ More than €1000

30. Do you have any comments or would you like to share something that was not covered in the survey?

[open question]

Footnotes

1. See in this regard also Article 6(1)(ea) of the Consumer Rights Directive, which introduces an information requirement for companies engaging in personalised pricing.

2. Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules.

3. See for example Case C-414/16 Vera Egenberger v Evangelisches Werk für Diakonie und Entwicklung eV, §81.

4. Preamble AWGB.

5. Wet gelijke behandeling op grond van handicap of chronische ziekte [Equal Treatment (Disability and Chronic Illness) Act] (WGBH/CZ), 2003; Wet gelijke behandeling op grond van leeftijd bij arbeid [Equal Treatment on the Grounds of Age at Work Act] (WGBL), 2004; Wet onderscheid arbeidsduur [Distinction in Working Hours Act] (WOA), 1996; and Wet onderscheid bepaalde en onbepaalde tijd [Act on Distinction between Fixed-Term and Permanent Contracts] (WOBOT), 2002.

6. See Articles 7, 8, and 21 CFEU.

7. A z-score measures the distance between a data point and the mean in standard deviations. z-score analysis is an objective way to determine whether a suspected outlier should be deemed a concern and removed from the data set accordingly. A value of ±3.29 is the standard cut-off used to identify extreme outliers.
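The rule in footnote 7 amounts to computing z = (x − mean) / standard deviation and flagging cases with |z| > 3.29. A minimal sketch of this check, with hypothetical data (the function name and sample values are ours, not the authors' analysis code):

```python
import numpy as np

def flag_extreme_outliers(values: np.ndarray, threshold: float = 3.29) -> np.ndarray:
    """Return a boolean mask: True where |z| exceeds the threshold.

    z = (x - sample mean) / sample standard deviation; |z| > 3.29
    is the conventional cut-off for extreme outliers.
    """
    z = (values - values.mean()) / values.std(ddof=1)
    return np.abs(z) > threshold

# Hypothetical completion times in seconds: twenty ordinary responses
# plus one respondent who left the survey open for about 50 minutes.
rng = np.random.default_rng(0)
times = np.append(rng.normal(loc=300, scale=30, size=20), 3000.0)
print(times[flag_extreme_outliers(times)])  # only the 3000 s case is flagged
```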

8. This method is also called reverse translation and involves translating the translated content back into the source language. In this case, it entailed translating the survey text from English to Dutch, after which a different researcher translated the Dutch version back into English. This method helps ensure that the original meaning of the content is preserved in translation.

9. For literature containing lists of (both anecdotal and hypothetical) discrimination grounds, beyond the grounds mentioned in legislation, see for example Baker et al. (2001), Maxwell & Garbarino (2010), Hern (2020), Ganty & Benito Sanchez (2021), Solove (2024), Custers (2023), and Natelhoff (2023). Note that other (newer) grounds are conceivable, especially ones that flow from the literature, that were not included in this survey. As such, the selection of twenty-five is non-exhaustive.

10. Apart from Belgium, France and Serbia, where discrimination on the basis of physical appearance is prohibited.

11. Recital 51 GDPR.

12. The ground is currently protected in 11 Member States.

13. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC.

14. Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828.

15. European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)).

16. See Articles 27, 38, and 39 DSA.

17. See Recital 69 DSA.