Restricting access to AI decision-making in the public interest: The justificatory role of proportionality and its balancing factors
Abstract
In the European Union, individuals subject to fully automated AI-based decisions by the government have the right to request access to the processing information under Article 15(1)(h) of the General Data Protection Regulation (GDPR). However, this right may be restricted on public interest grounds, such as preventing system manipulation to ensure the effectiveness of the law or reducing excessive burdens on public officials to maintain efficiency in government operations. The principle of proportionality is a key instrument to assess the legality and legitimacy of such restrictions, yet its application has been largely overlooked in discussions concerning the competing public interests under Article 15(1)(h) of the GDPR. This paper draws on legal and interdisciplinary scholarship, as well as the case law of the Court of Justice of the European Union (CJEU), to explore the role of proportionality and its balancing factors in shaping justifications for public interest restrictions on the data protection right of access to AI decision-making information. This paper is part of AI systems for the public interest, a special issue of Internet Policy Review guest-edited by Theresa Züger and Hadi Asghari.
1. Introduction
Despite its potential to enhance public administration tasks and address social challenges, the application of Artificial Intelligence (AI) within public administration often lacks the transparency necessary to ensure it serves the social good, leading to project failures and missed opportunities (Floridi et al., 2020, pp. 1772-1773). Studies and case law demonstrate that some AI applications, intended to serve the public interest, have instead exacerbated and perpetuated systemic biases. For instance, the Dutch welfare fraud detection scandal (the SyRI case) revealed systemic discrimination against immigrant and low-income families in welfare adjudications (Bekker, 2021). Similarly, a journalistic report exposed the City of Rotterdam's disproportionate flagging of single mothers, young people, and non-Western immigrants as high-risk groups for illegal behaviour through risk scoring processes (Braun et al., 2023). Moreover, studies on the performance of facial recognition technologies developed by transnational tech companies have highlighted their disproportionately higher accuracy for white males compared to Black females (Buolamwini & Gebru, 2018).
Given that the use of AI systems in high-stakes decision-making scenarios is likely to produce biased decisions against vulnerable groups, it is crucial for individuals affected by these decisions to have access to information about how and why those decisions were made. Article 15(1)(h) of the General Data Protection Regulation (GDPR) grants individuals the right of access to information about the decision-making process when it is fully automated. Yet, governments may justify restricting such access on “public interest” grounds.
The concept of “public interest” varies significantly across different disciplines. This paper provides a brief overview of the term from a legal perspective, focusing on how it has been used to justify restrictions on access to information regarding AI decision-making processes. These restrictions are often grounded in public interest objectives, such as preventing the circumvention of the law (gaming the system) and ensuring government efficiency. Specifically, this paper examines restrictions on the right of access under Article 15(1)(h) of the GDPR, which seeks to equip individuals affected by AI decisions to challenge them by understanding their underlying logic.1
The paper is organised as follows: Section 2 explores the concept of public interest within the GDPR, focusing on public interest restrictions to the right of access to AI decision-making information under Article 15(1)(h). Section 3 frames Article 15(1)(h) and the Court of Justice of the European Union's (CJEU) broad interpretation of GDPR terms, which involves accounting for various actors throughout the processing chain to ensure the effectiveness of the right of access. Section 4 reviews common public interest objectives that justify withholding AI decision-making information. Section 5 introduces the principle of proportionality both as a tool to balance the right of access against public interest objections and as a framework for justifying such restrictions. Section 6 discusses additional challenges in the provision of justifications for restrictions on the right of access. The final section (Conclusion) summarises these points.
2. Public interest and the GDPR
In 1962, Frank Sorauf described the concept of “public interest” as being obscured by confusion and disagreement, essentially a “conceptual muddle” (Sorauf, 1962, p. 183). Over time, the concept of public interest has varied across different disciplines and doctrines. It has been understood in utilitarian terms as determined by the majority's preferences or as serving “the ends of the whole public rather than those of some sector of the public” (Meyerson & Banfield, 1955, p. 322) – preponderance theories. It has also been characterised by moral values in which an overarching public interest harmonises conflicting individual interests. “Public interest” is conceived in that vein as “the interests which people have in common qua members of the public” (Barry, 1964) – unitary theories. Similarly, Boot defines it as the increase of “opportunities of the members of the public to pursue and realise the (permissible) ends they all share with members of the public” (Boot, 2024, p. 111). Alternatively, it has been viewed through a balanced approach that sees public interest as a goal that is universally shared, potentially conflicting with other interests but striving for collective well-being – common interest theories (Held, 1970).
Bozeman offers an integrated and procedural definition of “public interest” as “the outcomes best serving the long-run survival and wellbeing of a social collective construed as a 'public'” (Bozeman, 2007, p. 21). He argues that public interest is an ideal that is both dynamic and adaptable “case-by-case but also within the same case as time advances and conditions change” (Bozeman, 2007, p. 13). Similarly, Feintuck considers that this dynamic character of public interest allows decision-makers to adapt to evolving societal values and democratic priorities (Feintuck, 2004).
In the legal sphere, some define “public interest” more narrowly as involving “the aggregate of citizen entitlements that the state is charged to safeguard” (Tridimas, 2023, p. 189). However, while this definition appears more limited than others previously mentioned, it remains ambiguous and can be used as an open clause to justify a broad set of government measures. This is especially true because Member States are often granted discretion in determining what constitutes “public interest” and the means for achieving it.
This flexibility is evident in EU law and policy documents, where “public interest” serves as a legal basis for government operations affecting individuals. For instance, the concept is embedded in the GDPR, where it serves as a basis for processing ordinary personal data (Article 6(1)(e) GDPR) and as an exception allowing the processing of sensitive personal data (Article 9(2)(g), (i), (j) GDPR). Recitals 45 and 52 offer some examples of processing operations in the public interest, including “public health, social protection, and the management of health care services”. In those cases, personal data processing may be justified, for example, for preventing or controlling communicable diseases and other significant threats to health, or for archiving, scientific or historical research, or statistical purposes. Moreover, public interest also justifies restrictions on data protection rights, including the right of access to personal data. Article 23 of the GDPR provides an “exhaustive” list of grounds on which limiting such rights is permissible, provided these limitations are necessary and proportionate. These include national security; defence; public security; the prevention, investigation, detection, or prosecution of criminal offences; the execution of criminal penalties; other objectives of general public interest (including monetary, budgetary, taxation, public health, and social security); the protection of judicial independence and proceedings; the prevention, investigation, detection, and prosecution of breaches of ethics for regulated professions; the protection of the data subject or the rights and freedoms of others; and the enforcement of civil law claims. While the Article 23 list is intended to be exhaustive and specific, the inclusion of “public interest” as a ground for restricting rights under the GDPR makes this provision rather open-ended. This is particularly due to the lack of clarity regarding its meaning and the discretion granted to governments in determining what constitutes “public interest” (A29WP 83, 2003, p. 5).2 Similarly, the CJEU has recognised various restrictive measures taken by governments as pursuing the “public interest”. For instance, in Puškár (C‑73/16), the Court regarded a government-maintained list of “white horses” – individuals possibly involved in tax fraud – as a “task carried out in the public interest” aimed at combating tax fraud (p. 108). Moreover, in FT (C-307/22), Advocate General (AG) Emiliou expressed a similar view, stating that the determination of the level of protection of the public interest in the “protection and improvement of human health” falls within the margin of appreciation granted to Member States (pp. 68-69).
The lack of a clear definition and criteria for identifying “public interest” in the legal context, as noted by Boot, may result in ad hoc determinations and legal uncertainty (Boot, 2024). While this is a valid concern, given the evolving nature of the term, a dynamic and flexible interpretation may not be entirely negative. This flexibility allows “public interest” to consider all relevant factors of a case without being limited by strict constraints (Wyatt, 2020, p. 691). By functioning as a “moving target” (Züger & Asghari, 2023, p. 818), “public interest” can adapt organically to the specific details and circumstances of each case (Wyatt, 2020, p. 691).
2.1. The justificatory role of public interest under restrictions on data protection rights
While the term “public interest” lacks a precise definition in the legal context, some authors have proposed conditions to determine what constitutes public interest in the context of AI applications. Züger and Asghari (2023) coined the term “public interest AI” to describe a set of criteria for assessing whether AI in fact serves the public interest. These criteria include having public interest justifications based on egalitarian values, being developed through deliberative and co-design processes, following technical standards, and being openly available for validation (Züger & Asghari, 2023, p. 819). According to the authors, public interest justification implies that the design and use of an AI system should not be driven solely by innovation or profit motives. Instead, if an AI system is claimed to address a social problem, it must provide a justification that demonstrates how it will tackle that problem (p. 819). Justification in that sense may be an overarching goal encompassing the other conditions and explaining how they were addressed for the AI to serve the public interest.
This paper builds on this notion of justification as essential to public interest, but examines it from a different angle. The focus here shifts from public interest justifications for implementing AI applications to public interest justifications for restrictions on the right of access to AI decision-making process information under the GDPR once these systems are already operational. The justificatory dimension of public interest discussed by the authors remains relevant to this paper's analysis, since part of justifying why such restrictions are in place is showing that the government has carefully accounted for individual interests. For instance, as the authors explain, justifications should provide assurances that the system delivers its promised outputs and that potential biases, inaccuracies, and malfunctions were addressed from the very outset of the system (pp. 821-2). This includes explaining that validation mechanisms exist which individuals can directly scrutinise and act upon (p. 822). The CJEU considered this aspect in relation to the automated processing of flight passenger personal data, or passenger name records (PNR), to fight terrorism and serious crime (Ligue des droits humains, Case C-817/19, p. 73). The Court observed that the opacity of the system would hinder the individual subject to processing from understanding the decision, and that this would threaten that individual's right to an effective remedy under Article 47 of the Charter (p. 195). Thus, individuals need an opportunity to review all the grounds and evidence behind the decision, including the assessment criteria and how the programs applied them (p. 211). When public interest grounds are invoked to justify restrictions on the GDPR right of access, these justifications should likewise be made clear to the individuals subject to processing.
Moreover, public interest justifications require that public interest claims rest not merely on governmental authority but on actual legal justification. As argued by Held (1970), when “individuals asserting claims concerning the public interest are acting in their roles as holders of official positions…they may be capable of asserting claims concerning the public interest” on the basis of their “superior political authority” (p. 184). However, she notes that “when these persons assert that a given action is [in fact] in the public interest, they are, 'asserting that it is justifiable'” (p. 184). For this, justifications have to reflect the public values of liberal democracy, citizenship expectations, and the collective interests that require protection, as outlined by Feintuck (2004). He argues that without the explicit articulation of these important public values, government interventions “in the public interest” – including restrictions on fundamental rights – would be without justification (Feintuck, 2004, p. 180).
The justificatory dimension of public interest in the context of rights restrictions is not merely an ideal but a legal obligation mandated by EU law at various levels, namely the Charter of Fundamental Rights of the European Union (Charter) and data protection law. Without appropriate justifications, restrictions in the public interest would not meet legal standards. For example, public interest claims must be proven and assessed through the principle of proportionality (Article 52(1) of the Charter and Article 23(1) of the GDPR). This paper aims to further explore how the principle of proportionality applies to the public interest justification, particularly regarding restrictions on the right of access to AI decision-making information (as discussed in Sections 4 and 5). For now, it is essential to understand what is at stake in the conflict of interests arising from the implementation of the data protection right of access under Article 15(1)(h), which will be examined next.
3. The right of access to AI decision-making process information
When decisions affecting individuals are made by fully automated means, Article 15(1)(h) of the GDPR grants individuals the right to access information about “the existence of automated decision-making…and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. The interpretation of this provision has sparked a debate about its scope, temporal application, and the meaning of “meaningful” and “logic involved” (Wachter et al., 2017; Edwards & Veale, 2017; Selbst & Powles, 2017).
Regarding what constitutes “meaningful” information, there is consensus that it does not entail complex mathematical explanations. According to the A29 Working Party, meaningful information does not involve providing complex explanations of the algorithms used or mathematical calculations, but rather “simple ways to tell the data subject about the rationale behind, or the criteria relied on reaching the decision” (A29WP, 2018, p. 57). This interpretation aligns with AG Pikamäe's opinion in the SCHUFA case (C-634/21).3 AG Pikamäe explained that the primary objective of Article 15(1)(h) is “to ensure that data subjects obtain information in an intelligible and accessible form according to their needs”, especially given the “technological complexity” that makes it difficult for data subjects to understand who is collecting their personal data and for what purpose. This interpretation is consistent with Article 12 of the GDPR, which requires that information be provided in an intelligible form, using clear and plain language (p. 57).
Regarding the purpose of the information provided, some scholars argue that it should be “useful”, intelligible, and “actionable” for data subjects, enabling them to exercise their rights and contributing to broader data protection objectives (Edwards & Veale, 2017; Selbst & Powles, 2017, p. 242; Hacker & Passoth, 2020, p. 349). This perspective aligns with the CJEU's interpretation of the right of access as crucial for verifying the correctness and lawfulness of data processing and as a prerequisite for exercising other data protection rights (Rijkeboer, C-553/07, p. 49; YS, C-141/12, p. 44; Nowak, C-434/16, p. 57). Additionally, AG Pikamäe's opinion in the SCHUFA case supports this view, emphasising that the information provided should be “useful for [the data subject] to challenge any decision” (SCHUFA, Case C‑634/21, p. 58).
Moreover, the effectiveness of the information provided under Article 15(1)(h) depends on when it is given, as this affects its level of specificity. This introduces the temporal aspect of the information – whether it should refer to ex-ante details (before the decision is made) or ex-post details (after the decision has been made). The latter would require more specific information about the individual case. Wachter and others argue that, based on the similar wording of Articles 13(2)(f), 14(2)(g), and 15(1)(h), the information should be provided ex-ante, focusing on the forward-looking nature of “envisaged consequences of processing”. They believe Article 15(1)(h) concerns general information about “system functionality” rather than specific details about the decision made (Wachter et al., 2017). However, most interpretations suggest that Article 15(1)(h) requires ex-post information, provided after the decision, which offers more granular details about the specific decision. Some scholars argue that an ex-ante interpretation would undermine the provision's purpose, as it would only give general information about system functionality rather than the actual reasons for the decision (Selbst & Powles, 2017; Brkan, 2019, p. 114; Zanfir-Fortuna, 2020, p. 462; Naudts et al., 2022, p. 547). This interpretation is further supported by the EDPB's recent guidelines on the right of access, which state that, “if possible, information under Art. 15(1)(h) should be more specific in relation to the reasoning that led to specific decisions concerning the data subject who requested access” (EDPB, 2023, p. 121).
The level of detail required to account for the “logic” of the decision-making under Article 15(1)(h) remains disputed and unclear (Schmidt-Wudy, 2021, Rn. 78.4). The A29 Working Party states that, while the information need not be complex, it should be sufficiently comprehensive for data subjects to understand the decision (A29WP, 2018, p. 25). Moreover, it states that this information should be useful to the data subject to challenge the decision, thus referring to the “factors taken into account for the decision-making process, and on their respective 'weight' on an aggregate level” (p. 27). This is in line with AG Pikamäe's Opinion in the SCHUFA case, which concerned a credit scoring assessment for taking lending decisions. The AG further interpreted the “logic involved” under Article 15(1)(h) as “the calculation method used by a credit information agency to establish a score”, which should be explained with sufficient detail of “the method used to calculate the score and the reasons for a certain result” (pp. 54-55). Other scholars have proposed different types of information, including (a) details about the input data, (b) factors influencing the decision, (c) the relative importance of these factors, and (d) a reasonable explanation for the decision. Brkan acknowledges that providing details on (b) and (c) may be challenging, so the right to explanation may realistically only cover the essential reasons for decisions (Brkan, 2019, p. 112). This raises the question: What exactly constitutes these “reasons”, and how can their usefulness and actionability be identified?
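To make the idea of disclosing the factors and their respective “weight” on an aggregate level more tangible, the sketch below assumes a deliberately simple, hypothetical linear scoring model; the feature names, weights, and values are invented for illustration and do not describe SCHUFA's or any real credit-scoring method.

```python
# Minimal sketch (hypothetical): reporting the factors behind an automated
# score and their relative weight at an aggregate level. Feature names and
# weights are invented for illustration only.

FEATURE_WEIGHTS = {
    "payment_history": 0.45,
    "current_debt_ratio": 0.30,
    "length_of_credit_history": 0.15,
    "number_of_recent_applications": 0.10,
}

def score(applicant: dict) -> float:
    """Weighted sum of normalised feature values (each between 0 and 1)."""
    return sum(w * applicant.get(f, 0.0) for f, w in FEATURE_WEIGHTS.items())

def aggregate_explanation() -> str:
    """Plain-language account of the factors considered and their relative weight."""
    lines = [f"- {f.replace('_', ' ')}: {w:.0%} of the score" for f, w in FEATURE_WEIGHTS.items()]
    return "Factors considered and their weight:\n" + "\n".join(lines)

applicant = {"payment_history": 0.9, "current_debt_ratio": 0.4,
             "length_of_credit_history": 0.7, "number_of_recent_applications": 0.2}
print(f"Score: {score(applicant):.2f}")
print(aggregate_explanation())
```

Even such a minimal account of factors and weights goes further than a bare statement that a score was calculated, while still falling short of the justificatory dimension discussed next.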
Understanding the factor weighting and calculation methods in AI decision-making, especially given its evolving complexity, can be extremely challenging. It is important to distinguish simply providing information about the system's logic from offering actual justifications. This aligns with the view that explanations should address not only how a system functions but also why it operates in a particular way (Coglianese & Lehr, 2019; Diakopoulos, 2020). In more concrete terms, the right of access should be seen as a procedural safeguard that informs individuals about why certain choices – such as the categories of data used or the reasons for their relevance and pertinence to the processing – were made, as pointed out by the A29 Working Party (A29WP, 2018, p. 31). According to Malgieri, explaining the relevance and pertinence of such data is essentially a call for justifications, which he interprets as a call for justifications of the lawfulness, fairness, necessity, accuracy, and legitimacy of AI decision-making systems (Malgieri, 2021, pp. 11, 21). This approach may also better fulfil the due process dimension of the right of access, which Mahieu argues allows “people to assess and contest evidence that is used to make decisions about them” (Mahieu, 2023, p. 324).
The selection of data categories is a choice made at both the design and deployment stages. These categories of data are used to train the system but also feed the system during its operational phase. Justifying the relevance and pertinence of such categories should therefore refer not only to input data but also to training data, as the latter determines the decision output produced from the former. Choices made by designers, developers, and operators can occur at the outset of data preparation and model building or during the operational stage, where operators' discretion and deference to the system may influence decision outcomes (Fussey et al., 2021; Selbst et al., 2019). By examining these aspects, institutions could partly justify their impactful decisions upon individuals by explaining why these choices at the design and deployment stages were made. They could ask whether the data is of sufficient quality, diversity, and relevance to the purpose of the processing operation. This approach is considered more meaningful by some authors than focusing solely on understanding the specific rules governing the model's operation (Gillis & Simons, 2019, p. 81; Coyle & Weller, 2020, p. 1433; Lehr & Ohm, 2017).
To illustrate, many facial recognition systems have failed to recognise Black women, often due to a lack of diversity in training data sets, which predominantly contained White male faces (Buolamwini & Gebru, 2018). Data diversity in training datasets is a design and development choice over which operators acting as controllers have no control. In other cases, poor system performance might result from operators' poor implementation choices, such as setting high similarity thresholds for recognition, or from using poor-quality lighting or cameras, both essential for effective facial recognition performance (Neto et al., 2022). Thus, a failed recognition process leading to a decision that denies access to an individual may stem not from the decision-making process itself and the factors involved (the system does exactly what it is designed to do) but from earlier design, development, or deployment stages. Lehr and Ohm observe that the legal community often neglects this aspect of AI decision-making: the “playing with the data” stage. They argue that the focus has largely been on the “running model” stage, overlooking the impact of decisions made before the model becomes operational (Lehr & Ohm, 2017, pp. 709-710). Similarly, Asghari and others argue that AI explanations extend beyond technical outputs and interfaces; they involve a complex communication process that starts with the training data used to build the system and includes the justifications for using the system in a specific context (Asghari et al., 2022, p. 12). Kroll highlights that tracing AI decision-making processes to the design and development processes provides insight not only into how a system functions but also into how it was created and its purpose, thereby explaining its dynamics and behaviours (Kroll, 2021, p. 758). Thus, the information to be provided may not only refer to the technical logic involved but may also refer to the “evaluation results and procedural characteristics of the past” (Asghari et al., 2022, p. 10).
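The point that a denial may be produced by deployment choices rather than by the model itself can be illustrated with a deliberately simple sketch; the similarity scores and thresholds below are hypothetical and do not reflect any particular facial recognition product.

```python
# Toy illustration (hypothetical): the matching logic is unchanged, but the
# operator's choice of similarity threshold alone determines whether the same
# individual is admitted or denied. Values are invented for illustration.

def admit(similarity: float, threshold: float) -> bool:
    """Grant access only if the face-match similarity reaches the operator-set threshold."""
    return similarity >= threshold

# The same probe image compared against the enrolled template yields 0.83.
similarity = 0.83

print(admit(similarity, threshold=0.80))  # True: admitted under a moderate threshold
print(admit(similarity, threshold=0.90))  # False: denied once the operator raises the threshold
```

In such a scenario, explaining only the comparison step reveals little; the meaningful choice to justify is the threshold set at deployment and, further back, the data used to train the matcher.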
In any case, even when controllers are able to provide meaningful information to data subjects, they may still have other reasons for choosing not to do so.
4. Public interest reasons against the right of access to AI
As discussed so far, the intrinsic opacity of complex AI decision-making systems challenges the application of Article 15(1)(h). However, there is another layer of opacity stemming from legal constraints, or “legal opacity” (Burrell, 2016). These limitations derive from competing interests in disclosing information about an AI system: notably, trade secrecy and the privacy of third-party individuals that may be compromised during information disclosure, as well as other competing interests based on public interest reasons. This paper examines the latter objections, in particular reasons based on preventing the circumvention of the law and ensuring government efficiency, which may serve the protection of various public interest goals, including the integrity of individuals and the timely administration of justice.
4.1. The effectiveness of the law and the risk of gaming the AI system
Governments may restrict access to information about the AI decision-making process for fear that individuals targeted by the system will “game” it, thus circumventing the system's goal (stipulated by law). An AI decision-making system can be gamed even without knowledge of the “proxies involved”, requiring “only the relationships between input data and features and between features and outcome variable predictions” (Cofone & Strandburg, 2019, p. 640). Targeted actors who understand the proxies used by the AI can modify their behaviour accordingly to alter inputs or obfuscate the system's detection (Bambauer & Zarsky, 2018; Floridi et al., 2020, p. 1778). For example, a law allowing the use of face recognition to prevent violence within protests may be circumvented by protesters who learn to use artefacts such as masks, make-up, or prosthetics to confuse, mislead, and obfuscate the system's recognition process (Brunton & Nissenbaum, 2015). Refusals to disclose AI processing information may thus be justified by public interest concerns, particularly to prevent targets of the system from exploiting information to evade detection. This approach aims to safeguard the law's effectiveness in, for example, preventing welfare fraud or ensuring fairness in welfare allocation.
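To illustrate how knowledge of a system's proxies translates into gaming, consider the following minimal sketch of a hypothetical linear risk-scoring model; the proxies, weights, and threshold are invented and do not correspond to SyRI or any real fraud-detection system.

```python
# Toy illustration (hypothetical): how knowledge of a risk model's proxies and
# weights can be used to "game" it. Features, weights, and the threshold are
# invented for illustration and do not reflect any real welfare-fraud system.

RISK_WEIGHTS = {
    "irregular_income_reports": 0.5,   # proxy: frequent changes in declared income
    "shared_address_count": 0.3,       # proxy: number of people registered at the address
    "late_form_submissions": 0.2,      # proxy: administrative punctuality
}
FLAG_THRESHOLD = 0.6

def risk_score(applicant: dict) -> float:
    """Weighted sum of normalised proxy values (each between 0 and 1)."""
    return sum(RISK_WEIGHTS[f] * applicant.get(f, 0.0) for f in RISK_WEIGHTS)

honest = {"irregular_income_reports": 0.9, "shared_address_count": 0.6, "late_form_submissions": 0.4}
print(risk_score(honest), risk_score(honest) >= FLAG_THRESHOLD)   # 0.71 True: flagged

# Knowing the weights, the applicant changes only the heaviest proxy
# (e.g. by timing income declarations), not the underlying conduct.
gamed = dict(honest, irregular_income_reports=0.1)
print(risk_score(gamed), risk_score(gamed) >= FLAG_THRESHOLD)     # 0.31 False: no longer flagged
```

Nothing about the applicant's underlying conduct has changed; only the reported proxy has been adjusted to stay below the flagging threshold, which is precisely the behaviour that withholding the risk model and its indicators is meant to prevent.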
Ensuring the effectiveness of government operations has been used as an argument to limit and restrict information about the system's processing. The CJEU has stated in this regard, especially in scenarios of criminal investigation, that some form of limited provision of information (notification) to individuals subject to investigation must take place, but only once such notification is no longer liable to jeopardise the effectiveness of the investigation (La Quadrature du Net, C-511/18, p. 191; Tele2 Sverige, C-203/15, p. 121).
Gaming the system has also been used as an argument against information disclosure in the context of tax and welfare applications. An illustrative case is the Systeem Risico Indicatie (SyRI) system used by the Dutch government to combat fraud in welfare applications. The government withheld AI information, such as the risk model and its indicators, from data subjects to prevent them from “gaming the system” by adjusting their behaviour (SyRI case, 2020, p. 6.49). The Court deemed this explanation a “deliberate choice” and considered that the government did not provide sufficient information in the SyRI legislation for data subjects to assess whether “objective factual data justifiably indicated an increased risk of fraud” (p. 6.87), information that was important to ensure the plausibility of risky behaviour and to address potential (unintentional) discriminatory effects associated with these risk models and analyses.
It is important to note that limiting access to information about the AI system to prevent gaming can also intentionally conceal problematic algorithm design (Cofone & Strandburg, 2019, p. 658). Some decision-makers may advocate for secrecy not just to avoid gaming but also to hide undesirable aspects of the system (Cofone & Strandburg, 2019, p. 658). Thus, justifications cannot rely solely on the protection of the public interest in the effectiveness of the law. Instead, these claims must be proven and must also demonstrate that potential system flaws were addressed.
4.2. Government efficiency and the excessive burden of access requests
Information disclosure, when excessive or misguided, can disrupt and influence operators' behaviour in unexpected ways, impairing the efficiency of their operations (Heald, 2006, p. 61; Endicott, 2021). When several alternatives lead to the same public goal, governments have to select the one requiring the least resources. This principle of “efficiency” is central to rational decision-making in public administration (Simon, 1997, pp. 38-39). By prioritising efficiency, governments help maintain the normal functioning of the public administration. The efficiency of the administration may protect important public interest objectives such as the timely administration of justice, which can be compromised by excessive and repetitive access requests that overburden public officials and hinder their ability to process administrative claims promptly. Other objections based on public interest reasons related to the efficiency of the administration include the protection of public funds or cost savings in administration. However, the EDPB has argued that “financial burden on public budgets are not sufficient to justify a public interest in restricting the rights of the data subjects” (EDPB, 2021, p. 10).
When information disclosure obligations are fulfilled without accounting for efficiency-related public interest objections, they may hinder public authorities' ability to fulfil their functions and duties. In addition to being a time-consuming and costly endeavour, releasing information can result in pointless legal proceedings (Endicott, 2021, p. 189). In many cases, processing a request for access to information may require a great deal of time or may prove excessive. For instance, in the Österreichische Post AG case (Case C‑154/21), the applicant requested that the controller disclose the actual identities of the recipients to whom the data subject's information had been or would be disclosed. While the Court considered this necessary to ensure the effectiveness of data protection rights (p. 39), it also admitted that exceptions to such requests exist when it is impossible to identify these recipients or when the request is manifestly unfounded or excessive (p. 48). In those cases the law must be interpreted considering controllers' responsibilities, powers, and capabilities (p. 51). In this case, the mere fact that the data had been disclosed to a large number of recipients did not render the request excessive (p. 188).
Article 12(5) GDPR allows controllers to refuse to act on, or charge a fee for, access requests that are manifestly unfounded or excessive; this applies when fulfilling such requests would involve a disproportionate effort on the part of controllers. For instance, controllers may want to restrict access because they are unable to provide the information, as it no longer exists, or because the request is repetitive. However, for this restriction to be justified, controllers must prove that the request in fact requires a disproportionate effort. In the context of automated decision-making (ADM), this is especially relevant considering the complexity of these systems, which may put controllers in a difficult position to identify the information needed to fulfil access requests. As Mahieu points out, controllers cannot use the complexity of the processing alone as a reason to restrict access to information. In fact, as operations become more complex, so do the burdens, which must be assessed through the proportionality principle (Mahieu, 2023, p. 174).
Furthermore, information disclosure may also influence the freedom of deliberation of decision-makers. The Dutch government invoked this argument in the YS case (C-141/12), which dealt with the denial of access to the rejection decisions on residence permit applications for third-country nationals in the Netherlands. Ministers refused to provide these nationals with a copy of the minute containing the legal analysis of those decisions, fearing that it would affect the freedom of the case worker responsible for compiling the minute and potentially compromise the decision-making process (p. 25). Additionally, when access requests involve linking decisions to specific individuals, they can impose a burden on public officials and potentially impact their privacy (Pankki case, C-579/21, pp. 76-77, 79; FT case, Case C‑307/22).
Moreover, an excessive burden may arise when requests for access refer to information that no longer exists. For instance, in the Rijkeboer case (C-553/07), a data subject requested from the Board of Aldermen of Rotterdam access to all information about his personal data that had been disclosed to third parties during the previous two years. This request was partially granted: it covered data disclosed in the year prior to his request but not the year before that, as the Board argued that such data had been erased under Dutch law (pp. 23-25). Although the Court decided that the restriction on the right of access did not strike a fair balance due to the difference in treatment between basic personal information and the requested information about third-party access (p. 66), it explained that in the case of numerous recipients or frequent disclosure to a small number of recipients, keeping information on the recipients for a very long time could impose an excessive and disproportionate burden on the controller (pp. 59-62).
4.3. Article 23 GDPR
The objections mentioned may fall under Article 23(1)(e) of the GDPR, which addresses “important objectives of general public interest”. However, controllers must meet several conditions for this provision to apply. Article 23(1) outlines specific requirements for legally justifying restrictions on data subject rights. The EDPB's guidelines on Article 23 highlight that a primary objective of data protection law is to enhance individuals' control over their personal data. Therefore, any imposed restrictions must respect the essence of fundamental rights; excessive restrictions that undermine these rights' basic content cannot be justified (EDPB, 2021, p. 7). This is an aspect acknowledged by Advocate General Pikamäe in the SCHUFA case, where he noted the tension between the right of access to ADM processing information and other interests, such as the trade secrecy rights of SCHUFA (p. 56).4 The AG acknowledged that a complete refusal of information would undermine the essence of the right to data protection. While there is limited development on the essence of this right in the context of ADM, some scholars have acknowledged the constitutive role of the right of access within the right to data protection under Article 8 of the Charter (Zanfir-Fortuna, 2020; Kranenborg, 2021).
Additionally, Article 23 requires the establishment of legislative measures by Union or Member State law. Recital 41 stresses that legal bases must be clear and precise, allowing individuals to understand the conditions under which restrictions may apply. While legislative measures must align with the pursued objectives, they do not necessarily need to be time-bound; some restrictions may address ongoing objectives without a specific timeframe, while others related to temporary situations, such as emergencies, must clearly state their duration (EDPB, 2021, p. 8).
Furthermore, restrictions under Article 23 are lawful only if they are necessary and proportionate. The assessment of whether these restrictions are proportionate to the aims sought by the government must occur before enacting the law and must be supported by sufficient evidence across the proportionality test's various sub-tests (EDPB, 2021, p. 12). The next section will explore the characteristics and role of proportionality in restricting access rights based on public interest.
5. The principle of proportionality
Recognising that certain rights, such as the right of access, are not absolute and can be legitimately restricted for public interest objectives serves as the starting point for understanding the principle of proportionality. Proportionality is a legal test commonly employed by courts globally to assess the justification of government restrictions on fundamental rights (Barak, 2012). Originating from Prussian courts in the late nineteenth century to limit police arbitrariness, proportionality gained recognition as a legal standard after World War II (Peters, 2021, p. 1136). In the European Union, the test is enshrined in Article 52(1) of the Charter of Fundamental Rights, which outlines conditions for limiting fundamental rights. It is also embedded in Article 23 of the GDPR, which establishes conditions and grounds for restricting “obligations and rights provided for in Articles 12 to 22”, provided these restrictions do not violate the essence of the right to data protection, and are necessary and proportionate in a democratic society.
This test is typically composed of three sequential sub-tests: suitability, necessity, and proportionality stricto sensu, often preceded by the identification of the restrictive measure's legitimate purpose. The suitability test evaluates whether a restrictive measure “would rationally lead” to or “increase the likelihood” of achieving its purpose, known as the rational connection. Advocate General Emiliou in the FT case, which concerned the imposition of a fee requirement for accessing personal health files, considered that the fee may deter unnecessary claims and avoid excessive burdens on controllers. Thereby, “doctors are less likely to have to employ their time and resources for avoidable administrative tasks”, in that way ensuring the public interest in cost-saving and efficient government (FT case AG Opinion, C-307/22, p. 54).
The necessity test assesses whether the chosen measure is the least harmful to the restricted right among the available alternatives. The measure must be equally effective and the least restrictive means of achieving its objective (Barak, 2012). If a less harmful alternative fulfilling these conditions exists, choosing that alternative would be the more rational decision (Gerards, 2020, p. 8). It follows that instead of restricting full access to AI processing information, other means of providing such information while preserving its secrecy should be the preferred route. For instance, trusted partners such as data trusts or intermediaries may serve as alternatives for evaluating the accuracy and lawfulness of system outputs while preserving information secrecy (Matulionyte, 2021, p. 12; Giannopoulou et al., 2022, p. 318). Furthermore, even if confidentiality outweighs the need for information, it cannot be used as a reason to refuse disclosure altogether if milder means, such as blacking out parts of the information or non-disclosure agreements, are available (Schmidt-Wudy, 2021, Rn. 78.4).
The suitability and necessity tests are means-ends tests aimed at ensuring that restrictions on rights are not useless or more intense than necessary to achieve the public interest objective (Peters, 2021, p. 1140). In contrast, the proportionality stricto sensu test, also known as the balancing stage, is result-oriented and value-laden (Barak, 2012, p. 342). It assesses the proper relation between the benefits gained by the restrictive measure and the harm caused to the right in pursuing that purpose. This test embodies the idea that greater government-imposed harms should be justified by weightier reasons (Alexy, 2003). While the necessity and suitability tests are grounded in factual analysis of the means and ends of measures (Klatt & Meister, 2012), proportionality stricto sensu is the stage that directly addresses the weight given to rights in tension. Barak contends that any principled approach to the status of rights, including those that grant rights a special status and thereby a high level of protection, can be incorporated at this stage (Barak, 2012, p. 489).
The advantages of proportionality as a review mechanism are manifold. Most constitutional scholars praise proportionality for facilitating a rational and equitable comparison of diverse interests, ideas, values, and facts (Beatty, 2004, p. 169). However, the test is not without issues.
Some scholars consider the last step, proportionality stricto sensu, to be subjective and irrational because it attempts to measure incommensurable competing interests (rights and public interests), which, according to Tsakyrakis, puts fundamental rights on the same footing as public interests. He asserts that shifting the focus from determining right or wrong in human rights cases to assessing appropriateness, adequacy, intensity, or scope constitutes a genuine assault on the notion of human rights, leading to a “loss of rights” (Tsakyrakis, 2009, p. 474). This makes decisions unpredictable and context-specific, dependent on individual circumstances, which in turn sacrifices legal certainty, coherence, and generality (Pulido, 2006, p. 197; da Silva, 2011, p. 278). Moreover, Janneke Gerards explains that judges cannot be neutral if they have to balance incommensurable interests such as freedom of information and data protection objectively; expecting otherwise would be unrealistic because no reliable, neutral, and objective instruments exist to measure intangible matters (Gerards, 2020, p. 3). Along these lines, the test gives judges excessive discretion to imbue the analysis with their personal preferences (value judgments), undermining the rule of law (Peters, 2021, p. 1137).
Contrary to these claims, advocates of proportionality argue that its balancing stage does not lead to a “loss” of rights because there is always a presumption in the balancing in favour of a fundamental right against other interests (Tridimas, 2024, p. 178; Schauer, 2014). Moreover, as Beatty admits, even when there is no such presumption, the fact that rights and interests are evaluated on the same footing is what makes proportionality objective and integral (Beatty, 2004, p. 172). This does not mean that balancing is completely value-free, as the considerations in a case are endless; balancing requires that at some point value judgments be made (Peters, 2021, p. 1142). Yet the open, logical, and deliberative process of proportionality contributes to shaping justifications that can be accepted by all parties. Furthermore, relying on a single metric to describe and assess competing interests is likely not the appropriate approach, as it would overlook their nuanced qualities and distort their actual impact on well-lived lives and important legal considerations (Sunstein, 1993, p. 20).
Moreover, even though proportionality allows for discretion of public decision-makers, it also sets limits on such discretion. Balancing serves as a check on discretionary powers, requiring decision-makers to provide justifications firmly grounded in the relevant legal framework (Jackson, 2015; Enqvist & Naarttijärvi, 2023). In other words, proportionality is a double-edged sword: while it allows for limitations on rights, it also protects them from arbitrariness (Herlin-Karnell, 2021, p. 2).
In essence, the principle of proportionality offers the examiner the benefits of flexibility and the ability to accommodate seemingly incompatible values, underscoring its nuanced nature (Cohen-Eliya & Porat, 2011; Peters, 2021).
5.1. The justificatory role of proportionality in restrictions on the right of access
The concepts of public interest and proportionality are interconnected, as the latter provides a mechanism for assessing the former. Proportionality evaluates the compatibility of public interest restrictions imposed by the government with fundamental rights. In that sense, proportionality is deemed an instrument leading to proper justifications (Cohen-Eliya & Porat, 2019). As seen in the previous section, the principle of proportionality brings a transparent and deliberative character to the review of public interest reasons and provides a stable and structured framework for decision-making (Jackson, 2015, p. 3142; Sauter, 2013). This methodical approach ensures that decisions are grounded in reasons rather than arbitrariness, allowing affected individuals to scrutinise and challenge them (Peters, 2021, p. 1143). I would add here that not all types of reasons can be considered justifications that create such accountability. Justifications demand substantive reasons, which implies that public authorities, when issuing individual adverse decisions, must provide not just any reasons but “proper” reasons (Cohen-Eliya & Porat, 2011, p. 474). Applying proportionality requires governments to justify their actions based on reasonableness or correctness, entailing a proper balance between conflicting considerations and appropriate means-ends rationality (Cohen-Eliya & Porat, 2011, p. 480). These justifications must be understandable, meaningful, and acceptable to the parties and the general public (Jackson, 2015, p. 3142; Stone, 2020, p. 145). This implies that while not everyone may agree with the analysis, the legitimacy of such arguments is acknowledged, even in the presence of disagreements about the outcomes (Stone, 2020, p. 145).
Proportionality signals to governments and their officials the need for stronger and more compelling justifications for decisions that impose significant burdens and disadvantages on individuals, compared to cases where violations of rights and liberties are less severe (Beatty, 2004, p. 164). It follows that when balancing the data protection right of access against other public interest grounds, governments must provide sufficient reasons. It is insufficient to merely assert that disclosing information is not possible because it would jeopardise the effectiveness of the law or the efficiency of the government. The government would likely need to prove that this is the case and provide safeguards to the data subject to mitigate the impact of the restriction on their data protection rights. While proportionality offers a structured framework, the appropriate level of transparency for a given case – or whether the right of access to AI should prevail over other public interests – cannot be determined upfront. Yet, the CJEU's case law may point to factors that are important for the government to consider when justifying its restrictive measures.
5.2. Balancing factors for public interest justifications
Proportionality serves as “structural guidance”, enabling decision-makers to render morally justifiable decisions, a concept that can be applied to transparency conflicts in AI contexts (Karliuk, 2023, p. 987). While proportionality and its balancing function do not provide a strict set of steps, we can learn from the CJEU's considerations in cases involving the conflict between the data protection right of access, generally, and public interest objections, such as protecting the efficiency of the administration and the end goal of the law on which the system's operations are based.
Scholars have identified factors used by courts to balance rights and interests (Barak, 2010, p. 487; Gerards, 2020; Tridimas, 2024). Gerards notes that such factors are used to weigh and explain why certain rights may be outweighed by other interests in some cases (Gerards, 2020). Barak, for instance, emphasises that the likelihood of a restrictive measure's purpose being achieved and the probability of limiting a right are key in determining the relative weight of these factors (Barak, 2010, pp. 10-15). Tridimas found that the CJEU also considers various factors, such as the importance of the restricted right and the seriousness of the restriction, to tilt the balance towards one or the other interest in conflict (Tridimas, 2024, p. 10).
These factors identified in case law do not only guide courts; they may also guide governments on the important points they need to consider to properly justify rights restrictions based on public interest. The use of such factors in balancing rights and public interests does not negate the crucial role that courts play in safeguarding the constitutional framework, including its core values and rights (Novelli et al., 2024). However, considering these factors may give decision-makers and data subjects some clarity on what needs to be justified when balancing rights with other interests.
Drawing from past cases on disproportionate measures restricting the right of access, ex-ante mechanisms can play a crucial role in strengthening public interest justifications for access restrictions, such as those based on the excessive burden on the government. For instance, simply arguing that unlimited storage is burdensome, without considering alternative measures or safeguards, is insufficient. In Rijkeboer, for example, the Court established that time-limit provisions and notifying data subjects about the deletion of their records could have provided them with the opportunity to take action, such as requesting access to the decision-making process (effective remedies), before the deletion occurred (Rijkeboer, C-553/07, p. 63). In light of this, Member States have to establish a reasonable time limit for storing information to maintain a balance between ensuring the rights of the data subject and minimising the burden associated with the controller's obligation to store that information (p. 64). They must consider the time limits laid down in the law for bringing an action, the sensitivity of the data, the storage duration, and the number of recipients (Rijkeboer, Case C-553/07, pp. 63-64). Thus, when justifying restrictions on access requests, it is important for the government to consider whether ex-ante mechanisms can be implemented to provide data subjects with alternative remedies to fulfil their access rights. Although not mandatory, considering these factors may contribute to a fair balance.
Moreover, controllers have to pay special attention to the nature of the data they are dealing with, as the CJEU has on many occasions set a higher threshold for information obligations when sensitive data is involved. For instance, in the FT case, the Court highlighted the importance of ensuring that individuals subject to personal data processing by a medical practitioner are given access to the data in their medical records as fully and precisely as possible, in a legible form (FT, C‑307/22, p. 81). The Court explained that, given the technical nature of examination results and treatments, presenting a simple summary or aggregate compilation by a medical practitioner may risk omitting or inaccurately reproducing relevant data, making it difficult for the patient to verify the accuracy and completeness of the data (p. 78).
Another factor to consider is the seriousness of the objective pursued by the government compared to the seriousness of the interference with a right. Alexy refers to this in his weight formula, stating that “the greater the degree of non-satisfaction of, or detriment to, one right or principle, the greater must be the importance of satisfying the other” (Alexy, 2010, p. 28). As noted above, proportionality tells governments and their officials that stronger and more compelling reasons are needed for decisions that impose significant burdens and disadvantages on individuals than in cases where violations of rights and liberties are less serious (Beatty, 2004, p. 164). The Court has highlighted the seriousness of the infringement in data protection cases, such as Google Spain, which concerned a request for the removal of personal data from search results. Among other aspects, the Court considered that a search engine's ability to allow “any internet user” to access detailed private life information constitutes a serious interference. Therefore, a balancing exercise must consider the nature of the information in question and its sensitivity for the data subject's private life (Google Spain, Case C-131/12, p. 81). This suggests a link between the sensitive nature of the information being processed and the seriousness of a restriction on data protection rights, thus demanding a higher level of safeguards.
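Alexy's idea can be rendered in a compact form through his weight formula. The following is a simplified, stylised presentation based on standard accounts of the formula, rather than a reproduction of the sources cited above:

\[ W_{i,j} = \frac{I_i \cdot W_i \cdot R_i}{I_j \cdot W_j \cdot R_j} \]

Here, I_i stands for the intensity of the interference with the right (for example, the right of access), W_i for its abstract weight, and R_i for the reliability of the empirical assumptions about that interference; I_j, W_j, and R_j express the corresponding values for the competing public interest (for example, preventing the gaming of the system). On this stylised reading, a restriction would be disproportionate stricto sensu where W_{i,j} exceeds 1, that is, where the concrete weight of the right outweighs that of the public interest pursued.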
In essence, restrictions on the right of access to AI decision-making process information, grounded in the public interest, must demonstrate that their benefits outweigh their impact on individuals. The factors reviewed above indicate that balancing these two competing interests – public interest restrictions on the one hand and the right of access to AI decision-making information on the other – is not an easy task. Governments must provide proper justifications by considering, for instance, the importance of the restricted right, the seriousness of the restriction, and the nature of the personal data at hand. This assessment is crucial in order to implement preventative measures that ensure the timely exercise of access rights or safeguards that offer alternative redress mechanisms. These factors may guide governments in determining whether restrictions on the right of access to AI decision-making process information are legally viable.
6. Additional difficulties
The burden of proving that disclosing AI decision-making information would indeed jeopardise the public interest falls on controllers. For example, controllers might need to prove that disclosing information about the system's logic could enable the target population to “game the system”, or that there is no alternative way to provide the necessary information without revealing details about the system's proxies. Controllers often depend on other actors to properly justify those restrictions, as information about these technical limitations is held by those actors rather than by the controllers themselves.
The issue is that controllers are the sole targets of Article 15(1)(h) of the GDPR, meaning that they alone are responsible for justifying restrictions on access rights. The challenge lies in their ability to provide factually based justifications for restrictions when they do not hold relevant information about the system's “logic involved”. As such, controllers not only struggle to explain the underlying logic of the system, but they also cannot justify why such information cannot be disclosed.
Article 15(1)(h) of the GDPR assumes that controllers have sufficient control and understanding of the systems they use to intervene when they fail and to respond to access requests. However, in practice, controllers often lack the necessary information to do so. Instead, technology providers – who are not legally bound by the GDPR – hold greater knowledge about the systems' potential risks (Colonna, 2023, p. 9). Elish argues that blaming the operator alone for AI malfunctions is flawed, as it turns them into the “moral crumple zone” for system failures, even when they have limited control over the system (Elish, 2019, p. 41).
The SCHUFA case (C-634/21), which involved a credit scoring assessment that led to a loan refusal, sheds light on this aspect. The CJEU interpreted Article 22(1) of the GDPR broadly, considering that an automated “decision” can encompass multiple acts affecting the data subject in various ways (p. 46). According to the Court, a low probability value produced by SCHUFA typically led to the denial of a loan, meaning the credit scoring assessment itself qualified as a decision under Article 22(1) GDPR and made SCHUFA subject to this provision (p. 50). The Court cautioned that restricting Article 22(1) to the final decision-maker (the bank) would undermine its aim of protecting individuals from the risks posed by fully automated processing, including unlawful discrimination (p. 57). Conversely, a broader interpretation ensures the effectiveness of the law and allows data subjects to assert their right of access to specific information under Article 15(1)(h) GDPR. Most importantly, the Court noted that even if only the bank were covered by Article 22(1), it might still lack the specific information required under Article 15(1)(h), as it generally does not hold it (p. 63).
The Court has previously interpreted GDPR concepts broadly to ensure the “complete and effective” protection of individuals. In Wirtschaftsakademie (Case C-210/16), which involved a company administering a Facebook fan page, the Court addressed the distribution of controller roles in the collection of statistical information about the fan page's visitors via the “Insights” function, without their consent (p. 15). The Court ruled that, for the “effective and complete” protection of data subjects, the concept of a “controller” under Article 2(d) of Directive 95/46 must be interpreted broadly (p. 28). This interpretation extends not only to the social network itself but also to entities that, by creating fan pages, contribute to the processing of personal data, since creating a fan page enables Facebook to place cookies on the devices of visitors, whether or not they have a Facebook account (p. 35). Moreover, by defining parameters that influence how personal data is collected and processed by Facebook, Wirtschaftsakademie (the fan page administrator) participates in determining the purposes and means of processing visitors' personal data (p. 36). As a result, it is considered a joint controller with Facebook Ireland under Article 2(d) (p. 39).
Cases such as Jehovan Todistajat (Case C-25/17) and Fashion ID (Case C-40/17) have followed a similar approach, extending the scope of “data controller” to the various parties influencing the data processing. According to Mahieu, the law's intent in recognising these relationships among different actors in the decision-making process is to ensure that data subjects can exercise their rights against any of the joint controllers, regardless of their complex relationships (Mahieu, 2023, p. 153). Given the trend in CJEU case law towards broad interpretations of key GDPR concepts, it is likely that this practice will extend to the concepts of “meaningful” information and the “logic involved” in AI decision-making processes. This is particularly relevant given that the case law reviewed shows that data processing operations are becoming more complex than anticipated.
The same complexity applies to the distribution of responsibility between controllers and those who hold information about the decision-making process and are, therefore, in a better position to fulfil Article 15(1)(h) of the GDPR. This includes not only entities such as SCHUFA, which act as joint controllers, but also other actors in the ADM lifecycle, such as system designers, developers, or providers. While extending Article 15(1)(h) to cover information held by providers may serve the provision's purpose, some authors argue that interpreting GDPR concepts broadly is not always the ideal way to ensure the effectiveness of the law, as it may have the opposite effect. For instance, Purtova points out that an overly broad interpretation of “personal data” in the GDPR risks creating an all-encompassing law in which “personal data” could apply to nearly anything, including weather information, as long as it can identify individuals or make them identifiable (Purtova, 2018). Similarly, Lynskey suggests that the Court's broad interpretation of “controller” could lead to an untenable situation in which virtually anyone could be considered a controller. She contends that, while the intention behind making the GDPR “effective and complete” is commendable, it may result in legal uncertainty and undermine the law's practical enforcement (Lynskey, 2023, p. 299). Consequently, she calls for a radical overhaul of the law (Lynskey, 2023, p. 300).
This paper acknowledges the need to rewrite the GDPR; however, such a solution is a long-term prospect, given the extensive deliberation involved in amending or updating a law. In the short term, it does not address the urgent issue of potentially unfair ADM. We cannot overlook the primary goal of the GDPR and of the right of access under Article 15(1)(h), namely to equip individuals to make their case against wrong decisions. To achieve this goal, the “logic involved” must encompass information beyond the decision model alone, which offers only a partial view of why decisions are made in a particular way (the determinant factors and their weighting). When restrictions on the right of access are justified by the public interest, information about design, development, and deployment choices, as explained previously, becomes even more relevant if we are truly concerned about the risk of bias or failures in the decision-making process. The information provided (although restricted) should therefore include justifications for the measures taken to prevent bias or failures at the design, development, and deployment stages. Can this be achieved solely by contractual means, as envisioned in the GDPR?
The GDPR acknowledges the distribution of responsibility in processing operations by specifying that relationships between actors, including processors and joint controllers, should be governed by contractual agreements (Articles 26 and 28 GDPR). Despite this framework, the legal consequences of failing to adhere to such arrangements remain unclear (Mahieu et al., 2019, p. 87). Although the CJEU's recognition of multiple actors in processing chains is beneficial, even sophisticated data protection laws may struggle to address the complexities of algorithmic supply chains (Cobbe et al., 2023, p. 8). To begin with, the GDPR does not account for the asymmetry of power and information in the relationships between controllers and other actors in the system supply chain. This leaves controllers dependent on providers' “best efforts” to supply the information necessary for compliance with Article 15(1)(h) of the GDPR.
In practice, controllers have limited negotiating power. Sanchez-Graells notes that, in public procurement for instance, “regulation by contract” assumes the public buyer's functional superiority. However, the effectiveness of such contracts relies heavily on the public buyer's ability to enforce, scrutinise, and monitor the contractor, which often gives rise to a “weak public buyer” problem (Sanchez-Graells, 2024, p. 57). Public administrations may face challenges due to informational disadvantages, especially when dealing with large, dominant contractors that offer “take it or leave it” contracts (Cobbe et al., 2023). Moreover, information asymmetries can undermine the public administration's bargaining position and its ability to create effective contracts, as seen when public buyers fail to alter the terms offered by tech providers (Sanchez-Graells, 2024, p. 58).
Even when providers are contractually obliged, this does not guarantee immediate compliance, as they may challenge controllers' claims through lengthy and costly legal procedures (Cobbe et al., 2023). Thus, directly obliging providers by law, rather than by contract, may be a more effective way to ensure they provide detailed information about processing operations and specific decision outcomes.
The recently adopted AI Act addresses this issue by requiring providers to disclose specific information about a system's limitations and affordances. The interplay between the GDPR and the AI Act is particularly evident in their approaches to ADM and its information requirements. Article 13 of the AI Act sets out transparency requirements for providers of AI systems, mandating them to provide sufficient information to guide deployers' use of the system and interpretation of its outputs. While the AI Act will affect the implementation of Article 15(1)(h) of the GDPR, it only partially aligns with its goals. First, “AI systems” under the AI Act and “ADM” under the GDPR are not inherently equivalent; qualifying under one law does not automatically trigger obligations under the other. Additionally, although both regulations aim to protect fundamental rights, they adopt different approaches and points of focus. The AI Act focuses on minimising, rather than eliminating, the risks associated with AI-assisted decisions through ex-ante requirements and conformity assessments (Palmiotto, 2024, p. 11). These requirements apply mainly to providers during the design and development stages, and to deployers only to a lesser extent. By contrast, the GDPR applies mainly at the decision-making stage, once the system is already on the market, with the exception of specific ex-ante requirements implemented before the system is put into use (Arts. 25 and 35 GDPR).
While the application of transparency requirements to providers under the AI Act may facilitate the fulfilment of Article 15(1)(h) GDPR, its primary focus is ensuring the deployer's understanding of the system's functionality, rather than the data subject's understanding of the AI decision-making process.
Ultimately, justifications for restrictions on the right of access that rest on the information made available to deployers under the AI Act may not be sufficient to substantiate public interest objections. Additional efforts to justify these restrictions through safeguards and mitigation mechanisms may therefore be necessary while waiting for clearer and more comprehensive legislation.
7. Conclusion
This paper addressed the challenges of balancing the right of access to AI processing information under Article 15(1)(h) GDPR against objections based on the public interest, such as preventing circumvention of the law and avoiding excessive burdens on controllers. It highlighted the practical difficulty controllers face in fulfilling Article 15(1)(h) GDPR, given that they have limited control over AI decision-making systems and do not hold meaningful information about the system's logic. The paper examined objections to accessing AI decision-making information on public interest grounds and the need for controllers to justify these objections. It argued that the principle of proportionality, despite its limitations, provides a useful framework for doing so. Proportionality and its balancing factors offer a transparent and structured approach through which data subjects can understand the justifications behind public interest restrictions on their data protection rights. Moreover, the principle helps shape these justifications through concrete factors, such as the nature of the data, the seriousness of the restriction, alternative redress mechanisms, and safeguards to mitigate the impact of restrictions.
References
Alexy, R. (2003). On balancing and subsumption. A structural comparison. Ratio Juris, 16(4), 433–449. https://doi.org/10.1046/j.0952-1917.2003.00244.x
Anthony, G. (2013). Public interest and the three dimensions of judicial review. Northern Ireland Legal Quarterly, 64(2), 125–142. https://doi.org/10.53386/nilq.v64i2.339
Article 29 Working Party. (2014). Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC (Opinion No. WP 217). https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp217_en.pdf
Article 29 Working Party. (2003). Opinion 7/2003 on the re-use of public sector information and the protection of personal data (Opinion No. WP 83). https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2003/wp83_en.pdf
Article 29 Working Party. (2018). Guidelines on Automated Individual Decision-making and Profiling for the Purposes of Regulation 2016/679 (Guidelines No. WP251rev.01). https://ec.europa.eu/newsroom/article29/items/612053/en
Asghari, H., Birner, N., Burchardt, A., Dicks, D., Faßbender, J., Feldhus, N., Hewett, F., Hofmann, V., Kettemann, M. C., Schulz, W., Simon, J., Stolberg-Larsen, J., & Züger, T. (2022). What to explain when explaining is difficult. An interdisciplinary primer on XAI and meaningful information in automated decision-making [Report]. Explainable AI research clin. https://doi.org/10.5281/zenodo.6375784
Bambauer, J., & Zarsky, T. (2018). The algorithm game. Notre Dame Law Review, 94(1). https://scholarship.law.nd.edu/ndlr/vol94/iss1/1
Barak, A. (2010). Proportionality and principled balancing rights, balancing & proportionality. Law & Ethics of Human Rights, 4(1), 1–16. https://doi.org/10.2202/1938-2545.1041
Barak, A. (2012). Proper purpose. In Proportionality: Constitutional rights and their limitations (pp. 245–302). Cambridge University Press. https://doi.org/10.1017/CBO9781139035293
Barry, B. M. (1964). The public interest (I). Proceedings of the Aristotelian Society, Supplementary Volumes, 38(1), 1–38. https://doi.org/10.1093/aristoteliansupp/38.1.1
Beatty, D. M. (2004). Proportionality. In The ultimate rule of law. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199269808.003.05
Bekker, S. (2021). Fundamental rights in digital welfare states: The case of SyRI in the Netherlands. In O. Spijkers, W. G. Werner, & R. A. Wessel (Eds.), Netherlands yearbook of international law 2019 (pp. 289–307). T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-403-7_24
Binns, R., & Veale, M. (2021). Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR. International Data Privacy Law, 11(4), 319–332. https://doi.org/10.1093/idpl/ipab020
Boot, E. R. (2024). The public interest: Clarifying a legal concept. Ratio Juris, 37(2), 110–129. https://doi.org/10.1111/raju.12401
Bozeman, B. (2007). Public values and public interest: Counterbalancing economic individualism. Georgetown University Press. http://www.jstor.org/stable/j.ctt2tt37c
Braun, J.-C., Constantaras, E., Aung, H., Geiger, G., Mehrotra, D., & Howden, D. (2023). Suspicion machine methodology [Report]. Lighthouse Reports. https://www.lighthousereports.com/methodology/suspicion-machine/
Brkan, M. (2017). Do algorithms rule the world? Algorithmic decision-making in the framework of the GDPR and beyond (No. 3124901). Social Science Research Network. https://doi.org/10.2139/ssrn.3124901
Brunton, F., & Nissenbaum, H. F. (2015). Obfuscation: A user’s guide for privacy and protest. The MIT Press.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512
Case C-131/12. (n.d.). Google Spain SL and Google Inc. v Agencia Española de Protección de Datos (AEPD) and Mario Costeja González. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62012CJ0131
Case C-141/12. (n.d.). YS v Minister voor Immigratie, Integratie en Asiel and Minister voor Immigratie, Integratie en Asiel v M and S. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62012CJ0141
Case C-203/15. (n.d.). Tele2 Sverige AB v Post- och telestyrelsen and Secretary of State for the Home Department v Tom Watson and Others. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:62015CJ0203
Case C‑300/21. (n.d.). Österreichische Post AG [Austrian Post AG]. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/GA/TXT/?uri=CELEX:62021CJ0300
Case C-307/22. (n.d.). FT (Copies du dossier médical) [Copies of medical records]. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62022CJ0307
Case C-434/16. (n.d.). Peter Nowak v Data Protection Commissioner. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62016CJ0434
Case C-511/18. (n.d.). La Quadrature du Net and Others v Premier ministre and Others. The Court of Justice of the European Union. https://curia.europa.eu/juris/liste.jsf?language=en&num=C-511/18
Case C-553/07. (n.d.). College van burgemeester en wethouders van Rotterdam v M.E.E. Rijkeboer. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62007CA0553
Case C-634/21. (n.d.). SCHUFA holding (scoring). The Court of Justice of the European Union. http://data.europa.eu/eli/C/2024/913/oj
Case C-817/19. (n.d.). Ligue des droits humains. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62019CJ0817
Cases C-92/09 and C-93/09. (n.d.). Volker und Markus Schecke GbR (C-92/09) and Hartmut Eifert (C-93/09) v Land Hessen. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62009CJ0092
Cobbe, J., Veale, M., & Singh, J. (2023). Understanding accountability in algorithmic supply chains. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1186–1197. https://doi.org/10.1145/3593013.3594073
Cofone, I. N., & Strandburg, K. J. (2019). Strategic games and algorithmic secrecy. McGill Law Journal, 64(4), 623–663. https://lawjournal.mcgill.ca/article/strategic-games-and-algorithmic-secrecy/
Coglianese, C., & Lehr, D. (2019). Transparency and algorithmic governance. Administrative Law Review, 71(1), 1–56. https://ssrn.com/abstract=3293008
Cohen-Eliya, M., & Porat, I. (2011). Proportionality and the culture of justification. The American Journal of Comparative Law, 59(2), 463–490. http://www.jstor.org/stable/23045668
Colonna, L. (2023). Addressing the responsibility gap in data protection by design: Towards a more future-oriented, relational, and distributed approach. Tilburg Law Review, 27(1), 1–21. https://doi.org/10.5334/tilr.274
Coyle, D., & Weller, A. (2020). ‘Explaining’ machine learning reveals policy challenges. Science, 368(6498), 1433–1434. https://doi.org/10.1126/science.aba9647
Crootof, R., Kaminski, M. E., & Price II, W. N. (2023). Humans in the loop. Vanderbilt Law Review, 76(2), 429–510. https://doi.org/10.2139/ssrn.4066781
da Silva, V. A. (2011). Comparing the incommensurable: Constitutional principles, balancing and rational decision. Oxford Journal of Legal Studies, 31(2), 273–301. https://doi.org/10.1093/ojls/gqr004
Diakopoulos, N. (2020). Transparency. In M. D. Dubber, F. Pasquale, & D. Sunit (Eds.), The Oxford handbook of ethics of AI (pp. 197–213). https://doi.org/10.1093/oxfordhb/9780190067397.013.11
Dove, E. S. (2018). The EU General Data Protection Regulation: Implications for international scientific research in the digital era. Journal of Law, Medicine & Ethics, 46(4), 1013–1030. https://doi.org/10.1177/1073110518822003
Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18–84. https://scholarship.law.duke.edu/dltr/vol16/iss1/2
Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60. https://doi.org/10.17351/ests2019.260
Endicott, T. (2021). Administrative law (5th ed.). Oxford University Press. https://doi.org/10.1093/he/9780192893567.001.0001
Enqvist, L., & Naarttijärvi, M. (2023). Discretion, automation, and proportionality. In M. Suksi (Ed.), The rule of law and automated decision-making: Exploring fundamentals of algorithmic governance (pp. 147–178). Springer International Publishing. https://doi.org/10.1007/978-3-031-30142-1_7
European Commission. (2021). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Fostering a European approach to Artificial Intelligence (Communication No. COM/2021/205). https://eur-lex.europa.eu/legal-content/en/TXT/?uri=COM:2021:205:FIN
European Commission. (2020). White paper on artificial intelligence: A European approach to excellence and trust (White Paper No. COM(2020) 65). https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
European Data Protection Board. (2023). Guidelines 01/2022 on data subject rights (Guidelines No. Version 2.1). https://www.edpb.europa.eu/system/files/2023-04/edpb_guidelines_202201_data_subject_rights_access_v2_en.pdf
European Data Protection Board (EDPB). (n.d.). Guidelines 10/2020 on restrictions under Article 23 GDPR (No. Version 2.1). https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-102020-restrictions-under-article-23-gdpr_en
Feintuck, M. (2004). 'The public interest’ in regulation. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199269020.001.0001
Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26(3), 1771–1796. https://doi.org/10.1007/s11948-020-00213-5
Fussey, P., Davies, B., & Innes, M. (2021). ‘Assisted’ facial recognition and the reinvention of suspicion and discretion in digital policing. The British Journal of Criminology, 61(2), 325–344. https://doi.org/10.1093/bjc/azaa068
Gerards, J. (2020). The age of balancing revisited? European Data Protection Law Review, 6(1), 13–20. https://doi.org/10.21552/edpl/2020/1/5
Giannopoulou, A., Ausloos, J., Delacroix, S., & Janssen, H. (2022). Intermediating data rights exercises: The role of legal mandates. International Data Privacy Law, 12(4), 316–331. https://doi.org/10.1093/idpl/ipac017
Gillis, T. B., & Simons, J. (2019). Explanation < justification: GDPR and the perils of privacy. Pennsylvania Journal of Law and Innovation, 2, 71–99. https://doi.org/10.2139/ssrn.3374668
Goldenfein, J. (2024). Lost in the loop: Who is the ‘human’ of the human in the loop (No. 4750634). Social Science Research Network. https://doi.org/10.2139/ssrn.4750634
Hacker, P., & Passoth, J.-H. (2020). Varieties of AI explanations under the law. From the GDPR to the AIA, and beyond. xxAI - Beyond Explainable AI, 343–373. https://doi.org/10.1007/978-3-031-04083-2_17
Heald, D. (2006). Transparency as an instrumental value. In C. Hood & D. Heald (Eds.), Transparency: The key to better governance? (pp. 58–73). Proceedings of the British Academy. https://doi.org/10.5871/bacad/9780197263839.003.0004
Held, V. (1970). The public interest and individual interests. Basic Books.
Herlin-Karnell, E. (2021). EU data protection and the principle of proportionality. Nordic Journal of European Law, 4(2), 66–74. https://doi.org/10.36969/njel.v4i2.23782
Jackson, V. (2015). Constitutional law in an age of proportionality. The Yale Law Journal, 124(8), 3094–3196. https://www.yalelawjournal.org/article/constitutional-law-in-an-age-of-proportionality
Karliuk, M. (2023). Proportionality principle for the ethics of artificial intelligence. AI and Ethics, 3(3), 985–990. https://doi.org/10.1007/s43681-022-00220-1
Klatt, M., & Meister, M. (2012). The constitutional structure of proportionality. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199662463.001.0001
Kranenborg, H. (2021). Article 8. In S. Peers, T. Hervey, J. Kenner, & A. Ward (Eds.), The EU Charter of Fundamental Rights: A commentary (2nd ed.). Hart Publishing.
Lehr, D., & Ohm, P. (2017). Playing with the data: What legal scholars should learn about machine learning. UC Davis Law Review, 51, 653–717. https://lawreview.law.ucdavis.edu/archives/51/2/playing-data-what-legal-scholars-should-learn-about-machine-learning
Lynskey, O. (2023). Complete and effective data protection. Current Legal Problems, 76(1), 297–344. https://doi.org/10.1093/clp/cuad009
Mahieu, R. (2023). The right of access to personal data in the EU [Doctoral dissertation]. https://researchportal.vub.be/en/publications/the-right-of-access-to-personal-data-in-the-eu-a-legal-and-empiri
Mahieu, R., van Hoboken, J., & Asghari, H. (2019). Responsibility for data protection in a networked world: On the question of the controller, ‘effective and complete protection’ and its application to data access rights in Europe. Journal of Intellectual Property, Information Technology and E-Commerce Law, 10(1). https://www.jipitec.eu/archive/issues/jipitec-10-1-2019/4879
Malgieri, G. (2021). ‘Just’ algorithms: Justification (beyond explanation) of automated decisions under the General Data Protection Regulation. Law and Business, 1(1), 16–28. https://doi.org/10.2478/law-2021-0003
Matulionyte, R. (2021). Reconciling trade secrets and AI explainability: Face recognition technologies as a case study (No. 3974221). Social Science Research Network. https://ssrn.com/abstract=3974221
Meyerson, M., & Banfield, E. C. (1955). Politics, planning, and the public interest: The case of public housing in Chicago. Free Press.
Naudts, L., Dewitte, P., & Ausloos, J. (2022). Meaningful transparency through data rights: A multidimensional analysis. In Research handbook on EU data protection law (pp. 530–571). Edward Elgar Publishing. https://doi.org/10.4337/9781800371682.00030
Neto, P. C., Gonçalves, T., Pinto, J. R., Silva, W., Sequeira, A. F., Ross, A., & Cardoso, J. S. (2022). Causality-inspired taxonomy for explainable artificial intelligence (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2208.09500
Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., & Floridi, L. (2024). AI risk assessment: A proportionality-based, risk model for the AI Act. Digital Society, 3(12). https://doi.org/10.1007/s44206-024-00095-1
Opinion 1/15 on the EU/Canada PNR Agreement. (n.d.). The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:62015CV0001(01)
Peters, A. (2021). A plea for proportionality: A reply to Yun-chien Chang and Xin Dai. International Journal of Constitutional Law, 19(3), 1135–1145. https://doi.org/10.1093/icon/moab071
Pulido, C. B. (2006). The rationality of balancing. Archiv für Rechts- und Sozialphilosophie, 92(2), 195–208. https://www.jstor.org/stable/23681588
Purtova, N. (2018). The law of everything. Broad concept of personal data and future of EU data protection law. Law, Innovation and Technology, 10(1), 40–81. https://doi.org/10.1080/17579961.2018.1452176
Regulation 2016/679. (n.d.). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR). European Parliament and Council. https://eur-lex.europa.eu/eli/reg/2016/679/oj
Sanchez-Graells, A. (2024). Digital technologies and public procurement: Gatekeeping and experimentation in digital public governance. Oxford University Press. https://doi.org/10.1093/oso/9780198866770.001.0001
Sauter, W. (2013). Proportionality in EU Law: A balancing act? Cambridge Yearbook of European Legal Studies, 15, 439–466.
Schauer, F. (2014). Proportionality and the question of weight. In G. Huscroft, B. W. Miller, & G. Webber (Eds.), Proportionality and the rule of law: Rights, justification, reasoning (pp. 173–185). Cambridge University Press. https://doi.org/10.1017/CBO9781107565272.011
Schmidt-Wudy, F. (2021). In Wolff & Brink (Eds.), BeckOK Datenschutzrecht [BeckOK Data Protection Law] (35th ed.). C.H. Beck, München.
Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 1–12. https://doi.org/10.1177/2053951717738104
Selbst, A. D., Boyd, D., Friedler, S., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598
Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–243. https://doi.org/10.1093/idpl/ipx022
Simon, H. A. (1997). Administrative behavior: A study of decision-making processes in administrative organization (4th ed.).
Sorauf, F. (1962). The conceptual muddle. In C. J. Friedrich (Ed.), Nomos V: The public interest.
Stone, A. (2020). Proportionality and Its alternatives. Federal Law Review, 48(1), 123–153. https://doi.org/10.1177/0067205X19890448
Sunstein, C. R. (1993). Incommensurability and valuation in law. Michigan Law Review, 92(4), 779–861. https://repository.law.umich.edu/mlr/vol92/iss4/2
SyRI Case. (2020). Judgment of the Rechtbank Den Haag (The Hague District Court) concerning the legislation on SyRI (Systeem Risico Indicatie) (No. ECLI:NL:RBDHA:2020:865). The Hague District Court.
Tridimas, T. (2023). Wreaking the wrongs: Balancing rights and the public interest in the EU way. Columbia Journal of European Law, 29(2), 185–213. https://cjel.law.columbia.edu/files/2023/04/10.-TRIDIMAS-PROOF.pdf
Tsakyrakis, S. (2009). Proportionality: An assault on human rights? International Journal of Constitutional Law, 7(3), 468–493. https://doi.org/10.1093/icon/mop011
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005
Waldron, J. (2006). Safety and security. Nebraska Law Review, 85(2), 454–507. https://digitalcommons.unl.edu/nlr/vol85/iss2/5
Wyatt, D. (2020). The anaemic existence of the overriding public interest in disclosure in the EU’s access to documents regime. German Law Journal, 21(4), 686–701. https://doi.org/10.1017/glj.2020.37
Zanfir-Fortuna, G. (2020). Article 15 Right of access by the data subject. In C. Kuner, L. A. Bygrave, C. Docksey, & L. Drechsler (Eds.), The EU General Data Protection Regulation (GDPR): A commentary (pp. 449–468). Oxford University Press. https://doi.org/10.1093/oso/9780198826491.003.0046
Züger, T., & Asghari, H. (2023). AI for the public. How public interest theory shifts the discourse on AI. AI & Society, 38(2), 815–828. https://doi.org/10.1007/s00146-022-01480-5
Footnotes
1. It is important to note that Article 15(1)(h) GDPR refers to fully automated decision-making (ADM) processes, including AI-based systems. Unlike other forms of fully automated processes, AI systems are inscrutable and opaque, which in turn poses an added challenge for the implementation of Article 15(1)(h). For this reason, this paper focuses primarily on AI decision-making systems whenever it refers to “ADM” and its transparency challenges under the data protection right of access to processing information.
2. The Article 29 Working Party was replaced by the European Data Protection Board (EDPB) with the entry into force of the GDPR on 25 May 2018.
3. See Section 6 for the details of the case. It is important to note that, while not binding, the Opinion of AG Pikamäe offers important guidance in the debate over the scope of Article 15(1)(h) GDPR.
4. See Section 6 for more details on the case.