The Artificial Intelligence Act. Taking normative imbalances seriously

Michał Araszkiewicz, Jagiellonian University, Department of Legal Theory, Cracow, Poland, michal.araszkiewicz@uj.edu.pl
Grzegorz J. Nalepa, Jagiellonian University, Department of Human-Centered Artificial Intelligence, Cracow, Poland
Radosław Pałosz, Jagiellonian University, Cracow, Poland

PUBLISHED ON: 17 Dec 2024

Funding Note

This paper was funded by the XPM (Explainable Predictive Maintenance) project, financed by the National Science Center, Poland, under the CHIST-ERA programme, Grant Agreement No. 857925 (NCN UMO-2020/02/Y/ST6/00070).

Introduction

The final text of the Artificial Intelligence Act (AIA, 2024; here also referred to as the “Act”) was approved by the Council of the European Union on 21 May 2024. The Act will be generally applicable from 2 August 2026. The main goals of the recently adopted regulation are: “to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter of Fundamental Rights, including democracy, the rule of law and environmental protection, against the harmful effects of artificial intelligence systems (AI systems) in the Union, and to support innovation.” Generally speaking, the Act should be a vehicle for the development of the economy and innovation in the EU, while protecting values that, in the absence of regulation, could be demoted by the functioning of AI systems, including the values embodied in fundamental rights. Can the AI Act fulfil its promise?

While the AI Act sets ambitious goals, we must acknowledge the significant risk that these goals may not be met, at least not to a satisfactory level. When analysed from a high-level perspective, the normative means used in the Act do not seem well balanced enough to provide a valuable framework for achieving the assumed goals, owing to its rule-focused regulatory approach. The main conclusion from this observation is that extensive effort will be needed to comply with the Act, while fundamental rights may remain inadequately protected or innovation insufficiently promoted. This concern needs to be addressed, especially considering the potential impact of AI on the economy and society.

Rule-based regulations and principle-based regulations

Any legal regulation should be well balanced. This means that it should use appropriate normative means to attain its assumed goals; generally speaking, it should attempt to strike an optimal balance between its underlying values and the often-competing interests of different groups of stakeholders. We follow a rudimentary typology of these normative means, encompassing rules and principles. Rules generally either apply or do not apply, depending on whether their conditions are met; consequently, they can be complied with or breached, which makes them extremely useful in compliance systems. Typically, legal rules are formulated in relatively rigid terms. By contrast, principles have a more abstract character, often expressing in highly general terms a value or goal that should be promoted. Principles apply through the operation of balancing, in which the importance of realising certain principles is weighed against the extent to which others are demoted. Collisions between principles are typically resolved in concrete situations (classical elaborations are Dworkin, 1978, and Alexy, 2002).

Due to these characteristics, rule-based regulations are typically regarded as more precise and predictable. However, this comes at a price: rule-based regulations may be less adaptable to changing circumstances. Because of their rigidity, if inadequately formulated they lead to suboptimal results. Principle-based regulations offer an opportunity to evaluate the regulated states of affairs through a direct balancing of goals and values. However, due to their typically abstract formulation, their application may be subject to dispute and vary from one context to another. Effective implementation of a regulation based on principles typically involves designing specific procedures and bodies competent to concretise the principles authoritatively (such as the Court of Justice of the European Union). Ultimately, an effective regulation may combine principles and rules with an appropriate procedural framework enabling implementation and enforcement.

The normative structure of the AIA

As is well known, the AIA is principally a rule-based regulation founded on a four-level classification of the risks associated with the functioning of AI systems. Some types of risk that infringe on the core of European values are considered unacceptable and are covered by prohibitions (Art. 5). The most extensive set of rules concerns the requirements related to high-risk systems (Art. 8–27). However, it is also presumed that harmonised standards (Art. 40) or common specifications (Art. 41) will be enacted, and compliance with these standards or specifications will create a presumption of conformity with the requirements applicable to high-risk systems. A similar set of requirements applies to general-purpose AI models, which have been defined as a separate category. The remaining categories of AI systems, classified as creating moderate or minor risks, are subject to significantly less onerous regulation.

We focus here on the core of the AIA regulation: the requirements applicable to high-risk AI systems. They are imposed through rules, which typically specify what should be established (implemented, documented) and its elements or modalities. For example, Art. 9 imposes an obligation to establish a risk management system and prescribes its obligatory parts (steps). Significantly, the rules concerning the elements of the risk management system use open terms requiring contextual concretisation, such as “reasonably foreseeable risk”. Importantly, a lack of compliance with the requirements may lead to severe liability, in particular the imposition of significant administrative fines (Art. 99).

Imbalanced regulation

The Act is broadly formulated and subject to further clarification through lower-level regulations. While it is possible that these lower-level rules could achieve a proper balance of the values underlying the AIA, we believe this is unlikely, especially with respect to three key issues.

First, while the Act includes several requirements intended to safeguard fundamental rights, such as the ex-ante fundamental rights impact assessment for high-risk systems (Art. 27), these measures are insufficient. Fundamental rights norms function as principles that must be balanced in specific contexts, a challenge exacerbated by the rapidly evolving AI field and the lack of extensive case law. The guidelines issued under Art. 96, while helpful, lack the precedential authority of rulings of the Court of Justice of the European Union (CJEU) or the European Court of Human Rights (ECtHR) and cannot account for the diverse use cases of AI applications. Comparing the AIA’s provisions to a scenario without them is misleading; the focus should be on better potential solutions.

Second, the AIA’s governance structures may impose disproportionately heavy burdens on obligated parties (Novelli et al., 2024). The Act assigns various powers to bodies such as the Commission, the AI Office, and notifying authorities, which can escalate demands. For example, a Conformity Assessment Body (CAB) might risk losing its status if it applies lenient standards, pushing parties to overcomply in order to avoid criticism.

Third, AI providers may attempt to avoid classifying their systems as high-risk or to invoke the exemptions under Art. 6(3), which uses vague terms such as “narrow procedural task”. This could lead providers to invest more effort in qualifying for exemptions than in meeting the requirements for high-risk systems, undermining the protection of fundamental rights. Conversely, overly strict interpretations of Art. 6(3) could push businesses to relocate to jurisdictions with fewer restrictions.

Concluding remarks

While the issues identified in the AIA may not be fully eliminated in its current form, several measures can be implemented to mitigate their negative effects. This commentary provides only a general outline of these ideas. First, AI regulatory sandboxes (Art. 57 ff.) could be expanded to assess impacts on fundamental rights through hypothetical scenarios in controlled environments. These scenarios would simulate the behaviour of affected individuals and authorities, offering a proxy for a case-by-case analysis of potential rights infringements and predicting the reactions of the empowered bodies. Second, implementing acts and guidelines should encourage alternative dispute resolution methods, such as negotiation and mediation, to reduce the overly stringent demands on high-risk systems. Such methods should replace formal administrative proceedings where possible. Third, we propose the creation of expert court panels and fast-track proceedings in Member States to address infringements related to high-risk systems swiftly. These panels could employ AI-based tools to support their work, provided these tools comply with AIA standards (note that if their compliance with the AIA is in turn supported by AI, we obtain an interesting iterative structure). In conclusion, while not all regulation stifles innovation (Bradford, 2024), the regulatory framework of the AIA is likely to yield sub-optimal outcomes unless measures like the ones outlined here are adopted.

References

Alexy, R. (2002). A theory of constitutional rights. Oxford University Press.

Bradford, A. (2024). The false choice between digital regulation and innovation. Northwestern University Law Review, 119(2), 377–452.

Dworkin, R. (1978). Taking rights seriously. Harvard University Press.

Novelli, C., Hacker, P., Morley, J., Trondal, J., & Floridi, L. (2024). A robust governance for the AI Act: AI office, AI board, scientific panel, and national authorities. European Journal of Risk Regulation, 1–25. https://doi.org/10.1017/err.2024.57

Regulation (EU) 2024/1689. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). http://data.europa.eu/eli/reg/2024/1689/oj