Identifying harm in manipulative artificial intelligence practices

Suzanne Vergnolle, Swiss Institute of Comparative Law, Lausanne, Switzerland

PUBLISHED ON: 30 Nov 2021

This op-ed is part of a series of opinion pieces edited by Amélie Heldt in the context of a workshop on the Digital Services Act Package hosted by the Weizenbaum Institute for the Networked Society on 15 and 16 November 2021 in Berlin. This workshop brought together legal scholars and social scientists to get a better understanding of the DSA Package, in detail and on a meta level.

Often presented as one of the most promising technologies of the decade, artificial intelligence gives us hope for economic, societal, and ecological progress. Alongside this hope, artificial intelligence already generates various adverse social consequences. This is why the European Commission has presented legislation to lay down harmonised rules for the development, placement on the market and use of AI systems (the Artificial Intelligence Act or AI Act)1. The draft intends to reconcile the double objective of “promoting the uptake of AI and of addressing the risks associated with certain uses of such technology.”2 In other words, it aims at fostering innovation while regulating harmful artificial intelligence practices.

The drafted rules follow a risk-based approach, assigning AI systems to one of four distinct levels of risk: minimal, limited, high, and unacceptable. In the proposal, most practices posing an unacceptable risk attract outright prohibitions, while high-risk AI systems must comply with specific requirements.

The AI Act refers multiple times to the notion of harm, which sometimes triggers the qualification of unacceptable risk, resulting in the prohibition of the AI system. For instance, Article 5(1)(a) and (b) prohibit manipulative systems that distort a person’s behaviour “in a manner that causes or is likely to cause that person or another person physical or psychological harm”3. Notwithstanding the importance of this criterion, the draft regulation does not provide a definition of harm, nor does previous European legislation or case law.

No general definition of harm in European legislation or case law

At first glance, it is striking that previous European legislation does not offer a general definition of harm. For instance, Directive 2014/104, which governs actions for damages under national law for infringements of competition law, refers to the notion but does not provide a definition. European case law does not help in finding a generic definition of harm either.

On second thought, this lack of definition is understandable, since EU law relies on the “systems of Member States for enforcement, remedies and procedural rules” and harm is closely related to enforcement. The consequence is that the European notion of harm depends mainly on the definitions provided by EU member states. This may create difficulties for providers of AI systems, who will need to assess whether the harm requirement is met in every country in which they plan to operate. A discussion at the European level could be opened to identify common criteria, which would hopefully lead to a better delineation of the types of harm resulting from manipulative artificial intelligence practices.

Only individual harm can trigger the prohibition

Another important element is the fact that Article 5(1)(a) and (b) of the proposal only refer to individual harm. The letter of the proposal implicitly excludes any type of collective harm, since the article specifically refers to harm caused to “that person or another person”. The consequence is that only individual harm can trigger the prohibition of manipulative artificial intelligence practices, provided the other conditions are met. This is a critical restriction, since artificial intelligence practices can also affect entire communities or groups of individuals. For instance, AI systems used to evaluate potential tenants have perpetuated housing discrimination against marginalised communities. Here, the harm is not only individual but also collective, unintentionally affecting minorities that are often under-represented in the training data set. I join other commentators in criticising this requirement as entailing problematic loopholes and lacking practical effect. Recognising collective harm as a trigger of the prohibition would be a good way to highlight the potential risks of AI systems for minorities.

Only physical or psychological harm can trigger the prohibition

By referring to “physical or psychological harm”, Article 5(1)(a) and (b) adopt a narrow approach to the types of harm triggering the prohibition. This narrow application is confirmed by the illustrations provided by the Commission, which read more like science fiction than real-life applications. Such a restriction is not found in other European legislation, which often refers to harm in its most generic sense, without specifying which types of harm are covered. It is intriguing that the Commission decided to limit the prohibition of manipulative systems to these two specific harms, which do not encompass financial harm. This narrow scope is particularly surprising given how difficult it is to identify harms in digital practices.

The difficulty of identifying harms in digital practices

Multiple jurisdictions have elaborated typologies of harm, particularly in tort law4. With the development of technology, some authors have extended these typologies to harms relating to privacy and data protection infringements5. Most authors agree “that delineating remediable harms has been a challenge for law and policy makers since the early days of the Internet”. Manipulative artificial intelligence practices will be no exception. Identifying harm might even be more challenging here, since the prohibition only applies after multiple other conditions are met.

To conclude, the harm requirement will be difficult to establish when analysing manipulative artificial intelligence practices. This might lead to a narrow application of the prohibition, resulting in a low level of protection for individuals, even though such protection was one of the main objectives of the draft regulation. Therefore, the AI Act could be amended to provide a definition of harm. Such a definition could help identify the prohibited manipulative practices, but it will probably be difficult to delineate because of the disparities between member states and the difficulty of identifying harms in digital practices. In any case, if this requirement is to be kept, it will be important to consider the collective nature of harms resulting from artificial intelligence practices and to expand the criteria to other types of harm (including pecuniary harm).

Footnotes

1. According to Article 3 (1) of the proposal, AI systems are software developed with specific techniques that can generate outputs such as content, predictions, recommendations, or decisions.

2. European Commission, Explanatory Memorandum, 2021/0106, p. 1.

3. Article 5(1)(a) and (b) prohibit the following artificial intelligence practices: “(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;” and “(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.”

4. See for instance, Dintilhac, Rapport du groupe de travail chargé d'élaborer une nomenclature des préjudices corporels, July 2005. More generally, on the typologies of damages, see O. Gout, « Quelle méthodologie pour l’indemnisation des préjudices moraux : globalisation ou recours à une nomenclature? », Des spécificités de l'indemnisation du dommage corporel, 2017, Larcier, p. 251.

5. See for the United States: D. Citron and D. Solove, “Privacy Harms,” 102 B.U. L. Rev. _ (2022); or for France: S. Vergnolle, L’effectivité de la protection des personnes par le droit des données à caractère personnel, PhD thesis Paris II, p. 426 s.
