Civil legal personality of artificial intelligence. Future or utopia?

Karolina Ziemianin, Faculty of Law and Administration, University of Szczecin, Poland

PUBLISHED ON: 07 Apr 2021 DOI: 10.14763/2021.2.1544

Abstract

The technology associated with artificial intelligence is developing rapidly. As a consequence, artificial intelligence is being applied in many spheres of life and increasingly affects the functioning of society. Actions of artificial intelligence may cause harm (e.g. in the case of autonomous vehicles that cause traffic accidents). Rules of civil law, especially those relating to liability for damage resulting from somebody’s fault or risk, came into being before the invention of artificial intelligence and mostly before the latter’s significant recent development. They include the Polish Civil Code, which addresses the issues associated with liability; the code was adopted in 1964 and is still in force today, although with certain amendments. No provisions that directly refer to artificial intelligence and the legal consequences of its actions have been introduced into Polish civil law. The same applies to European law. Therefore, the issue of whether existing regulations may be applied in the case of artificial intelligence or, perhaps, whether they should be appropriately adjusted, needs to be analysed. The starting point for this analysis is the possibility of conferring upon artificial intelligence the status of an entity under the law, allowing it to independently bear liability for the damage it causes. This issue needs to be examined in the context of technology used today (e.g. autonomous vehicles), and also in the future. The analysis performed herein specifies who would bear liability for the actions of artificial intelligence. The deliberations in this area are based on the achievements of Polish and European legal science. Therefore, the conclusions formulated in the article regarding legislative changes apply to all national legal orders informed by European civil law principles.
Citation & publishing information
Received: August 14, 2020 Reviewed: November 6, 2020 Published: April 7, 2021
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The author has declared that no competing interests exist that have influenced the text.
Keywords: Artificial intelligence, Tort liability, Autonomous car, European civil law
Citation: Ziemianin, K. (2021). Civil legal personality of artificial intelligence. Future or utopia? Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1544

1. Preliminary remarks

Technology associated with artificial intelligence is developing rapidly. As a consequence, artificial intelligence is being applied in many spheres of life and increasingly affects the functioning of society. The actions of artificial intelligence may cause damage (e.g. autonomous vehicles that cause traffic accidents). The occurrence of such damage in practice has prompted experts to consider whether artificial intelligence should be qualified as a legal entity bearing liability, a legal entity being understood as a subject of rights and obligations under the law.

Rules of civil law, especially those relating to liability for damage that results from somebody’s fault or risk (strict liability), had been formed before artificial intelligence appeared and mostly before its significant recent development. They are included in the Polish Civil Code (hereinafter: PCC), which addresses issues associated with liability and was adopted in 1964; it is still in force today, though with certain amendments. Therefore, no provisions that would directly refer to artificial intelligence and the legal consequences of its actions have been introduced into Polish civil law. This also applies to European law. Within the European Union, some initiatives have been taken to consider the possibility of applying the existing regulations of member states to artificial intelligence and to formulate conclusions regarding the need for legislative changes.

This paper presents an analysis of existing regulations in the context of their potential application to artificial intelligence and formulates conclusions as to the need to adapt these regulations to the essence of artificial intelligence. An examination of the possibility of attributing the status of an entity before the law to artificial intelligence is adopted as the starting point for this analysis, because only if artificial intelligence is considered a legal entity can it bear independent liability for the damage it causes. Considering artificial intelligence as a legal entity requires an examination in the context of the technology used today (e.g. autonomous vehicles) and of future technology (e.g. fully independent robots that can deal with every aspect of life).

The analysis leads to a determination of whether artificial intelligence may (now or in the future) bear liability for the damage it causes. A negative answer to this question means that we need to establish which other person does or will bear liability for the actions of artificial intelligence. That person’s liability for damage also has the nature of tort liability; however, it results not from this person’s own actions, but from the risk of bearing liability for another person or thing (e.g. an animal, or, in this case, artificial intelligence). A relevant solution must primarily take into account the compensatory function of tort liability, i.e. the possibility of making good the harm caused to the aggrieved party.

This paper addresses the issue of tort liability. This liability is the second regime of liability for damage, next to contractual liability. Tort liability results from a prohibited act, attributable mainly to the individual’s own fault or based on risk. In this case, the event causing the damage is the prohibited act (Art. 415–449 of PCC). The underlying event for contractual liability is the non-performance or improper performance of a contract executed between parties, which causes damage. Contractual liability remains beyond the scope of this study.

With reference to artificial intelligence, the tort regime is of primary importance, because any person, not only an individual bound to artificial intelligence by a contract, may suffer an injury as a result of a tort of artificial intelligence. Incidents that involve tort-related damage caused by artificial intelligence already occur in practice, e.g. traffic accidents caused by autonomous vehicles.

The reflections included in this paper refer to the issue of liability for actions of artificial intelligence against the background of the achievements of Polish and European civil law scholarship, especially principles of the European tort law. The conclusions formulated in the paper relating to legislative changes are thus applicable to all national legal orders informed by European principles of civil law (European Group on Tort Law, 2005). The starting point of these reflections is to establish the possibility of attributing the status of a legal entity to artificial intelligence under the provisions of Polish civil law while considering ethical issues and the degree to which artificial intelligence would be subordinate to humans, and also the types of risks associated with its actions (Bryson et al., 2017, p. 273ff; Teubner, 2018, p. 106ff).

Such an approach to the issues of artificial intelligence leads to the formulation of de lege ferenda conclusions. They refer both to Polish law and other national legal orders that are affected by the European principles of civil law (European Group on Tort Law, 2005).

The research on which this paper is based was conducted by means of various methods, in particular the interpretation of applicable laws, the analytical method and, in an auxiliary role, the comparative method. The interpretation and analysis concern national and European legislation in force. The comparative method was used to analyse Polish law vis-à-vis foreign law.

2. Artificial intelligence

Artificial intelligence is defined inconsistently. Sometimes it is understood broadly, as a field of science related primarily to computer science and robotics. In a narrower sense, artificial intelligence is the ability of an IT system to correctly interpret external data, to learn from it, and to use the experience gained in this way to accomplish specific tasks. This ability includes the capacity to flexibly adapt to external conditions (Wang, 2008, p. 362; Kaplan & Haenlein, 2019, pp. 15–25; Kok et al., 2002, p. 1095ff).

An analysis of legal solutions relating to the consequences of actions of artificial intelligence, and to the possibility of attributing liability for damage to it, requires adopting a working definition: for the needs of this study, artificial intelligence means the ability of an IT system to interpret data correctly, to learn from this data and to use the experience acquired in this manner to carry out specific tasks. From the perspective of the analysis performed in this article, it is irrelevant whether this IT system is merely analytical, human-inspired or humanoid, and so are the device or object in which it is placed and the purpose of that placement. Therefore, issues concerning liability for damage caused by artificial intelligence refer equally to artificial intelligence located in computers, cars or robots.
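To make this working definition more tangible for non-technical readers, the toy sketch below (the author’s own illustration; the class, labels and data are invented for this purpose and drawn from no cited source) shows a minimal ‘IT system’ that interprets external data, learns from it and uses the acquired experience to accomplish a task:

```python
# Illustrative only: a toy "system" exhibiting the three elements of the
# working definition -- interpreting external data, learning from it, and
# reusing the acquired experience. Real AI systems are far more complex.

class ToyLearner:
    """Classifies sensor readings as 'safe' or 'unsafe' based on experience."""

    def __init__(self):
        self.safe_readings = []    # experience gathered from labelled data
        self.unsafe_readings = []

    def learn(self, reading, was_unsafe):
        # Interpret an external observation and store it as experience.
        (self.unsafe_readings if was_unsafe else self.safe_readings).append(reading)

    def decide(self, reading):
        # Use accumulated experience: split the scale halfway between the
        # highest reading known to be safe and the lowest known to be unsafe.
        if not self.safe_readings or not self.unsafe_readings:
            return "unknown"  # no experience yet
        midpoint = (max(self.safe_readings) + min(self.unsafe_readings)) / 2
        return "unsafe" if reading >= midpoint else "safe"

learner = ToyLearner()
for reading, unsafe in [(0.9, True), (0.2, False), (0.7, True)]:
    learner.learn(reading, unsafe)
print(learner.decide(0.8))  # -> "unsafe"; depends on past data, not only code
```

The point of the sketch is only that the system’s decision depends on the data it was exposed to, not solely on its original code, which is what distinguishes such systems from conventionally programmed ones.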

Artificial intelligence is no longer just a vision of the future: we are surrounded by it. The most common devices equipped with artificial intelligence include mobile phones, computers and cars, which can even drive autonomously. Artificial intelligence is used extensively to create so-called bots, i.e. programmes whose purpose is to replace people (Grimme et al., 2017, p. 279; Klopfenstein et al., 2017, pp. 555-565). Bots perform their tasks primarily in the services market, in the absence of human beings. One of the bots tested by producers of IT tools for the needs of internet communication was a Microsoft product called ‘Tay’, which interacted with the public via Twitter. The bot created its entries based on interactions with the users of this portal. However, within a few hours of operation it began to publish offensive entries, so the project was closed (Neff & Nagy, 2016, p. 4915).

Artificial intelligence is also placed in robots that have a physical or even humanoid shape. Sophia, created by scientists at Hanson Robotics in Hong Kong, is a human-like robot. She is endowed with artificial intelligence, thanks to which she is able to learn and adapt to human behaviour. She has given many interviews around the world and has also obtained citizenship of Saudi Arabia (Retto, 2017, p. 3).

Natural language processing models capable of writing texts the way a human would (such as GPT-3) are also significant from the point of view of artificial intelligence. Such an algorithm was created by the research laboratory OpenAI. The text samples it formulates are difficult to distinguish from texts prepared by humans (Zagórna, 2020).

The listed examples of the application of artificial intelligence are evidence of its rapid development. Therefore, a question arises about how artificial intelligence will function in the future. This concerns primarily the legal aspects of its actions, ethical issues, the degree to which artificial intelligence is subordinate to humans and the threats associated with its functioning.

3. Civil law entities

A legal entity should be understood as an entity participating in legal relations that has rights and obligations under the given legal system with respect to other entities and tangible or intangible objects (Wolter et al., 2001, p. 157). Article 1 of the Polish Civil Code provides that this code regulates the civil law relations between natural and legal persons. Every human being is a natural person. Such a person's legal position is determined by the following attributes: legal capacity, capacity to perform acts in law, surname and first name, place of residence, marital status, personal status and personal rights (Art. 8–32 of PCC). Legal persons include the State Treasury and organisational units that are granted legal personality by provisions of law. The law does not indicate any general characteristics of legal persons on the basis of which an organisational unit could be classified in this category. Legal personality is acquired from the moment of entry into the relevant register (Art. 37 of PCC).

Civil law entities also include organisational units with legal capacity, despite the fact that the Act does not grant them legal personality (Art. 33¹ of PCC). These entities are not listed in Art. 1 of PCC. However, under Art. 33¹ of PCC there is no doubt that they have legal capacity. Because of this inaccuracy, the draft of the new Civil Code recognises an organisational unit with legal capacity as a legal entity (Art. 31 of the draft) (Machnikowski, 2017, p. 47). Under the current law, legal persons (and organisational units with legal capacity) have the following attributes: legal capacity, capacity to perform acts in law, name, seat and personal rights (Art. 33–43 of PCC).

These regulations show that the category of legal entities (i.e. entities that have rights and obligations under the law) is broad and includes two types of entities—natural persons and legal persons. A natural person is a legal entity, but he or she is not a legal person. Similarly, a legal person is a legal entity, but is not a natural person.

The most important attributes of natural persons, legal persons and organisational units to which, pursuant to Art. 33¹ of PCC, the provisions on legal persons apply, are legal capacity and capacity to perform acts in law. Legal capacity is the possibility of being a subject of rights and obligations in the field of civil law, while the capacity to perform acts in law is the possibility of acquiring rights and incurring obligations in the field of civil law through one’s own actions (Ziemianin & Kuniewicz, 2007, p. 75). The procedural consequence of having legal capacity and capacity to perform acts in law is that the Code of Civil Procedure grants natural persons, legal persons and organisational units with legal capacity the capacity to be a party in court proceedings and the capacity to perform actions in court proceedings (Art. 64 and 65 of the Polish Code of Civil Procedure). Therefore, these entities may be parties to civil proceedings and carry out procedural acts.

An analysis of the provisions of the Civil Code leads to the conclusion that legal personality is equivalent to having legal capacity. Therefore, deciding whether artificial intelligence can be a legal entity requires a reference to its ability to obtain legal personality.

4. Artificial intelligence as a legal entity

I. Natural person

Civil law scholars and commentators have often assumed that people are legal entities by their very nature (Pilich, 2018, art. 8; Targosz, 2004, p. 1). In comparison with other living organisms, they are distinguished by biological properties, social skills and individual character. Therefore, a person is someone, not something. Thus, it is assumed that human legal capacity is an inherent feature, as is dignity. However, this view needs to be supplemented: history shows that certain people were denied legal capacity, e.g. slaves under Roman law (see Shumway, 1901) 1. Despite this, each person has legal capacity because they are human. This capacity, however, also derives from the law, because it has been confirmed by legal regulations. These provisions confer legal capacity upon newborn babies the moment they enter the world. For example, Article 8 of the Polish Civil Code stipulates that every human being has legal capacity from the moment of birth. Similarly, § 1 of the German Civil Code (BGB) provides that the legal capacity of a human being begins at birth.

Therefore, one may conclude that the provisions of the law reflect the basic ethical and moral principles of society. The legal order is built on the premise of a certain system of values. For modern democratic countries, this system should be based on a culturally neutral understanding of humanity (Chauvin, 2020). Human dignity results from this humanity, and in the sphere of civil law, legal capacity, understood as the possibility of acquiring rights and obligations, results from such dignity. The legal capacity of a human, and in consequence his or her status as a legal entity, is conferred by the law. Therefore, it derives from legal regulations, although it confirms the ideologically neutral human dignity.

Age or incapacitation does not affect a human’s legal capacity. A small child, for example, is an entity before the law. Naturally, it will not be able to conclude a contract independently, but this does not affect its legal capacity. The possibility of executing a contract independently results from the attribute of the capacity to perform acts in law. This attribute is secondary in relation to the status of a legal entity itself.

It is different in the case of other entities upon whom provisions of civil law currently confer the status of a legal entity, e.g. commercial companies. Their status as legal entities results from provisions of the law, and therefore has a solely norm-based nature. Such status will be discussed later.

Artificial intelligence is an element of an IT system that is created by humans to perform specific tasks. Therefore, it cannot be said that it has inherent biological properties or social skills (as is the case with the legal personhood of natural persons). Even if such features can be attributed to it, they are programmed by its creator. Of course, artificial intelligence may then be subject to certain social processes, but this occurs as a consequence of human activities. By the very nature of artificial intelligence, it is not possible to speak of its birth. Currently, it is also difficult to imagine robots creating social structures; any behaviours aimed at this goal can, it seems, only be a consequence of human programming. Artificial intelligence can perfectly imitate human beings. However, it cannot be assumed that it is human. The legal capacity of artificial intelligence is not natural. Hence it can only be normative, i.e. deriving from and established by the provisions of the law. Therefore, artificial intelligence is certainly not a natural person 2.

II. Legal person

Since any legal capacity of artificial intelligence could only result from provisions of the law, one should consider whether artificial intelligence can be fitted to the requirements applicable to legal persons.

In connection with the development of the concept of legal persons over the years, legal scholars and commentators have formulated theories regarding the essence of a legal person. One concept holds that a legal person, like a natural person, is a real entity; the opposite concept holds that a legal person is only a legal construct (Wolter et al., 2001, p. 202; Radwański & Olejniczak, 2013, pp. 180-182). Leaving the analysis of these theories outside the scope of this article, it should be emphasised that a legal person is a certain organisation whose activities depend, directly or indirectly, on the intent of a natural person. From a legal point of view, the action of a natural person is the action of a legal person only if the natural person acts in the manner provided for in the Act and the statute based on it, as the body of a legal person (Art. 38 of PCC). Under civil law, it is possible for a legal person to be liable for damages; for example, a commercial company which deals with renovations causes damage by improper performance of a renovation. Nevertheless, this liability results from statutory regulations, and in some cases the responsibility lies with the natural persons acting for the legal person: it is the people sitting on its bodies that act on behalf of the legal person. For example, members of the management board of a limited liability company are liable for the obligations of this company if enforcement against the company’s assets is ineffective, i.e. the company’s assets are not sufficient to cover the debt.

The concept of granting legal personality to artificial intelligence is widely discussed in legal and philosophical literature. Within the European Union, initiatives are being taken to consider the possibility of applying the current legal regulations of the member states to artificial intelligence and to formulate conclusions as to the need for legislative changes (inter alia European Commission, 2019). These initiatives express the view that granting legal personality to artificial intelligence is unnecessary, since the responsibility for its actions should be borne by existing persons (European Commission, 2019, p. 4). The Polish position on the legal personality of artificial intelligence, expressed in the key points of the strategy for artificial intelligence in Poland (Ministerstwo Cyfryzacji, 2018), is sceptical. According to these assumptions, granting legal personality to artificial intelligence does not seem beneficial due to the lack of a concept regarding the principles of liability. We simply do not know how artificial intelligence, as an independent legal entity, would bear liability: are people supposed to bear this liability? Is it supposed to have its own funds to pay compensation? Would it be necessary to maintain a register of such artificial intelligence? Therefore, according to the authors of these assumptions, the legal personality of artificial intelligence should be opposed, and liability for AI actions should be attributed to its creators, operators or possible end users 3.

This state of affairs is primarily the result of the impossibility of predicting how artificial intelligence will function in the future. However, it seems that the speed of development of artificial intelligence technology necessitates a careful analysis of the possibility of granting legal capacity to artificial intelligence. Such an attempt can be made with respect to those types of artificial intelligence that currently function or appear able to function in the near future. This is because it may turn out that, in the future, artificial intelligence will be completely independent of humans, and thus no one alive today will be naturally liable for its actions.

Common features of artificial intelligence and a legal person include the fact that legal capacity can only be granted to them by law; in contrast with natural persons, they do not obtain it in a natural way. However, there are clear differences between artificial intelligence and a legal person. Artificial intelligence cannot be considered an organisational unit whose acts can only be performed through its bodies. The issue of artificial intelligence boils down to determining the relationship between the AI acting alone, without the help of a natural person, and the human being: the creator or owner of the robot. Therefore, it should be recognised that the concept of legal personality cannot be directly applied to artificial intelligence, because AI is not an organisational unit acting through organs stipulated by statute.

III. Electronic person

A view was formulated in a study commissioned by the European Parliament according to which artificial intelligence may be another, new legal entity: an electronic person (Nevejans, 2016, p. 14). Electronic personality would to some extent resemble legal personality, in particular in that legal capacity would derive from provisions of the law. The actions of such a person would require the introduction of appropriate, detailed legal regulations. Given this nature of the personality, an electronic person could acquire legal capacity upon its entry in the appropriate register. The actions of an electronic person, although undertaken independently, would, to a certain extent, burden natural persons, e.g. the persons listed in this register (as is the case with the liability of members of a company’s management board in relation to the activity of the company). These people would primarily include programmers, creators and owners. The scope of their liability for an electronic person would be determined on the basis of legal regulations, but also of the manual (rules) for operating the robot. With the development of artificial intelligence, it cannot be excluded that the concept of ownership would change. Representation similar to the statutory representation of minors or commercial law companies could then apply to electronic persons.

With regard to the ethical aspects of its functioning in society, it should be assumed that the personality of artificial intelligence is not justified by its very essence. The basis for granting legal personality to an electronic person in the future may be an advanced level of technological development of artificial intelligence that makes it impossible to predict the way it works. Artificial intelligence whose actions are predictable can be framed in detailed provisions on liability (e.g. for a product or an animal). The more autonomously artificial intelligence acts, the broader the concept of liability that should be applied to it. Such a concept could indeed result from legal personality.

The concept of the electronic person has, however, been sceptically received by European experts (Open letter, 2018). Therefore, it seems that granting artificial intelligence the status of a legal entity gains importance only if doing so makes it easier to assign liability for the actions of artificial intelligence. The point is to avoid a situation in which, in the future, no one will be responsible for its actions. Such a status should, however, be modelled on the status of legal persons, for whose actions humans are liable, not on that of natural persons, who bear liability themselves. As has been mentioned, such status seems admissible only where it would make it possible to specify the principles of liability for the actions of artificial intelligence.

As a side note, the future actions of artificial intelligence, and in consequence the possibility of bearing liability for them, depend on how artificial intelligence will function tomorrow. Therefore, it is important to emphasise the ethics of creating artificial intelligence.

5. Tort liability

Under the Polish Civil Code, tort liability covers liability for culpable human behaviour (tort in the strict sense) and for other types of tort, e.g. caused by things or animals (Czachórski, 1994, p. 144). This means that the legislator links the obligation to repair the damage with a person’s actions or omissions, or with another phenomenon, if it was the reason for the damage (Śmieja, 2009, pp. 338-340). On this basis, tort liability covers liability for one’s own deeds (Art. 415 of PCC) and liability for other people’s deeds, including liability for negligent supervision (Art. 427 of PCC), liability for fault in choosing the performer of the task (Art. 429 of PCC), liability for the subordinate (Art. 430 of PCC), as well as liability for damage caused by animals (Art. 431 of PCC) and liability for damage caused by a dangerous product (Art. 449¹ et seq. of PCC 4). These types of tort liability are based on the principles of fault, risk and equity.

Civil law scholars and commentators distinguish intentional fault and unintentional fault (Ohanowicz & Górski, 1970, p. 126). Intentional fault can be attributed to the perpetrator when they acted with the intention of causing unlawful effects (dolus directus), or when they acted without such an intention, but were aware that unlawful effects could arise and agreed to their creation (dolus eventualis). Unintentional fault can be attributed to the perpetrator when they act carelessly, that is, they do not exercise due diligence, which causes unlawful effects (Longchamps de Berier, 1939, p. 232). The perpetrator bears responsibility on a fault basis, including for their own deeds: for example, a perpetrator who throws a stone and breaks a neighbour’s window.

The principle of risk shapes tort liability (strict liability) when a debtor bears liability for accidental damage, i.e. damage caused not through their own fault (Nowakowski, 1979, p. 108). Liability for the subordinate, as long as the damage was the subordinate’s fault, and liability for a dangerous product are examples of liability based on the principle of risk. The principle of equity is associated with bearing liability due to strong ethical motives set out by the principles of community life, for example in the case of liability for animals (Szpunar, 1985, p. 43). The principles of community life are rules of fair, reliable and loyal conduct, principles of equity and ethics.

Depending on the type of tort liability, different conditions must be met. Under the Polish Civil Code, the conditions for liability for one’s own deeds include damage (i.e. damage to goods or interests protected by law, arising against the will of the injured party (Radwański, 1997, p. 83)), an act violating the law or principles of community life, and a causal relationship between the damage and this act. The conditions for liability for other people’s deeds include damage and causation, while the remaining conditions depend on the type of tort liability.

In the case of liability for negligent supervision, the premises include a violation of the law or the rules of community life by the person who caused the damage, lack of supervision or improper supervision over that person, a causal relationship between the damage and the lack of or improper supervision, and the fault of the person obliged to supervise. The basis for liability for fault in choosing the performer of the task is, apart from the damage, entrusting activities to another person, violation of the law or rules of community life by the performer of the task, a causal relationship between the damage and the unlawful action of the person who performed the entrusted activity, an incorrect choice of the performer of the task and fault in choosing the performer. Liability for the subordinate is based on entrusting the subordinate with the performance of activities on behalf of the supervisor, the subordinate causing damage to a third party, violation of the law or rules of community life by the subordinate, a causal relationship between the damage and that violation, and the fault of the subordinate. Liability for animals is associated with the occurrence of damage as a result of the animal’s behaviour, when there is a causal relationship between the damage and this behaviour. Conversely, the conditions for liability for a dangerous product include damage, placing the product on the market and causation (see Ziemianin & Kitłowski, 2013, p. 195ff).
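Because the paragraph above packs many cumulative conditions into prose, the sketch below offers a hedged formalisation: each ground of liability is reduced to a set of condition labels, and a ground applies only when every label is present in the facts. The labels are the author’s shorthand, not statutory wording:

```python
# A hedged, illustrative formalisation of the conditions listed above for
# the main grounds of tort liability under the Polish Civil Code.
# Condition labels are the author's shorthand, not statutory language.

LIABILITY_CONDITIONS = {
    "own deeds (Art. 415)": {
        "damage", "unlawful act", "causal link", "fault of perpetrator",
    },
    "negligent supervision (Art. 427)": {
        "damage", "unlawful act of supervised person", "defective supervision",
        "causal link", "fault of supervisor",
    },
    "choice of performer (Art. 429)": {
        "damage", "task entrusted", "unlawful act of performer",
        "causal link", "incorrect choice", "fault in choice",
    },
    "subordinate (Art. 430)": {
        "damage", "task entrusted to subordinate", "unlawful act of subordinate",
        "causal link", "fault of subordinate",
    },
    "animals (Art. 431)": {"damage", "animal behaviour", "causal link"},
    "dangerous product (Art. 449¹)": {
        "damage", "product placed on market", "causal link",
    },
}

def grounds_met(facts):
    """Return every ground whose listed conditions are all present in the facts."""
    return [ground for ground, conditions in LIABILITY_CONDITIONS.items()
            if conditions <= facts]

# Example: an animal causes damage by itself.
print(grounds_met({"damage", "animal behaviour", "causal link"}))
# -> ['animals (Art. 431)']
```

Such a mapping is, of course, a simplification: real adjudication weighs evidence and defences rather than ticking boxes, but it captures the cumulative structure of the conditions described above.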

European rules on tort liability, developed in connection with the attempt to create a European Civil Code, were shaped similarly. As part of the harmonisation of European private law, the following projects were created: the Draft Common Frame of Reference (von Bar et al., 2009) and, in relation to tort liability, the Principles of European Tort Law, hereinafter: PETL (European Group on Tort Law, 2005). European provisions of civil law stipulate an obligation to compensate for damage in three cases: damage caused by one’s own fault, damage caused by dangerous activities on the basis of risk (strict liability) and damage caused by others (liability for others) (Art. 1:101 of PETL). The scope of liability, in accordance with Art. 3:201 of these principles, depends on the following circumstances: the foreseeability of the damage to a reasonable person at the time of the activity, taking into account in particular the closeness in time or space between the damaging activity and its consequence, or the magnitude of the damage in relation to the normal consequences of such an activity; the nature and the value of the protected interest; the basis of liability; the extent of the ordinary risks of life; and the protective purpose of the rule that has been violated.

In accordance with the Principles of European Tort Law, liability on the basis of fault consists of intentional or negligent violation of the required standard of conduct (Art. 4:101 of PETL). The required standard of conduct is that of a reasonable person who takes into account the nature and value of the protected interest involved, the dangerousness of the activity, and the expertise to be expected of a person carrying it on (Art. 4:102 of PETL).

Liability on the basis of risk includes mainly abnormally dangerous activities. Article 5:101 of PETL provides that a person who carries on an abnormally dangerous activity is strictly liable for the damage characteristic to the risk presented by the activity and resulting from it. Pursuant to this provision, an activity is abnormally dangerous if it creates a foreseeable and highly significant risk of damage even when all due care is exercised in its management and this risk is not a matter of common usage.

Draft European provisions in the scope of private law also provide for liability for others. Article 6:101 of PETL provides that a person in charge of another who is a minor or subject to mental disability is liable for damage caused by the other unless the person in charge shows that they maintained the required standard of conduct in supervision. Conversely, under Art. 6:102 of PETL, a person is liable for damage caused by their auxiliaries acting within the scope of their functions, provided that they violated the required standard of conduct; this provision does not, however, apply to independent contractors.
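The three PETL bases outlined above each reduce to a small set of cumulative tests. The sketch below is the author’s paraphrase in code, not PETL’s official wording; all predicate names are invented for illustration:

```python
# Illustrative reduction of the three PETL bases of liability to boolean
# tests. Author's paraphrase; predicate names are invented for this sketch.

def fault_based(violated_required_standard, intentional_or_negligent):
    # Art. 4:101: intentional or negligent violation of the required standard.
    return violated_required_standard and intentional_or_negligent

def strictly_liable(foreseeable_significant_risk, risk_despite_due_care,
                    matter_of_common_usage):
    # Art. 5:101: the activity is abnormally dangerous only if all three hold.
    return (foreseeable_significant_risk and risk_despite_due_care
            and not matter_of_common_usage)

def liable_for_auxiliary(within_scope_of_functions, auxiliary_violated_standard,
                         independent_contractor):
    # Art. 6:102: liability for auxiliaries, excluding independent contractors.
    return (within_scope_of_functions and auxiliary_violated_standard
            and not independent_contractor)

# Whether a fully autonomous robot would pass the Art. 5:101 test is an open
# question; the call below merely shows how the test composes.
print(strictly_liable(True, True, matter_of_common_usage=False))  # True
```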

The principles mentioned above are detailed in the Draft Common Frame of Reference. It also introduces rules on liability for damage caused by others (Art. VI.–3:104, 3:201 of DCFR), by animals (Art. VI.–3:203 of DCFR) and by products (Art. VI.–3:204 of DCFR). In terms of liability rules, these provisions are consistent with the Polish provisions.

Under the above-mentioned provisions of Polish and European civil law, liability is borne by a civil law entity. Therefore, artificial intelligence can only be held liable if it is granted the status of an entity before the law. Until that happens, liability for artificial intelligence will be borne by natural or legal persons, e.g. as in the case of animals, minors or mentally disabled persons 5.

6. Tort liability for damage caused by artificial intelligence

Under the present law, and with the current level of technological development, there are no grounds for granting legal personality to artificial intelligence. The lack of legal personality results in the inability to bear responsibility for one’s own deeds. This means that if currently existing artificial intelligence causes damage, another person should be responsible for it. The concept of bearing responsibility for other people’s deeds, as already indicated above, is not foreign to civil law; it is reflected both in the Polish Civil Code and in the European rules on tort liability. However, it is necessary to analyse which rules of tort liability can be applied in the event of damage caused by artificial intelligence, and who should be liable for such damage. The starting point of this analysis is to consider whether the current provisions of the Polish Civil Code and the principles of European tort law correspond to the specifics of damage caused by artificial intelligence.

Under the Polish Civil Code, liability for other people’s deeds is related to the need for one person to redress the damage caused by another person. In consequence, a legal entity (a natural person, legal person or organisational unit with legal capacity) will be responsible for someone else’s act if it can be accused of negligent supervision over another person who cannot be held liable (Art. 427 of PCC), if it entrusts the performance of a task to another person (Art. 429 of PCC) or if it is the superior of the person to whom it entrusts the performance of the task (Art. 430 of PCC). In each of these cases, the person causing the damage is a legal entity. These provisions, in their current wording, cannot therefore be applied to artificial intelligence (see Bosek, 2019, p. 13).

Article 427 of PCC provides that anyone who, under the law or a contract, is obliged to supervise a person who cannot be held liable due to age or mental or physical condition, is obliged to redress the damage caused by that person, unless the obligation of supervision has been fulfilled or the damage would also have arisen had supervision been exercised with due care. This provision also applies to persons who, without a legal or contractual obligation, take permanent care of a person who cannot be held liable due to age, or mental or physical condition. Pursuant to this article, it is the supervisor who is liable for negligent supervision in the event of damage caused by a minor or a mentally disabled person, because these persons, in accordance with Art. 425 and 426 of PCC, cannot be held liable. Art. 427 of PCC in its current wording cannot be applied to damage caused by artificial intelligence, as it is neither mentally disabled nor a minor; it is also, as indicated above, not a legal entity. It should be noted, however, that the rule of liability established in this provision could apply to liability for artificial intelligence: minors and mentally disabled persons are individuals whose actions cannot be fully predicted, similarly to the actions of artificial intelligence with a higher degree of independence.

Articles 429 and 430 of PCC relate to the issue of liability in the event of entrusting the performance of a task to another person. The first of these articles stipulates that a person who entrusts the performance of a task to another person is responsible for damage caused by the perpetrator in the performance of the entrusted task, unless the entrusting person is not at fault in the choice or the performance of the task was entrusted to a person, enterprise or establishment which performs such acts within the scope of its professional activity. According to the second of these articles, anyone who, on their own account, entrusts the performance of a task to a person who, while performing the task, is under their supervision and obliged to follow their instructions, is liable for damage caused by that person when performing the entrusted task.

Under Art. 429 of PCC a legal entity may entrust the performance of a task to any person, but is liable for the actions of that person if it is at fault in choosing the performer of the task. This provision does not apply to artificial intelligence, because it relates only to damage caused by a legal entity. However, it seems that liability on the basis of fault in the selection of artificial intelligence whose task would be to perform a specific action could rest on the person who made this choice contrary to the manufacturer’s recommendations regarding the scope of the artificial intelligence’s skills. Acting against the creator’s recommendations would then be a basis for liability for artificial intelligence.

In contrast, Art. 430 of PCC covers situations in which damage is caused by a subordinate who follows the instructions of a supervisor during the performance of tasks. This responsibility is based on the principle of risk and thus has the nature of strict liability. Under this provision, a subordinate is a natural person; the provision therefore does not apply to artificial intelligence. However, it can be assumed that the development of autonomous devices containing artificial intelligence will require consideration of similar principles of liability, though only in situations in which artificial intelligence is so advanced that it responds to the user’s instructions.

The status of technologically advanced artificial intelligence seems similar to that of an animal. In both cases, it is a certain individual, on whose behaviour a natural person does not have full influence. The liability for an animal may be based on Art. 415 or Art. 431 of PCC. Article 415 of PCC, which refers to liability for one’s own deeds, concerns the use of an animal as a tool. The basis of liability in this situation is fault. In contrast, if an animal causes damage by itself, the liability for its behaviour results from Art. 431 of PCC.

Section 1 of this article provides that anyone who keeps or uses an animal is obliged to redress the damage caused by it, regardless of whether it was under supervision, had strayed or had escaped, unless neither the keeper nor a person for whom the keeper is responsible is at fault. However, pursuant to § 2, even if the person who keeps or uses an animal is not responsible under the preceding section, the aggrieved party may demand full or partial compensation if it follows from the circumstances, and especially from a comparison of the financial condition of the aggrieved party and that of the other person, that the rules of community life so require (Art. 431 § 2 of PCC). As a rule, responsibility for an animal is therefore based on the principle of fault, but, in an auxiliary role, also on the principle of equity.

The above provisions do not currently apply to artificial intelligence. They concern animals and cannot be interpreted in a way that would extend their scope. This is because the rules of legal interpretation adopted under Polish law do not allow such broad interpretation. Nevertheless, the widespread use of artificial intelligence in the future may require similar rules of liability.

Therefore, none of the provisions listed above can be applied to liability for damage caused by artificial intelligence. The situation is different in the case of legal regulations regarding liability for damage caused by a dangerous product (Art. 449¹ of PCC). In accordance with Art. 449¹ § 2, a product means a movable thing, even if it is attached to another thing; animals and electricity are also considered products. Under § 3 of this article, a product is dangerous if it does not guarantee the safety that could be expected based on normal use of the product. The circumstances at the time the product is placed on the market, and especially the manner in which the product is presented on the market and the information provided to the consumer regarding the product’s properties, dictate whether the product is dangerous. A product cannot be considered unsafe only because a similar, improved product is placed on the market at a later time.

These regulations will only apply if artificial intelligence is classified as a product (Barton, 2019). However, artificial intelligence understood as a computer programme or application is not a thing, because under the provisions of the Polish Civil Code a thing must be tangible and separated from nature (e.g. a literary piece is not a thing) (see Dubis, 2016, p. 920). Therefore, only a device equipped with artificial intelligence can be a thing. For damage caused by such a device, the provisions of Art. 449¹ et seq. of PCC apply; they will apply, for example, to damage caused by autonomous vehicles. The legislator should, however, consider extending the definition of a product.

Responsibility for a dangerous product lies with the manufacturer who placed the product on the market, if there is a causal relationship between the damage and the placing of the product on the market. The mere placing on the market of a product that caused damage may be a basis for being held liable. This responsibility is based on the principle of risk and is thus strict liability. The manufacturer may, however, be released from liability if it did not place the product on the market or if the product was placed on the market outside the scope of its business activity (Art. 449³ § 1 of PCC), as well as when the dangerous properties of the product came to light only after it had been placed on the market, unless they were due to an element inherent in the product. The manufacturer is also not liable if the dangerous properties of the product could not have been foreseen based on the state of science and technology at the time the product was placed on the market, or if these properties resulted from the application of legal provisions (Art. 449³ § 2 of PCC). However, in the case of products with built-in artificial intelligence, e.g. autonomous cars, the exclusion of the manufacturer’s liability requires proof that the dangerous properties could not have been foreseen at the production stage and could not have been eliminated by a different design.

Legal regulations regarding liability for damage caused by a dangerous product are currently the only ones applicable to devices equipped with artificial intelligence, although liability on this basis is subject to restrictions. For example, if an autonomous car causes an accident due to a system error resulting from, e.g., a faulty design, liability for the damage can be attributed to the manufacturer. If the accident occurred due to a change made to the product by its user (the car owner), e.g. because the owner changed the software settings himself, then the owner should be liable for the damage.
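The attribution just described can be summarised as a hedged decision sketch (the author’s illustration; the branch order mirrors the prose and is not a test any court would apply mechanically):

```python
# Illustrative attribution of liability for an accident caused by a device
# with built-in AI (e.g. an autonomous car), following the prose above.
# Author's sketch; parameter names are invented for this example.

def liable_party(placed_on_market, within_business_activity,
                 defect_foreseeable_at_production, user_modified_product):
    if user_modified_product:
        # e.g. the owner changed the software settings themselves
        return "owner/user"
    if not placed_on_market or not within_business_activity:
        return "manufacturer released (Art. 449³ § 1 of PCC)"
    if not defect_foreseeable_at_production:
        return "manufacturer released (Art. 449³ § 2 of PCC)"
    return "manufacturer (strict liability for a dangerous product)"

# System error traceable to a foreseeable design fault:
print(liable_party(True, True, True, False))  # -> manufacturer
# The owner re-flashed the driving software before the accident:
print(liable_party(True, True, True, True))   # -> owner/user
```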

As has been pointed out above, the manufacturer or owner may be liable for an accident caused by an autonomous car, but such liability must be subject to limits, just as the liability of a natural person driving a car is. In each road accident, the cause of the accident and its circumstances will have to be assessed.

Attributing responsibility for damage caused by artificial intelligence is most questionable in a situation in which it works completely independently of its creator, operator or customer (see Vladeck, 2014, p. 122ff). The controversy in this regard relates in particular to the lack of a causal relationship between the damage and the actions of these people (Barton, 2019, p. 1ff). However, it should be emphasised that the law of obligations currently provides for tort liability also in cases where there is no causal relationship between the damage and the actions of the person responsible. An example is liability for an animal. Causation in this case may be regulated, i.e. it results from a provision of the law (Machnikowski, 2015, p. 394). If there were no such provision, there would be no natural basis for attributing liability to a human for another living being, either.

The above-mentioned de lege ferenda conclusions regarding changes in civil law correspond to the demands expressed by members of the European expert group on artificial intelligence (European Commission, 2019, p. 3ff). They point out that both the user and the manufacturer may be responsible for artificial intelligence: the user is obliged to use the technology properly, while the role of the manufacturer is to introduce “good” artificial intelligence to the market. According to the proposed concept, in the future artificial intelligence can be treated as a helper, for whom the person entrusting it with the task is responsible. This concept fits within the current civil law regulations regarding tort liability for other people’s actions. It should be emphasised that legal provisions should be adapted to the relevant technological solutions so as to protect society and to comply with human rights. In order not to become a brake on technological development, legal provisions cannot run too far ahead of the technology; they should also take into account technological developments beneficial to society (see Rommetveit et al., 2020, p. 47ff).

It cannot be ruled out that conferring the status of an entity under the law on artificial intelligence will be beneficial from the point of view of society in the future. An analysis of the Polish and European regulations leads to the conclusion that artificial intelligence which acquired legal personality would bear responsibility itself, just like a natural person, with fault as the primary basis for attributing this responsibility to it. Such a concept of liability would not, however, remove all doubts about compensation for damage caused by artificial intelligence. The question arises whether artificial intelligence with a legal personality regulated by the law would be able to redress the damage itself, e.g. whether it would have adequate financial resources. The current state of technology does not allow this question to be answered. The assumption of a legally regulated, rather than inherent, personality leads to the conclusion that the responsibility of artificial intelligence would be more similar to the responsibility of a legal person. Therefore, every electronic legal entity of this kind would have to have certain funds, collected e.g. from compulsory insurance of manufacturers and users of artificial intelligence (similar to a company, which possesses funds contributed initially by shareholders). Having such independent funds is necessary for remedying the damage independently.

7. Final remarks

The analysis of civil law conducted here leads to the conclusion that there are currently no grounds to grant legal personality to artificial intelligence. Therefore, liability for damage caused by artificial intelligence must be borne by natural persons, legal persons or organisational units with legal capacity. The principles of this responsibility should depend on the type of artificial intelligence and its technological advancement. Civil law must be changed in this respect and adapted to the requirements that artificial intelligence sets for the law, because none of the above-mentioned provisions refers directly and comprehensively to artificial intelligence, which is already operating and may cause damage (as in the case of autonomous cars). These changes should take place gradually, but some de lege ferenda conclusions should be taken into account as soon as possible. Currently, compensation for damage caused by artificial intelligence can only be awarded on the basis of the regulations on dangerous products, and these do not apply to all types of artificial intelligence, even those known today, if they cannot be classified as a product.

In the future, the compensatory function of tort liability may justify granting legal personality to artificial intelligence. This should involve the simplification of the rules of liability for devices whose operation will be completely unpredictable. It may transpire that only such a solution will make it possible to redress the damage suffered by an injured party.

There is no doubt that the creation of a coherent and comprehensive concept of legal personality of artificial intelligence will require the cooperation of experts from various fields, primarily lawyers, IT specialists and philosophers. The purpose of their work should be to shape artificial intelligence in such a way that it works for the benefit of humankind within the established legal regulations. It cannot be ruled out that with the development of technology, the legislative solutions previously proposed will prove to be insufficient and ineffective. Nevertheless, the changes in law should be gradual, taking into account the specificity of artificial intelligence. Some changes, concerning inter alia extension of the definition of a product, should be adopted without delay.

References

Act of 17 November 1964 – the Code of Civil Procedure (Dz. U. (Journal of Laws) of 2019, item 1460 as amended).

Act of 23 April 1964 – the Civil Code (Dz. U. (Journal of Laws) of 2019, item 1145 as amended).

Barton, J. T. (2019). Introduction to AI and IoT issues in product liability litigation. Thomson Reuters Westlaw.

Bosek, L. (2019). Perspektywy rozwoju odpowiedzialności cywilnej za inteligentne roboty. Forum Prawnicze, 2(52). https://doi.org/10.32082/fp.v2i52.200

Bryson, J., Diamantis, M., & Grant, D. (2017). Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291. https://doi.org/10.1007/s10506-017-9214-9

Chauvin, T. (2020). Godność człowieka jako źródło podmiotowości prawnej i granica władz. Edukacja prawna, 1(175), 5–11.

Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (OJ L 210 of 7 August 1985), amended with the Directive 1999/34/EC (OJ L 141 of 4 June 1999).

Ministerstwo Cyfryzacji. (2018). Założenia do strategii AI w Polsce. https://www.gov.pl/web/cyfryzacja/ai

Czachórski, W. (1994). Zobowiązania. Zarys wykładu. Wydawnictwo Naukowe PWN.

van Dijk, N. (2020). In the hall of masks: Contrasting modes of personification. In M. Hildebrandt & K. O’Hara (Eds.), Life and the law in the era of data-driven agency (pp. 230–251). Edward Elgar Publishing.

Dubis, W. (2016). Dział III. Wykonanie i skutki niewykonania zobowiązań z umów wzajemnych. In E. Gniewek & P. Machnikowski (Eds.), Kodeks cywilny. Komentarz (5th ed., pp. 917–921). C.H. Beck.

European Commission. (2019). Liability for artificial Intelligence and other emerging digital technologies [Report]. Publications Office of the European Union. https://doi.org/10.2838/573689

European Group on Tort Law. (2005). Principles of European tort law. Text and commentary. Springer.

Grimme, C., Preuss, M., Adam, L., & Trautmann, H. (2017). Social bots: Human-like by means of human control? Big Data, 5(4). https://doi.org/10.1089/big.2017.0044

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004

Klopfenstein, L., Delpriori, S., Malatini, S., & Bogliolo, A. (2017). The rise of bots: A survey of conversational interfaces, patterns, and paradigms. In O. Mival (Ed.), DIS ’17: Proceedings of the 2017 conference on designing interactive systems (pp. 555–565). Association for Computing Machinery. https://doi.org/10.1145/3064663.3064672

Kok, J. N., Boers, E. J. W., Kosters, W. A., Putten, P., & Poel, M. (2002). Artificial intelligence: Definition, trends, techniques and cases. In J. N. Kok (Ed.), Encyclopedia of life support systems (pp. 270–299). Eolss Publishers.

Longchamps de Berier, R. (1999). Polskie prawo cywilne: Zobowiązania (Vol. 2). Ars boni et aequi.

Machnikowski, P. (2015). Prawo zobowiązań w 2025 roku. Nowe technologie, nowe wyzwania. In A. Olejniczak, J. Haberko, A. Pyrzyńska, & D. Sokołowska (Eds.), Współczesne problemy prawa zobowiązań (pp. 379–396). Wolters Kluwer.

Machnikowski, P. (Ed.). (2017). Kodeks cywilny. Księga pierwsza. In Część ogólna. Projekt Komisji Kodyfikacyjnej Prawa Cywilnego przyjęty w 2015 r. Z komentarzem członków Zespołu Problemowego KKPC. C.H. Beck.

Neff, G., & Nagy, P. (2016). Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication, 10, 4915–4931. https://ijoc.org/index.php/ijoc/article/view/6277

Nevejans, N. (2016). European civil law rules in robotics [Study]. Publications Office of the European Union. https://doi.org/10.2861/946158

Nowakowski, Z. (1979). Wina i ryzyko jako podstawy odpowiedzialności. In Z. Radwański (Ed.), Studia z prawa zobowiązań (pp. 103–115). Państwowe Wydawnictwo Naukowe.

Ohanowicz, A., & Górski, J. (1970). Zarys prawa zobowiązań. Państwowe Wydawnictwo Naukowe.

Open letter to the European Commission: Artificial intelligence and robotics. (2018). https://www.politico.eu/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf.

Pegani, A. (2016). Sztuczny Człowiek. Wizerunek w wybranej literaturze oraz filmie. Rozpisani.

Pilich, M. (2018). Przedmowa & Ustawa z dnia 23 kwietnia 1964 r. – Kodeks cywilny. In J. Gudowski (Ed.), Kodeks cywilny. Część ogólna. Komentarz do wybranych przepisów (pp. 8–22). Wolters Kluwer.

Radwański, Z. (1997a). Prawo cywilne – część ogólna. C.H. Beck.

Radwański, Z. (1997b). Zobowiązania. Część ogólna. C.H. Beck.

Retto, J. (2017). Sophia, first citizen robot of the world.

Rommetveit, K., van Dijk, N., & Gunnarsdóttir, K. (2020). Make way for the robots! Human- and machine-centricity in constituting a European public–private partnership. Minerva, 58(1), 47–69. https://doi.org/10.1007/s11024-019-09386-1

Shumway, E. (1901). Freedom and slavery in Roman law. The American Law Register, 49, 636–653. https://doi.org/10.2307/3306244

Śmieja, A. (2009). Ogólna charakterystyka odpowiedzialności z tytułu czynów niedozwolonych. In A. Olejniczak (Ed.), System prawa prywatnego. Prawo zobowiązań – część ogólna (Vol. 6, pp. 335–363). C.H. Beck; Instytut Nauk Prawnych PAN.

Świerczyński, M., & Żarnowiec, Ł. (2019). Prawo właściwe dla odpowiedzialności za szkodę spowodowaną przez wypadki drogowe z udziałem autonomicznych pojazdów. Zeszyty Prawnicze, 19(2), 101–135. https://doi.org/10.21697/zp.2019.19.2.03

Szpunar, A. (1985). Odpowiedzialność za szkody wyrządzone przez zwierzęta i rzeczy. Wydawnictwo Prawnicze.

Targosz, T. (2004). Nadużycie osobowości prawnej. Zakamycze.

Teubner, G. (2018). Digital personhood? The status of autonomous software agents in private law.

Vladeck, D. C. (2014). Machines without principals: Liability rules and artificial intelligence. Washington Law Review, 89(1), 117–150.

von Bar, C., Clive, E., & Schulte-Nölke, H. (Eds.). (2009). Principles, definitions and model rules of European private law. Draft common frame of reference (DCFR). Articles and comments. Sellier.

Wang, P. (2008). What do you mean by “AI”? In P. Wang, B. Goertzel, & S. Franklin (Eds.), Proceedings of the 2008 conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference (Frontiers in Artificial Intelligence and Applications) (Vol. 171, pp. 362–373). IOS Press.

Wolter, A., Ignatowicz, J., & Stefaniuk, K. (2001). Prawo cywilne. Zarys części ogólnej. Lexis Nexis.

Zagórna, A. (2020, June 9). GPT-3, czyli SI dobra dla ludzi. Podobno. https://www.sztucznainteligencja.org.pl/gpt-3-czyli-si-dobra-dla-ludzi-podobno.

Ziemianin, B., & Kitłowski, E. (2013). Prawo zobowiązań. Część ogólna. Wolters Kluwer.

Ziemianin, B., & Kuniewicz, Z. (2007). Prawo cywilne. Część ogólna. Ars boni et aequi.

Footnotes

1. For historical development of personification, see van Dijk (2020).

2. In the context of these considerations, it is possible to recall the concepts of artificial intelligence personifying human beings, which can be discussed from the point of view of social predictions. See Pegani (2016).

3. Resolution no. 196 of the Council of Ministers on establishing “A policy for the development of artificial intelligence in Poland from 2020”, Monitor Polski 2021, item 23.

4. These provisions are an implementation into the Polish legal system of the Council Directive 85/374/EEC.

5. More on this in section 5 of this study.
