
9. Legal Personhood for AI?

This chapter will engage with the legal issues of autonomous systems, asking the question whether (and if so, under what conditions) such systems should be given legal personhood.


Barbara Kasten, Scene III, 2012. © Barbara Kasten. Courtesy the artist, Bortolami, New York, and Thomas Dane Gallery.

Barbara Kasten’s Scene III evokes the reflective nature of human personhood, where reflection refers to the complex interactions between a person and their environment, the shadows they throw, the light that puts them in perspective and the mirrors they may encounter in their environment, whether human or other. This may throw some light on the artificial, constructive nature of legal personhood and how it may enhance or diminish human agency.


In 1942 science fiction author Isaac Asimov formulated his famous ‘Laws of Robotics’ in his short story ‘Runaround’ (included in his 1950 collection of short stories I, Robot):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These ‘laws’ raise more questions than they answer, which makes them a very interesting attempt to confront the unpredictability of autonomous computational systems. The first type of question concerns the sequence of the laws; as the cartoon below indicates, the sequence is not arbitrary.

The second type of question actually proves the point made by the cartoon; these laws (and the sequence of applying them) are not merely relevant for individual choice but implicate society as a whole. This also goes for the question of how society as a whole enables or restricts individual choice. Asimov did in fact articulate a fourth law, the ‘zeroth law’, meant to precede the others:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

By now the rise of autonomous systems (from connected cars and industrial robotics to search engines and fintech) has reached a point where the paradoxes implied in these laws become apparent. The Massachusetts Institute of Technology (MIT) developed an online software tool that invites users to answer questions about the kind of choices a fully autonomous, self-driving car would have to make.1 For instance, whether the car should prioritise its passengers when faced with the dilemma of killing either pedestrians or its passengers, or whether it should decide such options by ranking people based on their age, their number or other potentially relevant criteria. This raises yet other questions, such as whether such choices can be hardwired into the car’s firmware by the manufacturer at its discretion, or should be decided by the owner (which may be a car rental service) or the user (which may be anybody who actually ‘drives’ the car as a passenger). One could also imagine legislation where such choices are made by the legislature and imposed on developers, manufacturers, retailers, owners and/or users.

A closer look at the reality of supposedly driverless cars, however, suggests two objections to the way the issue is framed. First, experts do not agree on whether the level of autonomy assumed in the portrayal of these choices will ever be achieved. Some suggest that this type of robotics is running into a wall, due to the limitations of data-driven ‘intelligence’ in real-life scenarios and the risks its employment generates. In robotics, developers speak of ‘the envelope’ of a robot. The envelope is usually designed simultaneously with the robot, to ensure its functionality and the safety of those interacting with it. Often this implies physically separate spaces for robots and humans, as the navigation of robots in a shared space generates a substantial risk of harm. In point of fact, Rodney Brooks, a famous roboticist who designed an industrial robot that may be trusted in a shared space, predicts that self-driving cars will require separate lanes and roadblocks to reduce the risk of impact on human users of public roads.

Second, the issues are framed in somewhat naïve utilitarian terms, defining the problem in terms of individual preferences that can then be aggregated and decided based on whatever the majority of a specific user community prefers. Such a utilitarian calculus assumes that preferences are given, do not fluctuate over time, concern independent variables and can be assessed out of context (based on a schematic depiction that restricts itself to specific details, abstracting from many others). The framing also toys with the question of whether such choices are agent-dependent: will an agent’s choice about whose life must be prioritised depend on whether they are in control of the car’s behaviour, or on whether they may be the victim? Is the fact that the agent has a family relationship with the potential victim morally relevant, or should choices be made from behind a ‘veil of ignorance’ about such agent-dependent details? Is it a good idea to consider such questions a matter of individual preferences, similar to a taste for either red or white wine?
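To make the objection concrete, here is a minimal sketch, in Python and not derived from the MIT tool, of what such naive aggregation amounts to: once a dilemma is reduced to fixed, context-free labels, ‘deciding’ it is nothing more than a majority count, silently baking in all the assumptions just listed.

```python
# A minimal sketch (not derived from the MIT tool) of the naive
# utilitarian aggregation criticised above: individual answers to a
# dilemma are treated as fixed, context-free preferences and 'decided'
# by majority count.
from collections import Counter

# Hypothetical responses to one schematic dilemma: protect the car's
# passengers or the pedestrians it may hit.
responses = [
    "spare_passengers", "spare_pedestrians", "spare_passengers",
    "spare_passengers", "spare_pedestrians",
]

def aggregate(prefs: list[str]) -> str:
    """Return the majority preference, ignoring context, time and
    agent-dependence: precisely the assumptions questioned above."""
    winner, _count = Counter(prefs).most_common(1)[0]
    return winner

print(aggregate(responses))  # -> 'spare_passengers'
```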

In the context of this book, the question we face is one of legal personhood rather than moral agency. In 2017, the European Parliament (EP) adopted a resolution requesting the European Commission (EC) to address the potential of ‘civil law rules on robotics’.2 The resolution was passed with 396 votes in favour, 123 against and 85 abstentions. Though the EC is not bound by the resolution, it must respond and explain itself if it does not act upon the recommendations it contains. Under point 59, the EP:

Calls on the Commission, when carrying out an impact assessment of its future legislative instrument, to explore the implications of all possible legal solutions, such as:

(…)

f) creating a specific legal status for robots, so that

  • at least the most sophisticated autonomous robots

  • could be established as having the status of electronic persons

  • with specific rights and obligations,

  • including that of making good any damage they may cause, and

  • applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;

This chapter will engage with the legal issues of autonomous systems, asking the question of whether (and if so, under what conditions) such systems should be given legal personhood. Note that legal personhood can be attributed in the context of different legal domains: a legal system may, for instance, grant corporations legal personhood under private law while restricting criminal liability to natural persons. It should be clear that criminal liability, with its emphasis on censure, assumes a kind of moral agency that is not obvious in the case of current-day autonomous systems. Strict liability in private law, however, would not necessarily be concerned with moral blame.

To investigate these issues, we will first discuss the concept of legal subjectivity and legal agency, followed by the concept of artificial agency, resulting in a first assessment of the potential of civil liability of autonomous systems.

9.1 Legal subjectivity

In modern positive law, there are two types of legal subjects: (1) natural persons and (2) legal persons. Human beings are considered ‘natural persons’, though this should not be seen as something ‘natural’. In the past, human beings such as slaves and women were denied the status of legal subject, meaning they could not own property, conclude contracts, vote, or claim a right to privacy or freedom of expression. The decision that all human beings are legal subjects was a political decision that sprang from the idea that governments should treat each individual as deserving equal respect and concern (see above 2.2 and 3.3). Legal subjectivity is attributed by positive law, just like subjective rights (the rights of legal subjects) depend on objective law (the totality of rules and principles that decide which legal conditions result in what legal effect, see above 3.1.3).

Apart from individual human beings, the law can and does attribute legal personhood to other entities, for instance to corporations, associations and foundations, or to municipalities and the state. So, both private and public bodies can qualify as legal persons if the legislature (or precedent in common law) attributes legal subjectivity to them. If so, they can act in law: they can own property, conclude contracts, be held liable for damage caused under private law, and they may even be charged with a criminal offence. However, whereas human beings are legal subjects under private, constitutional and criminal law, this is not necessarily the case for legal persons such as corporations. This will vary per jurisdiction; in some jurisdictions a corporation is a legal person under private law, but not under criminal law.

The concept of a person derives from the Latin persona, which means mask. A mask does two things: it enables one to play a role, and it shields the entity behind the mask. The mask thus provides its bearer with positive freedom (the role it can now play) and with negative freedom (warding off identification between the mask and its bearer). On the one hand, the ‘mask’ of the legal persona allows an entity to act in law (to create legal effect) and to be held liable; on the other hand, the ‘mask’ shields and thus protects that entity. The mask prevents identifying a person of flesh and blood with their role in law, thus ruling out that a person is defined by their legal status. In this way the law leaves room for reinvention of the self. The idea of the persona is pivotal for the instrumental and protective role of law: it is an instrument where it enables an entity to act in law or to be held liable, and it protects where it prevents equating legal status with the living person.

This raises the question of whether there are criteria that condition the attribution of legal personhood. Many authors believe that human beings are naturally legal subjects, whereas corporations are legal subjects due to a legal fiction. They are treated as if they are legal persons (as a legal fiction), whereas they are not ‘really’ persons or subjects. This has given rise to metaphysical musings about what distinguishes real from fictional persons. The problem with this perspective is that it overlooks that the attribution of legal subjectivity always concerns an artificial construct. As John Dewey observed in a famous article on legal personhood, a legal fiction such as legal personhood is real even though it is artificial, just as an artificial lake is a real lake, not an imaginary lake.

To emphasise that legal subjectivity is an artificial construct, based on a performative speech act that qualifies an entity as a legal subject, we should be reminded that at some point animals could be charged with a criminal offence; black people have been ‘regarded as beings of an inferior order’ with ‘no rights which the white man was bound to respect’ (Dred Scott);3 while, for instance, an unborn baby may be ‘regarded to have been born already as often as its interests require so’ (art. 1:2 Dutch Civil Code).

The artificiality of legal personhood is related to the fact that legal subjectivity is by definition attributed by positive law (statute or common law) and cannot be assumed, while—in turn—the legal capacity of legal subjects can be restricted by positive law (for instance in the case of minors, or in the case of guardianship).

Note that the terminology is such that the term legal subject is used for both natural persons and legal persons, whereas the term legal person is reserved for legal subjects that are not natural persons. As a consequence, legal persons always require representation; a corporation cannot act other than by way of its legal representatives. Clearly, if a legal person is liable under criminal law, it cannot be put in prison, though other punishments will apply (such as fines, closure of operations or even termination of the organisation).

All this should lead to the conclusion that, in principle, positive law can attribute legal personhood to any entity whatsoever, depending on whether the legislature (or the common law) deems such an attribution necessary to protect legally relevant rights, freedoms and interests.

9.2 Legal agency

As to terminology, it makes sense to distinguish between a human person, used as a biological term (distinguishing humans from other animals, but also raising the question of when a cyborg stops qualifying as a human person); a moral person, used as a moral term (raising the issue of whether, and if so under what conditions, an artificial agent can be qualified as a moral person, capable of acting rightly or wrongly); and a natural or a legal person, used as a legal term based on positive law (raising the question of which animals or artificial agents would qualify for legal personhood, noting that this will involve a political decision).

These issues can also be framed in terms of agency instead of personhood, e.g. in terms of moral agency, which is generally understood as the capability to engage in intentional action, which in turn assumes the capability of giving reasons for one’s actions; or, in terms of legal agency, generally understood as the capability, attributed by law, to act in law and to be liable for one’s own actions (legal subjectivity). Interestingly, however, there is a second meaning for the concept of legal agency, which refers to the capability, attributed by law, to act on behalf of another (acting as a proxy, a representative).

This second meaning of agency assumes a specific legal relationship between an agent and its principal, where the agent acts on behalf of a principal. This is usually based on a contractual relationship between the agent and its principal on the one hand and the agent and a third party on the other hand, thus creating a contractual relationship between principal and third party.

For instance, a corporation running fashion shops in various locations may be represented by salespersons who actually sell clothing to visitors of the shop. In that case the salesperson is the agent and the corporation is the principal. Note that under current law (in most jurisdictions) both the principal and the agent must be legal subjects for the third party to be bound by the actions of the agent. This already raises the interesting question of whether the corporation is bound if clothes are sold by an artificial agent (a piece of software that is part of a webshop selling clothes online). The answer is yes, but this is based on the fact that such a software agent is considered a tool used by the corporation, not on agency law.

Under agency law, it is crucial to establish the authority of the agent, that is, the extent to which the agent is allowed to act on behalf of the principal. We distinguish the scope of the authority from its origin. As to the scope, the law differentiates between universal agents (with authority for all acts), general agents (with authority for all acts regarding a specific function) and special agents (with authority for one specific type of act). As to the origin, the law differentiates between actual authority (express or implied), ostensible or apparent authority (estoppel), and ratified authority (where the principal confirms authority despite the fact that the agent acted ultra vires, i.e. beyond the stipulated authority).

An important question is whether the principal is liable for the actions of an agent that acts ultra vires. In other words: does the legal effect of a contract with a third party, concluded by the agent on behalf of the principal, apply to the principal if the agent went beyond its authority and the principal did not ratify? The answer depends on the circumstances. The principal is bound:

  • if the third party was justified in trusting the agent to act within the scope of their authority, and the principal acted or omitted to act in a way that generated that justified trust; or

  • if the risk falls to the principal on the basis of generally accepted principles.

If these conditions do not apply, the agent itself is liable, as the schematic sketch below illustrates.
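The decision rule just described can be summarised in code, reducing the legal tests to booleans purely for illustration; the class and function names below are invented, and real doctrine is of course far more contextual.

```python
# A schematic sketch of the ultra vires rule set out above; the legal
# tests are reduced to booleans purely for illustration.
from dataclasses import dataclass

@dataclass
class UltraViresCase:
    third_party_justifiably_trusted: bool  # trust in the agent's authority
    principal_generated_trust: bool        # by act or omission
    risk_allocated_to_principal: bool      # generally accepted principles

def bound_party(case: UltraViresCase) -> str:
    """Who bears the legal effect when the agent exceeded its authority
    and the principal did not ratify?"""
    if case.third_party_justifiably_trusted and case.principal_generated_trust:
        return "principal"
    if case.risk_allocated_to_principal:
        return "principal"
    return "agent"

print(bound_party(UltraViresCase(True, True, False)))    # -> 'principal'
print(bound_party(UltraViresCase(False, False, False)))  # -> 'agent'
```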

9.3 Artificial agents

Before moving deeper into the question of whether software or embedded systems can or should be qualified as legal persons, we need to define what is meant by an artificial agent. Luc Steels, a renowned AI researcher, defines an agent as follows:

  1. A system (a set of elements with relations amongst themselves and with the environment);

  2. performing a function for another agent;

  3. capable of maintaining itself.

He then differentiates between an automatic agent, which is self-steering on the basis of external laws, and an autonomous agent, which is both self-steering and self-governing.

In other work I have distinguished between automatic, autonomic and autonomous agency, where agency is defined as a combination of perception and the ability to act on what is perceived, while perception is informed by potential action (a toy code sketch of the first two levels follows the list):

  1. Automatic agency implies that the conduct of the agent is entirely predefined, e.g. a thermostat or a smart contract;

  2. Autonomic agency implies that the agent is capable of self-management, self-repair, self-configuration, e.g. a biological central nervous system, power management in a data centre, cooperating wireless sensor networks that ‘run’ e.g. a smart home;

  3. Autonomous agency implies both consciousness and self-consciousness, meaning that the agent is capable of self-reflection, intentional action, argumentation, and the development of second order desires, notably human beings. Second order desires are desires about our desires, such as a desire not to desire smoking.
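The contrast between the first two levels can be caricatured in code. The following is a minimal sketch built around a hypothetical thermostat; the third, autonomous level, which requires (self-)consciousness, deliberately has no computational counterpart here.

```python
# A toy sketch of the first two levels of agency; names and thresholds
# are invented. The autonomous level has no counterpart in code.

class AutomaticThermostat:
    """Automatic agency: conduct entirely predefined by its designer."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def act(self, temperature: float) -> str:
        return "heat_on" if temperature < self.setpoint else "heat_off"

class AutonomicThermostat(AutomaticThermostat):
    """Autonomic agency: self-configuring, e.g. adapting its own
    setpoint to observed occupancy, without any awareness of doing so."""

    def observe_occupancy(self, occupied: bool) -> None:
        self.setpoint += 1.0 if occupied else -1.0  # self-management

t = AutonomicThermostat(setpoint=19.0)
t.observe_occupancy(occupied=True)  # setpoint adapts itself to 20.0
print(t.act(temperature=19.5))      # -> 'heat_on'
```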

Steels’ autonomous agents would fit with my autonomic agency. Note that autonomic agency does not necessarily imply consciousness and many organisms, including conscious animals, would fall within its scope, whereas autonomous agency requires self-awareness in a way that escapes autonomic agents. It seems that moral personhood is contingent upon autonomous agency. If, and to the extent that, legal personhood were to require self-consciousness, autonomic agents would not qualify. However, corporations that enjoy legal personhood are not self-conscious, even if they may be represented by human beings who are.

This implies that there is no categorical legal answer to the question of whether an autonomous computational system (usually an autonomic system in the above sense) should be given legal personhood. That question is a political question, to be answered by a legislature weighing the advantages and disadvantages of such a move. There is, however, one caveat. Legal personhood that involves criminal liability or constitutional rights, such as the right to privacy or non-discrimination, seems to require entities that can be called to account for their actions, which assumes a kind of self-consciousness. Interestingly, corporations can be held liable under criminal law, and the ECtHR has, for example, found that corporations may have a right to privacy, despite corporations having no consciousness, let alone self-consciousness. It is pivotal to acknowledge that legal personhood is always restricted compared to the kind of full legal subjectivity enjoyed by natural persons, but it is also pivotal to recognise that restricted forms of legal personhood have been attributed that seem to involve blame (criminal liability) or the kind of freedom that is often at stake when human rights are violated (constitutional or fundamental rights).

The question to be answered when inquiring whether legal personhood should be attributed to artificial agents is a pragmatic one: (1) what problem does the introduction of such attribution solve, (2) what problems does it not solve, and (3) what problems does it create?

9.4 Private law liability

In this chapter we will focus on the attribution of legal personhood to artificial agents insofar as it enables private law liability of such agents. If we combine Steels’ definition, where an artificial agent performs a function for another agent, with the issue of an artificial agent acting on behalf of a natural or legal person, the following problem surfaces: under current law, to be a legal agent implies being a legal subject, whereas an artificial agent would be a legal object, a tool, but not a legal subject. This means that an artificial agent cannot bind the legal subject on whose behalf it operates, other than as a tool. Many scholars have raised the question of an artificial agent that causes harm or damage in a way that was unforeseeable for its ‘principal’, as they fear that such unforeseeability will stand in the way of liability of the ‘principal’.

In the case of machine-to-machine contracting with the help of software agents that are entirely determined by their algorithms, those employing the ‘agents’ can foresee what types of contracts will be concluded. In the case of machine-to-machine contracting with software agents that act autonomically (displaying, for example, emergent behaviour), those employing them cannot foresee all the consequences. Insofar as this implies that those employing such agents escape liability (as their own conduct may not have been wrongful, precisely because they could not have foreseen the harm), one could argue that victims would benefit if the agent itself could be held liable. To protect potential victims against suffering damage for which they cannot be compensated, artificial agents whose behaviour cannot be foreseen by those who employ them could be certified and registered as legal persons, on the condition that they have access to funds that compensate potential victims in case of harm or damage. One could even imagine a prohibition of artificial agents with a propensity to cause harm or damage unless they are certified, registered and either insured or provided with sufficient funds to compensate actual victims.
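To make the certification-and-registration idea concrete, the following is a hypothetical sketch; the register, its fields and the coverage threshold are invented for illustration and do not reflect any existing law.

```python
# A hypothetical sketch of the certification-and-registration idea
# discussed above; all names and figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class RegisteredAgent:
    agent_id: str
    certified: bool
    guarantee_fund: float  # funds earmarked to compensate victims
    insured: bool

MIN_COVERAGE = 100_000.0  # illustrative threshold, not a legal figure

def may_operate(agent: RegisteredAgent) -> bool:
    """An agent may act on its own account only if certified and either
    insured or sufficiently funded to compensate potential victims."""
    return agent.certified and (
        agent.insured or agent.guarantee_fund >= MIN_COVERAGE
    )

bot = RegisteredAgent("shopbot-42", certified=True,
                      guarantee_fund=50_000.0, insured=False)
print(may_operate(bot))  # -> False: neither insured nor funded enough
```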

If the problem to be solved is that of unforeseeable damage that rules out liability of whoever employs the agent, we can foresee the following solutions:

  1. The agent can be seen as a tool (as under current law):

  • Courts or legislatures could relax the requirement of intent or negligence on the side of whoever employs the tool (a move towards strict tort liability);

  • The law could deny validity to transactions that were generated by autonomic agents that are unpredictable (which might, however, stifle innovation).

  2. The artificial agent can be registered as a legal person (future law?):

  • This would make it possible to attribute actual or ostensible authority to the agent, thus making its principal liable (raising the question of how this differs from strict liability);

  • This would, however, also make it possible to hold the agents liable on their own account (certification, own funds, …) if, for instance, they overstep their authority.

The question of legal personhood for artificial agents clearly demonstrates that even if its attribution would solve some problems, it would create others. Many legal and other scholars warn that such attribution should not enable those who develop and employ artificial agents to outsource and escape responsibility, thus incentivising them to take risks and externalise costs because they know they will not be held liable.

In 2019, an Expert Group on Liability and New Technologies (set up by the European Commission) published its Report on Liability for Artificial Intelligence,4 in response to the resolution of the European Parliament referred to in the introduction of this chapter. The Expert Group developed the following recommendations:

  • A person operating a permissible technology that nevertheless carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation.

  • In situations where a service provider ensuring the necessary technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI, this should be taken into account in determining who primarily operates the technology.

  • A person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor, and maintain the technology in use and—failing that—should be liable for breach of such duties if at fault.

  • A person using a technology which has a certain degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary.

  • Manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market.

  • For situations exposing third parties to an increased risk of harm, compulsory liability insurance could give victims better access to compensation and protect potential tortfeasors against the risk of liability.

  • Where a particular technology increases the difficulties of proving the existence of an element of liability beyond what can be reasonably expected, victims should be entitled to facilitation of proof.

  • Emerging digital technologies should come with logging features, where appropriate in the circumstances, and failure to log, or to provide reasonable access to logged data, should result in a reversal of the burden of proof in order not to be to the detriment of the victim (see the sketch after this list).

  • The destruction of the victim’s data should be regarded as damage, compensable under specific conditions.

  • It is not necessary to give devices or autonomous systems a legal personality, as the harm these may cause can and should be attributable to existing persons or bodies.
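The logging recommendation, referred to above, lends itself to a minimal sketch: an append-only decision log whose absence, or inaccessibility, would reverse the burden of proof. The record format below is an assumption for illustration, not a prescribed standard.

```python
# A minimal sketch of 'logging by design': an append-only decision log
# that could later serve as evidence. The record fields are assumptions.
import json
import time

class DecisionLog:
    def __init__(self, path: str):
        self.path = path

    def record(self, inputs: dict, decision: str) -> None:
        """Append one timestamped decision record as a JSON line."""
        entry = {"ts": time.time(), "inputs": inputs, "decision": decision}
        with open(self.path, "a") as f:  # append-only by convention
            f.write(json.dumps(entry) + "\n")

log = DecisionLog("agent_decisions.jsonl")
log.record({"obstacle": "pedestrian", "speed_kmh": 32}, "emergency_brake")
```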

It seems that the Expert Group seeks to solve the problems caused by emergent behaviour and the consequent unpredictability of artificial agents by adapting the requirements of private law liability, without resorting to legal personhood for such agents. The main concern of the experts seems to be that those manufacturing, operating, using or updating these agents must be held accountable for harm caused, to prevent hazardous employment of artificial agents. By ensuring that victims can hold to account those who take the risk of employing unpredictable artificial agents, this approach may actually stimulate innovation, as it will increase the reliability of artificial agents that are put on the market.

References

Re Asimov’s laws of robotics:

Clarke, Roger. 1994. ‘Asimov’s Laws of Robotics: Implications for Information Technology’. Computer 27 (1): 57–66. https://doi.org/10.1109/2.248881.

Pasquale, Frank A. 2017. ‘Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society’. Ohio State Law Journal 78: forthcoming.

Re legal subjectivity for non-humans:

Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant. 2017. ‘Of, for, and by the People: The Legal Lacuna of Synthetic Persons’. Artificial Intelligence and Law 25 (3): 273–91. https://doi.org/10.1007/s10506-017-9214-9.

Chopra, Samir, and Laurence White. 2004. ‘Artificial Agents: Personhood in Law and Philosophy’. In Proceedings of the European Conference on Artificial Intelligence, 635–39. IOS Press.

French, Peter A. 1979. ‘The Corporation as a Moral Person’. American Philosophical Quarterly 16 (3): 207–15.

Hildebrandt, Mireille. 2011. ‘Criminal Liability and “Smart” Environments’. In Philosophical Foundations of Criminal Law, edited by R.A. Duff and Stuart Green, 507–32. Oxford University Press. http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199559152.001.0001/acprof-9780199559152-chapter-22.

Koops, Bert-Jaap, Mireille Hildebrandt, and David-Olivier Jacquet-Chiffelle. 2010. ‘Bridging the Accountability Gap: Rights for New Entities in the Information Society?’ Minnesota Journal of Law, Science & Technology 11 (2): 497–561.

Wells, Celia. 2001. Corporations and Criminal Responsibility. 2nd ed. Oxford Monographs on Criminal Law and Justice. Oxford: Oxford University Press.
