I enjoyed A Legal Theory for Autonomous Artificial Agents (LTAAA) by Samir Chopra and Laurence White, and yet I still have some doubts. In order to clarify this reaction, let me distinguish three different kinds of legal agents.
A1 – Agents can be a “source” of responsibility for other agents in the legal system
A2 – Agents can be considered as “strict agents” in civil (as opposed to criminal) law
A3 – Agents can be “proper persons” with rights (and duties) of their own
I find convincing how LTAAA deals with A1 in Chapter 4 (tort law) and A2 in Chapters 2 and 3 (contracts). My doubts revolve around A3 in Chapter 5 (which also includes matters of legal agency that concern the criminal law field).
Although I reckon that the most pressing legal issues raised by AAAs today concern A1 and A2, let me dwell here on A3.
In a nutshell, the thesis of LTAAA is that “none of the philosophical objections to personhood for artificial agents – most but not all of them based on ‘a missing something argument’ – can be sustained, in the sense that artificial agents can be plausibly imagined that display that allegedly missing behaviour or attribute. If this is the case, then in principle artificial agents should be able to qualify for independent legal personality, since it is the closest legal analogue to the philosophical conception of a person” (op. cit., 182).
To be sure, I concede the point as a matter of principle. In the wording of Lawrence Solum’s Legal Personhood for Artificial Intelligences: “one cannot, on conceptual grounds, rule out in advance the possibility that AIs should be given the rights of constitutional personhood” (1992: 1260).
Besides, I agree with LTAAA that (some types of) AAAs can, or should, properly be conceived of as strict agents in civil law (A2). For example, I have proposed a parallel between the Roman law mechanism for A2 in the case of slaves, that is, the peculium, and today’s A2 for AAAs. However, what LTAAA claims is different. Forms of artificial accountability such as the “digital peculium” would be unsatisfactory not because, say, the parallels between AAAs and slaves are deemed unethical or anthropologically biased. Rather, the autonomy granted by such forms of accountability is reckoned insufficient because, once we accept that some artificial agents may properly be conceived of as strict agents in the field of contracts, their legal personhood would follow as a result. Moreover, “at the risk of offending humanist sensibilities,” LTAAA argues that we should yield before the fact that, sooner or later, AAAs will be a sort of “being sui juris,” capable of “sensitivity to legal obligations” and even of “susceptibility to punishment,” which finally allows us “to forgive a computer” (op. cit., 180).
My doubts on how LTAAA addresses A3 for AAAs can be summed up in four points.
First, the example of the legal status of slaves under ancient Roman law shows that strict legal agency in contract law (A2) and the legal personhood of AAAs (A3) are not correlated. Aside from the ethical aberration of humans being treated as mere things, there are no particular reasons for claiming that the legal personhood of AAAs (A3) is necessarily entwined with their status as strict agents in the civil law field (A2). Even the European Union, after all, existed for decades without enjoying its own legal personhood!
Second, given the current state of the art in technology, AAAs are far from achieving a human-like endowment of free will, autonomy, and moral sense, whatever the controversial meaning of such expressions. I would admit that some AAAs are endowed with self-knowledge and autonomy “in the engineering meaning of these words” (EURON 2007). However, it is precisely the engineering meaning of these words that reminds us of the very difference between civil and criminal law. The level of autonomy of some AAAs, which is sufficient to produce relevant effects in the field of contracts, is arguably insufficient to bring AAAs before judges and have them found guilty by criminal courts.
Third, LTAAA should explain the pragmatic (rather than conceptual) reasons for its stance. As far as I understand, “not only is according artificial agents with legal personality a possible solution to the contracting problem, it is conceptually preferable to the other agency law approach to legal agency without legal personality, because it provides a more complete analogue with the human case” (op. cit., 162, italics added). However, have not these same authors insisted on the thesis that the dependent legal responsibility of AAAs is “based on a combination of human chauvinism and a misunderstanding of the notion of legal person” (op. cit., 27)? Why, then, should we endorse the “analogy with the human case” when dealing with AAAs?
Finally, I may admit that, once a novel generation of AAAs endowed with human-like free will, autonomy, or moral sense materializes, lawyers should be ready to take both A3 and the constitutional rights of AAAs seriously. But, if we accept the thesis of LTAAA, it is more than likely that the meaning of traditional notions such as contracts, torts, or constitutional rights will change. As a matter of fact, what the meaning of such legal concepts would then be is still left to the imagination of science fiction writers, rather than to the science-faction analysis of legal experts. Would an AAA lawyer be an advocate of the natural law tradition, so that rules should be viewed as an objective imperative whose infringement implies a violation of the nature of the artificial agent? Or would the lawyer, vice versa, be a sort of legal realist, so that norms depend on how AAAs affect human understanding of the world, their own knowledge and environment? And what about the institutional stances of AAA lawyers who, contrary to their colleagues keen to follow the Kelsenian lesson of the pure theory of law, focus on the substantive mechanisms of a new artificial order?