LTAAA Symposium: Legal Personhood for Artificial Agents

In this post, I’d like to make some brief remarks on the question of legal personhood for artificial agents, and in so doing, offer a response to Sonia Katyal’s and Ramesh Subramanian’s thoughtful posts on A Legal Theory for Autonomous Artificial Agents. I’d like to thank Sonia for making me think more about the history of personhood jurisprudence, and Ramesh for prompting me to think more about the aftermath of granting legal personhood, especially the issues of “Reproduction, Representation, and Termination” (and for alerting me to Gillick v West Norfolk and Wisbech Area Health Authority).

I have to admit that I don’t yet have any clearly formed thoughts on the issues Ramesh raises. This is not because they won’t be real issues down the line; indeed, I think automated judging is more than just a gleam in the eye of the folks who attend ICAIL conferences. Rather, I think those issues will snap into sharper focus once artificial agents acquire more functionality, become more ubiquitous, and, more interestingly, come to occupy roles formerly occupied by humans. We will then have a clearer idea of how to frame those questions more precisely with respect to a particular artificial agent and a particular factual scenario.

Sonia says,

We see a clear picture that justifies the extension of corporate personhood. The unavailability of a path of liability, for example, and the existence of powerful group dynamics might arguably justify the doctrine. But similarly powerful justifications are missing from Chopra and White’s eloquent formulation. It may be that personifying artificial agents might lead to more standardization or lower administrative costs, but one might need to see more discussion of why that is a more appropriate remedy than others that raise lesser philosophical objections.

In responding to Sonia, I think one clarification is in order (this is a general point, and is thus directed at Lawrence Solum’s remarks on zombies and legal personhood as well, though I will have more to say about that particular thought experiment a bit later). Much of the rhetorical thrust of our concluding chapter was directed at noting that, for us, there did not appear to be a conceptual reason for denying artificial agents legal personhood once certain capacities had been attained (this claim is obviously contingent on their continued deployment and use). Our arguments in that chapter were thus largely driven towards what is, in fact, a very weak position: that there did not seem to be any knock-down “impossibility theorem” against artificial agents being granted legal personhood.

We noted that the granting of legal personhood is likely when our society and our legal systems consider that an artificial agent has attained a particular position within our network of social, economic, and legal relations, and is thus able to attenuate or help realize ends we might have in mind. I do not think that position has been attained as yet. I don’t want to play the part of the prophet, but I think that trends are such that we are only likely to increase the executive power granted to these entities, and thus to arrive at such a situation sometime in the near future. The contingencies of technical and economic development being what they are, I perhaps don’t even have a clear idea of what sorts of ends we might want to bring about that could warrant such a grant of personhood. Our intention in the personhood chapter was to point out how the law’s standard requirements of legal persons could be met by artificial agents, by considering present capacities (and their plausible extrapolation). If personhood is considered a solution for a doctrinal or empirical situation that needs remedy, artificial agents’ technical capacities and functionality, and their ability to fulfill the functions required of legal persons, will play a significant role in overriding purely conceptual arguments.

Our historical review of personhood jurisprudence showed it to be largely result-oriented, with grants of personhood made to bring about particular outcomes that had been elevated in our society’s ordering of priorities. It was not clear to us that such a situation had come about even in the most pertinent of situations: that of contracting online. There, we felt that even though personhood would certainly be a solution for the contracting problem, it might be a case of overkill, a solution unlikely to be adopted by a system as conservative as the law. Our suggestion of an agency analysis as a solution to the contracting problem was driven by our view that the analogy with human agents, and with the principles underlying the common-law notion of legal agency, was plausible enough to serve as the basis for an argument that artificial agents be considered legal agents of their principals. This would require some creative stretching and interpretation of the common-law doctrine of agency–as Deborah DeMott noted in her post–but this is a common enough activity that, were the desired outcomes to require it, it could be carried out.

My understanding of law is perhaps informed by a mix of legal positivism, legal realism, and critical legal studies; I don’t think the law only captures intuitions or reifies them through its practices; rather, the law also creates a particular ordering of our relations that enables certain sorts of ends to be met. I think of the law as making changes as and when needed to facilitate its own efficient implementation and continued tractability. If legal agency is required, the law will grant it regardless of our intuitions about whether those entities have philosophical agency or not; if legal personhood is required, the law will make that change accordingly. Worries about whether these legal persons are like human persons or not will be sidelined. This is why we considered that the strongest arguments against grants of legal personhood will be “pragmatic rather than conceptual”. But once that is done, because of law’s expressive impact, the way the law uses those terms will have an effect on future philosophizing about them. So there is some weight to be attached to that grant, and some importance to the philosophical debate that surrounds it. But when we consider that philosophical debate, we need to make some distinctions, and to make clear that a grant of legal personhood does not, prima facie, bring with it membership in the human species, or in the class of moral or metaphysical persons.

All of this is a roundabout way of responding to Sonia by saying that our arguments in the concluding chapter were devoted to pointing a path to legal personhood for artificial agents, to noting how occasions for such grants might have arisen and how they may arise in the future, and, importantly, to trying to clear up conflations between “human” and “person” and between “legal person” and “moral person”, and to induce some granularity in our notion of legal personhood. This debate will be revisited again and again in the years to come. The philosophical arguments, I suspect, will remain the same; the technical capacities of artificial agents will give them some empirical bite; the proceedings in the courts of law will draw on them; but ultimately, the courts’ deliberations will be driven by our felt needs and desired ends.


3 Responses

  1. A.J. Sutter says:

    You mention the law’s expressive impact in the context of affecting the philosophical debate about a fuller personhood for artificial agents, once legal personhood is accepted. Yet at the same time, you sound pretty open to the possibility of automated judging. If I’ve understood you correctly, then I suggest your apolitical reification of “the law” is leading you astray.

    You speak of “the law” as if it’s some kind of intellectual, economistic construct imposing order on society, and changing with a teleology of improved efficiency. Pragmatism, including considerations of “technical capacities and functionalities,” drives the changes first, and the expressive stuff will kick in afterwards. Maybe autonomous agents would see it that way, too. But if they take on the role of judges, how can you be so sanguine that their deliberations “will be driven by our felt needs and desired ends”?

    There are two ways that “the law” remains the law. One is by sufferance of those subject to it. And one of the things that encourages people to accept a legal regime is a feeling of justice — part of law’s expressive side. Will citizens subject to “the law” continue to accept it when, say, judges are automata? Or might they just get angry enough to throw out that legal regime, even by revolution? In which case “the law” won’t be the law anymore.

    The other way “the law” remains the law is when it’s imposed on its subjects by force. Just as most citizens don’t necessarily stand to benefit from the legal personhood of AAAs, and just as “pragmatism” too often benefits a powerful minority at the expense of everyone else, the excessive abstraction in your analysis could lead you to the side of the few who are pointing their weapons at the many. (I’m speaking of humans in both categories, by the way.) Are you OK with standing there?

  2. Samir Chopra says:

    I said, “I think automated judging is more than just a gleam in the eye of those folks that attend ICAIL conferences.” Did you read into this some expression of support? Since I didn’t express any such support, I’m not going to defend myself against the charge of technocratic blindness to human needs.

    But let me say this now, as this accusation seems to recur time and again in your responses to my work, and as you seem determined to impose this particular vision upon my far more qualified statements: How can you be so sanguine that the deliberations of human beings we have currently placed–or rather, who have placed themselves–in charge–often without our consent–of the law and of the world’s political, economic and moral affairs, are so driven by “our felt needs and desired ends”?

    Really, when you wish to paint me as an imposer of technocratic discipline on a benign human world that is taking such good care of its citizens, all the while informed by relationships of mutual respect and regard, you might wish to consider what it is you are defending.

    You ask whether citizens will continue to accept the law when “judges are automata.” Who knows? Perhaps some citizens, like our African-American ones, will prefer judges that consistently and coherently apply the law, as opposed to being driven by racial prejudice. Chew on that, if you will. Certainly many oppressed minorities, the supposed beneficiaries of your humanistic vision, might wish for more dispassionate, rule-driven applications of the law than the current, almost ludicrously inconsistent, favoring-the-powerful model.

    I suggest the “excessive abstraction” in my analysis might actually work to the benefit of those who, when exposed to the magnificent humanistic vision that you think currently pervades our social orderings, find to their disappointment that placing humans at the top of the heap also places all their unexamined prejudices and chauvinism there.

  3. A.J. Sutter says:

    Samir, thanks for your reply. I did qualify my comments with the clause “If I’ve understood you correctly,” though your reply hasn’t yet resolved this question for me in the negative. Obviously your dichotomy between robot judges, on the one hand, and human judges driven by racial prejudice, on the other, excludes a lot in the middle.

    As for my personal views, I don’t at all believe that the humans currently in charge are doing such a great job. That, along with my despair at the prospects for constructive political change in the US within my lifetime, contributed to my decision a couple of years ago to move away. Not that they’re doing such a great job in Japan currently either; but I feel that political change here is more humanly possible, in every sense of that word. (Though whether that can be accomplished within the current constitutional framework is a separate question.)

    Since you mention in another post that you’ll respond more fully to the respect for humanity issue separately, I’ll hold some other comments on your #2 above until I’ve read that response.