Autonomous Artificial Agents: Contracting or Expanding?

Is this the book to separate the legal issues of “autonomous artificial agents” from the more controversial questions of whether code or silicon can function as “people”? The one that can stick to the practical issues of contract formation, tort liability and the like, without blurring the boundaries between legal personhood and personhood in a fuller sense?

I think this was the intention of the authors (C&W). And I certainly agree with other participants in the forum that they’ve done a wonderful job of identifying and analyzing many key legal and philosophical issues in this field; no doubt the book will be framing the debate about autonomous artificial “agents” (AAAs) for years to come. But the style of C&W’s argument and the philosophical positions they take may make it hard to warm up to some of their analysis and recommendations unless you’re happy to take a rather expansive view of the capabilities of artificial intelligence — such as imputing a moral consciousness to programs and robots. And even if you’re happy to do so, what about everyone else? I’ll explain below the fold.

First, a brief comment about the substantive legal aspects of the book. I’ve spent my career mainly as a transactional practitioner, a trade that doesn’t require deep immersion in legal doctrine. So I won’t wade too far into that pool. Andrea Matwyshyn and Deborah DeMott have already set out many pertinent doctrinal arguments far better than I possibly could. Suffice it to say that I wasn’t entirely convinced that a new legal fiction was really necessary (though I admit that “multi-agents” (@44) had me scratching my head), and thought C&W’s rejection of a notion of “constructive agency” (@24), which might have been an adequate substitute for full legal personhood for AAAs, was a little too quick. I was also surprised at how relaxed C&W seemed to be about the matter of registering AAAs. Most jurisdictions require some registration formality to create a legal person (the US general partnership being somewhat exceptional). C&W address this issue only provisionally, proposing a “Turing register” as a sort of last resort, in case “social norms” don’t solve the problem of identifying agents (@182). Even human children get birth certificates; it seems odd to allow a proliferation of legal persons without at least the same degree of formality.

The rest of my comments will focus mainly on some of the rhetorical aspects of the book.

A. Legal agents, legal persons, or …?: In an exchange of emails, Samir Chopra mentioned to me that the book’s “speculative” legal personhood arguments in Chap. 5 were distracting attention from the narrower issues of legal agency in contracting (Chap. 2) and knowledge attribution (Chap. 3). As an author, I can easily sympathize with the fear that one’s favorite arguments will be neglected. But on reading even the early chapters, it struck me that the concept of legal personhood, and even of personhood plain and simple, was always present as a kind of attractive nuisance.

(1) Vocabulary: The vocabulary with which C&W define “agent” is highly loaded with person connotations. E.g., in Chap. 1, we find many references to software or some other inanimate form that “acts… and tries to meet certain objectives,” “manage[s],” “knows how to do things,” “engage[s] in dialog,” and responds to “its experiences;” and “things matter to [it].” Analogies to Roman slaves, and human slaves in general, are also frequent in the book. Indeed, adoption of the word “agent” from its AI/computer engineering context already creates some bias that might not be present if we used a blander and more inanimate-sounding moniker, e.g. “automated mediation module” (or “device”, etc.). At a minimum, a “purer” argument that only legal personhood was intended might have come from foregrounding corporations, LLCs and the like, rather than humans. Not that the analogy to those other legal entities would be entirely persuasive either, as I’ll get to below.

(2) Remedies: C&W propose that in certain contracting situations the ultimate risk of agent misbehavior would fall on the AAA (@48; BTW, this is said, characteristically, to provide “a complete parallel with the treatment of human agents,” rather than corporate ones). The problem here is how could I sue the AAA — or more precisely, how could I recover damages from it? Apparently I couldn’t, unless it were allowed to own money and property (@170; see also reference to a patrimony @149). So in this context the legal personhood and legal agency issues are literally two sides of the same coin. (By the way, there’s also the matter of service of process. This is usually dealt with in the statutory scheme for the formation of a legal person. Absent some sort of formal formation or registration step that identifies an agent for service, it’s hard to see why I as a potential plaintiff should be happy to have risk fall on the AAA.)

B. Dreamt of in whose philosophy?: The atmosphere of personhood, legal and perhaps otherwise, is further reinforced by C&W’s insistence on the intentional stance. Unlike many other arguments in the book, whose pros and cons were presented in a relatively balanced way, the treatment of this topic seemed more polemical. After it’s been explained, the intentional stance is treated as being pretty much accepted. In fact, the authors’ attachment to it is the first reason they give for rejecting the “constructive agency” solution (the other reason being a speculative risk of “further confusion down the line” when applying agency law principles of attribution of knowledge and respondeat superior to AAAs) (@24). A few pages later C&W suggest that “philosophically unfounded chauvinism about human uniqueness” shouldn’t play a role in approaching the issue of legal personality. (@27). These attitudes raise several other issues distinct from personhood.

(1) Why link the intentional stance so tightly to the claim for legal personhood? E.g., C&W cite the example of a ship being a legal person in admiralty law (@137, 158). Does anyone seriously take the intentional stance regarding the ship? (There’s an obvious rhetorical advantage to binding the two concepts together, of course: it makes the book more provocative than it would be were an AAA such an inert “actor.”)

(2) What does “philosophically unfounded” mean? Doesn’t this presume a great deal of consensus about what is the appropriate philosophical basis for the inquiry? C&W show a rather Anglophone bias for cognitive science resting on a foundation of analytic philosophy (and its honorary Europeans, e.g. Kant and Wittgenstein). I don’t think their point of view toward humans necessarily represents that of, say, Bergson, or the Husserl of the Crisis of European Sciences, or Levinas, to say nothing of rabbinic or Catholic philosophy. E.g., in Leon Wein’s 1992 HJLT article relied on by C&W, we’re told of Rabbi Aharon Kotler, who avoided using automated tool machines, preferring to rely on human workers instead. “It’s not kovod habriyos (respectful of humanity) to pass up a man for a machine,” he explained. [Wein 1992, n. 16] I don’t think such a philosophy is likely to be consistent with the notion that some lines of code or a silicon-based neural network circuit has intentions, much less moral principles.

(3) Even in the area of analytic philosophy, the book doesn’t engage John Searle’s well-known Chinese Room argument, which Searle summarized as follows:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

I asked Samir about this omission by email, and he explained to me that he’d avoided it because it drags in the notion of personhood, rather than merely legal personhood. Fair enough. But Searle’s argument could just as easily be turned to intentionality: is the man in the room intentionally communicating in Chinese? I’d say not. The arguments adduced by C&W for attributing intentionality to such black boxes (see esp. Chap. 1) are no more limited to legal personhood than Searle’s argument would be in this context. So once again, the “philosophically unfounded” description feels a bit premature.

(4) Is philosophical consistency necessarily the most important thing for a legal system? What about reflecting the values of the society and polity that it serves? I was disappointed that the book didn’t get into this issue with any depth.

Suppose we adopt the position that anything less than the intentional stance is “philosophically unfounded” — will this be persuasive to most people? Compare this to the vigorous arguments advanced by Daniel Dennett (on whose understanding of intentionality C&W heavily rely), Richard Dawkins and others for why belief in a deity is a mistake. Even supposing their philosophizing is impeccable, will their ideas find wide acceptance in society as a whole? A snowball has a better chance of surviving in some mythical hot place. (I suppose some might counter that “most people” didn’t use to think of women or persons of color as having any rights either, and that the current prejudice against AAAs is no more justified than that earlier prejudice was. But then this is to introduce the personhood debate that C&W were trying to avoid.)

If most people would have a hard time ascribing intentionality, moral sense, etc. to an AAA, then having a legal system that enshrines that ascription might not be politically appropriate, however straightforward it might seem to a logician. Consider the following passage:

If we could predict an artificial agent’s behavior on the basis that it rationally acts upon its moral beliefs and desires, the adoption of such a moral stance towards it is a logical next step. An artificial agent’s behavior could be explained in terms of the moral beliefs we ascribe to it: “The robot avoided striking the child because it knows that children cannot fight back.” Intriguingly, when it is said that corporations have a moral sense we find the reasons for doing so are similar to those applying to artificial agents: because they are the kinds of entities that can take intentional actions and be thought of as intentional agents [citation omitted].

This seems to rely on the following syllogism:

α: we’re comfortable ascribing a moral sense to corporations;
β: a corporation is a legal person, not a human;
γ: an AAA is a legal person (by hypothesis), not a human;
∴ Ω: we’ll be comfortable ascribing a moral sense to AAAs too.

(Actually, this passage could even be read as suggesting the converse: that we’re cool with ascribing a moral sense to corporations because we can ascribe one to AAAs!) Of course, C&W’s if-clause builds in the premise that an AAA does indeed have moral beliefs and desires. But I doubt most people could swallow that so blithely. Call me illogical, but I kind of suspect that the attitude most people have about companies’ moral endowment might just be because we know humans are ultimately making the decisions. Just a hunch.

C. Social Implications. As the last comment illustrates, C&W’s intense advocacy for AAAs and their emphasis on logical consistency sometimes lead to their overlooking some basic tendencies of human beings and human society. There are a couple of other social implications of their book that I found troubling, and that I can’t necessarily attribute to lack of attention.

(1) Cui bono?: C&W propose that deeming AAAs legal agents could contribute to economic efficiency, and also reduce the liability of principals of AAAs. They also suggest that it could make the use of AAAs more attractive (@43). A couple of other commentators have questioned whether these impacts are necessarily positive. They’re right to do so. Bot-deployment isn’t exactly an equal opportunity activity, except in the inverse sense of Anatole France’s famous observation, “The law, in its majestic equality, forbids the rich and the poor alike to sleep under bridges, to beg in the streets, and to steal bread.” While consumers could in some contexts be “principals” of AAAs, the capital and technical skill necessary to set up a bot are going to be beyond the reach of most. I wonder therefore whether C&W are maybe showing too much solicitude for better-heeled industrial and financial interests against consumers and citizens.

(2) The Creatures of Prometheus: Throughout the book, it’s taken for granted that ever-more advanced bots are both inevitable and desirable. This is certainly consistent with the discourses of capitalism, innovation, growth and complexity science coming out of America. But even here in Japan, home of Paro, AIBO, Cosmo and innumerable robot fantasies in manga and anime, I sense that there are some reservations about bots without faces. This march of “innovation” is not inevitable, and it’s far from evident that it’s desirable. A society could choose to restrict or prevent the development or deployment of these advanced forms of automaton. Indeed, based on some of the downsides mentioned above, some precaution might be a good idea. Some jurisdictions have already taken a similar step regarding human cloning, for example. Those who believe that “can” means “ought” and “eventually will be” might consider such reservations a token of a “religious” sort of “fear” (@218n31). But it would be more accurate to call it simply kavod habriyot: respect for humanity.

My thanks and congratulations to C&W for such a deeply-argued and stimulating book. Regardless of one’s position on the issues discussed, this is a very fruitful work with which to engage.