

Three Roads to Legal Agency and the Personhood of Autonomous Artificial Agents (AAAs)

I enjoyed A Legal Theory for Autonomous Artificial Agents (LTAAA) by Samir Chopra and Laurence White, and yet I still have some doubts. To clarify this reaction, let me distinguish three different kinds of legal agents.

A1 – Agents can be a “source” of responsibility for other agents in the legal system

A2 – Agents can be considered as “strict agents” in civil (as opposed to criminal) law

A3 – Agents can be “proper persons” with rights (and duties) of their own

I found convincing how LTAAA deals with A1 in Chapter 4 (tort law) and A2 in Chapters 2 and 3 (contracts). My doubts revolve around A3 in Chapter 5 (which includes matters of legal agency that concern the criminal law field as well).

Although I reckon that the hottest legal issues surrounding AAAs today concern A1 and A2, let me dwell on A3.

In a nutshell, the thesis of LTAAA is that “none of the philosophical objections to personhood for artificial agents – most but not all of them based on ‘a missing something argument’ – can be sustained, in the sense that artificial agents can be plausibly imagined that display that allegedly missing behaviour or attribute. If this is the case, then in principle artificial agents should be able to qualify for independent legal personality, since it is the closest legal analogue to the philosophical conception of a person” (op. cit., 182).

To be sure, I concede the point as a matter of principle. In the wording of Lawrence Solum’s Legal Personhood for Artificial Intelligence: “one cannot, on conceptual grounds, rule out in advance the possibility that AIs should be given the rights of constitutional personhood” (1992: 1260).

Besides, I agree with LTAAA that (some types of) AAAs can, or should, properly be conceived as strict agents in civil law (A2). For example, I have proposed a parallel between the Roman law mechanism for A2 in the case of slaves, that is, the peculium, and today’s A2 for AAAs. However, what LTAAA claims is different. On its view, forms of artificial accountability such as the “digital peculium” would be unsatisfactory not because, say, the parallels between AAAs and slaves are deemed unethical or anthropologically biased. Rather, the autonomy granted by such forms of accountability is reckoned insufficient because, once we accept that some artificial agents may properly be conceived of as strict agents in the field of contracts, their legal personhood would then follow as a result. Moreover, “at the risk of offending humanist sensibilities,” LTAAA argues that we should yield before the fact that, sooner or later, AAAs will be a sort of “being sui juris,” capable of “sensitivity to legal obligations” and even of “susceptibility to punishment,” which finally allows us “to forgive a computer” (op. cit., 180).

My doubts about how LTAAA addresses A3 for AAAs can be summed up in four points.

First, the example of the legal status of slaves under ancient Roman law shows that strict legal agency in contract law (A2) and the legal personhood of AAAs (A3) are not correlated. Aside from ethical aberrations of humans being treated as mere things, there are no particular reasons for claiming that the legal personhood of AAAs (A3) is necessarily entwined with their status of strict agents in the civil law field (A2). Even the European Union, after all, existed for decades without enjoying its own legal personhood!

Second, given the current state of the art, AAAs are far from achieving a human-like endowment of free will, autonomy, and moral sense, whatever the controversial meaning of such expressions. I would admit that some AAAs are endowed with self-knowledge and autonomy “in the engineering meaning of these words” (EURON 2007). However, it is precisely the engineering meaning of these words that reminds us of the very difference between civil and criminal law. The level of autonomy of some AAAs, which is sufficient to produce relevant effects in the field of contracts, is arguably insufficient to bring AAAs before judges and have them found guilty by criminal courts.

Third, LTAAA should explain the pragmatic (rather than conceptual) reasons for its stance. As far as I understand, “not only is according artificial agents with legal personality a possible solution to the contracting problem, it is conceptually preferable to the other agency law approach to legal agency without legal personality, because it provides a more complete analogue with the human case” (op. cit., 162, italics added). However, have not these same authors insisted on the thesis that the dependent legal responsibility of AAAs is “based on a combination of human chauvinism and a misunderstanding of the notion of legal person” (op. cit., 27)? Why, then, should we endorse “analogy with the human case” for AAAs?

Finally, I may admit that, once a novel generation of AAAs endowed with human-like free will, autonomy, or moral sense materializes, lawyers should be ready to tackle both A3 and the constitutional rights of AAAs seriously.  But, if we accept the thesis of LTAAA, it is more than likely that the meaning of traditional notions such as contracts, torts, or constitutional rights will change. As a matter of fact, what the meaning of such legal concepts would be is still assigned to the imagination of science fiction writers, rather than to the “science faction” analysis of legal experts. Would an AAA lawyer be an advocate of the tradition of natural law, so that rules should be viewed as an objective imperative whose infringement implies a violation of the nature of the artificial agent? Or would the lawyer, vice versa, be a sort of legal realist, so that norms depend on how AAAs affect human understanding of the world, their own knowledge, and their environment? And what about the institutional stances of AAA lawyers who, contrary to their fellow colleagues keen to follow the Kelsenian lesson of the pure doctrine of the law, focus on the substantive mechanisms of a new artificial order?


LTAAA Symposium: Campaign 2020’s Bots United

A Legal Theory for Autonomous Artificial Agents offers a serious look at several legal controversies set off by the rise of bots. “Autonomy” is one of the key concepts in the work. We would not think of a simple drone programmed to fly in a straight line as an autonomous entity. On the other hand, films like Blade Runner envision humanoid robots that so closely mimic real homo sapiens that it seems churlish or cruel to dismiss their claims for respect and dignity (and perhaps even love). In between these extremes we find already well-implemented, cute automatons. As Sherry Turkle has noted, when confronted by the robotic seal Paro, children “move from inquiries such as ‘Does it swim?’ and ‘Does it eat?’ to ‘Is it alive?’ and ‘Can it love?’”

For today’s post, I want to move to another, perhaps childish, question: can the bot speak? The question will be particularly urgent by 2020, but is relevant even now because corporate and governmental entities want to promote armies of propagandizing bots to disseminate their views and drown out opposing voices. Consider the experiment run by Tim Hwang, of the law firm Robot, Robot, & Hwang, on Twitter, as explained in conversation with Bob Garfield:

GARFIELD: Earlier this year, 500 or so Twitterers received tweets from someone with the handle @JamesMTitus who posed one of several generic questions: How long do you want to live to, for example, or do you have any pets? @JamesMTitus was cheerful and enthusiastic, kind of like those people who comment on the weather and then laugh heartily. Perhaps because of that good nature or perhaps because of his inquiring spirit and interest in others, @JamesMTitus was able to strike up a fair number of continuing conversations. Only thing is, there is no @JamesMTitus. He, or it, is a bot, a software program designed to engage actual humans in social networks.



Artificial Agents, Zombies, and Legal Personhood

Legal Personhood for Artificial Agents?

A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence F. White, raises a host of fascinating questions–some of immediate practical importance (how should contract law treat artificial agents?) and some that are still in the realm of science fiction.  In the latter group is a cluster of questions about legal personhood for artificial agents that do not yet exist–agents with functional capacities that approach those of humans.

I’ve written on this question, and my essay, Legal Personhood for Artificial Intelligence, suggests that legal personhood should and will be awarded to artificial intelligences with the functional capacities of other legal persons.  But legal personhood does not necessarily imply the full panoply of rights we assign to human persons.  Current doctrine may afford free speech rights to corporations–but we can certainly imagine the opposite rule.  If artificial agents are awarded legal personhood, they might be given rights to own property and to sue and be sued, but denied others.  Artificial agents might be denied freedom of speech.  And like corporations, but unlike all natural persons, they might be denied the protection of the 13th Amendment.  Legal persons can be owned by natural persons.

Can we imagine a (perhaps far distant future) in which artificial agents possess a set of capacities and characteristics that would lead us to grant them the full set of rights associated with human personhood?

Rather than tackling this question directly, I will use a thought experiment developed by the philosopher David Chalmers (who uses it to tackle a very different set of issues in the philosophy of mind).  For some background, you can check out this wikipedia entry, this entry in the Stanford Encyclopedia of Philosophy, and this web page created by Chalmers.

Meet the Zombies

Zombies look like you and me, and indeed, from our vantage point they are indistinguishable from human persons. But there is one, very important difference: Zombies lack “consciousness.” Zombie neurons fire just like ours. Zombies laugh at jokes, go to work, write screenplays (unless they are on strike), get into fights, have sex, and go to Milk and Honey for drinks. Just like us. But zombies do not have a conscious experience of finding jokes funny. No awareness that work is boring. No phenomenological correlate of their writer’s block. No inner sensation of anger. No feelings of pleasure. No impaired consciousness from inebriation. Following the philosophers, let us call these missing elements qualia. Zombies have no qualia.

Let us imagine a world in which there are both humans and Zombies.  Of course, if the Zombies were exactly like us, we wouldn’t know they exist.  So let us suppose that there is some subtle characteristic that allows us to recognize the Zombies.  How would we treat them?  What legal rights would (and should) they have?

Equal Rights for Zombies

Zombies would, of course, demand the rights of legal personhood.  (Remember that their behavior is identical to ours!)  Imagine a world in which the Zombies demanded full equality with humans.  They might argue that such equality is guaranteed by the Equal Protection Clause, or they might propose an Equal Zombie Rights Amendment.  Because Zombies behave just like humans, they would no more be satisfied with less than full equality than would we.  They would engage in political action to campaign for legal equality.  They would make speeches, hold demonstrations, organize strikes and boycotts, and even resort to violence.  (Humans do all these things.)  If zombies were sufficiently numerous, it seems likely that the reality of human-zombie relations would result in full legal equality for zombies.  Either zombies would be recognized as constitutional persons, or the Equal Zombie Rights Amendment would become law.  Antidiscrimination ordinances would forbid discrimination against zombies in housing, employment, and other important contexts.  One imagines that full social integration might never be accomplished—some humans might be polite to zombies in public contexts but shun zombies as friends.

But Should They Have Equal Rights?

Zombies could be given equal rights, and we can imagine scenarios where it seems likely that they would be given such rights.  But should they have equal rights?  I would like to suggest that the answer to this question is far from obvious.  We might try answering this question by resorting to our deepest beliefs about morality.  Are Zombies Kantian rational beings?  Would a utilitarian argue that Zombies lack moral standing because they have no conscious experiences of pleasure and pain?  Zombies would share human DNA: does that make them human?  And whether they are human or not, are they persons?

One problem with thinking about equal rights for Zombies is that our moral intuitions, beliefs, and judgments have been shaped by a world in which humans are the only creatures with all of the capacities we associate with personhood.  Animals may experience pleasure and pain, and some higher mammals have the capacity to communicate in bounded and limited ways.  But there are no nonhuman creatures with the full set of capacities that normally developed human persons possess.  A world with Zombies would be a different moral universe–and it isn’t clear what our moral intuitions would be in such a universe.

Back to Artificial Agents

Just as we can conceive of a possible world inhabited by both humans and Zombies, we can imagine a future in which artificial agents (or robots or androids) have all the capacities we associate with human persons.  And so we can imagine a world in which we would grant them the full panoply of rights that we grant human persons because it would serve our own interests (the interests of human persons).  The truly hard question is whether we might come to believe that we should grant artificial agents the full rights of human personhood because we are morally obliged to do so.  We don’t yet live with artificial agents with functional capacities that approach or exceed those of human persons.  We don’t have the emotional responses and cultural sensibilities that would develop in a world with such agents.  And so, we don’t know what we should think about personhood for artificial agents.


Artificial Agents and the Law: Some Preliminary Considerations

I am grateful to Concurring Opinions for hosting this online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion here; I’m hoping that once the book has been read and its actual arguments engaged with, we can have a more substantive discussion. (I notice that James Grimmelmann and Sonia Katyal have already posted very thoughtful responses; I intend to respond to those in separate posts later.)

Last week, I spoke on the book at Bard College, to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively and our conversations continued over dinner later.  Some of the questions that were directed at me are quite familiar to me by now: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or possibly even beyond? How can an artificial agent, which lacks the supposedly distinctively-human characteristics of <insert consciousness, free-will, rationality, autonomy, subjectivity, phenomenal experience here> ever be considered an “agent” or a “person”? Aren’t you abusing language when you say that a program or a robot can be attributed knowledge? How can those kinds of things ever “know” anything? Who is doing the knowing?

I’ll be addressing questions like these and others during this online symposium; for the time being, I’d like to make a couple of general remarks.

The modest changes in legal doctrine proposed in our book are largely driven by two considerations.

First, existing legal doctrine, in a couple of domains, most especially contracting, which kicks off our discussion and serves as the foundation for the eventual development of the book’s thesis, is placed under considerable strain by its current treatment of highly sophisticated artificial agents. We could maintain current contracting doctrines as is (i.e., merely tweak them to accommodate artificial agents without changing their status vis-a-vis contracting) but run the risk of imposing implausible readings of contract theories in doing so. This might be seen as a reasonable price to pay so that we can maintain our intuitions about the kinds of beings some of us take artificial agents to be. I’d suggest this kind of retention of intuitions becomes increasingly untenable when we see the disparateness of the entities that are placed in the same legal category. (Are autonomous robots really just the same as tools like hammers?) Furthermore, as we argue in Chapters 1 and 2, there is a perfectly coherent path we can take to start to consider such artificial agents as legal agents (perhaps without legal personality at first); the argument is developed in detail in those chapters. The argument in the latter suggests that they be considered legal agents for the purpose of contracting; that in the former lays out a prima facie argument for considering them legal agents. Furthermore, in Chapter 2, we say, “The most cogent reason for adopting the agency law approach to artificial agents in the context of the contracting problem is to allow the law to distinguish in a principled way between those contracts entered into by an artificial agent that should bind a principal and those that should not.”

Which brings me to my second point. A change in legal doctrine can bring about better outcomes. One of the crucial arguments in our Chapter 2 (one I really hope readers engage with) is an assessment of the economic dimension of contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal changes, but in this case, I’d suggest the risk allocation does work out better. As we note, “Arguably, agency law principles in the context of contracting are economically efficient in the sense of correctly allocating the risk of erroneous agent behavior on the least-cost avoider (Rasmusen 2004, 369). Therefore, the case for the application of agency doctrine to artificial agents in the contractual context is strengthened if we can show similar considerations apply in the case of artificial agents as do in the case of human agents, so that similar rules of apportioning liability between the principal and the third party should also apply.”

And I think we do.

An even stronger argument can be made when it comes to privacy. In Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents. (This follows on the heels of an analysis of knowledge attribution to artificial agents, so that they can be considered legal agents for the purpose of knowledge attribution.)

Much more on this in the next few days.


LTAAA Symposium: Complex Systems and Law

The basic question LTAAA asks—how law should deal with artificially intelligent computer systems (for different values of “intelligent”)—can be understood as an instance of a more general question—how law should deal with complex systems? Software is complex and hard to get right, often behaves in surprising ways, and is frequently valuable because of those surprises. It displays, in other words, emergent complexity. That suggests looking for analogies to other systems that also display emergent complexity, and Chopra and White unpack the parallel to corporate personhood at length.

One reason that this approach is especially fruitful, I think, is that an important first wave of cases about computer software involved its internal use by corporations. So, for example, there’s Pompeii Estates v. Consolidated Edison, which I use in my casebook for its invocation of a kind of “the computer did it” defense. Con Ed lost: It’s not a good argument that the negligent decision to turn off the plaintiff’s power came from a computer, any more than “Bob the lineman cut off your power, not Con Ed” would be. Asking why and when law will hold Con Ed as a whole liable requires a discussion about attributing particular qualities to it—philosophically, that discussion is a great bridge to asking when law will attribute the same qualities to Con Ed’s computer system.

But corporations are hardly the only kind of complex system law must grapple with. Another interesting analogy is nations. In one sense, they’re just collections of people whose exact composition changes over time. Like corporations, they have governance mechanisms that are supposed to determine who speaks for them and how, but those mechanisms are subject to a lot more play and ambiguity. “Not in our name” is a compelling slogan because it captures this sense that the entity can be said to do things that aren’t done by its members and to believe things that they don’t.

Mobs display a similar kind of emergent purpose through even less explicit and well-understood coordination mechanisms. They’re concentrated in time and space, but it’s hard to pin down any other constitutive relations. Those tipping points, when a mob decides to turn violent, or to turn tail, or to take some other seemingly coordinated action, need not emerge from any deliberative or authoritative process that can easily be identified.

In like fashion, Wikipedia is an immensely complicated scrum. Its relatively simple software combines with a baroque social complexity to produce a curious beast: slow and lumbering and oafish in some respect, but remarkably agile and intelligent in others. And while “the market” may be a social abstraction, it certainly does things. A few years ago, it decided, fairly quickly, that it didn’t like residential mortgages all that much—an awful lot of people were affected by that decision. The “invisible hand” metaphor personifies it, as does a lot of econ-speak: these are attempts to turn this complex system into a tractable entity that can be reasoned about, and reasoned with.

As a final example of complex systems that law chooses to reify, consider people. What is consciousness? No one knows, and it seems unlikely that anyone can know. Our thoughts, plans, and actions emerge from a complex neurological soup, and we interact with groups in complex social ways (see above). And yet law retains a near-absolute commitment to holding people accountable, rather than amygdalas. By taking an intentional stance towards agents, Chopra and White recognize that law sweeps all of these issues under the carpet, and ask when it becomes plausible to sweep those issues under the carpet for artificial agents, as well.

Symposium Next Week on “A Legal Theory for Autonomous Artificial Agents”

On February 14-16, we will host an online symposium on A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence White. Given the great discussions at our previous symposiums for Tim Wu’s Master Switch and Jonathan Zittrain’s Future of the Internet, I’m sure this one will be a treat.  Participants will include Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian, and Harry Surden.  Chopra will be reading their posts and responding here, too.  I discussed the book with Chopra and Grimmelmann in Brooklyn a few months ago, and I believe the audience found fascinating the many present and future scenarios raised in it.  (If you’re interested in Google’s autonomous cars, drones, robots, or even the annoying little Microsoft paperclip guy, you’ll find something intriguing in the book.)

There is an introduction to the book below the fold.  (Chapter 2 of the book was published in the Illinois Journal of Law, Technology and Policy, and can be found online at SSRN).  We look forward to hosting the discussion!
