Legal Personhood for Artificial Agents?
A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence F. White, raises a host of fascinating questions–some of immediate practical importance (how should contract law treat artificial agents?) and some that are still in the realm of science fiction. In the latter group is a cluster of questions about legal personhood for artificial agents that do not yet exist–agents with functional capacities that approach those of humans.
I’ve written on this question, and my essay, Legal Personhood for Artificial Intelligence, suggests that legal personhood should and will be awarded to artificial intelligences with the functional capacities of other legal persons. But legal personhood does not necessarily imply the full panoply of rights we assign to human persons. Current doctrine may afford free speech rights to corporations–but we can certainly imagine the opposite rule. If artificial agents are awarded legal personhood, they might be given some rights, such as the rights to own property and to sue and be sued, but denied others, such as freedom of speech. And like corporations, but unlike all natural persons, they might be denied the protection of the 13th Amendment: legal persons can be owned by natural persons.
Can we imagine a (perhaps far distant) future in which artificial agents possess a set of capacities and characteristics that would lead us to grant them the full set of rights associated with human personhood?
Rather than tackling this question directly, I will use a thought experiment developed by the philosopher David Chalmers (who uses it to tackle a very different set of issues in the philosophy of mind). For some background, you can check out this Wikipedia entry, this entry in the Stanford Encyclopedia of Philosophy, and this web page created by Chalmers.
Meet the Zombies
Zombies look like you and me, and indeed, from our vantage point they are indistinguishable from human persons. But there is one, very important difference: Zombies lack “consciousness.” Zombie neurons fire just like ours. Zombies laugh at jokes, go to work, write screenplays (unless they are on strike), get into fights, have sex, and go to Milk and Honey for drinks. Just like us. But Zombies do not have a conscious experience of finding jokes funny. No awareness that work is boring. No phenomenological correlate of their writer’s block. No inner sensation of anger. No feelings of pleasure. No impaired consciousness from inebriation. Following the philosophers, let us call these missing elements qualia. Zombies have no qualia.
Let us imagine a world in which there are both humans and Zombies. Of course, if the Zombies were exactly like us, we wouldn’t know they exist. So let us suppose that there is some subtle characteristic that allows us to recognize the Zombies. How would we treat them? What legal rights would (and should) they have?
Equal Rights for Zombies
Zombies would, of course, demand the rights of legal personhood. (Remember that their behavior is identical to ours!) Imagine a world in which the Zombies demanded full equality with humans. They might argue that such equality is guaranteed by the Equal Protection Clause, or they might propose an Equal Zombie Rights Amendment. Because Zombies behave just like humans, they would no more be satisfied with less than full equality than would we. They would engage in political action to campaign for legal equality. They would make speeches, hold demonstrations, organize strikes and boycotts, and even resort to violence. (Humans do all these things.) If Zombies were sufficiently numerous, it seems likely that the reality of human-Zombie relations would result in full legal equality for Zombies. Either Zombies would be recognized as constitutional persons, or the Equal Zombie Rights Amendment would become law. Antidiscrimination ordinances would forbid discrimination against Zombies in housing, employment, and other important contexts. One imagines that full social integration might never be accomplished: some humans might be polite to Zombies in public contexts but shun Zombies as friends.
But Should They Have Equal Rights?
Zombies could be given equal rights, and we can imagine scenarios where it seems likely that they would be given such rights. But should they have equal rights? I would like to suggest that the answer to this question is far from obvious. We might try answering this question by resorting to our deepest beliefs about morality. Are Zombies Kantian rational beings? Would a utilitarian argue that Zombies lack moral standing because they have no conscious experiences of pleasure and pain? Zombies would share human DNA: does that make them human? And whether they are human or not, are they persons?
One problem with thinking about equal rights for Zombies is that our moral intuitions, beliefs, and judgments have been shaped by a world in which humans are the only creatures with all of the capacities we associate with personhood. Animals may experience pleasure and pain, and some higher mammals have the capacity to communicate in limited ways. But there are no nonhuman creatures with the full set of capacities that normally developed human persons possess. A world with Zombies would be a different moral universe–and it isn’t clear what our moral intuitions would be in such a universe.
Back to Artificial Agents
Just as we can conceive of a possible world inhabited by both humans and Zombies, we can imagine a future in which artificial agents (or robots or androids) have all the capacities we associate with human persons. And so we can imagine a world in which we would grant them the full panoply of rights that we grant human persons because it would serve our own interests (the interests of human persons). The truly hard question is whether we might come to believe that we should grant artificial agents the full rights of human personhood because we are morally obliged to do so. We don’t yet live with artificial agents whose functional capacities approach or exceed those of human persons. We don’t have the emotional responses and cultural sensibilities that would develop in a world with such agents. And so, we don’t know what we should think about personhood for artificial agents.