Personhood to artificial agents: Some ramifications
Thank you, Samir Chopra and Laurence White, for writing this extremely thought-provoking book! Like Sonia Katyal, I too am particularly fascinated by the last chapter, on personhood for artificial agents. The authors have done a wonderful job of explaining the legal constructs that have defined, and continue to define, the notion of according legal personality to artificial agents.
The authors argue that “dependent” legal personality, which has already been accorded to entities such as corporations, temples, and in some cases ships, could easily be extended to cover artificial agents. The argument for according “independent” legal personality to artificial agents, on the other hand, is much more tenuous: many legal arguments and theories stand as strong impediments to according such status. The authors categorize these impediments as matters of competency (being sui juris, having a sensitivity to legal obligations, susceptibility to punishment, capability for contract formation, and property ownership and economic capacity) and philosophical objections (that artificial agents do not possess free will, do not enjoy autonomy, do not possess a moral sense, and do not have clearly defined identities), and then argue how each might be overcome legally.
Notwithstanding their conclusion that the courts may be unable or unwilling to take more than a piecemeal approach to extending constitutional protections to artificial agents, it seems clear to me that the according of legal personality, both dependent and, to a lesser extent, independent, is not too far into the future. In fact, the aftermath of Gillick v West Norfolk and Wisbech Area Health Authority has shown that various courts have gradually come to accept that dependent minors “gradually develop their mental faculties,” and thus can be entitled to make certain “decisions in the medical sphere.”
We can extend this argument to artificial agents, which are no longer just programmed expert systems but have gradually evolved into self-correcting, learning, and reasoning systems, much like children and some animals. We already know that even small children exhibit these capacities, as do chimpanzees and other primates. Stephen Wise has argued that some animals meet the criteria for “legal personhood” and should therefore be accorded rights and protections; the Nonhuman Rights Project he founded is actively fighting for legal rights for non-human species. As these legal moves evolve and shape common law, the question becomes when, not if, artificial agents will develop notions of “self,” “morals,” and “fairness,” and on that basis be accorded legal personhood.
And when that situation arrives, what ramifications should we further consider? I believe the three main “rights” that would have to be considered are reproduction, representation, and termination. We already know that artificial agents (and Artificial Life) can replicate themselves and “teach” the newly created agents; self-perpetuation can also be considered a form of representation. We also know that, under certain well-defined conditions, these entities can self-destruct or cease to operate. But will these aspects gain the status of rights accorded to artificial agents?
These questions lead me to the issue I personally find most fascinating: end-of-life decisions extended to artificial agents. What, for instance, would be the role of aging agents of inferior capability that nevertheless persist in a vast global network? What about malevolent agents? When would it be appropriate to terminate an artificial agent? What laws would handle such situations, and how would those laws be framed? While these questions may seem far-fetched, we are already at a point where numerous viruses and “bots” pervade the global information networks, learn, perpetuate, “reason,” make decisions, and continue to extend their lives and their capacity to affect our existence as we know it. So who would be the final arbiter of end-of-life decisions in such cases? Indeed, once artificial agents evolve and gain personhood rights, is it not conceivable that we would have non-human judges in the courts?
Are these scenarios too far away for us to worry about, or close enough? I wonder…