LTAAA Symposium: Complexity, Intentionality, and Artificial Agents

I would like to respond to a series of related posts made by Ken Anderson, Giovanni Sartor, Lawrence Solum, and James Grimmelmann during the LTAAA symposium. In doing so, I will touch on topics that recurred throughout the debate here: the intentional stance, complexity, legal fictions (even zombies!) and the law. My remarks here will also respond to the very substantive, engaged comments made by Patrick O’Donnell and AJ Sutter to my responses over the weekend. (I have made some responses to Patrick and AJ in the comment spaces where their remarks were originally made.)

Giovanni’s post was very useful, I think, in explaining the plausibility and indispensability of the intentional stance. We use the intentional stance all the time; indeed, we would use it with all sorts of beings if we could, but we find other modes of description and interpretation work better for them. For psychological beings like humans it works exceptionally well; so well, in fact, that we often disdain lower-level neuroscientific descriptions in particular domains (like courts of law). It infects our language so systematically and richly that I have no idea how we would function without it (I am reminded of what a colleague of mine said when talking about the Churchlands’ eliminativist hypothesis: try ordering a hot dog while trying to be an eliminativist!). Our use of it in the book was driven by the methodological consideration that it would be what we found ourselves relying on increasingly in dealing with artificial agents.

In his post Ken Anderson asked, “Is the position taken by the book finally one that either reduces the intention to the sum of behaviors, or else suggests that for the purposes for which we create – “endow,” more precisely – artificial agents, behavior is enough, without it being under any kind of description?” I think the short answer is that while we suggested the intentional stance as an interpretive strategy for dealing with artificial agents (as a way of dealing with their complexity), we also pushed the idea that the distinction commonly made between real and ersatz intentionality is not as sacrosanct as we might take it to be; that our view of ourselves as the sole possessors of real intentionality should change as a result of our thinking about artificial agents; and, most ambitiously, that the appearance-reality distinction in intentionality is not viable (the intentional stance is not just a façon de parler, as Patrick O’Donnell suggested in his comment).

The intentional stance is writ large in the law’s practices; I think this is why James Grimmelmann’s post, in many ways, gets to the heart of the matter when it comes to artificial agents: the law’s strategic response to artificial agents will often be a function of how complex an entity it perceives the artificial agent to be. From our perspective this becomes the question of how complex these agents will need to become, or how complex we will perceive them to be, before we find the language of the intentional stance indispensable as a means of doing justice to their richly inter-related responses to us, before the law starts to address their complexity as a unitary entity and responds, perhaps, by granting them a change in status. It is because artificial agents are complex and interestingly different and competent that we even think of them as posing challenges for our legal system. Challenges so severe, in fact, that Ian Kerr felt compelled to say that an old legal apparatus just wouldn’t work. (Notice the polar extremes: keep things just as they are versus radical change is required; something is clearly afoot that has occasioned such divergent responses.)

This is why I found it peculiar that Ryan Calo, after asking, ‘Is an autonomous robot like a hammer?’ answered, “I don’t know”. Well, in one sense, I don’t know either. But I think I can tickle my intuitions by asking myself some questions: Would NASA send a hammer to explore the surface of Mars? Could a hammer drive the streets of Mountain View well enough to provoke the scholars at the Stanford Center for Internet and Society into organizing a seminar dedicated to exploring the legal implications of hammers capable of driving? Can hammers drive the roads well enough for some people to wish they would replace some human-driven cars on the road? Would hammers be used to sniff out survivors from the rubble of earthquake-devastated buildings? Would they be used to defuse bombs? Perhaps answering some of these questions might prompt us to think about whether a hammer is like an autonomous robot or whether it’s more like a rolling-pin. In the chapter on tort liability we split up liability schemes for artificial agents under two broad headings (James Grimmelmann read a draft of this chapter and suggested we dice up our treatment to reflect this kind of division): artificial agents understood as tools or instrumentalities, or artificial agents understood as agents of varying levels of autonomy. In the former schemes we could talk about product liability; in the latter we could draw all sorts of analogies with diverse bodies of case law: are artificial agents like children? Are they like pets? Are they like animals confined to enclosures that could do harm if released? Thinking about these analogies might help us figure out how to fit the artificial agent into our legal frameworks.

Many legal constructs we are familiar with are responses to human complexity, and an entire legal and moral vocabulary has developed as a result. As my response to Harry Surden’s excellent post indicated, it might be that we find this vocabulary so useful for pragmatic purposes that even if empirical research were to dispel some of this complexity, we might still want to hold on to it, because it lets us achieve ends—perhaps legal, perhaps moral—nearer and dearer to us. Conversely, our interactions with artificial agents are fraught with an epistemic asymmetry: we know a great deal about their innards; we know, despite the protestations of our best neuroscience, very little about ourselves. This familiarity, as we note in the book, can breed ill-directed contempt (“it’s just ones and zeroes”); it can cause us to ignore the significant ways in which the functionality of the artificial agent causes our legal doctrines and categories severe stress.

Fundamentally, we are creatures whose knowledge of the existence of other minds is doomed to never rise above the level of a particularly good abduction, a wonderful explanation that seems to do justice to the rich level of apparent intersubjective agreement that we appear to possess in many crucial areas. From our vantage point, the first-person perspective, an ‘I’ looks out, sees other beings possessing a range of external responses that correlate systematically with his own external modes of interaction, which are cued to his own internal states, and posits other ‘I’s as an explanation. The sneaking suspicion has never left us that we could engage in such communication with other beings who had no such internal lives as ours (this is the intuition at the heart of Solum’s post on zombies, and it has been around ever since Putnam’s “Robots: Machines or Artificially Created Life?”).

The privileging of our inner spaces, our inner selves, the first-person-subjective point of view, runs the risk of making us an “autistic” species, locked away in our own subjectivities, unable to consider, or even to want to consider, the possibility of other selves. If we define intelligence or personhood as being like us in all the relevantly human ways, then we will have preserved a special status for ourselves, but it will be a pyrrhic victory, one obtained by merely defining away all competitors and sitting rather comfortably with our carbon-centric chauvinism. I think some of the unease occasioned by the idea that artificial agents could be legal persons stems from the sense that in granting them that status, we might somehow be acknowledging that humans are more like machines than we are willing to admit. But admitting artificial agents as legal persons does not mean that we can now treat humans like machines. And more to the point, a glance at the history of how the law has handled the question of legal persons should convince us that ‘legal person’ and ‘person’ are distinct, and we can keep it that way long after artificial agents have become legal persons.

Returning to the intentional stance, and to Solum’s post on zombies, I think some intuitions can be tickled in a little thought experiment. Let us cast aside robots and artificial agents for a moment. What would we do when extraterrestrials alight on this planet of ours and say, “Take us to your Supreme Court Justices, we have a Personhood Petition to submit”? How would practitioners of the law go about evaluating their claims? Would they say, “Stand here, advance no further, I see no evidence of carbon-based life, no evidence of human methods of cognition used to accomplish these stupendous engineering tasks of constructing spacecraft that have brought you thus far, no internal evidence of human emotions in the letters of longing you write to your fellow creatures left back home on the Planet of Aspirational Personhood. We are a species of being committed to our uniqueness in the natural order, to the singularity we represent”?

Is that what they would say, or would they start functioning like diligent field anthropologists, looking for some external behavioral evidence that they could systematically correlate with their pronouncements, and on finding that it was like ours on the surface, even if not on the interior, start thinking about whether they would be willing to file an amicus brief on their behalf? Would our lawyers assess the status of these beings in our social orderings and, on seeing that they filled many important executive roles, that people had formed relationships with them, think about evaluating their application seriously?

What if these creatures’ innards were so mysterious that our best science gave us no handle on what their interiors represented or how they functioned? What if they made us rethink our notion of always looking for law-like correlations of outer with inner, and revealed to us that it was in fact an old reductionist dogma? What if we came to realize the wisdom of the adage that the imputation of reasons is the best way to make sense of our ETs’ behavior? Would we even then reject their claims to personhood because we were so invested in maintaining a special status for ourselves? Kavod habriyot indeed; I want us to note my worry that kavod habriyot might do double duty in masking human chauvinism.

Philosophers have often, through the history of philosophical speculation, acknowledged the possibility that our elaborate construction of ourselves as freely acting, freely choosing, rational, autonomous beings is a happy and convenient “fiction” (there’s that word again). The most sustained dismissal of these happy reassurances, of course, takes place in Nietzsche. I am not going to attempt anything like that here; once done by that fellow, there’s no point in trying to follow up. But Nietzsche also would have told us that these are fictions we live by, ones we need; they play a rich and sustained role in the “economy of life”. A world in which our fellow human beings were not considered freely acting would be an intolerable one. Our social orderings would collapse; the ends we had settled on would not be attainable. Someone will now wail, “Are you suggesting free will is just a useful fiction?” Yes, but there is no need to be so scared of fictions. Much more is fiction than we imagine; our picture of ourselves is one. But it is one we live by.

4 Responses

  1. Thanks for the sustained and careful engagement with all of your interlocutors, Samir. Your replies will help me better think through some of the arguments I summoned (nothing original to me, save perhaps my enthusiasm for them!) and perhaps inspire me to write something to post at SSRN. It’s refreshing to have an author engage his critics to the depth and extent you’ve done here and while I still find myself in deep disagreement about some matters, I think you provide an exemplary model of how a writer can discuss her work in an online forum to the benefit of all parties, anonymous readers included. Thanks again.

    Best wishes,

  2. Samir Chopra says:


    You are most welcome! Thanks very much for making me think very hard. I’ve been sick for most of this past week, but writing here has certainly been a partial curative. Incidentally, I’ve only just discovered your excellent blog Ratio Juris, and noticed the diversity of your philosophical interests (many of which are shared by me). I look forward to more discussions in the future.


  3. A.J. Sutter says:

    As for extraterrestrials petitioning for personhood: there is ample dramatic precedent (cf. also here) to make it evident that complying with such a request or granting such a petition would be a singularly bad idea (unless under duress, though in that case legal process would hardly seem necessary) — particularly if, before revealing themselves to us, the ETs had already become so knowledgeable about one of our national legal systems.

    In the comment where you announce the Chopra Theorem [*], you criticize a limited expansion of legal personhood to primates and cetaceans on the grounds that “it might still leave us thinking there was something unique about the particular biology of this planet.” Why should it be the function of any legal system on this planet to remind us that things might be different elsewhere?

    But actually, there might be a way to accommodate even your ETs while also denying AAAs personhood of any sort. We could declare that one of the criteria for personhood is to be (i) a member of a population that evolves by natural selection, and for legal personhood to fall under (i), (ii) a collectivity of such members, or (iii) an organization managed by one or more such members (corporations, LLCs, etc.). [**] Let’s focus on (i). At a minimum, this means phenotypic selection by the environment that culls the weakest individuals in a population (ecological selection). Since this is awfully broad, we might require that the selection operate by both ecological selection and selection for reproductive success (sometimes called sexual selection). We could layer on a further criterion that the organisms display whatever mental and moral faculties you think would justify personhood for an AAA.

    One distinguishing feature of at least some AAAs here would be the absence of a phenotype. Another might be the presence of a teleology, such as a “fitness function” in a genetic algorithm. And of course there is the issue that they didn’t come into being by natural selection, but by design.

    In that regard, suppose we gave them a sort of a handicap (in the golf sense) and said, OK, from here on let them evolve by natural selection without any intrinsic teleology. Then there isn’t any assurance that the AAAs would ever become — or remain — as intelligent as you suggest. What’s adaptive and what isn’t is entirely determined after the fact; and many adaptations can be lost as things change. Same is true for us.

    There isn’t any carbon, much less human, chauvinism in this (other than, perhaps, the same human chauvinism you have in following Minsky’s criterion for intelligence). And it’s no more arbitrary than privileging intelligence as the indicium of personhood. But even despite possible resistance in some American quarters to the notion of natural selection itself, I expect that connecting personhood to biological entities would be closer to most people’s moral intuitions than deeming some automaton to be a person.

    Of course, you might cite your theorem and say that my point of view is just a preference. It is, and not just a philosophical one — it’s a political preference as well.

    Thanks for the enjoyable argument.

    [*] See also Stigler’s Law.
    [**] Here I’m talking only about legal personhood of the type that has intentionality ascribed to it — not about the sort ascribed to ships in admiralty law, e.g.

  4. Samir Chopra says:


    1. By stipulation, in my example, the extraterrestrials had established themselves as beings with a rich network of relationships on this planet.

    2. You ask, “Why should it be the function of any legal system on this planet to remind us that things might be different elsewhere?”

    Because legal systems consistently traffic in alternative, desirable states of affairs. It is implicit in all briefs: “A world in which this ruling is not made is worse than a world in which it is”. Because legal systems have expressive impact, and because a world in which biological chauvinism is determinative of the possibility for an entity to gain standing and become the subject of legal rights and duties seems a world worse than one in which it is not.

    3. Your strategy for denial is curious: a) AAAs might have evolved by “environmental filtration” (my preferred term for the endlessly confusing “natural selection”). b) They could be part of collectives: swarms, robot groups (or are you ruling that out by stipulation?), etc. c) Ruling out ships is fine, but remember, part of the thrust of the rhetoric in Chapter 5 was “if it can accommodate ships, why not AAAs?”

    More problematically, your use of “natural selection” seems to ride on a misunderstanding of the theory of adaptation and environmental filtration, one that causes you to think that biology is essential to it. The theory of adaptation and natural selection is so abstract and general that the evolution of AAAs could easily satisfy it in the right circumstances (reproduction with inherited traits that are not perfect copies and that vary in their adaptedness to the environment).
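    The abstractness of those conditions can be made vivid with a toy simulation (a sketch of my own devising, not from the book; the function names and parameters are invented for illustration). Nothing in it refers to any biological substrate: the traits could be numbers, bitstrings, or programs, and there is no intrinsic fitness function, only an environment that culls.

```python
import random

def evolve(population, survives, mutate, generations=100):
    """Variation-and-selection over any population of heritable traits.

    population: a list of trait values (any substrate whatsoever).
    survives(trait, env): environmental filtration -- no built-in teleology,
        just whether the environment culls this individual.
    mutate(trait): imperfect copying during reproduction.
    """
    for _ in range(generations):
        env = random.random()  # a changing environment
        survivors = [t for t in population if survives(t, env)]
        if not survivors:
            return []          # extinction is always possible
        # Survivors reproduce; copies are imperfect, so traits vary.
        population = [mutate(random.choice(survivors))
                      for _ in range(len(population))]
    return population

# Example: traits are numbers; the environment culls those far from a
# drifting optimum. Adaptedness is determined entirely after the fact.
result = evolve(
    population=[random.uniform(0, 1) for _ in range(50)],
    survives=lambda t, env: abs(t - env) < 0.5,
    mutate=lambda t: t + random.gauss(0, 0.05),
)
```

    The point of the sketch is only that "reproduction with inherited, imperfectly copied traits that vary in adaptedness" specifies a process, not a chemistry.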

    Also, if you think they didn’t come into being by “natural selection” but by design, then that’s another confusion. They weren’t designed any more than we were in that sense. We represent a particular location in the evolutionary thrust of a particular physical process – so do they. We view them as designed in much the same way that some other creature, with a different perspective, could view us as fashioned or designed by the ‘laws’ of physics and biology.

    Again, it is part of the stipulative nature of my argument in Chapter 5 that AAAs do become so intelligent and social that they can form rich relationships. If they do become so competent, then on what basis do we deny them personhood, other than mere reliance on a crude biological essentialism associated with personhood?