LTAAA Symposium: Response to Surden on Artificial Agents’ Cognitive Capacities

I want to thank Harry Surden for his rich, technically informed response to A Legal Theory for Autonomous Artificial Agents, and especially for seizing on an important distinction we make early in the book when we say:

There are two views of the goals of artificial intelligence. From an engineering perspective, as Marvin Minsky noted, it is the “science of making machines do things that would require intelligence if done by men” (Minsky 1969, v). From a cognitive science perspective, it is to design and build systems that work the way the human mind does (Shanahan 1997, xix). In the former perspective, artificial intelligence is deemed successful along a performative dimension; in the latter, along a theoretical one. The latter embodies Giambattista Vico’s perspective of verum et factum convertuntur, “the true and the made are…convertible” (Vico 2000); in such a view, artificial intelligence would be reckoned the laboratory that validates our best science of the human mind. This perspective sometimes shades into the claim that artificial intelligence’s success lies in the replication of human capacities such as emotions, the sensations of taste, and self-consciousness. Here, artificial intelligence is conceived of as building artificial persons, not just designing systems that are “intelligent.”

The latter conception of AI, as committed to building ‘artificial persons’, is pretty clearly what causes much of the angst that LTAAA’s claims seem to occasion. And even though I have sought to separate the notion of ‘person’ from that of ‘legal person’, some conflation seems to have persisted in our discussions thus far.

I’ve personally never understood why artificial intelligence was taken to be, or ever took itself to be, dedicated to the task of replicating human capacities, faithfully attempting to build “artificial persons” or “artificial humans”. This always seemed like such a boring, pointlessly limited task. Sure, the pursuit of cognitive science is entirely justified; the greater the understanding we have of our own minds, the better we will be able to understand our place in nature. But as for replicating and mimicking them faithfully: why bother with the ersatz when we have the real? We already have a perfectly good way to make humans or persons, and it is far more fun than doing mechanical engineering or writing code. The real action, it seems to me, lies in seeing how we might replicate our so-called intellectual capacities without particular regard for the method of implementation; if the best method of implementation happened to be one that mapped well onto what seemed like the human mind’s way of doing it, that would be an added bonus. The multiple-realizability of our supposedly unique cognitive abilities would do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke a sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy, and freedom of will.

Having said this, I can now turn to responding to Harry’s excellent post.
Harry says,

[E]mbedded in many existing legal doctrines are underlying assumptions about cognition and intentionality that are implicit and are so basic that they are often not articulated.

This is indeed true, and I hear the note of caution that Harry wants to sound about changes in legal doctrine that might think they are responding to human-like capacities in artificial agents but are only responding to clever ‘simulations’. But it is also worth acknowledging that many of our practices in dealing with other humans rest on assumptions about cognition and intentionality that, frankly, are little more than admissions of our ignorance about details (this idea is implicit in James Grimmelmann’s excellent post on law’s response to complexity), and that neuroscientific investigations might force us to reconsider (the pre-conscious encoding of decisions, for instance). As we noted in the concluding chapter, we might reject the conclusions of these neuroscientific investigations precisely because we want to preserve our legal and moral vocabularies. Then, I think, we can see the influence running in both directions: the legal and moral picture also drives our conceptions and knowledge of ourselves, not just the other way around.

Harry’s post also makes us come face to face with the fact that our knowledge of our cognitive abilities remains remarkably obscure. Notice that when we describe human capabilities, as contrasted with those of artificial agents, we often retreat into obscurity and the use of terms that we accept uncritically. Notice, for instance, Harry’s description of human facility in translation as arising from a kind of “profound understanding with the underlying ‘meaning’ of the translated sentences”. What is this ‘profound understanding’ that we speak of? It turns out that when we want to cash out the meaning of this term, we seek refuge again in complex, inter-related displays of understanding: he showed me he understood the book by writing about it; he showed me he understood the language because he did what I asked him to do; he understands the language because he affirms certain implications and rejects others.

And what are “meanings”? I’m glad Harry put “meaning” into quotes. Are there meanings hung up in a museum, as in the picture Quine rejected as “uncritical semantics” in “Ontological Relativity”? Or do I simply show, by my usage and deployment of a language within a particular language-using community, that I understand the meanings of the sentences of that language, as Wittgenstein suggested in the Philosophical Investigations? If an artificial agent is so proficient, then why deny it the capacity for understanding meanings? Why isn’t understanding the meaning of a sentence understood as a multiply-realizable capacity?

To repeat and sum up: we might find our cognitive abilities are realizable in a variety of physical substrates, by a variety of implementation schemes; the language of our law and moral systems often reflects assumptions about humans’ capacities that on closer inspection are often shrouded in obscurity; and we retain such language and such assumptions because of overriding social objectives that might cause us to disdain the neuroscientific vocabulary in preference for the extant legal and moral vocabulary.

So Harry is right that we should not understand the multiple-realizability of human cognitive skills as a purely technical issue. But I want to suggest that the lens should be turned back on humans as well, and on the often uncritical assumptions we make that we possess unique, non-replicable qualities; we should think more about how such grants of uniqueness underwrite important methods of self-conception, which are then written into the law.

6 Responses

  1. A.J. Sutter says:

    Beneath the eloquent expression, the gist of your argument seems to be simply that you have a preference for one legal fiction over another.

    @ Fiction OLD arises from “our law and moral systems often reflect[ing] assumptions about humans’ [cognitive] capacities that on closer inspection are often shrouded in obscurity”
    @ Fiction NEW is that AAAs that replicate some human cognitive abilities are legal persons.

    It doesn’t seem that you’re saying OLD should be displaced from the law; rather, your argument seems to be that, a fortiori, NEW should be accepted into it.

    An aim is “that the lens should be turned back on humans as well, and on [certain] uncritical assumptions we make …, which are then written into the law.” Moreover, acceptance of NEW would “do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke the sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy or freedom of will.”

    A few points:

    1. ALL legal fictions involve writing uncritical assumptions into law.

    2. In the particular cases of OLD and NEW there’s a remarkable congruence between the uncritical assumptions being made. The argument for taking the intentional stance regarding AAAs relies on the obscurity of what’s going on inside the black box.

    3. Let’s assume for the sake of this paragraph that it’s desirable to “displace our sense of uniqueness” and to “re-invoke the sense of wonder” you describe. And in the same spirit let’s assume the somewhat shakier notion that these are suitable objectives for “the law.” Even assuming further that according legal personhood to AAAs would be sufficient to meet these goals, nothing suggests that it would be necessary. E.g., according legal personhood status to primates and cetaceans might also suffice.

    4. To yoke together legal and moral systems, legal and moral vocabulary, etc. is to mix apples and oranges in the same breath. The reasons we might or might not “disdain” some other vocabulary or a fiction in the law can be distinct from why we might make the corresponding judgment morally. E.g., judges who accept the arguments from efficiency and logical coherence your book proposes might adopt the fiction NEW mentioned above. But many persons in society at large might find such a change in law morally repellent, and seek to reverse it by political or other means.

    Is your argument that we ought to accord legal personhood to AAAs based solely on legal grounds, or are you also arguing that we should do so on moral grounds?

  2. Samir Chopra says:

    AJ: I think I can now confidently state a theorem pertaining to philosophical discourse. Let me term it the Chopra Theorem of Philosophical Argument: With probability one, if a philosophical argument carries on long enough, one of the interlocutors will suggest the other’s conclusions or premises are ‘merely preferences’. (This also needs Chopra’s Lemma: All philosophical premises are expressions of preferences). So your suggestion that my argument above is merely an expression of ‘preferences’ is not too perplexing, and is unlikely to cause me offence.

    I am, however, puzzled by why you have responded the way you have to a post that doesn’t actually feature a single mention from me of ‘legal personhood’. My claims about the aims of AI, and the need for the lens to be turned back on human cognitive capacities and our self-conceptions, can be made independently of any arguments for legal personhood for AAs.

    But let’s take the argument there anyway. And let’s take it via the notion of “fictions”, which seems to be doing a lot of work for you in your rejoinders to me.

    So, point by point:

    1. As I have noted before, a great deal of our so-called substantive discourse consists of ‘fictions’, just well-established ones. Consider the notion of a ‘person’, for instance. Despite thousands of years of philosophical blathering about it, it seems to me that a ‘person’ is a fiction, invented to reify our notions of responsibility, blame, and agency. Only a society interested in blame, in holding people responsible, and in describing some kind of unitary entity as a cause for an event would invent ‘agents’ and ‘persons’. So I think constantly suggesting that I’m trafficking in ‘fictions’ is not really to mount a critique at all.

    2. I’m glad you made your point 2, for it supports my position. Just as “the argument for taking the intentional stance regarding AAAs relies on the obscurity of what’s going on inside the black box”, the argument for taking us to possess magical properties like free will and autonomy relies on similar obscurity. When AAs get to be so complex that they become authorities on what they say, we will find we have to treat them as persons. (cf. Rorty, “Incorrigibility as the Mark of the Mental”; Dennett, “The Case for Rorts”)

    3. I agree again. If thinking about AAs makes us revisit our conceptions of ourselves, then our job will have been done. But I think it might not be so easy; primates and cetaceans might still leave us thinking there was something unique about the particular biology of this planet.

    4. When I say “legal and moral vocabularies” I am not yoking them together; I am merely relying on a grammatical convenience that lets me refer to them in the same sentence (though they are somehow related; otherwise, I would not spend so much time in my philosophy of law class talking about natural law theories). To mention them in the same sentence is not to imply equivalence, identity, or any tighter a relationship than when you say “apples and oranges” or “moral and scientific vocabularies”. It’s merely conjunction, that’s all.

  3. Harry Surden says:

    Thank you for your thoughtful reply Samir.

  4. A.J. Sutter says:

    Samir: Of course, my rhetorical point in context wasn’t that they’re just preferences, but that they’re preferences between things that differ only slightly and/or not along a very significant dimension.

    As for your point 1, just curious: can you present a counterexample of an actual society that lacks any concept of person, while also lacking any interest in (i) blame, (ii) holding people responsible, and (iii) describing some kind of unitary entity as a cause for an event?

    As for your point 3, I’ll address the biology point under one of your later posts.

    As for your point 4, the fact that you spend so much time on natural law indicates merely that some people believed that they are related in a particular way. 😉

  5. Samir Chopra says:

    Harry:

    You’re welcome. That technical perspective was really very useful.

  6. Samir Chopra says:

    AJ:

    I think the point in suggesting ‘person’ as a fiction was just that: societies that reach levels of organization that require blame and responsibility settle on the notion of an agent, selected from the flux around us, as a cause for events. These become reified into ‘persons’. This is a pragmatic choice and does not mark out some privileged ontological category.

    I hear you on point 4 🙂