LTAA Symposium: Response to Sutter on Artificial Agents

I’d like to thank Andrew Sutter for his largely critical, but very thought-provoking, response to A Legal Theory for Autonomous Artificial Agents. In responding to Andrew I will often touch on themes that I might have already tackled. I hope this repetition comes across as emphasis, rather than as redundancy. I’m also concentrating on responding to broader themes in Andrew’s post as opposed to the specific doctrinal concerns (like service-of-process or registration; my attitude in these matters is that the law will find a way if it can discern the broad outlines of a desirable solution just ahead; service-of-process seemed intractable for anonymous bloggers but it was solved somehow).

First, it is entirely unsurprising that our language of “agent” is loaded with person connotations; indeed, our language for describing any entity with as much functionality and capacity as artificial agents possess is going to be, willy-nilly, loaded with such connotations; as Giovanni Sartor’s example of the autopilot shows, such language is almost inevitable once we start dealing with a sufficiently complex and sophisticated system. We could start saying “automated mediation module” and describe its functions in language like “now it has executed a set of procedures for error-checking of input data” or “it is now executing algorithm number 235” and so on. But would we? Would such language be useful, helpful, and explanatorily predictive? More to the point, who would use it? Those who had some knowledge of its innards? And so we’d be back at the same point: when we lose this kind of epistemic hegemony, or come close to losing it, we start using the language of intentionality for a sufficiently complicated and interesting entity. It’s what we do all the time; we are intentional systems; we use this language as much and as often as we can, dropping it only when better modes of description are available; and the availability of those modes of description will vary with the competence of the speaker and the nature of the system being interacted with. Our fellow humans’ innards are unavailable and inaccessible to us; we are forced to use the language of intentionality to describe them; their behavior makes the most sense to us that way.

And this is why the intentional stance is linked to personhood: we start to ascribe personality when we consider the entities we are interacting with as intentional systems, capable of entertaining beliefs and desires and acting on the basis of them. Matters are more complicated, of course; we normally look for evidence of higher-order intentionality as well, and this is what stops some of us from considering animals full persons or legal subjects. This argument is developed much more carefully by Daniel Dennett in “Conditions of Personhood,” in Brainstorms (MIT Press, 1981).

Second, I share Andrew’s skepticism about neuroscientific reductions of our moral, ethical and normative vocabulary (and say so quite explicitly in the concluding chapter on personhood!). Nor am I committed to a computational view of the mind, or, as Andrew seems to infer follows from it, to an analytic philosophy of mind; no such commitment underwrites any claim that we make in the book. The artificial agents we write about in the book often do rely on explicitly computational architectures, but the underlying paradigms of implementation vary significantly. Rodney Brooks’ robots, for instance, rely on subsumption architectures that do not require explicit representations of the outside world, and as such address many of the concerns raised by Dreyfus- or Husserl-style critiques; network or connectionist models can similarly and plausibly be understood as non-computational. Indeed, the claims we make in the book can be made independently of any internal architecture whatsoever: if beings with such capacities presented themselves, how would we react? What strategies do we have besides adopting one of the physical, design or intentional stances and seeing which works?
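For readers unfamiliar with Brooks’ approach, here is a minimal, purely illustrative sketch (mine, not the book’s, and not Brooks’ own code; the sensor names, thresholds and behaviors are invented) of how a subsumption-style controller layers prioritized reflexes directly from sensing to action, without ever building or consulting an explicit model of the world:

```python
# Toy sketch of a Brooks-style subsumption layering: each layer maps raw
# sensor readings directly to motor commands, and a higher-priority layer
# suppresses the ones below it. Nothing here builds or consults an explicit
# representation of the outside world.

def avoid_obstacle(sensors):
    """Highest-priority layer: react to anything too close."""
    if sensors["range_cm"] < 20:
        return {"left": -0.5, "right": 0.5}   # turn away
    return None                               # defer to lower layers

def follow_light(sensors):
    """Middle layer: steer toward the brighter side."""
    if abs(sensors["light_left"] - sensors["light_right"]) > 0.1:
        toward_left = sensors["light_left"] > sensors["light_right"]
        return {"left": 0.2 if toward_left else 0.6,
                "right": 0.6 if toward_left else 0.2}
    return None

def wander(sensors):
    """Lowest layer: default behavior when nothing else fires."""
    return {"left": 0.4, "right": 0.4}

LAYERS = [avoid_obstacle, follow_light, wander]  # priority order

def control_step(sensors):
    """One sense-act cycle: the first layer that responds wins."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

# Example cycle: the obstacle layer subsumes the others.
print(control_step({"range_cm": 12, "light_left": 0.9, "light_right": 0.1}))
```

The point of the sketch is only that seemingly purposeful behavior can emerge from such layering even though no internal representation of the world is anywhere to be found.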

This brings me to another of Andrew’s worries: that I’m privileging one particular philosophical tradition at the expense of another, more specifically the Anglophone analytic tradition as opposed to the continental one. I do not think this is true. The approach described above is Wittgensteinian; but it is the Wittgenstein of the Philosophical Investigations, not of the Tractatus. And that is a distinction worth making.

In suggesting that in practice we adopt an intentional stance towards our fellow human beings and should adopt it for other beings as well, I am taking refuge in a kind of hermeneutics, a project of trying to interpret others as best we can, given our ends and means. My inspiration for trying to use the interpretive stance with artificial agents draws heavily on Daniel Dennett, but also on Richard Rorty’s treatment of classical philosophical problems in Philosophy and the Mirror of Nature; my philosophical outlook is currently most inspired by the triad of Schopenhauer, Nietzsche and Freud, a triad that does not fit comfortably into the Anglophone analytic tradition Andrew thinks I’m a card-carrying member of. (When I said “philosophically unfounded” I meant that human chauvinism is philosophically unfounded; while reserving some special place for humans in our various orderings can be motivated, it must not be nakedly prejudicial in its dismissal of other entities’ claims.)

So, no, I’m not a straightforward analytically-inclined philosopher. I was certainly trained as one, but in my impatience with notions of special human qualities, or of the singularity that human consciousness and intentionality represent, and in my desire to see humans as part of a naturalistic world order, I am perhaps drawing more broadly on Nietzsche than on anyone else. The clash, in fact, is not between the analytic and continental philosophical traditions, as Andrew seems to suggest, but between naturalistic and non-naturalistic views of man, between naturalistic and non-naturalistic vocabularies for describing man and society. In that sense, I take particularly or uniquely human capacities to be capable of emulation, imitation and simulation by all kinds of beings. We might not want to extend the language of personhood and intentionality to those beings because we feel that some vitally human ends would be compromised by doing so, but that is a choice we will have to make, and we might have to weigh a host of other considerations as well.

It is possible that I have not done justice to Catholic or Rabbinical philosophy, but I have not studied those traditions adequately. But if my failing is that I have not attended to the religious or theological sensibility then I plead guilty. It wasn’t an organizing principle in my writing. But I do wish we would be less chauvinistic as humans; it might help us treat animals better. You know, the ones reckoned mere machines or automatons by Descartes?

It is true we did not engage with Searle’s Chinese Room argument. Perhaps I’ve spent too much time talking about it in graduate school; perhaps I’ve read too many dismissals of it. More to the point, I find Searle’s invocation of mysterious causal powers peculiar to the carbon-based biology of the brain hopelessly obscure and chauvinistic. When I look closely at the human brain, I too find no genuine understanding or intentionality; I see cells, neurons, synapses, perhaps elaborate arrangements of biochemicals. Change the level of description and all appears mysterious; change it again and suddenly everything snaps into focus; our intentionality and our understanding are also mysterious when viewed at particular levels of description. “It’s just lines of code” or “it’s just ones and zeroes” is about as meaningful a criticism of artificial agents’ abilities as dismissing Shakespeare as “just ink marks on a page.” It is the investment of meaning that matters, the interpretation we assign, that makes the difference. There is no meaning in the lines of code; there is none in the neurons and synapses either; we are the ones who invest meaning.

In this sense, it is worth reminding ourselves that our analysis, and its reliance on interpretations driven by our ends, reserves for human beings the most important position, that of meaning-makers; we remain the entities that invest meaning. Artificial agents get their meaning from us. There is no need to feel so threatened.

Incidentally, the syllogism Andrew provided to try and unpack the claim we make about adopting the moral stance is slightly off in what it imputes to us regarding corporations. I tend to think (and certainly the citation omitted provides such arguments) that a perfectly good case can be made for considering corporations to be intentional entities, the best stance towards which is the intentional stance. Try describing the complex behavior of corporations always in terms of individual human beings; it’s not going to be easy. In general, some corporate actions simply will not make as much sense as they do when the corporation itself is considered the subject of the intentional stance; much as it would be idiotic to suggest that a human’s actions be cashed out in terms of neurons firing or cells acting or particles decaying: it simply isn’t the appropriate level of description, and we have a much better language of description available.

The point about a corporation’s moral sense followed from that: once an entity coherently becomes the subject of the intentional stance, we can start to isolate which of its beliefs qualify as moral, and we can start to examine whether we can construct generalized predictions and explanations using those beliefs and desires (I have written a paper applying this to robots’ moral senses, which explores the idea a bit further).

Furthermore, corporations are large, complex entities; they are assemblages of humans, machines and a whole lot else. To say that everything a corporation does is just humans acting seems to grant excessive agency to humans. I think it’s admirable that Andrew wants to keep humans front and center; but in doing so, let us also acknowledge that our finding human causes and agents everywhere is a choice forced upon us by the particular social relations we have constructed; when the society we construct has invested so much power in the hands of machinic-human assemblages, perhaps our causal analysis will change as well.

When Andrew mentions the social implications of our suggested doctrine, it is worth noting that in Chapter 3 we explicitly point out that the privacy implications of our doctrine are extremely human-friendly. What the agency analysis taketh away with one hand, it giveth with the other. I think bot-deployment should be an equal-opportunity activity; in other contexts, I have argued for modes of organization of the software development world that would certainly make it so. But yes, there is no getting around the fact that the agency doctrine in the context of contracting is friendly to the builders of agents, and they are currently financially powerful. But see the caveat about privacy.

Andrew invokes kavod habriyot, respect for humanity, at the end of his post. I intend to respond to that separately (some of my response was already made in addressing his comment earlier today on my post on legal personhood).


6 Responses

  1. Samir,

    I’ve now received your book and have just begun reading it, so I’ll refrain from any comments on the book itself, but I do want to point out that Dennett’s conception of “intentionality” is fairly idiosyncratic if not implausible (with regard to its philosophical history and more conventional use in philosophy) and liable to any number of objections. I would therefore simply ask that you and your co-author consider (if you’ve not already) some of the fairly vigorous critiques of his account insofar as you rely on or invoke his conception of intentionality or intentional systems.

    In particular, his notion of “intentional systems” as formulated to encompass, for example, cells, molecules, parts of the brain, thermostats, and computers, is rather implausible save a wholly “persuasive” (or unduly stipulative) definition or a wildly idiosyncratic conception of intentionality. As M.R. Bennett (a neuroscientist) and P.M.S. Hacker (a philosopher) have written,[1] one cannot intelligibly ascribe intentionality to such motley phenomena, for the proper bearers of intentionality are “a subclass of psychological attributes and only animals, and fairly sophisticated animals at that, are the only appropriate subjects of such attributes. We cannot attribute belief, fear, hope, suspicion, etc. to the aforementioned items (without, that is, indulging in crass anthropomorphism).” Furthermore, “to ascribe pain, liking, disliking, perceiving, misperceiving, anger, fear, joy, knowledge, belief, memory, imagination, desire, intention, and so on, to living beings, in particular to human beings, is not to adopt an interpretative stance.” There are several manifest difficulties with the characterization of the “intentional stance” as “a mode of interpreting entities AS IF these were rational agents” (as a ‘heuristic overlay’), with beliefs, desires, and other (intentional) mental states. As Bennett and Hacker (B & H) point out, “we do NOT typically treat animals as if they were rational agents—since we know perfectly well that they are not. But we do ascribe a wide range of perceptual, affective, cognitive and volitional attributes to animals in a perfectly literal sense. Being a rational agent is not a precondition for the applicability of psychological attributes to a creature.” And we certainly don’t adopt an interpretative (intentional) stance toward our own wants, likings, and so forth; rather, we give unmediated expression to our pain, regret, pleasure, hopes and fears (thus these are not ‘heuristic overlays’ or ‘theoretical posits’ of any sort). Consider too the following illustration from B & H:

    “We might be inclined to say that the computer ‘will not take your knight because it knows that there is a line of ensuing play that would lead to its losing its rook, and it does not want that to happen.’ But what does this amount to? It is no more than a façon de parler. We know that the computer has been designed to make moves that will (probably) lead to the defeat of whomever plays with it—and there is no such thing as the computer’s wanting or knowing anything. And in order to predict its moves, we need not absurdly ascribe knowledge or wants to it, but need only understand the goals of its program and programmer (viz. to make a (mindless) chess-playing machine). For design is one form of teleology, and teleology is a basis for prediction.”

    And while you note that you share Andrew’s skepticism about neuroscientific reductionism, Dennett himself exhibits and exemplifies that very thing in his computational theory of consciousness (including the claim that the mind is identical to the neural activity of the brain) and when he asserts, for instance, that the brain “gathers information, anticipates things, interprets the information it receives, arrives at conclusions, etc.” [the mereological fallacy]. The brain, as Hacker and Bennett make pellucid, “is not a possible subject of beliefs and desires; there is no such thing as a brain acting on beliefs and desires, and there is nothing that the brain does that can be predicted on the basis of its beliefs and desires.” Dennett’s argument, after all, was in part if not in whole intended to amount to a research methodology to “help[] neuroscientists to explain the neural foundations of human powers.” In short, “no well-confirmed empirical theory in neuroscience has emerged from Dennett’s explanations, for ascribing ‘sort of psychological properties’ to part of the brain does not EXPLAIN anything.” It is hardly surprising that Dennett denies the notion of qualia, or any significant or explanatory role for subjective experiences and first-person phenomena in the endeavor to scientifically or “naturalistically” describe and “explain” the mind and consciousness. The beliefs and desires of folk psychology are mere imaginary entities, not that different from the fictional status of “selves.”[2]

    At a later date I hope to address the importance of and the reasons for coming to as much clarity as possible regarding what it means to be a human animal as distinct from a non-human animal (allowing for some overlap of course in virtue of the notion of ‘animal’), as well as the relevant differences between sentient creatures and non-sentient entities. This would include an elaboration of what science can and cannot tell us about human nature. In so doing, I hope to demonstrate why we might consider it at once metaphysically, ontologically, psychologically, and morally incoherent to speak of “robots’ moral senses,” metaphorically or literally (in as much as corporations are composed of individuals, that is an entirely different matter). For now I would ask interested readers to look at the many recent books by Raymond Tallis (especially his trilogy and the latest book) for a taste of the direction that I find compelling and persuasive. Consciousness (as perceiving, knowing, awareness, etc.), for example, is very different in essence and function from the highly specified frames of reference used with computational devices, the former possessing, in Tallis’s words, “an openness, a boundless availability to what, unscheduledly, happens,” that differs in kind from “programmed, rule-governed responsiveness.” Moreover,

    “The multiplication of rules will not solve [what Dennett defines as] the frame problem except for local AI applications that come nowhere near the global scope of consciousness. The explicit rules that may shape consciousness arise out of a background of explicitness; or the soil out of which rules grow, the solution out of which they crystallize, is a continuum of explicitness, a field of explicitness. The computer has only discrete countable rules, not the continuum of explicitness, this ‘rule mass,’ this boundless, ruly world.”

    Dennett’s reductionism may not be of the crudest sort, but it is no less reductionist, as evidenced in his belief that the mind “is our way of experiencing the machinery of the brain.”[3]

    [1] M.R. Bennett and P.M.S. Hacker, Philosophical Foundations of Neuroscience (Malden, MA: Blackwell, 2003). The first appendix is devoted entirely to these and other topics in several of Dennett’s well-known books. Dennett replies to this critique and B & H respond in turn in Bennett, Dennett, Hacker, and Searle (with Daniel Robinson), Neuroscience and Philosophy: Brain, Mind, and Language (New York: Columbia University Press, 2007).
    [2] As Tallis explains, Dennett’s “’narrative centre of gravity’ actually looks more difficult than the thing it is replacing. After all, narrative is a higher-order activity of a self; and the intuition of a centre of gravity of a larger number of independent narratives seems to be an even higher-order activity. [….] [This is a] particularly striking example of ‘the fallacy of misplaced consciousness.’ When materialists deny consciousness in the places where it is normally thought to be, it has the habit of appearing in an even more complex form where it shouldn’t be.”
    [3] Raymond Tallis, The Explicit Animal: A Defence of Human Consciousness (New York: St. Martin’s Press, 1999 ed.).

  2. A.J. Sutter says:

    Thanks, Samir, for your response, and to Patrick for his helpful remarks about intentionality, and especially Dennett’s view of it.

    As the current vassal of two cats, and having shared my home with other pets most of my life, I don’t intend at all to endorse the Cartesian view of animals. (Nor the Levinasian anthropocentric view of them, either.) But obviously it’s not necessary to ascribe intentionality, moral sense or other human-like attributes to automata in order to treat animals better.

    Concerning the intentional stance toward corporations, again I think you’re approaching the issue too much from the viewpoint of legal fictions and the categories of academic philosophy and not enough from how humans actually regard corporations. For example, in the management literature, the moral sense you impute to the corporation as a “black-box” sort of entity is usually understood in terms of a culture shared among the humans in the company. And no matter what categories are used in “the law” to speak of a corporation’s rights or responsibilities, I suggest that those legal solutions are tolerated politically (when they are) because those outside the corporation understand that there are humans making the decisions and having responsibility for its “machinic-human assemblages.” That’s why the conclusion of the syllogism described in my earlier post is a non sequitur.

    I also questioned whether our reliance on those assemblages is really so desirable, and whether its spread should be encouraged. I look forward to your post about respect for humanity, and will save some of my other comments on your posts to date until I’ve read that one.

  3. A.J. Sutter says:

    One more comment apropos of this post, concerning the language of “agency”:

    That the word “agent” is full of person connotations was precisely my point, rather than something occasioning surprise. The metaphors we choose frame the way in which we think about things. For example, as I discuss at length in a Japanese book, the positive connotations of the word “growth” have led us to believe that economic growth is a good thing, without stopping to think clearly about what it’s done for us lately. We might regard it differently if we called it, say, “swelling” instead. (In fact it might not be a bad idea to switch: the same technological utopianism that motivates much of the AI project is one reason we’ve come to expect a perpetual geometric or exponential increase of GDP — something that would be a monstrosity if it were somatic growth.) Similarly, the use of the word “agent” as we deliberate on the legal personhood issue makes it perhaps too easy to come to the conclusions set forth in your book, even though, as other posters pointed out, those might be unnecessary doctrinally.

    For the reasons Patrick mentions, among others, it’s not so inevitable as you claim, citing Dennett, that we’ll come to ascribe intentionality to “automated mediation modules” or some more aptly-named contraption. But certainly there’s an irony here, too: our potted histories of science tell us that it was “primitive” peoples who believed that rocks, rivers, the sea and sky had “spirits,” and that it was some sort of “advance” for science to have dispelled these “superstitions.” Insofar as you’re suggesting that we should be more flexible about this, you may be right; I’m not one to deny all immanence, for example. But my belief in a divine immanence differs very much from ascribing intentionality to each separate pebble or wave for the sake of convenience in making predictions, both in the number of agents involved and in the motivations. When folks say “G-d works in mysterious ways,” it’s not because He’s so easy to predict.

  4. Samir Chopra says:

    Patrick, AJ: Thank you both for your comments. I intend to reply soon, either here, or in a summary final post I am preparing (it appears to touch on many of the themes you both raise here). I will post the link here in any case so that this conversation can be viewed together.

    I thank you both for such thought-provoking conversation!

  5. Samir Chopra says:

    Patrick:

    1. The idiosyncrasy of a philosophical position is no argument against it.

    2. We did not take on all objections to Dennett’s theory because our intention was to suggest a methodological strategy that the law already implicitly seemed to be using. The debate surrounding the theory is gigantic indeed, but it shows the same pattern that I notice in your response: it privileges the first-person perspective that by definition only we have, and disdains third-person perspectives and strategies.

    3. Dennett does not suggest we use the intentional stance for the entities you suggest; he only points out that our usage of it is almost a reflex, but that we drop it because we notice the design and physical stances are available. This defuses part of the Bennett-Hacker critique, because Dennett himself says that only some kinds of creatures can prompt us to rely on the intentional stance to the exclusion of other stances.

    4. Given that the existence of other minds is an abduction at best, I fail to see how our treatment of others does not involve interpretation. Furthermore, the intentional stance is meant to provide a third-person perspective; it is not meant to provide interpretations of ourselves, though I suggest there is a certain amount of self-construction and narrative-construction that we indulge in all the time.

    5. My adoption of Dennett’s theory of the intentional stance does not commit me to a computational model of mind. So Tallis’ critique of that fails to find traction here. I personally think Tallis is on the mark (I share many of his skepticisms), insofar as I think the vocabularies we have chosen for ourselves are indispensable and we will not let go of them so easily. For what it is worth, I think the correct focus is not on the brain, but on the being, the entity, in its locus of interest, in its web of complex relationships. We are actually far more in agreement than you might imagine.

  6. Samir Chopra says:

    AJ:

    1. We might not need to do all those things you mention to treat animals better. But using a language inflected with psychological attributes is a key factor in our doing so.

    2. The reductionism employed toward a corporation could be turned back toward humans as well, to dismiss the unitary notion of a human agent. We wouldn’t do it, because we have an inner life, and a first-person perspective. But from a third-person perspective? From the perspective of an extra-terrestrial? How we slice up the world, the ontological chunking we carry out, is very much driven by our interests and our pragmatic concerns. This is why the law can treat corporations as unitary entities; why our language can rely on them being so; while all the while we can tell ourselves stories about how, when a corporation ‘acts’, ‘it’s just humans doing the acting’. Why aren’t all our actions ‘just’ C-fibers firing?

    3. To address your second comment, locating agency in this world is also a pragmatic concern and is also one driven by epistemic considerations. We ascribe plenty of agency to humans, and find plenty of unitary causes when more accurately a multiplicity should be indicted. Consider the notion of an ‘author’ for instance; we pick out one entity out of the bewildering array of forces that actually brought a book about. If we want to use ‘agents’ only for things that have beliefs and desires like us, then certainly AAs are some distance from that, but they are taking actions, and they aren’t inert. From one perspective, our agency vanishes too; we are merely instantiations of physical laws.

    Coming to epistemic considerations: In the time of the ancients you allude to, agency was seen everywhere. We retreated from those ascriptions as our knowledge of the natural world increased. But we are still mysteries to ourselves. So we continue to ascribe agency to ourselves. AAs aren’t obscure and complex enough to be treated as agents yet. There is certainly irony here; our best sciences tell us we need to diminish our sense of ourselves and the growing complexity of AAs might suggest the same.