LTAA Symposium: Response to Sutter on Artificial Agents
I’d like to thank Andrew Sutter for his largely critical, but very thought-provoking, response to A Legal Theory for Autonomous Artificial Agents. In responding to Andrew I will often touch on themes I have already tackled; I hope the repetition comes across as emphasis rather than redundancy. I am also concentrating on the broader themes in Andrew’s post rather than on specific doctrinal concerns like service of process or registration; my attitude in such matters is that the law will find a way if it can discern the broad outlines of a desirable solution just ahead. Service of process seemed intractable for anonymous bloggers, but it was solved somehow.
First, it is entirely unsurprising that our language of “agent” is loaded with person connotations; indeed, our language for describing any entity with as much functionality and capacity as artificial agents possess is going to be, willy-nilly, loaded with such connotations. As Giovanni Sartor’s example of the autopilot shows, such language is almost inevitable once we start dealing with a sufficiently complex and sophisticated system. We could start using “automated mediation module” and describe its functions in language like “now it has executed a set of procedures for error-checking of input data” or “it is now executing algorithm number 235” and so on. But would we? Would such language be useful, helpful, and explanatorily predictive? More to the point, who would use it? Those who had some knowledge of its innards? And so we’d be back at the same point: when we lose this kind of epistemic hegemony, or come close to losing it, we start using the language of intentionality for a sufficiently complicated and interesting entity. It’s what we do all the time; we are intentional systems; we use this language as much and as often as we can, dropping it only when better modes of description are available, and the availability of those modes of description will vary with the competence of the speaker and the nature of the system interacted with. Our fellow humans’ innards are unavailable and inaccessible to us; we are forced to use the language of intentionality to describe them; their behavior makes the most sense to us that way.
And this is why the intentional stance is linked to personhood; we start to ascribe personality when we consider the entities we are interacting with as intentional systems, capable of entertaining beliefs and desires and acting on the basis of them. Matters are more complicated, of course; we normally look for evidence of higher-order intentionality as well, and this is what stops some of us from considering animals as full persons or legal subjects. This argument is developed much more carefully by Daniel Dennett in “Conditions of Personhood” in Brainstorms (MIT Press, 1981).
Second, I share Andrew’s skepticism about neuroscientific reductionism about our moral, ethical, and normative vocabulary (and say so quite explicitly in the concluding chapter on personhood!). Nor am I committed to a computational view of the mind (or, as Andrew seems to think follows from it, to an analytic philosophy of mind); no such commitment underwrites any claim that we make in the book. The artificial agents we write about in the book often do rely on explicitly computational architectures, but the underlying paradigms of implementation often vary significantly. Rodney Brooks’ robots, for instance, rely on subsumption architectures that do not require explicit representations of the outside world, and as such address many of the concerns raised by Dreyfus- or Husserl-style critiques; network or connectionist models can similarly be plausibly understood as non-computational. Indeed, the claims we make in the book can be made independently of any internal architecture whatsoever: if beings with such capacities presented themselves, how would we react? What strategies do we have besides adopting one of the physical, design, or intentional stances and seeing which works?
This brings me to another of Andrew’s worries: that I’m privileging one particular philosophical tradition at the expense of another; more specifically, the Anglophone analytic tradition at the expense of the continental tradition. I do not think this is true. The approach described above is Wittgensteinian, but it is the Wittgenstein of the Philosophical Investigations, not of the Tractatus. And that is a distinction worth making.
In suggesting that in practice we adopt an intentional stance towards our fellow human beings and should adopt it for other beings as well, I am taking refuge in a kind of hermeneutics, a project of trying to interpret others as best we can, given our ends and means. My inspiration for trying to use the interpretive stance with artificial agents draws heavily on Daniel Dennett, but also on Richard Rorty’s treatment of classical philosophical problems in The Mirror of Nature; my philosophical outlook is currently most inspired by the triad of Schopenhauer, Nietzsche, and Freud, a triad that does not fit comfortably into the Anglophone analytic tradition of which Andrew thinks I’m a card-carrying member. (When I said “philosophically unfounded,” I meant that human chauvinism is philosophically unfounded; while reserving some special place for humans in our various orderings can be motivated, it must not be nakedly prejudicial in its dismissal of other entities’ claims.)
So, no, I’m not a straightforward analytically inclined philosopher. I was certainly trained as one, but in my impatience with notions of special human qualities, or of the singularity that human consciousness and intentionality represent, and in my desire to see humans as part of a naturalistic world order, I am perhaps drawing more broadly on Nietzsche than on anyone else. The clash, in fact, is not between the analytic and continental philosophical traditions, as Andrew seems to suggest, but between naturalistic and non-naturalistic views of man, between naturalistic and non-naturalistic vocabularies for describing man and society. In that sense, I take particularly or uniquely human capacities to be capable of emulation, imitation, and simulation by all kinds of beings. We might not want to extend the language of personhood and intentionality to those beings because we feel that some vitally human ends would be compromised by doing so, but that is a choice we will have to make, and we might have to weigh a host of other considerations as well.
It is possible that I have not done justice to Catholic or Rabbinical philosophy, but I have not studied those traditions adequately. If my failing is that I have not attended to the religious or theological sensibility, then I plead guilty; it wasn’t an organizing principle in my writing. But I do wish we humans would be less chauvinistic; it might help us treat animals better. You know, the ones reckoned mere machines or automatons by Descartes?
It is true we did not engage with Searle’s Chinese Room argument. Perhaps I spent too much time talking about it in graduate school; perhaps I’ve read too many dismissals of it. More to the point, I find Searle’s invocation of mysterious causal powers in the particular carbon-based biology of the brain hopelessly mysterious and chauvinistic. When I look closely at the human brain, I, too, find no genuine understanding or intentionality; I see cells, neurons, synapses, perhaps elaborate arrangements of biochemicals. Change the level of description and all appears mysterious; change it again and suddenly everything snaps into focus; our intentionality and our understanding are also mysterious when viewed at particular levels of description. “It’s just lines of code” or “it’s just ones and zeroes” is about as meaningful a criticism of artificial agents’ abilities as dismissing Shakespeare as “just ink marks on a page.” It is the investment of meaning that matters, the interpretation we assign, that makes the difference. There is no meaning in the lines of code; there is none in the neurons and synapses either; we are the ones who invest meaning.
In this sense, it is worth reminding ourselves that our analysis, and our reliance on interpretations driven by our ends, reserves the most important position, that of meaning-maker, for human beings; we remain the entities that invest meaning. Artificial agents get their meaning from us. There’s no need to feel so threatened.
Incidentally, the syllogism Andrew provided to try to unpack the claim we make about adopting the moral stance is slightly off in what it imputes to us regarding corporations. I tend to think (and certainly the citation omitted provides such arguments) that a perfectly good case can be made for considering corporations to be intentional entities, the best stance towards which is the intentional stance. Try describing the complex behavior of corporations always in terms of individual human beings; it’s not going to be easy. In general, some corporate actions simply will not make as much sense as they do when the corporation itself is considered the subject of the intentional stance, much as it would be idiotic to suggest that a human’s actions be cashed out in terms of neurons firing or cells acting or particles decaying; that simply isn’t the appropriate level of description, and we have a much better language of description available.
The point about a corporation’s moral sense followed from that: once an entity coherently becomes the subject of the intentional stance, we can start to isolate which of its beliefs qualify as moral and examine whether we can construct generalized predictions and explanations using those beliefs and desires (I have written a paper applying this to robots’ moral senses, which explores the idea a bit further).
Furthermore, corporations are large, complex entities; they are assemblages of humans, machines, and a whole lot else. To say that everything a corporation does is just humans acting seems to grant excessive agency to humans. I think it’s admirable that Andrew wants to keep humans front and center, but in doing so, let us also acknowledge that our finding human causes and agents everywhere is a choice forced upon us by the particular social relations we have constructed; when the society we construct has invested so much power in the hands of machinic-human assemblages, maybe our causal analysis will change as well.
When Andrew mentions the social implications of our suggested doctrine, it is worth noting that in Chapter 3 we explicitly point out that the privacy implications of our doctrine are extremely human-friendly. What the agency analysis taketh away with one hand, it giveth with the other. I think bot deployment should be an equal-opportunity activity; in other contexts, I have argued for modes of organization of the software development world that would certainly make it so. But yes, there is no getting around the fact that the agency doctrine in the context of contracting is friendly to the builders of agents, and they are currently financially powerful. But see the caveat about privacy above.
Andrew invokes kavod habriyot, respect for humanity, at the end of his post. I intend to respond to that separately (some of my response was already made in addressing his comment earlier today on my post on legal personhood).