The Law of the Fire

A corporation, it is said, “is no fiction, no symbol, no piece of the state’s machinery, no collective name for individuals, but a living organism and a real person with a body and members and a will of its own.” A ship, described as a “mere congeries of wood and iron,” on being launched, we are told, “takes on a personality of its own, a name, volition, capacity to contract, employ agents, commit torts, sue and be sued.” Why do lawyers and judges assume thus to clothe inanimate objects and abstractions with the qualities of human beings?

The answer, in part at least, is to be found in characteristics of human thought and speech not peculiar to the legal profession. Men are not realists either in thinking or in expressing their thoughts. In both processes they use figurative terms. The sea is hungry, thunder rolls, the wind howls, the stars look down at night, time is not an abstraction, rather it is “father time” or the “grim reaper”…

Bryant Smith, Legal Personality, 37 Yale Law Journal 283, 285 (1928)

What are the qualities of artificial agents that make them different from the howling wind, the rolling thunder, the staring stars? In A Legal Theory for Autonomous Artificial Agents, Samir Chopra and Laurence White adopt an “intentional stance” toward certain categories of software. According to the authors, “an artificial agent could, and should, be understood as acting for reasons that are the causes of its actions, if such an understanding leads to the best interpretation and prediction of its behavior.” The book’s titular theory, the one that “underpins the book,” is that “an agency law approach to artificial agents is cogent, viable, and doctrinally satisfying.” The authors retain this commitment right through the final chapter on personhood: “The best legal argument for denying or granting artificial agents legal personality will be pragmatic rather than conceptual.”

Interesting stuff. But what is the “best interpretation”? What counts as “doctrinally satisfying”? Say I think the state ought to punish a man who sets a fire causing the death of another, even where it cannot be established that the man’s action was willful and malicious. I check the books; there is no crime of negligent arson in my jurisdiction.

And yet… fire has a life and purpose all its own. Predicting fire’s complex behavior means thinking about fire as acting not blindly, but for reasons. “It’s a living thing, Brian,” Robert De Niro’s character tells William Baldwin’s in the 1991 film Backdraft. “It breathes, it eats, and it hates. The only way to beat it is to think like it. To know that this flame will spread this way across the door and up across the ceiling, not because of the physics of flammable liquids, but because it wants to.”

In other words, say I take an intentional stance toward fire. Perhaps I am now free to conclude on this basis that the person who started the fire—unlawfully, yet without an intent to kill anyone—is, in fact, the fire’s guardian or accomplice.

Or say I agree with Matthew Tokson that a user has not shared his email with Google for purposes of the third party doctrine merely because company software automatically filters spam or targets ads. Chopra and White believe that treating this software as an agent of Google will better protect privacy. But were a court to hold that Google’s algorithms “know” my emails the way a human employee would—and, accordingly, that I no longer have a reasonable expectation of privacy under United States v. Miller because I happen to run the spell checker—I might be very unsatisfied indeed.

Chopra asks in his opening post: “Are autonomous robots really just the same as tools like hammers?” I don’t know. That’s the work of a legal theory for autonomous agents. Such a theory requires a full set of criteria for when a mere congeries of code becomes an agent. And it needs some yardstick by which to assess pragmatic or functional goals. Meanwhile, neither the criteria nor the yardstick can rest too heavily on those “characteristics of human thought” that lead people to anthropomorphize the sea, thunder, wind, the stars, and so on.

Don’t get me wrong. There is a lot to like in this book, and I recommend it to anyone interested in liability for complex software. I hope it gets the conversation going at the upcoming We Robot conference in Miami. But I’m not convinced that the book advances a theory.

2 Responses

  1. Samir Chopra says:

    Ryan,

    I intend to write a longer reply to your post, but for the time being, I’m curious to know whether you read Chapters 1, 2, and 3 in their entirety, and whether you actually engaged with the arguments advanced in those chapters for considering artificial agents as legal agents for the purposes of contracting and for the purposes of knowledge attribution. Your failure to address the modest change in doctrine suggested by the contracting problem, especially the careful, incremental, risk-allocation-based argument proposed there, suggests to me that you haven’t; your one-paragraph dismissal of the chapter-length knowledge attribution and privacy argument is entirely too glib as well.

    In the end, you sprinkle red herrings all over the place, suggesting that we are advocating legal personhood, as opposed to suggesting that no conceptual barriers stand in its way and that coherent philosophical and legal arguments could be made for it pending the attainment of technical capacities. (For instance, in the quote you provide, you slide from us advocating an agency law approach to advocating legal personhood; we make it clear that we favor legal agency without legal personhood first, as an incremental change.) It is also clear that you don’t (or choose not to) understand the conditional nature of the claim we make when we say that the intentional stance is to be chosen if it results in the best predictive and explanatory position.

    Read James Grimmelmann’s post; he gets to the heart of the matter when he notes that the complexity of these systems is key. Your example of the fire is silly; you are the one dabbling in inappropriate metaphor here; we always have the physical mode of description available as the best explanatory device. We note in the book that the intentional stance will become the best strategy when we lose epistemic hegemony over these agents; on other occasions it will be available to us, and we can use it to facilitate certain kinds of discourse, as when we want to treat artificial agents as legal agents.

  2. Jordan J. Paust says:

    Believe it or not, some of this is before the S.Ct. in Kiobel: whether a corporation can have duties (and, I suppose, rights) under international law (treaty-based or customary international law). The answer should be simple for the Justices because some 20 Supreme Court cases have already recognized that a corporation or company can have duties or rights under international law! Additionally, cases have recognized that vessels can have duties under international law. See Nonstate Actor Participation in International Law and the Pretense of Exclusion, 51 Va. J. Int’l L. 977, 978 n.2, 986-92 (2011), available at http://ssrn.com/abstract=1701992
    Will each of the Justices actually pay attention to the 20 S.Ct. cases regarding corporate and company duties and rights?