Camel, Weasel, Whale

Samir Chopra—whom I consider to be something of a pioneer in thinking through the philosophic and legal issues around artificial intelligence—did not much care for my initial thoughts about his and Lawrence White’s new book, A Legal Theory For Autonomous Agents. The gist of my remarks was that, while interesting and well researched, the book does not deliver on its promise of advancing “a legal theory.” Mostly what the book does (I read the book cover to cover, as you can see!) is identify new and old ways the law might treat complex software to advance various, seemingly unrelated goals. The book is largely about removing conceptual obstacles to treating software as “agreeing,” “knowing,” or “taking responsibility,” should we be inclined to do so in particular cases for independent policy reasons.

In the second chapter of the book, for instance, Chopra and White argue that treating software capable of calculating, offering, and appearing to accept terms as legal agents is not only coherent, but results in greater economic efficiency. The upshot is lesser contractual liability than if the software were treated as a mere instrument because, in instances where the software makes the right kind of mistake, the entity that deployed it—usually a sophisticated corporation—will not be held to the agreement. In the third chapter, the authors abandon economic efficiency entirely. Here the argument is that we ought to look to agency law in order to attribute more information to corporations because “[o]nly such a treatment would do justice to the reality of the increased power of the corporation, which is a direct function of the knowledge at its disposal.” In other words, by treating a corporation’s software as agents rather than tools, the law can either limit corporate liability for reasons of efficiency or expand it for reasons of fairness.

In reply to my initial post, Chopra writes:

It is also clear that you don’t (or choose not to) understand the conditional nature of the claim we make when we say that intentional stance is to be chosen if it results in the best predictive and explanatory position. Read James Grimmelmann’s post; he gets to the heart of the matter when he notes that the complexity of these systems is key. Your example of the fire is silly; you are the one dabbling in inappropriate metaphor here; we always have the physical mode of description available here as the best explanatory device. We note in the book that the intentional stance will become the best strategy when we lose epistemic hegemony over these agents; on other occasions it will be available to us and we can use it to facilitate certain kinds of discourse – as in when we want to treat artificial agents as legal agents.

The exact opposite is true. I see very well that Chopra and White would adopt the intentional stance only if doing so “results in the best predictive and explanatory position.” I am trying to figure out what “the best predictive and explanatory position” might be. The point of each of my examples is that reasonable minds will disagree about “the best explanatory device” for a given phenomenon; they will disagree as to whether it is better, as a policy matter, for the law to treat Google’s algorithm as though it were a human employee. Perhaps it is better from the perspective of consumer privacy but worse from that of government surveillance.

A Legal Theory For Autonomous Agents continues the project—dating back to at least Sam Lehman-Wilzig’s 1981 essay Frankenstein Unbound—of identifying options for how the law might treat software of ever-increasing complexity and independence. Maybe complex software is like a child, a slave, a corporation, a ship, an animal, an agent. We are still like Polonius and Hamlet discussing the cloud. I admire Chopra and White’s book for its sustained attention to agency law as a possible means to reach desirable results in some cases involving complex software. I admire its commitment to interdisciplinary study. But I do not see the book as delivering “a prescriptive legal theory to guide our interactions with artificial agents.” Chopra and other participants are free to disagree.

2 Responses

  1. Samir Chopra says:

    Ryan,

    Yes, indeed, things are complicated when it comes to artificial agents. They don’t fit easily into established legal categories, and they don’t fit easily into our conceptual ones. Thinking about them is going to be hard, and it’s going to require computer scientists, lawyers, philosophers, and legislators to devise a full-blown theory to get things ‘right.’ Somewhere along the way, we as a community need to figure out what the “best predictive and explanatory position” is – figuring that out will require knowing what our ends are, and how we want to get there. As the examples of contracting and privacy show, treating artificial agents as legal agents will advance economic efficiency in one domain and not in another. It might result in corporate-friendly outcomes in one domain, and in another it might not. These considerations will be in the mix when legislators, courts, and law professors writing law review articles think about whether artificial agents should be legal agents or not.

    My book is a contribution to that debate; no more, no less.

  2. Ryan Calo says:

    It sounds to me as though we are on the same page. I certainly agree that the book represents a significant contribution to the fledgling literature here. Please don’t read my previous posts as indicating otherwise!