Category: Articles and Books


Don’t use et al.

As a co-authored piece just recently reminded me, I’ve a huge grudge against the Bluebook.  (Which hasn’t yet escalated on their side to using me as an example of a but see.  Or worse!  Actually, I’m not sure that the great platonic Bluebook guardians even know I’m mad at them.)  As I wrote in 2007:

“Rule 15.1 states that when there are two or more authors, you have a choice:

Either use the first author’s name followed by “ET AL.” or list all of the authors’ names. Where saving space is desired, and in short form citations, the first method is suggested . . . Include all authors’ names when doing so is particularly relevant.

This seems to me to express a pretty strong non-listing preference. The “problem” is that much good interdisciplinary work results from collaborations among more than two authors – it is the nature of the beast . . . This seems like a trivial objection, but it will take on increasing weight over the next ten years as empirical legal studies really comes online in the major law reviews.”

The trend toward interdisciplinary, multi-authored pieces continues.  And though it’s true that law reviews are a dying beast, there is still no good reason at all for omitting the names of authors in the first footnote in which a work is cited. “Saving space” is a terrible argument: we could save space by getting rid of useless and often inaccurate parentheticals “explaining” the source, often written by cite-checking second-year students.

If I were running a law review seeking to differentiate itself, or an author negotiating with a few journals, my deal points would be: (1) color graphics on the web version of the article; and (2) no et al. usage.  That has to be more constructive and useful than “lead article” status!


LTAAA Symposium: Complexity, Intentionality, and Artificial Agents

I would like to respond to a series of related posts made by Ken Anderson, Giovanni Sartor, Lawrence Solum, and James Grimmelmann during the LTAAA symposium. In doing so, I will touch on topics that came up many times in the debate here: the intentional stance, complexity, legal fictions (even zombies!) and the law. My remarks here will also respond to the very substantive, engaged comments made by Patrick O’Donnell and AJ Sutter to my responses over the weekend. (I have made some responses to Patrick and AJ in the comment spaces where their remarks were originally made.)

Read More


LTAAA Symposium: Response to Pagallo on Legal Personhood

Ugo Pagallo, with whom I had a very useful email exchange a few months ago, has written a generous response to A Legal Theory for Autonomous Artificial Agents.  I find it valuable because I think that on each of his four allegedly critical points, we are in greater agreement than Ugo imagines.
Read More


LTAAA Symposium: Response to Surden on Artificial Agents’ Cognitive Capacities

I want to thank Harry Surden for his rich, technically informed response to A Legal Theory for Autonomous Artificial Agents and, importantly, for seizing on a key distinction we make early in the book when we say:

There are two views of the goals of artificial intelligence. From an engineering perspective, as Marvin Minsky noted, it is the “science of making machines do things that would require intelligence if done by men” (Minsky 1969, v). From a cognitive science perspective, it is to design and build systems that work the way the human mind does (Shanahan 1997, xix). In the former perspective, artificial intelligence is deemed successful along a performative dimension; in the latter, along a theoretical one. The latter embodies Giambattista Vico’s perspective of verum et factum convertuntur, “the true and the made are…convertible” (Vico 2000); in such a view, artificial intelligence would be reckoned the laboratory that validates our best science of the human mind. This perspective sometimes shades into the claim artificial intelligence’s success lies in the replication of human capacities such as emotions, the sensations of taste and self-consciousness. Here, artificial intelligence is conceived of as building artificial persons, not just designing systems that are “intelligent.”

The latter conception of AI as being committed to building ‘artificial persons’ is what, it is pretty clear, causes much of the angst that LTAAA’s claims seem to occasion. And even though I have sought to separate the notion of ‘person’ from that of ‘legal person,’ it seems that some conflation has continued to occur in our discussions thus far.

I’ve personally never understood why artificial intelligence was taken to be, or ever took itself to be, dedicated to the task of replicating human capacities, faithfully attempting to build “artificial persons” or “artificial humans”. This always seemed like such a boring, pointlessly limited task. Sure, the pursuit of cognitive science is entirely justified; the greater the understanding we have of our own minds, the better we will be able to understand our place in nature. But as for replicating and mimicking them faithfully: why bother with the ersatz when we have the real? We already have a perfectly good way to make humans or persons, and it is far more fun than doing mechanical engineering or writing code. The real action, it seems to me, lies in seeing how we could replicate our so-called intellectual capacities without particular regard for the method of implementation; if the best method of implementation happened to be one that mapped well onto what seemed like the human mind’s way of doing it, that would be an added bonus. The multiple realizability of our supposedly unique cognitive abilities would do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke a sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy, or freedom of will.

Having said this, I can now turn to responding to Harry’s excellent post.
Read More


LTAAA Symposium: Response to Sutter on Artificial Agents

I’d like to thank Andrew Sutter for his largely critical, but very thought-provoking, response to A Legal Theory for Autonomous Artificial Agents. In responding to Andrew I will often touch on themes I have already tackled; I hope the repetition comes across as emphasis rather than redundancy. I am also concentrating on the broader themes in Andrew’s post, as opposed to the specific doctrinal concerns (like service of process or registration; my attitude in these matters is that the law will find a way if it can discern the broad outlines of a desirable solution ahead; service of process seemed intractable for anonymous bloggers, but it was solved somehow).
Read More


LTAAA Symposium: Artificial Agents and the Law of Agency

I am gratified that Deborah DeMott, whose work on agency doctrines was so influential in our writing, has written such an engaged (and, if I may say so, positive) response to our attempt, in A Legal Theory for Autonomous Artificial Agents, to co-opt common law agency doctrine for use with artificial agents. We did so knowing the fit would be neither exact nor precise, and certainly would not mesh with all established intuitions.
Read More


LTAAA Symposium: Legal Personhood for Artificial Agents

In this post, I’d like to make some brief remarks on the question of legal personhood for artificial agents and, in so doing, offer a response to Sonia Katyal’s and Ramesh Subramanian’s thoughtful posts on A Legal Theory for Autonomous Artificial Agents. I’d like to thank Sonia for making me think more about the history of personhood jurisprudence, and Ramesh for prompting me to think more about the aftermath of granting legal personhood, especially the issues of “Reproduction, Representation, and Termination” (and for alerting me to Gillick v West Norfolk and Wisbech Area Health Authority).

I have to admit that I don’t have, as yet, any clearly formed thoughts on the issues Ramesh raises. This is not because they won’t be real issues down the line; indeed, I think automated judging is more than just a gleam in the eye of the folks who attend ICAIL conferences. Rather, those issues will perhaps snap into sharper focus once artificial agents acquire more functionality, become more ubiquitous, and, more interestingly, come to occupy roles formerly occupied by humans. Then, I think, we will have a clearer idea of how to frame those questions more precisely with respect to a particular artificial agent and a particular factual scenario.
Read More


Artificial Agents and the Law: Some Preliminary Considerations

I am grateful to Concurring Opinions for hosting this online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion here; I’m hoping that once the book has been read and its actual arguments engaged with, we can have a more substantive discussion. (I notice that James Grimmelmann and Sonia Katyal have already posted very thoughtful responses; I intend to respond to those in separate posts later.)

Last week, I spoke on the book at Bard College to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively, and our conversations continued over dinner later. Some of the questions directed at me are quite familiar by now: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or possibly even beyond? How can an artificial agent, which lacks the supposedly distinctively human characteristics of <insert consciousness, free will, rationality, autonomy, subjectivity, phenomenal experience here>, ever be considered an “agent” or a “person”? Aren’t you abusing language when you say that a program or a robot can be attributed knowledge? How can those kinds of things ever “know” anything? Who is doing the knowing?

I’ll be addressing questions like these and others during this online symposium; for the time being, I’d like to make a couple of general remarks.

The modest changes in legal doctrine proposed in our book are largely driven by two considerations.

First, existing legal doctrine in a couple of domains, most especially contracting, which kicks off our discussion and serves as the foundation for the eventual development of the book’s thesis, is placed under considerable strain by its current treatment of highly sophisticated artificial agents. We could maintain current contracting doctrines as is (i.e., merely tweak them to accommodate artificial agents without changing their status vis-a-vis contracting), but we would run the risk of imposing implausible readings of contract theories in doing so. This might be seen as a reasonable price to pay so that we can maintain our intuitions about the kinds of beings some of us take artificial agents to be. I’d suggest that this retention of intuitions becomes increasingly untenable when we see the disparateness in the entities that are placed in the same legal category. (Are autonomous robots really just the same as tools like hammers?) Furthermore, as we argue in Chapters 1 and 2, there is a perfectly coherent path, both philosophically and legally, by which we can start to consider such artificial agents as legal agents (perhaps without legal personality at first). The argument in Chapter 1 lays out a prima facie case for considering them legal agents; that in Chapter 2 suggests they be considered legal agents for the purpose of contracting. As we say in Chapter 2, “The most cogent reason for adopting the agency law approach to artificial agents in the context of the contracting problem is to allow the law to distinguish in a principled way between those contracts entered into by an artificial agent that should bind a principal and those that should not.”

Which brings me to my second point. A change in legal doctrine can bring about better outcomes. One of the crucial arguments in our Chapter 2 (one I really hope readers engage with) is an assessment of the economic dimension of contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal changes, but in this case, I’d suggest the risk allocation does work out better. As we note, “Arguably, agency law principles in the context of contracting are economically efficient in the sense of correctly allocating the risk of erroneous agent behavior on the least-cost avoider (Rasmusen 2004, 369). Therefore, the case for the application of agency doctrine to artificial agents in the contractual context is strengthened if we can show similar considerations apply in the case of artificial agents as do in the case of human agents, so that similar rules of apportioning liability between the principal and the third party should also apply.”

And I think we do.

An even stronger argument can be made when it comes to privacy. In Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents. (This follows on the heels of an analysis of knowledge attribution to artificial agents, so that they can be considered legal agents for the purpose of knowledge attribution.)

Much more on this in the next few days.


The Daily You: A Mandatory Read

Over at Business Insider, Doug Weaver has a terrific review of our guest blogger Joe Turow’s new book The Daily You, demonstrating its practical importance to people in the field like Weaver, as well as to policymakers and scholars. Here’s the review:

Listening to the insider discussions and industry reporting about online marketing provides a numbing sense of false comfort.  But every so often, we go outside the bubble and hear civilians talking about what we do.  I’m sure most of us have had someone at a party or family gathering share their ‘creeped out’ moment: that instance where they finally saw clearly that somehow they were being ‘followed’ online.  Other times, they offer us largely unformed general concerns about online privacy: they don’t really have a sense of what’s going on, but they instinctively know they don’t like it.  And once in a great while you’ll hear from someone who’s really done their homework and brings crystal clarity to the issue from the consumer point of view.

That moment came for me when I stumbled on an NPR radio interview with Joseph Turow, author of “The Daily You: How the New Advertising Industry is Defining Your Identity and Your Worth.”  After using up my ten-minute commute, I found myself sitting in my car in the parking lot of my office for another 30 minutes just listening to this guy.  It was kind of like hearing someone talk about you in a bathroom when they don’t know you’re in one of the stalls.  Except they’re totally getting it right.  Turow, an associate dean at the Annenberg School for Communication at Penn, has done a lot of homework.  The book is detailed and rigorous, but also extremely accessible to the curious consumer.  While it’s probably not going to sell millions of copies, I believe it’s going to be a hugely influential and important book for several reasons.

  • To my knowledge, it’s the first crossover book that has attempted to explain in great detail our industry’s use of data to the consumer.  And while explaining it all to the consumer, Turow also explains it all to the business and consumer press.  Perhaps for the first time, they will really understand the digital marketing ecosystem.  And that understanding is almost certain to drive a lot more reporting.  Expect a lot more stories like the Wall Street Journal’s 2010 “What They Know” series, only better informed.
  • “The Daily You” is also clear-eyed and inclusive.  Turow is not a wild-eyed privacy crusader tilting at windmills.  A walk through his index and end notes is like thumbing through a digital marketing “who’s who” — you’ll recognize a lot of names, companies and concepts right off the bat.
  • And finally, the book builds an intellectual bridge to a very powerful idea: that on some level this is not just a privacy issue, but a human rights issue.  For Turow, the real issue is the digital caste system being imposed on consumers without their knowledge or consent.  Over time, one consumer will enjoy better discounts and better access to quality brands and offers than his less fortunate counterpart.  Perhaps more important are the ways in which these two consumers’ content experiences will diverge as a result of all the profiling that’s been done.  Like it or not, each of us is getting an online data version of an invisible credit score.  Turow gets this and his readers will too.

For my money, “The Daily You” should be a mandatory read for anyone in our industry.  It’s the beginning of an important new conversation about sustainable and inclusive data practices, a conversation that will form much quicker than many of us might imagine.


Contracts in the Real World: Ready for Pre-Ordering

This new book on contracts, regaling readers with stories ripped from the headlines, will be published soon and can be pre-ordered now from fine booksellers.

Contracts in the Real World: Stories of Popular Contracts is intended to be a fun, fast, reliable read. It is very useful for 1Ls struggling with the subject, perfect for anyone thinking about going to law school, and designed to entertain devotees of pop culture. It will also captivate experts in contract law by connecting current events with venerable principles and classic cases.

Stories feature such notables as Eminem, Lady Gaga, Charlie Sheen, Donald Trump, and Sandra Bullock, as well as examples such as your cell phone contract, a lottery-sharing partnership, and an online privacy policy.

List price is $33. The table of contents follows. 

Read More