LTAAA Symposium: Artificial Agents and the Law of Agency

I am gratified that Deborah DeMott, whose work on agency doctrines was so influential in our writing, has written such an engaged (and, if I may say so, positive) response to our attempt, in A Legal Theory for Autonomous Artificial Agents, to co-opt common law agency doctrine for use with artificial agents. We did so knowing the fit would be neither exact nor precise, and certainly would not mesh with all established intuitions.

Thus, in addressing DeMott’s concerns about whether agency doctrine can be tweaked so as to accommodate artificial agents, we acknowledge that our understandings of agency doctrine will have to be stretched and made more flexible. Such accommodations might be felt to be too onerous, but I think we made a good argument for how the accommodation could be carried out. I’m thinking in particular of DeMott’s concern about whether agents could manifest ‘consent’ or whether we could ‘delegate’ tasks to them. Our way of handling the former problem was to note that

Both the principal’s right of control of the agent by instructions or directions and the agent’s assent are capable of flexible interpretation; ascriptions of such control and assent are most plausibly made when doing so enables the most coherent, reasonable and fair understanding of a particular fact pattern pertaining to the interactions between the principal and agent. While it may seem artificial agents are incapable of “real consent” as required by agency principles, they will act on their principals’ behalf and subject to their control; such performance is in consonance with “the rationale behind the agent ‘consent’ requirement of traditional agency principles” [citation omitted].

Our ‘solution’ therefore hews consistently to a line of reasoning that we employ at many other points in the book: Can we interpret the agent’s actions as consent, or our interactions with it as delegation? If we do so, will its other actions and behaviors make sense and permit continued interpretation along the same lines? In the putative counterexample that DeMott provides of driving a car, it is not clear we are delegating mobility to the car in the way that we delegate responsibility to the shopping bot, or to the autonomous robot sent to explore the surface of Mars. The relationship between the driver and the car is tighter and more organic; more immediate attention is required. Delegation carries some connotation of autonomous performance; hence the constant need for managers to reduce control and supervision and embrace delegation. In driving a car, we don’t seem to delegate, but in sending a rover to Mars, we do. That’s the intuition we’d like to ride.

Similarly for consent, where we might seem merely to instruct an agent that complies. This might look like coerced behavior (“it did what it was told to”), and hence the notion that the agent lacks a ‘will’ seems to carry some force here. But again, as we note in the last chapter, even ‘will’ and ‘consent’ in the case of humans can be problematized by showing that human decisions are not straightforwardly ‘uncoerced’; conversely, the actions of the agent can be interpreted in such a way as to show it possessing a ‘degree’ of will that enables a not-incoherent comparison with human decisions. To reject the notion of a ‘will’ for agents is thus to run the risk of needing, for consistency’s sake, to do the same for humans. Here, as in almost all other scenarios involving comparisons with humans, our dominant, almost overriding first-person perspective may prevent us from seeing how acting in the presence of choices, even if driven by algorithmic or law-like constraints, can be an expression of ‘will’. (And yes, a gigantic philosophical debate lurks here!)

Three other points. First, DeMott wonders why we do not:

[D]eal with the fundamental challenge of accommodating agency relationships within conventional accounts of how contractual obligations are formed. Just as it is difficult to understand how a contract could be formed via an AAA when the parties’ intentions are not referable to a particular offer and acceptance (p. 34), so it seems to be a broader predicament how a principal could be bound by a contract entered into by an agent when the principal was unaware of the specifics of the offer or acceptance. How could the principal be bound when the principal has not consented to the particular transaction?

But I thought we had offered a solution along the same lines that DeMott herself notes when she says:

Agency resolves this predicament not by demanding transaction-by-transaction assent from the principal, but in characterizing the principal’s conferral of authority on the agent as an advance expression of the principal’s willingness to be bound which thereafter lurks in the background of the agent’s dealings with third parties. (Or appears so to do, when the principal can be bound only on the basis of the agent’s apparent authority.)

This treatment is implicit in both Chapter 1 and Chapter 2, where we begin to consider applying agency doctrine to the contracting problem:

The possibility of apparent authority provides an alternative to actual authority, and one which does not require a manifestation of assent by the principal to the agent. This also avoids the need to postulate the agent’s consent to such a manifestation. There would thus be sufficient reason to give rise to an agency by reason of the conduct of the principal alone in clothing the agent with authority (for instance, the initialization and configuration of a website along with its shopping agents, or the deployment of a mobile pricebot) and providing it with the means of entering into contracts with third parties (by running and maintaining its code). However, it might still be desirable to establish a workable concept of actual authority for an artificial agent, to accompany the apparent authority the agent has by virtue of the principal’s conduct vis-à-vis third parties. This would align the treatment in the common-law world with that in civil-law codes, where a contract between the agent and the principal is typically required to confer authority (Sartor 2002). In doing so the law could find an appropriate analog for an agreement between agent and principal in the agent’s instructions, whether expressed in a programming language or as a set of parameters or instructions, explicitly specifying the scope of the agent’s activities and the breadth of its discretion. Such an analog could also be found in the certification of an agent as capable of responding in the appropriate way to a comprehensive test of operational scenarios that demarcate its scope of authority.

Second, DeMott is concerned that we might be too uncritically accepting of the benefits of increased usage and deployment of artificial agents, and thus protective of principals with respect to legal or moral responsibility for their agents’ behavior. I’d suggest in response that this acceptance is not unqualified, as our treatment of knowledge attribution and privacy in Chapter 3 shows, and that our responses and doctrinal development are largely driven by noticing that our society seems to be headed down a path of such increased usage and deployment (some of which will be beneficial for us). This Pandora’s box is open; we are trying to accommodate its various, seemingly inevitable consequences, with mixed results. In saying this, I am not rejecting the idea of a critique of such acceptance; I have written in other areas about the desirable contours of our technologized society and do not aim to discard related lines of theorizing.

Lastly, I agree with DeMott that resolution of the roles of artificial agents will, and should, proceed on a case-by-case basis rather than be driven by legislation. That is the more desirable course, as it will do more justice to the varied capacities and domains of application of artificial agents. Our discussion in Chapter 1 of possible statutory change was largely meant to indicate how such change could proceed, not to endorse that strategy.

Incidentally, I’m curious to hear from DeMott whether she thinks the history of the common law doctrines of agency gives us cause for optimism that they will be flexible enough to accommodate artificial agents and their continued use in roles that resemble nothing so much as that of the old-fashioned legal agent. I do agree that the doctrines of agency will perhaps work best once artificial agents are granted some form of legal personality; perhaps the law will disdain the piecemeal solution we offer and make a more direct move in that regard.

I find it interesting that in his post Ian Kerr thought our attempt to employ the doctrine of legal agency trafficked in a doctrine too archaic to bear the weight of the diverse capacities of artificial agents and the variety of situations on which we will find them impinging. This is an ironic complaint for me to take on board, given that our more cautious position in this book was driven by constant warnings about the conservativeness of the law and the difficulty of inventing brand-new legal doctrines to deal with entities that can, to some extent, be analogized to extant beings (as, for instance, we do in our treatment of tort liability in Chapter 4). As I indicated in my response to Ian, I’m still keen to find out what he thinks of the treatment there, because it is not restricted to employing the law of agency and attempts to address the many different usages and capacities of artificial agents.

So perhaps we are both too cautious and too adventurous. That’s not a bad position to occupy, I suppose.