LTAA Symposium: Response to Matwyshyn on Artificial Agents and Contracting

Andrea Matwyshyn’s reading of the agency analysis of contracting (offered in A Legal Theory for Autonomous Artificial Agents and also available at SSRN) is very rigorous and raises some very interesting questions. I thank her for her careful and attentive reading of the analysis, and I will do my best to respond to her concerns here. The doctrinal challenges Andrea raises are serious and substantive for the extension and viability of our doctrine. As I note below, accommodating some of her concerns is the perfect next step.

At the outset, I should state what some of our motivations were for adopting agency doctrine for artificial agents in contracting scenarios (these helped inform the economic incentivizing argument for maintaining some separation between artificial agents and their creators or their deployers).


[A]pplying agency doctrine to artificial agents would permit the legal system to distinguish clearly between the operator of the agent, i.e., the person making the technical arrangements for the agent’s operations, and the user of the agent, i.e., the principal on whose behalf the agent is operating in relation to a particular transaction.


Embracing agency doctrine would also allow a clear distinction to be drawn between the authority of the agent to bind the principal and the instructions given to the agent by its operator.

Third, there was an implicit, unstated economic incentive.

The incentivizing intuition for maintaining separation between bots and their creators, or bots and their deployers, goes something like this (please understand that I am merely presenting the outlines of the prima facie economic intuition). Society wants the creators of artificial agents to continue supplying them, to continue work on their development and improvement, to continue making them more autonomous and ‘smarter’ or more ‘intelligent’, more capable of learning (in whatever sense of ‘learning’ we want to use for the moment, using the distinction that Harry Surden alludes to in his post), for the functionality they provide, for the economic and tangible benefits they bring to society. Agents that are ‘smarter’ and more capable of functioning autonomously will be capable of having more tasks delegated to them, more capable of taking on jobs that are too tedious for humans to perform even if they require some intelligence (for instance, intelligent personal assistants for sorting our emails and prioritizing our task lists), or perhaps too dangerous (robots looking through earthquake debris, for instance). Such autonomy permits the offloading of a greater diversity of tasks that require cognitive competence (or practical cognition, as Sartor termed it), so that we can get on to addressing more intractable work, work that perhaps requires even more human ingenuity.

Thus, what we want as a society is to encourage the creators of such artificial agents for the functionality they provide, for the social ends they help us realize. Similarly, we want to incentivize deployers of such artificial agents to use them, to make them available for our use. One way to incentivize such production and continued deployment would be to create some legal space between artificial agents and their creators, and between artificial agents and their deployers, one that acknowledges the significant differences between artificial agents that can roam the surface of Mars, or change in response to learning data, and tools like hammers or objects like cars. With such a legal space in place, the principals of such agents could deploy them in a variety of situations, remaining responsible for those tasks that were explicitly specified in the agent’s scope of authority; similarly, the makers and developers of artificial agents would be incentivized in a manner not unknown to the law, which has provided protections to other industries along the same lines.

(I should note that in Chapter 4, on tort liability, we qualify this protection; some material that was left out of the final version looked more deeply into the kinds of liability schemes that should be devised for industries engaged in the development of artificial agents. I should also point out that in that deleted material we made a strong argument that the software industry should be subjected to far greater regulation and stricter tort liability schemes; I will be happy to share this material with anyone who is interested.)

This intuition, now made explicit and visible, addresses some of Andrea’s recurring concerns in her post. I will try to address her concerns according to her numbered responses, but it should be kept in mind that the primary economic incentivizing argument will remain the one pointed to above.

1. Here, Andrea says

I pondered whether bots do indeed warrant special contract law rules.  How is a failure to anticipate the erratic behavior of a potentially poorly-coded bot not simply one of numerous categories of business risk that parties may fail to foresee?   Applying a contract law perspective, one might argue that the authors’ approach usurps for law what should be left to private ordering and risk management.  No one forces a party to use a bot in contracting; perhaps choosing to do so is simply an information risk that should be planned around with insurance?

Our response is that the crucial distinction we tried to make is between a bot that behaves “erratically or unpredictably” and one that behaves normally, but in a manner unforeseen by its maker or deployer, not because it malfunctioned but because it had changed in response to learning data and was making “novel” decisions. (This also addresses Andrea’s concern #5.) And indeed, no one forces anyone to use a bot for contracting, but e-commerce would come to a grinding halt if bots were not used; it is entirely implausible that we can now turn the clock back on electronic contracting conducted without human intermediaries. This, of course, prompted our consideration that some economic incentives could be put into place for makers and deployers of artificial agents. The primary e-commerce transaction remains the formation of contracts without human intermediation; it is this category of transaction that we aim to incentivize. Again, Andrea wonders whether

Perhaps encouraging the use of more humans in operations and contracting is instead the preferable policy goal and the one that warrants the more protective legal regime?

As before, this seems implausible in light of the ever-increasing footprint of e-commerce transactions.

2. Andrea is right to point out that we might need to

[T]emper this analysis by recognizing that contract law as embodied by the UCC and caselaw is not concerned solely or even primarily with efficiency in contractual relationships.

This is something to address in future work. (Provided we can agree on the proper reading of the UCC and the relevant caselaw!) Andrea then asks,

Again, to what extent are such bot dynamics truly unforeseeable?  Can it be argued that coding up your bot to offer very specific deal terms when a consumer clicks on something constitutes an indication of actual knowledge and intention similar to a price list?

But the scenarios we have in mind are not so easily described; rather, the agent would “be programmed so as to change the contractual terms offered autonomously without referring back to the principal, e.g., to reflect changes in market prices.” (We offer citations of work done in so-called ‘dynamic pricing’.) Andrea refers to “real space contract cases,” but were not human legal agents, as opposed to mere tools or instrumentalities conveying the intentions of the principal, involved there? Of course, the agent is programmed to change these terms autonomously, but then so are human beings instructed to carry out the instructions of their principals.
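For concreteness, the kind of autonomy at issue can be sketched in a few lines of code (a purely hypothetical illustration: the function and policy names are invented, and a real dynamic-pricing agent would be far more elaborate). The principal authorizes only a pricing policy; the concrete price term offered in any given transaction is derived by the agent from the current market price, without referral back to the principal:

```python
# Hypothetical sketch of a dynamic-pricing contracting bot.
# The principal authorizes a *policy* (markup over market, price floor);
# the bot derives each concrete offer term autonomously.

def make_offer(market_price, policy):
    """Return the price term offered for this transaction.

    The principal never approves this particular number; they
    authorized only the policy under which it was computed.
    """
    price = market_price * (1 + policy["markup"])
    return round(max(price, policy["floor"]), 2)

policy = {"markup": 0.10, "floor": 5.00}  # the principal's standing instructions

# As the market moves, the offered terms change without any
# referral back to the principal:
print(make_offer(10.00, policy))  # 11.0 (10% over market)
print(make_offer(4.00, policy))   # 5.0  (the floor binds)
```

The point of the sketch is that neither offered term appears anywhere in the principal’s instructions; both are generated by the agent in response to market conditions.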

3. We thank Andrea for pointing us to Facebook v. Power Ventures and US v. Lowson, and owe her a longer response on this point. The extension of this doctrine to the three cases mentioned (especially Ticketmaster v. Tickets, which we noted only in passing as an example of a trespass-to-chattels case in the torts chapter) seems like a good next step in proving the viability of this doctrine. Similarly, the intersection with information security concerns warrants a closer look in the next edition of our work.

4. Andrea makes a fair point here; the example she provides would indeed be one where closer scrutiny of the bot’s software engineering practices would be justified. Some of the material in Chapter 4 on tort liability might address Andrea’s concerns, especially when artificial agents are treated as products. The material in an earlier version of the book that dealt specifically with software engineering duties of care would also address her worries about the duties of care to be placed on creators of bots.

5. Here Andrea says,

For example, the authors reference situations where the bot autonomously “misrepresents” information that its wielding party would not approve.  Is it not perhaps more accurate to say that the bot contains programming bugs its wielding party failed to catch and rectify?

Again, the crucial distinction is between a bot that has malfunctioned and one that, because of its particular architecture and its capacity to respond to learning data, was able to generate a novel and unexpected response. The bug-feature distinction is important here; we make systems sophisticated enough that they become capable of surprising us, sometimes, perhaps, unpleasantly. The truly useful artificial agents are those capable of such surprise. (A credit-card risk-assessing agent might start enforcing certain forms of racial discrimination that its principals would not approve of; what is socially a ‘bug’ was, technically, built in as a ‘feature’; here a technical outcome clashes with broader social imperatives.)
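The malfunction/novelty distinction can be made concrete with a toy sketch (entirely hypothetical; the class and its deliberately trivial learning rule are invented for illustration). The code below contains no defect and executes exactly as written, yet its decisions are fixed by the training data rather than by any rule its deployer explicitly wrote down:

```python
# Hypothetical sketch of the malfunction / novelty distinction.
# The agent executes exactly as written (no bug), yet its
# accept/reject decisions are determined by the training data,
# not by any cutoff its deployer explicitly specified.

class ThresholdAgent:
    def __init__(self):
        self.threshold = 0.0  # learned from data, not hand-coded

    def train(self, examples):
        # Trivial learning rule: set the threshold to the mean
        # score of previously accepted transactions.
        accepted = [score for score, ok in examples if ok]
        if accepted:
            self.threshold = sum(accepted) / len(accepted)

    def decide(self, score):
        return score >= self.threshold

agent = ThresholdAgent()
agent.train([(0.9, True), (0.7, True), (0.2, False)])

# The deployer never chose this cutoff; the data did.
print(agent.threshold)    # approximately 0.8
print(agent.decide(0.75)) # False: below the learned threshold
```

A malfunction would be a divergence between the code and its specification; here there is none, and any surprise lies in what the data induced the agent to decide.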

Andrea says,

Many scholars would argue that, much like having a pet, using a code-based creation such as a bot in contracting is a choice and an assumption of responsibility.  Both dogs and bots are things that are optional and limited in their capacities: we choose to unleash them on the rest of the world.

I think this is a misleading analogy in one respect: the functionality of the artificial agent is by and large a desirable one, not one borne in sufferance by the world; it is not a personal indulgence of ours; the makers and deployers of artificial agents provide services by their use of bots. We could, if we chose, simply decline all forms of electronic contracting and indeed decline e-commerce altogether. But if we don’t, and if we, as a society, show our preference for increasing automation and for the delegation of tasks that require practical cognition, then some incentives seem necessary for the makers and deployers of bots.

In sum: I think the responses we can make to Andrea’s concerns will largely seek refuge in two strategies: providing incentives for designers and deployers of artificial agents, in light of the role played by e-commerce today and of our society’s adoption of technologies of automation; and distinguishing between the faulty, erratic performance of an artificial agent and its genuinely novel functionality.

As others have noted, the deployment of agency doctrine has mixed outcomes for corporate entities: in the case of contracting, it might be felt to produce corporate-friendly outcomes; in the case of knowledge attribution and privacy, arguably, non-corporate-friendly outcomes result. What are we to do? The answer lies somewhere in the middle; perhaps agency doctrine might be extended in one area and not in the other; perhaps courts will see the economic analysis as viable in one area and not in another.

Once again, I thank Andrea for her careful and rigorous response.
