Our Bots, Ourselves

In an extremely forward-looking and thought-provoking book, Samir Chopra and Laurence F. White rekindle important legal questions with respect to autonomous artificial agents, or bots.  It was a pleasure to engage with the questions that the authors raise in A Legal Theory for Autonomous Artificial Agents, and the book is a valuable scholarly contribution.  In particular, because of my own research interests, Chapter 2, "Artificial Agents and Contracts," was of special interest to me.

In Chapter 2, the authors apply the agency theory they advocate in Chapter 1 to the context of contracts.  They challenge the view that bots are "mere tools" used by contracting parties as extensions of the self.[1]  In doing so, they draw distinctions between "closed" and "open"[2] systems and various theoretical types of bots, arguing that parties who use bots in contracting should be protected from contract liability in some cases of bot error or malfunction.  From my reading, they argue in favor of using principles of agency law to replace some traditional contract law constructs when bots are involved in contracts.

Their argument is nuanced and thoughtful from an economic and agency law perspective.  In the comments that follow, I raise five sets of questions for thought, admittedly from the perspective of my own research on contract law, consumer privacy in technology contexts, and information security law.

1. Private ordering and accepting responsibility for imprudent technology risks.   The authors are concerned with providing better liability protection to contracting parties who use bots.  They assert that “[a] strict liability principle [which views bots as mere tools or means of communication] may not be fair to those operators and users who do not anticipate the contractual behavior of the agent in all possible circumstances and cannot reasonably be said to be consenting to every contract entered into by the agent.”[3]   As I was reading this chapter, I pondered whether bots do indeed warrant special contract law rules.  How is a failure to anticipate the erratic behavior of a potentially poorly-coded bot not simply one of numerous categories of business risk that parties may fail to foresee?   Applying a contract law perspective, one might argue that the authors’ approach usurps for law what should be left to private ordering and risk management.  No one forces a party to use a bot in contracting; perhaps choosing to do so is simply an information risk that should be planned around with insurance?[4]

2. Traditional contract law breach and damages analysis and the expectations of the harmed party.  The authors opt away from a discussion of traditional breach analysis and damages remedies when addressing bot failures.  Instead, they apply a tort-like calculation of a lowest-cost-avoider principle, which they argue "correctly allocate[s] the risk of specification or induction errors to the principal/operator and of malfunction to the principal or user, depending on which was the least-cost avoider."[5]  However, should we perhaps temper this analysis by recognizing that contract law as embodied by the UCC and case law is not concerned solely or even primarily with efficiency in contractual relationships?  How does the authors' efficiency analysis square with traditional consideration sufficiency (versus adequacy) analysis, where courts regularly enforce contracts with bad deal terms, choosing not to question the choices of the parties?  A harmed consumer who was not using a bot in a contract, pitted against a sophisticated company using a poorly-coded bot (because it chose to hire a bargain programmer), may indeed have inefficient needs, but is not the consumer the party more in need of the court's protection as a matter of equity?[6]

The authors note that, for example, prices quoted by a bot are akin to pricing details provided by a third party – a scenario that they assert may make it unfair to bind the bot-using party to the terms of a contract executed by his bot when he does not pre-approve each individual deal: "In many realistic settings involving medium-complexity agents, such as modern shopping websites, the principal cannot be said to have a pre-existing 'intention' in respect of the particular contract that is 'communicated' to the user."[7]  Again, to what extent are such bot dynamics truly unforeseeable?  Can it be argued that coding up your bot to offer very specific deal terms when a consumer clicks on something constitutes an indication of actual knowledge and intention similar to a price list?  Is a coding error with a wrong price not simply akin to a mismarked price tag in real space?  But even assuming that we agree with the argument that coding up a bot is a relinquishment of control to a third party of sorts, how would the bot dynamics at issue differ from those in real space contracts where prices are specified using a variable third-party index or where performance details are left variable – dynamics that have been found unproblematic in real space contract cases?[8]

3.  The bot problems that currently exist in contract law.  The authors take us through two cases with respect to bots – eBay v. Bidder's Edge and Register.com v. Verio – analyzing them primarily through the lens of tort, particularly trespass to chattels.  I found myself wondering about the authors' agency analysis in the contract-driven bot cases where the trespass to chattels line of argument was deemed unpersuasive.  For example, how would the authors' agency analysis apply in the context of the two Ticketmaster v. Tickets.com bot cases, particularly the second, where the trespass to chattels claim was dismissed and the contract count was the only count to survive summary judgment?  Also, I would be curious to hear more about the extrapolation of their agency approach to the current wave of bot cases that blend contract formation questions with allegations of computer intrusion, such as Facebook v. Power Ventures and United States v. Lowson.

4.  Duties of information security.  Turning to information security, the authors point out that a party may try to hack a bot used by the other party in order to gain a contracting advantage.[9]  While this is a valid computer intrusion concern, another pressing contract concern is that a malicious third party (who is not one of the parties seeking to be in privity) will choose to hack the bot to steal money on an ongoing basis.  If the bot is vulnerable to easy attack because of information security deficits in its coding, should the party using it get a free pass for its failure to exercise due care in information security?  Is it fair to impose information security losses on the other contracting party, who was prudent enough not to use a vulnerable bot in contracting?  Would a straightforward 'your vulnerability, your responsibility' approach create better incentives for close monitoring and better information security practices, a goal already recognized by Congress as a social good?

5. The broader implications for "b0rked" code.  The separateness of bots from their creators came across to me as an underlying premise of the authors' entire discussion.  For example, the authors reference situations where the bot autonomously "misrepresents" information that its wielding party would not approve.[10]  Is it not perhaps more accurate to say that the bot contains programming bugs that its wielding party failed to catch and rectify?  Is not a bot simply lines of code written by a human (who may or may not be skilled in a particular coding language) that will always be full of errors (because a human authored it)?  Is the appropriate goal perhaps not to protect bots but to incentivize bot creators to make fewer errors and to rectify errors once they are found after "shipping" the code?

The authors argue that holding a contracting party accountable for bot malfunction is “unjust”[11] in some circumstances.    Is this consonant with the contract law approach that drafting errors and ambiguities are construed against the drafter?[12]  Is the author/operator of the error-ridden code considered the drafter here?  How is choosing a bad programmer to build your flawed bot different from choosing a bad lawyer to draft the flawed language of your contract?

I found the analogy of a bot to a rogue beer-fetching dog to be a particularly apt one.[13]  Many scholars would argue that, much like having a pet, using a code-based creation such as a bot in contracting is a choice and an assumption of responsibility.  Both dogs and bots are optional things, limited in their capacities, that we choose to unleash on the rest of the world.  If a dog or a bot causes harm, even when the owner has not expressly directed it to do so, isn't it always the owner's failure to supervise that is to blame?  I fear that comparing a bot to a human of any sort – slave, child, employee – for purposes of crafting law may be premature at the current juncture.  No machine is capable of replicating human behavior smoothly at present.  Will one arrive in the future?  Yes, it is likely.  However, I fear that aggressive untethering of the coder's legal responsibility from her coded creation may send us down an undesirable path of uncompensated consumer harms in our march toward our brave new cyborg world.[14]

The book’s purposes are ambitious, and I truly enjoyed pondering the questions it raises.  I thank the organizers for allowing me to participate in this symposium.


[1] p. 36

[2] pp. 31-32

[3] pp. 35-36

[4] The authors appear to argue from the perspective that encouraging the use of bots in contracting is a good thing and, as such, merits special legal protection.  While it is clear that digital contracts and physical space contracts are to be afforded legal parity, is it indeed clear that our legislatures and courts have decided to encourage parties to use bots instead of humans in contracting?  Perhaps encouraging the use of more humans in operations and contracting is instead the preferable policy goal and the one that warrants the more protective legal regime?

[5] p. 48

[6] Indeed, the consumer protection analysis that is omnipresent in contract law does not seem to be a dominant thread in the authors' analysis.  When a sophisticated company using a bot is contracting with a consumer, the power imbalance that already exists between these parties – a traditional concern of contract law – is exacerbated by the presence of the bot and arguably favors protecting the consumer more aggressively in any technology malfunction related to the formation of the contract.

[7] p. 36

[8] See, e.g., UCC § 2-305; Eastern Air Lines, Inc. v. Gulf Oil Corp., 415 F. Supp. 429 (S.D. Fla. 1975).

[9] The situation where a party seeks to gain an advantage in a contracting relationship by hacking the other party's bot, I would argue, is not primarily a contract law question.  This is arguably an active computer intrusion best left for analysis under the Computer Fraud and Abuse Act.

[10] p. 50

[11] p. 55

[12] I have argued that it is the responsibility of businesses that use code to interact with consumers and other entities to warn of, protect against, and repair the unsafe code environments to which they subject others.

[13] p. 55

[14] As upcoming work with my coauthor Miranda Mowbray  will explain, the most sophisticated Twitter bots have now become quite good at approximating the speech patterns of humans, and humans seem to like interacting with them; however, even they eventually give themselves away as mere code-based creations.  When a Cylon-like code creation finally arrives, it may be nothing like what we expect it to be.


2 Responses

  1. A.J. Sutter says:

    Thanks for such a cogent post. I’m reading the book’s early chapters now, and had many of the same reactions, albeit nowhere nearly so articulately or authoritatively formed. I especially agree about the prominence (and dubious wisdom) of the idea that bots are separate from their creators, and hope to comment on that in more depth later.

  2. Lurker says:

    I think that in considering the legal personality of contract-making bots it would be beneficial to use the rules of Roman law concerning the legal capacity of slaves.

    Under Roman law, the master could give a slave (or an adult child) a peculium, a portion of movable property that was the slave's to administer. The slave could make contracts concerning that property, including debts. Of course, the master could remove the peculium at will, but no one else could. Yet all suits concerning the peculium were brought by or against the master. If the slave (or child) had caused damage or incurred a debt that the peculium did not cover, the master had two choices: either give up the slave or carry the obligation himself.

    Now, an automated bot is administering a peculium. A good way to equitably limit the actual person's liability would be to give two choices if the obligation is larger than the bot's peculium: either give up the bot or carry the obligation.

    This approach would be rather equitable. Giving up an expensive bot (destroying all software and documentation after handing over copies to the aggrieved party) is expensive, so this would create a sufficient penalty for the corporate bot operator. Yet for a consumer, being banned from using an automated feature of eBay would be a bearable result. And of course, if you can't control your bot, you should not be operating it.