LTAAA Symposium Wrap-up

I want to wrap up discussion in this wonderful online symposium on A Legal Theory for Autonomous Artificial Agents that Frank Pasquale and the folks at Concurring Opinions put together. I appreciate you letting me hijack your space for a week! Obviously, this symposium would not have been possible without its participants–Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian, and Harry Surden–and I thank them all for their responses. You’ve all made me think very hard about the book’s arguments (I hope to continue these conversations over at my blog at samirchopra.com and on my Twitter feed at @EyeOnThePitch). As I indicated to Frank by email, I’d need to write a second book in order to do justice to them. I don’t want to waffle on too long, so let me just quote from the book to make clear our position with regard to artificial agents and their future legal status:

The discussion of contracting suggested the capabilities of artificial agents, doctrinal convenience and neatness, and the economic implications of various choices would all play a role in future determinations of the legal status of artificial agents. Such “system-level” concerns will continue to dominate for the near future. Attributes such as the practical ability to perform cognitive tasks, the ability to control money, and “legal system-wide” considerations such as cost-benefit analysis will further influence the decision whether to accord legal personality to artificial agents. Such cost-benefit analysis will need to pay attention to the question of whether agents’ principals will have enough economic incentive to use artificial agents in an increasing array of transactions which grant agents more financial and decision-making responsibility, whether principals will be able, both technically and economically, to grant agents adequate capital assets to be full economic and legal players in tomorrow’s marketplaces, whether the use of such artificial agents will require the establishment of special registers or the taking out of insurance to cover losses arising from malfunction in contractual settings, and even the peculiar and specialized kinds and costs of litigation that the use of artificial agents will involve. Factors such as whether it is necessary to introduce personality in order to explain all relevant phenomena, efficient risk allocation, and whether alternative explanations gel better with existing theory will also carry considerable legal weight in deliberations over personhood. Most fundamentally, such an analysis will evaluate the transaction costs and economic benefits of introducing artificial agents as full legal players in a sphere not used to an explicit acknowledgement of their role.

We see the deliberate, measured workings of the common law as the most likely and desirable resolution of our current confusions about how the law should treat artificial agents. We think that, over a period of time, the courts will be asked to rule on a variety of fact patterns involving artificial agents. In making their rulings, the courts will examine the law’s resources in its body of established precedent; they will sometimes make analogies as we do; sometimes they will draw upon existing doctrines like those of agency law; sometimes they might even act boldly and make some idiosyncratic determinations of what status artificial agents should have, and where they ought to be fitted into law’s conceptual and empirical schemes; at those times, we, as authors, can only hope that counsel and the judges’ clerks will have read our book and will think about its arguments while penning their briefs and opinions. If we can get them to do that, writing this book will have been worth it.
