Autonomous artefacts and the intentional stance

The book “A Legal Theory for Autonomous Artificial Agents” by Samir Chopra and Laurence White provides a very comprehensive and well-written account of a challenging issue, namely, how the law should address the creation and deployment of intelligent artefacts capable of goal-oriented action and social interaction. No comparable work is currently available, and I therefore think that this is a very valuable contribution to the interface between ICT (and in particular AI) and law.

As some commentators have already observed, the title words “A legal theory” may be a bit misleading, since one does not find in the book a new approach to legal theory inspired by artificial agents, but rather a theoretically grounded analysis of the legal implications of this new socio-technological phenomenon. However, the book shows awareness of legal theory, and various legal-theoretical themes are competently discussed in it.

The fundamental idea developed in the first chapters is that when interacting with such artificial agents we need to adopt the intentional stance, and understand their behaviour as resulting from the agents’ beliefs and goals. Indeed, often no other strategy is available to us: we have no power, no ability, and in any case no time to examine the internal structure and functioning of such artificial entities. The only chance we have to make sense of their behaviour is to assume that they tend to achieve their goals on the basis of the information they collect and process, namely, that they are endowed with a certain kind and degree of theoretical and practical rationality: they can track the relevant aspects of their (physical or virtual) environment and adopt plans of action to achieve their goals in that environment.

As an example quite remote from the domain considered by the authors of the book, consider an autopilot system for an aircraft. The system has a complex goal to achieve (bring the airplane to its destination, safely, on time, consuming as little fuel as possible), collects through various sensors information from the environment (altitude, wind speed, expected weather conditions, obstacles on the ground, incoming aircraft, etc.) and from the airplane itself (available fuel, temperature, etc.), draws theoretical conclusions (the distance still to be covered, the speed needed to reach the destination on time, the expected evolution of the weather, etc.) and makes choices on various matters (speed, path, etc.) on this basis. Moreover, it receives and sends messages concerning the performance of its task, interacting with pilots, with air traffic control systems, and with other manned and unmanned aircraft. Clearly, the pilot has little idea of the internal structure of the autopilot (probably he or she has only a vague idea of the autopilot’s architecture, and does not even know what procedures are included in its software, let alone the instructions composing each such procedure) and has no direct access to the information being collected by the automatic sensors and processed by the system. The only way to sensibly understand what the autopilot is doing, and the messages it is sending, is indeed to assume that it is performing a cognitive, goal-directed activity, namely, adopting actions on the basis of its goals and its representations of the context of its action, as well as communicating what it assumes to hold in its environment (what it believes), the objectives it is currently pursuing (its goals) and what it is going to do next (its intentions or commitments). As autopilot systems become more and more sophisticated (approaching the HAL of 2001: A Space Odyssey), take on new functions (such as controlling distances, avoiding collisions, governing take-off and landing) and use an increasing amount of information, their autonomy increases, as do their communication capacities. Thus it becomes more natural and useful (inevitable, I would say) to adopt the intentional stance toward them.
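To make the picture concrete, the following toy sketch (my own illustration, not taken from the book) shows the kind of sense-deliberate-act loop that the intentional stance lets us redescribe in terms of beliefs, goals and intentions. All names and figures (AutopilotAgent, perceive, deliberate, the fuel and distance thresholds) are hypothetical simplifications; a real autopilot is vastly more complex.

```python
# Toy sketch of a belief-goal-intention control loop, of the kind one might
# ascribe to an autopilot when adopting the intentional stance.
# All names and figures are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Beliefs:
    """What the agent takes to hold in its environment (from its sensors)."""
    altitude_m: float = 0.0
    fuel_kg: float = 0.0
    distance_km: float = 0.0


@dataclass
class AutopilotAgent:
    beliefs: Beliefs = field(default_factory=Beliefs)
    goal: str = "reach destination safely, on time, with minimal fuel"

    def perceive(self, readings: dict) -> None:
        """Update beliefs so that they track the environment."""
        for name, value in readings.items():
            if hasattr(self.beliefs, name):
                setattr(self.beliefs, name, value)

    def deliberate(self) -> str:
        """Form an intention (what to do next) given beliefs and goal."""
        if self.beliefs.fuel_kg < 500:
            return "divert to the nearest airport"
        if self.beliefs.distance_km < 50:
            return "begin descent"
        return "maintain cruise"

    def report(self, intention: str) -> str:
        """Communicate beliefs, goal and intention to pilots and traffic control."""
        return (f"believing distance={self.beliefs.distance_km} km and "
                f"fuel={self.beliefs.fuel_kg} kg; pursuing '{self.goal}'; "
                f"intending to {intention}")


if __name__ == "__main__":
    agent = AutopilotAgent()
    agent.perceive({"altitude_m": 11000, "fuel_kg": 8000, "distance_km": 40})
    print(agent.report(agent.deliberate()))
```

The point of the sketch is not engineering accuracy but the observer’s position: someone reading only the reported message, with no access to the internal code, can still make sense of the system’s behaviour by treating it as believing, wanting and intending.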

I have myself addressed the need to adopt the intentional stance toward certain artificial entities (in Cognitive Automata and the Law), where the intentional stance was discussed to some extent and the legal relevance of Daniel Dennett’s distinction between the physical, design and intentional stances was considered. An aspect I considered there, which is not addressed in the book (though it is quite significant for legal theory), is whether the cognitive states we attribute to an artificial entity exist only in the eye of the observer, according to a behaviouristic approach to intentionality (only the behaviour of a system verifies or falsifies any assertions concerning its intentional states, regardless of the system’s internal conditions), or whether such cognitive states also concern specific internal features of the entity to which they are attributed. I have sided with the second approach, on the basis of a functional understanding of mental states. For instance, a belief may be viewed as an internal state that co-varies with environmental conditions, in such a way that this co-variation enables appropriate reactions to such conditions. Such a realist approach to the cognitive states of artificial agents enables us to distinguish, ontologically, cases in which agents have a cognitive state from cases in which they only appear to have it (a distinction which is different from the issue of what evidence may justifiably support such conclusions, and what behaviour justifies one’s reliance on the existence of certain mental states). This is not usually relevant in private law, and in particular with regard to contracts (we are entitled to assume that people have the mental states they appear to have, for the sake of reliance, regardless of whether they really have such states), but it may be significant in some contexts, such as criminal law or even some parts of civil liability (intentional torts).
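The co-variation idea can be illustrated with a second toy sketch (again my own, hypothetical illustration, not drawn from the book or from the cited paper): here the “belief” is an internal variable that is kept in step with a measured condition and that mediates the system’s reaction, which is what a realist, functional account requires; a purely behaviouristic attribution would instead look only at the input-output pattern and remain silent about any such internal state.

```python
# Hypothetical illustration of a belief as an internal state that co-varies
# with an environmental condition and enables an appropriate reaction to it.

from typing import Optional


class Thermostat:
    def __init__(self) -> None:
        # Internal state playing the functional role of a belief.
        self.believed_temp_c: Optional[float] = None

    def sense(self, measured_temp_c: float) -> None:
        # The internal state co-varies with the environmental condition.
        self.believed_temp_c = measured_temp_c

    def act(self) -> str:
        # The internal state mediates an appropriate reaction to the condition.
        if self.believed_temp_c is None:
            return "wait"
        return "heating on" if self.believed_temp_c < 18.0 else "heating off"


t = Thermostat()
t.sense(15.5)
print(t.act())  # -> "heating on": the reaction runs through the internal state
```

On this view, whether the device really has the belief depends on the presence of such a tracking state, not merely on whether its outward behaviour could be described as if it had one.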

Another idea I find useful for distinguishing agents from mere tools is that of cognitive delegation (also discussed in the above contribution). While we can delegate various simple tasks to our tools (e.g. we use a spreadsheet for making calculations or a thermometer for measuring temperature), we can delegate only to agents tasks pertaining to the deployment of practical cognition (determining what to do, given certain goals, in a certain environment). It is because agents engage in practical cognition, as they have been required to do, that we can (and should) understand their actions according to the intentional stance.
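The contrast can be put schematically as follows (again a purely hypothetical sketch of mine, not from the book): the tool computes exactly what we specify, whereas the agent is handed a goal and an environment and must itself determine what to do.

```python
# Hypothetical contrast between a mere tool and an agent to which
# practical cognition has been delegated.

def celsius_to_fahrenheit(c: float) -> float:
    """A mere tool: it carries out exactly the computation we specify."""
    return c * 9 / 5 + 32


def decide_what_to_do(goal: str, environment: dict) -> str:
    """An agent's practical cognition: determining what to do,
    given a goal, in a certain environment."""
    if goal == "keep the room comfortable":
        temp = environment.get("temp_c", 20.0)
        if temp < 18.0:
            return "turn the heating on"
        if temp > 26.0:
            return "turn the cooling on"
        return "do nothing"
    return "ask for the goal to be clarified"


print(celsius_to_fahrenheit(20.0))                                     # a value we asked for
print(decide_what_to_do("keep the room comfortable", {"temp_c": 16}))  # an action it chose
```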

In conclusion, not only do I fully agree with the book’s idea of adopting the intentional stance with regard to artificial agents, but I also think that this idea should be further developed, and that this may lead to a better understanding of how the law takes into account both human and artificial minds. I think that this may indeed be the way in which the book can most contribute to legal theory.
