Hypotheticals, the Classroom, and Moral Biology
Hypotheticals are a ubiquitous pedagogical tool in both law and philosophy classrooms. I have recently been thinking about the different functions they serve and whether they are well-suited to the weight we give them. These reflections were prompted by a conference on “Moral Biology,” hosted by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School (which I co-direct), in cooperation with The Project on Law and Mind Sciences at Harvard Law School, the Gruter Institute, the Harvard Program on Ethics and Health, and the MacArthur Law and Neuroscience Project.
I may blog a little later about some of the other marvelous things I learned over these two days, but for now I want to concentrate on some thoughts that stemmed from a public portion of the conference, which can be seen here, involving Josh Greene from Harvard’s Psychology Department, William Fitzpatrick from the University of Rochester’s Philosophy Department, Adina Roskies from Dartmouth’s Philosophy Department, Walter Sinnott-Armstrong from Duke’s Philosophy Department, and Tim Scanlon from Harvard’s Philosophy Department.
At around the 43-to-50-minute mark in the video, Josh discusses Trolley Problems (thought experiments that ask participants whether to divert a trolley from one track to another, with many versions of the hypothetical) and an experiment done on them in Josh’s lab by Fiery Cushman (and a collaborator, Switzgable I believe; I could not find the actual paper). In the experiment, ethicists with PhDs were asked to reason about variants of the Trolley Problem (switch vs. footbridge) presented in different orders, before being asked whether they would endorse the principle of double effect. The experiment found that if one varied the order in which the versions were presented (but always presented all of them), ethicists reached different conclusions about whether they would endorse the principle. [This is Josh’s description in the video; again, if anyone can find the paper he is discussing, I will try to link to it.] The result is surprising in that it suggests even those with PhD training in ethics are susceptible to order effects in reasoning about a very fundamental issue.
As Josh concedes, and as others (on the panel and in written pieces discussing his work) emphasize, the fact that these ordering effects occur is not itself fatal to the enterprise of philosophical analysis using intuitions. It depends on further views about how one uses these kinds of intuitions in the analysis. For present purposes, though, I want to partially side-step that question in favor of thinking about the law classroom, and how this experiment might make us a little more careful about the way we use hypotheticals.
It seems to me that there are two main ways I use hypotheticals in class (in fact there are many subtler distinctions, so this is admittedly crude but hopefully sufficient for present purposes). The first is a realist, or at least Hart-Kelsen/core-penumbra, approach: I begin with what seems like a clear and defensible rule. I then present easy cases on both sides. I then vary the facts a little at a time to produce a hard case, and the student learns how even seemingly clear and easy-to-apply rules break down in hard cases.
A second usage, though, is more coherentist. I start by asking students for a rule in one case that they are fairly sure of. I then examine whether the principle behind the rule is really one they want to defend by applying it in several new cases and testing it against their intuitions about how those cases should come out. These intuitions put pressure on their original rule, causing them to want to state it more precisely, add caveats, or perhaps chuck it altogether.
The experiment Josh discusses is perfectly consistent with the realist/hard-case approach; indeed, Jerome Frank would have loved it and easily assimilated it into his view that justice turns on what the judge had for breakfast. What, however, should it mean for the more coherentist use of hypotheticals? There, the approach seems to tell students that if they reason about enough cases and compare their initial intuitions against many hypothetical cases, they will come close to what they think the “right” answer is, or at least rule out “wrong” answers. Do results of this kind of experiment threaten that usage? I am very curious what others think…