Category: Empirical Analysis of Law


Consent Decrees and Unintended Consequences

Robert Parry of the LA Daily News has written a curious column about the relationship between legal rules and police behavior.

As Parry explains:

In the late 1990s, rogue Rampart Division CRASH officers provided the Los Angeles Police Department’s legion of critics with ammunition . . . to place their vaunted enemy under the oversight of a federal court . . . All complaints against officers are now thoroughly investigated and subject to triple audits — by the LAPD Audit Bureau, the inspector general and the consent decree monitor . . . Serious uses of force are double-investigated — one administrative investigation and one criminal one . . . In short, after six years, if the LAPD was at all brutal and corrupt, shootings should be down, use of force down, complaints down, sustained complaints up and more officers prosecuted.

Yet, Parry asserts, shootings have increased 15% and complaints have increased, while guilty findings have decreased. Indeed, the “only statistic that appears to have tracked as the activists indicated is use of force. On a per-100-arrests basis, serious use of force is down about 20 percent.”
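Parry’s per-100-arrests figure is a reminder that a raw count and a rate can move in opposite directions when the denominator changes. A minimal sketch of the normalization, using purely hypothetical numbers (not LAPD data):

```python
# Hypothetical figures, for illustration only -- not LAPD data.
def per_100_arrests(incidents: int, arrests: int) -> float:
    """Normalize an incident count to a per-100-arrests rate."""
    return 100.0 * incidents / arrests

# Suppose serious uses of force rose from 200 to 210 incidents,
# while arrests rose from 100,000 to 131,250 over the same period.
before = per_100_arrests(200, 100_000)   # 0.20 per 100 arrests
after = per_100_arrests(210, 131_250)    # 0.16 per 100 arrests

change = (after - before) / before       # about -0.20, i.e. down 20%
```

The point is simply that a rising raw count is compatible with a falling rate whenever arrest volume grows faster, which is why per-arrest figures, not raw tallies, are the relevant comparison.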

Parry asserts that these complicated data can be boiled down to a simple cause: “Cops are fleeing in record numbers [because of the increased supervision] . . . As a result, inexperienced cops with unseasoned supervision are using more deadly force and getting more complaints, but the force is deemed acceptable and the complaints are increasingly bogus.”

To my reading, this claim is bogus.

Attrition problems at the LAPD are old – they certainly predate the consent decree, starting as early as the mid-1980s. The problem’s severity has engendered a number of explanations, and solutions, ranging from excessive financial disclosure requirements, bad press after the Rodney King riots, insufficient funds, and a convoluted application process, to bad equipment and physical plant, and even affirmative action policies. Shucks, the only explanation not offered is that LA’s famously sunny climate makes officers too happy to effectively walk the beat.

Even were attrition to be exacerbated by the consent decree, Parry still hasn’t come close to making his claim stick.


Are Survivors’ Costs a Pro-Life Issue?

The conservative Manhattan Institute recently commissioned a study of disparities in life-expectancy gains across states over the past 20 years. The data that inspired the study are startling:

While U.S. life expectancy increased by 2.33 years from 1991 to 2004, some jurisdictions — the District of Columbia (5.7 years), New York (4.3 years), California (3.4 years) and New Jersey (3.3 years) — led the way, while others, such as Oklahoma (0.3 years), Tennessee (0.8 years) and Utah (0.9 years), trailed the national average by significant margins.

To make a long story short, the researcher found that “longevity increased the most in those states where access to newer drugs . . . in Medicaid and Medicare programs has increased the most.”

Unfortunately, budgetary rules often make the federal government concentrate more on the costs of such interventions than their benefits. For example, the CBO counts “increased costs to the Medicare program for extending the life of its beneficiaries” as “survivors’ costs.” Tim Westmoreland’s fascinating article on the topic (95 Georgetown L.J. 1555, June 2007) calls this “euthanasia by budget:”

In describing why its model included costs but no savings from new access to pharmaceuticals, the Congressional Budget Office said, inter alia, “ [T]o the extent that a drug benefit helps people live longer, they may consume more health care over their remaining lifetime than they would have without the benefit.” In other words, it is still cheaper for Medicare beneficiaries to die.

One wonders if the same reasoning was behind a Texas law that permitted hospital authorities to cut off life support to a conscious woman.

I admit that Daniel Callahan has eloquently questioned the “research imperative,” and perhaps his reasoning could be extended to health care more generally. But it strikes me that in our accounting of the costs and benefits of health care in this country, budgetary savings arising out of early death ought to be suspect.

Argument & Authority

One part of the intro to Kennedy & Fisher’s Canon of American Legal Thought really hit me today:

Law students struggle to understand the relationship between “the rules” and the vague arguments that lawyers call “policy.” Should “policy” begin only in the exception—when legal deduction runs out—or should it be a routine part of legal analysis? If the latter, how should lawyers reason about policy? What should go into reasoning about “policy”—how much ethics, how much empiricism, how much economics? Which of the arguments laypeople use count as professionally acceptable arguments of “policy” and which do not? Which mark one as naïve, an outsider to the professional consensus? What is it about policy argument that makes it seem more professional, more analytical, more persuasive, than talking about “mere politics”?

I think I might begin my administrative law class next term with those questions at the forefront. Administrative Law is occasionally derided as a Seinfeld class–a class about nothing–because the precedents seem so malleable and ad hoc. All seems to turn on an increasingly complicated jurisprudence of deference. But the agencies are often getting deference because they are presumed to have a better grasp on “empiricism and economics” than nonspecialist judges.

The problems raised by K&F go beyond law into fields like economics itself. Consider Econ Journal Watch’s recent issue examining the role of math in top-level publications. Sutter & Pjesky ask “Where Would Adam Smith Publish Today?,” and note a “near absence of math-free research in top journals.” A bit from their conclusion:

The emphasis on mathematical modeling and regression analysis imposes a toll on the profession. Adam Smith spent his early years studying literature, history, ethics, political and moral philosophy, and then teaching literature and rhetoric to college students. Today to succeed in the profession he would need to study model building and regression analysis well enough to publish in “good” journals, and he (and the rest of us) would have lost the value added from the studies displaced. The same would apply for many Nobel prize winners who published their work in an economics profession less tied down to model building and regression analysis.

Sutter & Pjesky, along with many other interesting authors in EJW, are arguing for a more pluralistic approach to economic authority. I hope to show my students in Admin the multiple sources of authority for agency decisions…and how that complexity, while occasionally frustrating and obfuscatory, can make the resulting decisions stronger, like Peirce’s cable.

Scientists Manqués?

Ever wonder why Richard Posner has gotten so interested in pragmatism? Well, James R. Hackney’s book Under Cover of Science: American Legal-Economic Theory and the Quest for Objectivity suggests that he’s right to be looking for a post-scientific discourse for the style of law & economics he advances. Here’s an abstract of Hackney’s work:

The current dominant strand of legal economic theory is what is commonly referred to as law and economics (but more appropriately labeled “law and neoclassical economics”). [This movement] gained its claim to objectivity based on the philosophical premises of logical positivism and the analytic philosophy movement generally. . . . In understanding the claim of objectivity in the law and neoclassical economics movement and why that claim can no longer be sustained (in part due to new conceptions of science and developments in philosophy) it is crucial that legal-academics have a fuller understanding of developments in science and how they shape our general cultural ethos.

Hackney synthesizes a wide variety of CLS and socio-economic critiques to show how “law and economics often cloaks ideological determinations—particularly regarding the distribution of wealth—under the cover of science.” Toward the end of the book he tentatively points a way forward for the discipline, urging greater humility about theoretical claims and greater reliance on empirical work. In other words, the cure for scientism is genuine science.

I have some sympathy with this perspective, and new awareness of “uniformity costs” in both law and legal scholarship backs up Hackney’s position. But the problem of “scientism” may extend beyond law and neoclassical economics…


From Right-of-Reply to Norm-of-Trackback

One of the things I love about the blogosphere is the way that comments let readers correct you or turn your attention to something you may have missed. One of my recent posts on copyright law illustrates how this process can work. James Grimmelmann has suggested that this right to comment, and to trackback to one’s own post upon linking to another’s post, is a big victory for free speech. While right-of-reply laws may be stymied by Miami Herald v. Tornillo, these innovations let everyone have their say.

Should the mainstream media adopt similar norms? Consider the case of a recent WSJ commentary entitled “The Innocence Myth,” arguing that the rate of false convictions is very low. You can find critiques of it online if you google “innocence myth,” and the WSJ does publish some skeptical letters to the editor. But my colleague Michael Risinger is about to publish a piece that he believes definitively refutes the WSJ piece. As he argues:

If one is at all serious about trying to determine the empirical truth about the magnitude of the wrongful conviction problem, one must make an attempt to associate the denominator with the same kind of cases represented in the numerator. . . . In an article now in galleys at Northwestern Law School’s Journal of Criminal Law and Criminology, I have tried to do just that. Using only DNA exonerations for capital rape-murders from 1982 through 1989 as a numerator, and a 407-member sample of the 2235 capital sentences imposed during this period, this article shows that 21.45%, or around 479 of those, were cases of capital rape murder. Data supplied by the Innocence Project of Cardozo Law School and newly developed for this article show that only two-thirds of those cases would be expected to yield usable DNA for analysis. Combining these figures and dividing the numerator by the resulting denominator, a minimum factually wrongful conviction rate for capital rape-murder in the 1980’s emerges: 3.3%.
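For what it is worth, the denominator arithmetic in the quoted passage checks out; here is a quick reproduction. The raw count of DNA exonerations is not stated in the excerpt, so the final line merely back-solves for what the reported 3.3% rate implies:

```python
# Reproducing the denominator from the figures quoted above.
capital_sentences = 2235      # capital sentences imposed, 1982-1989
rape_murder_share = 0.2145    # share found to be capital rape-murders (407-case sample)
usable_dna_share = 2 / 3      # fraction expected to yield usable DNA

rape_murders = capital_sentences * rape_murder_share  # about 479 cases
denominator = rape_murders * usable_dna_share         # about 320 testable cases

# Back-solving: the reported 3.3% minimum rate implies a numerator of
# roughly 10-11 DNA exonerations (the excerpt does not state the count).
implied_exonerations = 0.033 * denominator
```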

The WSJ has so far failed to publish Prof. Risinger’s letter to the editor, and claims a policy against allowing responses to commentaries. But would it at least behoove the Journal to provide a link to Risinger’s work after this opinion piece? I don’t see how this could hurt, especially given the time already devoted to screening letters to the editor. The Journal could make the links unobtrusive, as it does in this fantastic article on predatory debt collectors.

I hope that more of the mainstream media (MSM) follows the lead of the Washington Post, which provides great links to blogs (and opportunities for comment) on virtually all of its online articles (including editorials). Perhaps “opening up” the letters to the editor section in this way will be a bit of a burden at the beginning. But as technology makes these online forums more permeable, the usual excuse of “space constraints” (for shutting out diverse views) will be less and less convincing.


The Death of Fact-finding and the Birth of Truth

Today’s Supreme Court decision in Scott v. Harris is likely to have profound long-term jurisprudential consequences. At stake: whether trial courts, or appellate courts, are to have the last say on what the record means. Or, more grandly, does litigation make findings of fact, or truth?

The story itself is pretty simple. Victor Harris was speeding on a Georgia highway. Timothy Scott, a state deputy, attempted to pull him over, along with other officers. Six minutes later, after a high-speed chase captured on a camcorder on Scott’s car, Scott spun Harris’ car off the road, leading to an accident. Harris is now a quadriplegic. He sued Scott for using excessive force in his arrest. On summary judgment, the District Court denied Scott’s qualified immunity defense; the Eleventh Circuit affirmed.

Justice Scalia, writing for the majority, noted that the “first step is . . . to determine the relevant facts.” Normally, of course, courts take the non-moving party’s version of the facts as given. [Or, to be more precise, the district court resolves factual disputes in favor of the non-moving party.] But here, the videotape “quite clearly contradicts the version of the story told by respondent and adopted by the Court of Appeals.” Notwithstanding a disagreement with Justice Stevens on whether that statement was accurate (“We are happy to allow the videotape to speak for itself.” Slip Op. at 5), the Court proceeded to reject the nonmoving party’s version of the facts. To do so, it relied on the ordinary rule that the dispute of facts must be “genuine”: the Respondent’s version of the facts is “so utterly discredited by the record that no reasonable jury could have believed him.” (Slip Op. at 8).

Let’s get a bias out of the way. At the Court’s suggestion, I watched the video. I lean toward Justice Stevens’ view: “This is hardly the stuff of Hollywood. To the contrary, the video does not reveal any incidents that could even be remotely characterized as ‘close calls.'” Such a dispute over a common story immediately highlights the most serious problem with the Court’s opinion: we all see what we want to see; behavioral biases like attribution and availability lead to individualized views of events. Where the majority sees explosions, Justice Stevens sees “headlights of vehicles zooming by in the opposite lane.” (Dissent at 2, n.1 – and check out the rest of the sentence for a casual swipe against the younger members of the court.) It brings to mind the Kahan/Slovic/Braman/Gastil/Cohen work on the perceptions of risk: each Justice saw the risk of speeding through his or her own cultural prism.

But even if I agreed with the majority on what the videotape shows, the Court’s opinion is disruptive to fundamental principles of American Law. Justice Stevens suggests that the majority is acting like a jury, reaching a “verdict that differs from the views of the judges on both the District court and the Court of Appeals who are surely more familiar with the hazards of driving on Georgia roads than we are.” (Dissent at 1). There are several problems with such appellate fact finding based on videotape that the Court ignores.


Libertarians Against Subjectivism

Some commenters on my post on the Value of Pets took me to task for being too quick to discount individuals’ extraordinary attachment to their companion animals. I found some support in unlikely quarters–Will Wilkinson’s critique of “happiness research” which recently appeared on the Cato Institute’s website. This is the most comprehensive recent comment on the literature of subjective well-being that I’ve seen, and raises all sorts of interesting questions for those who are trying to expand the boundaries of economic analysis.

A little background: A growing number of economists have begun to question traditional measurements of well-being, such as GDP or income, and have focused instead on self-reported “subjective well-being” from interviewed subjects. “Happiness research” has come up with some counterintuitive findings, reporting extraordinary levels of life dissatisfaction in apparently prospering liberal democracies.

Wilkinson takes these social scientists to task for failing to fully describe “the dependent variable—the target of elucidation and explanation—in happiness research.” He claims there are four main possibilities:

(1) Life satisfaction: A cognitive judgment about overall life quality relative to expectations.

(2) Experiential or “hedonic” quality: The quantity of pleasure net of pain in the stream of subjective experience.

(3) Happiness: Some state yet to be determined, but conceived as a something not exhausted by life satisfaction or the quality of experiential states.

(4) Well-being: Objectively how well life is going for the person living it.

Wilkinson provides some great arguments for questioning 1 and 2 as hopelessly subjective desiderata for public policy. He quotes Wayne Sumner, a Toronto philosopher, on 2: “Time and philosophical fashion have not been kind to hedonism . . . Although hedonistic theories of various sorts flourished for three centuries or so in the congenial empiricist habitat, they have all but disappeared from the scene. Do they now merit even passing attention[?]” “Life satisfaction” also comes in for heavy criticism, as epiphenomenal of various uncontrollable variables: “people have different standards for assessing how well things are going, and they may employ different standards in different sorts of circumstances.”

Of course, Wilkinson and I go in entirely different directions at this point: he tries to argue that the whole line of research is useless, while I think inconsistencies like the ones he points out demonstrate the necessity of more objective and virtue-oriented accounts of well-being. (Or, to be more precise, Wilkinson (like Freud) appears to believe that debates over happiness may ultimately best be settled by brain analysis, while I tend to think the direction of Aristotelian theorists like Seligman & Nussbaum is the way to go.) But his perspective does demonstrate that even those most committed to the idea of individual liberty as a public policy goal are not necessarily wedded to the type of subjectivity in value that would underlie societal recognition of the more extreme claims of pet-owners mentioned in that post.



As I previously have discussed here and here, I’ve been working on a project examining when trial courts write opinions. With the help of statistician co-authors, I have investigated trial court dockets, trying to account for various factors that might lead a contested matter to either be explained through a traditional written opinion or issued in a brief order. Our resulting draft, “Docketology, District Courts, and Doctrine”, is now available from SSRN or from Selected Works. Here is an abstract:

Empirical legal scholars have traditionally modeled judicial opinion writing by assuming that judges act rationally, seeking to maximize their influence by writing opinions in politically important cases. Support for this hypothesis has come from reviews of published opinions, finding that civil rights and other “hot” topics are more likely to be discussed than other issues. This orthodoxy comforts consumers of legal opinions, because it suggests that opinions are largely representative of judicial work.

The orthodoxy is substantively and methodologically flawed. This paper starts by assuming that judges are generally risk averse with respect to reversal, and that they provide opinions when they believe that their work will be reviewed by a higher court. Judges can control risk, and maximize leisure, by writing in cases that they believe will be appealed. We test these intuitions with a new methodology, which we call docketology. We have collected data from 1000 cases in 4 different jurisdictions. We recorded information about every judicial action over each case’s life.

Using a hierarchical linear model, our statistical analysis rejects the conventional orthodoxy: judges do not write opinions to curry favor with the public or with powerful audiences, nor do they write more when they are younger, seeking to advance their careers. Instead, judges write more opinions at procedural moments (like summary judgment) when appeal is likely and fewer opinions at procedural moments (like discovery) when it is not. Judges also write more in cases that are later appealed. This suggests that the dataset of opinions from the trial courts is significantly warped by procedure and risk aversion: we cannot look at opinions to capture what the “Law” is.

These results have unsettling implications for the growing empirical literature that uses opinions to describe judicial behavior. They also challenge the meaning of doctrine, as we show that the vast majority of judicial work – almost 90% of substantive orders, and 97% of all judicial actions – is not fully reasoned, and is read only by the parties. Those rare orders that are explained by opinions are, at best, unrepresentative. At worst, they are true black sheep – representing moments and issues where the court is most obviously rejecting traditional patterns and analyses.

I am very interested in receiving comments on this paper, particularly before the late summer, when we plan to submit it to the law reviews!

[Nit-seekers beware: there is one typo in the SSRN abstract. (Don’t go find it, just trust me, it is there.) For what it is worth, I basically agree with Kevin Heller that SSRN should give users more control over author-submitted papers to make revision easier. ]


Neuroscience and Law

Jeffrey Rosen has a fascinating article in this week’s New York Times Magazine. While the article is balanced and careful, the “buy me, read me” headlines and several of the researchers that Rosen quotes suggest that a law-and-neuroscience revolution is brewing. I want to add my voice to the skeptics that Rosen quotes, though with a different perspective. To my mind, recent findings in the field of neuroscience will change law only at the margins; their main contribution will be to confirm the central tenets of legal realism, and they will thus have only minor effects on most legal concerns.



Lewis Libby (66%) Guilty

While Scott may not be “100% sold on either side” of the Libby trial, two-thirds of traders on the prediction markets think that he will be found guilty of lying on at least one charge against him. This is a notable upswing from last June, when I wrote about the market and its intricacies. I then pointed out that prices for these “conviction contracts” include a discount for the likelihood of a plea, so that in the months before trial, prices for conviction are likely to be depressed. I think that the rise in the price of Libby’s conviction stock demonstrates the point. Although there were few surprises at trial, traders raised the likelihood of conviction by over 20% after it began, representing the end of the plea discount and the market’s real estimation of the likelihood of conviction. Notably, traders don’t seem to think that a mistrial is terribly likely, although the likelihood of conviction has decreased from 75% at the beginning of the jury’s deliberations.
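The plea-discount point is simple expected-value arithmetic: a binary conviction contract pays off only if the case both reaches a verdict and ends in conviction, so its pre-trial price embeds both probabilities. A toy sketch with hypothetical numbers (not actual market prices):

```python
def contract_price(p_verdict: float, p_convict_given_verdict: float) -> float:
    """Expected payoff of a contract paying 1.0 on conviction.

    A plea or dismissal before verdict settles the contract at zero,
    so the pre-trial price discounts the conviction odds by the chance
    that the case reaches a verdict at all.
    """
    return p_verdict * p_convict_given_verdict

# Hypothetical: 66% chance of conviction if the case is tried to verdict,
# but an 18% chance of a plea ending the case beforehand.
pre_trial = contract_price(1 - 0.18, 0.66)   # about 0.54
at_trial = contract_price(1.0, 0.66)         # plea discount gone: 0.66

rise = at_trial / pre_trial - 1              # roughly a 20%+ jump when trial begins
```

On these assumed numbers, the price jumps over 20% at the start of trial even though nothing about the evidence has changed, which is the pattern described above.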

For what it is worth, I tend to agree with Scott that the length of the jury’s deliberations is a mark of seriousness and worth. If we wanted a quick and summary answer, we wouldn’t use a jury, we’d flip a coin.