Category: Empirical Analysis of Law


How Should Courts Handle Cultural Dissensus on Summary Judgment?

That’s the deep question left unanswered by last year’s Supreme Court decision, Scott v. Harris. As Dan Kahan announced here on Balkin, he, current guest-blogger Don Braman, and I have written a paper testing the majority’s view that no reasonable jury could or would find for the plaintiff after watching this videotape. The experiment we conducted was simple and intuitive: we showed the video to a 1,350-member subject pool and asked them about it. Our first circulating draft, Whose Eyes are You Going to Believe? An Empirical (and Normative) Assessment of Scott v. Harris, can be downloaded here.

Overall, we found substantial support for the Court’s position: most members of the subject pool agreed with the majority about the risks posed by the police chase, the relative fault of the parties, and the ultimate questions of justification. But does majority support mean SJ is correct? Our thought was that this question can’t be meaningfully answered without some understanding of the characteristics of the minority of people who would disagree with the court. We wanted to identify who those people were and figure out whether anything besides unreasonableness might explain their differing view of the tape. In particular, we wanted to test the hypothesis, grounded in cultural cognition theory, that the dissenters would not be random statistical outliers but persons disposed by shared cultural values and other characteristics to process the visual information in the tape differently from how the majority did.

Our results showed exactly that. Dissenters from the Court’s view of the facts and the appropriateness of summary judgment were linked by shared cultural styles featuring a commitment to egalitarianism and communitarianism. By the same token, subjects who were strongly inclined to see things the Court’s way were linked by commitments to hierarchy and individualism.

Drawing on Joseph Gusfield’s work on “status collectivities,” we imagined four potential members of the venire: Pat, Ron, Linda, and Bernie. You can see their pictures to the left. Ron is a rich Goldwater Republican from Arizona. Bernie is a socialist professor from Vermont with an average income. Linda is a social worker from Philadelphia whose income is also at the mean. And Pat is the average American in every respect.

Using statistical simulations, we found that these individuals would have very different reactions to the video, based on their distinct forms of culturally motivated cognition of the risks involved. Take, for example, subjects’ reaction to the statement “[t]he danger that Harris’s driving posed to the police and the public justified Officer Scott’s decision to end the chase in a way that put Harris’s own life in danger.” The graphic below illustrates how Ron, Linda, Bernie and Pat will respond.

[Figure: simulated responses of Ron, Linda, Bernie, and Pat to the justification statement]

Nearly two-thirds (64%, +/- 4%) of the persons who share Linda’s characteristics “disagree”—about one-half either strongly or moderately—with the statement and thus the result in Scott. Those who hold Bernie’s characteristics see things in nearly exactly the same way as those holding Linda’s. Pat does agree with the Scott majority, although not without a bit of equivocation. There is a 60% (+/- 3%) chance that a person drawn randomly from the population would either moderately or strongly agree that the police were justified in using deadly force. There is, however, a 16% (+/- 3%) chance that he or she would be only “slightly” inclined to agree, and over a 20% chance that he or she would conclude upon watching the tape that the use of deadly force was unreasonable. Finally, over 80% of the individuals who share Ron’s characteristics would find that the police acted reasonably.
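For readers curious about the mechanics, here is a minimal sketch of the Clarify-style simulation approach that generates estimates like these. Every number below—the coefficients, their covariance, and the cultural-worldview scores for Ron, Linda, and Pat—is invented for illustration; the paper’s actual model is estimated from the study’s survey data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model, reduced to a binary "agrees with the Scott majority"
# outcome. Coefficients: intercept, hierarchy score, egalitarianism score.
# These values and the covariance matrix are made up for illustration.
beta_hat = np.array([0.4, 0.9, -1.1])
cov = np.diag([0.05, 0.04, 0.04])   # assumed sampling covariance

def simulate_agreement(profile, n_sims=10_000):
    """Clarify-style simulation: draw coefficients from their estimated
    sampling distribution and average the implied predicted probabilities."""
    draws = rng.multivariate_normal(beta_hat, cov, size=n_sims)
    p = 1.0 / (1.0 + np.exp(-draws @ profile))
    return p.mean(), np.percentile(p, [2.5, 97.5])

# Profiles: [1, hierarchy, egalitarianism] on standardized (hypothetical) scales.
ron = np.array([1.0, 1.5, -1.0])    # hierarchical individualist
linda = np.array([1.0, -1.0, 1.5])  # egalitarian communitarian
pat = np.array([1.0, 0.0, 0.0])     # average on every scale

for name, profile in [("Ron", ron), ("Linda", linda), ("Pat", pat)]:
    mean, (lo, hi) = simulate_agreement(profile)
    print(f"{name}: P(agree) = {mean:.2f} [{lo:.2f}, {hi:.2f}]")
```

The point of simulating rather than just plugging in point estimates is that it carries the model’s uncertainty through to the quantity of interest, which is where the +/- figures in the paragraph above come from.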

What does dissensus of this character mean for how courts should resolve summary judgment motions in cases like, and unlike, Scott? What should a court do when a minority of the venire would process visual information in a particular way, and that minority sees things as it does because its members are linked by shared values?

I’ll explore these questions in subsequent posts (as will, I think, Don).

Previous Posts:

Hoffman, The Death of Fact-finding and the Birth of Truth

Crocker, Do Texts Speak for Themselves?

Kerr, What Are the Facts in Scott v. Harris?


Too Much Happiness?

Increasingly, the study of “happiness” is making its way into legal academic writing. In some analyses it is framed as an alternative to money as a measure of welfare; in others as a focus on addressing the recurring problem of law firm associates’ pessimism. It is applied to tax policy, the calculation of pain-and-suffering damages, democratic institutions, and more. And happiness is making its way into law schools—well, in a sense anyway—with seminars being offered at Yale and Temple Law Schools on, for instance, “Law, Happiness, and Subjective Well-Being.” The study of happiness, and the related research program in positive psychology, are becoming increasingly prominent in law and policy.

The connection to the also-burgeoning literature on paternalism is clear; to the extent different interventions might be able to increase people’s happiness and welfare, is government justified in promulgating such interventions (or even obligated to do so)? That’s a can-of-worms type of question that I won’t get into in this post, but it connects with an interesting new article that indirectly raises the question whether such intervention—even if justified—might in fact backfire. That article, “The Optimum Level of Well-Being: Can People Be Too Happy?,” suggests that even though higher happiness seems to correlate with higher success in other areas, simply continuing to increase happiness might not increase that success consistently. The abstract follows:

Psychologists, self-help gurus, and parents all work to make their clients, friends, and children happier. Recent research indicates that happiness is functional and generally leads to success. However, most people are already above neutral in happiness, which raises the question of whether higher levels of happiness facilitate more effective functioning than do lower levels. Our analyses of large survey data and longitudinal data show that people who experience the highest levels of happiness are the most successful in terms of close relationships and volunteer work, but that those who experience slightly lower levels of happiness are the most successful in terms of income, education, and political participation. Once people are moderately happy, the most effective level of happiness appears to depend on the specific outcomes used to define success, as well as the resources that are available.

We know that “money doesn’t buy happiness”—that simply increasing financial success doesn’t directly correlate with happiness above a certain (surprisingly low) point; here’s an interesting suggestion that above a certain point, happiness doesn’t “buy” success.


This is Your Brain on … the New York Times

A recent NY Times bit talks about “neurorealism,” that is, people’s increased tendency to believe psychological or other scientific assertions when those assertions are accompanied by images from brain scans. The piece quotes Deena Weisberg, who wrote an article in the Journal of Cognitive Neuroscience documenting this empirically (in both laypeople and, if I remember the article correctly, in experts, though to a lesser extent), and the neologizer, Eric Racine. The piece mentions a newspaper article “about how high-fat foods activate reward centers in the brain,” and asks, “Couldn’t we have proved that with a slice of pie and a piece of paper with a check box on it?” Brian Leiter also noted the Times piece, with a plug for his paper criticizing legal academics’ use of evolutionary biology.

But the Times bit, and these scholars, conflate two very different points. The first is the “credulousness” issue—that people believe the assertions when accompanied by brain images. That’s an important point, especially in the legal context, where judges, jurors, or policy-makers might be exposed to such scans and misled by such scientific “explanations” of behavior. (Of course, it’s not enormously surprising, given past concerns about jurors’ understanding of complex scientific evidence.)

But that’s quite a different point from the dismissive “check box” question, which criticizes even the usefulness of such neurological research. fMRI and other such scans can of course provide important and useful evidence, and certainly can tell us more than simple self-reports or even other behavioral studies. Matt Lieberman, a psychologist at UCLA [disclosure: we were in grad school together] and one of those most prominently associated with the newish field of social cognitive neuroscience, has addressed this well, in answering whether SCN provides something more than conventional social psychology. Summarizing just one of his papers on the issue: he points out that fMRI can identify “two psychological processes that experientially feel similar and produce similar behavioral results, but actually rely on different underlying mechanisms,” such as memory for social and non-social information. It can document “processes that one would not think rely on the same mechanisms, when in fact they do,” such as the common neurological pathways in the experience of both physical and social pain. And more speculatively, he suggests, as “more is learned about the precise functions of different regions of the brain it may be possible to infer some of the mental processes that an individual is engaged in just from looking at the activity of their brains.” This is an important advantage for overcoming potential difficulties with, for instance, self-report.

There is of course danger in over-selling fMRI and similar neurological evidence—whether evaluating psychiatric patients, capital defendants, or others—and documenting people’s susceptibility to such over-sell is important. But it’s quite a different question whether such scans can be useful, and to dismiss them out of hand is just as obviously a mistake.


Interdisciplinarity, Leiter and the Bluebook

Gordon Smith has a nice summary post of the debate between Brian Leiter, Mary Dudziak and others on whether Brian’s faculty citation rankings accurately measure “impact in legal scholarship.”

The basic framework of the debate is

Objection: “But you didn’t measure X…”

Leiter: “True. Let a hundred flowers bloom, and do your own data collection!”

(Which strikes me as pretty persuasive.) I wanted to add a different ingredient into the pot. I think Leiter’s rankings mismeasure impact in interdisciplinary scholarship for a reason unrelated to his methodology or its merits. Simply put: the Bluebook itself undervalues interdisciplinary collaborations and thus scholarship.

I’m not nearly the first to observe that the Bluebook’s citation rules have an ideological component. See, e.g., Christine Hurt’s great piece on that very topic. But consider the interaction between Bluebook Rules 15.1 and 16 and Leiter’s study. Rule 16 states that the citation of author names in signed law review articles should follow Rule 15.1. Rule 15.1 states that when there are two or more authors, you have a choice:

Either use the first author’s name followed by “ET AL.” or list all of the authors’ names. Where saving space is desired, and in short form citations, the first method is suggested . . . . Include all authors’ names when doing so is particularly relevant.

This seems to me to express a pretty strong non-listing preference. The “problem” is that much good interdisciplinary work results from collaborations among more than two authors – it is the nature of the beast. Take, for example, my colleague Jaya Ramji-Nogales’ forthcoming triple-authored article Refugee Roulette: Disparities in Asylum Adjudication, which was front-paged by the Times back in June. Two of the article’s authors are in danger of being ET AL.’ed in many law review footnotes, and consequently ignored in subsequent Leiter citation counts (unless the citing article’s author chooses to mention them by name in the text). This seems like a trivial objection, but it will take on increasing weight over the next ten years as empirical legal studies really comes online in the major law reviews. (Obviously, I’m writing in part because I’ve two articles in the pipeline where I’m a part of three-author teams, and the “et al.” problem is somewhat salient.)

Bluebook editors: I know you are lurking here! Can you fix this silly problem in the 19th edition?


Clarification about Clarify

I recently switched to Stata from SPSS. The choice seemed overdetermined, not least because of the abundance of freeware add-ons for Stata (compared with the pricey programs for SPSS). For example, Clarify, developed by Michael Tomz, Jason Wittenberg and Gary King, makes it easy to estimate predicted probabilities by simulating data, a highly useful technique, especially when graphed. (I first learned about Clarify in the Martin/Epstein legal-empirical methods stats camp.) Going through this kind of work by hand is a hassle, as my co-authors and I learned when writing Docketology.

I’ve a question about the software that seems unanswered by the documentation, and I thought there was a chance (a slim one) that it might be something our readers could answer. Ordinarily, when estimating a model that contains two or more mutually exclusive dummies, you are supposed to omit one as a comparison category. Is that true when using the estsimp command in Clarify, or, because the assumption is that omitted variables are set to their means, should you specify a value for all variables (and thus include all of the dummies in the set)?
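For what it’s worth, the underlying dummy-variable trap is easy to see outside Stata. Here’s a quick Python illustration, with toy data, of why one category must be omitted whenever the model includes an intercept:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three mutually exclusive categories, one-hot encoded.
cats = rng.integers(0, 3, size=100)
dummies = np.eye(3)[cats]          # columns: category A, B, C
intercept = np.ones((100, 1))

# With an intercept, including ALL the dummies makes the design matrix
# rank-deficient: the dummy columns sum exactly to the intercept column.
X_all = np.hstack([intercept, dummies])
print(np.linalg.matrix_rank(X_all))    # rank 3, not 4 -> perfect collinearity

# Omitting one category as the baseline restores full column rank.
X_omit = np.hstack([intercept, dummies[:, :2]])
print(np.linalg.matrix_rank(X_omit))   # rank 3 = number of columns
```

If I’m reading Clarify’s design correctly, estsimp wraps the underlying estimation command, so the omitted-category rule should still apply at the estimation stage; the set-variables-to-their-means behavior concerns setx, which chooses scenario values after estimation. But that’s my inference, not the documentation’s, so I’d welcome correction from anyone who uses the package.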

Hope that makes sense! Additionally, if anyone else has experiences with Clarify or questions about it to share, consider this an open forum.

A Positive Externality of Surveillance

I’ve been skimming the first chapter of Randall Collins’s Violence: A Micro-Sociological Inquiry, and came across this interesting perspective on an unexpected benefit of a high-surveillance society:

Violence as it actually becomes visible in real-life situations is about the intertwining of human emotions of fear, anger, and excitement, in ways that run right against the conventional morality of normal situations. It is just this shocking and unexpected quality of violence, as it actually appears in the cold eye of the camera, that gives a clue to the emotional dynamics at the center of a micro-situational theory of violence.

We live in an era in which our ability to see what happens in real-life situations is far greater than ever before. . . .The video revolution has made available much more information about what happens in violent situations than ever before.


Technologies of recording real-life conflict are useful for a series of reasons: they can provide us details that we otherwise wouldn’t see at all, that we were not prepared to look at, or did not know were there; they can give us a more analytical stance, more detached from the everyday perceptual gestalts and the clichés of conventional language for talking about violence.

Collins’s observations here remind me of a recent discussion in my admin class on the inevitably value-laden nature of most verbal characterizations of situations. We discussed the simple statement “Jack pushed John.” The key word here–push–carries with it all manner of charged associations. The types of images that can spring to mind from such a description are diverse. Perhaps only a video of the event can “tell the truth.”

On the other hand, co-blogger Dave Hoffman has argued that, even in video evidence, “we all see what we want to see; behavioral biases like attribution and availability lead to individualized view of events.”

Read More


Bar Passage & Accreditation: The “Neutral” Case Against Standards

Back in August, the ABA withdrew proposed interpretive standard 301-6, which would have de-accredited schools that didn’t graduate students who passed their state bar at certain rates:

Under the first option, a school would have to show that in three or more of the most recent five years, in the jurisdiction in which the largest proportion of the school’s graduates take the bar exam for the first time, they pass the exam above, at or no more than 10 points below the first-time bar passage rates for graduates of ABA-approved law schools taking the bar examination in that jurisdiction during the relevant year. For schools from which more than 20 percent of graduates take their first bar examination in a jurisdiction other than the primary one, the schools also would be required to demonstrate that at least 70 percent of those students passed their bar examination over the two most recent bar exams.

Schools unable to satisfy the first alternative still could comply by demonstrating that 80 percent of all their graduates who take a bar examination anywhere in the country pass a bar examination within three sittings of the exam within three years of graduation.
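The two alternatives read almost like a rule you could code. Here is a sketch of my paraphrase of the standard (the function names, thresholds as decimals, and example numbers are mine, not the ABA’s text):

```python
def complies_option_one(yearly_gaps, out_of_state_share, out_of_state_pass):
    """yearly_gaps: school's first-time pass rate minus the statewide ABA
    rate, in percentage points, for each of the five most recent years.
    'At, above, or no more than 10 points below' reads as gap >= -10."""
    ok_years = sum(1 for gap in yearly_gaps if gap >= -10)
    if ok_years < 3:
        return False
    # Extra requirement when >20% of graduates sit the bar elsewhere.
    if out_of_state_share > 0.20 and out_of_state_pass < 0.70:
        return False
    return True

def complies_option_two(three_sitting_pass_rate):
    """Share of graduates passing a bar within three sittings, within
    three years of graduation."""
    return three_sitting_pass_rate >= 0.80

# A school 12 points below the state rate in three of five years fails
# option one, but could still comply via option two:
gaps = [-12, -12, -12, -8, -5]
print(complies_option_one(gaps, 0.10, None) or complies_option_two(0.83))
```

Seeing the standard as two nested threshold tests makes the later critiques easier to follow: everything turns on how those gap and cutoff numbers behave across states.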

The major critiques I saw of 301-6 focused on its alleged discriminatory effects: “all of the five ABA accredited law schools with the highest African-American enrollment (Howard, Southern, Texas Southern, North Carolina Central, and District of Columbia) would fail to meet the proposed interpretation.” I recently saw an interesting paper by Gary Rosin titled Benchmarking the Bar: No Unity in Difference Scores that seems to provide a race-neutral argument against the standard. From the abstract:

Under ABA proposed Interpretation 301-6, the primary benchmark used to measure the adequacy of a law school’s academic program would be the amount by which its “local” Bar passage rate for first-takers differs from the overall passage rate for all first-takers from ABA-approved law schools. The study used generalized linear modeling as a method to compare Bar “difference scores” of ABA-approved law schools in two states, New York and California. The study found that Bar difference scores in California were significantly more sensitive to changes in law-school relative LSAT scores than were Bar difference scores in New York. Bar difference scores – subtracting the “local” overall ABA Bar passage rate – do not fully adjust for variations in state grading practices, especially differences in minimum passing scores (“cut scores”).

That is, because of state-to-state variation in the slope of the bar passage curve, a standard that uses that curve as a predominant factor in accreditation decisions will have disparate effects. This seemed like a neat finding, but I wondered whether it is possible that the ABA (if it has to be the agency doing this) could correct for this slope problem using a weighting technique of some kind. I asked Gary, and he has kindly permitted me to quote his answer, after the jump.
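Rosin’s slope point can be illustrated with a toy simulation (all numbers invented): if a state’s cut score sits where pass rates hover near 50%, a given difference in school LSAT medians translates into a much bigger swing in difference scores than in a state where nearly everyone passes.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def ncdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1 + erf(x / sqrt(2)))

def diff_score_slope(cut, n_schools=50):
    """Toy model: a school's bar pass rate is the share of graduates whose
    (normally distributed) scores clear the state cut score, and scores
    rise one-for-one with the school's standardized median LSAT."""
    lsat = rng.normal(0, 1, n_schools)
    pass_rate = np.array([1 - ncdf(cut - l) for l in lsat])
    diff = pass_rate - pass_rate.mean()     # "difference score"
    return np.polyfit(lsat, diff, 1)[0]     # sensitivity to LSAT

# A CA-like state (high cut, pass rates near 50%) vs. an NY-like state
# (low cut, pass rates around 90%):
print(diff_score_slope(cut=0.0), diff_score_slope(cut=-1.5))
```

In this toy version the gap in slopes comes entirely from where the cut score sits on the normal curve, which, as I read him, is the core of Rosin’s point: identical schools would generate different difference-score profiles in different states.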

Read More


In Praise of Market Imperfections

You would expect to go out of business if you hired people without knowing whether they could do the job. The same would be true if you had no reliable way of measuring whether they actually were doing the job once hired. Law schools do both of these things. They would prefer to hire second-tier students from elite law schools rather than top students from non-elite schools. Yet the empirical evidence I know of shows that the scholarly production of the non-elites, once hired, is no lower than that of the elites. In fact, since law reviews use credentials as a basis for article selection, non-elites may actually be outperforming elites. Do we have any reliable way to evaluate what the new hires do? Give me a break. We have faculty classroom visits, announced ahead of time, that result in evaluations that could have been written ahead of time – all positive, given the propensity of law professors to shirk institutional responsibilities. And we have student evaluations that largely reflect expected grades. On scholarship, we send the articles to a list of reviewers influenced by the candidate, or just to the regular suppliers of positive letters. Be grateful for market imperfections!


Does the Phillies’ Pennant Mean It’s Good to be a Philadelphia Plaintiff’s Lawyer?

I had the tremendous pleasure of attending yesterday’s 6-1 Phillies victory over the Nationals. In the ninth, the crowd learned of the Mets’ loss (and the consequent, miraculous Phillies clinching of the National League East pennant) about five minutes before the scoreboard posted that result, demonstrating the quick response time of social networks. I screamed my head off, and as a result will be hoarse for class tonight. Ironically, I’m teaching acceptance by silence.

But I didn’t put up this post just to gloat. That would be wrong.

Well after the game, I wondered about the interaction between sports victories and legal decision making. I know there are studies out there that correlate a home-team’s victory with a limited bump in local discretionary spending, and that overall wins (and teams) have negligible effects on economic growth. That makes some sense to me. But sports victories certainly have noneconomic effects. Wins change the atmosphere in cities (like Philadelphia) where there are tightly-connected urban communities. Just to relay an anecdote: this morning, on the subway, I observed someone actually give up their place to a woman transporting two small children. I don’t think that happens on an ordinary day in Philly.

Does winning matter for law? It’s not implausible, and it is relatively easy to test. I bet that jury awards today for prevailing plaintiffs are higher than average, and that judges are slightly less likely to grant summary judgment. (And vice versa. I would not want to open a civil case before a Queens jury today.) Civic noise certainly matters to legal decisionmakers: if the narrative around town is “the underdog has prevailed,” that has got to have some impact on the legal system. All of which is to say: plaintiffs’ lawyers able to choose cases might consider picking clients likely to go to trial in jurisdictions with winning local sports teams.


The Efficient Sports Betting Market Hypothesis

800px-Greenwood_Betting.jpgReader CDP passes along a link to this interesting story from, sort of an intellectual’s Sports Book. The article summarizes some academic literature on the efficiency of the betting market in professional and college-level football games. It’s just a puzzle: the sports betting market, despite being quite liquid and well-researched, isn’t particularly efficient.

Finance professor Richard Borghesi, of Texas State, has done much of the recent work on the problem.

One recent paper shows that the “home underdog” effect is most robust late in the season, when the influx of naive bettors swamps the ability of sophisticated bettors to “fix” the line. Another paper suggests that the betting market is quite slow to react to new, odds-relevant, weather information.
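To make the home-dog claim concrete, here is the shape of the test in code, with made-up games: bet every home underdog against the closing spread, and ask whether the cover rate clears the break-even threshold the bookmaker’s vig imposes.

```python
# Each game: (home_spread, home_margin). A positive spread means the home
# team is the underdog; the bet "covers" if margin + spread > 0.
# These six games are invented for illustration.
games = [(3.5, -2), (7.0, 10), (-4.5, 3), (2.5, 4), (6.0, -8), (1.5, 2)]

home_dog_bets = [(s, m) for s, m in games if s > 0]
covers = sum(1 for s, m in home_dog_bets if m + s > 0)
win_rate = covers / len(home_dog_bets)

# At standard -110 odds, a strategy must cover 11/21 (about 52.4%) of the
# time to break even, so the bar is higher than a coin flip.
print(f"{covers}/{len(home_dog_bets)} covers, win rate {win_rate:.0%}")
```

The academic papers are doing essentially this arithmetic over thousands of real games; the interesting question is why the rule keeps clearing the 52.4% bar.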

Why do such inefficiencies persist? Borghesi argues that the market makers are crooked: bookies are deliberately taking advantage of bettors’ cognitive biases. Perhaps, but as Josh Wright argued here in response to a post of mine about consumer irrationality, such explanations don’t satisfy unless we’ve got a theory explaining why competitors don’t compete away the “irrationality premium.”

So what of the explanation that late-season betting is “too heavy” with amateurs to remain rationally priced? This is odd too: the home-dog effect is well-known, yet it persists as a good strategy. Given all that money lying on the table, why hasn’t Goldman established a private sports betting fund?

The only reason I can think of is that such interventions would be unlawful. Thus, restrictions on gambling, presumably in place to deter fraud, are in fact enabling exploitation of gamblers. We could test the hypothesis by looking at markets where gambling is totally lawful but which have very irrational fan bases. Does the home-dog effect pop up for Premier League soccer games?