Category: Empirical Analysis of Law


This is Your Brain on … the New York Times

A recent NY Times bit talks about “neurorealism,” that is, people’s increased tendency to believe psychological or other scientific assertions when those assertions are accompanied by images from brain scans. The piece quotes Deena Weisberg, who wrote an article in the Journal of Cognitive Neuroscience documenting this empirically (in both laypeople and, if I remember the article correctly, experts, though to a lesser extent), and the neologizer, Eric Racine. The piece mentions a newspaper article “about how high-fat foods activate reward centers in the brain,” and asks, “Couldn’t we have proved that with a slice of pie and a piece of paper with a check box on it?” Brian Leiter also noted the Times piece, with a plug for his paper criticizing legal academics’ use of evolutionary biology.

But the Times bit, and these scholars, conflate two very different points. The first is the “credulousness” issue—that people believe the assertions when accompanied by brain images. That’s an important point, especially in the legal context, where judges, jurors, or policy-makers might be exposed to such scans and misled by such scientific “explanations” of behavior. (Of course, it’s not enormously surprising, given past concerns about jurors’ understanding of complex scientific evidence.)

But that’s quite a different point from the dismissive “check box” question, which criticizes even the usefulness of such neurological research. fMRI and other such scans can of course provide important and useful evidence, and certainly can tell us more than simple self-reports or even other behavioral studies. Matt Lieberman, a psychologist at UCLA [disclosure: we were in grad school together] and one of those most prominently associated with the newish field of social cognitive neuroscience, has addressed this well in answering whether SCN provides something more than conventional social psychology. Summarizing just one of his papers on the issue: he points out that fMRI can provide evidence of “two psychological processes that experientially feel similar and produce similar behavioral results, but actually rely on different underlying mechanisms,” such as memory for social and non-social information. It can document “processes that one would not think rely on the same mechanisms, when in fact they do,” such as the common neurological pathways in the experience of both physical and social pain. And more speculatively, he suggests, as “more is learned about the precise functions of different regions of the brain it may be possible to infer some of the mental processes that an individual is engaged in just from looking at the activity of their brains.” This is an important advantage in overcoming potential difficulties with, for instance, self-report.

There is of course danger in over-selling fMRI and similar neurological evidence—whether evaluating psychiatric patients, capital defendants, or others—and documenting people’s susceptibility to such over-selling is important. But whether such scans can be useful is quite a different question, and dismissing them out of hand is just as obviously a mistake.


Interdisciplinarity, Leiter and the Bluebook

Gordon Smith has a nice summary post of the debate between Brian Leiter, Mary Dudziak and others on whether Brian’s faculty citation rankings accurately measure “impact in legal scholarship.”

The basic framework of the debate is:

Objection: “But you didn’t measure X…”

Leiter: “True. Let a hundred flowers bloom, and do your own data collection!”

(Which strikes me as pretty persuasive.) I wanted to add a different ingredient into the pot. I think Leiter’s rankings mismeasure impact in interdisciplinary scholarship for a reason unrelated to his methodology or its merits. Simply put: the Bluebook itself undervalues interdisciplinary collaborations and thus scholarship.

I’m not nearly the first to observe that the Bluebook’s citation rules have an ideological component. See, e.g., Christine Hurt’s great piece on that very topic. But consider the interaction between Bluebook Rules 15.1 and 16 and Leiter’s study. R. 16 states that the citation of author names in signed law review articles should follow Rule 15.1. R. 15.1 states that when there are two or more authors, you have a choice:

Either use the first author’s name followed by “ET AL.” or list all of the authors’ names. Where saving space is desired, and in short form citations, the first method is suggested . . . . Include all authors’ names when doing so is particularly relevant.

This seems to me to express a pretty strong non-listing preference. The “problem” is that much good interdisciplinary work results from collaborations among more than two authors – it is the nature of the beast. Take, for example, my colleague Jaya Ramji-Nogales’ forthcoming triple-authored article Refugee Roulette: Disparities in Asylum Adjudication, which was front-paged by the Times back in June. Two of the article’s authors are in danger of being ET AL.’ed in many law review footnotes, and consequently ignored in subsequent Leiter citation counts (unless the citing article’s author chooses to mention them by name in the text). This seems like a trivial objection, but it will take on increasing weight over the next ten years as empirical legal studies really comes online in the major law reviews. (Obviously, I’m writing in part because I’ve two articles in the pipeline where I’m a part of three-author teams, and the “et al.” problem is somewhat salient.)

Bluebook editors: I know you are lurking here! Can you fix this silly problem in the 19th edition?


Clarification about Clarify

I recently switched to Stata from SPSS. The choice seemed overdetermined, not least because of the abundance of freeware add-ons for Stata (compared with the pricey programs for SPSS). For example, Clarify, developed by Michael Tomz, Jason Wittenberg and Gary King, makes it easy to estimate predicted probabilities by simulating data, a highly useful technique, especially when graphed. (I first learned about Clarify in the Martin/Epstein legal-empirical methods stats camp.) Going through this kind of work by hand is a hassle, as my co-authors and I learned when writing Docketology.

I’ve a question about the software that seemed unanswered by the documentation, and I thought there was a chance (a slim one) that it might be something our readers could answer. Ordinarily, when estimating a model that contains two or more mutually exclusive dummies, you are supposed to omit one as a comparison. Is that true when using the estsimp command in Clarify? Or, because the assumption is that omitted variables are set to their mean, should you specify a value for all variables (and thus include all of the dummies in the set)?
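To make the question concrete, here’s roughly what I have in mind (a minimal sketch; the variable names are hypothetical):

    * binary outcome y, continuous predictor x, and three mutually
    * exclusive dummies d1-d3; the standard approach omits d3
    estsimp logit y x d1 d2
    * simulate the predicted probability for the d1 category, with x
    * at its mean and each included dummy set explicitly
    setx x mean d1 1 d2 0
    simqi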

Hope that makes sense! Additionally, if anyone else has experiences with Clarify or questions about it to share, consider this an open forum.

A Positive Externality of Surveillance

I’ve been skimming the first chapter of Randall Collins’s Violence: A Micro-Sociological Theory, and came across this interesting perspective on an unexpected benefit of a high-surveillance society:

Violence as it actually becomes visible in real-life situations is about the intertwining of human emotions of fear, anger, and excitement, in ways that run right against the conventional morality of normal situations. It is just this shocking and unexpected quality of violence, as it actually appears in the cold eye of the camera, that gives a clue to the emotional dynamics at the center of a micro-situational theory of violence.

We live in an era in which our ability to see what happens in real-life situations is far greater than ever before. . . . The video revolution has made available much more information about what happens in violent situations than ever before.


Technologies of recording real-life conflict are useful for a series of reasons: they can provide us details that we otherwise wouldn’t see at all, that we were not prepared to look at, or did not know were there; they can give us a more analytical stance, more detached from the everyday perceptual gestalts and the clichés of conventional language for talking about violence.

Collins’s observations here remind me of a recent discussion in my admin class on the inevitably value-laden nature of most verbal characterizations of situations. We discussed the simple statement “Jack pushed John.” The key word here–push–carries with it all manner of charged associations. The types of images that can spring to mind from such a description are diverse. Perhaps only a video of the event can “tell the truth.”

On the other hand, co-blogger Dave Hoffman has argued that, even in video evidence, “we all see what we want to see; behavioral biases like attribution and availability lead to individualized view of events.”



Bar Passage & Accreditation: The “Neutral” Case Against Standards

Back in August, the ABA withdrew proposed interpretive standard 301-6, which would have de-accredited schools that didn’t graduate students who passed their state bar at certain rates:

Under the first option, a school would have to show that in three or more of the most recent five years, in the jurisdiction in which the largest proportion of the school’s graduates take the bar exam for the first time, they pass the exam above, at or no more than 10 points below the first-time bar passage rates for graduates of ABA-approved law schools taking the bar examination in that jurisdiction during the relevant year. For schools from which more than 20 percent of graduates take their first bar examination in a jurisdiction other than the primary one, the schools also would be required to demonstrate that at least 70 percent of those students passed their bar examination over the two most recent bar exams.

Schools unable to satisfy the first alternative still could comply by demonstrating that 80 percent of all their graduates who take a bar examination anywhere in the country pass a bar examination within three sittings of the exam within three years of graduation.
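To see how mechanical the first option is, here is a rough sketch of it in Stata (the variable names are hypothetical, and I ignore the 20-percent multi-jurisdiction wrinkle):

    * one observation per school per year: local_rate is the school's
    * first-time pass rate in its primary jurisdiction, aba_rate the
    * overall first-time rate for ABA-approved schools there
    gen meets_year = (local_rate >= aba_rate - 10)
    * compliance requires clearing the benchmark in 3+ of 5 years
    bysort school: egen years_met = total(meets_year)
    gen complies = (years_met >= 3)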

The major critiques I saw of 301-6 focused on its alleged discriminatory effects: “all of the five ABA accredited law schools with the highest African-American enrollment (Howard, Southern, Texas Southern, North Carolina Central, and District of Columbia) would fail to meet the proposed interpretation.” I recently saw an interesting paper by Gary Rosin titled Benchmarking the Bar: No Unity in Difference Scores that seems to provide a race-neutral argument against the standard. From the abstract:

Under ABA proposed Interpretation 301-6, the primary benchmark used to measure the adequacy of a law-school’s academic program would be the amount by which its “local” Bar passage rate for first-takers differs from the overall passage rate for all first-takers from ABA-approved law schools. The study used generalized linear modeling as a method to compare Bar “difference scores” of ABA-approved law-schools in two states, New York and California. The study found that Bar difference scores in California were significantly more sensitive to changes in law-school relative LSAT scores than were Bar difference scores in New York. Bar difference scores – subtracting the “local” overall ABA Bar passage rate – do not fully adjust for variations in state grading practices, especially differences in minimum passing scores (“cut scores”).

That is, because of state-to-state variation in the slope of the bar passage curve, a standard that uses that curve as a predominant factor in accreditation decisions will have disparate effects. This seemed like a neat finding, but I wondered whether it is possible that the ABA (if it has to be the agency doing this) could correct for this slope problem using a weighting technique of some kind. I asked Gary, and he has kindly permitted me to quote his answer, after the jump.
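For readers who want the mechanics, the difference-score benchmark and Rosin’s comparison look roughly like this (a sketch in Stata, with plain OLS standing in for his generalized linear models; the variable names are hypothetical):

    * diff_score: a school's first-time pass rate in its primary
    * state minus the overall ABA first-time rate in that state
    gen diff_score = local_rate - state_aba_rate
    * Rosin's question: is the slope on relative LSAT steeper in CA?
    gen ca = (state == "CA")
    gen lsat_x_ca = rel_lsat * ca
    regress diff_score rel_lsat ca lsat_x_ca
    * a significant coefficient on lsat_x_ca means the California
    * curve is steeper, so a uniform benchmark bites harder there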



In Praise of Market Imperfections

You would expect to go out of business if you hired people without knowing if they could do the job. And the same would be true if you had no reliable way of measuring whether they actually were doing the job once they were hired. Law schools do both of these. They would prefer to hire second-tier students from elite law schools rather than top students from non-elite schools. Yet the empirical evidence I know of shows that the scholarly production of the non-elites, once hired, is no lower than that of the elites. In fact, since law reviews use credentials as a basis for article selection, non-elites may actually be outperforming elites. Do we have any reliable way to evaluate what the new hires do? Give me a break. We have faculty classroom visits announced ahead of time that result in evaluations that could have been written ahead of time – all positive, given the propensity of law professors to shirk from institutional responsibilities. And we have student evaluations that largely reflect expected grades. On scholarship, we send the articles to a list of reviewers influenced by the candidate, or just to the regular suppliers of positive letters. Be grateful for market imperfections!


Does the Phillies’ Pennant Mean It’s Good to be a Philadelphia Plaintiff’s Lawyer?

I had the tremendous pleasure of attending yesterday’s 6-1 Phillies victory over the Nationals. In the ninth, the crowd learned of the Mets’ loss (and the consequent, miraculous Phillies clinching of the National League East pennant) about five minutes before the scoreboard posted that result, demonstrating the quick response time of social networks. I screamed my head off, and as a result will be hoarse for class tonight. Ironically, I’m teaching acceptance by silence.

But I didn’t put up this post just to gloat. That would be wrong.

Well after the game, I wondered about the interaction between sports victories and legal decision making. I know there are studies out there that correlate a home team’s victory with a limited bump in local discretionary spending, and that overall wins (and teams) have negligible effects on economic growth. That makes some sense to me. But sports victories certainly have noneconomic effects. Wins change the atmosphere in cities (like Philadelphia) with tightly-connected urban communities. Just to relay an anecdote: this morning, on the subway, I observed someone actually give up their place to a woman transporting two small children. I don’t think that happens on an ordinary day in Philly.

Does winning matter for law? It’s not implausible, and it is relatively easy to test. I bet that jury awards today for prevailing plaintiffs are higher than average, and that judges are slightly less likely to grant summary judgment. (And vice versa: I would not want to open a civil case before a Queens jury today.) Civic noise certainly matters to legal decisionmakers: if the narrative around town is “the underdog has prevailed,” that has got to have some impact on the legal system. All of which is to say: plaintiffs’ lawyers able to choose cases might consider picking clients likely to go to trial in jurisdictions with winning local sports teams.
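If someone wants to run it, the test is little more than this (a sketch; the variable names are hypothetical, with county fixed effects doing most of the work):

    * one observation per verdict: log_award is the logged jury
    * award, home_win is 1 if the local team won the day before
    areg log_award home_win, absorb(county) robust
    * the summary judgment version: grants regressed on home_win
    probit sj_granted home_win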


The Efficient Sports Betting Market Hypothesis

Reader CDP passes along a link to this interesting story from a site that is sort of an intellectual’s Sports Book. The article summarizes some academic literature on the efficiency of the betting market in professional and college-level football games. It’s a genuine puzzle: the sports betting market, despite being quite liquid and well-researched, isn’t particularly efficient.

Finance professor Richard Borghesi, of Texas State, has done much of the recent work on the problem.

One recent paper shows that the “home underdog” effect is most robust late in the season, when the influx of naive bettors swamps the ability of sophisticated bettors to “fix” the line. Another paper suggests that the betting market is quite slow to react to new, odds-relevant weather information.
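For concreteness, checking the home-dog effect in a season of game data is nearly a one-liner (a sketch; the variables are hypothetical, the week cutoff is arbitrary, and 52.4 percent is the break-even cover rate at standard -110 odds):

    * cover = 1 if a bet on the home underdog beat the closing
    * spread; test whether late-season home dogs cover often
    * enough to profit at -110 odds
    prtest cover == .524 if home_dog == 1 & week >= 14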

Why do such inefficiencies persist? Borghesi argues that the market makers are crooked: bookies are deliberately taking advantage of bettors’ cognitive biases. Perhaps, but as Josh Wright argued here in response to a post of mine about consumer irrationality, such explanations don’t satisfy unless we’ve got a theory explaining why competitors don’t compete away the “irrationality premium.”

So what of the explanation that late-season betting is “too heavy” with amateurs to remain rationally priced? This is odd too: the home-dog effect is well-known, yet it persists as a good strategy. Given all that money lying on the table, why hasn’t Goldman established a private sports betting fund?

The only reason I can think of is that such interventions would be unlawful. Thus, restrictions on gambling, presumably in place to deter fraud, are in fact enabling exploitation of gamblers. We could test the hypothesis by looking at markets where gambling is totally lawful but which have very irrational fan bases. Does the home-dog effect pop up for Premier League soccer games?



I’ve been working on a business for when I get tired of being a law professor. False memories. There’s a huge potential market. Everyone has missing pages in the scrapbook, things we’ve always wanted to do but never managed — that grand April affair in Paris, climbing K-2, or perhaps just nobly and diligently overcoming some childhood adversity. False memories have a bad name in law: we don’t like it when a victim remembers abuse that never happened, or an eye-witness realizes that the short Black defendant is the tall White gunman he saw pull the trigger. But why not harness that power for good? My idea is to help people recover detailed memories of things that, if you want to be technical about it, never actually happened. From the point of view of present emotional value, a false memory is just as good as a real one, so why confine your remembrance of things past to that poor parade of things that actually passed you?

Well, I thought this was a pretty good idea, until last week, when a New York Times editorial reminded me that this sort of fantasy is already a mainstream business. Working in public health law, I should have realized a long time ago that most of what passes for the facts beneath our health policy are, in fact, things we know for sure that just ain’t so. (Wait, I just recovered a memory of having this precise insight fifteen years ago, during a magical week in Paris.) Anyway, in this editorial, the Times catalogued the myths that shape health care politics in America today. Here’s a bit:

Seven years ago, the World Health Organization made the first major effort to rank the health systems of 191 nations. France and Italy took the top two spots; the United States was a dismal 37th. More recently, the highly regarded Commonwealth Fund has pioneered in comparing the United States with other advanced nations through surveys of patients and doctors and analysis of other data. Its latest report … ranked the United States last or next-to-last compared with five other nations — Australia, Canada, Germany, New Zealand and the United Kingdom — on most measures of performance, including quality of care and access to it.



Law Review Forum at ELS Blog

The Empirical Legal Studies blog has hosted a great forum over the last few days, evaluating the Nance-Steinberg paper on law review submission practices. The first post is here, and there are eight others, featuring comments by Christine Hurt, Christopher Zorn, Ahmed Taha, and Ben Barton, among others, as well as the ELS regulars. It has been a remarkable discussion. Check it out.