Category: Empirical Analysis of Law


Are We Really Growing “More Divided” By Party Over Time?

Over at the Cultural Cognition Blog, I’ve written a bit about new evidence on partisan division.  The headline news is that partisanship is a better predictor of cultural division than it used to be.  But as I read the data, the undernews is that we’re actually no more divided than we used to be on common ideological and cultural measures.  Given all that’s happened in the last quarter-century – including media differentiation, the digital revolution and 24-hour news cycle, more bowling alone, and sprawl – isn’t that kind of a huge deal? The fact that partisan self-identification is a better predictor of cultural views than it used to be simply means that the parties are cohering better.  That might be bad for the functioning of our particular form of representative government, but it doesn’t mean that we’re drifting apart as a country.


A Taxonomy of Federal Litigation

For the last two years, Christy Boyd and I, along with some friends, have been working on a paper on how attorneys construct complaints.  The project began when we were coding some other detritus of federal litigation and decided to collect the causes of action in complaints, so that we could understand the legal issues in our cases better than NOS codes alone permitted.  Soon enough, we got to thinking that our causes of action were pled in distinctively patterned ways.  Obviously, this isn’t an earth-shaking insight, as most first-year students have thought, at one time or another, that each of their classes’ exam fact patterns could easily substitute for any other.  That is: causes of action are alternative, mutually complementary theories that channel a limited number of fact patterns into claims to legal relief.  Everyone knows that contract and tort claims are pled together, and that constitutional claims come accompanied by state law torts.  But we thought it’d be worthwhile to nail down this insight using an analysis very similar to the one that enables Amazon to tell you which books you might like — i.e., if you plead a particular cause of action, what other causes of action are you likely to bring in the same case?

We gathered a set of 2,500 complaints (from a much larger sample of federal complaints derived through RECAP).  The complaints were sampled to be fairly representative of all federal litigation, excluding pro se, social security, and prisoner petition cases.  The sample contained 11,500 individual causes of action – around 4.6 causes of action per case.  Guided by co-authors at Temple’s Center for Data Analytics, we used spectral clustering to examine the relationships between causes of action.  Two years later and, presto, a draft paper is up on SSRN!  The ungainly title is Building a Taxonomy of Litigation: Clusters of Causes of Action in Federal Complaints.  I welcome your comments, and your suggestions for a better title.  Follow me after the jump for an exploration of our findings.
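For readers curious what the clustering step looks like in practice, here is a minimal sketch (not our actual pipeline) of the Amazon-style idea: build a cause-by-cause co-occurrence matrix from complaints and run spectral clustering over it.  The cause-of-action labels and the tiny matrix below are invented purely for illustration.

```python
# Minimal sketch: cluster causes of action by how often they are pled together.
# All data and labels here are invented; the paper's pipeline is more involved.
import numpy as np
from sklearn.cluster import SpectralClustering

causes = ["breach of contract", "fraud", "negligence",
          "constitutional (1983)", "state law tort"]

# Toy complaint-by-cause matrix: 1 means the cause was pled in that complaint.
X = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
])

# Cause-by-cause co-occurrence counts serve as the affinity matrix.
A = X.T @ X
np.fill_diagonal(A, 0)  # ignore a cause's co-occurrence with itself

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
for cause, label in zip(causes, labels):
    print(f"{cause}: cluster {label}")
```

The real analysis obviously has to grapple with a far larger and noisier affinity structure, but the intuition is the same: causes of action that are routinely pled together end up in the same cluster.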

Read More


Cultural Cognition and the Trayvon Martin Case

Over at the Cultural Cognition blog, Dan Kahan has two posts up, with a third promised, on the Trayvon Martin case. In the first, Dan argued that motivated cognition helps to explain why we disagree so vehemently about the facts of the Martin-Zimmerman incident. Indeed, he claimed that “we’ll never know what happened, because we—the members of our culturally pluralistic society—have radically different understandings of what a case like this means.”

In his second post, he connects the shooting to the history of stand-your-ground laws – and the NRA’s successful strategy to combine self-defense norms with gun rights.  Arguing that turning Martin’s death into a discussion of the empirics of gun violence is exactly what the NRA would like, he urges commentators “to just back off.  Not only are you needlessly sowing division; you are destroying the prospects for a meaningful conversation of the values that—despite our cultural differences—in fact unite us.”

As is so often the case, Dan offers a subtle and compelling argument for the relevance of motivated cognition in understanding public policy.  I’ve actually been toying with writing a similar post – but it wouldn’t have been nearly as well-executed. So I hope you’ll go to the CCP blog and read what he’s written – it might cause you to rethink your priors on the tragedy in Florida. Then please come back for a few further thoughts.

Read More


Measurable Things

The Misleadingly Convenient Source of Information

A common criticism one reads of ELS is that “too much of the work is driven by the existence of a data set, rather than an intellectual or analytical point.”  It’s ironic that this is the very critique that the realists made of traditional legal scholarship. Consider the great Llewellyn:

“I am a prey, as is every man who tries to work with law, to the apperceptive mass.  I see best what I have learned to see.  I am a prey, too — as are the others — to the old truth that the available limits vision, the available bulks as if it were the whole.  What records have I of the work of magistrates?  How shall I get them?  Are there any?  And if there are, must I search them out myself?  But the appellate courts make access to their work convenient.  They issue reports, printed, bound, to be had all gathered for me in the libraries.  The convenient source of information lures.  Men work with it, first, because it is there; and because they have worked with it, men build it into ideology.  The ideology grows and spreads and gains acceptance, acquires a force and an existence of its own, becomes a thing to conjure with: the rules and concepts of the courts of last resort.”

Or to put it differently, all of our work – quantitative empiricists, doctrinalists, corporate finance wizards, administrative regulation parsers, legal philosophers, and derivative social psychologists alike – is driven by the materials at hand. For most lawyers and legal academics, appellate opinions are the most convenient pieces of information available; we use such opinions to create mental models of what the “law” is, and (ordinarily in legal scholarship) what it ought to be. Indeed, when trial court opinions are cited, they are often discounted as aberrant or transitory, in part because they are known to be unrepresentative!

Why, you might wonder, is the convention of data-driven scholarship a particular problem in quantitative empirical work? ELS’s detractors make three interrelated claims:

Read More


Greiner and Pattanayak: The Sequel

In a draft essay, Service Delivery, Resource Allocation and Access to Justice: Greiner and Pattanayak and the Research Imperative, Tony Alfieri, Jeanne Charn, Steve Wizner, and I reflect on Jim Greiner and Cassandra Pattanayak’s provocative article reporting the results of a randomized controlled trial evaluating legal assistance to low-income clients at the Harvard Legal Aid Bureau. (The Greiner and Pattanayak article was the subject of a Concurring Opinions symposium last March.) Studying the outcomes of appeals from initial denials of unemployment insurance benefit claims, Greiner and Pattanayak asked, what difference does legal representation make? Their answer is that “an offer of HLAB representation had no statistically significant effect on the probability that a claimant would prevail, but that the offer did delay the adjudicatory process.” That is, not only was an offer of legal assistance immaterial to the case outcome, it may have harmed clients’ interests.
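For readers who want to see what the underlying comparison looks like, here is a minimal intent-to-treat sketch.  The counts are invented, not the study’s data; the point is only that one compares win rates between claimants randomized to an offer of representation and those who were not.

```python
# Hypothetical intent-to-treat comparison of prevail rates by randomized offer.
# The counts below are invented and are NOT the Greiner/Pattanayak data.
from statsmodels.stats.proportion import proportions_ztest

prevailed = [86, 82]     # claimants who won benefits: [offer arm, no-offer arm]
randomized = [100, 100]  # claimants randomized to each arm

z, p = proportions_ztest(prevailed, randomized)
print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value: no detectable effect of the offer
```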

The Greiner and Pattanayak findings challenge our intuition, experience, and deeply held professional belief that lawyer representation of indigent clients in civil matters is fundamental to the pursuit of justice. Our first reaction is that the study must have fatal conceptual or methodological flaws – the researchers studied the wrong thing in the wrong way. Even when we learn that the study is credible and well designed, we doubt that this kind of research is a worthwhile use of our time or money relative to serving needy clients. Finally, and perhaps most importantly, we worry that the published results will only serve as fodder for the decades-long political assault on legal services for the poor.

If replicated across venues, however, studies like Greiner and Pattanayak’s can tell us a great deal about individual representation, program design and systemic access to justice questions. In fact, we cannot make genuine progress in any of these areas – much less marshal the case for more robust legal aid investments and the right to counsel in some civil cases – without better evidence of when, where and for whom representation makes a difference. Fortunately, developments in law schools, the professions and a growing demand for evidence-driven policymaking provide support, infrastructure and incentive for such research. For these reasons, we urge legal services lawyers and clinical law professors to collaborate in an expansive, empirical research agenda.



Dockets and Data Breach Litigation

Alessandro Acquisti, Sasha Romanosky, and I have a new draft up on SSRN, Empirical Analysis of Data Breach Litigation.  Sasha, who’s really led the charge on this paper, has presented it at many venues, but this draft is much improved (and is the first public version).  From the abstract:

In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
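For readers who don’t work with logit models, here is a minimal sketch of how an odds-ratio statement like “3.5 times greater” falls out of a binary-outcome regression.  The variables and simulated data below are hypothetical stand-ins, not the paper’s dataset.

```python
# Sketch of a binary-outcome (logit) regression producing odds ratios.
# Data and variable names are simulated/hypothetical, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "financial_harm": rng.integers(0, 2, n),     # 1 if individuals suffered financial harm
    "credit_monitoring": rng.integers(0, 2, n),  # 1 if the firm offered free credit monitoring
})
# Simulate whether each breach leads to a federal suit.
linear_pred = -1.0 + 1.25 * df["financial_harm"] - 1.8 * df["credit_monitoring"]
df["sued"] = (rng.random(n) < 1 / (1 + np.exp(-linear_pred))).astype(int)

model = smf.logit("sued ~ financial_harm + credit_monitoring", data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients are odds ratios
```

Exponentiating a logit coefficient gives the multiplicative change in the odds of suit associated with that covariate, which is how figures like “3.5 times greater” and “6 times lower” are read.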

A few thoughts follow after the jump.

Read More


R.I.P. Larry Ribstein

Larry Ribstein, who died earlier this week, was a galvanic force as a scholar and blogger.  I join those who’ve expressed sadness and loss at his untimely passing.  I figured I’d add two comments.

First, as others have commented, Larry always told you when he thought you were being an idiot.  When I presented one of my early empirical papers at an otherwise warm-and-friendly Canadian Law and Economics conference, Larry provided comments from the audience that had me wanting to go back to running fire drills at Cravath.  My god, how he schooled me!  But he was basically right, and it was business, not personal.  Some years later, he provided crucial encouragement on a new (better?) empirical paper.  Praise felt twice as good coming from him.  What a teacher he must have been!

Second, I’ve recently read his book (coauthored with Erin O’Hara) The Law Market.  I think it’s simply amazing – provocative, and in some ways as mind-opening as Stuntz’s Collapse of American Criminal Justice.  Law and economics has lost a great and unique voice.


CELS VI: Half a CELS is Statistically Better Than No CELS

Northwestern's Stained Glass Windows Made Me Wonder Whether Some Kind of Regression Was Being Proposed

As promised, I’m filing a report from the Sixth Annual Empirical Studies Conference, held 11/4-11/5 at Northwestern Law School.  Several of the attendees at the Conference approached me and remarked on my posts from CELS V, IV, and III. That added pressure, coupled with missing half of the conference due to an unavoidable conflict, has delayed this post substantially.  Apologies!  Next time, I promise to attend from the opening ceremonies until they burn the natural law figure in effigy.  Next year’s conference is at Stanford.  I’ll make a similar offer to the one I’ve made in the past: if the organizing committee pays my way, I promise not only to blog the whole thing, but to praise you unstintingly.  Here’s an example: I didn’t observe a single technical or organizational snafu at Northwestern this year.  Kudos to the organizing committee: Bernie Black, Shari Diamond, and Emerson Tiller.

What I saw

I arrived Friday night in time for the poster session.  A few impressions.  Yun-chien Chang’s Tenancy in ‘Anticommons’? A Theoretical and Empirical Analysis of Co-Ownership won “best poster,” but I was drawn to David Lovis-McMahon & N.J. Schweitzer’s Substantive Justice: How the Substantive Law Shapes Perceived Fairness.  Overall, the trend toward professionalization in poster display continues unabated.  Even Ted Eisenberg’s poster was glossy & evidenced some post-production work — Ted’s posters at past sessions were, famously, not as civilized. Gone are the days when you could throw some PowerPoint slides onto a board and talk about them over a glass of wine!  That said, I’m skeptical about poster sessions generally.  I would love to hear differently from folks who were there.

On Saturday, bright-eyed and caffeinated, I went to a Juries panel, where I got to see three pretty cool papers.  The first, by Mercer/Kadous, was about how juries are likely to react to precise/imprecise legal standards.  (For a previous version, see here.) Though the work was nominally about auditing standards, it seemed generalizable to other kinds of legal rules.  The basic conclusion was that imprecise standards increase the likelihood of plaintiff verdicts, but only when the underlying conduct is conservative but deviates from industry norms.  By contrast, if the underlying conduct is aggressive, jurors return fewer pro-plaintiff verdicts.  Unlike most such projects, the authors permitted a large number of mock juries to deliberate, which added a degree of external validity.  Similarly worth reading was Lee/Waters’ work on jury verdict reporters (bottom line: reporters aren’t systematically pro-plaintiff, as the CW suggests, but they are awfully noisy measures of what juries are actually doing).  Finally, Hans/Reyna presented some very interesting work on the “gist” model of jury decisionmaking.

At 11:00, I had to skip a great paper by Daniel Klerman whose title was worth the price of admission alone – The Selection of Thirteenth-Century Disputes for Litigation.  Instead, I went to Law and Psychology III.  There, Kenworthey Bilz presented Crime, Tort, Anger, and Insult, a paper which studies how attribution & perceptions of dignitary loss mark a psychological boundary between crime and tort cases.  Bilz presented several neat experiments in service of her thesis, among them a priming survey – people primed to think about crimes complete the word “ins-” as “insult,” while people primed to think about torts complete it as “insurance.”  (I think I’ve got that right – the paper isn’t available online, and I’m drawing on two-week-old memories.)

At noon, Andrew Gelman gave a fantastic presentation on the visualization of empirical data.  The bottom line: wordles are silly and convey no important information.  Actually, Andrew didn’t say that.  I just thought that coming in.  What Andrew said was something more like “can’t people who produce visually interesting graphs and people who produce graphs that convey information get along?”

Finally, I was the discussant at an Experimental Panel, responding to Brooks/Stremitzer/Tontrup’s Framing Contracts: Why Loss Framing Increases Effort.  Attendees witnessed my ill-fated attempt to reverse the order of my presentation on the fly, leading me to neglect the bread in the praise sandwich.  This was a good teaching moment about academic norms. My substantive reaction to Framing Contracts is that it was hard to know how much the paper connected to real-world contracting behavior, since the kinds of decision tasks that the experimental subjects were asked to perform were stripped of the relational & reciprocal norms that characterize actual deals.

CELS: What I missed

The entire first day!  One of my papers with the cultural cognition project, They Saw a Protest, apparently came off well.  Of course, there was also tons of great stuff not written from within the expanding cultural cognition empire.  Here’s a selection: on lawyer optimism; on public housing, enforcement and race; on probable cause and hindsight judging; and several papers on Iqbal, none of which appear to be online.

What did you see & like?


Reversal Rates, Reconsidered

What is the meaning of an appellate court’s “reversal rate”?  Opinions vary.  (My view, expressed succinctly, is “basically nothing.”) However conceived, we ought to at least be measuring reversal correctly.  But two lawyers at Hangley Aronchick, a Philadelphia law firm, think that scholars (and journalists) have conceptualized reversal in entirely the wrong way.

According to John Summers and Michael Newman, we’ve forgotten that when the Supreme Court takes a case, it implicitly also passes on shadow cases from other circuits that have ruled on the same issue — that is, the Supreme Court doesn’t just “reverse” the circuit on direct appeal, it also affirms (or reverses) coordinate circuits while resolving a split.  Thus, both our numerator and our denominator have been wrong.  They’ve written up the results of this pretty interesting approach to reversal in a paper you can find blurbed here.  Among the highlights: (1) reversal is less common than is commonly supposed; (2) the Court doesn’t predictably follow the majority of circuits; (3) there are patterns of concordance between circuits in analyzing issues; and (4) even under the new approach, the Ninth Circuit is still the least loyal agent of the Supreme Court.
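To make the bookkeeping concrete, here is a toy calculation under my reading of their approach; every number below is invented.

```python
# Toy illustration (invented numbers) of a conventional vs. split-adjusted
# reversal rate, under my reading of the Summers/Newman approach: when the
# Court resolves a split, circuits on the losing side are implicitly reversed
# and circuits on the winning side implicitly affirmed.

# Conventional accounting: only the circuit directly reviewed counts.
direct_reversals, direct_cases = 70, 100
print(f"Conventional rate: {direct_reversals / direct_cases:.0%}")

# Split-adjusted accounting: the same decisions also pass on shadow rulings.
# Say those 100 decisions implicated 150 coordinate-circuit rulings, 60 of
# which matched the outcome the Court ultimately reached.
shadow_rulings, shadow_affirmed = 150, 60
adjusted_reversals = direct_reversals + (shadow_rulings - shadow_affirmed)
adjusted_cases = direct_cases + shadow_rulings
print(f"Split-adjusted rate: {adjusted_reversals / adjusted_cases:.0%}")
```

On these made-up figures the rate falls from 70% to 64%, which is the flavor of their first highlight: once shadow affirmances are counted, reversal looks less common than the conventional numbers suggest.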

I think that this method has real promise, and I bet that folks who are interested in judicial behavior will want to check it out.


In Praise of Complexity

Earlier this month, right here on this very blog, Dave Hoffman pontificated about two of my favorite subjects: empirical legal studies and baseball. Primarily, Dave wondered whether empirical legal research might face the same problem as sabermetric baseball analysis: inaccessible complexity. I won’t rehash his argument because he did a very good job of explaining it in the original post. Although I completely agree with his conclusion that empirical legal studies should seek to be more accessible (a point I always note when introducing my empirical work), I disagree with his contention that empirical legal studies might face widespread incomprehensibility due to growing complexity. Because I think it is a helpful analogy, I’ll borrow Dave’s example of advanced statistics in baseball.

Read More