CELS VIII: Data is Revealing, Part 1.

 

"If you are going to mine my data, at least have the courtesy of displaying predictive probabilities!"

[This is part 1 of my recap of the Penn edition of CELS, promised here. For previous installments in the CELS recap series, see CELS III, IV, V, VI, and VII.]

Barry Schwartz might’ve designed the choice set facing me at the opening of CELS. Should I go to Civil Procedure I (highlighted by a Dan Klerman paper discussing the limits of Priest-Klein selection), Contracts I (where Yuval Feldman et al. would present on the relationship between contract clause specificity and compliance), or Judicial Decisionmaking and Settlement (another amazing Kuo-Chang Huang paper)? [I am aware, incidentally, that for some people this choice would be Morton’s. But those people probably weren’t the audience for this post, were they?] I bit the bullet and went to Civ Pro, on the theory that it’d be a highly contentious slugfest between heavyweights in the field, throwing around words like “naive” and “embarrassing.” Or, actually, I went hoping to learn something from Klerman, which I did. The slugfest happened after he finished.

In response to a new FJC paper on pleading practices, a discussant and a subsequent presenter criticized the FJC’s work on Twiqbal. The discussant argued that the FJC’s focus on the realities of lawyers’ practice was irrelevant to the Court’s power-grab in Twombly, and that pleading standards mattered infinitely more than pleading practice.  The presenter argued that the FJC committed methodological error in their important 2011 survey, and that their result (little effect) was misleading. The ensuing commentary was not restrained. Indeed, it felt a great deal like the infamous CELS death penalty debate from 2008. One constructive thing did come out of the fire-fight: the FJC’s estimable Joe Cecil announced that he would be making the FJC’s Twombly dataset available to all researchers through Vandy’s Branstetter program. We’ll all then be able to replicate the work done, and compare it to competing coding enterprises. Way to go, Joe!

But still, it was a tense session. As it was wrapping up, an economically trained empiricist in the room commented on how much fun he had found it and how he hoped to see more papers on the topic of Twombly in the future. I’d been silent to that point, but it was time to say something. Last year in this space I tried being nice: “My own view would go further: is Twiqbal’s effect as important a problem as the distribution of CELS papers would imply?” This year I was, perhaps impoliticly, more direct.

I conceded that analyzing the effect of Twombly/Iqbal wasn’t a trivial problem. But if you had to make a list of the top five most important issues in civil procedure that data can shed light on, it wouldn’t rank.* I’m not sure it would crack the top ten.  Why then have Twiqbal papers eaten market share at CELS and elsewhere since 2011? Some hypotheses (testable!) include: (1) civil procedure’s federal court bias; (2) giant-killing causes publication, and the colossi generally write normative articles praising transsubstantive procedure and consequently hate Twombly; (3) network effects; and (4) it’s where the data are. But these are bad reasons. Everyone knows that there is too much work on Twombly. We should stop spending so much energy on this question. It is quickly becoming a dead end.

So I said much of that, and got several responses. One person suggested that a good defense of the Twiqbal fixation was that it provided a focal point to organize our research and thus build an empirical community. Another suggested that even if law professors were Twiqbal-focused, the larger empirical community was not (yet) aware of the importance of pleadings, so more attention was beneficial. And the rest of the folks seemed to give me the kind of dirty look you give the person who blocks your view at a concert. Sit down! Don’t you see the show is just getting started?

Anyway, after that bit of theatre, I was off to a panel on Disclosure. I commented (PPT deck) on Sah/Loewenstein, Nothing to Declare: Mandatory and Voluntary Disclosure Leads Advisors to Avoid Conflicts of Interest. This was a very, very good paper, in the line of disclosure papers I’ve previously blogged here. The innovation was that advisors were permitted to walk away from conflicts instead of being assigned to them immutably. This one small change cured disclosure’s perverse effect. Rather than being morally licensed by disclosure to lie, cheat, and steal, advisors free to avoid conflicts were chastened by disclosure just as plain-vanilla Brandeisian theory would’ve predicted. In my comments, I encouraged Prof. Sah to think about what would happen if advisors’ rewards in the COI were paid to a third party instead of to them personally, since I think that’s the more legally relevant policy problem. Anyway, the paper is definitely worth your time to read.

Then it was off to the reception. Now, as our regular readers know, the cocktail party/poster session is a source of no small amount of stress. On the one hand, it’s a concern for the organizers. Will the food be as good as the legendary CELS@Yale? The answer, surprisingly, was “close to it”, headlined by some grapes at a cheese board which were the size of small apples and tasted great.  Also, very little messy finger food, which is good because the room is full of the maladroit.  But generally, poster sessions are terribly scary for those socially awkward introverts in the crowd. Which is to say, the crowd. In any event, I couldn’t socialize because I had to circle the crowd for you. Thanks for the excuse!

How about those posters? I’ll highlight two. The first was a product of Ryan Copus and Cait Unkovic of Boalt’s JSP program. They automated text processing of appellate opinions and found significant judge-level effects on whether a panel reverses the district court’s opinion, as well as strong effects on the decision to designate an opinion for publication in the first instance. That was neat. But what was neater was the set of judicial baseball cards, complete with bubble gum and a judge-specific stat pack, that they handed out. My pack included Andrew Kleinfeld, a Ninth Circuit judge who inspired me to go to law school. The second was a poster on the state appellate courts by Thomas Cohen of the AO. The noteworthy findings were: (1) a very low appeal-to-merits rate; and (2) a higher reversal rate for plaintiff wins than for defendant wins at trial. Overall, the only complaint I’d make about the posters was that they weren’t clearly organized in the room by topic area, which would have made it easier to know where to spend time. Also, the average age of the poster presenters was younger than that of the paper presenters, while the average quality appeared as high or higher. What hypotheses might we formulate to explain that distribution?

That was all for Day 1. I’ll write about Day 2, which included contracts, international law, and legal education sessions, in a second post.

 

*At some point, I’ll provide a top ten list.  I’m taking nominations.  If it has federal court in the title, you are going to have to convince me.


6 Responses

  1. Charles OConnor says:

    They automated text processing of appellate opinions and find significant judge-level effects on whether the panel reverses the district court’s opinion, as well as strong effects for the decision to designate an opinion for publication in the first instance

    In plain English, what does this mean?

  2. Dave Hoffman says:

    1. They used a computer program to pull information from judicial opinions instead of coding them by hand.

2. They figured out that if a particular judge is on a panel, that panel is much more likely to reverse, holding many other factors constant.

    3. Panels with particular judges are much more likely to designate their opinions for publication than others.

  3. Charles OConnor says:

Thanks. Do you know what program they used to pull information from judicial opinions?

  4. Ryan Copus says:

Hi Charles. We wrote Python scripts to convert the text to a data frame and used R to clean and analyze it. If you’re interested, you’re welcome to email me at r w c o p u s @berkeley.edu (without the spaces), and I’d be happy to offer more detail. There’s still tons of stuff we’d like to do, but we think this is a good start.
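    [Editor’s note: a minimal sketch of that kind of extraction step in Python. The sample text, regexes, and field names below are illustrative assumptions, not the authors’ actual code.]

    ```python
    import re

    # Toy opinion text standing in for a scraped appellate decision.
    sample = "Before: KLEINFELD, SMITH, and JONES, Circuit Judges. ... REVERSED."

    def parse_opinion(text):
        """Extract the panel's judges and the disposition from one opinion's text."""
        panel = re.search(r"Before:\s*(.+?),\s*Circuit Judges", text)
        judges = re.split(r",\s*(?:and\s+)?", panel.group(1)) if panel else []
        return {
            "judges": judges,
            # Flag whether the opinion reverses the court below.
            "reversed": bool(re.search(r"\bREVERSED\b", text)),
        }

    # A list of such records is what you'd hand to pandas (or R) as a data frame.
    record = parse_opinion(sample)
    print(record["judges"], record["reversed"])
    ```

    Running this over every downloaded opinion yields one row per case, which is the table the judge-level analysis would then be run on.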

  5. abstain says:

    Dave, CELS does not have “debates”. It’s an academic conference of (mostly) serious empiricists. Empiricists often disagree on important matters, but this doesn’t turn their exchanges into something as glib, shallow, and zing-oriented as a “debate”. Our “debates” are conducted in Appendices to our papers. What you see in presentations is a shell.

  6. Dave Hoffman says:

    Abstain,

If you say so. FWIW, I didn’t see a zing, but I did see some POWs!, if we are going to be making Batman references.