CELS VII: Low Variance, High Significance

[CELS VII, held November 9-10, 2012 at Stanford, was a smashing success due in no small part to the work of chief organizer Dan Ho, as well as Dawn Chutkow (of SELS and Cornell) and Stanford’s organizing committee.  For previous installments in the CELS recap series, see CELS III, IV, V, and VI. For those few readers of this post who are data-skeptics and don’t want to read a play-by-play, resistance is obviously futile and you might as well give up. I hear that TV execs were at CELS scouting for a statistics-geek reality show, so think of this as a taste of what’s coming.]

Survey Research isn't just for the 1%!

Unlike last year, I got to the conference early and even went to a methods panel. Skipping the intimidating “Spatial Statistics and the GIS” and the ominous “Bureau of Justice Statistics” panels, I sat in on “Internet Surveys” with Douglas Rivers, of Stanford/Hoover and YouGov. To give you a sense of the stakes, half of the people in the room regularly use mTurk to run cheap e-surveys. The other half regularly write nasty comments in JELS reviewer forms about using mTurk.  (Oddly, I’m in both categories, which would’ve created a funny weighting problem if I were asked my views.) The panel was devoted to the proposition “Internet surveys are much, much more accurate than you thought, and if you don’t believe me, check out some algebraic proof.  And the election.”  Two contrasting data points. First, as Rivers pointed out, all survey subjects are volunteers, so it’s a bit tough to distinguish internet convenience samples from whatever oddballs get scooped up by Gallup’s 9% response rate.  Second, and less comfortingly, 10-15% of the adult population has a reading disability that makes self-administration of an online survey prompt more than a bit dicey.  I say: as long as the disability isn’t biasing with respect to contract psychology or cultural cognition, let’s survey on the cheap!
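Since I joked about it, here’s what the weighting problem actually looks like in practice. The standard fix for a lopsided convenience sample is post-stratification: re-weight each respondent so the sample’s cell shares match known population targets. A minimal sketch in Python, with the cells, shares, and answers all invented for illustration:

```python
import pandas as pd

# Hypothetical respondents from an internet convenience sample.
# The education split (and everything else here) is made up.
sample = pd.DataFrame({
    "education": ["college", "college", "college", "no_college"],
    "answer":    [1, 0, 1, 1],
})

# Population targets, e.g. from the Census (also invented here).
population_shares = {"college": 0.30, "no_college": 0.70}

# Post-stratification: weight each respondent by the ratio of the
# population share of their cell to the sample share of their cell.
sample_shares = sample["education"].value_counts(normalize=True)
sample["weight"] = sample["education"].map(
    lambda cell: population_shares[cell] / sample_shares[cell]
)

# Unweighted vs. weighted estimate of the mean response.
print("unweighted:", sample["answer"].mean())
print("weighted:  ", (sample["answer"] * sample["weight"]).mean())
```

The industrial-strength version (raking, which iterates this adjustment over several margins at once) is, as I understand it, roughly what shops like YouGov layer on top of sample matching, but the basic move is the same.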

Lunch next. Good note for presenters: avoid small pieces of spinach/swiss chard if you are about to present. No one will tell you that you’ve got spinach on a front tooth.  Not even people who are otherwise willing to inform you that your slides are too brightly colored. Speaking of which, the next panel I attended was Civil Justice I. Christy and I presented Clusters are Amazing. We tag-teamed, with me taking 9 minutes to present 5 slides and her taking 9 minutes to present the remaining 16 or so.  That was just as well: no one really wanted to know how our work might apply more broadly anyway. We got through it just fine, although I still can’t figure out an intuitive way to describe spectral clustering. What about “magic black box” isn’t working for you?
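Fine, one more attempt at de-black-boxing, for the brave. Here is roughly what spectral clustering does, in a generic Ng-Jordan-Weiss-flavored sketch (the textbook recipe, not our paper’s actual pipeline):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """What the magic black box does, step by step."""
    # 1. Affinity matrix: how similar is every pair of cases?
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(A, 0)

    # 2. Normalized graph Laplacian: L = I - D^(-1/2) A D^(-1/2).
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    L = np.eye(len(X)) - d_inv_sqrt @ A @ d_inv_sqrt

    # 3. Re-describe each case using the k "smoothest" eigenvectors
    #    of L (those with the smallest eigenvalues), row-normalized.
    _, vecs = eigh(L)
    embedding = vecs[:, :k]
    embedding /= np.linalg.norm(embedding, axis=1, keepdims=True)

    # 4. Plain old k-means in the embedded space.
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)

# Toy demo: two well-separated blobs.
X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
print(spectral_clustering(X, k=2))
```

The intuition, such as it is: the low eigenvectors of the graph Laplacian give each case new coordinates in which cluster structure becomes (nearly) linearly separable, at which point ordinary k-means finishes the job. Whether that beats “magic black box” in a nine-minute talk is an open question.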

Zev Eigen then presented Justice or Just Between Us, as the room jealously marveled at his enormously broad set of survey data of employees of a firm (“Gilda”, he called it, to preserve anonymity) before and after the imposition of a dispute resolution system.  And then Jonah Gelbach presented a paper about the relationship between summary judgment disposition and Twiqbal.  The highlight there was William Hubbard’s elegant comments, which pointed out that given expected Twiqbal effect sizes, it’s not obvious how to interpret non-effects. My own view would go further: is Twiqbal’s effect as important a problem as the distribution of CELS papers would imply? I’m reminded of this figure.

As a consequence of sitting in Civil Justice, I missed Eugene Kontorovich’s The Penalties for Piracy (conclusion: variable), Seaman/Schwartz’s The Presumption of Validity in Patent Litigation: An Experimental Study (conclusion: presumptions matter less than you’d think), as well as a number of other papers.  I decided to go to Courts and Judging (II) next, to see Maya Sen present a paper on the ABA’s judicial ratings. Sen finds that women and minority federal judicial candidates are, holding all else equal, more likely to receive negative evaluations from the ABA.  I learned a ton about the promise of using matching with these kinds of data. But I’m still not convinced by the claim that she has a useful measure of judicial performance on the back-end of the paper – she uses reversal rates!  The horror!
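For readers who haven’t seen matching in action, the core move fits in a few lines: estimate each candidate’s probability of “treatment” from observed covariates, then compare each treated unit to the control unit with the nearest score. A generic sketch of 1-to-1 nearest-neighbor propensity score matching; the function, data, and column names are all hypothetical, and this is the textbook technique, not Sen’s actual specification:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_on_propensity(df, treat_col, covariates, outcome_col):
    """ATT via 1-to-1 nearest-neighbor propensity score matching
    (with replacement). A sketch, not a production estimator."""
    df = df.reset_index(drop=True)

    # 1. Model Pr(treated) as a function of observed covariates.
    ps = (LogisticRegression(max_iter=1000)
          .fit(df[covariates], df[treat_col])
          .predict_proba(df[covariates])[:, 1])

    treated = np.flatnonzero(df[treat_col] == 1)
    control = np.flatnonzero(df[treat_col] == 0)

    # 2. Pair each treated unit with the control whose score is
    #    closest, then average the outcome differences across pairs.
    diffs = [
        df.loc[i, outcome_col]
        - df.loc[control[np.argmin(np.abs(ps[control] - ps[i]))], outcome_col]
        for i in treated
    ]
    return float(np.mean(diffs))
```

You’d call it with something like match_on_propensity(nominees, "minority", ["experience_yrs", "ideology_score"], "low_aba_rating") (all invented names) and compare the matched difference to the raw gap in ratings. Matching does nothing, of course, about my reversal-rate gripe on the back end.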

Then it was time for the increasingly famous poster session. A few preparatory notes. First, I know that many of the folks there read my past posts and will be reading this one, since, well, you told me at the poster session. This is of course a little bit awkward, since blogging is still, at this late date, seen as vaguely embarrassing if brought up in front of certain law professors. Like: “Oh, you blog? And do you also pick your nose in public?” So maybe next year folks in the know can make a little typing gesture when I come up so I can know that they are in on the game?  Or wear a CoOp pin in your lapel?  I’ll put it this way: if you come up with some subtle signal, you are vastly more likely to be mentioned positively in 2013’s recap.

Since you asked, the food was fine. Was it as good as Yale’s spread? No, it was not. But there was a ton of it, which was great because the session went on for 5 hours.  Or perhaps it was 90 minutes. Time moves slooowly when you are in a room full of socially anxious people.  The posters were generally great. Stanford printed them for everyone and appeared to slightly reduce the overall n, no doubt to reduce variance.  You had the sense that the selection committee was perhaps not entirely in love with government regulation.  See, e.g., Jim Hawkins, The CARD Act on Campus (the CARD Act has had exactly zero effect on behavior); Conrad Ciccotello et al., Boards in Practice: Director Location, Qualifications and Credible Contracting (SOX caused small boards to select directors from farther away, reducing monitoring).  Nor does it love judges.  See, e.g., Wistrich et al., Predictably Incoherent Justice (judges are subject to tons of biases); Robert Hume, Why Recusals Don’t Matter (justices manipulate voting and recusal practices). It was a relief to find one hopeful project, by David Simson.  Titled Restorative Justice and its Effects on (Racially Disparate) Punitive School Discipline, the paper found that imposing an RJ code reduced suspensions and increased graduation rates.

The next morning, it was off to the one session at the conference on contracts. There were five panels on Courts and Judging, five on Criminal Justice, four on Corporate Governance, and tons on Law and Finance. I’m not a fancy stats guy, but something’s not right about that distribution.  At Contracts I (and Only), Florencia Marotta-Wurgler and Robert Taylor’s paper, Set in Stone? Change and Innovation in Consumer Standard Form Contracts, got a good reception.  I’ve seen the paper before and continue to be impressed by the authors’ careful attention to detail, as well as the paper’s potential to set up an entire research agenda.  They find that consumer form contract terms actually do change over time in response both to market incentives and changes in law.  This contrasts with the extant literature on corporate boilerplate.  Tess Wilkinson-Ryan then presented our paper, The Psychology of Contract Precautions. Tess has no business being as good a presenter as she is given that she’s still junior, and the audience appeared to eat up the results, especially the third experiment. We’re quite encouraged, and I expect we’ll return to the topic – particularly, the question of when individuals subjectively believe they are bound to agreements.

At this point, I’ve a confession to make. Because I selfishly wanted to be there when Tess presented our work, I missed the Pornography [Causes] Divorce paper I blogged about last week, as well as another paper on that same Family Law panel, Can You Buy Sperm Donor Identification: An Experiment.  What’s particularly sad about this is that no fewer than three people asked me what I was going to write about the pornography paper.  Some wags questioned whether Playboy sales really count as pornography. Tell you what:  let a thousand social media flowers bloom. I’m looking at you, Michael Heise. [Update: Hadar tweeted what she saw.]

Next, I decided to see papers at several panels, including Galoob/Li’s Are Legal Ethics Ethical: A Survey Experiment, and Ames/Fiske’s Intentional Harms are Worse: Even When They’re Not. The latter paper in particular caught my interest based on the fifth experiment, which found that people who were upset at a hypothetical actor’s bad conduct inflated their estimate of the harm he caused, as measured by a series of dollar value damages. This is an incredibly cool result (in case you were wondering, they tried to control for punitive damages and also pushed incentive compatibility). I’m not sure I quite believe it yet, and I’d like to know more about the mechanism. But it suggests – like this paper – that juries can’t be fully controlled by nonhedonic and mechanical damage instructions.  The last paper I saw was Francis Shen’s Mind, Body, and the Criminal Law, which suggests that lay individuals’ intuitions about what constitutes a “physical” injury are heterogeneous but can be strongly shaped by instructions, as well as by brain scan evidence.

In summary, what were the take-aways?

First, I was a little bit bummed that there were so few law and psychology papers, while there seemed to be a very high number of political science and finance folks milling around. Second, the number of people there to watch – especially young graduate students – seemed way up compared to last year, while the number of law professors seemed a bit lower. Trend or anecdata, only time will tell.  Third, the conference was exceptionally tightly run – every panel I saw began on time, ended on time, and you could easily see three papers at three concurrent panels since everyone was synchronized. That’s the way to run a railroad, folks!  Fourth, I was struck – as were others I talked to – by the sense that it’s tough to figure out norms of collegial engagement when people from very different disciplines are in a room together. Economists are very…rough; political scientists are polite; psychologists start and end their comments with questions; law professors make jokes and tend to self-deprecate. Maybe the organizers next year could send around a memo describing the conference-wide norm? We’re all just looking to conform, after all.

What did you see?


3 Responses

  1. anon says:

    Dave, I cannot tell whether you are complaining that conference organizers were biased towards poli sci and finance, or whether you are merely decrying the fact that law and psych has not yet reached the same quality and quantity of work as poli sci and finance.

  2. Dave Hoffman says:

    I think neither. Just wished there were more sessions.

  3. Erik Girvan says:

    Relative quality is always debatable, but there is plenty of law and psychology work; it just tends to be heavily dominated by psychology faculty and thus presented at different conferences (e.g., American Psychology-Law Society: http://www.ap-ls.org/conferences/apls2013/index2013.php). There always seem to be a few law faculty at AP-LS, but I definitely would not mind seeing more.