The Tulane Law Review & Empirical Legal Studies

Eugene Volokh has a very good post up today summarizing the Tulane Law Review scandal. Basically, the Review, Tulane Law, and the authors have now apologized for data collection errors in this article. But, as Eugene points out, one of the authors, Professor Vernon Palmer (Tulane Law), has claimed that the corrected data would yield the same result:

But in a Tuesday interview, speaking for himself and not Tulane, Palmer blamed himself for the errors, including those he found himself and those pointed out by the Supreme Court.

Yet with all the mistakes now corrected, he said, the study’s conclusions, broadly speaking, are the same.

Palmer said the corrected study data will be verified by an independent researcher and the revised study will probably be republished in a law review. He wasn’t sure which one it would be.

Last week, I contacted Professor Palmer, along with Dean Ponoroff and Palmer's co-author John Levendis (Loyola – econ), and told them that I was organizing a mob-blog on the topic. I wanted to explore how data collection went wrong here – a risk that is salient for all of us who hand-collect data – and what we could do to prevent such errors in the future. I also hoped that participants in the mob-blog, who were to be drawn from the ELS community, would discuss the degree to which the problem resulted from our student-run law review system. I thought that our blog had a responsibility to talk about the issue, in part because we'd helped to publicize the original article, but also because so many of my posts highlight the important & unique contributions of the ELS movement. Because data collection really can be a black box (even more than statistical method), this kind of incident could cast a shadow on empirical scholarship more generally.

Unfortunately, neither the authors nor Dean Ponoroff has gotten back to me. Perhaps, as Eugene’s post suggests, a “newly arising lawsuit threat” has led them to decide not to engage in further debate at this time. Or maybe it was something I wrote. In any event, without the original authors, it seems both unfair and unwise to dissect the article – we can’t know exactly what went wrong, and therefore we won’t know how to prevent it in the future. Further discussion would be merely speculative. And I’ve got other stuff on my plate.

I hope to return to the topic in the future, should the authors or Tulane get back to me.

2 Responses

  1. As someone who is currently working on his first empirical study that involves hand-collecting data, I am extremely interested in hearing from more experienced scholars about the pitfalls of hand collection. Thank you for starting this conversation.

  2. dave says:

    Matt,

    It’s a good idea. Maybe, as soon as the financial crisis quiets (assuming it does!) and I get a paper moving a bit further toward completion, I’ll do a series of posts on this topic. Basically, I’ve made every kind of error possible doing hand collection. (Including the error I just made, which is assuming that I’ve run out of errors to make.) For now, perhaps this can serve as an open thread for war stories. I’ll start. In my current project, I’ve employed upward of 25 RAs (at one time or another – obviously, not all at the same time!). I feel like I spend 95% of my time managing & trying to design error-reduction systems, instead of writing & thinking. It kind of stinks!