More on Ranking Law Reviews
The Sullivan Scale
I’m a big fan of data mining (not the NSA variety, but the kind you do when you’re cleaning up your office), and there, nestled next to an article on rankings that I had lied to myself about responding to some day, was a pile of rejection letters from my Spring submission. As I was throwing them away, I noticed that several tried to ease the pain of rejection by informing me that I was just one of many who were also not quite good enough (many of these letters also solicited me to try again, as they had done the last 19 times).
Eureka, I thought, the perfect ranking system: ranking law reviews by number of submissions. One clear advantage of this method is that it does not necessarily reproduce the current hierarchies that dominate the other rankings.
Rank / Law Journal / # of Submissions
1 / Stanford / 3000
2 / Ohio State / 2000
2 / Iowa / 2000
2 / Virginia / 2000
2 / Cornell / 2000
6 / Texas / 1500
7 / California / 1200-1500
8 / Tulane / 1000
8 / Rutgers-Camden / 1000
10 / Davis / 800
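For the numerically inclined, here is a minimal sketch (purely illustrative, not part of the official Scale) of how the ranks above fall out of the reported counts, assuming standard competition ranking for ties and collapsing California’s reported range to a midpoint just so it sorts.

```python
# Sullivan Scale, illustrative sketch: rank law reviews by reported
# submission counts, with standard competition ranking for ties
# (tied journals share a rank; the next rank skips ahead).
# Figures are those reported in the post; California's 1200-1500
# is collapsed to a midpoint purely for sorting purposes.

submissions = {
    "Stanford": 3000,
    "Ohio State": 2000,
    "Iowa": 2000,
    "Virginia": 2000,
    "Cornell": 2000,
    "Texas": 1500,
    "California": 1350,  # midpoint of the reported 1200-1500
    "Tulane": 1000,
    "Rutgers-Camden": 1000,
    "Davis": 800,
}

def sullivan_scale(counts):
    """Return (rank, journal, count) tuples, highest count first."""
    ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    ranked = []
    for position, (journal, count) in enumerate(ordered, start=1):
        if ranked and count == ranked[-1][2]:
            rank = ranked[-1][0]   # tie: share the previous rank
        else:
            rank = position        # otherwise rank = position in the sort
        ranked.append((rank, journal, count))
    return ranked

for rank, journal, count in sullivan_scale(submissions):
    print(f"{rank:>2}  {journal:<15} {count}")
```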
While Stanford sits at the top, thus providing some confirmation of the other rankings, there are some obvious omissions from the top tier. Harvard, Yale, Columbia, for example – not there! Admittedly, there are a number of other absences; indeed, almost all law reviews are missing. This does pose some problems for the Sullivan Scale, but not insurmountable ones. And notice that the Scale, limited though it may be, has provided some useful information: Ohio State ties Virginia and outranks Texas and California, not something most other rankings have uncovered.
There are two problems with my data. The first is that I didn’t submit to all the law reviews, not even to all 200 or so primary journals. It would be fair for Great-Review-at-Fourth-Tier-School to complain about my data collection. Obviously, I can resolve this with my next submission, and at only a small cost: ExpressO will allow me to saturate the known universe for less than $1000. The second problem is, to my mind, a nonstarter: most reviews that rejected me didn’t tell me how many other articles they rejected. True, but, I ask you, whose fault is that? Besides, as the Scale gains traction, this problem will solve itself. Everyone will feel obligated to provide me with the data I need, if only I am prepared to risk rejection.
I see two objections to the whole enterprise, one theoretical, the other cynical. The theoretical objection would ask why submissions are a good marker of quality. The answer seems plain: we are the academy, we know best.
The cynical objection would question the reliability of the data, or at least its manipulability. For example, what’s with California’s 1200 to 1500? Can’t they count out there? And it’s one thing for Stanford to claim 3000 submissions (a suspiciously round number in any event) when Yale is not on the board, but it might be tempted to report Yale + x the next time around. Similarly, a journal trying to improve its ranking might resort to tactics designed to increase submissions.
In an earlier era, the reliability objection would be serious, but in the age of ExpressO there is some independent check on reliability (indeed, maybe I should just use ExpressO submission counts, if they’d let me have them; that would avoid the heartbreak of collecting data through rejections).
As for encouraging submissions to improve a journal’s ranking, the most effective way to increase submissions is to offer a greater prospect of actually being published, whether by adding slots or by reducing placements by the usual suspects. And I ask you, what’s wrong with that?