More on Ranking Law Reviews

The Sullivan Scale

I’m a big fan of data mining (not the NSA variety, but the kind you do when you’re cleaning up your office), and there, nestled next to an article on rankings that I had lied to myself about responding to some day, was a pile of rejection letters from my Spring submission. As I was throwing them away, I noticed that several tried to ease the pain of rejection by informing me that I was just one of many who were also not quite good enough (many of these letters also solicited me to try again, as they had done the last 19 times).

Eureka, I thought, the perfect ranking system: ranking law reviews by number of submissions. One clear advantage of this method is that it does not necessarily reproduce the current hierarchies that dominate the other rankings.


Rank / Law Journal / # of Submissions

1 Stanford / 3000
2 Ohio State / 2000
2 Iowa / 2000
2 Virginia / 2000
2 Cornell / 2000
6 Texas / 1500
7 California / 1200-1500
8 Tulane / 1000
8 Rutgers-Camden / 1000
10 Davis / 800

While Stanford has the highest ranking, thus providing some confirmation for other rankings, there are some obvious omissions from the top tier. Harvard, Yale, and Columbia, for example, are not there! Admittedly, there are a number of other absences; indeed, almost all law reviews are missing. This does pose some problems for the Sullivan Scale, but not insurmountable ones. And notice that the Scale, limited though it may be, has provided some useful information: Ohio State is ranked higher than Virginia, Texas, and California, something most other rankings have not uncovered.
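For readers who want to play with the numbers, here is a minimal sketch (in Python, purely illustrative and not part of the original post) of how the tied, competition-style ranks in the table above can be computed from the reported counts; representing California's 1200-1500 range by its midpoint is my own assumption.

```python
# Illustrative sketch only: computing "competition style" ranks
# (1, 2, 2, 2, 2, 6, ...) from the reported submission counts.
# California's "1200-1500" range is represented by its midpoint, an assumption.

reported = [
    ("Stanford", 3000),
    ("Ohio State", 2000),
    ("Iowa", 2000),
    ("Virginia", 2000),
    ("Cornell", 2000),
    ("Texas", 1500),
    ("California", 1350),   # midpoint of the reported 1200-1500
    ("Tulane", 1000),
    ("Rutgers-Camden", 1000),
    ("Davis", 800),
]

def sullivan_scale(counts):
    """Rank journals by submissions; ties share a rank, and the next distinct count skips ahead."""
    ordered = sorted(counts, key=lambda pair: pair[1], reverse=True)
    ranks = []
    for i, (journal, n) in enumerate(ordered):
        if i > 0 and n == ordered[i - 1][1]:
            rank = ranks[-1][0]   # tie: reuse the previous rank
        else:
            rank = i + 1          # otherwise rank = 1-indexed position
        ranks.append((rank, journal, n))
    return ranks

for rank, journal, n in sullivan_scale(reported):
    print(f"{rank:>2}  {journal:<15} {n}")
```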

There are two problems with my data. The first is that I didn't submit to all the law reviews, not even all of the 200 or so primary journals. It would be fair for Great-Review-at-Fourth-Tier-School to complain about my data collection. Obviously, I can resolve this with my next submission, and at only a very small cost: ExpressO will allow me to saturate the known universe for less than $1000. The second problem is, in my mind, a nonstarter: most reviews that rejected me didn't tell me how many other articles they rejected. True, but, I ask you, whose fault is that? Besides, as the Scale gains traction, this problem will solve itself. Everyone will feel obligated to provide me with the data I need, if only I am prepared to risk rejection.

I see two objections to the whole enterprise, one theoretical, the other cynical. The theoretical objection would ask why submissions are a good marker of quality. The answer seems plain: we are the academy, we know best.

The cynical objection would question the reliability of the data, or at least its manipulability. For example, what's with California's 1200 to 1500? Can't they count out there? And it's one thing for Stanford to claim 3000 submissions (a suspiciously round number in any event) when Yale is not on the board, but it might be tempted to report Yale + x the next time around. Similarly, a journal trying to improve its ranking might resort to tactics to increase submissions.

In an earlier era, the reliability objection would be serious, but in an age of ExpressO, there is some independent check on reliability (indeed, maybe I should just use ExpressO submissions, if they’d let me have it — that would avoid the heartbreak of collecting data through rejections).

As for encouraging submission to improve a journal’s rankings, the most effective way to increase submissions is to offer a greater prospect of actually being published by adding slots or reducing placements by the usual suspects. I ask you, what’s wrong with that?


5 Responses

  1. Frank says:

    I worry that the number-of-submissions metric might end up enhancing the "Matthew Effect": the rich get richer, the poor poorer. People might stop bothering to submit to the journals with few extant submissions.

    Following on Jonathan Mermin’s ideas (in Remaking Law Review, 56 Rutgers L. Rev. 603 (2004)), I think that prospect could be ameliorated if law journals started specializing a bit in the types of articles they took, if only by (a) running more symposia and (b) calibrating a portion of acceptances to the expertise of a faculty member who would run a seminar for articles-committee members to discuss and critique the law review articles they are reviewing.

    This would be a version of the type of “competition among law reviews” suggested by the following review piece:

    http://www.aallnet.org/sis/allsis/newsletter/25_2/LRevReview.htm

  2. Frank says:

    PS: here’s a nice article on the Matthew Effect in general:

    http://www.garfield.library.upenn.edu/merton/matthew1.pdf

  3. John Armstrong says:

    Actually, I’d suppose that a sort of “reverse Matthew effect” has been in play. The reason that Harvard and Yale are missing is that many authors don’t even bother submitting to such prestigious law reviews.

    For example: I’m converting my dissertation (in math) into papers now. Do I submit them to the Bulletin of the American Mathematical Society (general scope, very prestigious) or to the Journal of Knot Theory and its Ramifications (generally read only by the one small subfield)? Obviously the latter, since I’d just be wasting my time to run my new work up against established powerhouses like John Conway, no matter how wondrous I think my ideas are.

    One problem in the analogy is that I can only submit to one journal at a time, while (if I understand it) law articles can go to many journals simultaneously. Still, I think some part of the effect carries over.

  4. Daniel Solove says:

    John,

    I don’t think your analogy works in law. There’s no reason not to submit to the top law reviews, so most people do even if they expect to get rejected. Law schools pick up the cost of submitting articles, so there is little cost to a professor in trying to get the best possible placement.

    Given the incentives, most top law reviews would receive the same or similar numbers of submissions. If you’re submitting to Yale, odds are 99.9% that you’re also submitting to Harvard, Columbia, Stanford, etc.

    My sense is that Charlie is largely right in his earlier post that law review rankings track US News rankings, but there are some notable exceptions where the two diverge.

    Ultimately, law review ranking is really a matter of collective perception in the academy. Maybe the best way to assess law review prestige is to poll a representative sample of law professors.

  5. John Armstrong says:

    Thanks for the insight, Prof. Solove. I suppose in that case I'm stymied as to why the supposedly top law reviews aren't in the top ten for submissions. Is there a behavioral economist in the house?