Is It Harder to Get a Top Law Review Placement Today?

The other day, over at Conglomerate, Gordon Smith asked for data regarding law review rejection rates for article submissions.

I’m interested in collecting some data too — but instead of rejection rates, I’m interested in historical submission data. I’ve long had a hypothesis, though I’ve lacked data to back it up. The hypothesis is this: Today, it is much harder to get a top law review placement than it was 20 years ago. Let’s define a top law review placement as one in a top 20 law review (it doesn’t matter whether it’s the top 10, 20, 30, etc., so long as we agree on a particular number of journals). The number of article slots in the top 20 law reviews hasn’t changed much — each publishes roughly 12 to 18 articles a year. There are thus about 240 to 360 total slots each year — on average probably around 300. I assume that the number of slots has remained relatively constant over time.
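The slot arithmetic above can be sketched in a few lines of Python. The slot counts come from the post; the submission totals in the loop are purely hypothetical placeholders, since that is exactly the data being sought:

```python
# Back-of-the-envelope: top-journal slots vs. submission volume.
journals = 20
low, high = 12, 18            # articles each journal publishes per year

slots_low = journals * low    # 240
slots_high = journals * high  # 360
slots_avg = (slots_low + slots_high) / 2  # 300

# Hypothetical totals of unique articles submitted in a year.
for submissions in (1_000, 3_000, 5_000):
    rate = slots_avg / submissions
    print(f"{submissions:>5} submissions -> {rate:.1%} can land a top-20 slot")
```

Even at the most conservative hypothetical volume, only a minority of pieces can place in the top 20; any historical growth in submissions shrinks that fraction proportionally.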

The number of law review article submissions, however, has almost certainly risen. Over the past 20 years, standards have changed in the legal academy to encourage more publishing. Whereas before, professors were expected to publish frequently only at some schools, now nearly all law schools expect professors to be prolific scholars. The result: More law review article submissions! And the math is simple — more submissions are now chasing the same number of top law review slots.

I’m curious about the data. I wonder whether any law reviews have kept historical submission data. It would be interesting, for example, to know how many pieces were submitted to a given law review back in 1986 as opposed to 2006. Or back in 1976. Do any law review editors have access to this data? I’d be very interested to see the scope of the increase over the years.


7 Responses

  1. Jeff Lipshaw says:

    My rejection e-mails over the last couple of years from the loosely defined top schools, when they mentioned numbers at all, cited on the order of 1,000 submissions for the 10-12 articles they would select. I was visiting the Widener campus at Harrisburg a couple of years ago, and the editor there told me the Review received about 200 submissions for roughly the same number of selections.

    If that is the case, then even accounting for multiple submissions of the same piece, it seems the vast majority of articles never get published at all, much less in the loosely defined top reviews. That would also be an interesting question: how many articles are submitted in total to ALL reviews, and how many are actually accepted?

  2. The missing data here have to do with the number of law reviews, and with the correlation between law review rank and quality of pieces published. Consider the following stories:

    All the Print That Fits: The expansion in submissions has been accompanied by an equal (or greater) expansion in terrible law review submissions. Many more pieces are written, but most of them never stand a chance at the top journals and never would have. Competition for the top N is still among roughly the same-size pool of highly-polished and/or big-name articles.

    Survival of the Fattest: The new entrants have forced the stakes up for everyone. Tenure committees still look for top N publications, so everyone still tries to place there, and will write as long and as hard as they can in order to make it. Since the game is zero-sum at that level, the result is an arms race, with more citations, more gigantic empirical studies, more grandiose claims, etc. The amount of work that goes into a creditable top N submission has skyrocketed.

    Nobody Knows Anything: The selection process is now, as it has always been, statistically indistinguishable from the case in which all editors make their selections with a dartboard. Raw numerical competition drives down the odds for everyone. Unlike in the previous story, there is not much authors can do about it. (Indeed, the dominant strategy becomes writing many articles, rather than highly polished ones — more submissions means more tickets in the submission lottery.)

    All three stories may have an element of truth to them. I’m not sure that raw submission counts would tell us anything meaningful in distinguishing among them.

  3. Lawrence Cunningham says:

    The Solove hypothesis seems correct. In addition, there are about 20% more US law schools today than a generation or two ago.

  5. TwoL says:

    What is the impact of specialized journals? Do top crim law professors want to be in the top “general” journals, or in the top journals focusing on their area?

  6. Miriam Cherry says:

    Bepress would know …

  7. Miriam — The problem with Bepress’s statistics is that they are distorted and inaccurate. Several top law reviews prefer that professors upload papers directly through their own websites or send a hard copy, so many professors don’t use Bepress for those journals. I, for example, don’t use Bepress for Harvard, Yale, Columbia, Penn, etc. — instead, I submit through their websites. So the stats are quite skewed.

    The best statistics would come from the journals themselves. I know that many journals keep track: each submission is logged into their computer systems. So the stats are out there, if only some kind law review editors would share them with us!