Nominally Empirical Evidence of Unraveling in the Law Review Market
In a previous post, I observed that “the time for submitting law review articles is creeping backwards.” I then hypothesized that “we are experiencing what Alvin Roth called the ‘unraveling’ of a sorting market.” This is bad news:
Authors may not be able to get any sense at all of the “market value” of their article (loosely reflected, the myth goes, by multiple offers at a variety of journals). Conversely, journals feeling pressure to move quickly will increasingly resort to proxies for quality like letterhead, prior publication, and the eminences listed in the article’s first footnote (which tell you who an author’s friends and professional contacts are).
At the end of that post, I promised to “explore empirical evidence that this is in fact an unraveling market problem (as opposed to anecdote, to the extent possible).” As it turns out, this was a hard promise to deliver on. There simply isn’t data out there, at least none that I’ve been able to find, that collects historical information about the submission process at law reviews. This is somewhat surprising. Law professors are insular, interested in navel gazing, and well-motivated to do anything other than grading. Moreover, the process of submission is an economically consequential activity. But only recently, in two works-in-progress, has there been any attempt to get at this problem systematically. See here, and here.
I thought I’d make a modest contribution to the field by offering some data from Temple from this recent submission season, and by asking our readers to contribute their experiences as well. The sample size is tiny; the respondents self-selecting. This is, therefore, Co-Op’s second “very non-scientific survey” this week. It’s a trend! The data is not meant to suggest any definite conclusions, but rather to help researchers with hypothesis formation. But I’ll offer some grand thoughts at the end of this post anyway.
I emailed ten law professors at Temple who submitted an article to a mainstream law review during the spring season, and have received five responses to date. To avoid certain problems, I’ve stripped identifying information – like where the articles placed – from the responses and collated them.
- All authors used ExpressO.
- All five authors sent out their articles before March 3. The earliest mailing was on February 22. This contrasts with anecdotal accounts from past years. One colleague related submitting in 2001 at “the end of April,” and receiving an acceptance on “July 10.” In 2005, a colleague related submitting an article on April 5, and receiving an offer on April 18.
- Average paper length was 24,599 words (min: 17,668; max: 29,846).
- Three of five authors used a staged process of submission (i.e., sending out articles in waves). The average number of journals submitted to was 71 (min: 65; max: 199; median: 90).
- The average number of acceptances per article was 8.8, or roughly one acceptance per eight submissions (min: 2; max: 14; median: 8).
- The average time to the first offer was 13.6 days (min: 2; max: 33; median: 6).
- The journal ultimately making the accepted offer provided an average of 8.2 days to respond (min: 48 hrs; max: 19 days; median: 9 days).
- The total days, from start to finish of the process, averaged 33 (min: 6; max: 52; median: 32).
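For readers who want to replicate or extend this kind of tally, the summary figures above are just means, medians, and extremes over the individual responses. A minimal sketch follows; the five values in it are hypothetical (I’ve stripped identifying data), chosen only to be consistent with the reported total-days statistics, not the actual survey responses.

```python
# Sketch of how the summary statistics above could be computed.
# NOTE: these five values are HYPOTHETICAL, picked only to match the
# reported mean (33), median (32), min (6), and max (52); they are
# not the real responses.
import statistics

total_days = [6, 30, 32, 45, 52]  # hypothetical per-author totals

print(f"mean:   {statistics.mean(total_days)}")    # 33
print(f"median: {statistics.median(total_days)}")  # 32
print(f"min:    {min(total_days)}")                # 6
print(f"max:    {max(total_days)}")                # 52
```

With a larger, longitudinal dataset, the same computation run year-over-year would make the “creeping backwards” claim testable directly.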
Grand Thoughts: The median time to first offer (6 days), the median total days in process (32), and the starting date (by early March) support the hypothesis that article selection is abbreviated. If we had more data, including historical reference points, we’d be able to say more. And if we had enough data, we’d be able, as I’d like, to call for a moratorium on sending out new articles before March 1 of each year. Among other pro-social effects, this would allow authors more time during the school year to workshop their pieces before outside audiences.