In a previous post, I observed that “the time for submitting law review articles is creeping backwards.” I then hypothesized that “we are experiencing what Alvin Roth called the ‘unraveling’ of a sorting market.” This is bad news:
Authors may not be able to get any sense at all of the “market value” of their article (loosely reflected, the myth goes, by multiple offers at a variety of journals). Conversely, journals feeling pressure to move quickly will increasingly resort to proxies for quality like letterhead, prior publication, and the eminences listed in the article’s first footnote (which tell you who an author’s friends and professional contacts are).
At the end of that post, I promised to “explore empirical evidence that this is in fact an unraveling market problem (as opposed to anecdote, to the extent possible).” As it turns out, this was a hard promise to deliver on. There simply isn’t data out there, at least none that I’ve been able to find, that collects historical information about the submission process to law reviews. This is somewhat surprising. Law professors are insular, interested in navel gazing, and well-motivated to do anything other than grading. Moreover, the submission process is an economically consequential activity. But only recently, in two works-in-progress, has there been any attempt to get at this problem systematically. See here and here.
I thought I’d make a modest contribution to the field by sharing some data from Temple in this recent submission season, and by asking our readers to contribute their experiences as well. The sample size is tiny; the respondents self-selecting. This is, therefore, Co-Op’s second “very non-scientific survey” this week. It’s a trend! The data is not meant to suggest any definite conclusions, but rather to help researchers with hypothesis formation. But I’ll offer some grand thoughts at the end of this post anyway.