Law Review Citations and Law School Rankings

There’s no shortage of writing on law reviews or law school rankings, to say the least. So why not combine the two?

Questions about law review rankings abound. How does one compare offers from journals at relatively equal schools? Is it better to publish with a journal that is more frequently cited or with one at a higher-ranked law school? Is it better to publish with the main law journal at a top-40ish law school or a secondary journal at a top-10 law school? Questions about law school rankings abound as well, particularly for schools outside of the top 30 or so. (Or so it seems to me.)

I’m partial to citation studies as a way of judging quality. I know that citations have lots of problems as a way of ranking journals (or individual authors). However, I like the objectivity citation studies provide. And so I’m partial to the Washington and Lee Law Library’s website, which provides comprehensive data on citations to hundreds of law journals by other journals and by courts. I’ve found it useful in trying to draw some comparisons between journals. Other people often draw comparisons between journals by looking to the US News ranking of the journal’s school.

But this leads to some further questions: what’s the relationship between citations to law journals and the reputation of the school that publishes it? I have a vested interest in this question, because I’m the faculty advisor to the Alabama Law Review. (I like to argue to my dean that we need lots of money to host symposia, which he gives us.) And as someone who wants to encourage good scholarship, I hope that good scholarship and good journals are rewarded–that a good journal will reflect well on the reputation of the school that publishes it.

So that led me to analyze the US News data for 2006 (which actually appeared in the spring of 2005) and the 2004 W&L Law Library citation data (which measures citations to works published from 1997 to 2004). There’s a high correlation (.86) between citations and peer assessment scores for the US News top 50 schools.

I also looked at Professor Brian Leiter’s reputation survey, which I think represents a significant improvement methodologically over the US News data for the schools that he surveys. Some interesting stuff here–there’s a high correlation between Leiter’s reputation scores and the US News peer assessment (.91) and journal citations (.83).

But then things begin to get a little more surprising. For schools in the US News 52-102 range, the correlation is not nearly so high (.57). The correlation becomes even weaker when we consider journals at schools in the third and fourth tier (.41). The correlations are significantly weaker at each level between peer assessment and citations by courts: .66 for US News top 50 schools; .12 for US News 52-102 schools; .25 for US News tier 3 and 4 schools.
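The correlations above are ordinary Pearson coefficients between two score series. As a rough sketch of the calculation (the numbers below are made-up placeholders for illustration, not the actual W&L citation or US News peer-assessment data):

```python
# Illustrative only: computing a Pearson correlation coefficient of the kind
# reported in the post. The data below are hypothetical, NOT the real figures.

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical peer-assessment scores and journal-citation counts
peer_scores = [4.8, 4.6, 4.4, 3.9, 3.5, 3.1, 2.8]
citations = [2100, 1900, 1750, 1300, 1100, 800, 700]

print(round(pearson(peer_scores, citations), 2))
```

A coefficient near 1 (as with the top-50 schools) means peer reputation and journal citations move closely together; the much lower figures for lower-tier schools mean the two measures frequently disagree there.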

My paper contains detailed tables reporting the data and speculates some on their meaning. I suspect that people at schools whose journals over-perform (like Fordham, Cardozo, University of Miami, University of Kansas, DePaul, Albany, Indiana–Indianapolis, University of Colorado, and Houston) are going to be quite pleased with the results. Faculty at other schools may have other explanations–like the frequency of citation doesn’t mean much. And on that they may be correct. A lot of really, really fine work is rarely cited. I know that to be the case in legal history, the area of scholarship I know best, and suspect it’s true for some other important areas of legal scholarship. Part of the problem with citations to legal history is that relatively little scholarship is being written in that area, so there are comparatively few opportunities to cite work.

There are some serious limitations with citation studies, of course. But the data are worth considering. One implication that I suggest is that for third and fourth tier schools, the citation data may be a way of bringing some precision to the peer assessments of school quality. Perhaps US News should look to citation data to gauge something about the intellectual orientation of a school.

Because my time as a guest at concurringopinions is about to expire, I rushed a little to get a draft of the paper out before I turn into a pumpkin. There’s some more I plan to do with this (including using the recently posted 2005 citation data and looking more at variances in the number of citations at each tier of school). A special thanks to Brian Leiter, who’s doing a lot to bring some more rationality to the rankings world.

Related posts by some other folks:

Kaimi Wenger’s On Rankings Bias; or, Why Leiter’s rankings make Texas look good — and why that’s not a bad thing

Kaimi Wenger’s The Uneasy Case for the US News Law School Rankings

Dave Hoffman’s Ann Coulter on Law School Rankings

Betsy McKenzie’s Law Students as Consumers of Rankings

Brian Leiter’s April 2004 More Thoughts on the US News Law School Rankings

Brian Leiter’s March 2005 Updating the 2003-04 Law School Rankings

Brian Leiter’s April 2005 More on the US News Rankings Echo Chamber

Brian Leiter’s August 2005 NY Times Expose of How Law Schools Manipulate the US News Rankings


10 Responses

  1. geoffrey manne says:

    I would also point out these two posts (coincidentally, of mine) over at The Conglomerate on topic.

    The first is my back-of-the-envelope list of top 30 journals using a secret formula consisting of unequal parts W&L/US News/Leiter. In other words, as Al suggests, I think journal rank is determined by some combination of the journal’s quality (citations) and the school’s reputation (US News & Leiter), recognizing, of course, that these are not entirely independent variables.

    The second is a post on the (un)importance of article order within a journal issue, but a little way through the comments, Gordon hijacks the thread and begins a discussion on journal rankings.

  2. Law Student '06 says:

    I think the problem with law reviews (and the legal profession in general) is the fact that prestige often is the most important factor for any of these journals. For example, is the Harvard Law Review more likely to publish the work of a professor who went to Yale and is teaching at the University of Chicago, or someone who went to American University Law School and teaches at Hofstra’s law school?

    Quite frankly, I think if these two hypothetical people submitted the *EXACT* same piece of scholarship, the first would stand a much better chance of getting their work published. In fact, the second person might not even get their article reviewed. It’s a reflection of the self-feeding, cyclical nature of a system in which elitism feeds elitism, making it virtually impossible for certain people to break into the upper ranks of the profession.

    Quite obviously, for the author of an article, it is much more beneficial to be in the Harvard Law Review or the Columbia Law review, than a law review at a lower ranked school. That much is simple.

    I think the more important question is whether it is good for the legal profession to have such a strictly rank-driven system based on elitism. It can be seen at every level–admissions to law school, the pedigree of law professors, law journals, clerkships, and jobs at the elite firms. While there’s no doubt that the highest-ranked law schools educate many of the brightest students out there, it is also true that the best legal minds in America aren’t only found at Harvard, Yale, Stanford, Columbia, Chicago, UPenn, UVA, Michigan, and a handful of others.

  3. Kevin C says:

    I am an articles editor at Texas Law Review, and I feel like we made a conscious effort to select what we felt was good scholarship, and not just work by people from the very top, prestigious law schools (according to US News-style rankings). Just as an example, we have an article from a professor at South Carolina and seriously considered an article by a professor at St. Louis University. We really looked for young scholars who will–hopefully–build on this work as their careers continue. This should benefit TLR because the scholar will refer back to their earlier work in TLR, and will also–again, hopefully–benefit the young professor by getting their name into a relatively highly ranked journal early in their career. We looked for young scholars who had already published substantial work in less well-known journals because we wanted someone who showed an ability and a desire to produce a lot of scholarship, but who was also producing good scholarship. TLR is ranked pretty high in most law review rankings, but we did lose a number of pieces to Yale and other law reviews, and that is part of what made us decide to look to professors at schools who might be ignored by law reviews at the very top of the rankings.

    I think the work done by Prof. Brophy is important, and it’s something we’ve commented on around the office. TLR is ranked fairly close to Texas in many of the rankings I’ve seen, but Fordham Law Review is noticeably above the ranking of Fordham by US News. I don’t know how US News does their rankings, but I think law review rankings should play a substantial part in how the law school itself is ranked. It says a lot about students–and therefore the professors who teach them–that they are able to pick out good legal scholarship from the huge number of articles that get submitted every year. This is especially true at lower-ranked journals, which have to worry about losing articles by big-name professors to Yale and other top journals. It means they are picking good work based on the content of the article, not just the name or school of the author.

  4. Jason says:

    Perhaps it’s me being naive, but what does all the stuff about who the scholar is have to do with anything? Shouldn’t, ideally, the piece itself be all that matters?

    This informs a skepticism on my part that a school having a “better” review than its US News ranking has anything whatsoever to do with its ability to pick out “good” scholarship, because it’s sort of apparent that no one’s actually looking at how good the scholarship is, except as a threshold issue for publication.

    Perhaps it’s time for articles to be decided on in the same (blind) way exams are graded? What would be the downside there? Aside from transaction costs from having to implement the blind system itself.

    (Full disclosure: I attend one of those schools that’s more highly regarded by law review than US News ranking: Cardozo.)

  5. Al,

    1. It would be great if there were a way to control for the “one hit wonders” — journals that get most of their citations from just one lucky hit.

    2. To what extent should symposia count? Chicago-Kent, for example, is an all-symposium law review I believe. While law reviews should get kudos for planning good symposia, some might be generating many of their cites not by their article selection process but by inviting big-name scholars to write symposium pieces. I’m not sure whether such pieces should be excluded, but the question is worth addressing.

  6. Michael Yim says:

    You should also consider secondary Journals at a law school. For example, Fordham’s International and Urban law journals rank amongst the top 5-10 in specialty law school publications.

  7. Alfred Brophy says:


    Both very interesting points. “One hit wonders” can, indeed, increase a law journal’s ranking; this is particularly noticeable in court citations. St. Mary’s Law Review is ranked third in court citations (largely on the basis of one article–111 of their 136 citations are to Wendell Hall’s Standards of Review in Texas). After learning about that, I thought about writing an article on standards of review in Alabama….

    But some spot-checking suggested that the one-hit wonder phenomenon has a smaller effect when ranking journals based on citations by other journals. That is, journals that do substantially better in citations by journals than one would predict based on their US News ranking seem to benefit from a series of articles that win citations.

    I think symposia ought to count; higher-than-expected citation counts by journals suggest that a school has a particularly developed scholarly culture. So I think we want to encourage schools to host symposia.

    I continue, of course, to be bothered by a rigid adherence to citation counts as a measure of quality. My sense is that citation counts are increasingly important in hiring (particularly hiring of chairs). And I think that pretty systematically disadvantages those of us who work in esoteric areas, like legal history. There just aren’t many opportunities for even absolutely outstanding work on seventeenth-century English law to be cited. One study that I’d like to conduct sometime soon is a comparison of citations to the “legal history” work of leading legal historians with citations to their more doctrinal work.

  8. reader says:

    I wonder if your methodology might blur cases in which a good law journal actually exerted an upward pressure on its school’s USNWR ranking by putting it in the minds of more law professors. I’m not sure how often this happens, but I suspect at least some of Iowa’s rise up the charts, for example, is related to the fact that its law journal’s reputation used to dramatically exceed the reputation of the school.

  9. Al — I agree with your unease about citation counts. I’m also even more uneasy about SSRN download counts. The best way to assess a scholar’s work is to read it. The problem is that people want shortcuts, and it is easy to count citations. Before Westlaw and SSRN, I wonder what people did when they couldn’t easily generate citation or download counts. I posted about some of these issues here.

  10. Alfred Brophy says:


    I like the possibility that you suggest–that a “good” (that is, frequently cited) law review increases a school’s prominence. I think people pay attention to what’s in reviews and that can help form opinions about those schools. One vignette here: way back in 1993 when I was interviewing for jobs–back before the internet–one of my key sources of information about schools I was interviewing with was law journals. I used them to get a sense of what was going on at the schools.

    I agree that the reputations of schools with reviews that consistently out-perform their ranking (like Cardozo, Chicago-Kent, and Fordham) are helped by those reviews. I think well of Chicago-Kent in part because I’ve read a lot of articles in their review–I have particularly fond memories of their symposium on slavery back around 1996 and their several symposia on law review citations. All of which suggests that a good symposium can generate good will for years into the future. This, of course, is all part of my plea for faculty to take more of an interest in what’s happening at their law review and to encourage the publication of better scholarship.