On Rankings Bias; or, Why Leiter’s rankings make Texas look good — and why that’s not a bad thing

Recent blog posts by Paul Caron and Gordon Smith note that creators of alternative law school rankings often seem to design systems on which their own schools excel. A possible implication (not one that Smith or Caron makes, but one that various Leiter critics have been pressing for some time) is that these alternative rankings are merely a form of naked self-promotion by their creators. In its simplest form, the argument goes something like this: “Brian Leiter promotes his rankings because they rank Texas higher than U.S. News does, and this makes Leiter look better.”

In response, Leiter has asserted through blog posts and comments that his rankings do not necessarily make Texas look better. His recent statements focus on the fact that he lists student quality on his new rankings page. He writes:

My institution, Texas, ranks 8th in faculty quality measured by reputation, 9th in faculty quality measured by impact, and 16th or 18th in student quality, depending on the measure used. Texas ranks 15th in US News, as it has for quite some time now. Texas thus ranks both more highly and more lowly in my ranking systems, depending on the measures used.

This is a singularly unconvincing fig leaf. Everyone knows that the 2000 and 2002 Leiter rankings did not weight student quality particularly heavily; they measured mostly faculty reputation, and they clearly gave an edge to Leiter’s school. (This is readily apparent from a look at Leiter’s archives section.) Thus, for some time now, the Leiter rankings have placed Texas higher than the U.S. News list.

Is this cause for concern? Does this suggest that the Leiter rankings are simply self-promotion? Actually, there is a much more innocuous explanation.

Start with the principle that there are many ways in which law schools differ. These include faculty scholarship, teaching, library, student support (such as tutoring for the bar), student LSATs, tuition, job placement, class size, and so on.

Equally important is that a law school cannot excel on every one of these yardsticks. Trade-offs of time and money require that a school choose among them. A law school may choose to spend money on more research support for the faculty; it may choose to spend money on more scholarships to draw in higher-LSAT students; it may spend money on its library; it may try to improve its bar passage rate or job placement.

It is not possible to say objectively that any one of these factors is more or less important than another. Is a 10% increase in faculty scholarship better than a 10% increase in LSAT scores? Is a new center better than 500 (or 1000) new library volumes? How about a 10% increase in bar passage? Or a 10% decrease in tuition? Or a 10% increase in student-body or faculty racial diversity?

Put these questions to ten different people, and you’ll get ten different answers. Each individual’s responses will be determined by her own values and preferences. And because these decisions are made school by school, different schools will end up focusing on different things. School A will focus more on faculty scholarship; School B will focus on attracting smarter students; School C will focus on smaller class sizes; and so forth.

Given this reality, rankings methodology becomes key. If I create the “Wenger Rankings,” based 80% on faculty scholarship and 20% on student LSATs, I will come up with one list of schools. If Dan proposes the alternate “Solove Rankings,” based 80% on LSATs and 20% on scholarship, he will come up with a different list. Our methodological differences will produce real differences between the lists; some schools will rank higher on his list and lower on mine, and vice versa. Whose rankings are “better”? There is no objective answer; it will depend on the preferences of the person asked. (And the question becomes even more complicated when Dave adds the “Hoffman rankings” to the list, with yet another methodological mix.)
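To make the arithmetic concrete, here is a minimal sketch, in Python, of how the same underlying data produces two different orderings under the two weighting schemes described above. The schools and scores below are entirely hypothetical, invented purely for illustration.

```python
# Illustrative only: hypothetical schools with made-up scores (0-100 scale)
# for faculty scholarship and student LSAT strength.
schools = {
    "School A": {"scholarship": 90, "lsat": 70},
    "School B": {"scholarship": 70, "lsat": 95},
    "School C": {"scholarship": 80, "lsat": 80},
}

def rank(weights):
    """Return school names ordered by a weighted composite score."""
    scored = {
        name: sum(weights[factor] * score for factor, score in factors.items())
        for name, factors in schools.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

# "Wenger Rankings": 80% scholarship, 20% LSAT
print(rank({"scholarship": 0.8, "lsat": 0.2}))  # ['School A', 'School C', 'School B']

# "Solove Rankings": 20% scholarship, 80% LSAT
print(rank({"scholarship": 0.2, "lsat": 0.8}))  # ['School B', 'School C', 'School A']
```

Same data, different weights, and the top school flips; nothing about the schools themselves has changed, only the methodology.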

So, let’s get back to our cynic. The cynic charges this: Brian Leiter self-interestedly chose a methodology (a pure emphasis on faculty scholarship) that would rank Texas higher than U.S. News does. Our cynic looks at Paul Caron’s list and sees more evidence of methodological gaming. How is it possible that every alternative ranking system results in the ranker’s home school placing higher than it does in U.S. News? Clearly, every creator of a rankings system is choosing a methodology that rates her own school higher than U.S. News does. Could there be any explanation for this pattern, other than self-interest?

Yes, there could.

The same factors that locate schools on a ranking grid — a particular mix of scholarship, student quality, library, tuition, and so forth — also affect a professor’s decisions on where to teach. A relatively well-established professor like Brian Leiter chooses a school like Texas because its particular mix of the various factors seems right to him. And so we see the innocuous reason why rankers’ lists tend to favor their own schools:

The factors that Leiter believes are important are the ones that he uses to judge between schools. Those are also the factors that drew him to Texas in the first place.

Leiter rates faculty productivity far above student LSATs. (This is evident from his first two ranking lists.) That preference drew him to Texas in the first place: as noted in the newest Leiter rankings, Texas places relatively more emphasis on faculty productivity than on student LSATs. That same preference leads him to his methodology (rating faculty productivity above all else). And not surprisingly, Texas does well on such a list. If it didn’t, he wouldn’t be teaching there.

Rather than being evidence of self-interest, the Leiter rankings show that, given Brian Leiter’s preferences, Texas _is_ the 8th-best school in the country. Now, Brian Leiter’s preferences certainly aren’t gospel; it is perfectly valid for someone else to build her own preferences into her own methodology. But Leiter’s rankings are a legitimate, and not necessarily self-interested, portrayal of schools laid out according to his own preferences. And while I am not as familiar with the other rankings that Caron lists, I suspect that much the same is going on there. That is:

A rankings methodology that rates one’s own school highly need not be evidence of self-interest. Rather, in many cases, it probably flows from the fact that the ranker teaches at a school whose priorities closely track her own preferences and values. Those values underlie both her choice of methodology and her decision to teach at that school in the first place.

