US News Rankings – Focus on Inputs

In my discussion of how the US News Law School Rankings create negative incentives for law schools, I want to start by examining the misplaced emphasis among the components of the rankings. One of the oddities of the US News rankings is that the quality of incoming students (as measured by median GPA and LSAT) factors so prominently in how schools are ranked. The two factors account for 22.5% of a school’s overall ranking. Yet what exactly do we expect to learn from the median GPA and LSAT scores? First, those two factors tell us, in aggregate, how the last entering class perceived the relative value of each law school. Students with the highest LSAT/GPA numbers typically have the greatest number of choices in terms of schools and scholarships. As a result, GPA/LSAT scores are highly correlated with the previous year’s ranking. And, thus, the previous year’s ranking largely predicts the subsequent year’s ranking. Second, the numbers give us a crude sense of the quality of the student body before it receives a law school education. Such considerations offer guidance to potential employers, but we might ask why those factors are important in any ranking system seeking to assess law school quality from a student perspective.
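This self-reinforcing loop is easy to sketch in a toy simulation. Everything below is a made-up assumption for illustration (the number of schools, the noise level, the idea that medians mirror rank one-for-one), not real data: schools are re-ranked each year on entering-class medians that largely track the prior year's rank.

```python
import random

random.seed(2)

n_schools = 20
years = 10
rank = list(range(n_schools))  # 0 = best; arbitrary starting order


def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)


for _ in range(years):
    # The strongest applicants pick the best-ranked schools, so each
    # school's entering-class medians track last year's rank plus a
    # little noise (scholarships, geography, individual preference).
    medians = [-r + random.gauss(0, 1.0) for r in rank]
    # The new ranking is then driven largely by those input medians.
    order = sorted(range(n_schools), key=lambda i: -medians[i])
    new_rank = [0] * n_schools
    for pos, i in enumerate(order):
        new_rank[i] = pos
    prev_rank, rank = rank, new_rank

print(f"year-over-year rank correlation: {pearson(prev_rank, rank):.2f}")
```

In this toy world the ranking never rewards anything a school actually does; it simply echoes itself from year to year, which is the concern about weighting inputs so heavily.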

12 Responses

  1. cornelius mccracken says:

    “Peer Assessment” accounts for a full quarter of the final ranking. I’m guessing “the previous year ranking largely predicts the subsequent year ranking” applies just as much to this even larger segment of the final rank. At least the higher-quality students have a larger impact on the ratings than their lesser peers. The same cannot be said for faculty.

  2. Orin Kerr says:

    Students often learn a lot from other students — in class discussions, in study groups, and in informal meetings. If LSAT/GPA is a modestly successful way to measure the smarts of other students, it gives applicants a way to gauge the sharpness of the other students from whom they will learn.

  3. dave hoffman says:

    This is all very interesting, though I’ve one quibble. The inputs do not account for 22.5% of the ranking. Their influence is mediated by variation and normalization (as Ted Seto showed some years ago). For instance, there is less variation in uGPA than in LSAT, exaggerating the importance of the latter factor.
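That mediation can be illustrated with a toy calculation. The synthetic data, the 12.5/10 split of the 22.5%, and both scaling schemes below are assumptions for illustration; US News does not publish its exact scaling. The point is just that a factor's effective influence on the spread of composite scores depends on how it is scaled, not only on its nominal weight.

```python
import random
import statistics

random.seed(1)

# Toy data for 50 hypothetical schools (assumed values, not real data).
n = 50
lsat = [random.gauss(160, 5) for _ in range(n)]    # larger cross-school spread
gpa = [random.gauss(3.6, 0.1) for _ in range(n)]   # smaller cross-school spread

w_lsat, w_gpa = 0.125, 0.10  # assumed split of the stated 22.5%


def zscore(xs):
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]


def spread_share(a, b):
    """Share of composite-score spread contributed by component a."""
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    return sa / (sa + sb)


# Scheme 1: weight the raw values -- the high-variance factor dominates.
raw_share = spread_share([w_lsat * x for x in lsat],
                         [w_gpa * x for x in gpa])

# Scheme 2: weight z-scores -- influence tracks the nominal weights.
z_share = spread_share([w_lsat * x for x in zscore(lsat)],
                       [w_gpa * x for x in zscore(gpa)])

print(f"LSAT share of spread, raw units: {raw_share:.2f}")
print(f"LSAT share of spread, z-scored:  {z_share:.2f}")
```

Under the raw-units scheme LSAT drives nearly all of the spread; under full standardization its share falls back to its nominal weight. Whatever US News actually does presumably sits somewhere between these extremes.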

  4. Corey Yung says:

    Hi Orin,

    I agree with your point that students learn from each other, but simply do not know how much of an effect there is from high-score students being with many, many other high-score students (and vice versa). If, as in my thought experiment today, students were randomly assigned to schools, high-score students would still have plenty of high-score students to be with. But they would also encounter students from a wide range of entering score levels. That might also mean greater encounters with people from diverse backgrounds of all types. My inclination, totally unsupported by data, is that students learn the most when regularly encountering people at, above, and below their present ability level. To the degree that LSAT/GPA embody that, I think the present system homogenizes student bodies too much. But I recognize that I’m assuming a lot in that conclusion.

  5. Corey Yung says:

    Hi Dave,

    I debated whether I should mention that, but sided with simplicity. Particularly because the distributions among other variables (job placement, expenditures, etc.) have changed in the down market since Seto looked at the rankings, it is difficult to know how much any given variable affects schools on average. To be more precise, I should have probably said that LSAT/GPA account for 22.5% of each school’s raw score, but I’m not even sure if that is correct because US News keeps too much of its methodology hidden. I simply don’t know year-to-year how they scale each variable without trying to reverse engineer it. So, I just stuck with US News’ stated percentage because not knowing the exact effect of LSAT/GPA doesn’t undermine my overall point.

  6. Dave Hoffman says:


    Yes, it’s not the same as when Seto published, but it’s not terribly different either. Reverse engineering is much, much easier than it sounds.
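The basic idea can be sketched in a few lines. If the overall scores are, as assumed here purely for illustration, a weighted sum of published factors, then ordinary least squares recovers the hidden weights from the published numbers. The two-factor setup, the weights, and all data below are hypothetical.

```python
import random

random.seed(3)

# Hypothetical "published" data: overall scores that were in fact
# computed from two factors with hidden weights plus a little noise.
true_w = (0.6, 0.4)
n = 100
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [true_w[0] * a + true_w[1] * b + random.gauss(0, 0.01)
     for a, b in zip(x1, x2)]

# Least-squares fit via the 2x2 normal equations (Cramer's rule).
s11 = sum(a * a for a in x1)
s22 = sum(b * b for b in x2)
s12 = sum(a * b for a, b in zip(x1, x2))
s1y = sum(a * c for a, c in zip(x1, y))
s2y = sum(b * c for b, c in zip(x2, y))
det = s11 * s22 - s12 * s12
w1 = (s1y * s22 - s2y * s12) / det
w2 = (s11 * s2y - s12 * s1y) / det

print(f"recovered weights: {w1:.3f}, {w2:.3f}")  # close to the hidden 0.6, 0.4
```

With more factors the algebra is the same, just a bigger system, which is why the reverse engineering is easier than it sounds.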


  7. Orin Kerr says:

    Corey, I don’t see this as so difficult. When you complete a draft paper and you send it out for comments, do you send out your draft to some people who are high-ability and others who are low-ability, just to get a diversity of reactions — so you can learn from the low-ability readers just as you can from the high-ability readers? Or do you try to send out your draft only to people who are high-ability? I’m guessing you do the latter, as you likely have found that you get better assistance from high-ability readers than from low-ability readers. To be clear, I’m not saying that LSATs and GPAs accurately measure ability. But they’re efforts to measure ability, so they’re relevant to students.

  8. Corey Yung says:

    Hi Orin,

    I don’t disagree. My point is that there are diminishing returns to having more and more high-score people around. Just as I don’t send my drafts out to 100 people, I think there is some threshold of high-score students after which more isn’t that valuable. In contrast, I think many students learn a great deal when helping others who aren’t as quick as they are. And sometimes being with similarly situated people (as opposed to smarter people) allows students to struggle through the issues without the answers being given to them. I also wonder if high/low score mixes serve as a proxy for other forms of diversity which add to the classroom experience. But my feelings on this are largely tentative because I have nothing other than my own anecdotal observations.

    Regardless, I shouldn’t even be arguing against your thoughts on this because either theory (yours or mine) is consistent with excluding input variables from a rankings system. If there are positive effects associated with being around smart students, we would expect those to show up in output variables as well. Inclusion of input variables only serves to mask portions of the output numbers that are simply due to higher entering-class quality, and it creates self-reinforcing rankings. Although it is many posts into the future, my ultimate purpose in critiquing the US News rankings system is to introduce a system that excludes inputs and, in my opinion, is superior.

  9. Corey Yung says:

    Hi Dave,

    It has been a few years since I tried to reverse engineer the rankings. Having been, at the time, at an unranked school necessitated delving deeper into the rankings system. My memory, though, was that some of the scaling was wonky, but I could be misremembering. Either way, I’m arguing against the inclusion of any input variables in the final rankings and didn’t want to create confusion by quibbling with US News’ stated percentage. But your point is taken.

  10. Orin Kerr says:

    Corey, I don’t see why inputs are irrelevant under my theory. It seems to me that they are highly relevant. The learning is occurring at the input stage, and it is learning that is difficult if not impossible to measure using outputs. Most of the useful conversations that occur among students are not about testable things. They’re not about the bar exam. Instead, they’re discussions of theory, or constitutional interpretation, or interesting aspects of policy. I don’t know how those conversations can be assessed based on output.

  11. Corey Yung says:

    Hi Orin,

    I think with the right output variables, even subtle effects should be measurable. I am primarily thinking of moving beyond the initial job placement numbers that US News (and other rankings systems) focus on and utilizing short-, medium-, and long-term employment outcomes. Ideally, we would also have a student/alum survey system, but I don’t think that is realistic right now. It is possible that eliminating all input variables, as I propose, will fail to capture experiential benefits at particular law schools like those you describe. However, I’m skeptical that those are well measured by median LSAT and GPA scores. Either way, I hope you will let me know, when I post it, whether my rating system relying on new data is better or worse at addressing the potential omissions you outline. My goal is to offer the best quantitative system for assessing law schools, and I don’t want to omit important factors.
