Measuring Law School Teaching (Continued)

Thanks to various commentators (and Professor Carrell) for sending me this paper measuring the effect of teaching on students’ performance. As I’ve previously written, the paper (and its related literature) provides reasons to question assumptions about the relationship between student satisfaction and learning.

Two paragraphs from the paper, which I’ve now read, stand out:

Results show the less experienced and less qualified (by education level) calculus professors produce students who perform better in the contemporaneous course being taught; however, these students perform significantly worse in the follow-on advanced mathematics-related courses. Although we can only speculate as to the mechanism by which these effects operate, one might surmise that the less educated and experienced instructors may teach more strictly to the regimented curriculum being tested, while the more experienced professors broaden the curriculum and produce students with a deeper understanding of the material. This deeper understanding results in better achievement in the follow-on courses.

Ah, we’ve finally found a good potential defense of tenure: it enables brave souls to deviate from bad, top-down curricular mandates! This section highlights a key difference between law school teaching and the undergraduate instruction studied in the paper: law school teaching isn’t generally guided by a centrally administered curriculum. Unlike instructors at the college level, law professors are almost never told, in any useful detail, what their courses should cover. (I’ve heard odd rumblings that even attempts to coordinate instruction across sections would impinge on academic freedom!) This difference makes it hard to study law school teaching across professors, and all but impossible to replicate this study in the law school classroom.

Assuming, however, that the study has general implications, the finding about teaching evaluations is of particular interest:

[P]rofessor evaluations in the initial courses are very poor predictors of student achievement in the follow-on related courses. Of the 27 coefficients [of achievement studied], 13 coefficients are negative and 14 are positive, with none statistically significant at the 0.05 level. Again, results for question 22, which asks students, “Amount you learned in this course was:” show that a 1-point (equivalent to 1.8 standard deviations) increase in the mean professor evaluation resulted in a statistically insignificant .014, -.008 and -.018 respective standard deviation change in calculus, science, and humanities follow-on related course achievement. Since many U.S. colleges and universities use student evaluations as a measurement of teaching quality for academic promotion and tenure decisions, this finding draws into question the value and accuracy of this practice.
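To get a feel for just how small these point estimates are, divide each by 1.8 to convert them into effects per standard deviation of evaluations (this is my back-of-the-envelope arithmetic, not a calculation reported in the paper):

\[
\frac{0.014}{1.8} \approx 0.008, \qquad \frac{-0.008}{1.8} \approx -0.004, \qquad \frac{-0.018}{1.8} \approx -0.010
\]

In other words, even a full standard-deviation jump in a professor’s evaluations predicts, at most, roughly a hundredth of a standard deviation of follow-on achievement, with the signs pointing in both directions.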

A commentator on my previous post suggested that this result points to a problem in the grading system. I agree! Professors aren’t being graded by students on indices relevant to how well the students are learning. Jason Solomon’s use of self-reported student gains in analytic ability to measure “educational quality” thus risks misleading us about how well law schools are doing.

I’ve suggested in the past that we ought to look for alternative ways of measuring teaching that go beyond student satisfaction. A nice approach, if the data were available, would be to do what this study did: measure student performance in follow-on courses. Or, as Bill Henderson (and others) have argued, you could focus on employment outcomes. Either of these alternatives, it seems to me, would dominate student satisfaction metrics, which are (at best) a very bad proxy for whether law professors are doing their job, which is to model & instill in students a lawyer’s situation sense.

(Photo Credit: Wikicommons)
