Teaching Evaluations

5 Responses

  1. To me, the overriding principle is to have a clear understanding of who the intended audience for an instrument is. Is this a tool for the school to evaluate professors, for students to learn about professors they might take classes from, or for professors to improve their teaching? The more of these bases an instrument tries to cover, the less well it’ll do at any of them. For that reason, I can’t recommend highly enough the practice of using different evaluations for these different purposes. Three may be overkill, but I’ve found that two is quite reasonable:

    We hand out course evaluation forms near the end of the semester. They’re good, but they’ve got some, perhaps unavoidable, issues: First, by asking for numerical ratings, they put the class in an evaluative frame of mind: this was good, that was bad, let’s play Simon Cowell for a few minutes. The qualitative responses end up being a little thin. Second, due to the timing, it becomes impossible to ask questions about the exam, even though the exam is the focal point of the course (due to the cockamamie law-school grading system).

    I get around these issues by also sending my students a SurveyMonkey poll a little after the grades are in. The timing lets me ask things like whether the exam was fair. And the less evaluative nature of the instrument means I get longer, more helpful responses to questions like “What topics do you feel could have been taught better?” The answers have been very useful. For example, in my Copyright class, I’m going to completely revise the way I teach substantial similarity. My SurveyMonkey results told me pretty strongly that that week was a disaster, a fact that was hidden in the noise of lesser concerns on the official school evaluation.

  2. Eric Goldman says:

    James, if it makes you feel any better, I think most copyright professors feel like they chunk the “substantial similarity” section. The law is way too squirrelly for anyone to teach it cleanly.

    Sarah, this is a terrific topic. Let me offer up a couple of marginally useful generalities:

    * there is an enormous amount of scholarly research on teaching evaluations–probably thousands of articles

    * at the law schools I’ve been affiliated with, precisely ZERO of the scholarly research is consulted in the design and implementation of teaching evaluations.

    I can’t solve your broad problem, but I’ve repeatedly blogged on literature about teaching evaluations, both in the law school context and beyond:

    Regards, Eric.

  3. Jim G says:

    “Should teaching evaluations be released publicly?”

    Yes, partially. The results (but not the individual survey responses) should be released to students, at a minimum.

    Students want information about professors—who is a good lecturer, who isn’t, who’s a tough grader, who they’ll learn from, who is old-school Socratic and who is more laid back, and so on. In the absence of official evaluations, students turn to outside sources of information: other students, Ratemyprofessors.com, Internet message boards. Most of these are poor information sources, but if that’s all a student can get, that’s where she’ll go.

    Course evaluations aren’t perfect, but they’re better than the available alternatives. An official report on student satisfaction with courses and professors is the best way to deal with sites where professors are rated by a handful of self-selected students.

  4. Anon. says:

    There is a lot of literature on evaluations. One of my favorites is this relatively recent one:


  5. anon. says:

    Professor Merritt’s article (referenced above) appears in the St. John’s Law Review:

    82 St. John’s L. Rev. 235 (2008).