An Experiment in Paced Learning
As law professors, we have multiple teaching goals. Traditionally, the major goal has been to teach black-letter law along with the legal reasoning process (how to “think like a lawyer”). Some observers have suggested that law classes should also teach other practice skills. As a professor, I recognize that I don’t have time to do everything, and I’ve experimented with different approaches in the past. For instance, as I’ve written before, I’ve tried introducing practice-like exercises in my courses. But there’s a balance to strike: our ability to experiment depends on our teaching goals and the time available to dedicate to different projects.
This past semester, I tried something new. After some of my students had less-than-ideal results in prior bar administrations, I wanted to prepare students for testing on the material: both the black-letter law and the testing process itself. With that focus, I implemented a paced-learning system. Here’s what I did, and here’s how it worked out.
Modules: The Concept
I required my Wills students to take seven module tests during the semester. About every two weeks, students took a mandatory, graded, 20-to-30-minute in-class test on the material we had just covered. The tests were divided thematically (e.g., capacity, intestacy, trust creation, and so on), and each counted for 5 percent of the final grade, 35 percent in total. I also built in a competency requirement: students had to score at least 50% on each module test, or they would be required to retake it.
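For readers who like to see the arithmetic, the scheme above can be sketched in a few lines. This is purely illustrative: the function name, the score format, and the sample student are my own assumptions, not anything the course actually used.

```python
# A minimal sketch of the module grading scheme described above.
# All names and the sample scores are illustrative assumptions.

MODULE_WEIGHT = 0.05    # each of the 7 module tests counts 5% of the final grade
PASS_THRESHOLD = 0.50   # scoring below 50% on a module triggers a mandatory retake

def module_results(scores):
    """Given seven module scores as fractions (0.0-1.0), return the total
    contribution to the final grade and which modules require a retake."""
    contribution = sum(s * MODULE_WEIGHT for s in scores)
    retakes = [i + 1 for i, s in enumerate(scores) if s < PASS_THRESHOLD]
    return contribution, retakes

# Hypothetical student: solid on most modules, weak on module 4.
contribution, retakes = module_results([0.9, 0.8, 0.7, 0.4, 0.85, 0.6, 0.75])
print(round(contribution, 3))  # 0.25, i.e., 25 points of the 35 available
print(retakes)                 # [4]
```

The point of the 50% floor is visible here: the weak module still counts toward the grade, but it also lands on the retake list, so the student cannot simply absorb the loss and move on.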
There were several goals here. I wanted to push students to learn the material on a consistent basis with feedback at regular intervals — that is, it was a system of paced learning. I also wanted to push those students who didn’t know the material to improve their knowledge.
The modules ended up becoming a major logistical headache, in part because of factors I had missed going in: I hadn’t anticipated how complicated the process would be. (I’m making some changes going forward to streamline it.)
As it was, the retake requirement caused delays that kept me from giving student feedback for far too long. (Say that Jane Student needs to retake the test. I can’t release the test questions until Jane is done with her retake, and scheduling the last few retakes often became complicated. So it was sometimes weeks or longer before I could release questions and give detailed feedback.) Students were sometimes frustrated. It was a pilot program, and I learned a few things not to do.
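The bottleneck described above reduces to a simple rule: the earliest date I could release a module’s questions was the date of its last outstanding retake. A tiny sketch, with invented dates and function names, makes the constraint concrete:

```python
# Sketch of the feedback-release constraint described above.
# The dates and the function name are illustrative assumptions.
from datetime import date

def earliest_release(retake_dates):
    """Questions for a module can be released only after its final
    scheduled retake; one straggler delays feedback for everyone."""
    return max(retake_dates)

# Two retakes finish early; one straggler pushes release back three weeks.
release = earliest_release([date(2012, 3, 1), date(2012, 3, 2), date(2012, 3, 22)])
print(release)  # 2012-03-22
```

This is why a single hard-to-schedule retake could hold up detailed feedback for the entire class.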
The program added a significant amount of work to my semester. Between writing questions, grading answers, and responding to students, I probably spent 4 to 6 hours per week on modules, or roughly 80 hours over the semester. That’s a significant investment.
Because of the time invested and the logistical difficulties, I was curious as to what results would look like. Would they justify the headaches?
The results of module testing were astonishingly good. I was expecting to see some improvement, but the level of improvement was stunning.
Student essay answers were strong; based on the essays, the students learned the material very well. However, essays make it hard to quantify precisely how much better they did.
Multiple choice results were unequivocal: This set of students performed far better on multiple choice questions, by a very wide margin, than any other wills class I’ve ever taught.
I typically give about an hour of multiple choice on the three-hour exam, and I’ve used a similar set of questions each year. I always add some, remove some, and tinker a little, so the set is not entirely the same, but overall it’s very similar. Historical numbers on student performance have held steady for years.
Over the past seven times I’ve taught, the mean and median MC scores have averaged around 17.5 out of 30. It’s been a very consistent pattern, slightly higher in some test administrations, slightly lower in others, but overall very consistent.
[Table: mean and median multiple-choice scores by semester, covering Spring 2008, Fall 2008, Spring 2009, Fall 2009, Fall 2010, Fall 2011, and Spring 2012, plus the overall average.]
I’m used to the results by now. So when I saw this class’s result, I did a double take.
These results were stunningly good. To put them in context: _no one_ in the past seven classes I’ve taught has ever scored above 26/30 on the multiple choice; this time around, there was a 28 and two 27s.
It was not only the high end students who benefited. The median and mean scores were both almost four full points higher than the historical average.
These were astonishingly good results.
I want to point out a few caveats. First: this is a single data point, and it’s hard to isolate all the contributing factors from one semester.
Second, I want to acknowledge potential factors affecting the result. For instance, this might have been an unusually smart set of students. (However, it seems unlikely that this would explain the whole change.)
Third, I want to reiterate that, as always, I added, changed, and removed a few MC questions, and I might have inadvertently added questions that were simply too easy. (In fact, I’ve determined that one of the new additions was probably too easy, and I’m going to tinker with it.)
But again, this is also part of the normal course. I tinker with questions all the time. Unless there was an unusual concentration of overly easy questions, this wouldn’t explain the jump.
Modules: Preliminary Conclusions
Given the caveats, and based on the information I have now, this is the single most prepared-to-take-a-Wills-test set of students I have ever had, by a wide margin.
I don’t know that they know more about Wills than other students. But they were definitely, definitely more prepared to take a test on Wills than any other class I have ever had.
I think the major cause has to be the obvious one: These students did better in Wills testing than ever before, because they _got_ a _lot_ of Wills testing. Over the course of the semester, they had to take over two hours of mandatory in-class testing. We went over the tests afterward and discussed the answers (although, as noted, there was often too long a delay in the process).
I think it helped a lot that this was required. I’ve always encouraged students to take practice tests, but it’s different when it’s mandatory. Putting students in a real test environment every two weeks is something I’ve never done before, and it seems to have worked really, really well. I hope it has prepared this set of students for the bar, too.
Modules: Going Forward
As noted earlier, preparedness for a test is not the only goal of a class; there are other pedagogical goals, and there are limits on both student time and my own. This paced-learning framework limited my ability to do other things (for example, I was not able to use some of the practice-readiness exercises I had used in the past). However, as long as test preparedness (both substantive black-letter legal knowledge and practice in a testing environment) is a main goal, this approach was highly successful.
The module testing was a lot of work, and it will continue to be work going forward. In order to give feedback, I ended up releasing my questions to the students once all exams were in. That was the better pedagogical move, but it also means I’ll be creating new questions this semester, which will take time and resources.
However, given the results, and given the need to prepare students for testing on this subject, I’m going to continue with modules in Wills. And if results continue this way, I’ll try to phase the approach into other classes as well. I’m really, really excited about these results. All in all, the module program met and exceeded my goals, and I’d call it a real success.