Academics – driven by their accrediting agencies – have a new buzzword. We are all now charged with thinking about assessment. How well are we doing at the goals we set out for ourselves? How do we know? How do we know if our processes of assessment are appropriate? As an academic (non-legal) blogger observed:
Going beyond the reasonable notion that you should periodically take a deeper look at what you’re doing, pedagogical reformers of many sorts get converts’ zeal and treat assessment as a moral imperative. But, when a religion has enough zealous adherents, it might suddenly become mainstream. And when it goes mainstream, it goes from being pure to being mass-market, lowest-common-denominator oversaturation. The word “assessment” is no longer confined to careful examinations of how well something is working. It isn’t even just applied to a bureaucratic ritual of report-writing focused on the curriculum. It’s applied to every piece of paper, every report, every bit of data, any and every piece of bureaucracy and hoop-jumping and report-generating. The odds are good that a time sheet will soon be marked “Hours assessment” and an account statement will be marked “Fiscal assessment.”
This proselytizing ideal has obviously caught on in the ABA’s self-study process, which requires not just a strategic plan and a strategic planning process, but also that the school show that it regularly evaluates its self-assessment and thinks about whether the school’s goals are good ones. Schools that fail to have a process, a plan, and a plan assessment will be disapproved until they come to their senses.
It’s no small irony – and I’m sure I’m not the first to note it – that there is no evidence at all that schools which regularly engage in planned reflection produce better outcomes for students or for society than schools that muddle through with less formal techniques. I’m not even sure it is possible to design an experimental study that would make the case for assessment, given external validity concerns. The case against formalized self-reflection is pretty simple: deciding what academics ought to maximize is a hard problem, and any answer arrived at by any group of people will necessarily be too vague to provide hooks for truly useful tactical choices, especially when the time spent planning uses up productive resources. Indeed, it’s possible that designing ever-more-particular assessment metrics (and plans for achieving those metrics) encourages us to set ever-more-narrow goals, which are then, comfortably, met.
All in all, I’d give the current assessment trend a 23.3 on an A to ∂ point scale, where our goal is to hit a ß.