How Much Enthusiasm for Randomized Trials? A Response to Kevin Quinn and David Hoffman

We thank Kevin Quinn and David Hoffman for taking the time to comment on our paper.  Again, these are two authors whose work we have read and admired in the past.

Both Dave and Kevin offer thoughts about the level of enthusiasm legal empiricists, legal services providers, and clinicians should have for randomized studies.  We find ourselves in much but not total agreement with both.  To Kevin, we suggest that there is more at stake than just finding out whether legal assistance helps potential clients.  In an era of scarce legal resources, providers and funders have to make allocation decisions across legal practice areas (e.g., should we fund representation for SSI/SSDI appeals, for unemployment appeals, or for summary eviction defense?).  That requires more precise knowledge about how large representation (offer or actual use) effects are, that is, how much bang for the buck we get.  Perhaps even more importantly, scarcity requires that we learn how to triage well; see Richard Zorza’s posts here and the numerous entries in his own blog on this subject.  That means studying the effects of limited interventions.  Randomized trials provide critical information on these questions, even if one agrees (as we do) that in some settings, asking whether representation (offer or actual use) helps clients is like asking whether parachutes are useful.

Thus, perhaps the parachute analogy is inapt, or better, it requires clarification: we are in a world in which not all who could benefit from full-service parachutes can receive them.  Some will have to be provided with rickety parachutes, and some with little more than large blankets.  We all should try to change this situation as much as possible (thus the fervent hope we expressed in the paper that funding for legal services be increased).  But the oversubscription problem is simply enormous.  When there isn’t enough to go around, we need the knowledge that allows us to allocate well.  Meanwhile, randomized studies can also provide critical information on the pro se accessibility of an adjudicatory system, which can lay the groundwork for reform.

To Dave, we say that our enthusiasm for randomized studies is high, but perhaps not high enough to support a duty to randomize among law school clinics or among legal services providers.  We provided an example in the paper of a practice in which randomization was inappropriate because collecting outcomes might have exposed study subjects to deportation proceedings.  We also highlighted in the paper that in the case of a practice (including possibly a law school clinic) that focuses principally on systemic change, randomization of that practice is not constructive.  Instead, what should be done is a series of randomized studies of an alternative service provider’s practice in the same adjudicatory system; these alternative-provider studies can help to assess whether the first provider’s efforts at systemic change have been successful.

Our great thanks to both Kevin and Dave for writing, and (obviously) to Dave (and Jaya) for organizing this symposium.