“Best Practices” for studies of legal aid – more thoughts

A few suggested additions to Richard Zorza’s proposed “best practices” for randomized study of legal services:

(1)    Remember the distinction between what is measurable and what is important. Broadly speaking, legal services programs are trying to increase access to justice (make voices heard), solve individual or group problems (win cases), and change the legal environment so that poor people’s lives will be better (change laws or systems).  Our work on these efforts is interwoven; for example, we take individual cases that will both provide access to the system and fix a person’s problem, and in that process we gain important information about what may be broken in the larger system and where solutions may lie.

So: a large-volume practice in an area of law that has a steady stream of comparable cases can be studied through randomization.  On the other hand, efforts to change laws or systems, and innovative start-up projects, must be evaluated through other means.

The corollary here is:  Be willing to publicly state, “This is a type of work that is susceptible to this research tool; there are other valuable types of work that must be studied with other tools.”

(2)    Be clear about what is (and isn’t) being studied. This may be a warning primarily aimed at the legal aid providers.  Over time, we will want to learn how much impact our scarce resources can have

  • in various areas of legal work
  • in different jurisdictions
  • for clients with different fact patterns, personal skills, age, linguistic abilities, mental health or physical characteristics
  • using a variety of different intervention levels and strategies (e.g., advice vs. limited representation vs. long-term representation)
  • and employing a variety of different personal advocacy skills (e.g., confrontational vs. compromising, high-level listening skills vs. high-level speaking skills).

We will need patience and persistence.  Over time our services will be enhanced by exploring all of these questions (and more!).  But we will get garbage results if we try to do everything at once.

The Greiner and Pattanayak HLAB study, and all the commentary in this symposium, illuminate how much work we have to do.  Did the Harvard students have no impact?  (One commentator disagrees, based on the data.)  Could a change in client selection enhance the impact for the clients served?  A change in case strategy?  A change in law student advocacy style or skills?

We are so early in this learning process that for now, each study will primarily highlight the next set of questions to be asked.

(3)    Be aware of the costs of measurement.

Measurement takes time. When we say “legal aid to the poor is a scarce resource,” we mean that there are nowhere near enough people-hours to do all that we know justice requires.  Planning and carrying out a useful measurement (a “next step” in the learning process described above) takes time away from other activities.  We will have to think through, design, and set up the study.  We will have to explain to staff, to the communities we serve, and to funders what we are doing and why.  We will be spending that much less time serving clients or raising money to serve clients.

At certain points, measurement may arm opponents of legal services. Others have remarked on this; as someone who has done a lot of work to present the case for legal services, I’ll just say that the danger is real but that it should not be over-emphasized.  People who don’t like legal services to the poor will use data against us when they can.  But our genuine effort to maximize the impact of scarce resources will encourage our supporters.  And we need to remember that data is only one of the types of description we should be providing about legal services.  The individual stories of our clients and the testimonials of the bar, the bench, and community supporters are all part of the larger message.  Data is an important part – but only a part – of that broader message.

Similarly, measurement may over-emphasize aspects of the work that can be measured.  This is a cost of measurement, but one that can also be countered.  As discussed above, it is quite important for everyone involved in this endeavor to keep in mind that while randomized study may teach us important things about how best to serve clients, it does not follow that the only things important to clients are those which can be (or have been) measured.

(4)    Be clear that even findings of “no distinction between groups” are not necessarily findings of “no effect.” Two examples to illustrate this point:

First, imagine a hypothetical study of a legal aid program – half the eligible clients are randomly turned away.  Now assume that all of the clients “turned away” have on their own applied for and gotten assistance from a second legal aid program.  While designed as a study of the first legal aid program, in all practical terms this has now become a comparison study of two legal aid programs.  If the two programs provide identical assistance, clients in the study program would see no benefit compared to the clients turned away.  But if in fact people outside the study, unrepresented by either program, do much worse, there remains a real effect of the study program’s services that is not measured by the study.  (To be clear – this example is brought to mind by aspects of the Greiner and Pattanayak study, in which some clients turned away received other assistance, but it is not an accurate description of that study’s participants – it is just a hypothetical to illustrate that a control group is not necessarily representative of the broadest class.)
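To make the first hypothetical concrete, here is a minimal simulation sketch in Python. The success rates (`P_NO_HELP`, `P_WITH_HELP`) and the `outcome` helper are entirely invented for illustration; nothing here is drawn from the actual study. With everyone in both arms represented by one program or the other, the randomized comparison comes out flat even though representation roughly doubles the assumed chance of prevailing:

```python
# Minimal sketch (hypothetical rates, not data from any real study) of how a
# "contaminated" control group can hide a real effect of representation.
import random

random.seed(0)

N = 10_000            # eligible clients; half randomized to each arm
P_NO_HELP = 0.30      # assumed success rate with no legal help at all
P_WITH_HELP = 0.60    # assumed success rate with help from either program

def outcome(represented: bool) -> int:
    """Return 1 if the client prevails, 0 otherwise, under the assumed rates."""
    p = P_WITH_HELP if represented else P_NO_HELP
    return 1 if random.random() < p else 0

def rate(xs):
    return sum(xs) / len(xs)

# Arm 1: offered help by the study program (and represented by it).
study_arm = [outcome(represented=True) for _ in range(N // 2)]

# Arm 2: "turned away" -- but in this hypothetical every one of them obtains
# identical help from a second program, so they end up represented too.
turned_away_arm = [outcome(represented=True) for _ in range(N // 2)]

# What a truly unrepresented group would look like (never observed in the study).
unrepresented = [outcome(represented=False) for _ in range(N // 2)]

print(f"study arm success rate:        {rate(study_arm):.2f}")
print(f"'turned away' arm success:     {rate(turned_away_arm):.2f}")  # about the same
print(f"hypothetical no-help success:  {rate(unrepresented):.2f}")    # much lower
```

Run as written, the two randomized arms land near 0.60 while the never-observed no-help group lands near 0.30: the study measures the difference between the two programs, not the value of representation itself.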

Second, take the very real world of housing courts in Connecticut.  I am told by colleagues that 35 or 40 years ago, before there was a broad legal aid presence in housing courts, landlords routinely ejected poor people without following the laws.  When legal aid started a high-volume housing practice, legal aid lawyers stopped landlords from locking people out without process, stopped courts from evicting poor people who had a right to stay, and in some cases got money from landlords for violations of the law.  Landlords are now much less likely to illegally eject tenants; a study conducted now might find little difference in “ability to stay” between tenants who have a lawyer and tenants who don’t, because the landlord doesn’t know who has (or will have) a lawyer.  But this lack of randomized difference would not necessarily mean that the continued housing practice is not having an impact.  If legal aid completely stopped representing tenants, it’s likely that illegal practices by landlords would re-emerge.
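A rough sketch of the same point, again with made-up numbers: in the hypothetical `lockout_rate` helper below, the assumed lockout-attempt rate depends only on whether legal aid maintains a broad housing presence in the system, not on any individual tenant's representation, so the represented and unrepresented groups look alike while the no-legal-aid counterfactual looks very different:

```python
# Minimal sketch (made-up numbers) of a system-wide deterrence effect that a
# tenant-by-tenant comparison would miss entirely.
import random

random.seed(1)
N = 10_000  # tenants per group

def lockout_rate(legal_aid_active: bool, tenants: int) -> float:
    """Fraction of tenants a landlord tries to lock out illegally.

    Landlords don't know in advance which tenant will get a lawyer, so the
    assumed attempt rate depends only on whether legal aid maintains a broad
    housing practice in the system, not on the individual tenant.
    """
    p_attempt = 0.05 if legal_aid_active else 0.40  # assumed deterrence effect
    return sum(random.random() < p_attempt for _ in range(tenants)) / tenants

# Today's regime: legal aid has a broad housing presence.
with_lawyer = lockout_rate(legal_aid_active=True, tenants=N)
without_lawyer = lockout_rate(legal_aid_active=True, tenants=N)

# Counterfactual regime: legal aid stops representing tenants entirely.
no_legal_aid = lockout_rate(legal_aid_active=False, tenants=N)

print(f"represented tenants locked out:   {with_lawyer:.2f}")
print(f"unrepresented tenants locked out: {without_lawyer:.2f}")  # about the same
print(f"if legal aid disappeared:         {no_legal_aid:.2f}")    # much higher
```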

(5)    Be willing to publicly and forcefully debunk misleading uses of your data. This is a plea from those “in the trenches” to those in academia:  when your data is misused in a manner that could harm support for legal aid to the poor, the protestations of legal aid providers may not be believed by those hearing the debate.  After all, we are not economists or statisticians, and we have a vested interest in the outcome.  The academics will be the credible voice to publicly tell funders and government decision-makers, “Those opponents of legal services are misrepresenting the truth when they say that this study suggests that poor people don’t need, or shouldn’t get, a lawyer.  Indeed, we engage in this research because we believe that by studying legal services to the poor we can help this small and dedicated group be as effective as it can be, for people who are desperately in need of that help.”

4 Responses

  1. dave hoffman says:

    This is a really terrific post!

  2. Jim Greiner says:

    Again, Amen!

  3. Richard Zorza says:

    Good analysis. In terms of the impact/volume analysis, the really interesting question is this: What is the volume of cases that you need to handle within a system to keep it as honest and open as possible, and how do you handle them to do so? There are many strategies beyond handling every case.

    On the other hand, one needs to be careful that the system does not respond to intervention in some cases either (a) by surrendering in those cases, but only those cases, or (b) by expecting a much higher standard of evidence/proof when there is an attorney, so that the attorneys get no better results. (I’d like to see this theory tested as an explanation of what happened in this research.)

  4. Jeanne Charn says:

    Great post. Particularly those of us who are eager for a substantial research program must become more sophisticated about how to interpret, qualify and defend research results – also how to frame questions that social science experts can help us answer.