Good to Evaluate Our Pet Notions

I’m delighted that Jim and Cassandra have conducted this study. We have accomplished many things in clinics, but we have not even scratched the surface of their potential to serve as sites of inquiry – what some of us have called “clinic lab offices” – to ask basic service delivery questions. Their literature review is a cautionary tale about what we know (or think we know) about the impact of representation. Their original research is pathbreaking, if controversial.

There is plenty to praise here, which I have done privately to the authors, but like Kevin I will focus on some of the challenges. In addition to the methodological limits of this study, the paper raises a lot of important questions that can only be answered through additional research, especially with respect to causal mechanisms. I’ve shared specific questions and suggestions in recent months, but my general view is that an organizational sociology/ethnography dimension – to generate more data about the administrative agency, the service provider, and this particular subset of clients – would inform this study (and others) by helping to account for otherwise non-obvious variables.

For example, with respect to the administrative agency, what incentives exist for ALJs to grant/deny benefits, especially to represented vs. unrepresented claimants? Do ALJs get punished/rewarded for false positives/negatives? My experience with agency adjudication is that there are often hidden nudges that operate much more powerfully than surface considerations (Social Security funds district offices based on the handling of some kinds of claims but not others; the VA rewards certain workers for productivity, but not accuracy, etc.). How has this venue changed over time in response to focused representation by HLAB? Maybe this is a story of triumph – HLAB representation has been so effective that unrepresented claimants now have a level playing field, or at least a user-friendly setting.

In empirical work, the best data (in statistically significant terms) are often generated by the most narrowly drawn question (by eliminating/controlling for other causal mechanisms). That is, we sacrifice breadth for depth, which is a version of the limits that have frustrated critics of this study. RCTs are the research gold standard, but socially complex relationships can rarely be reduced to quantitative assessments alone, and are often better understood through the application of mixed methods, including a thick qualitative description of what is motivating various actors within the system.

In 1967, referring to the HLS Community Legal Assistance Office established a year earlier and funded as a demonstration project by the federal Office of Economic Opportunity, Frank Michelman observed: “[W]e have undertaken to construct and demonstrate what we have been pleased to call a ‘model’ of a law-school-affiliated legal-services program. Stripped of pretension and reduced to practicality, what this means to me is that we are committed to a continuing effort to generate alternative methods, to put into operation whatever recommends itself to our objective appraisal, and to evaluate remorselessly our fondest pet notions.”

It’s taken more than 40 years, but Jim and Cassandra’s paper, with all its strengths and limits, is a testament to the potential and pitfalls of evaluating remorselessly our fondest pet notions. Hopefully the interest it has generated – and their forthcoming work, which I eagerly await – will spur others to undertake rigorous study of legal representation on behalf of the poor. The absence of such data impairs our ability both to allocate scarce resources effectively and to make the case persuasively to public and private funders for greater investments in the field.

1 Response

  1. Jim Greiner says:

    Hi, Jeff, thanks so much for writing, and for sharing these ideas with us (as you had before).

    We agree that an organizational sociology/ethnography dimension to the sort of randomized evaluations we’re pursuing would be helpful. There are a couple of challenges to consider, but none undercut your basic point that what you suggest is highly worthwhile.

    The first is that we may change the behavior of the system if we observe it in a very, very obvious way; this is the classic Heisenberg uncertainty problem (with apologies to the physicists for abusing this phrase). Our sense is that it may be easier for an adjudicatory system to “forget” an ongoing numbers-based evaluation that it does not “see” day to day than it is for that system to “forget” evaluators in its buildings who watch its operations.

    Second, the kind of study you’re recommending feels as though it would be at least as hard to conduct as the randomized evaluation we’re pursuing. Herb Kritzer’s book Legal Advocacy: Lawyers and Nonlawyers at Work shows how hard one version of this is to do. We do not find many of the quantitative conclusions in Herb’s work persuasive for the reasons discussed in our paper, but I at least (I can’t speak for Cassandra here) found the observational reporting powerful. The historical research you’re also suggesting would seem similarly difficult.

    Which brings us to the third point: are Cassandra and I well-suited to do this kind of work? My guess is that the kind of organizational investigation, and the kind of observation Herb did in his book, is no simpler to do well than a randomized evaluation. It definitely requires the cooperation of potentially reluctant or resistant governmental agencies. And it probably requires mastery of a certain, defined qualitative methodology; the fact that I don’t know whether such a methodology exists may itself say something about whether we are well-suited to pursue it. Cassandra and I can bring quant knowledge to the table, and I can add my background as a litigator. We would need to tread carefully if we undertook an exhaustive organizational sociology/ethnography of the type you suggest.

    Of course we think we can learn and do everything (who doesn’t?), but pursuing this might require an investment in learning investigative methodology as well as the time and effort to do any particular study. Is this an argument for additional co-authoring?

    Again, many thanks for writing!

    Jim & Cassandra