

Greiner and Pattanayak: The Sequel

In a draft essay, Service Delivery, Resource Allocation and Access to Justice: Greiner and Pattanayak and the Research Imperative, Tony Alfieri, Jeanne Charn, Steve Wizner, and I reflect on Jim Greiner and Cassandra Pattanayak’s provocative article reporting the results of a randomized controlled trial evaluating legal assistance to low-income clients at the Harvard Legal Aid Bureau. (The Greiner and Pattanayak article was the subject of a Concurring Opinions symposium last March.) Studying the outcomes of appeals from initial denials of unemployment insurance benefit claims, Greiner and Pattanayak asked, what difference does legal representation make? Their answer is that “an offer of HLAB representation had no statistically significant effect on the probability that a claimant would prevail, but that the offer did delay the adjudicatory process.” That is, not only was an offer of legal assistance immaterial to the case outcome, it may have harmed clients’ interests.

The Greiner and Pattanayak findings challenge our intuition, experience, and deeply held professional belief that lawyer representation of indigent clients in civil matters is fundamental to the pursuit of justice. Our first reaction is that the study must have fatal conceptual or methodological flaws – that the researchers studied the wrong thing in the wrong way. Even when we learn that the study is credible and well designed, we doubt that this kind of research is a worthwhile use of our time or money relative to serving needy clients. Finally, and perhaps most importantly, we worry that the published results will serve only as fodder for the decades-long political assault on legal services for the poor.

If replicated across venues, however, studies like Greiner and Pattanayak’s can tell us a great deal about individual representation, program design and systemic access to justice questions. In fact, we cannot make genuine progress in any of these areas – much less marshal the case for more robust legal aid investments and the right to counsel in some civil cases – without better evidence of when, where and for whom representation makes a difference. Fortunately, developments in law schools, the professions and a growing demand for evidence-driven policymaking provide support, infrastructure and incentive for such research. For these reasons, we urge legal services lawyers and clinical law professors to collaborate in an expansive, empirical research agenda.



What can we learn if we assume Greiner and Pattanayak are right?

When legal aid providers read “What Difference Representation? Offers, Actual Use, and the Need for Randomization,” we immediately start to raise questions.  Appropriately, we note that there is a vast difference between a busy law student handling what may be their first case and an experienced professional legal aid lawyer.  We note that, apparently, a significant number of the people randomly turned away by the Harvard Law School clinic were then advised or represented by Greater Boston Legal Services.

There is also a broader question, which I will explore in a subsequent post:  what is the larger context for randomized study of the impact of legal aid?  What kinds of things can randomized studies teach us, and which impact questions can't be answered through randomization?

As others have written, Greiner and Pattanayak may not be right, or their conclusions may be overstated or unfounded.  But legal aid providers can have important conversations that start here:  “What if Greiner and Pattanayak are right?” What would it mean if Harvard law students offering representation to random low-income applicants for unemployment compensation are not increasing the number of people getting benefits, and may even be slowing down receipt of benefits for those who win?

Another way to ask this question is this:  What does it mean that under some sets of circumstances, offers of legal aid don’t help people?

Here are my answers:

(1) Outreach, client-friendly intake, and supportive client services are crucial to maximizing the impact of legal aid to the poor.

Of the low-income people who might seek help from the Harvard Legal Aid Bureau (which is a student clinic) or any of the professional legal aid agencies, it is very likely that some could handle their legal problems adequately, or even well, without a law student or lawyer.

On the other hand, there is certainly a large set of people who cannot possibly handle their cases adequately on their own.  There are many, many low-income people who cannot read, write, or speak coherently; who live with severe mental health problems; whose only language is not supported in the relevant adjudicative setting; whose mental or physical health or destitution prevents them from even appearing at the adjudicative setting; or who face other barriers to successful litigation without representation.

Right or wrong, the Greiner and Pattanayak article reminds me that it is crucial for legal aid agencies to:

  • Identify which, of the millions of low-income people in crisis, are least able to resolve their legal issues on their own (and yes, this is a question ripe for further study);
  • Ensure that these “most-in-need” people know how to access our services (or that social service agency staff or others in contact with them know how to reach us);
  • Ensure that our intake systems (intended to be “triage” systems) effectively identify the “most-in-need” clients; and
  • Ensure that our services include, or are integrated with, support systems for clients who cannot otherwise take advantage of the legal help we offer because they are afraid, confused, overwhelmed, or otherwise hard to serve.


(2) We need continued research, training and supervision to maximize use of best (most effective) practices.

The fact that Greiner and Pattanayak studied offers of services by law students provides a sharp reminder that effectiveness can vary widely among different providers of legal help.  Anyone who has watched a series of cases in court has seen that some lawyers have more impact on the judge than others.  Similarly, there is variance in how well lawyers organize their work, gather facts, and research and present their cases.

In the world of elementary school teaching, the documenting and debating of best practices is well underway.  Teach Like A Champion, by Doug Lemov, is an attempt to turn research into a set of best practices for teachers.  The criticisms of the research will be familiar, including questions about whether it asked the right questions or included the right samples.  But the fundamental effort is right — in any area of legal work, our effectiveness will be driven in part by whether we use the right strategies and techniques.  The legal aid community works hard to deploy experience-based training toward best practices, but there has been only limited formal study comparing available techniques and strategies for serving clients.  Perhaps further randomized or other outcome research can help us better identify the strategies and techniques that will maximize impact for our clients.

(3) Improving an adjudicative system can increase the number of people for whom we have little impact — and that’s a good outcome!

I have heard from colleagues in Massachusetts that, some years back, the unemployment compensation system was complicated and nearly impossible for non-lawyers to navigate.  Reform efforts by lawyers at Greater Boston Legal Services, the Massachusetts Law Reform Institute, and others took lessons learned from individual representation in the unemployment system and turned them into systems reform advocacy.  Over the years, the system has become more and more accessible to people representing themselves without a lawyer.

Efforts like this, in various areas of client legal need, have been repeated by legal aid programs across the country.  We fervently hope that some people can achieve justice without a lawyer, because we know that the very limited number of legal aid lawyers in the country is inadequate to serve more than a fraction of those in need.  Systems advocacy is an essential task, because its success will expand the number of people who truly can achieve equal justice without the offer of a lawyer.


What was the question? Or, scholarly conventions and how they matter.

Different fields of scholarship have different conventions. Those of us who participate in multiple scholarly worlds have likely had experiences leading us to believe that some conventions are useful and worthwhile, while others are pointless or actively harmful. Whether we like specific conventions or not, though, we have to play along with them if we want to contribute to the scholarly conversations where these conventions rule.

Professor Greiner and Ms. Pattanayak (hereinafter G&P) elected to publish their empirical research in a top traditional law review. Law reviews have their own peculiar conventions that differ sharply from the peculiar conventions of peer-reviewed journals in fields like statistics, sociology, law and society, and political science. Because G&P made this choice, their article is different from what it would have been had they written for another kind of publication venue. I would like to focus on one convention of writing for peer-reviewed social science journals that law reviews typically disregard, and to draw out one consequence of this disregard.

By convention, a social scientific article starts with a literature review covering the prior work on the topic of study. The point of this exercise is to explain to the reader the significance to the field of the new empirical research about to be presented. A good literature review acts as a wind-up for the paper’s own research: it gets the reader interested and motivates the paper by showing that the study she is about to read fills a big intellectual gap, resolves an important puzzle, or is incredibly innovative and cool. Thus primed, the reader eagerly consumes the study’s findings with a contextualized understanding of their significance.

G&P’s paper inverts this usual ordering, presenting their study first and following with a literature review that motivates their call for more studies like their own. Does this reversal matter? I think so: it invites confusion between G&P’s empirical question and the question at the center of much of the extant research literature and of the policy debates about the impact of counsel.

G&P’s study investigates the impact of offers of representation by law students. The research literature has been trying to answer a slightly but importantly different question: what is the impact of actual representation by advocates?

As I show in an article creeping slowly through peer review, 40 years of empirical studies try to uncover evidence of whether and how different kinds of representatives affect the conduct and outcomes of trials and hearings. Some of the studies in this literature are able to compare the outcomes received by people represented by fully qualified attorneys to those received by lay people appearing unrepresented, while other studies compare the work of lawyers to other kinds of advocates who are not legally qualified (including law students). Another group of these studies lumps all sorts of advocates together, comparing groups of unrepresented lay people to groups of people represented by lawyers, social workers, union representatives, and other kinds of advocates permitted to appear in particular fora.

G&P rightly criticize these older studies for what we would today call methodological flaws, and I heartily endorse their call for better empirical research into the impact of counsel. But, not only are they and the older participants in the scholarly conversation using different methods, they are asking different questions. As G&P tell us themselves, they can’t answer the question that motivated 40 years of research, as they can come to “no firm conclusion on the actual use of representation on win/loss” (2). If their article had reviewed the literature before it presented their findings, they likely would have had a harder time asserting to the reader that “the effect of the actual use of representation is the less interesting question” (39-40).

G&P’s empirical question is also slightly to the side of the empirical question arguably at the center of contemporary policy discussions. These often turn on when lawyers specifically are necessary, and when people can receive similar outcomes with non-lawyer advocates or with different forms of “self-help” (information and assistance short of representation, sometimes including and sometimes excluding legal advice). The comparative effectiveness of alternative potential services is a central question in evidence-based policy, and the way the access to justice discussion is conducted today places at the center the question of when attorneys are necessary advocates.

G&P are absolutely right that, if we wish to fully understand any program’s impact on the public, we need information about uptake by that public. Randomizing offers of law students’ services tells us something useful and important, but something different from randomizing the actual use of lawyer representation. As a matter of research design, randomizing use is a more challenging task; identifying the impact of use is quite hard to do, but it is still interesting and important. We learn a lot from this article, and we stand to learn more, as the present piece is the first in a series of randomized trials.


What Difference Representation: Randomization, Power, and Replication

I’d like to thank Dave and Jaya for inviting me to participate in this symposium, and I’d also like to thank Jim and Cassandra (hereafter “the authors”) for their terrific paper.

This paper exhibits all the features of good empirical work.  It’s motivated by an important substantive question with policy implications.  The authors use a precise research design to answer the question: to what extent does an offer of representation affect outcomes?  The statistical analysis is careful and concise, and the conclusions drawn from the study are appropriately caveated.  Indeed, this law review article might just be the most-caveated one ever published!  I’m interested to hear from the critics, and to join the dialogue about the explanation of the findings and the implications for legal services work.  In my initial comments about the paper, I’ll make three observations about the study.

First, randomization is key to successful program evaluation.  Randomization guards against all sorts of confounders, including those that are impossible to anticipate ex ante or control for ex post.  This is the real strength of this study.  A corollary is that observational program evaluation studies can rarely be trusted; indeed, even with very fancy statistics, estimating causal effects from observational data is really difficult.  It’s important to note that different research questions will require different randomizations.

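To make the confounding point concrete, here is a toy simulation (all numbers illustrative, not the study’s data): a hidden factor, call it case “merit,” drives both who wins and, in the observational world, who seeks representation. The naive observational comparison then shows a spurious benefit even though representation has no true effect, while the randomized comparison does not.

```python
import random
from math import exp

random.seed(1)
n = 100_000  # hypothetical claimants

def sigmoid(x):
    return 1 / (1 + exp(-x))

wins, represented, assigned = [], [], []
for _ in range(n):
    merit = random.gauss(0, 1)                            # hidden confounder
    wins.append(random.random() < sigmoid(merit))         # winning tracks merit only:
                                                          # representation has NO true effect
    represented.append(random.random() < sigmoid(merit))  # seeking help also tracks merit
    assigned.append(random.random() < 0.5)                # randomized: a pure coin flip

def win_rate(outcomes, group, flag):
    picked = [w for w, g in zip(outcomes, group) if g == flag]
    return sum(picked) / len(picked)

# Observational comparison: represented claimants look much more successful.
obs_gap = win_rate(wins, represented, True) - win_rate(wins, represented, False)
# Randomized comparison: assignment is independent of merit, so the gap vanishes.
rand_gap = win_rate(wins, assigned, True) - win_rate(wins, assigned, False)

print(f"observational gap: {obs_gap:+.3f}")  # sizeable spurious 'effect'
print(f"randomized gap:    {rand_gap:+.3f}")  # near zero
```

The observational gap here is entirely an artifact of selection; only the coin flip isolates the (null) effect of the offer itself.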
Second, the core empirical result with regard to litigation success is that there is not a statistically significant difference between those offered representation by HLAB and those who were not.  The authors write: “[a]t a minimum, any effect due to the HLAB offer is likely to be small” (p. 29).  I’d like to know how small.  Here’s why.  It’s always hard to know what to make of null findings.  Anytime an effect is “statistically insignificant,” one of two things is true: there really isn’t a difference between the treatment and control groups, or the difference is so small that it cannot be detected with the statistical model employed.  Given the sample size and win rates around 70%, how small a difference would the Fisher test be able to detect?  We might not all agree on what makes a “big” or “small” difference, but some additional power analysis would tell us a lot about what these tools could possibly detect.

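As a sketch of the kind of power analysis being requested, the following uses the standard normal approximation to a two-proportion test (Fisher’s exact test is somewhat more conservative, so its detectable differences would be a bit larger). The 100-claimants-per-group figure and the 70% baseline are hypothetical stand-ins, not the study’s actual numbers.

```python
from math import sqrt, erf

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def two_prop_power(n_per_group, p1, p2):
    """Approximate power of a two-sided 5% two-proportion test
    (normal approximation; Fisher's exact test is a bit more conservative)."""
    z_crit = 1.96                                      # two-sided alpha = 0.05
    p_bar = (p1 + p2) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)
    se_alt = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return norm_cdf((abs(p2 - p1) - z_crit * se_null) / se_alt)

# Scan for the minimum detectable difference at 80% power, assuming a
# hypothetical 100 claimants per group and a 70% baseline win rate.
n, base = 100, 0.70
diff = 0.01
while two_prop_power(n, base, base + diff) < 0.80:
    diff += 0.01
print(f"minimum detectable difference at 80% power: ~{diff:.2f}")
```

A scan like this answers the “how small” question directly: it reports the smallest win-rate difference the design could reliably detect, and effects below that threshold simply cannot be distinguished from zero with the assumed sample size.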
Finally, if we truly care about legal services and the efficacy of legal representation, this study needs to be replicated in other courts, other areas of law, and with different legal aid organizations.  Only rigorous program evaluation of this type can allow us to answer the core research question.  Of course, the core research question isn’t the only thing of interest.  The authors spend a lot of time discussing different explanations for the findings, and determining which of these explanations is correct will go a long way in guiding the practical take-away from the study.  Sorting out those explanations will require additional, and different, studies.  I spend a lot of time writing on judicial decisionmaking.  My money is on the idea that ALJs behave differently with pro se parties in front of them.  But this study doesn’t allow us to determine which explanation for the core findings is correct.  That doesn’t diminish the importance or quality of the work; it’s a known (and disclosed) limitation that leads us to the next set of studies to undertake.