What Difference Presentation?

David Udell is the Executive Director of the National Center for Access to Justice and a Visiting Professor from Practice at Cardozo Law School.

In my line of work, I have seen many efforts in the political realm to shut down civil legal services for the poor, and have continually worked to combat such efforts.  In 1996, when the Gingrich Congress barred federally funded legal services lawyers from bringing class actions on behalf of the poor, I left Legal Services for the Elderly in order to finish a lawsuit on behalf of widows and widowers who were suing to compel the United States Treasury to fix its practices for replacing stolen Social Security payments.  When I later moved to the Brennan Center for Justice, I helped bring a lawsuit against the rules that barred legal services lawyers from participating in such class actions, I filed another lawsuit against similar rules that barred law school clinic students from bringing environmental justice cases in Louisiana, and I built a Justice Program at the Brennan Center dedicated to countering such attacks on the poor and on their lawyers.

In their March 3, 2011 draft report, What Difference Representation? Offers, Actual Use, and the Need for Randomization (“the Study”), authors D. James Greiner & Cassandra Wolos Pattanayak are right about the importance of developing a solid evidence base – one founded on methodologies that include randomization – to establish what works in ensuring access to justice for people with civil legal cases.  They are right again that in the absence of such evidence, both the legal aid community and its critics are accustomed to relying on less solid data.  And they are smart to “caution against both over- and under-generalization of these study results.”  But, unfortunately, the bare exhortation to avoid over- and under-generalization is not sufficient in the highly politicized context of legal services.

While the authors obviously do not have any obligation to arrive at a particular result, they can be expected to recognize a need to avoid statements that have a high probability of misleading, especially in light of the likely inability of much of the Study’s audience to understand the authors’ methodology and findings.  In fact, because of the Study’s novelty and its appearance in a non-scientific journal, it will be relied on to analyze situations where it does not apply, and by people who have no background in social science research; it will also be given disproportionate weight because so few comparable studies exist against which to judge it.  It is these factors, in combination with the politicization of legal services, that make it crucial that the authors’ assertions, particularly in the sections most likely to be seen by lay readers (the title and the abstract), do not extend beyond what the findings justify.

So, it is a cause for concern that the Study leads with dramatic headlines and minimizes essential caveats – most significantly, omitting the authors’ own important acknowledgement that the data could not support any useful empirical conclusion about the effect of actual representation (as distinct from an offer of it).  Thus, the title still leads with the phrase “What Difference Representation?,” while the abstract declares the findings “unexpected” and then goes on to state:

  • “a service provider’s offer of representation to a claimant had no statistically significant effect on the claimant’s probability of a victory.”
  • “the offer of representation inflicted a harm upon such claimants [. . .] with no discernible increase in the probability of a favorable outcome.”
  • “within the limits of statistical uncertainty, these claimants would have been better off without the offer of representation.”

By leading with the question “What Difference Representation?,” and by employing alarming phrases such as “no … effect,” “inflicted harm,” and “better off without an offer of representation,” the title and abstract guide readers to conclusions that reach beyond the underlying data.  The abstract offers caveats, but only in the form of opaque technical phrases such as “statistically significant,” “no discernible increase in the probability,” and “statistical uncertainty.”  This is an approach that invites misuse.

I do not exaggerate the risk.  An overview of the politics surrounding the federally funded Legal Services Corporation (LSC), published just two weeks ago in the National Law Journal, describes recurring efforts to defund the nation’s flagship legal services institution.  One piece carries the title, “For LSC, a 30-year funding rollercoaster; Throughout most of its history, the agency has been a political football, periodically the target of massive cuts.”  Another leads with, “Looking for allies in Congress, and finding few.”  Indeed, for decades, we have seen the legal services community compelled to respond to allegations that on investigation prove trumped up or spurious.  A recent example is Disappointing Reporting on Legal Services, in which the Legal Aid Society of the District of Columbia undertakes to rebut an “exposé” circulated on the internet that falsely sought to tar all legal services programs based on a single instance involving a single employee’s misconduct.

But, more specifically, in a Brief submitted this past month to the United States Supreme Court, opponents of legal services cited the “What Difference” study for the proposition that legal representation doesn’t matter, urging:  “A recent randomized, controlled Harvard study of simple, nonjury litigation found no significant difference in success rates between litigants who were offered legal representation and those who were not.”  In its haste to attack legal representation, the Brief omits to mention that the Study:  i) contains no empirically useful finding regarding the efficacy of actual representation (as distinct from offers of representation), ii) examined administrative advocacy, not court litigation, and, iii) evaluated law students, not lawyers.  Nor does the Brief mention that (as described in greater detail below):  i) the control group for the Study included many people (apparently 49% of the group) who ultimately received actual representation from “other service providers,” ii) those “other service providers” presumably possessed greater experience and greater training than the law students, and iii) the Study included cases without regard to whether representation was expected to make a difference in their outcomes.  Finally, the Brief omits to mention the Study’s explicit admission that “It would be a mistake to over-generalize the results of our study to conclude that offering free legal assistance is not worth the cost or time, or even that offers of representation make no difference in Massachusetts first-level appeals.”  Study at 47.

But, the Study’s presentation isn’t likely to confuse only the enemies of legal services.  Even as esteemed a thought leader as Ian Ayres omits mention of some of the Study’s limitations in an essay he published this past winter, Iatrogenic Legal Assistance, in the on-line forum, Freakonomics.  Describing the Study’s primary finding as “The claimants who were offered representation were no more (or less) likely to win their administrative appeal,” Ayres (who flagged several of the Study’s limitations, including a concern about the small size of the pool of subjects) does not include the authors’ acknowledgment that the Study failed to make any empirically useful finding that actual representation (as distinct from offers of representation) has any effect on a claimant’s probability of a win.  He also makes no mention of the fact that the “other service providers” who represented members of the control group should be presumed to possess deeper experience and training than the law students.  Nor does he mention that the Study included cases without regard to whether representation would be expected to make a difference in their outcomes.  Like the authors of the Supreme Court Brief, Ayres also does not mention the Study’s acknowledgment that “It would be a mistake to over-generalize the results of our Study to conclude that offering free legal assistance is not worth the cost or time, or even that offers of representation make no difference in Massachusetts first-level appeals.”

Although many of the Study’s limitations were omitted from the Supreme Court Brief and from the Ayres essay, and although virtually all of them are omitted from the Study’s abstract, for the most part they are articulated in the body of the Study:

  1. The “mixed control group” problem – Contrary to what is implied by the abstract, the Study compared outcomes obtained by members of a “treatment group,” who received offers of representation from HLAB law students, to outcomes obtained by members of a “control group,” 49% of whom obtained offers of representation (and actual representation) elsewhere (from advocates with presumably greater experience and greater training, see discussion below), and 51% of whom obtained no offers (and no representation).  The fact that 49% of the members of the control group received representation potentially biases the control group’s outcomes upward (as if, in a medical trial, 49% of the control group took the same medicine administered to the treatment group), thereby making the impact of HLAB offers of representation more difficult to discern in relation to the full control group.  Indeed, some members of the unrepresented 51% of the control group may also have received certain limited forms of legal help, such as “legal advice” or “brief assistance,” further biasing the control group’s performance upward.  The Study acknowledges the mixed control group issue (see, e.g., pp. 11, 41), but rejects, as “implausible,” the theory that it prevented detection of the impact of the HLAB offers of representation (see p. 45).  In fact, the authors’ assertion of implausibility appears unwarranted in light of the “experience gap” problem (discussed in greater detail below).  But, regardless of whether the authors are persuaded that the presence of represented persons in the control group skewed the results upward, the abstract should let readers know the facts.
To prevent readers from being misled about what was studied, the authors should simply modify the abstract to make clear that:  “the HLAB law student service provider’s offer of representation to a claimant had no statistically significant effect on the claimant’s probability of a victory when compared to a control group in which 49% of the members received representation from other service providers and in which the remainder of the members may have received other forms of assistance.”
  2. The experience gap problem – The abstract omits all mention of the experience gap that exists between the HLAB students and the “other service providers” who represented members of the control group.  Although the Study explicitly rejects, as “implausible,” the notion that the HLAB “lawyering” was “low quality,” (see p. 45), “low quality” isn’t the relevant issue.  Rather, the Study fails to acknowledge that the “other service providers” who represented 49% of the members of the control group presumably possessed greater experience and greater training than the HLAB students (some of whom were handling their first case), and that this discrepancy with respect to experience and training may have biased the performance of the control group upward, with respect to both the probability of a claimant obtaining a favorable outcome and the speed with which a favorable outcome is obtained.  This experience gap may thus be expected to conceal the effectiveness of the HLAB students’ performance while highlighting any delay caused by the HLAB students’ performance.  To prevent readers from being misled about what was studied, the authors should modify the abstract to make clear that “the HLAB law student service provider’s offer of representation to a claimant had no statistically significant effect on the claimant’s probability of a victory in a study in which a portion of the control group (49%) received representation from other service providers who should be presumed to possess greater experience and training than the HLAB law students, and in which the remainder of the members may have received other forms of assistance.”
  3. The “screening for merit” problem – Another limitation that is missing from the abstract is that subjects were selected for inclusion in the Study without regard to whether advocacy would be expected to make a difference in the probability of their obtaining a victory.  The authors describe this problem as worthy of further inquiry (which they are pursuing) and explicitly acknowledge that:  “One might object, however, that [the current study’s] design cannot capture the effect of representation because one of the tasks attorneys, particularly legal aid attorneys, perform is to choose which cases will benefit from representation, and the randomizer prevents them from exercising their judgment in this manner.”  Study at 72.  To prevent readers from being misled about what was studied, the authors should modify the abstract to make clear that:  “Subjects were included in the Study without investigation as to whether their cases could benefit from representation by an advocate.”
  4. The “what is statistically significant” problem – The abstract relies on technical concepts of statistical significance, statistical uncertainty, and discernibility, but contains no definition of these concepts, thereby giving the lay reader no tools to counteract the abstract’s direct message that a service provider’s offer of representation is ineffective, harmful, and best avoided.  It is therefore interesting to see, deeper in the Study, the following clarification:

This finding [of “no statistically significant effect on the probability that a claimant would prevail”] does not mean that we know that the HLAB offer had no positive effect on a claimant’s probability of success; we can say, however, that the [sic] any such effect is unlikely to be very large (or the data probably would have shown it).

Study, at 8; see also Study at 45 (repeating the clarification).  Moreover, the Study explicitly acknowledges that the data are “useless” with respect to the question of whether HLAB “actual representations” increase a claimant’s probability of a victory (as distinct from HLAB “offers”).  See Study, at 43.  The Study also explicitly acknowledges, as noted above, that “It would be a mistake to overgeneralize the results of our study to conclude that offering free legal assistance is not worth the cost or time, or even that offers of representation make no difference in Massachusetts first-level appeals.”  Study at 47.  To prevent readers from being misled about the Study’s findings, the authors should modify the abstract to make clear that:  “The Study does not find ‘that the HLAB offer had no positive effect’ or ‘that offers of representation make no difference,’ nor does it contain any useful findings regarding the possible effect of actual representation.”
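The “mixed control group” dilution described in point 1, and the significance question raised in point 4, can be made concrete with a back-of-the-envelope calculation.  In the sketch below, only the 49% figure comes from the Study; every other number (win rates, offer take-up, sample size) is a hypothetical assumption chosen purely for illustration, not a reconstruction of the Study’s actual data.

```python
import math

# Illustration (hypothetical numbers): when 49% of the control group obtains
# representation elsewhere, the measurable effect of an *offer* is much
# smaller than the effect of representation itself, and can fall short of
# statistical significance even if representation genuinely helps.

P_WIN_REPRESENTED   = 0.65   # assumed win rate with representation
P_WIN_UNREPRESENTED = 0.50   # assumed win rate without representation
TAKEUP_TREATMENT    = 0.90   # assumed share of offerees who use the offer
REP_IN_CONTROL      = 0.49   # share of control group represented elsewhere (the Study's figure)
N_PER_ARM           = 250    # assumed number of claimants per arm

def win_rate(rep_share: float) -> float:
    """Expected win rate for a group in which `rep_share` are represented."""
    return rep_share * P_WIN_REPRESENTED + (1 - rep_share) * P_WIN_UNREPRESENTED

p_treat = win_rate(TAKEUP_TREATMENT)   # treatment arm's expected win rate
p_ctrl  = win_rate(REP_IN_CONTROL)     # control arm's expected win rate

true_effect    = P_WIN_REPRESENTED - P_WIN_UNREPRESENTED  # effect of representation
diluted_effect = p_treat - p_ctrl                         # observable effect of an offer

# Two-proportion z-test: could the diluted offer effect be detected?
se = math.sqrt(p_treat * (1 - p_treat) / N_PER_ARM +
               p_ctrl  * (1 - p_ctrl)  / N_PER_ARM)
z = diluted_effect / se

print(f"assumed effect of representation : {true_effect:.3f}")
print(f"observable effect of an offer    : {diluted_effect:.4f}")
print(f"z statistic                      : {z:.2f}  (|z| < 1.96 => not significant at 0.05)")
```

Under these assumed numbers, a genuine 15-point representation effect shrinks to a roughly 6-point offer effect, which a two-proportion z-test on 250 claimants per arm cannot distinguish from zero at the conventional 0.05 level – precisely the sense in which “no statistically significant effect” differs from “no effect.”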

As I hope is evident, I direct my comments primarily to issues concerning the accuracy of the presentation of the authors’ findings in the title and in the abstract, and leave issues concerning the accuracy of the findings themselves to the empirical experts in this on-line symposium.  I commend the authors for tackling very important questions, highlighting randomization methodology, acknowledging limitations on their findings, and urging readers neither to over- nor under-generalize their findings.  But in light of the politicization of legal services, and as the Brief and the Ayres column make plain, readers will tend to overlook the Study’s limitations, including those acknowledged by the authors themselves in the body of the Study.  Of course the authors are not entirely accountable for choices others may make about how to use their Study.  But neither would it be responsible for the authors to decline to take easy corrective steps to ensure that their title and abstract describe the Study for what it is rather than for what it is not.  One point is beyond dispute:  “What Difference Representation?” poses a more challenging problem than “What Difference Presentation?”
