Category: Symposium (What Difference Representation)


Some Initial Thoughts on The Offer of Representation Study — Designing a 100% Access System

Let me suggest that this study, regardless of, or perhaps because of, its controversial nature, will be looked back at as a critical event in the history of access to justice.  Context is, of course, all, not only in understanding the data reported in the study, but also in assessing its overall meaning and impact, and in discussing the future directions it should lead to.

For me, the key context is that this is a time in which there is a broad national consensus, at least among national constituency organizations, about what is needed to achieve access (court simplification and services, bar flexibility, and legal aid efficiency and resources), but also a lack of political will to move that consensus forward in the broader political arena.

Part of the lack of political will comes from a deep fear of financial consequences.  Whatever the intellectual achievements of the Civil Gideon movement, the fact remains that litigative efforts have largely failed.  Indeed, the oral argument in the Supreme Court last week on the civil contempt / child support counsel issue again illustrated the inevitable impact of financial concerns. (Transcript here.)  E.g., Transcript at 38 (“massive change.”)

I believe that the only way we are going to make truly significant progress on access to justice in these tough times (which are likely to go on for a long time, for state courts at least, given the changes in the structures of state budgets) is to convince decision makers that we can provide access to justice while controlling costs.  This is very hard to do, given the entitlement model that grounds so much of the advocacy.

However, I see in this paper, as well as in the discussion about it and in other studies in the pipeline, the beginning of the analysis that can give us the cost estimates and the cost controls that will make access to justice politically unassailable.

To be concrete, my own view, having actually been an unemployment advocate in Massachusetts before law school in the 1970s, and being familiar with the advocacy structures that have grown up since, is that these results are best understood as the product of the (relatively) accessible nature of the agency, the high benefits win rate, the lack of experience with the system that second-year law students suffer, and the fact that the non-treatment group so often got representation, which should surely have been better than that provided by students.  I should add that my own experience with the agency was that winning was not a matter of legal or forensic skill, but rather a matter of internalizing, and communicating to clients, one simple cultural message: “I want to work so hard it hurts.”  But most of all, I think that a large portion of these cases were destined to win or lose regardless of what kind of assistance they got.  In other words, representation of any quality might make less difference than it should, with one important caveat:  much of the impact of advocacy in this area depends on working with the ultimate UI claimant long before the claim is filed, and ideally long before the employment is terminated.

Why does this matter?  It matters because in a cost-effective access to justice system, we need to find a way to provide resources only in cases in which they have a significant chance of making a significant difference, and even in those cases to provide only the cheapest help that will achieve that goal.

I think that this study highlights the ultimate possibility of making these determinations.  This is because we here see one form of treatment, delivered in one context, and we all agree, I think, that we need to understand both the treatment and the context better to understand the meaning of the study.  This is the first piece of a randomized mosaic that will ultimately produce a multi-dimensional picture of what makes a difference and when.  When we know that, we will be able to figure out what systems will allow us to decide who gets what in terms of help, and how such systems can be grounded on broadly legitimate factors.  In other words, we need a triage system that has wide intellectual and political legitimacy, and that considers how to leverage recent innovations in court and bar services to minimize the number of situations that need the most expensive forms of access services.  The most interesting randomized studies of all will be those that compare different systems of triage, including both different criteria and different decision-makers.  I would very much appreciate thoughts on how this work might be advanced.

Those interested in the possible scope of the access to justice consensus, including its relationship to triage, can read my recent Judicature article here.  Those interested in parsing the recent Supreme Court argument can look at my recent blog post here.


Fair Criticism: A Response to Rebecca Sandefur, Andrew Martin, Michael Heise, and Ted Eisenberg

We very much appreciate the time Rebecca Sandefur, Andrew Martin, Michael Heise, and Ted Eisenberg have taken to comment on our paper.  We are particularly excited by comments from these authors because we have read and admired the past work of each of them.

We believe that much of the criticism expressed in these comments is well-taken, and we will react accordingly.



What can we learn if we assume Greiner and Pattanayak are right?

When legal aid providers read “What Difference Representation? Offers, Actual Use, and the Need for Randomization,” we immediately start to raise questions.  Appropriately, we note that there’s a vast difference between a busy law student handling what may be his or her first case and an experienced professional legal aid lawyer.  We note that, apparently, some significant number of the people randomly turned away by the Harvard law school clinic were then advised or represented by Greater Boston Legal Services.

There is also a broader question, which I will explore in a subsequent post:  What is the broader context for randomized study of the impact of legal aid — what kinds of things can we learn from randomized study, and what impact questions can’t be answered through randomization?

As others have written, Greiner and Pattanayak may not be right, or their conclusions may be overstated or unfounded.  But legal aid providers can have important conversations that start here:  “What if Greiner and Pattanayak are right?” What would it mean if Harvard law students offering representation to random low-income applicants for unemployment compensation are not increasing the number of people getting benefits, and may even be slowing down receipt of benefits for those who win?

Another way to ask this question is this:  What does it mean that under some sets of circumstances, offers of legal aid don’t help people?

Here are my answers:

(1) Outreach, client-friendly intake, and supportive client services are crucial to maximizing impact of legal aid to the poor.

Of the low-income people who might seek help from the Harvard Legal Aid Bureau (which is a student clinic), or any of the professional legal-aid agencies, it is very likely that some people could handle their legal problem adequately or even well, without a law student (or lawyer).

On the other hand, there certainly is a large set of people who cannot possibly handle their cases adequately on their own.  There are many, many low-income people who cannot read or write or speak coherently, who live with severe mental health problems, whose only language is not supported in the relevant adjudicative setting, whose mental or physical health or destitution prevents them from being able even to appear at the adjudicative setting, or who face other barriers to successful litigation without representation.

Right or wrong, the Greiner and Pattanayak article reminds me that it is crucial for legal aid agencies to:

  • Identify which, of the millions of low-income people in crisis, are least able to resolve their legal issues on their own (and yes, this is a question ripe for further study);
  • Ensure that these “most-in-need” people know how to access our services (or that social service agency staff or others in contact with them know how to reach us);
  • Ensure that our intake systems (intended to be “triage” systems) effectively identify the “most-in-need” clients; and
  • Ensure that our services include, or are integrated with, support systems for clients who cannot take advantage of the legal help we are offering on their own because they are afraid, confused, overwhelmed, or otherwise hard to serve.


(2) We need continued research, training and supervision to maximize use of best (most effective) practices.

The fact that Greiner and Pattanayak studied offers of services by law students provides a sharp reminder that there can be a wide range of effectiveness among different providers of legal help.  Anyone who has watched a series of cases in court has seen that some lawyers have more impact on the judge than others.  Similarly, there is variance in how well lawyers organize their work, gather facts, and research and present their cases.

In the world of elementary school teaching, the documenting and debating of best practices is well underway.  Teach Like A Champion, by Doug Lemov, is an attempt to turn research into a set of best practices for teachers.  The criticisms of the research will be familiar, including questions about whether the research asked the right questions or included the right samples.  But the fundamental effort is right — in any area of legal work, our effectiveness will be driven in part by whether we use the right strategies and techniques.  The legal aid community works hard to deploy experience-based training toward best practices.  But there has been only limited formal study comparing available techniques and strategies for serving clients.  Perhaps further randomized or other outcome research can help us better identify the strategies and techniques that will maximize impact for our clients.

(3) Improving an adjudicative system can increase the number of people for whom we have little impact — and that’s a good outcome!

I have heard from colleagues in Massachusetts that some years back, the unemployment compensation system was complicated and near-impossible for non-lawyers to navigate.  Reform efforts by lawyers at Greater Boston Legal Services, Massachusetts Law Reform and others took lessons learned from individual representation in the unemployment system and turned that into systems reform advocacy.  Over the years, the system has become more and more accessible to people representing themselves, without a lawyer.

Efforts like this, in various areas of client legal need, have been repeated by legal aid programs across the country.  We fervently hope that some people can achieve justice without a lawyer, because we know that the very limited number of legal aid lawyers in the country is inadequate to serve more than a fraction of those in need.  Systems advocacy is an essential task, because its success will expand the number of people who truly can achieve equal justice without the offer of a lawyer.


The Centrality of Abstracts? A Response to Bob Sable’s and David Udell’s Comments on “What Difference Representation? Offers, Actual Use, and the Need for Randomization”

Our great thanks to David Udell and to Bob Sable for taking the time to comment (separately) on our paper, “What Difference Representation?  Offers, Actual Use, and the Need for Randomization.”  We very much appreciate the comments they have made, and as we hope is clear from the introduction and elsewhere in our paper, we have the greatest respect for the work that they do, the work that their organizations do, and the work that the legal services community to which they belong does.

Some uncomfortable aspects of writing this paper are that we find ourselves sometimes disagreeing with persons and organizations we greatly admire, being held responsible for what an advocate did not include in a legal brief, and having our study implicitly compared to the Gingrich Congress’ efforts to limit legal services funding and to a false exposé seeking to tar legal services programs.  Had David’s and Bob’s criticisms concerned what we said in our paper, we might have considerable cause for regret.  As we understand them, however, the lion’s share of David’s and Bob’s comments go not to the content of the paper but to its title and abstract.  There are some substantive points, to which we respond below, but the primary thrust of both comments is that we have been reckless in the title and the abstract by not including and highlighting caveats that David and Bob for the most part agree we discuss in the paper’s text.

We wonder about the apparent centrality of titles and abstracts.  But conceding that point for the moment, we also wonder whether the sins of omission and selective emphasis David and Bob accuse us of committing apply to their own blog posts.  By way of example, none of the following appears in either of their posts:  (i) that the full title of the paper (which appears above) references the distinction between offers and actual use; (ii) that the first sentence of the abstract says that our research program is “designed to measure the effect of an offer of, and the actual use of, legal representation”; (iii) that the last sentence of the same paragraph of the abstract, after again referencing “the actual use of (as opposed to an offer of) representation,” reports that “we could come to no firm conclusion on the effect of actual use of representation on win/loss”; (iv) that the third sentence of the paper again references “both an offer of, and actual use of, representation”; and (v) that Part B of the introduction dedicates several pages to discussing the distinction.  We have expanded the abstract several times already in response to concerns from legal services providers (including HLAB itself), and we will consider doing so again, but perhaps the best thing to do at this point would be simply to omit the abstract entirely.  We will consider that as well.

To substance.



What was the question? Or, scholarly conventions and how they matter.

Different fields of scholarship have different conventions. Those of us who participate in multiple scholarly worlds have likely had experiences leading us to believe that some conventions are useful and worthwhile, while others are pointless or actively harmful. Whether we like specific conventions or not, though, we have to play along with them if we want to contribute to the scholarly conversations where these conventions rule.

Professor Greiner and Ms. Pattanayak (hereinafter G&P) elected to publish their empirical research in a top traditional law review. Law reviews have their own peculiar conventions that differ sharply from the peculiar conventions of peer-reviewed journals in fields like statistics, sociology, law and society, or political science. Because G&P made this choice, their article is different from what it would have been had they been writing for a different kind of publication venue. I would like to focus on one convention of writing for peer-reviewed social science journals that law reviews typically disregard and draw out one consequence of this disregard.

By convention, a social scientific article starts with a literature review covering the prior work on the topic of study. The point of this exercise is to explain to the reader the significance to the field of the new empirical research that is about to be presented. Good literature reviews act as a wind-up for the paper’s own research. A good literature review gets the reader interested and motivates the paper by showing the reader that the study she is about to read fills a big intellectual gap, or resolves an important puzzle, or is incredibly innovative and cool. Thus primed, the reader then eagerly consumes the study’s findings with a contextualized understanding of their significance.

G&P’s paper inverts this usual ordering, presenting their study first, and then following with a literature review that motivates their call for more studies like their own. Does this reversal of order matter? I think so: it results in an important confusion about the differences between G&P’s empirical question and the empirical question at the center of much of the extant research literature and the policy debates about the impact of counsel.

G&P’s study investigates the impact of offers of representation by law students. The research literature has been trying to answer a slightly but importantly different question: What is the impact of representation by advocates?

As I show in an article creeping slowly through peer review, 40 years of empirical studies try to uncover evidence of whether and how different kinds of representatives affect the conduct and outcomes of trials and hearings. Some of the studies in this literature are able to compare the outcomes received by people represented by fully qualified attorneys to those received by lay people appearing unrepresented, while other studies compare the work of lawyers to other kinds of advocates who are not legally qualified (including law students). Another group of these studies lumps all sorts of advocates together, comparing groups of unrepresented lay people to groups of people represented by lawyers, social workers, union representatives, and other kinds of advocates permitted to appear in particular fora.

G&P rightly criticize these older studies for what we would today call methodological flaws, and I heartily endorse their call for better empirical research into the impact of counsel. But, not only are they and the older participants in the scholarly conversation using different methods, they are asking different questions. As G&P tell us themselves, they can’t answer the question that motivated 40 years of research, as they can come to “no firm conclusion on the effect of actual use of representation on win/loss” (2). If their article had reviewed the literature before it presented their findings, they likely would have had a harder time asserting to the reader that “the effect of the actual use of representation is the less interesting question” (39-40).

G&P’s empirical question is also slightly to the side of the empirical question arguably at the center of contemporary policy discussions. These often turn on when lawyers specifically are necessary, and when people can receive similar outcomes with non-lawyer advocates or with different forms of “self-help” (information and assistance short of representation, sometimes including and sometimes excluding legal advice). The comparative effectiveness of alternative potential services is a central question in evidence-based policy, and the way the access to justice discussion is conducted today places at the center the question of when attorneys are necessary advocates.

G&P are absolutely right that, if we wish to fully understand any program’s impact on the public, we need information about uptake by that public. Randomizing offers of law students’ services tells us something useful and important, but something different than randomizing the actual use of lawyer representation. As a matter of research design, randomizing use is a more challenging task.  Identifying the impact of use turns out to be quite hard to do, but it is still interesting and important.  We learn a lot from this article, and we stand to learn more, as the present piece is the first in a series of randomized trials.


What Difference Representation: Randomization, Power, and Replication

I’d like to thank Dave and Jaya for inviting me to participate in this symposium, and I’d also like to thank Jim and Cassandra (hereafter “the authors”) for their terrific paper.
This paper exhibits all the features of good empirical work.  It’s motivated by an important substantive question that has policy implications.  The authors use a precise research design to answer the question: to what extent does an offer of representation affect outcomes?  The statistical analysis is careful and concise, and the conclusions drawn from the study are appropriately caveated.  Indeed, this law review article might just be the one with the most caveats ever published!  I’m interested to hear from the critics, and to join the dialogue about the explanation of the findings and the implications for legal services work.  In my initial comments about the paper, I’ll make three observations about the study.
First, randomization is key to successful program evaluation.  Randomization guards against all sorts of confounders, including those that are impossible to anticipate ex ante or to control for ex post.  This is the real strength of this study.  A corollary is that observational program evaluation studies can rarely be trusted.  Indeed, even with very fancy statistics, estimating causal effects with observational data is really difficult.  It’s important to note that different research questions will require different randomizations.
Second, the core empirical result with regard to litigation success is that there is not a statistically significant difference between those offered representation by HLAB and those who were not.  The authors write: “[a]t a minimum, any effect due to the HLAB offer is likely to be small” (p. 29).  I’d like to know how small.  Here’s why.  It’s always hard to know what to make of null findings.  Anytime an effect is “statistically insignificant,” one of two things is true: either there really isn’t a difference between the treatment and control groups, or the difference is so small that it cannot be detected with the statistical model employed.  Given the sample size and win rates around 70%, how small a difference would the Fisher exact test be able to detect?  We all might not agree on what makes a “big” or “small” difference, but some additional power analysis would tell us a lot about what these tools could possibly detect.
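The power question above can be explored by simulation.  The sketch below is my own illustration, not code from the paper: it draws win counts for hypothetical treatment and control groups of the study's approximate sizes, runs Fisher's exact test on each simulated 2x2 table, and reports how often the test rejects at the 5% level.  The specific treated-group win rates tried are assumptions chosen only to show how power grows with effect size.

```python
# A minimal Monte Carlo sketch (mine, not from the paper) of Fisher-test
# power at roughly the study's sample sizes (129 control, 78 treated)
# and a ~72% baseline win rate.
import numpy as np
from scipy.stats import fisher_exact

def fisher_power(p_control, p_treated, n_control=129, n_treated=78,
                 alpha=0.05, sims=2000, seed=0):
    """Estimate power of the two-sided Fisher exact test by simulation."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(sims):
        wins_c = rng.binomial(n_control, p_control)   # control wins
        wins_t = rng.binomial(n_treated, p_treated)   # treated wins
        table = [[wins_t, n_treated - wins_t],
                 [wins_c, n_control - wins_c]]
        _, p = fisher_exact(table)                    # two-sided p-value
        rejections += p < alpha
    return rejections / sims

# Hypothetical treated-group win rates against a .72 control rate:
for p_treated in (0.76, 0.80, 0.90):
    print(p_treated, fisher_power(0.72, p_treated))
```

In runs of this kind, small differences (a few percentage points) yield very low simulated power at these sample sizes, while only large differences approach conventional power levels, which is consistent with the paper's caution about interpreting its null result.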
Finally, if we truly care about legal services and the efficacy of legal representation, this study needs to be replicated in other courts, in other areas of law, and with different legal aid organizations.  Only rigorous program evaluation of this type can allow us to answer the core research question.  Of course, the core research question isn’t the only thing of interest.  The authors spend a lot of time talking about different explanations for the findings.  Determining which of these explanations is correct will go a long way in guiding the practical take-away from the study.  Sorting out those explanations will require additional and different studies.  I spend a lot of time writing on judicial decisionmaking.  My money is on the idea that ALJs behave differently with pro se parties in front of them.  But this study doesn’t allow us to determine which explanation for the core findings is correct.  That doesn’t diminish the importance or quality of the work; it’s a known (and disclosed) limitation that leads us to the next set of studies to undertake.

What Difference Representation: Inconclusive Evidence

Congratulations to the authors on an excellent study that promotes and explores the importance of random assignment.

My comment supports the article’s emphasis on caution and not overgeneralizing. My focus is on the article’s Question 2: Did an offer of HLAB representation increase the probability that the claimant would prevail? My analysis of the simple frequencies (I have not delved into the regressions and ignore weights) suggests that HLAB attorneys should view the results as modest, but inconclusive, evidence that an offer of representation improves outcomes.

Based on Table 1, page 24, there are 129 No offer observations and 78 Offer observations. Ignoring weights, which I think are said not to make a huge difference, page 26 reports that .76 of claimants who received an offer prevailed in their first level appeals, and that .72 of claimants who did not receive an offer prevailed in their first level appeal.

So, those who were offered representation fared better; one measure of this is that they did .04/.72 x 100, or 5.6%, better.  Given the high background (no-offer condition) rate of prevailing, the maximum improvement (to a 1.00 success rate) is .28/.72 x 100, or 38.9%.  Another measure could be the proportionate reduction in defeat.  The no-offer group was “defeated” 28% of the time.  The offer group was defeated 24% of the time.  The reduction in defeat is .04/.28 x 100, or 14.3%.  This measure has the sometimes attractive feature that it can range from 0% to 100%.  So by this measure the offeree group did 14% better than the non-offeree group, a modest improvement for the offer condition.
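For readers who want to check the arithmetic, the three comparison measures can be reproduced in a few lines.  This is a quick sketch of my own from the reported frequencies (.76 prevailing with an offer, .72 without), not code from the study:

```python
# Reproducing the three comparison measures from the reported win rates.
p_offer, p_no_offer = 0.76, 0.72

# Relative improvement in prevailing: .04/.72
relative_improvement = (p_offer - p_no_offer) / p_no_offer
# Maximum possible improvement (to a 1.00 success rate): .28/.72
maximum_improvement = (1.00 - p_no_offer) / p_no_offer
# Proportionate reduction in defeat: .04/.28
defeat_reduction = ((1 - p_no_offer) - (1 - p_offer)) / (1 - p_no_offer)

print(f"{relative_improvement:.1%} {maximum_improvement:.1%} {defeat_reduction:.1%}")
# prints: 5.6% 38.9% 14.3%
```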

A concern expressed in the paper is that the result is not statistically significant.  This raises the question: given the sample size, how likely was it that a statistically significant effect would be detected?  Assessing this requires hypothesizing what size effect of an offer would be of societal interest.  Suppose we say that lawyers should do about 10% better and move the win rate from .72 for non-offerees to .80 for offerees.  This is an 11.1% improvement by the first measure and a 28.6% improvement by the second measure.  Both strike me as socially meaningful, but others might specify different numbers.

We can now pose the question: given the sample size and an effect of the specified size, what is the probability of observing a statistically significant effect if one exists?  I use the following Stata command to explore the statistical power of the study:

sampsi .72 .80, n1(129) n2(78)

which yields the following output:

Estimated power for two-sample comparison of proportions

Test Ho: p1 = p2, where p1 is the proportion in population 1 and p2 is the proportion in population 2

alpha = 0.0500 (two-sided)
p1 = 0.7200
p2 = 0.8000
n1 = 129
n2 = 78
n2/n1 = 0.60

Estimated power:
power = 0.1936

A power of 0.19 is too low to conclude that the study was large enough to detect an effect of the specified size at a statistically significant level.  If one concluded from this study that an offer of representation did not make a significant difference, there is a good chance the conclusion would be incorrect.  To achieve power of about 0.70, one would need a sample four times as large as that in the study.  If one thought that smaller effects were meaningful, the sample would be even more undersized.
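For readers without Stata, this power calculation can be approximated directly.  The sketch below is my own implementation of the standard two-sample test of proportions under the normal approximation with a continuity correction, which reproduces the 0.1936 figure above; it is not code from the study, and the function name is mine.

```python
# A sketch of the power calculation behind the `sampsi` output above:
# two-sided two-sample comparison of proportions, normal approximation
# with continuity correction.
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n1, n2, alpha=0.05):
    """Approximate power to detect a difference between proportion p1
    (n1 subjects) and proportion p2 (n2 subjects) at level alpha."""
    z_alpha = norm.ppf(1 - alpha / 2)
    pbar = (n1 * p1 + n2 * p2) / (n1 + n2)                   # pooled proportion
    se_null = sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))    # SE under H0
    se_alt = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)   # SE under H1
    diff = abs(p1 - p2) - (1 / n1 + 1 / n2) / 2              # continuity corr.
    return norm.cdf((diff - z_alpha * se_null) / se_alt)

print(round(power_two_proportions(0.72, 0.80, 129, 78), 4))       # ≈ 0.19
print(round(power_two_proportions(0.72, 0.80, 4 * 129, 4 * 78), 2))
```

Quadrupling both arms pushes the estimated power to roughly 0.70, consistent with the fourfold-sample estimate in the text.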

I think my analysis so far underestimates the benefit of an offer by HLAB attorneys.  Perhaps we can take .72 as a reasonable lower bound on success.  Even folks without an offer succeeded at that rate.  The realistic upper bound on success is likely not 1.00.  Some cases simply cannot be won, even by the best lawyer in the world.  Perhaps not more than 90% of cases are ever winnable, with the real winnable rate likely somewhere between .8 and .9.  If the winnable rate were .8, then the offer got clients halfway there, from .72 to .76.  If the real rate were higher, the offer was less effective, but its effect was not trivial in size.  At .9, the offer got the clients 22% closer to the ideal.  The study just was not large enough to detect much of an effect at a statistically significant level.

So while I agree that the study provides no significant evidence that an offer increases success, my analysis (obviously incomplete) suggests that the study provides no persuasive evidence that an offer does not increase success. The study is inconclusive on this issue because of sample size.

HLAB lawyers should not feel that they have to explain away these results; the results modestly, but inconclusively, support the positive effect of an offer because they are in the right direction in a small study.



I assigned the Greiner & Pattanayak paper (or, more accurately, an earlier iteration of the paper) in my Empirical Legal Studies Colloquium this semester at Cornell. Among the many issues that animated my students was the paper’s title, particularly its focal point: “What Difference Representation?”

My students noted the obvious: notwithstanding the title’s tilt, the authors make clear (indeed, painfully clear) their wish to dwell on the effects of an offer of representation rather than the efficacy of actually using legal representation. Moreover, the authors assert that “the effect of actual use of representation is the less interesting question” (emphasis added) (pp. 39-40) while investing considerable energy in explaining to readers “why offers are relevant” (e.g., pp. 10-12).

To be sure, the authors are correctly mindful of and sensitive to important data and research design limitations. As they note repeatedly, “the offer, not actual use of, representation was randomized” (e.g., p. 41). Although the ‘effect of actual use of representation’ question is, as the paper makes clear, “challenging to answer” (p. 41), it does not follow that it is also, therefore, a “less interesting question” (pp. 39-40).

Simply put, the paper does not persuade on this point. If anything, the degree to which the authors felt it necessary to explain why “offers are relevant” (and, by implication, interesting) erodes their argument. Moreover, if, as the authors claim, the use of representation is the less interesting question, then why make it the clear focal point of the paper’s title? While I am not insensitive to the need to “market” one’s scholarship and understand that titles can be pressed into such service (especially if one immediate target audience includes student law review editors), my sense is that this title for this paper contributes unnecessary drag.


What Difference Presentation?

David Udell is the Executive Director of the National Center for Access to Justice and a Visiting Professor from Practice at Cardozo Law School.

In my line of work, I have seen many efforts in the political realm to shut down civil legal services for the poor, and have continually worked to combat such efforts.  In 1996, when the Gingrich Congress barred federally funded legal services lawyers from bringing class actions on behalf of the poor, I left Legal Services for the Elderly in order to finish a lawsuit on behalf of widows and widowers who were suing to compel the United States Treasury to fix its practices for replacing stolen Social Security payments.  When I later moved to the Brennan Center for Justice, I helped bring a lawsuit against the rules that barred legal services lawyers from participating in such class actions, I filed another lawsuit against similar rules that barred law school clinic students from bringing environmental justice cases in Louisiana, and I built a Justice Program at the Brennan Center dedicated to countering such attacks on the poor and on their lawyers.

In their March 3, 2011 draft report, What Difference Representation? Offers, Actual Use, and the Need for Randomization (“the Study”), authors D. James Greiner & Cassandra Wolos Pattanayak are right about the importance of developing a solid evidence base – one founded on methodologies that include randomization – to establish what works in ensuring access to justice for people with civil legal cases. They are right again that in the absence of such evidence, both the legal aid community and its critics are accustomed to relying on less solid data.  And they are smart to “caution against both over- and under-generalization of these study results.”  But, unfortunately, the bare exhortation to avoid over- and under-generalization is not sufficient in the highly politicized context of legal services.

While the authors obviously do not have any obligation to arrive at a particular result, they can be expected to recognize a need to avoid statements that have a high probability of misleading, especially in light of the likely inability of much of the Study’s audience to understand the authors’ methodology and findings.  In fact, because of the Study’s novelty and appearance in a non-scientific journal, it will be relied on to analyze situations where it doesn’t apply, and by people who have no background in social science research, and it will be given disproportionate weight because so few comparable studies exist to judge it against.  It is these factors, in combination with the politicization of legal services, that make it crucial that the authors’ assertions, particularly in the sections most likely to be seen by lay readers (the title and the abstract), do not extend beyond what the findings justify.



What Difference Representation – A Response

I am the Executive Director of Greater Boston Legal Services, the primary provider of civil legal services to poor people in the greater Boston area.  My program and I have a great stake in assuring that our limited resources are used where they can be most effective.  Indeed we are participating with Professor Greiner in a study of the impact of our staff attorneys’ representation in defense of eviction cases.  My comments refer to the draft dated February 12, 2011.

It is important with any study, however, to know what it concludes and what it does not.  For instance, and most importantly, the study concedes on page 43 that it could draw no conclusions about the effect on outcome for claimants actually receiving representation, as opposed to just an offer of representation.  Thus, this study should be recognized for what it is: a limited analysis of the somewhat abstract concept of “offering” assistance.  Indeed, the study wisely cautions against drawing any conclusions from the study about the usefulness of free legal assistance or even about the usefulness of offers of representation in unemployment cases in general (page 47).

I feel some changes are necessary to avoid much confusion about (and misuse of) this study’s conclusions (or lack thereof) as to the effect of representation itself, as opposed to just the offer.  For instance, given that this study’s principal conclusions are about an offer of representation and not actual representation, a more accurate title for this study would be “What Difference an Offer of Representation?”  And the very first sentence of the Introduction on page 5 currently reads, “Particularly with respect to low-income clients in civil cases, how much of a difference does legal representation make?”  It is only a footnote that explains that the study looks at offers as well as effects, and only much later in the study (page 32) that no conclusions were reached at all as to the effect of representation.  Similarly, the conclusion (“Where Do We Go From Here?”) states, “the present study primarily concerned representation effects on legal outcomes affecting the potential client’s pecuniary interests.”

I am concerned also that the results reported in the study with respect to offers of representation by HLAB are of little utility at best and misleading at worst.  This is because nearly half of the control group were represented by counsel and, more significantly, probably that many and perhaps more in the control group got an offer of free representation from my program or another providing free legal services in unemployment cases.  To make an analogy to the medical world, suppose there were a Pfizer drug trial in which 50% of Pfizer’s control group were offered the exact same medication from Merck.  Wouldn’t that cast serious doubt on the outcome of the study?  There is no mention of this 49% in either the abstract or the introduction, which unfortunately are all many readers will read.
