The Centrality of Abstracts? A Response to Bob Sable’s and David Udell’s Comments on “What Difference Representation? Offers, Actual Use, and the Need for Randomization”

Our great thanks to David Udell and to Bob Sable for taking the time to comment (separately) on our paper, “What Difference Representation? Offers, Actual Use, and the Need for Randomization.”  We very much appreciate their comments, and as we hope is clear from the introduction and elsewhere in our paper, we have the greatest respect for the work that they do, that their organizations do, and that the legal services community to which they belong does.

Among the uncomfortable aspects of writing this paper are that we find ourselves sometimes disagreeing with persons and organizations we greatly admire, being held responsible for what an advocate did not include in a legal brief, and having our study implicitly compared to the Gingrich Congress’ efforts to limit legal services funding and to a false exposé seeking to tar legal services programs.  Had David’s and Bob’s criticisms concerned what we said in our paper, we might have considerable cause for regret.  As we understand them, however, the lion’s share of David’s and Bob’s comments goes not to the content of the paper but to its title and abstract.  There are some substantive points, to which we respond below, but the primary thrust of both comments is that we were reckless in the title and the abstract by not including and highlighting caveats that, David and Bob for the most part agree, we discuss in the paper’s text.

We wonder about the apparent centrality of titles and abstracts.  But conceding that point for the moment, we also wonder whether the sins of omission and selective emphasis that David and Bob accuse us of committing apply to their own blog posts.  By way of example, none of the following appears in either of their posts:  (i) that the full title of the paper (which appears above) references the distinction between offers and actual use; (ii) that the first sentence of the abstract says that our research program is “designed to measure the effect of an offer of, and the actual use of, legal representation”; (iii) that the last sentence of that same paragraph of the abstract, after again referencing “the actual use of (as opposed to an offer of) representation,” reports that “we could come to no firm conclusion on the effect of actual use of representation on win/loss”; (iv) that the third sentence of the paper again references “both an offer of, and actual use of, representation”; and (v) that Part B of the introduction dedicates several pages to discussing the distinction.  We have expanded the abstract several times already in response to concerns from legal services providers (including HLAB itself), and we will consider doing so again; but perhaps the best thing to do at this point would be simply to omit the abstract entirely.  We will consider that as well.

To substance.

In responding to the fact that an unexpectedly large percentage of the group randomized to no HLAB offer obtained representation elsewhere, Bob’s post references a hypothetical Pfizer drug trial in which 50% of Pfizer’s control group were offered the exact same medication from Merck, and suggests that such an occurrence would cast serious doubt on the drug trial.  The analogy to a drug trial is excellent, but we disagree with the conclusion regarding serious doubt.  It depends on what use is made of the study.  If the Pfizer/Merck study were used to measure whether government funding to offer the Pfizer drug for free would be a good investment of scarce drug resources, the fact that 50% of the control group obtained the drug elsewhere would not cast serious doubt on the study.  To the contrary, this fact would be critical information suggesting that a program offering the drug may not be a good investment of resources because the drug is readily available in the general economy.  If the study were used to measure the effect of forcing persons to take the drug versus prohibiting persons from taking it, then the 50% usage rate in the control group is an enormous problem.  And if the study were used exclusively to measure what happens when the drug is taken by a class of persons who will take it if it is provided for free but will not take it otherwise, then the viability of the study depends on the kind of Go-Getter/Regular statistical adjustment we attempted (but could not effectuate) in our paper.
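To make the offer-versus-use distinction concrete, here is a minimal simulation sketch of our own (all of the numbers are invented for illustration and come from neither our study nor any drug trial).  It shows why the comparison the randomization supports, offered versus not offered (the “intention to treat” comparison), remains valid even when half the control group obtains the treatment on its own, while a naive comparison of users to non-users does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: "go-getters" obtain the treatment whether or
# not they receive the free offer; "regulars" use it only if offered.
go_getter = rng.random(n) < 0.5          # invented: 50% go-getters
offered = rng.random(n) < 0.5            # the randomized offer

used = offered | go_getter               # go-getters use it either way

# Invented outcomes: go-getters have better baseline prospects (a
# selection effect), and actual use adds a true benefit of 0.10.
win = rng.random(n) < (0.3 + 0.2 * go_getter + 0.1 * used)

# Valid randomized comparison: the effect of the OFFER (intention to treat).
itt = win[offered].mean() - win[~offered].mean()

# Invalid comparison: users vs. non-users mixes in the selection effect.
as_treated = win[used].mean() - win[~used].mean()

print(f"offer (ITT) estimate:  {itt:.3f}")         # ~0.05: diluted by control-group use
print(f"users vs. non-users:   {as_treated:.3f}")  # ~0.23: inflated by selection
```

Note that the offer estimate is diluted by control-group uptake, which is exactly the point: it measures what an offer accomplishes in a world where the treatment is otherwise available.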

By the way, in drug trials, statisticians routinely recommend to drug companies (and many drug companies accept the recommendation) that they focus on the effect of the offer of the drug, not actual use.  Although the experimental drug in such a trial is ordinarily not available outside the trial, if it were, one can see why the drug company would want to focus on the effect of an offer of the drug (as opposed to actual use):  if the drug (or a substitute) is routinely available elsewhere, the drug company may not wish to expend the resources on development.

This all cycles back to the central question:  what was randomized?  In the drug hypothetical, what was randomized was an offer of a free Pfizer drug.  Going beyond what was randomized requires careful statistical thinking, and as our paper demonstrates, it does not always work.

This is why Bob’s suggestion that we remove from the control group those who also had an offer of free representation from another service provider would lead to a statistically invalid comparison.  There is no reason to think that the process by which those in the control group received an offer of representation was random.  Bob argues that our “Go-Getter” story is implausible because HLAB students sometimes (but not always) called other legal services organizations to provide the names and telephone numbers of persons randomized to the control group, and these other organizations in turn called the control group members to offer assistance.  Mightn’t that suggest that the HLAB students called other legal services providers in, say, especially “sympathetic” cases, which might be those with the most compelling facts, i.e., those cases more likely to win with or without assistance because of their sympathetic nature?  Thus, even if our Go-Getter story is implausible, removing these cases would simply substitute one selection effect for another.  And even if one finds this HLAB-made-calls-in-sympathetic-cases hypothesis implausible, the point remains that we do not know what process led some control group claimants to receive offers and others not to receive them, so we do not know what selection effects might be at work.
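To illustrate the danger, here is another simulation sketch, again with invented numbers rather than anything from our dataset.  Suppose the HLAB offer has no true effect at all, and outside offers went disproportionately to sympathetic (strong) cases.  Dropping outside-offer recipients from the control group then deflates the control group’s win rate and manufactures an apparent HLAB effect out of nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Invented case strength: "sympathetic" cases win more often with or
# without representation.
sympathetic = rng.random(n) < 0.3
win_prob = np.where(sympathetic, 0.8, 0.4)

hlab_offer = rng.random(n) < 0.5   # the randomized HLAB offer

# Assume the HLAB offer has NO true effect, so any nonzero estimate is bias.
win = rng.random(n) < win_prob

# Non-random contamination: sympathetic control cases are far more likely
# to receive an offer from another provider.
other_offer = ~hlab_offer & (rng.random(n) < np.where(sympathetic, 0.8, 0.2))

# Valid comparison: offered vs. not offered, exactly as randomized.
valid = win[hlab_offer].mean() - win[~hlab_offer].mean()

# The proposed fix: drop control group members who had outside offers.
kept_control = ~hlab_offer & ~other_offer
biased = win[hlab_offer].mean() - win[kept_control].mean()

print(f"as-randomized estimate:  {valid:.3f}")   # ~0.00, correct
print(f"after dropping controls: {biased:.3f}")  # ~0.08, spurious HLAB "effect"
```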

(Meanwhile, we do not have information on who received offers of representation from entities other than HLAB; the Massachusetts DUA collects information on who was actually represented, not who was offered representation.)

The principal lesson here is that the primary focus should be on what was randomized; here, that was an HLAB offer.

There are other thoughts in Bob’s and David’s posts.  In particular, Bob’s point that a victory before a DUA review examiner does not automatically mean that benefits will flow deserves careful consideration.  We suspect that this is not a problem in our dataset:  because we requested from the DUA information on the amount of benefits actually awarded to each claimant, we know that there was no case in our dataset in which the DUA reported a win before the agency but no benefits were provided.  But we can certainly check on this.

Again, our great thanks to Bob and David for sharing their thoughts.

Jim and Cassandra
