Category: Empirical Analysis of Law

Does familiarity breed contempt?

I have been reading some interesting articles on the factors that contribute to a court’s or judge’s reversal rate. Because I live in, and litigate cases in, Washington, D.C., where the federal district and circuit court judges occupy the same building, I began to wonder whether there is any correlation between sharing a courthouse and the frequency with which the appellate court reverses the district court. Similarly, I would be interested to know whether workplace proximity affects the frequency with which the appellate court orders a district court judge to recuse him or herself from sitting on a case. The articles I have found do not address this question.

The federal courthouse in D.C. provides district and circuit court judges with lots of opportunity to interact in the elevators, cafeteria, parking lot, gym, and at various courthouse functions (for example, at the annual chili cook-off organized by Judge Sentelle, or at the holiday caroling hosted by Judge Henderson). Would these sorts of frequent, casual social interactions change the way the appellate judges review their district court colleagues? I could see it cutting either way. On the one hand, the appellate judges might give a little more deference to the district court judge who seems friendly, sensible, smart, and always remembers to ask after the kids when they run into each other in the hallways. On the other hand, water-cooler familiarity might lead appellate judges to view some of their lower court counterparts as less reliable and trustworthy than others. Although I doubt workplace proximity is a major factor in reversal rates, I would guess that it comes into play a little at the margins.
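If anyone wanted to run the numbers, the first cut seems simple enough. Here is a back-of-the-envelope sketch in Python – with entirely hypothetical data and column names – that just compares reversal rates for appeals from district judges who do and do not share a building with the reviewing circuit. A serious version would obviously need controls for case type, panel composition, and standard of review.

```python
# A back-of-the-envelope sketch with made-up data: do appeals from district
# judges who share a courthouse with the circuit get reversed at a different
# rate? Column names and values are hypothetical.

import pandas as pd

appeals = pd.DataFrame({
    "same_courthouse": [1, 1, 0, 0, 1, 0, 0, 1],   # 1 = judge shares the building
    "reversed":        [0, 1, 1, 0, 0, 1, 0, 0],   # 1 = panel reversed
})

# Compare raw reversal rates across the two groups.
print(appeals.groupby("same_courthouse")["reversed"].mean())
```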

Solum on the Need for Opinions

Larry Solum recently posted a kind response to my post on the need for judicial reasoning. Here is a taste of his analysis:

An obligation to offer justification has obvious accuracy-enhancing effects: it forces the decision maker to engage in an internal process of deliberation about explicit reasons for an action and to consider whether the reasons to be offered are “reasonable” and whether they are likely to be sustained in the event of appeal. Balancing approaches, which consider the costs of procedural rules as well as their accuracy benefits, point us in the direction of the costs associated with requiring justifications on too many occasions and of the costs of requiring justificatory effort that is disproportionate to the benefits to be obtained. Requiring reasons facilitates a right of meaningful participation as well: when a judge gives reasons, then the parties affected by the action can respond–offering counter reasons, objecting to their legal basis, and so forth. Moreover, the offering of reasons provides “legitimacy” for the decision.

Very helpful. Clearly, the procedural justice literature has much to say on whether it is illegitimate for judges to rule without explanation. It seems to me that much of Larry’s discussion forecloses the legitimacy of what our commentators have suggested as the backstop for expressed opinions: back-pocket explanations, i.e., reasons produced only on litigant demand.

But I still think that much of our thinking on the problem of “why and when reasons” is driven by biases built into our legal DNA by the law school experience. I’ll ramble a bit more on this problem below the jump.

Must District Judges Give Reasons?

Jonathan Adler highlights this astonishing Ninth Circuit opinion on the alleged misconduct of now-embattled District Judge Manuel Real. The case has some interesting facets (previously blogged about here, here, and elsewhere). First, dissents matter. It is more than tempting to attribute the current push to impeach Judge Real to Judge Kozinski’s harsh dissent from the panel’s order exonerating him on the misconduct charge. Second, the case raises a neat issue that relates to what I’ve been writing this summer. While the overall facts of the case are well worth reading in the original, if you’ve ten or twenty minutes, I want to focus briefly on part of Judge Kozinski’s charge against Real: that he failed to explain the reasoning for a controversial order.

The basic story is that Judge Real withdrew the petition in a pending bankruptcy case and stayed a state-court judgment evicting a woman who was appearing before his court in a criminal matter. Both orders were entered apparently sua sponte, or at least without hearing the evicting party’s arguments. According to Kozinski, Judge Real “gave no reasons, cited no authority, made no reference to a motion or other petition, imposed no bond, balanced no equities. The two orders [the withdrawal and stay] were a raw exercise of judicial power…” In a subsequent hearing, Kozinski continued, “we find the following unilluminating exchange”:

The Court: Defendants’ motion to dismiss is denied, and the motion for lifting of the stay is denied . . .

Attorney for Evicting Party: May I ask the reasons, your Honor?

The Court: Just because I said it, Counsel.

Kozinski wrote:

I could stop right here and have no trouble concluding that the judge committed misconduct. [Not only was there a failure of the adversary process . . . but also] a statement of reasons for the decision, reliance on legal authority. These niceties of orderly procedure are not designed merely to ensure fairness to the litigants and a correct application of the law . . . they lend legitimacy to the judicial process by ensuring that judicial action is – and is seen to be – based on law, not the judge’s caprice . . . [And later, Kozinski exclaims] Throughout these lengthy proceedings, the judge has offered nothing at all to justify his actions – not a case, not a statute, not a bankruptcy treatise, not a law review article, not a student note, not even a blawg. [DH: Check out the order of authority!]

So here’s the issue: in the ordinary case, to what extent are judges required to explain themselves?

Perhaps this empirical dog does not hunt.

I have hit a . . . data analysis sticking point with some empirical work that I am doing, and I thought I’d toss the problem out there to see if any of you see something that I do not see. I am a bit embarrassed, however, to admit that I am having a problem analyzing my data, so please refrain from starting any of your comments with “Did you skip 12th grade calc., Nowicki?” or “when, if ever, have you taken a stats class?”

I have calculated the annual percentage change in pay for the CEOs of ten large, publicly traded corporations. I am then comparing those annual percentage changes to the annual percentage changes in profits for those ten corporations, to see if there is a relationship between percentage changes in pay and percentage changes in corporate profits (such as a 10% increase in annual profit being accompanied by a 10% increase in CEO pay).

My ratios of percentage change in pay as compared to percentage change in profit are not producing what I expected, however. I have taken my annual percentage changes in pay and divided them by my annual percentage changes in profit (for each CEO, for each year).

I expected to be able to then say “A result of 1 or a number greater than 1 is a bad thing” (because it means that the percentage change in pay is GREATER than any percentage change in profit). But things get confusing when I have percentage decreases – I frequently end up with negative numbers that are sometimes indicative of a “good” relationship (a negative percentage change in CEO pay accompanied by a percentage increase in profit, for example) and sometimes indicative of a BAD relationship (a positive percentage pay change accompanied by a NEGATIVE percentage profit change).

Given that I have negative numbers that are sometimes indicating a “good” pay/profit relationship and sometimes indicating a “bad” pay/profit relationship, I am stymied. What am I not seeing? Why am I not able to say “a number greater than 1 is a BAD thing for shareholders in terms of the CEO pay/profit relationship and a number less than one is a good thing”?
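If it helps, here is a minimal sketch in Python – with made-up numbers, not your data – of why the ratio misbehaves once either change can be negative, and of one workaround: keep the sign information by comparing the two changes directly (or, more conventionally, regress the pay changes on the profit changes) rather than dividing one by the other.

```python
# Made-up (pay change, profit change) pairs, expressed as decimal fractions.
pairs = [
    ("+10% pay, +10% profit", 0.10, 0.10),
    ("+10% pay, -10% profit", 0.10, -0.10),   # bad for shareholders
    ("-10% pay, +10% profit", -0.10, 0.10),   # good for shareholders
]

for label, pay, profit in pairs:
    print(f"{label}: ratio = {pay / profit:+.2f}")
# The second and third rows both print -1.00 even though they describe opposite
# situations: dividing one percentage change by the other throws away which of
# the two was negative.

# One workaround: classify each CEO-year by the comparison itself, so the signs
# are preserved rather than cancelled.
def shareholder_unfriendly(pay, profit):
    """Flag years in which the pay change outpaces the profit change."""
    return pay > profit

for label, pay, profit in pairs:
    print(f"{label}: {'bad' if shareholder_unfriendly(pay, profit) else 'ok'}")
```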

Update on Plea Bargains and Prediction Markets

In Let Markets Help Criminal Defendants, I wrote that “If I were running a public defender service, I’d consider setting up an online prediction market for the conviction of my clients.” I still think this is a good idea, but someone suggested a serious problem that would have to be remedied for the scheme to be possible.

Right now, prediction market bets on judicial events, like the conviction of Lewis Libby, pay off at 100 for conviction, and 0 for any other ending of this set of charges, including a plea. This creates noise that renders them useless for criminal defendants looking to see whether they ought to plead. That is, as I didn’t fully appreciate before, traders must be estimating the probability of conviction, tempered by the likelihood of a plea – prices are lower than the actual market estimate of a guilty verdict independent of a plea. That is, if the current price of Libby’s “stock” is .40, that does not mean that conviction at trial is 40% likely. It means that traders think it is 60% likely that Libby will win at trial, receive a mistrial, obtain a dismissal, be granted a pardon, or plead. I imagine that the likelihood of a plea accounts for a large percentage of this figure.
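To make that arithmetic concrete, here is a minimal sketch – Python, hypothetical numbers, and assuming the contract pays out only on a conviction – of how defense counsel could back out the probability of conviction at trial from the posted price, given some outside estimate of the plea probability. That outside estimate is, of course, exactly what the current contracts fail to supply.

```python
# Minimal sketch of the decomposition described above. Assumes the contract pays
# 100 on conviction and 0 on any other outcome (acquittal, mistrial, dismissal,
# pardon, or plea). The price and plea probability below are hypothetical.

def conviction_given_no_plea(price, p_plea):
    """Back out P(conviction | no plea) from the contract price.

    price  -- market price on a 0-1 scale, i.e. P(conviction), all outcomes included
    p_plea -- outside estimate of the probability the case ends in a plea
    """
    return price / (1.0 - p_plea)

# Example: a 0.40 price combined with a 0.40 chance of a plea implies roughly a
# 67% chance of conviction if the case actually goes to trial.
print(conviction_given_no_plea(price=0.40, p_plea=0.40))  # ~0.667
```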

If traders thought that conviction prices affected defendant behavior, then presumably they’d seek to put in sell orders at prices above those at which rational defendants would plead. This would put downward pressure on price and make the entire system useless from defense counsel’s perspective.

For my system to work, you’d have to exclude the possibility of a plea (i.e., nullify all bets if there is a plea). Of course, this still would create some dynamic tension, as bettors presumably would become eager to invest time and trade only as pleas become less likely – near trial, or in jurisdictions, like Philadelphia, where the District Attorney has a no-plea policy. But the resulting prices would be more informative than those offered by the current system.

Setting the Bar, and the Limits of Empirical Research

Larry Ribstein and Jonathan Wilson are debating the merits of a strong, exclusionary, state bar.

Wilson’s position is pro-Bar:

Deregulating lawyers as punishment or retribution for a profession that has lost its way would be a recipe for disaster. Deregulating the practice of law would open the floodgates to fraud of every conceivable variety and would only compound the problems that the readers of these pages see in our civil justice system.

Ribstein, naturally, is pro-market:

Big law firms provide a strong reputational “bond” . . . Lawyers can be certified by private organizations, including existing bar associations, which can compete with each other by earning reputations for reliability. . . .We could have stricter pleading rules, or require losers to pay winners’ fees. Or how about this: let anybody into court, but adopt a loser pays rule for parties that come into court represented by anything less than a lawyer with the highest possible trial certificate . . . Even if only licensing would effectively deal with this problem, the licensing scheme should be designed specifically to protect the courts. Instead of requiring the same all-purpose license to handle a real estate transaction and to prosecute a billion-dollar class action, we could have a special licensing law for courtroom practice, backed by tight regulation of trial lawyers’ conduct – something like the traditional barrister/solicitor distinction in the UK.

Josh Wright has picked up the thread of the discussion at TOTM, and suggests that empirical evidence would inform this debate. Unfortunately, as both Larry and he note, there is a paucity of useful studies on point:

If I recall, the Federal Trade Commission has recently been involved in some advocacy efforts in favor of limiting the scope of unauthorized practice of law statutes. My sense is that a number of states must have relaxed unauthorized practice of law restrictions (I think Arizona is one), or similarly relaxed restrictions on lawyer licensing, such that one could directly test the impact of these restrictions on consumers in terms of prices and quality of service. There must be work on this somewhere.

Solove and I have gone around on this question before (see here for the powerful pro-licensing position, and here and here for Solove’s “response”).

Generally, I like Josh’s intuition. It would be quite useful to look to Arizona, or other natural experiments, to help us answer the question of the utility of the Bar Exam and other licensing barriers. Surely, there is no reason in the abstract to preserve an ancient system that keeps lawyer fees artificially high, diverts millions of dollars from law students to Barbri, and causes no end of mental anguish simply because it provides a new jurisprudential lens!

But I’m quite skeptical that this is an answerable question, at least in the short term. My thinking is informed somewhat by the new Malcolm Gladwell New Yorker essay about basketball. Although Gladwell extols the virtues of statistical analysis (instead of anecdote, judgment, and valuing the joy of watching Allen Iverson triumph despite his height), the lesson I took from the piece was that:

Most tasks that professionals perform . . . are surprisingly hard to evaluate. Suppose that we wanted to measure something in the real world, like the relative skill of New York City’s heart surgeons. One obvious way would be to compare the mortality rates of the patients on whom they operate—except that substandard care isn’t necessarily fatal, so a more accurate measure might be how quickly patients get better or how few complications they have after surgery. But recovery time is a function as well of how a patient is treated in the intensive-care unit, which reflects the capabilities not just of the doctor but of the nurses in the I.C.U. So now we have to adjust for nurse quality in our assessment of surgeon quality. We’d also better adjust for how sick the patients were in the first place, and since well-regarded surgeons often treat the most difficult cases, the best surgeons might well have the poorest patient recovery rates. In order to measure something you thought was fairly straightforward, you really have to take into account a series of things that aren’t so straightforward.

I know how I would test the direct cost of legal services in Pennsylvania, and I’ve no doubt that it would go down if I (by fiat) abolished the state bar. But I have no good idea of how we can measure lawyer “quality”. To take something as obvious as criminal defense, some really good public defenders will lose every case for a year, but take comfort in having not lost on the top count of a single indictment. Saying that a public defender who went 0 for 50 in 2005 was a less “good” attorney than a prosecutor who went 50-0 would be a real problem. Facts drive litigation, and they make quantitative empirical investigation of lawyer quality hard. And that is for attorneys who perform in public. How do you evaluate the relative strength of deal counsel on a gross level? Count the typos in the document? Talk with the business folks, and ask who got in the way less? [Obviously, deal counsel can be very good and very bad: the point is we need metrics that are easily coded by, say, research assistants.]

So here is the question for our readers. Can you design an empirical project that measures both litigation and transactional practice quality as a function of licensing?
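I don’t have an answer myself, but to make the price half of the question concrete, here is a bare-bones sketch in Python – hypothetical states, column names, and numbers – of the difference-in-differences comparison that Josh’s Arizona example invites: fee changes in a deregulating state measured against fee changes in a state that kept its restrictions. The quality half is precisely where, per the Gladwell point, I don’t see metrics a research assistant could reliably code.

```python
# A sketch, not a study: a bare-bones difference-in-differences comparison of
# legal-services fees in a state that relaxed unauthorized-practice restrictions
# against a comparison state that did not. Everything below (states, column
# names, fees) is hypothetical.

import pandas as pd

df = pd.DataFrame({
    "state": ["AZ", "AZ", "AZ", "AZ", "NM", "NM", "NM", "NM"],
    "post":  [0, 0, 1, 1, 0, 0, 1, 1],                  # 1 = after the rule change
    "fee":   [200, 220, 180, 190, 210, 205, 215, 220],  # made-up fees
})

means = df.groupby(["state", "post"])["fee"].mean()
did = (means.loc[("AZ", 1)] - means.loc[("AZ", 0)]) \
    - (means.loc[("NM", 1)] - means.loc[("NM", 0)])
print(f"Difference-in-differences estimate of the fee effect: {did:.1f}")
```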

Empirical Studies at ALEA

Bill Henderson (at the ELS Blog) has a very useful round-up of empirical papers presented at the recent ALEA conference. Blog-traveller Kate Litvak comes in for special praise:

Kate Litvak [presented] “The Effect of the Sarbanes-Oxley Act on Non-US Companies Listed in the U.S.,” which was an extremely well-done event study that used a natural experiment approach to capture the market reaction to SOX (it was generally negative). In the last couple of years, Kate, who does not have a PhD, has spent a lot of time learning sophisticated econometric techniques. It really showed. Very impressive (and easy to follow) presentation.

To be frank, I’ve been quite skeptical of studies showing a negative relationship between SOX and equity prices, on several grounds: (1) my practice experience managing the creation of event studies that dealt with changing legal regimes suggested that results are rarely as robust as one might hope; (2) the passage and eventual implementation of SOX were so attenuated that event studies would seem hard to perform; and (3) the debate is quite politicized, with folks already disposed to dislike federalization of corporate law leading the charge on the empirical front as well. But, having read Kate’s paper, I’m inclined to rethink my position. It is well worth a read.
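For readers who have not built one, here is a stylized sketch in Python – entirely hypothetical return series – of the market-model mechanics behind this kind of event study: fit a firm’s relationship to the market over an estimation window, then measure how far its returns deviate from that prediction in the window around the event. The hard part in the SOX setting is not this arithmetic; it is point (2) above, choosing defensible event dates for a drawn-out legislative process.

```python
# Stylized event-study mechanics with simulated data. Real studies face the
# harder problems of choosing event dates and controlling for confounding news.

import numpy as np
import pandas as pd

def abnormal_returns(stock, market, est, event):
    """Fit a market model (alpha, beta) on the estimation window, then return
    abnormal returns (actual minus predicted) over the event window."""
    beta, alpha = np.polyfit(market.iloc[est], stock.iloc[est], 1)
    predicted = alpha + beta * market.iloc[event]
    return stock.iloc[event] - predicted

# Simulated daily returns for 250 trading days (purely illustrative).
rng = np.random.default_rng(0)
market = pd.Series(rng.normal(0.0, 0.01, 250))
stock = 0.0002 + 1.2 * market + pd.Series(rng.normal(0.0, 0.01, 250))

ar = abnormal_returns(stock, market,
                      est=slice(0, 200),        # estimation period
                      event=slice(200, 210))    # days around the "event"
print("Cumulative abnormal return:", ar.sum())
```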

Nominally Empirical Evidence of Unraveling in the Law Review Market

In a previous post, I observed that “the time for submitting law review articles is creeping backwards.” I then hypothesized that “we are experiencing what Alvin Roth called the ‘unraveling’ of a sorting market.” This is bad news:

Authors may not be able to get any sense at all of the “market value” of their article (loosely reflected, the myth goes, by multiple offers at a variety of journals). Conversely, journals feeling pressure to move quickly will increasingly resort to proxies for quality like letterhead, prior publication, and the eminences listed in the article’s first footnote (which tell you who an author’s friends and professional contacts are).

At the end of that post, I promised to “explore empirical evidence that this is in fact an unraveling market problem (as opposed to anecdote, to the extent possible).” As it turns out, this was a hard promise to deliver on. There simply isn’t data out there – at least none that I’ve been able to find – that collects historical information about the submission process at law reviews. This is somewhat surprising. Law professors are insular, interested in navel gazing, and well-motivated to do anything other than grading. Moreover, the process of submission is an economically consequential activity. But only recently, in two works-in-progress, has there been any attempt to get at this problem systematically. See here, and here.

I thought I’d make a modest contribution to the field by adding some data from Temple from this recent submission season, and by asking our readers to contribute their experience as well. The sample size is tiny; the respondents self-selecting. This is, therefore, Co-Op’s second “very non-scientific survey” this week. It’s a trend! The data is not meant to suggest any definite conclusions, but rather to help researchers with hypothesis formation. But I’ll offer some grand thoughts at the end of this post anyway.

Reefer Madness At The FDA

One of the most troubling behaviors of the current administration is its repeated willingness to manipulate the distribution of empirical data with which it disagrees. From global warming to crime, the government seems more interested in promoting its policy preferences than transparently reporting the results of the research it performs or supports. The administration has a legitimate right to advocate for its positions. But if it wants to argue that marijuana ought to be illegal, as the FDA did last week in its Inter-Agency Advisory Regarding Claims That Smoked Marijuana Is A Medicine, it seems to me the better policy – both from an honesty and a credibility point of view – is to concede the facts that cut against you, and make your case anyway. In its press release last week, the FDA asserted that:

A past evaluation by several Department of Health and Human Services (HHS) agencies, including the Food and Drug Administration (FDA), Substance Abuse and Mental Health Services Administration (SAMHSA) and National Institute for Drug Abuse (NIDA), concluded that no sound scientific studies supported medical use of marijuana for treatment in the United States.

True as this may be, a 1999 review of studies by the National Institute of Medicine suggests that marijuana offers potential therapeutic value for pain relief, control of nausea and vomiting, and appetite stimulation. Also, it notes that “until a non-smoked, rapid-onset cannabinoid drug delivery system becomes available…there is no clear alternative” to smoking. Why can’t the administration concede the existence of this data review by another federal agency?

It seems to me that the administration is driven by a decision, ex ante, that marijuana ought to be illegal. If it were truly interested in investigating the utility of the drug, it wouldn’t make serious research into its value exceedingly difficult. So the federal government ignores data suggesting the value of marijuana. It makes it hard to generate more research on marijuana. And it is therefore able to rail against the many states that have legalized marijuana for medical purposes. There are reasons to believe that, if the government allowed the debate to flourish – by sharing data that does exist and promoting the production of new data – its position might become weaker. But if marijuana is in fact effective as a medicine, perhaps the FDA should legalize it. And if the government’s real argument is something other than efficacy – that it is very likely to be misused, for example, or that its increased availability will lead to a rise in DUI cases – then it should make that case instead.

In some respects, this approach to policy debate reminds me of an argument made by death penalty opponents: that the death penalty is bad policy because it is expensive. But why is it expensive? Because opponents litigate these cases very aggressively. There are many good reasons why some people may oppose the death penalty. But it seems to me that when the people complaining about the cost of capital punishment are the people generating this expense, one should at least be skeptical. I’m not denying that the expense argument might mask a deeper claim: perhaps these cases are so expensive, and require so many appeals, because the state fails to provide excellent counsel in the first instance. But if this is true, wouldn’t a more logical solution to the cost problem be a requirement that states spend money on quality counsel up front, to save in the long haul? In the end, the real claim underneath cost is fairness: the quality of a person’s lawyer should not determine whether he receives a death sentence. That may not “sell” as well to certain voters, but it is the more honest argument.

As for reefer, when government is making the arguments, I think we have a right to expect honesty. The FDA’s dubious pronouncement appears driven primarily by the administration’s emotional hatred of marijuana. Personally, I’d prefer FDA decisions to be grounded in evidence-based research rather than simply madness.

The Most Cited Cases in Administrative Law

Some empirical research is more blog-worthy than essay-worthy. Entering citations into Westlaw’s Allfeds database over lunch may be an example.

Others have observed that Chevron v. NRDC may become the most cited case of any kind by federal courts, displacing Erie v. Tompkins. It has garnered 7909 citations, far ahead of the next most cited case in administrative law, Universal Camera Corp. v. NLRB (substantial evidence), with 4801 citations. Following that, it’s a tight race between Mathews v. Eldridge (due process), with 4293 citations, and Citizens to Preserve Overton Park v. Volpe (hard look), with 4227. The scope-of-judicial-review case that has underperformed is MVMA v. State Farm (arbitrary and capricious), with 2276 citations, fewer than the sort-of-quaint Goldberg v. Kelly’s (due process) 2377 citations and the narrow-issue-area Abbott Labs v. Gardner’s (ripeness) 2910 citations. Chevron has also stolen a lot of Vermont Yankee v. NRDC’s (rulemaking) glory – Vermont Yankee has just 1059 citations. But my not-so-dark-horse candidate for the silver medal in the future is Lujan v. Defenders of Wildlife (standing), with 3775 cites. Not too bad for a case from 1992, and I suspect that the government has installed a shift-F4 macro for the case on every one of its attorneys’ computers.