Category: Empirical Analysis of Law

Song of Jersey City

Rick Garnett recently wrote on “cities’ hipness competition.” According to a recent article in New York Magazine, my urban home (Jersey City) has recently won some prize:

To live [in New York now] is to endure a gnawing suspicion that somebody, somewhere, is marveling and reveling a little more successfully than you are. That they’re paying less money for a bigger apartment with more-authentic details on a nicer block closer to cuter restaurants and still-uncrowded bars and hipper galleries that host better parties with cooler bands than yours does, in an area that’s simultaneously a portal to the future (tomorrow’s hot neighborhood today!) and a throwback to an untainted past (today’s hot neighborhood yesterday!). And you know what? Someone is. And you know what else? Right now, that person just might be living in Jersey City.

It’s not just Tyler Cowen who’s rescuing New Jersey from punchline status–even the uberhip NYM is recognizing us (even if we’re shunned by NYC Bloggers). Our hospitals may be closing, but at least we’ve got a hot arts scene.

Of course, the NYM piece focuses not on all of the JC, but only on the “downtown” close to the Hudson waterfront. I live a bit further down the PATH line, in Journal Square. I think a comparison between the two areas may help us answer Rick’s question: “what law can do — e.g., zoning laws, liquor licensing, etc. — to make cities / metro areas more (or less) attractive to the young (or the old, for that matter)”? Can big urbanism work?

From the New Property to the New Responsibility

Just as Charles Reich was a premier theorist of rights to government largesse, Peter Schuck and Richard Zeckhauser are leading exponents of the responsibilities it entails. In Targeting Social Programs, S&Z focus on the denial of benefits to “bad bets” and “bad apples”:

Bad bets are individuals who are likely to benefit little from social resources relative to other [beneficiaries]. . . . Bad apples are individuals whose irresponsible, immoral, or illegal behavior in the past—and predictably, in the future as well—marks them as unsuitable to receive the benefits of social programs.

This may sound a bit cold-hearted at first, but S&Z make a good case that, behind a veil of ignorance, we’d quite sensibly allocate resources to, say, the transplant recipient who is most likely to benefit, rather than the one who has been on the wait list the longest. They also show how often the worst effects of “bad apples” fall on the disadvantaged citizens near them. (For an example, see Kahan and Meares on anti-loitering ordinances.)

The West Virginia Medicaid program provides an interesting case study of “bad apple screening.” Consider the fate of one beneficiary who refuses to sign a “health responsibility contract”:

Mr. Johnson. . . goes to a clinic once a month for diabetes checkups. Taxpayers foot the bill through Medicaid . . . [b]ut when doctors urged him to mind his diet, “I told them I eat what I want to eat and the hell with them. . . . I’ve been smoking for 50 years — why should I stop now? . . . This is supposed to be a free world.”

Traditionally, there was little Medicaid could do to encourage compliance. But now, “[u]nder a reorganized schedule of aid, the state, hoping for savings over time, plans to reward ‘responsible’ patients with significant extra benefits or — as critics describe it — punish those who do not join weight-loss or antismoking programs, or who miss too many appointments, by denying important services.” But as the article notes, “Somewhat incongruously, [Johnson] appears to be off the hook: as a disabled person he will be exempt under the rules.”

Critics claim the program is unduly intrusive: “What if everyone at a major corporation were told they would lose benefits if they didn’t lose weight or drink less?” asked one doctor. Certainly in some manifestations it could be; consider this 1997 proposal by Judge John Marshall Meisburg:

Congress should . . . consider legislation stipulating that no one can be granted disability by SSA if s/he continues to smoke against the advice of his physician, and smoking is a factor material to the disability, because such claimants are bringing illness and disability upon themselves. Such a law would reduce the burden of proof now needed to deny benefits to persons who fail to heed their doctors’ advice, and would dovetail with legislation just passed by Congress to abolish disability benefits for persons addicted to drug and alcohol. In many cases, smoking is akin to “contributory negligence” and the SSA law should recognize it as such. [From Federal Lawyer, 44-APR FEDRLAW 56 on Westlaw.]

I think S&Z frame the debate in a nuanced enough way to avoid this kind of draconian proposal. But I do have a few quibbles with the framing of their work, if not its substance.


Educated Yet Broke

Can you be too poor to file for bankruptcy, yet have the ability to repay your student loans?

When Congress amended the Bankruptcy Code in 2005, it also amended the Judicial Code to provide for the waiver of the mandatory filing fee for bankruptcy. That’s right. Prior to this statutory amendment, if you were so financially strapped that you couldn’t pay the filing fee (then, $150 for Chapters 7 and 13; now, $220 for Chapter 7 and $150 for Chapter 13), you were out of luck: Per the Supreme Court’s 1973 decision in United States v. Kras, 409 U.S. 434, in forma pauperis relief was unavailable in bankruptcy. Lest we prematurely praise Congress for changing this state of affairs, debtors today will get a waiver of the filing fee only under very narrow circumstances. A debtor must have (1) household income less than 150% of the poverty line and (2) an inability to pay the filing fee in installments (see 28 U.S.C. § 1930(f)(1)).

Now that we have a sense of what Congress deems to be a financially dire situation, at least for purposes of filing for bankruptcy, it strikes me that we might use this measure to gauge a debtor’s inability to repay other types of debts—say, for example, student loans. In an empirical study of the discharge of student loans in bankruptcy, Michelle Lacey (mathematics, Tulane) and I documented that the financial characteristics of the great majority of debtors in our sample evidenced an inability to repay their student loans. One measure we used was the amount of the debtor’s household income in relation to the poverty line established by the U.S. Department of Health and Human Services. We had sufficient information to calculate this figure for 262 discharge determinations. Half of the debtors in this group had household income less than 200% of the poverty line. It didn’t occur to us to run the numbers using the 150% figure applicable to the fee waiver. In light of the new statutory provision, I’ve set out to look at our data from this perspective. The numbers are sobering, to say the least.
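The recomputation itself is straightforward. Here is a minimal sketch of it (the income-to-poverty ratios below are hypothetical, invented for illustration; they are not figures from our study):

```python
# Hypothetical household-income-to-poverty-line ratios for a sample of
# debtors (NOT the study's actual data), used to show how the share of
# debtors below each statutory threshold would be recomputed.
income_to_poverty = [0.9, 1.1, 1.3, 1.4, 1.6, 1.8, 1.9, 2.1, 2.5, 3.0]

def share_below(ratios, threshold):
    """Fraction of debtors with household income below `threshold` x poverty line."""
    return sum(r < threshold for r in ratios) / len(ratios)

# 1.5 is the fee-waiver income test; 2.0 is the threshold used in the study.
print(share_below(income_to_poverty, 1.5))  # 0.4 on these made-up numbers
print(share_below(income_to_poverty, 2.0))  # 0.7 on these made-up numbers
```

With the actual 262 determinations in place of the invented list, the same one-line calculation yields the share of debtors who would satisfy the fee-waiver income test.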


Lempert on ELS

Richard Lempert, guest-blogging at the ELS Blog, has a great series of posts on empirical scholarship in law. In the first, he observed that:

Too often researchers encourage misuses of their results in conclusions that push the practical implications of their research, even when the more detailed analysis emphasizes proper cautions. While this occurs with empirical students of the law in liberal arts schools by political scientists, sociologists, economists and psychologists among others, the problem tends to be more severe in the empirical work of law professors, perhaps because most see their business not as building social or behavioral theory but as criticizing laws and legal institutions and recommending reform.

In the second, he said:

There is also the question of qualitative data. I am distant enough from the ELS movement that I do not know how its core advocates regard qualitative research, but taking down 5 volumes of the Journal of Empirical Legal Studies that happen to be close at hand I could not help but note that every article in every volume had a quantitative dimension. Each had at least a graph, table, equation or regression and most analyzed and presented results using more than one of these analytic modality. Yet qualitative research is as empirically-based as quantitative research and it can be as unbiased and as rigorous. Moreover, it is often more revealing of relationships legal scholars seek to understand, not to mention more accessible and interesting. Lawyers have done many quantitative studies I find useful and admire, but I would not elevate any of them above, for example, Bob Ellickson’s study of Shasta county when it comes to developing and sharing an understanding of the real world or, in this case, illuminating the limitations of the Coase Theorem.

And most recently, he argued for a deeper appreciation of the role of ground-tested theory:

What is plausible depends, of course, on what we know about the matter we are studying. More than occasionally empirical scholars seem to have little appreciation of context beyond the general knowledge everyone has and the specific data they have collected. Without a deep appreciation of context, even the best scholars may be misled. For example, some years ago Al Blumstein and Daniel Nagin, who were and are among the very best of our nation’s quantitative criminologists, did a study of the deterrent effects of likely sentences for draft evasion on draft evasion rates. For its time the study was in many ways exemplary – variables were carefully measured and analyzed, and it was refreshing to see an investigation into deterrence outside the street crimes and capital punishment contexts. The results of the Blumstein-Nagin research strongly confirmed deterrence theory. Resisting the draft by refusing induction was substantially higher in those jurisdictions that sentenced resisters most leniently. Yet I regarded the study as worthless.

To find out why, and to read more of this powerful (but friendly) critique of the newly dominant methodology in legal scholarship, check out the ELS blog!


Does familiarity breed contempt?

I have been reading some interesting articles on the factors that contribute to a court’s or judge’s reversal rate. Because I live in, and litigate cases in, Washington, D.C., where the federal district and circuit court judges occupy the same building, I began to wonder whether there is any correlation between sharing a courthouse and the frequency with which the appellate court reverses the district court. Similarly, I would be interested to know whether workplace proximity affects the frequency with which the appellate court orders a district court judge to recuse himself or herself from sitting on a case. The articles I have found do not address this question.

The federal courthouse in D.C. provides district and circuit court judges with lots of opportunity to interact in the elevators, cafeteria, parking lot, gym, and at various courthouse functions (for example, at the annual chili cook-off organized by Judge Sentelle, or at the holiday caroling hosted by Judge Henderson). Would these sorts of frequent, casual social interactions change the way the appellate judges review their district court colleagues? I could see it cutting either way. On the one hand, the appellate judges might give a little more deference to that district court judge who seems friendly, sensible, smart, and always remembers to ask after the kids when they run into each other in the hallways. On the other hand, the water-cooler familiarity might lead appellate judges to view some of their lower court counterparts as less reliable and trustworthy than others. Although I doubt workplace proximity is a major factor in reversal rates, I would guess that it plays in a little at the margins.


Solum on the Need for Opinions

Larry Solum recently posted a kind response to my post on the need for judicial reasoning. Here is a taste of his analysis:

An obligation to offer justification has obvious accuracy-enhancing effects: it forces the decision maker to engage in an internal process of deliberation about explicit reasons for an action and to consider whether the reasons to be offered are “reasonable” and whether they are likely to be sustained in the event of appeal. Balancing approaches, which consider the costs of procedural rules as well as their accuracy benefits, point us in the direction of the costs associated with requiring justifications on too many occasions and of the costs of requiring justificatory effort that is disproportionate to the benefits to be obtained. Requiring reasons facilitates a right of meaningful participation as well: when a judge gives reasons, then the parties affected by the action can respond–offering counter reasons, objecting to their legal basis, and so forth. Moreover, the offering of reasons provides “legitimacy” for the decision.

Very helpful. Clearly, the procedural justice literature has much to say on whether it is illegitimate for judges to rule without explanation. Much of Larry’s discussion would seem to foreclose the legitimacy of what our commentators have suggested as the backstop for expressed opinions: back-pocket explanations, i.e., reasons produced on litigant demand.

But I still think that much of our thinking on the problem of “why and when reasons” is driven by biases built into our legal DNA by the law school experience. I’ll ramble a bit more on this problem below the jump.


Must District Judges Give Reasons?

Jonathan Adler highlights this astonishing Ninth Circuit opinion on the alleged misconduct of now-embattled District Judge Manuel Real. The case has some interesting facets (previously blogged about here, here, and elsewhere). First, dissents matter. It is more than tempting to attribute the current push to impeach Judge Real to Judge Kozinski’s harsh dissent from the panel’s order exonerating him on the misconduct charge. Second, the case raises a neat issue which relates to what I’ve been writing this summer. While the overall facts of the case are well worth reading in the original, if you’ve ten or twenty minutes, I want to focus briefly on part of Judge Kozinski’s charge against Real: that he failed to explain the reasoning for a controversial order.

The basic story is that Judge Real withdrew the petition in a pending bankruptcy case and stayed a state-court judgment evicting a woman who was appearing before his court in a criminal matter. Both orders were entered apparently sua sponte, or at least without hearing the evicting party’s arguments. According to Kozinski, Judge Real “gave no reasons, cited no authority, made no reference to a motion or other petition, imposed no bond, balanced no equities. The two orders [the withdrawal and stay] were a raw exercise of judicial power…” In a subsequent hearing, Kozinski continued, “we find the following unilluminating exchange”:

The Court: Defendants’ motion to dismiss is denied, and the motion for lifting of the stay is denied . . .

Attorney for Evicting Party: May I ask the reasons, your Honor?

The Court: Just because I said it, Counsel.

Kozinski wrote:

I could stop right here and have no trouble concluding that the judge committed misconduct. [Not only was there a failure of the adversary process . . . but also] a statement of reasons for the decision, reliance on legal authority. These niceties of orderly procedure are not designed merely to ensure fairness to the litigants and a correct application of the law . . . they lend legitimacy to the judicial process by ensuring that judicial action is—and is seen to be—based on law, not the judge’s caprice . . . [And later, Kozinski exclaims] Throughout these lengthy proceedings, the judge has offered nothing at all to justify his actions—not a case, not a statute, not a bankruptcy treatise, not a law review article, not a student note, not even a blawg. [DH: Check out the order of authority!]

So here’s the issue: in the ordinary case, to what extent are judges required to explain themselves?


Perhaps this empirical dog does not hunt.

I have hit a . . . data analysis sticking point with some empirical work that I am doing, and I thought I’d toss the problem out there to see if any of you see something that I do not see. I am a bit embarrassed, however, to admit that I am having a problem analyzing my data, so please refrain from starting any of your comments with “Did you skip 12th grade calc., Nowicki?” or “when, if ever, have you taken a stats class?”

I have calculated the annual percentage change in pay for the CEOs of ten large, publicly traded corporations. I am then comparing those annual percentage changes to the annual percentage changes in profits for those ten corporations, to see if there is a relationship between percentage changes in pay and percentage changes in corporate profits (such as a 10% increase in annual profit being accompanied with a 10% increase in CEO pay).

My ratios of percentage change in pay as compared to percentage change in profit are not producing what I expected to get, however. I have taken my annual percentage changes in pay and divided them by my annual percentage change in profit (for each CEO, for each year).

I expected to be able to then say “A result of 1 or a number greater than 1 is a bad thing” (because it means that the percentage change in pay is GREATER than any percentage change in profit). But things get confusing when I have percentage decreases – I frequently end up with negative numbers that are sometimes indicative of a “good” relationship (a negative percentage change in CEO pay accompanied by a percentage increase in profit, for example) and sometimes indicative of a BAD relationship (a positive percentage pay change accompanied by a NEGATIVE percentage profit change).

Given that I have negative numbers that are sometimes indicating a “good” pay/profit relationship and sometimes indicating a “bad” pay/profit relationship, I am stymied. What am I not seeing? Why am I not able to say “a number greater than 1 is a BAD thing for shareholders in terms of the CEO pay/profit relationship and a number less than one is a good thing”?
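To make the sign problem concrete, here is a minimal sketch with made-up numbers (nothing from the actual dataset; the function names are mine). The ratio comes out negative in both a “good” mixed-sign year and a “bad” one, which is why no threshold on the ratio can separate them. One way around it is to skip the division entirely and classify each CEO-year by comparing the signs and magnitudes of the two changes directly:

```python
def ratio(pay_change, profit_change):
    """Pay change divided by profit change -- the problematic measure."""
    return pay_change / profit_change

def pay_profit_alignment(pay_change, profit_change):
    """Classify the pay/profit relationship without dividing.

    'good' -- pay fell (or held) while profit rose, or pay rose no faster
              than profit;
    'bad'  -- pay rose while profit fell, or pay outpaced profit.
    """
    if pay_change > 0 and profit_change < 0:
        return "bad"            # pay up, profit down
    if pay_change <= 0 and profit_change >= 0:
        return "good"           # pay flat/down, profit flat/up
    # Same sign: compare magnitudes.
    return "bad" if abs(pay_change) > abs(profit_change) else "good"

# Both hypothetical years below produce a NEGATIVE ratio,
# yet one is good for shareholders and one is bad:
print(ratio(-0.05, 0.10), pay_profit_alignment(-0.05, 0.10))  # -0.5 good
print(ratio(0.10, -0.05), pay_profit_alignment(0.10, -0.05))  # -2.0 bad
```

Once the sign cases are handled separately like this, the “greater than 1 is bad” intuition survives intact for the years in which both changes are positive.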


Update on Plea Bargains and Prediction Markets

In Let Markets Help Criminal Defendants, I wrote that “If I were running a public defender service, I’d consider setting up an online prediction market for the conviction of my clients.” I still think this is a good idea, but someone suggested a serious problem that would have to be remedied for the scheme to be possible.

Right now, prediction market bets on judicial events, like the conviction of Lewis Libby (whose graph is to the right), pay off at 100 for conviction, and 0 for any other ending of this set of charges, including a plea. This creates noise which renders them useless for criminal defendants looking to see if they ought to plead. That is, as I didn’t fully appreciate before, traders must be estimating the probability of conviction, tempered by the likelihood of a plea – prices are lower than the actual market estimate of a guilty verdict independent of a plea. That is, if the current price of Libby’s “stock” is .40, that means that incarceration is not 40% likely. It means that traders think it is 60% likely that Libby will win at trial, receive a mistrial, obtain a dismissal, be granted a pardon, or plead. I imagine that the likelihood of a plea accounts for a large percentage of this figure.

If traders thought that conviction prices affected defendant behavior, then presumably they’d seek to put in sell orders at prices above those where rational defendants would plead. This would put downward pressure on price and make the entire system useless from defense counsel’s perspective.

For my system to work, you’d have to exclude the possibility of a plea (i.e., nullify all bets if there is a plea). Of course, this still would create some dynamic tension, as bettors presumably would become eager to invest time and trade only as pleas become less likely – near trial, or in jurisdictions, like Philadelphia, where the District Attorney has a no-plea policy. But the resulting prices would be more informative than those offered by the current system.
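The arithmetic behind the noise problem can be made explicit. On a contract that pays 0 whenever the case ends in a plea, the quoted price equals the conviction probability conditional on reaching a verdict, discounted by the chance of a plea. So one can back out the trial-only number under an assumed plea probability (a sketch; the function name and the 0.33 figure are mine, purely for illustration):

```python
def implied_trial_conviction_prob(price, plea_prob):
    """Back out the conviction probability conditional on the case
    actually reaching a verdict, from a contract that pays 0 on a plea.

    price     -- market price expressed as a probability (e.g., 0.40)
    plea_prob -- assumed probability the case ends in a plea
    """
    if not 0 <= plea_prob < 1:
        raise ValueError("plea probability must be in [0, 1)")
    # price = P(conviction) = P(conviction | no plea) * P(no plea)
    return price / (1 - plea_prob)

# A quoted price of 0.40 understates the risk of losing at trial
# whenever a plea is a live possibility:
print(implied_trial_conviction_prob(0.40, 0.0))   # 0.40 -- plea impossible
print(implied_trial_conviction_prob(0.40, 0.33))  # ~0.60 -- plea one-in-three
```

The discount grows with the plea probability, which is the sense in which current prices are useless to a defendant deciding whether to plead: without an outside estimate of plea_prob, the trial-only number cannot be recovered from the price alone.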


Setting the Bar, and the Limits of Empirical Research

Larry Ribstein and Jonathan Wilson are debating the merits of a strong, exclusionary, state bar.

Wilson’s position is pro-Bar:

Deregulating lawyers as punishment or retribution for a profession that has lost its way would be a recipe for disaster. Deregulating the practice of law would open the floodgates to fraud of every conceivable variety and would only compound the problems that the readers of these pages see in our civil justice system.

Ribstein, naturally, is pro-market:

Big law firms provide a strong reputational “bond” . . . Lawyers can be certified by private organizations, including existing bar associations, which can compete with each other by earning reputations for reliability. . . .We could have stricter pleading rules, or require losers to pay winners’ fees. Or how about this: let anybody into court, but adopt a loser pays rule for parties that come into court represented by anything less than a lawyer with the highest possible trial certificate . . . Even if only licensing would effectively deal with this problem, the licensing scheme should be designed specifically to protect the courts. Instead of requiring the same all-purpose license to handle a real estate transaction and to prosecute a billion-dollar class action, we could have a special licensing law for courtroom practice, backed by tight regulation of trial lawyers’ conduct – something like the traditional barrister/solicitor distinction in the UK.

Josh Wright has picked up the thread of the discussion at TOTM, and suggests that empirical evidence would inform this debate. Unfortunately, as both Larry and he note, there is a paucity of useful studies on point:

If I recall, the Federal Trade Commission has recently been involved in some advocacy efforts in favor of limiting the scope of unauthorized practice of law statutes. My sense is that a number of states must have relaxed unauthorized practice of law restrictions (I think Arizona is one), or similarly relaxed restrictions on lawyer licensing, such that one could directly test the impact of these restrictions on consumers in terms of prices and quality of service. There must be work on this somewhere.

Solove and I have gone around on this question before (see here for the powerful pro-licensing position, and here and here for Solove’s “response”).

Generally, I like Josh’s intuition. It would be quite useful to look to Arizona, or other natural experiments, to help us to answer the problem of the utility of the Bar Exam and other licensing barriers. Surely, there is no reason in the abstract to preserve an ancient system that keeps lawyer fees artificially high, diverts millions of dollars from law students to Barbri, and causes no end of mental anguish simply because it provides a new jurisprudential lens!

But I’m quite skeptical that this is an answerable question, at least in the short term. My thinking is informed somewhat by the new Malcolm Gladwell New Yorker essay about basketball. Although Gladwell extols the virtues of statistical analysis (instead of anecdote, judgment, and valuing the joy of watching Allen Iverson triumph despite his height), the lesson I took from the piece was that:

Most tasks that professionals perform . . . are surprisingly hard to evaluate. Suppose that we wanted to measure something in the real world, like the relative skill of New York City’s heart surgeons. One obvious way would be to compare the mortality rates of the patients on whom they operate—except that substandard care isn’t necessarily fatal, so a more accurate measure might be how quickly patients get better or how few complications they have after surgery. But recovery time is a function as well of how a patient is treated in the intensive-care unit, which reflects the capabilities not just of the doctor but of the nurses in the I.C.U. So now we have to adjust for nurse quality in our assessment of surgeon quality. We’d also better adjust for how sick the patients were in the first place, and since well-regarded surgeons often treat the most difficult cases, the best surgeons might well have the poorest patient recovery rates. In order to measure something you thought was fairly straightforward, you really have to take into account a series of things that aren’t so straightforward.

I know how I would test the direct cost of legal service in Pennsylvania, and I’ve no doubt that it would go down if I (by fiat) abolished the state bar. But I have no good idea of how we can measure lawyer “quality”. To take something as obvious as criminal defense, some really good public defenders will lose every case for a year, but take comfort in having not lost on the top count of a single indictment. Saying that a public defender who went 0 for 50 in 2005 was a less “good” attorney than a prosecutor who went 50-0 would be a real problem. Facts drive litigation, and make empirical investigation of lawyer quality as a quantitative matter hard. And that is for attorneys who perform in public. How do you evaluate the relative strength of deal counsel on a gross level? Count the typos in the document? Talk with the business folks, and ask who got in the way less? [Obviously, deal counsel can be very good and very bad: the point is we need metrics that are easily coded by, say, research assistants.]

So here is the question for our readers. Can you design an empirical project that measures both litigation and transactional practice quality as a function of licensing?