Category: Empirical Analysis of Law


Lewis Libby (66%) Guilty

[Chart: Price for Lewis (Scooter) Libby charges at intrade.com]

While Scott may not be “100% sold on either side” of the Libby trial, two-thirds of traders on the prediction markets think that he will be found guilty of lying on at least one of the charges against him. This is a notable upswing from last June, when I wrote about the market and its intricacies. I pointed out then that prices for these “conviction contracts” include a discount for the likelihood of a plea, so that in the months before trial, prices for conviction are likely to be depressed. The rise in the price of Libby’s conviction contract demonstrates the point. Although there were few surprises at trial, traders raised the likelihood of conviction by over 20% after the trial began, reflecting the end of the plea discount and the market’s real estimate of the likelihood of conviction. Notably, traders don’t seem to think that a mistrial is terribly likely, although the likelihood of conviction has fallen from 75% since the beginning of the jury’s deliberations.
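To see the mechanics, here is a toy calculation of the plea discount, a minimal Python sketch with invented probabilities rather than the actual intrade prices:

    # A "conviction contract" pays $1 if the defendant is convicted at trial.
    # A plea deal ends the case without a trial verdict, so the contract
    # expires worthless; the pre-trial price embeds that risk. All numbers
    # below are hypothetical.
    p_goes_to_trial = 0.75      # assumed chance there is no plea deal
    p_convict_at_trial = 0.66   # assumed chance of conviction if trial occurs

    pre_trial_price = p_goes_to_trial * p_convict_at_trial
    print(f"pre-trial price: {pre_trial_price:.3f}")    # 0.495

    # Once the trial begins, the plea possibility vanishes and the price can
    # jump to the conditional probability itself with no new evidence at all:
    print(f"in-trial price:  {p_convict_at_trial:.2f}")  # 0.66

On these invented numbers, the contract jumps from roughly 50 to 66 at the start of trial even though nothing about the merits has changed, which appears to be the pattern the Libby contract traced.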

For what it is worth, I tend to agree with Scott that the length of the jury’s deliberations is a mark of seriousness and worth. If we wanted a quick and summary answer, we wouldn’t use a jury; we’d flip a coin.


Pie Charts: The Prime Evil

I’ve been busy working on edits for my recent paper, which attempts to present lots of data in relatively clear ways. I’ve gotten some great comments from readers, none more pointed than this one, in response to a proposed figure providing some descriptive statistics:

Pie charts are bad! They are ugly and provide the reader no visual assistance in comparing categories.

I had no idea if this was a generally accepted view among experts in the visual display of quantitative information. Extensive research suggested that it was:

One of the prevailing orthodoxies of this forum – one to which I whole-heartedly subscribe – is that pie charts are bad and that the only thing worse than one pie chart is lots of them.

The thing I don’t get is why: pie charts seem to be a very common form of data presentation, and folks are accustomed to sizing up slices of pie, so the visuals would seem to convey important data. What’s wrong with a slice of pie? (And, more importantly, are dot plots really better?)
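To make the comparison concrete, here is a minimal matplotlib sketch, with made-up numbers, that draws the same five categories both ways. The claimed advantage of the dot plot is that the values sit on a common axis:

    # Same invented data, two presentations. The critics' claim: readers
    # compare positions on a common scale (dot plot) far more accurately
    # than they compare angles and areas (pie chart).
    import matplotlib.pyplot as plt

    categories = ["A", "B", "C", "D", "E"]   # hypothetical categories
    shares = [23, 21, 20, 19, 17]            # hypothetical percentages

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

    ax1.pie(shares, labels=categories)       # the allegedly prime evil
    ax1.set_title("Pie chart")

    ax2.plot(shares, range(len(categories)), "o")   # Cleveland-style dot plot
    ax2.set_yticks(range(len(categories)))
    ax2.set_yticklabels(categories)
    ax2.set_xlabel("Percent")
    ax2.set_title("Dot plot")

    plt.tight_layout()
    plt.show()

With slices this close in size, the pie reader is stuck judging nearly identical angles, while the dot plot makes the ordering immediate. That, at least, is the orthodox argument.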


Law Profs Who Code


Law professors who write about the Internet tend to develop facts through a combination of anecdote and secondary-source research, through which information about the conduct of computer users, the network’s structure and architecture, and the effects of regulation on innovation is intuited, developed through stories, or recounted from others’ research. Although I think a lot of legal writing about the Internet is very, very good, I’ve long yearned for more “primary source” analysis.

In other words, there is room and need for Internet law scholars who write code. Although legal scholars aren’t about to break fundamental new ground in computer science, the hidden truths of the Internet don’t run very deep, and some very simple code can elicit some important results. Also, there is a growing cadre of law professors with the skills needed to do this kind of research. I am talking about a new form of empirical legal scholarship, and empiricists should embrace the Perl script and the network connection as parts of their toolbox, just as they adopted the linear regression a few decades ago.

I plan to talk about this more in a subsequent post or two, but for now, let me give some examples of what I’m describing. Several legal scholars (or people closely associated with legal scholarship) are pointing the way for this new category of “empirical Internet legal studies.”

  • Jonathan Zittrain and Ben Edelman, curious about the nature and extent of filtering in China and Saudi Arabia, wrote a series of scripts to “tickle” web proxies in those countries to analyze the amount of filtering that occurs.
  • Edelman has continued to engage in a particularly applied form of Internet research; see, for example, his work on spyware and adware.
  • Ed Felten – granted, a computer scientist, not a law professor – and his graduate students at Princeton have investigated DRM and voting machines with a policy bent and a particular focus on applied, clear results. Although the level of technical sophistication found in these studies is unlikely to be duplicated in the legal academy soon, his methods and approaches are a model for what I’m describing.
  • Journalist Kevin Poulsen created scripts that searched MySpace’s user accounts for names and zip codes matching entries in the DOJ’s National Sex Offender Registry database, and found more than 700 likely matches. (A stripped-down sketch of this kind of matching appears after this list.)
  • Finally, security researchers have set up vulnerable computers as “honeypots” or “honeynets” on the Internet, to give them a vantage point from which to study hacker behavior.
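At the risk of proving how shallow those hidden truths run, here is a toy Python version of the Poulsen-style matching mentioned above. The file names and columns are hypothetical, and real work would need fuzzy matching and manual verification:

    # Toy record linkage: intersect a (hypothetical) export of social-network
    # profiles with a (hypothetical) registry extract on name plus zip code.
    import csv

    def load_keys(path):
        """Read a CSV with 'name' and 'zip' columns into a set of keys."""
        with open(path, newline="") as f:
            return {(row["name"].strip().lower(), row["zip"].strip())
                    for row in csv.DictReader(f)}

    profiles = load_keys("profiles.csv")   # hypothetical user-account export
    registry = load_keys("registry.csv")   # hypothetical registry extract

    matches = profiles & registry
    print(f"{len(matches)} likely matches")
    for name, zip_code in sorted(matches):
        print(name, zip_code)

Twenty lines of code, and the output is an empirical claim about the world that no amount of armchair theorizing would produce.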

What are other notable examples of EILS? Let’s keep to the grand Solovian tradition and call this a Census. Is this sub-sub-discipline ready to take off, or should we mere lawyers leave the coding to the computer scientists?


Replicability, Exam Grading, and Fairness

What does it mean to grade fairly?

At my law school, and presumably elsewhere, law students aggrieved by a grade can petition that it be changed. Such petitions are often granted in the case of mathematical error, but usually denied if the basis is that, on re-reading, the professor would have reached a different result. The standard of review for such petitions is something like “fundamental fairness.” In essence, replicability is not an integral component of fundamental fairness for these purposes.

Law students may object to this standard, and to its predictable outcome, asserting that if the grader cannot replicate his or her outcomes when following the same procedure, then the overall curve is arbitrary. On this theory, a student should at the least have the right to a new reading of their test, standing alone and without the time pressure that full-scale grading puts on professors.

To which the response is: grading is subjective, and not subject to scientific proof. Moreover, grades don’t exist as platonic ideals but rather as distributions across students: only when reading many exams side by side can such a ranking be observed. We wouldn’t even expect that one set of rankings would be very much like another: each is something like a random draw of a professor’s gut reactions to the test on that day.
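The “random draw” intuition is easy to simulate. Here is a minimal sketch, with invented parameters, in which each exam has a true quality but every reading adds independent noise:

    # How much do rankings move between two independent readings of the
    # same exams? All parameters here are invented for illustration.
    import random

    random.seed(0)
    n_exams = 80
    true_quality = [random.gauss(0, 1) for _ in range(n_exams)]

    def one_reading(noise_sd):
        """Rank the exams after adding grading noise to each true quality."""
        scores = [q + random.gauss(0, noise_sd) for q in true_quality]
        order = sorted(range(n_exams), key=lambda i: scores[i])
        ranks = [0] * n_exams
        for rank, i in enumerate(order):
            ranks[i] = rank
        return ranks

    for noise_sd in (0.2, 0.5, 1.0):
        r1, r2 = one_reading(noise_sd), one_reading(noise_sd)
        shift = sum(abs(a - b) for a, b in zip(r1, r2)) / n_exams
        print(f"noise sd {noise_sd}: average shift of {shift:.1f} rank places out of {n_exams}")

Even modest noise reshuffles the middle of the curve substantially; whether that counts as unfairness or as the irreducible nature of the exercise is the question.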

This common series of arguments tends to engender cynicism among high-GPA and low-GPA students alike. To the extent that law school grading is underdetermined by work, smarts and skill, it is a bit of a joke. The importance placed on these noisy signals by employers demonstrates something fundamentally bitter about law – the power of deference over reason.



Can NVivo Qualitative Empirical Software Help Manage Oceans Of Research?

One of the real challenges for a legal scholar (and probably researchers in many other social science disciplines as well) is figuring out what to do with all those interesting articles you read. Do you make notebooks organized by topic? If so, what happens when a piece has something important to say on multiple topics? Do you create index cards, or their digital equivalents, with relevant quotes? Or, like me, do you find yourself rediscovering the wheel several times – putting an article aside in a stack on day one, and rediscovering it on Lexis or Westlaw four months later when you’re searching for a different issue?

Keeping control of the existing literature – a critical process for those who publish in law reviews (which demand a footnote to support even the most mundane statements) – turns out to be a burdensome and sometimes unsuccessful pursuit. As a result, I’m very intrigued by the idea of using NVivo to help.

What is NVivo? It’s a leading piece of qualitative empirical research software. Yes, Virginia, I did say qualitative. As many folks know, one of my biggest beefs with Empirical Legal Studies is that some of its followers have marginalized qualitative research – so much so that many people with only a passing awareness of ELS believe that all empirical work is quantitative. That discussion is for another day, however. The point is that qual researchers use software to help them keep track of their data… which is to say, their texts.

My understanding of NVivo – formerly known as NUD*IST – is that you can take texts (like law review articles) and drop them into the software. You can then create coding fields and mark selected text as part of such fields. (A discussion of the capacities of qual software is here.) For example, if one were studying the way that courts discuss victims in rape cases, and had created a sample for investigation, one might load the selected cases into NVivo. As the researcher creates particular fields – for example, “victim dressed provocatively”, “victim drinking”, “victim previously worked as prostitute” (as well as “circuit court”, “appellate court”, “female judge”) – she can mark the text in each case that fits into the field. This allows her, at a later point, to run targeted searches for particular marked themes, and also to subdivide by the traits of the cases. Thus, she can identify all the decisions by female judges that identify females as victims, and break them out by year.
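For the code-curious, the workflow reduces to a simple data model. This is not NVivo’s actual interface, just a toy Python rendering of the idea, with invented cases and codes:

    # Documents carry attributes; coded excerpts carry field labels; queries
    # filter on both. A toy model only: not how NVivo actually works inside.
    from dataclasses import dataclass, field

    @dataclass
    class Case:
        name: str
        attributes: dict                               # e.g. {"court": ..., "judge_gender": ...}
        excerpts: list = field(default_factory=list)   # (code_label, text) pairs

        def code(self, label, text):
            """Mark a passage as belonging to a coding field."""
            self.excerpts.append((label, text))

    def query(cases, code_label, **attrs):
        """Excerpts with a given code, filtered by case attributes."""
        return [(c.name, text)
                for c in cases
                if all(c.attributes.get(k) == v for k, v in attrs.items())
                for label, text in c.excerpts
                if label == code_label]

    smith = Case("State v. Smith", {"court": "appellate", "judge_gender": "F", "year": 1998})
    smith.code("victim drinking", "The complainant had been drinking earlier that evening...")

    # All appellate decisions by female judges with text coded "victim drinking":
    print(query([smith], "victim drinking", court="appellate", judge_gender="F"))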

I wonder whether many legal scholars who don’t do qualitative work could benefit from this software simply by using it as a way of containing, coding and organizing all the articles they read in the course of their literature review. I haven’t heard of anyone doing this, but it seems like it might make a lot of sense – particularly for somewhat disorganized researchers. It might not take advantage of all the power of NVivo, but it could be the equivalent of the smartest filing system ever created.

Does anyone have experience with NVivo, or other similar software (like Atlas), that might shed light on this? By the way, many schools have site licenses for this software, so many of those interested in trying this out can do so without spending a dime.


A Minimum Wage Field Experiment

Thanks to Dan for inviting me to guest blog!

Tyler Cowen suggests that the expected increase in the minimum wage may serve as a useful “controlled experiment,” at least if the increase applies to the Northern Mariana Islands but not to American Samoa. A commenter points out that it’s not a well-controlled experiment, because the two territories are not identical. This point rehearses a familiar challenge for empirical legal analysis: legal scholars don’t have the luxury of randomized studies. Even natural experiments rarely provide conclusive evidence of policy effects.
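The usual econometric patch when the comparison group isn’t identical (the approach of the famous minimum wage studies, not anything Cowen commits to here) is difference-in-differences: compare each territory to its own pre-period, then difference the differences. A toy version, with invented numbers:

    # Difference-in-differences on fabricated data. The identifying
    # assumption: absent the wage hike, the two territories would have
    # followed parallel employment trends.
    employment = {
        ("mariana", "before"): 100.0,   # hypothetical employment index
        ("mariana", "after"):   96.0,   # minimum wage raised here
        ("samoa",   "before"):  80.0,
        ("samoa",   "after"):   79.0,   # no increase: the comparison group
    }

    treated_change = employment[("mariana", "after")] - employment[("mariana", "before")]
    control_change = employment[("samoa", "after")] - employment[("samoa", "before")]

    did = treated_change - control_change
    print(f"difference-in-differences estimate: {did:+.1f}")   # -3.0 on these numbers

The parallel-trends assumption is exactly what the commenter is doubting, which is why randomization would be better.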

But we could have randomized studies (or so I will argue in a paper that I am working on). John List is a leader among those who do “field tests” rather than using laboratory experiments or relying on other econometric techniques. (I refer, of course, to John List the economist, not John List the family murderer, although the boundary between these occupations has allegedly blurred recently.) We could do field tests in law, if only legislatures would cooperate.

An explanation, after the jump.


Song of Jersey City


Rick Garnett recently wrote on “cities’ hipness competition.” According to a recent article in New York Magazine, my urban home (Jersey City) has won some prize:

To live [in New York now] is to endure a gnawing suspicion that somebody, somewhere, is marveling and reveling a little more successfully than you are. That they’re paying less money for a bigger apartment with more-authentic details on a nicer block closer to cuter restaurants and still-uncrowded bars and hipper galleries that host better parties with cooler bands than yours does, in an area that’s simultaneously a portal to the future (tomorrow’s hot neighborhood today!) and a throwback to an untainted past (today’s hot neighborhood yesterday!). And you know what? Someone is. And you know what else? Right now, that person just might be living in Jersey City.

It’s not just Tyler Cowen who’s rescuing New Jersey from punchline status – even the uberhip NYM is recognizing us (even if we’re shunned by NYC Bloggers). Our hospitals may be closing, but at least we’ve got a hot arts scene.

Of course, the NYM piece focuses not on all of JC, but only on the “downtown” close to the Hudson waterfront. I live a bit further down the PATH line, in Journal Square. I think a comparison between the two areas may help us answer Rick’s question: “what law can do — e.g., zoning laws, liquor licensing, etc. — to make cities / metro areas more (or less) attractive to the young (or the old, for that matter)”? Can big urbanism work?


From the New Property to the New Responsibility

Just as Charles Reich was a premier theorist of rights to government largesse, Peter Schuck and Richard Zeckhauser are leading exponents of the responsibilities it entails. In Targeting Social Programs, S&Z focus on the denial of benefits to “bad bets” and “bad apples”:

Bad bets are individuals who are likely to benefit little from social resources relative to other [beneficiaries]. . . . Bad apples are individuals whose irresponsible, immoral, or illegal behavior in the past—and predictably, in the future as well—marks them as unsuitable to receive the benefits of social programs.

This may sound a bit cold-hearted at first, but S&Z make a good case that, behind a veil of ignorance, we’d quite sensibly allocate resources to, say, the transplant recipient who is most likely to benefit, rather than the one who has been on the wait list the longest. They also show that “bad apples” often have their worst effects on the disadvantaged citizens near them. (For an example, see Kahan and Meares on anti-loitering ordinances.)

The West Virginia Medicaid program provides an interesting case study of “bad apple screening.” Consider the fate of one beneficiary who refuses to sign a “health responsibility contract”:

Mr. Johnson. . . goes to a clinic once a month for diabetes checkups. Taxpayers foot the bill through Medicaid . . . [b]ut when doctors urged him to mind his diet, “I told them I eat what I want to eat and the hell with them. . . . I’ve been smoking for 50 years — why should I stop now? . . . This is supposed to be a free world.”

Traditionally, there was little Medicaid could do to encourage compliance. But now, “[u]nder a reorganized schedule of aid, the state, hoping for savings over time, plans to reward ‘responsible’ patients with significant extra benefits or — as critics describe it — punish those who do not join weight-loss or antismoking programs, or who miss too many appointments, by denying important services.” But as the article notes, “Somewhat incongruously, [Johnson] appears to be off the hook: as a disabled person he will be exempt under the rules.”

Critics claim the program is unduly intrusive: “What if everyone at a major corporation were told they would lose benefits if they didn’t lose weight or drink less?” asked one doctor. Certainly in some manifestations it could be; consider this 1997 proposal by Judge John Marshall Meisburg:

Congress should . . . consider legislation stipulating that no one can be granted disability by SSA if s/he continues to smoke against the advice of his physician, and smoking is a factor material to the disability, because such claimants are bringing illness and disability upon themselves. Such a law would reduce the burden of proof now needed to deny benefits to persons who fail to heed their doctors’ advice, and would dovetail with legislation just passed by Congress to abolish disability benefits for persons addicted to drug and alcohol. In many cases, smoking is akin to “contributory negligence” and the SSA law should recognize it as such. [From Federal Lawyer, 44-APR FEDRLAW 56 on Westlaw.]

I think S&Z frame the debate in a nuanced enough way to avoid this kind of draconian proposal. But I do have a few quibbles with the framing of their work, if not its substance.



Educated Yet Broke

Can you be too poor to file for bankruptcy, yet have the ability to repay your student loans?

When Congress amended the Bankruptcy Code in 2005, it also amended the Judicial Code to provide for the waiver of the mandatory filing fee for bankruptcy. That’s right. Prior to this statutory amendment, if you were so financially strapped that you couldn’t pay the filing fee (then, $150 for Chapters 7 and 13; now, $220 for Chapter 7 and $150 for Chapter 13), you were out of luck: Per the Supreme Court’s 1973 decision in United States v. Kras, 409 U.S. 434, in forma pauperis relief was unavailable in bankruptcy. Lest we prematurely praise Congress for changing this state of affairs, debtors today will get a waiver of the filing fee only under very narrow circumstances. A debtor must have (1) household income less than 150% of the poverty line and (2) an inability to pay the filing fee in installments (see 28 U.S.C. § 1930(f)(1)).
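In code, the waiver test is a two-prong conjunction. Here is a sketch of § 1930(f)(1) as just described; the function name is mine, and the poverty-line figure must be supplied from the current HHS guidelines:

    # Both prongs must hold for the filing fee to be waived.
    def fee_waiver_eligible(household_income, poverty_line, can_pay_in_installments):
        below_threshold = household_income < 1.5 * poverty_line   # prong (1): under 150%
        return below_threshold and not can_pay_in_installments    # prong (2)

    # Hypothetical debtor: income just under 150% of an assumed poverty line,
    # and unable to pay even in installments.
    print(fee_waiver_eligible(24_000, poverty_line=17_000, can_pay_in_installments=False))  # True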

Now that we have a sense of what Congress deems to be a financially dire situation, at least for purposes of filing for bankruptcy, it strikes me that we might use this measure to gauge a debtor’s inability to repay other types of debts—say, for example, student loans. In an empirical study of the discharge of student loans in bankruptcy, Michelle Lacey (mathematics, Tulane) and I documented that the financial characteristics of the great majority of debtors in our sample evidenced an inability to repay their student loans. One measure we used was the debtor’s household income relative to the poverty line established by the U.S. Department of Health and Human Services. We had sufficient information to calculate this figure for 262 discharge determinations. Of this group of debtors, half had household income less than 200% of the poverty line. It didn’t occur to us to run the numbers using the 150% figure applicable to the fee waiver. In light of the new statutory provision, I’ve set out to look at our data from this perspective. The numbers are sobering, to say the least.
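Mechanically, the re-analysis is just a threshold count over each debtor’s income-to-poverty-line ratio. A sketch with placeholder ratios, not our study’s actual data:

    # Share of debtors below each poverty-line multiple. The ratios here are
    # fabricated placeholders; the real figures are in the study.
    income_to_poverty = [0.9, 1.2, 1.4, 1.6, 1.9, 2.3, 3.1]   # hypothetical ratios

    for threshold in (1.5, 2.0):
        share = sum(r < threshold for r in income_to_poverty) / len(income_to_poverty)
        print(f"below {threshold:.0%} of the poverty line: {share:.0%}")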



Lempert on ELS

Richard Lempert, guest-blogging at the ELS Blog, has a great series of posts on empirical scholarship in law. In the first, he observed:

Too often researchers encourage misuses of their results in conclusions that push the practical implications of their research, even when the more detailed analysis emphasizes proper cautions. While this occurs with empirical students of the law in liberal arts schools by political scientists, sociologists, economists and psychologists among others, the problem tends to be more severe in the empirical work of law professors, perhaps because most see their business not as building social or behavioral theory but as criticizing laws and legal institutions and recommending reform.

In the second, he said:

There is also the question of qualitative data. I am distant enough from the ELS movement that I do not know how its core advocates regard qualitative research, but taking down 5 volumes of the Journal of Empirical Legal Studies that happen to be close at hand I could not help but note that every article in every volume had a quantitative dimension. Each had at least a graph, table, equation or regression and most analyzed and presented results using more than one of these analytic modality. Yet qualitative research is as empirically-based as quantitative research and it can be as unbiased and as rigorous. Moreover, it is often more revealing of relationships legal scholars seek to understand, not to mention more accessible and interesting. Lawyers have done many quantitative studies I find useful and admire, but I would not elevate any of them above, for example, Bob Ellickson’s study of Shasta county when it comes to developing and sharing an understanding of the real world or, in this case, illuminating the limitations of the Coase Theorem.

And most recently, he argued for a deeper appreciation of the role of ground-tested theory:

What is plausible depends, of course, on what we know about the matter we are studying. More than occasionally empirical scholars seem to have little appreciation of context beyond the general knowledge everyone has and the specific data they have collected. Without a deep appreciation of context, even the best scholars may be misled. For example, some years ago Al Blumstein and Daniel Nagin, who were and are among the very best of our nation’s quantitative criminologists, did a study of the deterrent effects of likely sentences for draft evasion on draft evasion rates. For its time the study was in many ways exemplary – variables were carefully measured and analyzed, and it was refreshing to see an investigation into deterrence outside the street crimes and capital punishment contexts. The results of the Blumstein-Nagin research strongly confirmed deterrence theory. Resisting the draft by refusing induction was substantially higher in those jurisdictions that sentenced resisters most leniently. Yet I regarded the study as worthless.

To find out why, and to read more of this powerful (but friendly) critique of the newly dominant methodology in legal scholarship, check out the ELS blog!