Category: Behavioral Law and Economics


CELS VII: Low Variance, High Significance

[CELS VII, held November 9-10, 2012 at Stanford, was a smashing success due in no small part to the work of chief organizer Dan Ho, as well as Dawn Chutkow (of SELS and Cornell) and Stanford’s organizing committee.  For previous installments in the CELS recap series, see CELS III, IV, V, and VI. For those few readers of this post who are data-skeptics and don’t want to read a play-by-play, resistance is obviously futile and you might as well give up. I hear that TV execs were at CELS scouting for a statistics-geek reality show, so think of this as a taste of what’s coming.]

Survey Research isn't just for the 1%!

Unlike last year, I got to the conference early and even went to a methods panel. Skipping the intimidating “Spatial Statistics and the GIS” and the ominous “Bureau of Justice Statistics” panels, I sat in on “Internet Surveys” with Douglas Rivers, of Stanford/Hoover and YouGov. To give you a sense of the stakes, half of the people in the room regularly use mTurk to run cheap e-surveys. The other half regularly write nasty comments in JELS reviewer forms about using mTurk.  (Oddly, I’m in both categories, which would’ve created a funny weighting problem if I were asked my views.) The panel was devoted to the proposition “Internet surveys are much, much more accurate than you thought, and if you don’t believe me, check out some algebraic proof.  And the election.”  Two contrasting data points. First, as Rivers pointed out, all survey subjects are volunteers, and thus it’s a bit tough to distinguish internet convenience samples from some oddball scooped up by Gallup’s 9% survey response rate.  Second, and less comfortingly, 10-15% of the adult population has a reading disability that makes self-administration of a survey prompt online more than a bit dicey.  I say: as long as the disability isn’t biasing with respect to contract psychology or cultural cognition, let’s survey on the cheap!

Lunch next. Good note for presenters: avoid small pieces of spinach/swiss chard if you are about to present. No one will tell you that you have spinach on a front tooth.  Not even people who are otherwise willing to inform you that your slides are too brightly colored. Speaking of which, the next panel I attended was Civil Justice I. Christy and I presented Clusters are Amazing. We tag-teamed, with me taking 9 minutes to present 5 slides and her taking 9 minutes to present the remaining 16 or so.  That was just as well: no one really wanted to know how our work might apply more broadly anyway. We got through it just fine, although I still can’t figure out an intuitive way to describe spectral clustering. What about “magic black box” isn’t working for you?
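For the curious, the black box does have a short mechanical description: build a similarity graph over the items, form its normalized Laplacian, and cluster using the Laplacian’s leading eigenvectors. Here is a toy, numpy-only sketch of the two-cluster case; the data points and the sigma are invented for illustration, and real applications run k-means on several eigenvectors rather than splitting a single one by sign:

```python
import numpy as np

# Toy sketch of spectral clustering for two clusters, with invented data.
# Steps: (1) Gaussian affinity matrix, (2) symmetric normalized graph
# Laplacian, (3) split by the sign of the second-smallest eigenvector
# (the Fiedler vector).

points = np.array([[0, 0], [0, 1], [1, 0],      # one tight group
                   [4, 4], [4, 5], [5, 4]])     # another, farther away

# Pairwise squared distances -> Gaussian affinities (sigma = 1, arbitrary)
sq_dists = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq_dists / 2.0)

d = W.sum(axis=1)
L = np.eye(len(points)) - W / np.sqrt(np.outer(d, d))  # I - D^{-1/2} W D^{-1/2}

eigvals, eigvecs = np.linalg.eigh(L)        # eigenvalues come back ascending
labels = (eigvecs[:, 1] > 0).astype(int)    # sign split on the Fiedler vector

print(labels)   # one label for the first three points, another for the rest
```

The intuition: when the graph nearly splits into two weakly connected pieces, the Fiedler vector is roughly constant on each piece with opposite signs, so its sign recovers the partition.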



The Price of Bankruptcy

Credit Slips highlights a very cool new paper, Bankruptcy Spillovers: Distance, Public Disclosure, and Opaque Information.  In the paper, Barry Scholnick examines bankruptcy filings in Canada at a micro level.  Looking at the postal code of every filer – a much more precise geographic identifier than a U.S. zip code – Scholnick concludes:

“The punch line of my study is that there is indeed a significant impact from the past bankruptcies of neighbors (as defined by the very small Canadian Post Codes) to the probability that an individual in the neighborhood will file . . . I propose, and provide evidence for, the hypothesis that if a defaulter lives in a neighborhood with a large number of previous bankruptcies among the neighbors, then that individual will choose to default via bankruptcy rather than charge-off. This is because more neighborhood bankruptcies will lower stigma or provide more information about the process of bankruptcy.

“On the other hand, I show that defaulters who live in low bankruptcy neighborhoods choose to default via charge-off rather than bankruptcy. This is consistent with the argument that low bankruptcy neighborhoods have higher levels of bankruptcy stigma, thus individual defaulters choose to default via charge-off in order to maintain more privacy about their default.”

This paper not only fits within a literature on bankruptcy, but also is a nice match to work by my co-author Tess Wilkinson-Ryan on how mortgage foreclosure and other forms of breach are socially mediated events.  Abiding by onerous contracts is unpleasant, but we do it so long as it is socially validated. When it stops being socially normal to stick with terrible deals, we exit them.


UCLA Law Review, 59.5 (2012)



Implicit Bias in the Courtroom
Jerry Kang et al.
59 UCLA L. Rev. 1124

The Supreme Court’s Regulation of Civil Procedure: Lessons From Administrative Law
Lumen N. Mulligan & Glen Staszewski
59 UCLA L. Rev. 1188

Techniques for Mitigating Cognitive Biases in Fingerprint Identification
Elizabeth J. Reese
59 UCLA L. Rev. 1252

Credit CARD Act II: Expanding Credit Card Reform by Targeting Behavioral Biases
Jonathan Slowik
59 UCLA L. Rev. 1292

Shocking the Conscience: What Police Tasers and Weapon Technology Reveal About Excessive Force Law
Aaron Sussman
59 UCLA L. Rev. 1342

Stanford Law Review, 64.5 (2012)

Stanford Law Review

Volume 64 • Issue 5 • May 2012

The City and the Private Right of Action
Paul A. Diller
64 Stan. L. Rev. 1109

Securities Class Actions Against Foreign Issuers
Merritt B. Fox
64 Stan. L. Rev. 1173

How Much Should Judges Be Paid?
An Empirical Study on the Effect of Judicial Pay on the State Bench

James M. Anderson & Eric Helland
64 Stan. L. Rev. 1277

How Congress Could Reduce Job Discrimination by Promoting Anonymous Hiring
David Hausman
64 Stan. L. Rev. 1343


Stanford Law Review, 64.4 (2012)

Stanford Law Review

Volume 64 • Issue 4 • April 2012

The Tragedy of the Carrots:
Economics and Politics in the Choice of Price Instruments

Brian Galle
64 Stan. L. Rev. 797

“They Saw a Protest”:
Cognitive Illiberalism and the Speech-Conduct Distinction

Dan M. Kahan, David A. Hoffman, Donald Braman, Danieli Evans & Jeffrey J. Rachlinski
64 Stan. L. Rev. 851

Constitutional Design in the Ancient World
Adriaan Lanni & Adrian Vermeule
64 Stan. L. Rev. 907

The Copyright-Innovation Tradeoff:
Property Rules, Liability Rules, and Intentional Infliction of Harm

Dotan Oliar
64 Stan. L. Rev. 951

Testing Three Commonsense Intuitions About Judicial Conduct Commissions
Jonathan Abel
64 Stan. L. Rev. 1021

Derivatives Clearinghouses and Systemic Risk:
A Bankruptcy and Dodd-Frank Analysis

Julia Lees Allen
64 Stan. L. Rev. 1079


Measurable Things

The Misleadingly Convenient Source of Information

A common criticism one reads of ELS is that “too much of the work is driven by the existence of a data set, rather than an intellectual or analytical point.”  It’s ironic that this is the very critique that the realists made of traditional legal scholarship. Consider the great Llewellyn:

“I am a prey, as is every man who tries to work with law, to the apperceptive mass.  I see best what I have learned to see.  I am a prey, too — as are the others — to the old truth that the available limits vision, the available bulks as if it were the whole.  What records have I of the work of magistrates?  How shall I get them?  Are there any?  And if there are, must I search them out myself?  But the appellate courts make access to their work convenient.  They issue reports, printed, bound, to be had all gathered for me in the libraries.  The convenient  source of information lures.  Men work with it, first, because it is there; and because they have worked with it, men build it into ideology.  The ideology grows and spreads and gains acceptance, acquires a force and an existence of its own, becomes a thing to conjure with:  the rules and concepts of the courts of last resort.”

Or to put it differently, all of our work – quantitative empiricists, doctrinalists, corporate finance wizards, administrative regulation parsers, legal philosophers, and derivative social psychologists alike – is driven by the materials at hand. For most lawyers and legal academics, appellate opinions are the most convenient pieces of information available; we use such opinions to create mental models of what the “law” is, and (ordinarily in legal scholarship) what it ought to be. Indeed, when trial court opinions are cited, they are often discounted as aberrant or transitory, in part because they are known to be unrepresentative!

Why, you might wonder, is the convention of data-driven scholarship a particular problem in quantitative empirical work? ELS’s detractors make three interrelated claims:



Some Truly Fascinating Numbers on Video Game Economics

Back in October, Valve co-founder Gabe Newell explained the economics of video games as his company sees them. The Geekwire article is worth the read. For now, I’ll point out that he admits “We don’t understand what’s going on” and uses the language of co-creation of value, which I happen to believe is the near future, as it were, to describe what the company is doing:

This is probably the biggest change that’s affected the gaming business over the last few years. It’s not just that we have digital distribution to our customers. It’s that we have this incredible two-way connection that we’ve never had before with our customers.
We’ve gone from a situation where we dream up a game, we spend three years making it, we put it in a box, we put it out in stores, we hope it sells, to a situation that’s incredibly more fluid and dynamic, where we’re constantly modifying the game with the participation of the customers themselves.

The comments on piracy comport with insights from other industries:

One thing that we have learned is that piracy is not a pricing issue. It’s a service issue. The easiest way to stop piracy is not by putting antipiracy technology to work. It’s by giving those people a service that’s better than what they’re receiving from the pirates. For example, Russia. You say, oh, we’re going to enter Russia, people say, you’re doomed, they’ll pirate everything in Russia. Russia now outside of Germany is our largest continental European market. … the people who are telling you that Russians pirate everything are the people who wait six months to localize their product into Russia. … So that, as far as we’re concerned, is asked and answered. It doesn’t take much in terms of providing a better service to make pirates a non-issue.

The information on pricing is really cool. “[W]e varied the price of one of our products. We have Steam so we can watch user behavior in real time. That gives us a useful tool for making experiments which you can’t really do through a lot of other distribution mechanisms. What we saw was that pricing was perfectly elastic. In other words, our gross revenue would remain constant. We thought, hooray, we understand this really well. There’s no way to use price to increase or decrease the size of your business.”

Yet he goes on to describe how a sale such as a 75% price reduction meant that “gross revenue increased by a factor of 40.” They tested against a product they did not own and saw similar results. Then they tested free. It turns out free to play and free work differently. His thought is that the user base matters because users value the products differently, including “what the statement that something is free to play implies about the future value of the experience that they’re going to have.”
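Newell’s two pricing observations sit awkwardly together, and a little arithmetic shows the gap: a 75% price cut that multiplies revenue by 40 means unit sales rose 160-fold, implying demand far more elastic than the revenue-constant case he first describes. A back-of-the-envelope sketch (the figures are from the interview; the log-ratio elasticity formula is the standard arc measure):

```python
import math

# Back-of-the-envelope check on Newell's numbers. His flat-revenue
# observation corresponds to unit-elastic demand (revenue = price x
# quantity stays constant); the sale numbers imply something very different.

price_ratio = 0.25     # a 75% price reduction
revenue_ratio = 40.0   # "gross revenue increased by a factor of 40"

# Since revenue = price * quantity, the implied change in unit sales is:
quantity_ratio = revenue_ratio / price_ratio     # 160x as many units sold

# Arc (log) elasticity of demand implied by the sale:
elasticity = math.log(quantity_ratio) / math.log(price_ratio)

print(quantity_ratio)        # 160.0
print(round(elasticity, 2))  # -3.66: far from the -1 of unit-elastic demand
```

Of course, a time-limited sale also shifts demand (publicity, impulse buys), so this is an upper bound on the pure price effect, which may be part of what Valve means by “we don’t understand what’s going on.”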

Furthermore, conversion rates shift too. Free to play often “see[s] about a 2 to 3 percent conversion rate of the people in their audience who actually buy something, and then with Team Fortress 2, which looks more like Arkham Asylum in terms of the user profile and the content, we see about a 20 to 30 percent conversion rate of people who are playing those games who buy something.”

What do all these tests mean? As Newell said, it’s unclear. That is why I could see some rather cool studies being done for this emerging area.


Nest Thermostat, Data Driven for Your Pleasure and Green Health

As Deano and others might say, Baby, It’s Cold Outside. And heating costs are no joke. Neither is about $250 for a thermostat. Nonetheless, data and networks are changing the way we manage heating. As Wired reports, Tony Fadell, founder of Nest Labs, makes this compelling point:

Untold tons of carbon were being pumped into the air, with people losing billions of dollars in energy costs, all because there was no easy, automatic way to control the temperature. But what if you could apply all the skills and brilliance of Silicon Valley to produce a thermostat that was smart, thrifty and so delightful that saving energy was as much fun as shuffling an iTunes playlist?

So far, you may be thinking that programmable thermostats are old hat. They are, and they may not have worked as well as hoped, given that the Times reports that “Two years ago, the federal government eliminated the entire programmable thermostat category from its Energy Star program.” Yet there is something different here. Improved, networked climate control is not your father’s Oldsmobile. It sounds crazy, but the pre-orders sold out and demand is high. Others are in the game as well; some require more tech savvy to install. Regardless, the idea is that data and networks will let you manage energy costs well.

The Nest seems to be the leader for easy use and installation. The Times explains that the design is great, but then the iPod designer would have to do that, right? The best part for me is that the Nest uses Wi-Fi, which means software updates, programming from the Web or an App, and it learns.

Learns? Yes, learns. The system tells users how much time it will take to raise a house’s temperature (which curbs the habit of cranking the dial far past the target in hopes of getting there faster), and it notes manual adjustments for home, midday, away, and so on, in order to offer an automatic cycle attuned to the household’s habits. Motion sensors help set basic overrides for heating and cooling to take care of times when no one is home. In a nod to behavioral economics and some things that I think Ryan Calo has been considering, the Times explains that “Nest says that turning down your thermostat by even a single degree can save you 5 percent in energy. To that end, it offers a little motivational logo: a green leaf. It glows brighter as you turn the ring beyond your standard comfort zone. As a positive-reinforcement technique, it’s a lot more effective than an exhortation from Jimmy Carter to put on a sweater.”
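The learning loop described above can be caricatured in a few lines: log each manual setpoint change by hour of day, then propose an automatic schedule from what the user chose. This is a hypothetical sketch of the general idea, not Nest’s actual algorithm; the class name, defaults, and averaging rule are all invented:

```python
from collections import defaultdict

# Hypothetical sketch of the "it learns" loop, not Nest's real algorithm.
# Record manual setpoint changes by hour of day, then propose an
# automatic schedule from the average of what the user chose.

class LearningThermostat:
    def __init__(self, default_temp=68):
        self.default_temp = default_temp
        self.adjustments = defaultdict(list)   # hour of day -> temps chosen

    def manual_adjust(self, hour, temp):
        """User turns the dial at a given hour; remember the choice."""
        self.adjustments[hour].append(temp)

    def schedule(self):
        """Propose a setpoint per hour: the learned average, else the default."""
        plan = {}
        for hour in range(24):
            temps = self.adjustments.get(hour)
            plan[hour] = sum(temps) / len(temps) if temps else self.default_temp
        return plan

t = LearningThermostat()
t.manual_adjust(7, 70)    # warm mornings...
t.manual_adjust(7, 72)
t.manual_adjust(23, 62)   # ...cool nights
plan = t.schedule()
print(plan[7], plan[23], plan[12])   # 71.0 62.0 68
```

Motion-sensor overrides and the time-to-temperature estimate would layer on top of a schedule like this; the point is just that a few days of dial-turning is enough data to beat the programmable thermostat nobody ever programs.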

I always feel a little sad when reminded of President Carter’s attempt to address the energy crisis of the 1970s. It seems to flow from a view of WWII America when people buckled down for the greater good, but that had perhaps faded years before his plea. Still, if we have learned that other approaches can aid better judgment and action, maybe we will turn those thermostats to 68 and wear that sweater as the then President asked us to do.


Paying People Not To Talk To Their Cellphones’ Virtual Assistants in Public

The NYT isn’t entirely worthless.  There’s a cute technology piece up on how irritated the reporter and his friends-on-the-street are by people who talk to their iPhone’s Siri when they could just as easily text.  As the Times puts it, this is a problem of unfelt externalities:

“James E. Katz, director of the Center for Mobile Communication Studies at Rutgers, said people who use their voices to control their phones are creating an inconvenience for others — noise — rather than coping with an inconvenience for themselves — the discomfort of having to type slowly on a cramped cellphone keyboard. Mr. Katz compared the behavior with that of someone who leaves a car’s engine running while parked, creating noise and fumes for people surrounding them.”

The piece goes on to claim that eventually we’ll get used to this noise pollution.  Perhaps we will!  But if we don’t, there are options other than anti-nuisance regulation.  After all, there are competing rights here: the right to speak so you don’t have to confront your inability to text without typos and the right not to hear what the person next to you on the subway wants for dinner.  Now, we could ban Siri-like Apps in public places.  But, as all good Coasians know, there’s another option.  We could decide that the Siri-ans should have the right to speak wherever they are: irritated hearers can simply pay the offending speaker not to talk into their iPhone in public.  In fact, I wonder if Apple could perhaps make an App for that.  Call it the “Shut Down Nearby Siris For Five Minutes Auction App.”  People could list the price at which they’d agree to be paid to be silenced; irritated listeners could either pay that price or bid at a lower rate.  If hearers and speakers matched, we’d achieve (in the article’s words) the socially efficient outcome: back to the “old days when people just texted in public.”
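The matching rule behind the imagined app is just a small double auction: speakers post asks (the payment at which they’ll stay silent), listeners post bids, and a silence trade clears whenever a bid meets an ask. A toy sketch, in which the function name, prices, and midpoint surplus-splitting rule are all invented for illustration:

```python
# Toy sketch of the hypothetical "pay Siri users to hush" matching rule.
# Speakers post asks (price to stay silent); listeners post bids.
# Greedy matching: pair the highest bid with the lowest ask while bid >= ask.

def match_silence_trades(asks, bids):
    asks = sorted(asks)                 # cheapest-to-silence speakers first
    bids = sorted(bids, reverse=True)   # most-annoyed listeners first
    trades = []
    for ask, bid in zip(asks, bids):
        if bid >= ask:
            # split the surplus at the midpoint (one of many possible rules)
            trades.append(round((ask + bid) / 2, 2))
        else:
            break  # remaining pairs can't clear
    return trades

speaker_asks = [1.00, 2.50, 6.00]    # $ per five minutes of silence
listener_bids = [5.00, 4.00, 0.50]
print(match_silence_trades(speaker_asks, listener_bids))  # [3.0, 3.25]
```

The Coasean punch line falls out of the code: trades happen only where a listener’s annoyance exceeds a speaker’s value of speaking, so whoever values the airspace most ends up with it, regardless of who held the initial right.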


CELS VI: Half a CELS is Statistically Better Than No CELS

Northwestern's Stained Glass Windows Made Me Wonder Whether Some Kind of Regression Was Being Proposed

As promised, I’m filing a report from the Sixth Annual Empirical Studies Conference, held 11/4-11/5 at Northwestern Law School.  Several of the attendees at the Conference approached me and remarked on my posts from CELS V, IV, and III. That added pressure, coupled with missing half of the conference due to an unavoidable conflict, has delayed this post substantially.  Apologies!  Next time, I promise to attend from the opening ceremonies until they burn the natural law figure in effigy.  Next year’s conference is at Stanford.  I’ll make a similar offer to the one I’ve made in the past: if the organizing committee pays my way, I promise not only to blog the whole thing, but to praise you unstintingly.  Here’s an example: I didn’t observe a single technical or organizational snafu at Northwestern this year.  Kudos to the organizing committee: Bernie Black, Shari Diamond, and Emerson Tiller.

What I saw

I arrived Friday night in time for the poster session.  A few impressions.  Yun-chien Chang’s Tenancy in ‘Anticommons’? A Theoretical and Empirical Analysis of Co-Ownership won “best poster,” but I was drawn to David Lovis-McMahon & N.J. Schweitzer’s Substantive Justice: How the Substantive Law Shapes Perceived Fairness.  Overall, the trend toward professionalization in poster display continues unabated.  Even Ted Eisenberg’s poster was glossy & evidenced some post-production work — Ted’s posters at past sessions were, famously, not as civilized. Gone are the days when you could throw some PowerPoint slides onto a board and talk about them over a glass of wine!  That said, I’m skeptical about poster sessions generally.  I would love to hear differently from folks who were there.

On Saturday, bright-eyed and caffeinated, I went to a Juries panel, where I got to see three pretty cool papers.  The first, by Mercer/Kadous, was about how juries are likely to react to precise/imprecise legal standards.  (For a previous version, see here.) Though the work was nominally about auditing standards, it seemed generalizable to other kinds of legal rules.  The basic conclusion was that imprecise standards increase the likelihood of plaintiff verdicts, but only when the underlying conduct is conservative but deviates from industry norms.  By contrast, if the underlying conduct is aggressive, jurors return fewer pro-plaintiff verdicts.  Unlike most such projects, the authors permitted a large number of mock juries to deliberate, which added a degree of external validity.  Similarly worth reading was Lee/Waters’ work on jury verdict reporters (bottom line: reporters aren’t systematically pro-plaintiff, as the CW suggests, but they are awfully noisy measures of what juries are actually doing).  Finally, Hans/Reyna presented some very interesting work on the “gist” model of jury decisionmaking.

At 11:00, I had to skip a great paper by Daniel Klerman whose title was worth the price of admission alone – the Selection of Thirteenth-Century Disputes for Litigation.  Instead, I went to Law and Psychology III.  There, Kenworthey Bilz presented Crime, Tort, Anger, and Insult, a paper which studies how attribution & perceptions of dignitary loss mark a psychological boundary between crime and tort cases.  Bilz presented several neat experiments in service of her thesis, among them a priming survey – people primed to think about crimes complete the word “ins-” as “insult,” while people primed to think about torts complete it as “insurance.”  (I think I’ve got that right – the paper isn’t available online, and I’m drawing on two-week-old memories.)

At noon, Andrew Gelman gave a fantastic presentation on the visualization of empirical data.  The bottom line: wordles are silly and convey no important information.  Actually, Andrew didn’t say that.  I just thought that coming in.  What Andrew said was something more like “can’t people who produce visually interesting graphs and people who produce graphs that convey information get along?”

Finally, I was the discussant at an Experimental Panel, responding to Brooks/Stremitzer/Tontrup’s Framing Contracts: Why Loss Framing Increases Effort.  Attendees witnessed my ill-fated attempt to reverse the order of my presentation on the fly, leading me to neglect the bread in the praise sandwich.  This was a good teaching moment about academic norms. My substantive reaction to Framing Contracts is that it was hard to know how much the paper connected to real-world contracting behavior, since the kinds of decision tasks that the experimental subjects were asked to perform were stripped of the relational & reciprocal norms that characterize actual deals.

CELS: What I missed

The entire first day!  One of my papers with the cultural cognition project, They Saw a Protest, apparently came off well.  Of course, there was also tons of great stuff not written from within the expanding cultural cognition empire.  Here’s a selection: on lawyer optimism; on public housing, enforcement and race; on probable cause and hindsight judging; and several papers on Iqbal, none of which appear to be online.

What did you see & like?