Future of the Internet Symposium: (Im)Perfect Enforcement

Prohibition wasn’t working. President Hoover assembled the Wickersham Commission to investigate why. The Commission concluded that despite an historic enforcement effort—including the police abuses that made the Wickersham Commission famous—the government could not stop everyone from drinking. Many people, especially in certain city neighborhoods, simply would not comply. The Commission did not recommend repeal, but by 1931 it was just around the corner.

Five years later an American doctor working in a chemical plant made a startling discovery. Several workers began complaining that alcohol was making them sick, causing most to stop drinking it entirely—“involuntary abstainers,” as the doctor, E.E. Williams, later put it. It turned out they had been in contact with a chemical called disulfiram used in the production of rubber. Disulfiram is well-tolerated and water-soluble. Today, it is marketed as the popular anti-alcoholism drug Antabuse.

Had disulfiram been discovered just a few years earlier, would federal law enforcement have dumped it into key parts of the Chicago or Los Angeles water supply to stamp out drinking for good? Probably not. It simply would not have occurred to them. No one was regulating by architecture then. To dramatize this point: when New York City decided twenty years later to end a string of garbage can thefts by bolting the cans to the sidewalk, the decision made the front page of the New York Times. The headline read: “City Bolts Trash Baskets To Walks To End Long Wave Of Thefts.”

In an important but less discussed chapter in The Future of the Internet, Jonathan Zittrain explores our growing taste and capacity for “perfect enforcement.” Readers are likely familiar with the cyberlaw mantra that “code is law.” What’s striking is that since Lawrence Lessig published Code in 1999, relatively little has been written about the dangers of regulation by architecture, particularly outside of the context of intellectual property. Many legal scholars—Neil Katyal, Elizabeth Joh, Edward Cheng—have instead argued for more regulation by architecture on the basis that it is less discriminatory or more effective. Recall that this was also Joel Reidenberg’s conclusion in “Lex Informatica” (PDF).

Zittrain’s fifth chapter returns the concern over perfect enforcement by architecture to center stage. He draws a distinction—along with Lessig, Cheng, and Gary Marx—between architectural intervention that makes detection and punishment more likely (e.g., traffic-light cameras), and architectural intervention that makes wrongful behavior more difficult or impossible (e.g., speed bumps). Zittrain identifies a third phenomenon, one perhaps closer to disulfiram in the local well, where a party secures a court order to zap the offending practice out of existence.

Zittrain worries that the shift to “tethered appliances” opens the door to too heavy a regulatory touch. He points out that perfect enforcement may lock in mistakes, dampen creativity, and upset the balance of power between government and citizen. To this litany we might add Lessig’s earlier concerns about architecture’s opacity, seductive ease, and the fact that there can be no civil disobedience where the underlying conduct has been rendered impossible. (Rosa Parks never gets to the front of the bus. Henry David Thoreau’s tax is automatically deducted from his paycheck.) Or that some regulation by architecture thwarts separation of powers because there is no law for the executive to prosecute or offense for the judiciary to review.

Still, Zittrain’s question is the right one: if we have the power to switch off murder, do we really decline to use it? And what’s the harm, exactly, in bolting trash cans to the sidewalk?

Lessig used code to show that the Internet is, in fact, governable—maybe particularly so. To borrow from Chris Anderson in another context, “atoms are the new bits.” We are entering an era of unprecedented ability to manipulate the mind, body, and environment. There’s a vaccine that renders cocaine inert. There are cars that won’t start if you’ve been drinking. There are guns that won’t fire for anyone but the registered owner. We might applaud suicide nets on bridges but abhor digital rights management. But where do we draw the line? This search for a defensible distinction between good (appropriate) and bad (inappropriate) regulation by architecture will be one of the formative legal puzzles of our time. We should turn our attention to solving it.


5 Responses

  1. Frank Pasquale says:

    Fascinating post. I am interested in how the work of “Neil Katyal, Elizabeth Joh, Edward Cheng” (who “argued for more regulation by architecture on the basis that it is less discriminatory or more effective”) interacts with that of our symposium organizer Danielle Citron, who has explored the many ways in which “technological due process” needs to be brought to automated systems.

    I would emphasize that, at least in the realm of finance, a definite “sorcerer’s apprentice” quality has emerged in automated technology. Amar Bhide has described it in the HBR:
    http://hbr.org/2010/09/the-big-idea-the-judgment-deficit/ar/1

    “In recent times . . . a new form of centralized control has taken root—one that is the work not of old-fashioned autocrats, committees, or rule books but of statistical models and algorithms. These mechanistic decision-making technologies have value under certain circumstances, but when misused or overused they can be every bit as dysfunctional as a Muscovite politburo.”

    If these are a form of “regulation by architecture,” we should be afraid!

  2. Ryan Calo says:

    Thanks, Frank! Great question. As I think about it, Elizabeth Joh’s argument in “Discretionless Policing” is in a way the flip side of Danielle’s in TDP.

    Elizabeth argues that we ought to automate traffic enforcement precisely because it heads off the kinds of biases minorities regularly experience at the hands of some police. (A traffic light camera simply does not care whether the offender is white or black.) She might argue, and we can ask her, that benefits decisions should also be automated so that everyone gets a fair shake.

    On the other hand, Danielle will counter that automation brings its own troubles, including potentially unreviewable mistakes. But I’ll let her describe her own, wonderful article.

  3. Joel Reidenberg says:

    Subsequent to Lex Informatica, I have also argued in “Technology and Internet Jurisdiction” that technologically-based enforcement should be subject to legal pre-conditions, including an adjudicatory authorization process that evaluates constitutional, statutory, and international constraints. Tethered appliances certainly make it easier to sanction wrongful behavior. But I am skeptical that perfect enforcement is possible.

  4. Danielle Citron says:

    This is a fantastic post, Ryan. Like Frank, I am indeed worried about this kind of perfect enforcement. Joel’s Penn piece is helpful here in sorting out ways to bring rule of law commitments to automated jurisdiction and decision-making. Thanks so much for bringing TDP into the discussion!

  5. Ryan Calo says:

    I love that essay, Joel, and I apologize for not mentioning it. I only wish countries (I’m thinking of Brazil in the YouTube case) would be so careful.

    I think JZ is asking a slightly different question, however, one that I’m trying to get at with my disulfiram example. There literally could not be more process than what led to Prohibition. Yet we might still be concerned about an enforcement action that stamped out all resistance.