Category: General Law

A Sokal Hoax for Docs

Via Ezra Klein, a revealing anecdote about the power of “thought leadership”:

In the early 1970s, a group of medical researchers decided to study an unusual question. How would a medical audience respond to a lecture that was completely devoid of content, yet delivered with authority by a convincing phony? To find out, the authors hired a distinguished-looking actor and gave him the name Dr. Myron L. Fox. They fabricated an impressive CV for Dr. Fox and billed him as an expert in mathematics and human behavior. Finally, they provided him with a fake lecture composed largely of impressive-sounding gibberish, and had him deliver the lecture wearing a white coat to three medical audiences under the title “Mathematical Game Theory as Applied to Physician Education.” At the end of the lecture, the audience members filled out a questionnaire.

The responses were overwhelmingly positive. The audience members described Dr. Fox as “extremely articulate” and “captivating.” One said he delivered “a very dramatic presentation.” After one lecture, 90 percent of the audience members said they had found the lecture by Dr. Fox “stimulating.” Over all, almost every member of every audience loved Dr. Fox’s lecture, despite the fact that, as the authors write, it was delivered by an actor “programmed to teach charismatically and nonsubstantively on a topic about which he knew nothing.”

It’s one more rationale for greater disclosure of sources of influence, both in the medical profession . . . and in ostensibly more objective, “algorithmic” authorities.


Freiwald on Much-Anticipated Cell Location Privacy Decision

Professor Susan Freiwald generously agreed to blog about the recent Third Circuit decision regarding the privacy protections afforded cell phone location data.  Here is Professor Freiwald’s commentary on the case:

The Third Circuit has issued the first appellate court decision on the standard by which government agents may compel the disclosure of cell phone subscribers’ location data, i.e., records of the cell towers with which a phone communicates that indicate the phone’s physical location, in In the Matter of the Application of the United States of America for an Order Directing a Provider of Electronic Communication Service to Disclose Records to the Government.  The majority held that Magistrate Judges (MJs) may choose whether to impose a warrant requirement on government agents who seek location data or instead to permit them to satisfy a lower statutory standard (under 18 U.S.C. § 2703(d)) that requires “specific and articulable facts showing … reasonable grounds to believe that the … records … are relevant and material to an ongoing criminal investigation” (the D order standard).  The majority remanded to the MJ who had first considered the government’s application, directing the MJ either to require a warrant based on probable cause or to impose the D order standard, and then to determine whether the government’s application satisfies the chosen requirement.  The majority also directed the MJ to make factual findings and provide an explanation if the MJ demands a warrant.

The issues here are complex, so it may not be immediately clear whether the decision represents a win for communications privacy.  In fact, the decision contains important privacy gains, although more remains to be done.  In what follows, I explain the decision and its significance by addressing the following issues: 1) the parties and their arguments, 2) the court’s statutory analysis, 3) the court’s constitutional analysis, and 4) what happens next.

1) The Parties and their Arguments

The government is the only traditional party in the case.  Applicable law permits agents to seek orders from MJs to compel cell phone service providers (like Sprint or Verizon) to disclose stored location data without ever notifying the person whose records they seek (the target).  In February 2008, an MJ in the Western District of Pennsylvania denied the government’s application for location data, because the government failed to establish probable cause for a warrant.  The government appealed to the District Court, arguing that MJs must grant orders to compel location data whenever the government meets the D order standard, which is easier to satisfy than probable cause.

Before the District Court heard the government’s appeal, however, it invited amici curiae to oppose the government: The Electronic Frontier Foundation represented itself and three other online civil liberties groups, and I weighed in as a law professor who has taught and written on the issues.  Ultimately, the District Court affirmed the MJ’s denial, which set the stage for the government to appeal to the Third Circuit.  Civil Liberties Amici and I submitted briefs in the Third Circuit and participated in oral arguments there in February.

I argued that the government must always establish probable cause and obtain a warrant, as the lower courts had held.  The Civil Liberties Amici argued that if the Third Circuit was not prepared to require a warrant in every case (it wasn’t), it should recognize that MJs may, in their discretion, require a warrant before compelling disclosure of location data, or they may grant such orders under the D order standard.  The majority adopted the approach the Civil Liberties Amici advocated.


University of Toronto Law Journal Volume 60, Number 3, Summer 2010

Misfeasance As An Organizing Normative Idea In Private Law
Peter Benson

New Modes And Orders: The Difficulties Of A Jus Post Bellum Of Constitutional Transformation
Nehal Bhuta

Early Twentieth-Century Canadian Medical Patent Law In Practice: James Bertram Collip And The Discovery Of Emmenin
Virginie Marier, Tina Piper

Investment Rules And The Denial Of Change
Gus Van Harten

Book Review: Law and Religion in Theoretical and Historical Context (Peter Cane, Carolyn Evans & Zoë Robinson eds., 2008)
Anver M. Emon

Current issue also available through Westlaw, LexisNexis/Quicklaw, Scholars Portal and Project Muse.


The Forgotten New Deal — John Nance Garner

Consider the following accident of history, which comes from Bruce Ackerman, not from me.  In 1933, an assassin fired on a car carrying Franklin D. Roosevelt and the Mayor of Chicago.  The Mayor was killed. Suppose, instead, that FDR had been killed.  The Presidency would have passed to his Vice-President, John Nance “Cactus Jack” Garner, who was a conservative Southern Democrat.  President Garner would have been far more likely to veto liberal initiatives coming from an overwhelmingly Democratic Congress.

Does this sound familiar?  It should.  Something like this happened when Lincoln was replaced by Andrew Johnson.  The congressional response at that time was to pass the Fourteenth Amendment to bypass the President.  If Lincoln had not been shot, there would have been no need for the Fourteenth Amendment.  Broad statutes, enforced vigorously by the Executive Branch and combined with the appointment of sympathetic Justices, would have achieved much the same result.

I bring this up for two reasons.  First, it suggests that the difference between the decision to use Article Five and a decision not to do so may turn on random events that have nothing to do with constitutional philosophy.  Second, we could ask which was the better scenario — the textual version or the nontextual one?  The New Deal would have been placed on a firmer footing if there were some New Deal Amendments, but then again the Reconstruction Amendments were thwarted for decades after their ratification and in that sense were less successful.


Future of the Internet Symposium: Do we need a new generativity principle?

[This is the second of two posts on Jonathan Zittrain’s book The Future of the Internet and How to Stop It. The first post (on the relative importance of generative end hosts and generative network infrastructure for the Internet’s overall ability to foster innovation) is here.]

In the book’s section on “The Generativity Principle and the Limits of End-to-End Neutrality,” Zittrain calls for a new “generativity principle” to address the Internet’s security problem and prevent the widespread lockdown of PCs in the aftermath of a catastrophic security attack: “Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities.” (p. 165)

Zittrain argues that by assigning responsibility for security to the end hosts, “end-to-end theory” creates challenges for users who have little knowledge of how to best secure their computers. The existence of a large number of unsecured end hosts, in turn, may facilitate a catastrophic security attack that will have widespread and severe consequences for affected individual end users and businesses. In the aftermath of such an attack, Zittrain predicts, users may be willing to completely lock down their computers so that they can run only applications approved by a trusted third party.[1]

Given that general-purpose end hosts controlled by users rather than by third-party gatekeepers are an important component of the mechanism that fosters application innovation in the Internet, Zittrain argues, a strict application of “end-to-end theory” may threaten the Internet’s ability to support new applications more than implementing some security functions in the network – hence the new principle.

This argument relies heavily on the assumption that “end-to-end theory” categorically prohibits the implementation of security-related functions in the core of the network. It is not entirely clear to me what Zittrain means by “end-to-end theory.” As I explain in chapter 9 of my book, Internet Architecture and Innovation (pp. 366-368), the broad version of the end-to-end arguments [2] (i.e., the design principle that was used to create the Internet’s original architecture) does not establish such a rule. The broad version of the end-to-end arguments provides guidelines for the allocation of individual functions between the lower layers (the core of the network) and the higher layers at the end hosts, not for security-related functions as a group.



Future of the Internet Symposium: Generative End Hosts vs. Generative Networks?

Which factors have allowed the Internet to foster application innovation in the past, and how can we maintain the Internet’s ability to serve as an engine of innovation in the future? These questions are central to current engineering and policy debates over the future of the Internet. They are the subject of Jonathan Zittrain’s The Future of the Internet and How to Stop It and of my book Internet Architecture and Innovation, which was published by MIT Press last month.

As I show in Internet Architecture and Innovation, the Internet’s original architecture had two components that jointly created an economic environment that fostered application innovation:

1. A network that was able to support a wide variety of current and future applications (in particular, a network that did not need to be changed to allow a new application to run) and that did not allow network providers to discriminate among applications or classes of applications. As I show in the book, using the broad version of the end-to-end arguments (i.e., the design principle that was used to create the Internet’s original architecture) [1] to design the architecture of a network creates a network with these characteristics.

2. A sufficient number of general-purpose end hosts [2] that allowed their users to install and run any application they like.

Both are essential components of the architecture that has allowed the Internet to be what Zittrain calls “generative” – “to produce unanticipated change through unfiltered contributions from broad and varied audiences.”

In The Future of the Internet and How to Stop It, Zittrain puts the spotlight on the second component: general-purpose end hosts that allow users to install and run any application they like, and their importance for the generativity of the overall system.



Future of the Internet Symposium: Does anyone care about the ‘rule of law’?

I would like to suggest another angle to consider in this dissection of JZ’s wonderful generative book:  Do we still care about the ‘rule of law’?

The theory of generativity relies on self-governance through an open market approach and embodies an abhorrence of “governability” by states.  This I find troubling.  Why is governability by states so abhorrent?  If we believe in the ‘rule of law,’ governability by states cannot be anathema.  States, through their political and legal processes, express public values through law.  Generativity does not have a mechanism for all of society’s stakeholders to participate in decision-making about the values embedded in technological decisions.  Privacy and security are good examples.  Transparency may be the choice of some online participants with respect to their personal information, but that choice has important third-party implications (e.g., the consensual disclosure of a person’s DNA also reveals information about that person’s non-consenting relatives).  The political and judicial processes arbitrate third-party rights and society’s reasonable expectations of privacy; by contrast, the technological development and deployment/adoption process simply imposes determinations.  With respect to security, JZ recognizes that generativity is self-destructive and looks to individual liability as the solution.  Yet individuals will typically lack sufficient technical knowledge to engage in self-help.  This is the classic situation where citizens look to the state to protect the public’s welfare.

Lon Fuller, in his work The Morality of Law, argued that “laws must exist and those laws should be obeyed by all, including government officials.”  The future of the internet should not grant an immunity card from accountability with respect to public values.  Rejecting governability by states is, more precisely, a rejection of the rule of law.  In this vein, the tethering of appliances may be a natural maturation of the internet toward acceptance and reinforcement of the ‘rule of law.’


Future of the Internet Symposium: Lessons in Designing for Privacy

Disclaimer: The views expressed in this blog post are mine alone and do not in any way represent those of my employer.

It’s quite an opportune time to revisit the ideas laid out in Future of the Internet. Though many critics of the book seem to have focused on the dichotomy between tethered devices and generative ones, I have never found that to be the most interesting piece of the book. Going back to the early 1980s and the original End to End paper, many scholars have pointed out the policy implications of architecture design and code. Zittrain built on this work in useful ways, the most interesting of which (to me) are his thoughts about how to preserve generativity while simultaneously tackling some of the toughest policy challenges ahead: censorship, privacy, and security among others.

As I reread the book this weekend, I was struck by Zittrain’s prediction that a “government able to pressure the provider of BlackBerries could insist on surveillance of e-mails” as an example of perfect enforcement enabled by tethered devices. Three years later we’re witnessing increasing numbers of examples of exactly that type of behavior. There is something to the idea that tethering devices in the name of – for example – increased security could create unintended consequences and policy considerations. The proposed solutions to the possibility that a tethered device might be used to censor information or surveil citizens are what fascinate me. Witness the success of Herdict in identifying availability of specific web pages from around the world. Increased transparency across the industry might help these efforts scale. Google last spring released data on the number of requests it receives from governments around the world to take down content and/or access information about users. Imagine the type of transparency we would achieve if even ten other companies of that scale released similar data and a clearing house were available to host all that data together.



Future of the Internet Symposium: The Roles of Technology and Economics

I’m delighted to have this opportunity to participate in this symposium.  I’m a computer scientist, not a law professor; most of my comments will tend to be at the intersection of technology and public policy.

When reading Jonathan Zittrain’s book — and I agree with his overall thesis about generativity — it’s important to take into account what was technically and economically possible at various times.  Things that are obvious in retrospect may have been obvious way back when, too, but the technology didn’t exist to do them in any affordable fashion.  While I feel that there are a number of sometimes-serious historical errors in the early part of the book — for example, AT&T, even as a monopoly, did not just lease modems but also modified its core network to support them; data networking was not solely a post-Carterfone phenomenon — the more serious problems stem from ignoring this perspective.  I’ll focus on one case in point: the alleged IBM control of mainframes.


Labor Day Read: Kim Bobo, Wage Theft in America

I’ve come across a number of good books that describe recent developments in the labor market (including The Disposable American, Nickel and Dimed, The Gloves-Off Economy, New Capitalism?, and Fast Boat to China), but today I particularly want to recommend Kim Bobo’s Wage Theft in America. Bobo is founder and director of Interfaith Worker Justice, and pursues her calling as writer and activist with moral seriousness and inspiring determination.

Bobo reports that “even the Economic Policy Foundation, a business-funded think tank, [has] estimated that companies annually steal 19 billion dollars in unpaid overtime.” I found three aspects of Wage Theft particularly compelling:

1) The stories of poor and hard-working individuals are often moving. Bobo relates the words of Jeffrey Steele, an African American construction worker from Atlanta who came to New Orleans post-Katrina and found himself repeatedly denied wages he was clearly due by unscrupulous employers:

Contractor after contractor . . . crammed us into filthy living spaces, provided next to nothing to eat, offered practically no safety precautions or equipment and paid workers late and so much less than even promised. If this is how this country allows employers to get away with treating hard working citizens while companies make a profit—then shame on us.
