Category: Bright Ideas


Hypotheticals, the Classroom, and Moral Biology

Hypotheticals are a ubiquitous pedagogical tool in both the law and philosophy classrooms. I have recently been thinking about the different functions they serve and whether they are well-suited for the weight we give them. These reflections were prompted by a conference on “Moral Biology,” hosted by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School (which I co-direct), in cooperation with The Project on Law and Mind Sciences at Harvard Law School, the Gruter Institute, the Harvard Program on Ethics and Health, and the MacArthur Law and Neuroscience Project.

I may blog a little bit later about some of the other marvelous things I learned over these two days, but for now I wanted to concentrate on some thoughts that stemmed from a public portion of the conference that can be seen here, involving Josh Greene from Harvard’s Psychology Department, William Fitzpatrick from the University of Rochester’s Philosophy Department, Adina Roskies from Dartmouth’s Philosophy Department, Walter Sinnott-Armstrong from Duke’s Philosophy Department, and Tim Scanlon from Harvard’s Philosophy Department.

At around the 43 to 50 minute mark in the video, Josh discusses Trolley Problems (a thought experiment, with many variants, that asks participants whether to divert a trolley from one track to another) and an experiment done on them by Fiery Cushman (and a collaborator, Switzgable I believe; I could not find the actual paper) in Josh’s lab.  In the experiment, before being asked whether they would endorse the principle of double effect, ethicists with PhDs were asked to reason about variants of the Trolley Problem (switch vs. footbridge) presented in different orders. The experiment found that if one varied the order in which the versions were presented (but always presented all of them), ethicists reached different conclusions about whether they would endorse the principle. [This is Josh’s description in the video; again, if anyone can find the paper he is discussing I will try to link to it.]  The result is surprising in that it suggests even those with PhD training in ethics are susceptible to order effects when reasoning about a very fundamental issue.

As Josh concedes, and as others (in the panel and in written pieces discussing his work) emphasize, the fact that these ordering effects occur is not itself fatal to the enterprise of philosophical analysis using intuitions. It depends on further views about how one uses these kinds of intuitions in the analysis. For present purposes, though, I want to partially side-step that question in favor of thinking about the law classroom, and how this experiment might make us a little more careful about the way we use hypotheticals.

Read More


Mechanical Turk, Research Ethics, and Research Assistants

A recent faculty workshop by my witty and brilliant colleague Jonathan Zittrain on “ubiquitous human computing” (this YouTube video captures, in a different form, what he was talking about) prompted me to think about some ways in which platforms like Amazon’s Mechanical Turk interface with university research and research ethics in interesting ways.

For those unfamiliar, Mechanical Turk allows you to farm out a variety of small tasks (label this image, enter data from this .pdf into a spreadsheet, take a photo of yourself with the sign “will turk for food,” etc.) at a price per unit you set. Millions of anonymous users can then do the task for you and collect the bounty, a form of microwork.

As Jonathan detailed, this raises a host of fascinating issues, but I want to focus on two that are closer to bioethics.

First, I have begun to see some legal academics recruiting populations for experimental work using Mechanical Turk, and there is an emerging literature on the pros and cons of subject recruitment from these populations. Are Mechanical Turkers “research subjects” within the legal (primarily the Common Rule, if one receives federal funding) or broader ethical sense of the term? Should they be? Take as a tangible example the implicit bias research of the kind Mahzarin R. Banaji has made famous, and imagine it was done over something like Mechanical Turk. How (if at all) should the anonymity of the subject, the lack of subject-experimenter relationship of any sort, the piecemeal nature of the task, etc., change the way an institutional review board reviews the research? It is a mantra in the research ethics community that informed consent is supposed to be a “process” not a document, but how can that process take place in this anonymous, static cyberspace environment?

Second, consider research assistance.

Read More


BRIGHT IDEAS: Mike Sacks on Supreme Court Reporting from the Front Lines

Sometime before the commencement of the Supreme Court’s 2009 term, Mike Sacks, a third-year law student at Georgetown University, had an idea.  Taking advantage of living in close proximity to the Court, Mike would attempt to be the first one in line for all of the major oral arguments of the Court’s term. In addition, he would interview people in line about why they were there and their impressions of the Court and the case to be argued. And, most importantly, he would start a blog to report on his experiences. Mike has been engaging in legal journalism from a unique vantage point: from the front lines — or, from the “front of the line” — of the Supreme Court. Mike’s bright idea has resulted in a successful Supreme Court blog, First One @ One First.  [Recall Mike’s mission to be the “first one” in line at “One First” Street NE (the Court’s address).] Click HERE for the blog’s mission statement. Mike’s experiences and blogging have been featured in the New York Times (see HERE as well), National Public Radio, the ABA Journal, the Washington Post’s WhoRunsGov/PostPolitics, The Atlantic, Slate, Volokh Conspiracy, Above the Law, and other outlets.

Mike’s blogging has also launched the beginning of what is likely to be a successful career in legal journalism. In fact, Mike wrote the cover story for last week’s issue of the Christian Science Monitor.  He has also been blogging at some premier legal blogs. Below, Mike answers some of my questions about his reporting experiences, his impressions of the Court’s term, and his perspective on the Supreme Court in general.

1.  Could you talk briefly about how and why you came up with this idea of what might be called “legal journalism from the front lines?”

Because Concurring Opinions is more of an academic blog, I’ll start with F1@1F’s intellectual underpinnings.  As the Citizens United rehearing approached last September, I noticed that the Roberts Court’s dockets and decisions from OT06 through OT08 appeared to track the surrounding political climate.  Once so boldly conservative on all the hot buttons when operating under the cover of Republican-controlled Legislative and Executive branches, the Roberts Court–now operating alongside Democratic political branches–appeared to have shaped an exceedingly modest OT09 docket so as to have enough political capital to spend on Citizens United without irreparably damaging the Court’s institutional legitimacy.

I wanted to test my hypothesis that the Roberts Court was not only sensitive, but also responsive, to its surrounding political climate. Of course, I could have done this by reading transcripts of oral argument and digging through the decisions once released.  But I lived four blocks from the Court and had already had a blast camping out for Citizens United / Sotomayor’s first day.  When I noticed I had no morning classes for the Spring Term on the Court’s argument days, I really decided to make this an in-the-flesh project.

But I wouldn’t have followed through so thoroughly had I not had vocational motivations as well.  I entered law school very interested in constitutional law, politics, and media.  After my first year, I interned for Nina Totenberg at NPR.  That was the summer of Heller and Boumediene.  I so enjoyed that experience that I took a semester off to work at ABC News’s Law & Justice Unit in New York, where I covered the legal aspects of the 2008 Presidential Election and the Wall Street meltdown.  Once back at school and on the job market, I thought there was no better way to make myself attractive to both legal and media employers than to build a body of work on the Supreme Court beat.

Nevertheless, just another person writing about the Court out in the ether wouldn’t have been too compelling.  But getting out in line at disturbingly early hours and telling the tales of those crazy enough to join me – now that’s something no one had ever done. Indeed, if the Court is responsive to the political climate, and if public opinion on any given case is the “weather” that shapes our broader climate, then I figured those who cared enough to get out in line on bitterly cold mornings well before the sun came up would make a very good representative sample for the people who shape public opinion.  By asking these folk, “why are you here?”, I would be committing interesting journalism while also informing my research about the Roberts Court.

2.   What unique insights have your experiences over the past term given you about the Supreme Court and the justices?

Chief Justice Roberts is a superb political strategist.  He’s steering a right-of-center Court through a left-of-center government and knows which storms his ship can handle and which it cannot.  I wrote prospectively about this back in December, Jeff Rosen of The New Republic wrote about it in February, and Adam Liptak of the New York Times wrote about it just the other day.

What we’ve seen this year is the birth of John Roberts’ Court.  It will always, to a degree, remain the Anthony Kennedy Court as well, until he leaves the bench or one of the conservatives is replaced by a liberal.  But Roberts took control of the Court’s decisionmaking this year in a way we haven’t yet seen.  The next interesting thing to look out for is what issues beyond Miranda, guns, arbitration, and campaign finance the Chief believes are ripe for conservative gains as the Congress and the Presidency remain in Democratic hands.
Read More


BRIGHT IDEAS: Political Scientists Chris W. Bonneau and Melinda Gann Hall on the Judicial Elections Controversy

As I noted in a post on Monday, controversy continues to surround the use of judicial elections in the selection of judges at the state level. Judicial reform advocates seek to abolish judicial elections in an attempt to preserve judicial independence and judicial impartiality. As I noted in Monday’s post, political scientists Chris W. Bonneau (University of Pittsburgh) and Melinda Gann Hall (Michigan State University) have thrown empirical grenades at these arguments in their new book, In Defense of Judicial Elections, which empirically assesses and debunks many of the reformers’ arguments. Professors Bonneau and Hall, who are experts in the areas of judicial selection, state politics, and judicial politics more generally, were kind enough to answer some of my questions about their book, the judicial elections controversy, and judicial selection in general.

For those who are interested in judicial elections, judicial selection, and law and courts more generally, Bonneau and Hall’s book is a must-read! Before you sign on to the judicial reform movement, you must come to terms with the forceful empirical evidence and arguments put forth by Bonneau and Hall. The interview below is a bit long, but it is definitely worth the read!

1.  Your research focuses on the selection of state supreme court judges, for which there are four different selection systems currently used: partisan judicial elections, nonpartisan judicial elections, merit selection with retention elections (the Missouri Plan), and appointment (akin to the appointment process for federal judges). Could you briefly characterize the controversy surrounding judicial elections versus the other systems?

BONNEAU:  The controversy comes down to whether one thinks voters should have a say in who sits on their courts (partisan and nonpartisan elections) or whether this power should be vested in the hands of elites (appointment and retention).  From our perspective, we ask: given that states elect judges, do voters know what they are doing when they vote?  Are there institutional mechanisms that can assist voters?

HALL:  The basic claim about partisan and nonpartisan elections is that electioneering and other forms of electoral politics have unacceptably deleterious consequences for the American bench, including diminishing the public trust and deterring the most qualified candidates from seeking office. Reform advocates also describe voters as disinterested and uninformed, and incumbents as at the mercy of special interests and other financial high-rollers when seeking reelection.

From our perspective, these assertions are testable hypotheses that have proven to be unsubstantiated or incorrect.

2.  Your research is empirical—you analyze data from state supreme court elections to test claims put forth by judicial reform advocates (i.e., opponents to judicial elections). Judicial reform advocates have typically relied on normative arguments related to judicial independence and the need for judicial impartiality. Are these (and other) arguments grounded in reality?

BONNEAU: Based on all the evidence to date, the answer is no.  It is not only our work that highlights this, but also that of people like Jim Gibson and Eric Posner and his colleagues.  So, for example, one of the claims made by reformers is that voters don’t know what they are doing.  We find that, other things being equal, voters are able to distinguish between challengers with prior judicial experience (“quality” challengers) and those who have no such experience.  That is, challengers to incumbents who have prior experience perform better, on average, than those who do not.  Another example:  reformers argue that nobody participates in these elections.  We find that voter participation is quite high, given a competitive election.  When voters are given a meaningful choice, they participate.  One final example:  reformers argue that these elections are exacting a toll on the legitimacy of the court system.  In a series of studies, Jim Gibson has shown that is just not true.

HALL:  This is an excellent question that goes directly to the disjuncture between political scientists and other scholars and practitioners concerned with judicial reform. The reform community, based almost entirely in the legal community, readily accepts normative accounts of judging as entirely apolitical and also assumes that any lifting of the purple curtain will attenuate judicial legitimacy. Similarly, the reform community casts the selection process simply as choosing competent technicians and has the tendency to rely on a normative ideal when evaluating the success or failure of judicial elections.

These normative assumptions are contradicted by modern social science. In fact, judges often have significant discretion and rely on their own political preferences to make decisions. Also, voters have participated in partisan judicial elections for decades without any observable adverse consequences and consistently have shown an unwillingness to relinquish their power over the selection process to political elites. Moreover, an apolitical selection process is a fiction, just as judges are not mere technocrats. In fact, regardless of who chooses judges, these actors seek to forward their own agendas by placing like-minded people on the bench. The federal judicial appointment process illustrates this point well. Finally, when compared to a normative ideal, all American elections fail. State supreme court elections perform as well as or better than elections to other major offices in the United States.

Read More


BRIGHT IDEAS: Andrew Sparks on Charter School Boards & Non-Profit Governance

Andrew Sparks is a recently minted PhD in education whose dissertation on the governance of Philadelphia Charter School boards I happened to come across.  He’s developed a precis of that thesis, Finding Their Own Way: The Work of Philadelphia Charter School Boards in a Complex Accountability Environment.   The short report (which you should read) is a particularly nice example of qualitative research into non-profit board behavior – a subject lamentably understudied by legal academics.   In part spurred by the NYT’s recent articles on Charter performance and governance,  I asked Andrew whether he’d be willing to talk with us about what he found.

1.  Why did you write about charter school governance?

When I decided to study charter school governance about 5 years ago, my advisors at Penn were not thrilled.  It wasn’t, and still isn’t, the “sexiest” topic to research and isn’t where the research money has been headed.  Within the charter school research arena, the vast majority of time and energy has been devoted to trying to figure out whether charter schools “work” – whether they are better than their non-charter competitors.  For me, showing that school A scored a 745 (on a given test) and school B scored a 731 isn’t usually very interesting, especially when it’s only measuring math and/or reading.   Even if we could say school A is better than school B, do we know exactly what makes school A so good, and do we know how to replicate that with what will likely be a different group of students, teachers, administrators and parents?

At about this time I also had a few friends who were asked to join charter school boards.  While these friends were talented people, they had no education background, so I began to wonder, more broadly, “who’s on these boards and what are they doing?”  Having worked in the non-profit field, I was aware of the impact that a board can have on an organization – for better or worse.  Having worked with and researched charter schools enough to understand their general governance framework, it seemed that governance might be a critical piece in their potential success and expansion.

Read More


BRIGHT IDEAS: Talking About Robotics With Ryan Calo

Once just fantasy, robots are increasingly prevalent in the twenty-first century.  Ryan Calo, a Senior Research Fellow at the Stanford Center for Internet and Society, has been doing fascinating research on the topic.  Along with his work at Stanford, Calo serves on the programming group for National Robotics Week and will be co-chairing the Committee on Robotics and Artificial Intelligence for the ABA.  (He also tweets about privacy and robotics.)  This month’s ABA Magazine has a terrific article discussing Calo’s work, and I wanted to follow up on that piece with an interview of my own.  I reproduce my discussion with Calo below.

DC:  Tell our readers about your research on robotics.

RC:  Thanks very much for your interest.  I’m researching essentially two aspects of robotics and the law. First, I’m looking at the potential impact of robots on society—for instance, with respect to privacy—and whether existing laws suffice to address this impact.  Second, I’m investigating what the right legal infrastructure might be to promote safety and accountability but also to preserve the conditions for innovation.  In each case, my focus has been on “personal” or “service” robots, a rapidly expanding category of consumer technology that encompasses everything from a Roomba to a humanoid Nao.  I’m also interested in autonomous vehicles and vehicle features such as lane departure prevention.

DC:  What are the most pressing concerns now and what issues do you foresee as pressing in the future?

RC:  Today the most pressing concern is the military’s use of robotics.  Literally thousands of robots have been deployed in the field, with more on the way.  Peter Singer has marshaled extensive evidence that robots may skew individual and military priorities in some instances.  On the one hand, I agree that we should be worried about our increased capacity and willingness to kill at a distance.  On the other, as Ken Anderson has pointed out, robots may allow for more surgical strikes on enemy targets, reducing so-called “collateral damage” to civilians and infrastructure.

The second pressing concern is the uncertainty around liability for what end-users do with robots.  Robots share two key similarities with computers and software: (1) responsibility can be difficult to parse in the event of a malfunction or accident and (2) many of the innovative uses of robotics will be determined by end-users.  We’ve managed to domesticate the issue of computer liability with doctrines such as economic loss; you cannot sue Microsoft because Word ate your term paper.  But this option is unlikely to be on the table with robots that can cause corporeal harm.

We need to get this issue of liability right.  Would you build robots or invest in robotics if you were uncertain of your legal risk?  Would you build versatile, “generative” platforms (to borrow a term from Jonathan Zittrain) if you might be held accountable for whatever users do with those platforms?  I wouldn’t.

Read More


BRIGHT IDEAS: Nunziato on Virtual Freedom: Net Neutrality and Free Speech in the Internet Age

My colleague at George Washington University Law School, Professor Dawn Nunziato, has recently published a provocative book about the First Amendment and the Internet — Virtual Freedom: Net Neutrality and Free Speech in the Internet Age (Stanford University Press 2009).

Her book explains that, contrary to the prevailing understanding of the Internet as a haven for free speech, our communications on the Internet today are subject to censorship and control by a host of private gatekeepers – most notably, by broadband providers.  Under the prevailing negative conception of the First Amendment, these powerful private gatekeepers are not subject to the First Amendment’s mandate prohibiting censorship.  Unlike real space conduits for communication – like telecommunications providers and the postal service – broadband providers are unregulated in their power to censor speech on the Internet.  Dawn argues for an affirmative conception of the First Amendment, under which public and powerful private gatekeepers of Internet communications are subject to the First Amendment’s mandate to ensure the free flow of communications in the digital age.

I had a chance to ask Dawn a few questions about her new book.

SOLOVE: You point out many compelling examples of how ISPs, search engines, and news aggregators are censoring speech.  Can you briefly describe one or two of the most troublesome of your many examples of speech censorship?

NUNZIATO: The examples of censorship that are most troublesome to me involve content or viewpoint discrimination by broadband providers and wireless carriers.  In my view, broadband providers and wireless carriers should be required to serve as neutral conduits for our expression and should not be permitted to censor or block communications.  In one troubling incident, Verizon Wireless initially refused to allow NARAL Pro-Choice America to send text messages to Verizon customers who had signed up to receive such messages.  Verizon relied on its authority to block messages that “may be seen as controversial or unsavory to any of our users.”  In another incident, Comcast refused to deliver politically-charged, time-sensitive emails from an organization that was critical of President Bush’s handling of the War with Iraq.  Examples like these led me to argue that broadband providers and wireless carriers should be prohibited from discriminating against speech on the basis of viewpoint or content.  Just as telecommunications providers and the postal service have long been regulated as “common carriers” and prohibited from engaging in content discrimination, so too should broadband providers be prohibited from discriminating against content in serving as communications conduits.

SOLOVE: You propose what you call “an affirmative conception of the First Amendment.”  What do you mean by that?

NUNZIATO: Let’s contrast two conceptions of the First Amendment.  Under the negative conception, individuals do not enjoy any affirmative right to speak; rather, they only enjoy the right to prevent the government (and only the government) from censoring their speech.  Censorship by other powerful conduits for expression – like broadband or wireless providers – is permissible under this negative conception – even if it means that individuals actually have no meaningful avenues for expressing themselves.  In contrast, under the affirmative conception of the First Amendment, individuals enjoy an affirmative right to speak, free from content and viewpoint discrimination — regardless of whether such discrimination occurs at the hands of the government or other powerful regulators of speech.  The Supreme Court has recognized such an affirmative conception of the First Amendment in several areas, including in the public forum and company town contexts and in must-carry regulations governing cable TV providers.  But so far, the affirmative conception has not taken root in the Internet context.  This is problematic because virtually all of our speech on the Internet is subject to control by powerful private entities – by broadband providers, email providers, search engines, etc. – and if these gatekeepers of Internet speech are not subject to the First Amendment’s mandate prohibiting censorship, then there is no guarantee that our communication will be free.

SOLOVE: There are some who argue for “net neutrality” – that all ISPs be prohibited from censoring or discriminating against content or applications in any way.   How is what you’re arguing different?

Read More


Bright Ideas: Cahn & Carbone, Red Families v. Blue Families

My colleague, Professor Naomi Cahn (GW Law School) and Professor June Carbone (U. Missouri at Kansas City) have recently published a very provocative and interesting new book, Red Families v. Blue Families: Legal Polarization and the Creation of Culture (Oxford University Press, 2010).  Their book examines the fact that “red” states, despite more restrictive family law, have higher teen pregnancy rates and higher divorce rates than “blue” states.

SOLOVE: What inspired you to write the book?

CARBONE & CAHN: We saw the commentary on the 2004 election about moral values and when we saw the statistics on higher divorce rates in the red states, we reacted, “But we know why that happens, red families marry at younger ages and age is a risk factor for divorce.” When we inquired further, we found the differences were much greater than that and worth much more exploration.

SOLOVE: What are the most central ideas of the book?

CARBONE & CAHN:  There really are two family systems, and one is in crisis while the other is doing reasonably well. The “blue” one invests in women as well as men, delays family formation until after young adults reach emotional maturity and financial independence, and views sexuality as a private matter. The “red” system is a traditional one that continues to preach abstinence, early marriage, and more traditional gender roles. The blue system arose in response to the needs of the post-industrial economy while the religious backlash against the new values has locked red families into a war against modernity.

The two systems map onto increasingly ideological divisions in American politics, and make family a point of intense contestation.

The conflict between the two systems produces counterproductive results, such as abstinence education, which has disproportionate consequences for poor women.

The solution is to reforge values at the state and local level while keeping the pathways (e.g., access to contraception) open through national efforts.

SOLOVE: What was your most surprising finding?

CARBONE & CAHN: We were surprised to find that the relationship between age and divorce is new. While teen marriages have always been risky, those who married at 22 in 1980 had about the same levels of divorce as those who married at 28; today, every increase in age reduces the incidence of divorce. This is surprising to us because it suggests that what is going on is not biological, that is, that the improved stability of later marriage is probably a function of better assortative mating (i.e., the successful marry later and marry similarly successful mates) rather than greater maturity at later ages. It also suggests that what’s wrong with marriage in the early twenties is the absence of the right societal support rather than anything about the immaturity per se of those in their early twenties.

Read More


BRIGHT IDEAS: A Dialogue with Brian Tamanaha

Professor Brian Tamanaha (Washington University School of Law) has been publishing a number of must-read works in jurisprudence.  His latest book is Beyond the Formalist-Realist Divide: The Role of Politics in Judging (Princeton University Press 2010).  Brian was gracious enough to respond to my request for a brief dialogue about his ideas in the book.  I agree with quite a lot of what Brian argues, but I played a bit of a skeptic in my interview questions, as I wanted to push him on some of his arguments.  Here’s our exchange:

Solove: You argue in your book that the “formalist-realist divide” is a myth based on a straw-man account of formalism – that formalists were unduly rigid and mechanistic and that realists correctly pointed out that judges weren’t purely objective robots. You spend the first part of your book debunking the traditional view of formalism, noting that formalists were much more balanced and realistic than their critics give them credit for. You argue that dethroning this picture of formalism should lead to an embrace of “balanced realism.” What do you mean by “balanced realism”?

Tamanaha: My argument is not quite that we have bought into a “straw-man [or exaggerated] account of formalism,” but more strongly, that the “formalist age” was a pure invention by progressive critics to paint judges as deluded or deceptive. I provide substantial evidence in the book showing that judges and jurists at the turn of the century did not believe in “formalism.” There were no avowed “formalists” in the U.S. legal culture (although it did exist in German legal science). Indeed, “formalism” was used as an insult at the time; they associated formalism with a primitive stage in law, which they had progressed beyond; “the Zeitgeist and its dislike of formalism,” wrote a jurist in 1893. I show in the book that, contrary to conventional accounts of the so-called formalist age, the jurists now identified as leading “formalists” (Cooley, Carter, Dillon, etc.) all said very realistic things about judging.

“Balanced realism” recognizes that there are gaps in the law, that sometimes judges have discretion and must make choices, that different judges can sometimes interpret the same law in different ways owing to differences in perspective and background, that inconsistent precedents or conflicts in the applicable law can exist, and that sometimes judges manipulate the law to reach desired ends (I called these factors the “skeptical aspects”); but “balanced realism” also recognizes that a substantial bulk of the time the rules and their application are clear and predictable, that surrounding institutional factors constrain judges, that most judges abide by the commitment to follow the law, and that the overwhelming majority of judicial decisions are legally determined (the “rule bound” aspects).

I call this “balanced realism” because it acknowledges the limitations inherent to law and human judges—which cannot be eliminated—yet it also recognizes that law nonetheless works, that judges can and do render rule bound decisions. For at least two centuries, the book shows, judges and jurists have described law and judging in balanced realist terms. The formalist-realist divide that dominates contemporary views of judging tends to obscure this common ground.

Solove: You write that “the greater danger to the legal system today is posed not by excessive formalism but by excessive skepticism about judging” (p. 197). What do you mean by this?

Tamanaha: The rule bound aspect of judging can function reliably within a legal system notwithstanding the challenges attendant to the skepticism-inducing aspects, but this is an achievement that must be earned, is never perfectly accomplished, and is never guaranteed. Excessive skepticism about judging threatens to disrupt the balance. If our legal culture buys into the view that judging is a cover for politics, then judges might well come to think it is naïve or foolish to strive to live up to the commitment to rule in accordance with the law. If this commitment is lost, rule bound judging will diminish.



BRIGHT IDEAS: Helen Nissenbaum’s Privacy in Context: Technology, Policy, and the Integrity of Social Life

I’d like to second Dan’s enthusiasm for Helen Nissenbaum’s newest book, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press 2009).  Privacy in Context is engrossing and important, and, lucky for us, I had a chance to interview Professor Nissenbaum about the book, her scholarship, and her thoughts on the future of privacy.  First, let me tell you a bit about Professor Nissenbaum.  Then, I will reproduce our interview below.

Helen Nissenbaum is Professor of Media, Culture and Communication, and Computer Science, at New York University, where she is also Senior Faculty Fellow of the Information Law Institute.  Her areas of expertise span social, ethical, and political implications of information technology and digital media. Nissenbaum has written extensively in journals of philosophy, politics, law, media studies, information studies, and computer science and has written and edited four books (including the book we highlight today).  She has also authored several important studies of values embodied in computer system design, including search engines, digital games, and facial recognition technology.

DC:  Why did you write this book?

HN:  I had published a series of articles on how privacy, conceptually and in practice, had been challenged by IT and digital media. Although these had initially been mainly critical in tone (demonstrating, for example, how “privacy in public” exposed glaring weaknesses not only in predominant understandings of privacy but in approaches to law and regulation as well), they ultimately yielded the substantive idea of privacy as a claim to appropriate flows of personal information within distinctive social contexts. I modeled this idea in terms of contextual integrity and what I call in the book “context-relative informational norms.” IT systems and digital media are often felt as privacy threats because they disrupt entrenched flows; they violate norms.

With these articles in far-flung journals, I realized it would be hard, if not impossible, for anyone to pull the whole argument together, to recognize the problems in certain other approaches and how contextual integrity addressed some of them. A book would consolidate these works into a coherent whole, in what I imagined would be the work of a mere few months — an extravagant miscalculation, of course.

While collaborating with colleagues from the PORTIA project (Adam Barth, Anupam Datta, and John Mitchell) to develop a formal expression of contextual integrity (in linear temporal logic), I came to realize that the theory needed significant sharpening. Further, it became increasingly clear that it needed a far more robust and fleshed-out prescriptive (or normative) dimension, which I had only briefly sketched in the Washington Law Review article. This component would be absolutely essential to the success of contextual integrity as a whole, if the theory were to have moral “teeth.” And, of course, the longer I worked the larger the field became: more cases to reckon with, more outstanding work to take into consideration. Mere months became a couple of years.

DC:  What, for you, are the most pressing concerns that the book addresses?

HN:  Among the most pressing for me were:

First, to demonstrate that the private-public distinction, as useful as it may be in other areas of political and legal philosophy, is a terrible dead-end for conceptualizing a right to privacy and for formulating policy. In my view, far too much time has been wasted deciding whether this or that piece of information is private or public, whether this or that place is private or public, when, in fact, what ultimately we care about is what constraints ought to be imposed on the flows of this or that information in this or that place. We could make much more rapid progress addressing urgent privacy questions if we addressed the latter questions head-on instead of tying ourselves in knots over the former.

Second, to challenge the definition of privacy as control over information about oneself, which dominates policy realms, even if not to the same extent in academia. The trouble with this definition is that it immediately places privacy at odds with other values conceived as more pro-social. If the right to privacy is the right to control, then of course it must be moderated, traded off, and compromised for the general good!  Moreover, it is not even clear that control offers the best protection to the subject. Imagine, for example, if all that stood between individuals and access to their complete health records was subject consent, and place these individuals in a situation where a job, a mortgage, the chance to win the lottery, … hung in the balance. Fortunately, U.S. law recognizes that we need substantive constraints on information flow in certain areas – contexts – of life, and though critics have pointed out many weaknesses in the letter of these laws, I believe the approach is dead right.