BRIGHT IDEAS: Helen Nissenbaum’s Privacy in Context: Technology, Policy, and the Integrity of Social Life
I’d like to second Dan’s enthusiasm for Helen Nissenbaum’s newest book, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press 2009). Privacy in Context is engrossing and important, and, lucky for us, I had a chance to interview Professor Nissenbaum about the book, her scholarship, and her thoughts on the future of privacy. First, let me tell you a bit about Professor Nissenbaum. Then, I will reproduce our interview below.
Helen Nissenbaum is Professor of Media, Culture and Communication, and Computer Science, at New York University, where she is also Senior Faculty Fellow of the Information Law Institute. Her areas of expertise span social, ethical, and political implications of information technology and digital media. Nissenbaum has written extensively in journals of philosophy, politics, law, media studies, information studies, and computer science and has written and edited four books (including the book we highlight today). She has also authored several important studies of values embodied in computer system design, including search engines, digital games, and facial recognition technology.
DC: Why did you write this book?
HN: I had published a series of articles on how privacy, conceptually and in practice, had been challenged by IT and digital media. Initially, these were mainly critical in tone, demonstrating, for example, how “privacy in public” exposed glaring weaknesses not only in predominant understandings of privacy but in approaches to law and regulation as well. Ultimately, though, they yielded the substantive idea of privacy as a claim to appropriate flows of personal information within distinctive social contexts, an idea I model in terms of contextual integrity and — what I call in the book — “context-relative informational norms.” IT systems and digital media are often felt as privacy threats because they disrupt entrenched flows; they violate norms.
With these articles in far-flung journals, I realized it would be hard, if not impossible, for anyone to pull the whole argument together, to recognize the problems in certain other approaches and how contextual integrity addressed some of these. A book would consolidate these works into a coherent whole in what I imagined would be the work of a mere few months — an extravagant miscalculation, of course.
While collaborating with colleagues from the PORTIA project (Adam Barth, Anupam Datta, and John Mitchell) to develop a formal expression of contextual integrity (in linear temporal logic), I came to realize that it needed significant sharpening. Further, it became increasingly clear that the theory needed a far more robust and fleshed-out prescriptive (or normative) dimension, which I had only briefly sketched in the Washington Law Review article. This component would be absolutely essential to the success of contextual integrity as a whole, if the theory was to have moral “teeth.” And, of course, the longer I worked, the larger the field became: more cases with which to reckon, more outstanding work to take into consideration. Mere months became a couple of years.
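[Ed.: To give readers a feel for what a formal expression of contextual integrity looks like, here is a rough, simplified sketch in the spirit of the PORTIA collaboration's temporal-logic approach. The notation and predicate names (send, contains, inrole) are illustrative renderings, not the exact formalism of that work. The idea is that a positive informational norm permits a flow only when sender and recipient occupy the right contextual roles and a temporal condition on past or future flows holds.]

```latex
% Illustrative sketch of a positive context-relative informational norm,
% rendered in a linear-temporal-logic style (predicate names are hypothetical):
%   send(p1, p2, m)    -- agent p1 transmits message m to agent p2
%   contains(m, q, t)  -- m contains attribute t about subject q
%   inrole(p, r)       -- agent p acts in role r within the context
\[
  \mathsf{send}(p_1, p_2, m) \wedge \mathsf{contains}(m, q, t)
  \;\rightarrow\;
  \mathsf{inrole}(p_1, r_1) \wedge \mathsf{inrole}(p_2, r_2) \wedge \varphi
\]
% Here \varphi stands for a temporal condition on the trace of flows,
% e.g., "the subject q has previously consented to this disclosure."
```

A transmission that satisfies no such norm — or that violates a negative one — is, on this model, a breach of contextual integrity.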
DC: What, for you, are the most pressing concerns that the book addresses?
HN: Among the most pressing for me were:
First, to demonstrate that the private-public distinction, as useful as it may be in other areas of political and legal philosophy, is a terrible dead-end for conceptualizing a right to privacy and for formulating policy. In my view, far too much time has been wasted deciding whether this or that piece of information is private or public, whether this or that place is private or public, when, in fact, what ultimately we care about is what constraints ought to be imposed on the flows of this or that information in this or that place. We could make much more rapid progress addressing urgent privacy questions if we addressed the latter questions head-on instead of tying ourselves in knots over the former.
Second, to challenge the definition of privacy as control over information about oneself, which dominates policy realms, even if not to that extent in academia. The trouble with this definition is that it immediately places privacy at odds with other values conceived as more pro-social. If the right to privacy is the right to control, then of course it must be moderated, traded off, compromised for the general good! Moreover, it is not even clear that control offers the best protection to the subject. Imagine, for example, if all that stood between individuals and access to their complete health records was subject consent, and place these individuals in a situation where a job, or a mortgage, the chance to win the lottery, … hung in the balance. Fortunately, U.S. law recognizes that we need substantive constraints on information flow in certain areas – contexts – of life, and though critics have pointed out many weaknesses in the letter of these laws, I believe the approach is dead right.
These two general claims, argued mainly in Section Two, stake out the groundwork. The bulk of the book, of course, is devoted to fleshing out the substantive theory of contextual integrity, specifying the structure of informational norms, developing the political philosophy, if you will, of privacy as contextual integrity, and demonstrating its application to a number of well known, controversial cases.
The book lays out a justificatory framework for contextual integrity, develops the theory to a certain point, and outlines several applications. The hard work remaining will fall to area experts, for example, in healthcare, education, social life, and the workplace, to carefully articulate the norms, to understand their sources, and to explain the crucial values and purposes served by them.
DC: How does this fit in your broader research?
HN: I will continue testing the usefulness of contextual integrity in application to specific questions. Right now, these include whether and under what conditions court records ought to be placed online; developing clear arguments identifying sources of problems with online behavioral ad targeting and surveillance of search queries; and, in collaboration with colleagues in computer science, developing companion, proof-of-concept software systems such as Adnostic (pushback against behavioral targeting) and continuing to improve TrackMeNot (pushback against web-search profiling), drawing on principles of values-in-design developed in the Values-at-Play project.
DC: Are you hopeful about the future of privacy?
HN: My hope level is in constant flux. When I think of the vast backend of information aggregators interacting directly and indirectly with personal information, such as Google, ChoicePoint, ISPs, government agencies, and financial conglomerates, I fear the worst. I worry that the landscape of incentives will swamp just about any moral consideration we might bring to bear. At the same time, I’m buoyed by the growth in size and quality of privacy scholarship and practice, and by the guile, brilliance, and insubordination of computer hackers and NGO players. And sometimes watershed events can be enormously important; grim as it is, the Google/China debacle may turn a few heads.