Why privacy matters even if you don’t care about it (or, privacy as a collective good)

“How much do people care about privacy?” This is a key, enduring question in ongoing debates about technological surveillance. As survey after survey regarding changing privacy attitudes is presented as proof that privacy is dead, one might wonder why we should bother protecting privacy at all.

One common answer is that the privacy surveys are wrong. If survey-makers only asked the right questions, they would see that people do actually care about their privacy. Just look at the most recent Pew Research Survey on privacy and surveillance. We should protect privacy rights because people care about it.

While this answer is fine, I find it unsatisfying. For one, it’s hard to draw firm conclusions about privacy attitudes from the surveys I’ve seen (compare the Pew survey linked above to this Pew survey from the year before). Those attitudes might ebb and flow depending on the context and tools being used, and social facts about the people using them. More importantly, though, while privacy surveys can be very valuable, it’s not clear that they are relevant to key policy questions about whether and how we should protect privacy.

This leads to what I think is the better (but perhaps more controversial) answer to the puzzle: privacy is worth protecting even if it turns out that most people don’t care about their own privacy. As counterintuitive as it seems, questions about privacy and surveillance don’t–and shouldn’t–hinge on individual privacy preferences.

That’s because questions about privacy rights, like questions about speech or voting or associative rights, are bigger than any individual or group. They are, instead, about the type of society we (including all those survey-takers) want to live in. Or as scholars have suggested, privacy is best thought of as a collective rather than merely an individual good.

Privacy is like voting

Many of our most cherished rights, such as expressive, associational, and voting rights, are understood to protect both individual and collective interests. The right to vote, for example, empowers individuals to cast ballots in presidential elections. But the broader purpose of voting rights–their raison d’être–is to reach collective or systemic goods such as democratic accountability.

The fact that many individuals in the United States don’t vote doesn’t tell us much about whether the right to vote is worth protecting, let alone whether we should enact or scale back a particular set of voter protections. When it comes to voting, we intuitively understand that the right to vote has societal benefits that are worth protecting regardless of individuals’ attitudes towards voting. For example, the very existence of robust voting and electoral rights—the possibility that people might exercise their voting power if unhappy—incentivizes accountability on the part of government officials.

Privacy is like voting. Privacy rights create space for individual freedom, but their raison d’être is protecting broader societal and systemic goods. The point of protecting privacy rights—even rights that we choose not to exercise—is to facilitate the creation and furtherance of these social goods. Absent the space provided by the rights to private thought, private communications, and private associations, it is difficult to imagine how any of the major socio-political movements of the past century—from civil rights to women’s rights to gay rights—could have survived long enough to influence policy.

The same, perhaps, can be said of major innovations in science, business, and technology. When properly balanced, privacy rights protect the creative process; they create space for deviance and for experimentation; they allow for the testing and weeding out of weak arguments; they create new pathways for minority viewpoints and groups to gain public support, and for unpopular legal and political arguments to move from off the wall to on the wall.

The question of whether privacy rights are worth protecting is tied to the value we place on these systems and processes–and the public goods they facilitate–rather than to any individual’s interest in the secrecy of their own information. Even if it turns out to be true that most people don’t care about their privacy, that would not be enough to settle important questions about whether (and what) privacy rights society should protect. Privacy rights, like voting rights, are the types of rights that are worth protecting even if many of us don’t care to exercise them.

Rethinking privacy harms

Once we recognize that a critical purpose of privacy rights is to protect collective interests, we have to rethink some of the ways we evaluate privacy harms in our legal and political discourse. For example, courts and policymakers evaluating the harm from (say) a data breach or overbroad surveillance must look beyond the intrusion on a particular individual’s privacy and give weight to the broader societal harms of the legal rules or rulings being promulgated. In privacy cases, as with First Amendment and voting rights cases, the harm to individuals is often less important than the harm to society.

Unfortunately, courts and policymakers very often undervalue the societal harms of privacy intrusions. In data breach litigation, courts typically throw out privacy claims unless the victims can prove the data thieves misused their stolen information. This is a very high bar that does little justice to the many social costs of the poor security practices that cause data breaches. In its 2013 decision in Clapper v. Amnesty International, the Supreme Court set a similarly high bar for plaintiffs challenging surveillance laws. The Clapper majority’s decision was premised, again, on the theory that the plaintiffs could not prove that they suffered more than speculative harm from the government’s expanded surveillance powers, which in the view of four dissenting justices had a “very strong likelihood” of ensnaring lawful communications.

In Fourth Amendment cases, courts routinely contort themselves to force decisions with broad implications for collective interests (decisions that fundamentally affect “the right of the people to be secure”) into a narrow individual-privacy box. Take Maryland v. King, in which the Supreme Court upheld Maryland’s practice of genetically testing suspects arrested for, but not charged with or convicted of, a violent felony. The majority’s operative legal analysis focused on the “negligible” intrusion caused when the police swabbed the suspect’s cheek with a Q-Tip, and not the brave new world of warrantless genetic testing. In evaluating the reasonableness of the government’s conduct, the majority weighed the degree to which the cheek-swab “intrudes upon an individual’s privacy,” on one hand, and “the promotion of legitimate governmental interests” in crime prevention, on the other.

Put otherwise, the Court weighed the right of the people of Maryland to efficient law enforcement against one man’s right to have his cheek let alone. If it doesn’t seem like a fair fight, it’s because it’s not. As the Court concluded, “[a] gentle rub along the inside of the cheek does not break the skin, and it involves virtually no risk, trauma, or pain”—a small price to pay for the safety of the people of Maryland. Not only does the Court’s legal analysis give short shrift to the societal implications of broadened genetic surveillance, it also focuses on the wrong individual harm: the momentary (and “negligible”) intrusion of a cheek-swab, rather than the privacy implications of suspicionless DNA searches and lifelong inclusion in ever-searchable, all but permanent, law enforcement DNA databanks.

The Court’s pro-privacy decisions are often similarly contrived. In U.S. v. Jones, the Court held that the Fourth Amendment regulates the use of GPS trackers by law enforcement. The case presented difficult questions about the scope of privacy rights in public places in the face of new technologies that allow pervasive tracking of location and patterns of life. Instead of grappling with the implications of unchecked, ubiquitous location tracking, the Court fashioned a brand new legal rule rebuking the FBI’s minor physical intrusion onto the undercarriage of Mr. Jones’s Jeep. The Jones case, of course, wasn’t about the car; it was about the broader implications of unchecked, automated, warrantless location surveillance, which has become increasingly routine.

In Jones, as in King, the justices of the Supreme Court understood the impact of their rulings on collective interests—those stakes are addressed in the merits and amicus briefs, the concurrences and dissents, and even in the majority’s dicta. But in each case, the Court went well out of its way to make those stakes seem tangential to what it framed as its real job: crafting rules to protect the cheeks and the car-undercarriages of individual Americans. This misses the forest for the trees. Even recognizing the need for strong limits on judicial decisionmaking (enforced in part through justiciability rules), one must believe that there are (indeed, there must be) better ways to do justice to the broader societal impact of legal rules that undermine privacy.

Privacy rights are worth protecting for reasons that go beyond any individual’s interests in avoiding embarrassing disclosures, minor physical intrusions, or pecuniary damage. Privacy rights are worth protecting because they create space for innovation, creativity, expression, dissent, competition, and political participation. They are a condition precedent to the healthy functioning of our political and economic system. Policymakers and courts should do more to recognize these social and collective interests protected by privacy in their decisions.

Doctrinal implications

What are the doctrinal implications of recognizing privacy as a collective good? While there is no simple answer, we may look to our experience with other “collective rights” for guidance. When it comes to free expression, association, and voting, courts and policymakers have long devised doctrinal mechanisms to fill the gaps between the individual and collective interests protected by these rights.

In First Amendment cases, courts apply a modified version of traditional standing requirements–the same requirements used to toss privacy and surveillance claims–to account for the societal harms of speech-chilling statutes. Under the Supreme Court’s overbreadth cases, litigants may challenge speech restrictions that are substantially overbroad even where they are unable to demonstrate individualized harm to their own speech rights. The collateral damage from overbroad statutes on speech and associational rights is just too high, and courts will strike down such laws rather than wait for the perfect litigant.

Courts have also relaxed standing requirements in election law. According to Saul Zipkin’s 2014 piece Democratic Standing, when adjudicating disputes involving voting rights, courts “attribute[] structural or probabilistic harms to plaintiffs without an individualized showing of particular harm.” These doctrinal fixes are necessary to correct the awkward fit between the individual-harms focus of traditional standing doctrine and the core purpose of election law, which is the protection of a system and a process rather than of any individual vote:

Standing, premised on a litigant who has suffered injury in fact, fits awkwardly with election law, which often involves claims of harm to the electorate or the democratic process and presents contexts where it may be impossible to identify an individual who has suffered concrete harm. Surveying an array of election contexts […] demonstrates that federal courts have applied standing in a distinct manner in this setting, thereby positioning themselves as monitors of the electoral process. [From the abstract.]

Courts have not, to my knowledge, adopted similar corrective mechanisms in the context of privacy rights, though they have had several notable opportunities to do so (the Clapper case, discussed above, is a recent example).

Perhaps they should. Modified constitutional standing requirements—or at least a rethinking of what the harms from privacy intrusions entail—may be necessary to do justice to broader privacy interests, at the very least for the subset of privacy rights that are inextricably intertwined with freedom of thought, expression, and the proper functioning of the democratic process (what some have called the rights to intellectual privacy). In many other contexts, remedies crafted by lawmakers, rather than courts, are likely to be more appropriate.

Changing the discourse

To be sure, the doctrinal analogies discussed above quickly reach the limits of their usefulness. Changing a few laws is unlikely to get at the problem’s root, which is the contrived and outdated vocabulary of privacy rights and the wooden policy and political discourse built around it. Our inability to conceptualize privacy as more than a purely individual right is damaging: it drives our broken notice-and-consent model of privacy protection, and it makes genuine debate (whether in the courts or otherwise) about the costs and benefits of surveillance difficult and unwieldy.

The longer we avoid grappling with the broader implications of privacy intrusions–the longer we frame the issue as a question of balancing the individual preferences of a few civil libertarians against the convenience and security of the many–the more likely it becomes that the big questions will be decided for us: by technology, by fear, and by the inertia of a new status quo, created without deliberation.

Perhaps most good citizens do in fact believe that they have “nothing to hide, and therefore, nothing to fear” — whether they do is interesting, but largely beside the point. The more important question is whether we want to live in a society where no one can hide anything, and what the heck we do if we’re already there. And that’s one we’re not likely to find answered in an opinion poll.


  1. Most of the arguments in this post are not novel. Articulating the connection between privacy rights and democratic values has been an important goal of privacy scholarship for a very long time. Several scholars have argued that the Fourth Amendment, in particular, should be thought of as a collective right–an argument that is supported by the provision’s reference to a preexisting right of the people to be secure against unreasonable government interference with persons, places, papers, and things. See this textualist analysis by David Gray, this recent commentary by Thomas Clancy, and this 1985 note from Richard McAdams. Similar perspectives find support in works by Dan Solove, Neil Richards, Danielle Citron, and many others.
  2. While I’ve been talking about privacy as a collective good, I suspect there may be better ways of describing the non-individual aspects of privacy rights. One option is to start thinking of privacy, or one type of privacy–the type that’s most likely to deserve the special standing rules described above, and so on–as a political right; a right properly grouped with a number of other political rights protected by fundamental laws in democratic states.
  3. I haven’t discussed European conceptions of privacy here. In my view, the European focus on privacy harms as harms to individual dignity does not fare much better than our own when it comes to accounting for the societal and collective interests protected by privacy.
  4. A final point, which I think should be obvious. Recognizing privacy as a collective good does not imply privacy absolutism, but rather a recalibration of the scales and a change in our discourse to better account for the societal benefits of privacy.

UPDATE: The Supreme Court recently granted certiorari in Spokeo v. Robins, a case about the scope of Article III standing requirements. The case is notable as its facts center on an alleged violation of a data privacy statute, the Fair Credit Reporting Act. I haven’t read the petitions, but will try to share my thoughts when I do.

The views and opinions in this article are those of the author and expressed in his personal capacity. 



2 Responses

  1. Orin Kerr says:

    Thanks for the interesting post, Babak, and good seeing you at PLSC.

    There are a few different versions of this argument, and I’m hoping I can better understand which version you are making. Consider these different versions of the argument:

    1) Some people don’t value privacy, but others do. Because there need to be broad rules that govern society, we need to pick one set of rules. The law should value privacy to make sure those who do value privacy have their values respected even if some don’t like those rules.
    2) Some people say they don’t value privacy, but actually they do, so the law should value privacy based on that reality rather than the stated preference.
    3) People don’t value privacy, but they really should. People are wrong not to value it enough. The law should value privacy more because the wiser position is to value it more.

    Can you say a little more about which version (or other versions) of the argument you’re making? I ask because you refer a lot to what “we” want or need, and yet I’m not sure you say who the “we” is in the argument. Is the “we” the subset of the people who value privacy enough, or is that just public preference as a whole, or is that a normative claim based on what people should value?

    Major apologies if I’m just missing it, but I wasn’t sure.

    • Babak Siavoshy says:

      Hi Orin,

      Thanks for the comment, and sorry for the late response (travel!). I don’t think the options you listed fully capture my thesis, so I’ll try to restate it.

      Here’s the problem I’m trying to offer a perspective on: I think we’re generally pretty bad at explaining why privacy matters. There’s a lot of confusion and inconsistency in the way we talk about privacy’s value and harms, and this confusion makes for bad policy. Things are made worse by the fact that new technologies are forcing policymakers to make decisions about privacy rights at an accelerated pace.

      In my post, I’m less interested in giving an answer than in unpacking the question (“why does privacy matter”?). Looking at it again, I could have probably been more clear about that.

      My first thesis is that, when we think about the value of privacy (“why does privacy matter?”) we should distinguish the individual value that (say) we get from protecting our inbox from snoopers from the social value of email privacy rights. In other words, “why does privacy matter” has a different answer depending on whether we’re talking about privacy preferences or whether we’re talking about privacy rules.

      Courts and policymakers and commentators sometimes (in my view, often) get the connection between these two things wrong. They draw improper conclusions about privacy rights from facts about privacy preferences, and vice versa. For example, they assume that if a person doesn’t value their own privacy very much (they have “nothing to hide”), then they also don’t value privacy rights (i.e., they don’t care whether they live in a society that protects privacy). The argument goes, Bob doesn’t mind the NSA reading his email (perhaps because he has nothing to hide), therefore Bob doesn’t care about email privacy generally.

      I think these are conceptual mistakes. The argument from rights to preferences and vice versa isn’t as strong as it seems. It’s perfectly reasonable to have nothing to fear from your government (perhaps because you have nothing to hide), and at the same time believe there should be strong limits on government intrusions into privacy. There is no logical inconsistency (and no hypocrisy) in playing fast and loose with your personal information on social media, and at the same time insisting on strong privacy controls for social media data.

      My second thesis is supposed to explain the puzzle presented by the first. If I’m right, then it’s reasonable to care about privacy rights even if you don’t care about your own privacy. How do we explain that?

      I think it comes down to the fact that privacy is an ill-defined concept (clearly, I’m not the first or the last to notice it). We use the same concept – privacy – to refer to different (but related) problems, and that’s the source of the confusion.

      What are these “different problems”? As is relevant here, one way to think of it is in terms of levels of abstraction: at the lowest level of abstraction, say, your email inbox, the privacy problem you’re facing might be “will revealing my inbox hurt or embarrass me?” At a higher level of abstraction, say, Fourth Amendment rights, the privacy problem you’re facing might be “will watering down email warrants cause government abuse? will that abuse change the relationship between people and their government?” and so on.

      I think most people don’t think in terms of “levels of abstraction,” so in my post I described the distinction as that between the “individual” and “collective” goods of privacy instead. That might not be the best way to talk about it. The language around this stuff can be really hard. For the purpose of the post, though, my point is that “privacy” can refer to different types of problems depending on whether you’re talking about your own preferences or the rights and duties that apply to society as a whole.

      The analogy to voting demonstrates this dynamic pretty well, I think. We might not vote when it’s inconvenient, but that does not mean we should (or would) trade away the right to vote for convenience. When it comes to voting (or speech), we intuitively see a distinction that we more often miss in the context of privacy.

      My third thesis builds on the first two: we (courts, policymakers, commentators) undervalue the collective goods of privacy. They/we do so, I think, in large part because of the conceptual confusions I discussed above. We undervalue the collective goods of privacy because our legal and political discourse confuses and equates them with the individual goods of privacy. And, to be fair, some of the problem is that it’s just harder to quantify privacy problems at that higher level of abstraction.

      In this sense, I’m arguing that courts and policymakers do actually care about privacy (something close to your #2), but they’re misled by the confusing language and conceptual baggage associated with the term. You might say this is a normative point, and I’m probably fine with that, but I don’t mean it as one. I’m certainly not stating that people should adopt my particular privacy preferences as law because they’re mine.

      Finally, as I mentioned in the post, I don’t think any of this is particularly novel. But I like the analogy to voting, because it’s intuitive to me and I hoped it would be to others as well.