Why empowering consumers won’t (by itself) stop privacy breaches

Thanks to CoOp for inviting me to guest blog once again. As with my other academic contributions, the views expressed here are my own and don’t necessarily reflect those of my employers past or present.

Who bears the costs of privacy breaches? It’s challenging enough to articulate the nature of privacy harms, let alone to determine how the resulting costs should be allocated. Yet the question of “who pays” is an important, unavoidable, and in my view undertheorized one. The current default seems to be something akin to caveat emptor: consumers of services — both individually as data subjects and collectively as taxpayers — bear most of the risks, costs, and burdens of privacy breaches. This default is reflected, for example, in legal rules that place high burdens on consumers seeking legal redress in the wake of enterprise data breaches, and in liability caps for violations of privacy rules.

Ironically, the “consumer pays” default may also be unwittingly reinforced by well-meaning attempts to empower consumers. This has been one of the unintended consequences of decades of advocacy aimed at strengthening notice and consent requirements. These efforts take it for granted that data subjects are best positioned to make effective data privacy and security decisions, and thus reinforce the idea that data subjects should bear the ultimate costs of failing to do so. (After all, they consented to the use!) And while notice and consent remain the centerpiece of every regulator’s data privacy toolbox, there’s reason to doubt that empowering consumers to make more informed and granular privacy decisions will reduce the incidence or the costs of privacy breaches.

When it comes to managing privacy costs, empowering consumers with notice and consent is insufficient for at least three reasons. The first is physical: for the most part, consumers do not have physical control over the data about them, and this places practical constraints on the types of data privacy decisions they can make. Most of the sensitive data in the world exists in networks of virtual and physical servers held in distributed data centers owned or controlled by enterprises such as banks, hospitals, insurers, and technology vendors. There are few practical steps, outside of total civic disengagement, that consumers can take to avoid these systems. As a result, any palatable privacy regime has to account for the fact that a host of very important data privacy and security decisions will remain outside the data subject’s hands. For instance, consumers cannot realistically control the type of encryption or security infrastructure enterprises use to protect sensitive data, and it’s unlikely that a more robust notice or consent regime will change that.

Second, robust notice and consent regimes only work if we assume consumers have the time and the resources required to make effective, reasoned privacy decisions. That’s at the very least a non-obvious assumption. Today, most people have no practical choice but to trust (read: give blanket consent to) third parties — banks, insurers, communication firms, healthcare providers — to make hundreds or thousands of decisions pertaining to the use of their sensitive data. Even if more granular notice and consent were provided, few among us could properly exercise those rights without spending countless hours each week balancing the costs and benefits of data processing decisions for ourselves and those under our care. (Put differently: in the utopian, privacy-friendly future of my daydreams, there are fewer — not more — privacy policies, cookie notices, and click-through agreements. It’s a future where I make fewer privacy decisions and get more privacy.)

Third, empowering consumers with notice and consent is insufficient because consumers are rarely the least-cost avoiders when it comes to the harms of privacy breaches. (This is true, I think, whether you have a Pigouvian or a Coasian take on efficient cost allocation.) The true cost of, say, a medical data breach goes well beyond the harm to individual victims of the breach. It includes increases in medical fraud in an already overburdened healthcare system, and the continued erosion of trust between individuals and providers. It includes the increased strain on the safety net caused by the effects of the breach on children, the poor, the elderly, the sick, and others who often lack the means to proactively seek redress. Consumers are unlikely to internalize these societal costs and lack efficient means of avoiding them. As a result, no amount of empowering consumers to make good privacy decisions is likely to address the full harms from data breaches and other privacy violations.

If strengthening notice and consent requirements won’t reduce the cost and incidence of privacy breaches, then what will? There’s no easy answer here, but the growing consensus is that managing privacy harms in our technological future depends on the development of new norms and social practices regarding information use; or, as one author has put it, what we need is a new ethics of big data, codified in laws, industry norms, best practices, and cultural understandings surrounding the use of data collection, storage, and processing technologies. The virtue of this approach is that it recognizes the irreducible importance of norms and conventions in setting the bounds of appropriate human activity: the norms of promise-making underpin much of our contract law, the conventional morality of blame and fault informs tort law, and the shared norms of ownership shape the rules of property law.

Similarly, protecting privacy at a meaningful scale will ultimately be a bottom-up, rather than merely a top-down, affair — more than rigorous notice and consent requirements, more than specific regulatory or technological measures, it will require processes and forums to facilitate the creation of a conventional morality of information use that will underpin the next generation of privacy laws.

[UPDATE: see Dan Solove’s 2013 piece on Privacy Self-Management for a more thorough critique of the consent-centered model of privacy protection].


2 Responses

  1. Adam Shostack says:

    Nice post! I think there’s a fourth issue, which is that oftentimes people have no control over where their personal data goes, because decisions are made by their employers, by government agencies, or by companies over whom they have no influence. For an easy example, in the recent health care breaches, the insurer was chosen by their employer. Perhaps a few people could choose their employer to get the most secure health insurer, but that choice is bundled with a great many other things, and your point about people having the time and resources to make decisions plays in.

    But that’s only one variant: I’ve personally been impacted by a breach at the retirement plan of a company that acquired a former employer, and they had a database that included me even though I’d moved my money elsewhere.

  2. Babak Siavoshy says:

    Agreed. The conventional wisdom has been that restoring control will solve many of these issues. But in the examples you provide, it’s not at all clear that more control for the data subject would have resolved the problem. Improved rules and norms regarding data security, in the first case, and regarding how customer data is handled in mergers and acquisitions, in the second, would have. (With regard to the development of norms relating to acquisitions, see the recent Radio Shack matter: http://www.law360.com/privacy/articles/659010?nl_pk=d34b255a-2ea9-4f4f-b1a7-6183ffa4e878&utm_source=newsletter&utm_medium=email&utm_campaign=privacy).