Privacy’s midlife crisis

Privacy law is suffering from a midlife crisis. Despite well-recognized tectonic shifts in the socio-technological-business arena, the privacy framework continues to stumble along like an aging protagonist in a rejuvenated cast. The framework’s fundamental concepts are outdated; its goals and justifications are in need of reassessment; and yet existing reform processes remain preoccupied with corporate organizational measures, which yield questionable benefits to individuals’ privacy rights. At best, the current framework strains to keep up with new developments; at worst, it has become irrelevant.

More than three decades have passed since the introduction of the OECD Privacy Guidelines, and 15 years since the EU Data Protection Directive was put in place and the “notice and choice” approach gained credence in the United States. This period has seen a surge in the value of personal information for governments, businesses and society at large. Innovations and breakthroughs, particularly in information technologies, have transformed business models and affected individuals’ lives in previously unimaginable ways. Not only technologies but also individuals’ engagement with the data economy has radically changed. Individuals now proactively disseminate large amounts of personal information online via platform service providers, which act as facilitators rather than initiators of data flows. Data transfers, once understood as point-to-point transmissions, have become ubiquitous, geographically indeterminate, and typically “residing” in the cloud.

In a new article titled Privacy law’s midlife crisis: A critical assessment of the second wave of global privacy laws, which will be presented at the upcoming Ohio State Law Journal Symposium on The Second Wave of Global Privacy Protection, I address the challenges posed to the existing privacy framework by three main socio-technological-business shifts: the surge in big data and analytics; the social networking revolution; and the migration of personal data processing to the cloud. The term big data refers to the ability of organizations to collect, store and analyze previously unimaginable amounts of unstructured information in order to find patterns and correlations and draw useful conclusions. Social networking services have reshaped the relationship between individuals and organizations. Those creating, storing, using, and disseminating personal information are no longer just organizations but also geographically dispersed individuals who post photos, submit ratings, and share their location online. The term cloud computing encompasses (at least) three distinct models of utilizing computing resources through a network – software, platform and infrastructure as a service. The advantages of cloud computing abound and include reduced cost, increased reliability, scalability, and security; however, the processing of personal information in the cloud poses new risks to privacy.

In response to these changes, policymakers on both sides of the Atlantic launched extensive processes for fundamental reform of the privacy framework. The product of these processes is set to become the second generation of privacy law. Yet as I show in the article, the second generation remains strongly anchored in the existing framework, which in turn is rooted in an architecture dating back to the 1970s. The major dilemmas and policy choices of informational privacy remain unresolved.

First, the second generation fails to update the definition of personal data (or PII), the most fundamental building block of the framework. Recent advances in re-identification science have shown the futility of traditional de-identification techniques in a big data ecosystem. Consequently, the scope of the framework is either overbroad, potentially encompassing every bit and byte of information ostensibly not about individuals; or overly narrow, excluding de-identified information that could be re-identified with relative ease. More advanced notions that have gained credence in the scientific community, such as differential privacy and privacy-enhancing technologies, have unfortunately been left out of the debate.
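For readers unfamiliar with differential privacy, a brief illustration may help explain why the scientific community treats it as more robust than traditional de-identification. Rather than scrubbing identifiers from a dataset, a curator adds calibrated random noise to each released statistic, so that no single individual’s presence or absence measurably changes the output. The sketch below shows the best-known building block, the Laplace mechanism; the function names and parameters are illustrative and not drawn from the article:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution (inverse-CDF method)."""
    u = random.random() - 0.5            # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count perturbed by Laplace noise with scale sensitivity/epsilon.

    A counting query has sensitivity 1, because adding or removing one
    person changes the true answer by at most 1. Smaller epsilon means
    stronger privacy but a noisier published figure.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: publish roughly how many users opted in, without the published
# figure revealing whether any particular user did.
noisy_total = dp_count(1284, epsilon=0.5)
```

The key regulatory insight is that the privacy guarantee attaches to the release mechanism itself, quantified by the parameter epsilon, rather than to a promise that identifiers have been removed — a promise re-identification research has repeatedly shown to be fragile.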

Second, the second generation maintains and even expands the central role of consent. Consent is a wild card in the privacy deck. Without it, the framework becomes paternalistic and overly rigid; with it, organizations can whitewash questionable data practices. I argue that the role of consent should be demarcated according to normative choices made by policymakers with respect to prospective data uses. In some cases, consent should not be required; in others, consent should be assumed subject to a right of refusal; in specific cases, consent should be required to legitimize data use.

Third, the second generation remains rooted in a linear approach to processing whereby an active “data controller” collects information from a passive individual, and then stores, uses or transfers it until its ultimate deletion. The explosion of peer-produced content, particularly on social networking services, and the introduction into the data value chain of layer upon layer of service providers, have meant that for vast swaths of the data ecosystem the linear model has become obsolete. Privacy risks are now posed by an indefinite number of geographically dispersed actors, not least individuals themselves, who voluntarily share their own information and that of their friends and relatives. Despite much discussion of “privacy 2.0”, the emerging framework fails to account for these changes. Moreover, in many contexts, such as mobile applications, behavioral advertising or social networking services, it is not necessarily the controller, but rather an intermediary or platform provider, that wields the most control over information.

Fourth, the second generation, particularly of European data protection laws, continues to view information as “residing” in a jurisdiction, despite the ephemeral nature of cloud storage and transfers. For many years, transborder data flow regulation has caused much consternation to businesses on both sides of the Atlantic, while generating steep legal fees. Unfortunately, this is not about to change.

While not solving all of these formidable problems, the article sets an agenda for future research, identifying issues and potential paths towards a rejuvenated framework for a rapidly shifting environment.

1 Response

  1. “Fourth, the second generation, particularly of European data protection laws, continues to view information as “residing” in a jurisdiction, despite the ephemeral nature of cloud storage and transfers. For many years, transborder data flow regulation has caused much consternation to businesses on both sides of the Atlantic, while generating steep legal fees. Unfortunately, this is not about to change.”

    It is hard not to consider the relationship between data and the data protection laws of a specific area (e.g. the European Union), since levels of data protection differ across regions of the world. At the same time, we need wider global convergence on fundamental principles of data protection: a coherent and sufficiently strong model of data protection is necessary. On that basis, every country (or, better, every region) would build its own specific rules, consistent with its historical, cultural and legal system; but convergence on the principles and basic rules would facilitate both the protection and the flow of data.
    From this perspective, the recent EU proposal explicitly contemplates instruments such as international treaties. In this sense, the proposal is significant both for achieving a common framework on data protection and for defining specific rules on interoperability between systems.