Tagged: Privacy


EU and US data privacy rights: six degrees of separation

The EU and the US have often engaged in a “tit for tat” exchange with regard to their respective systems of privacy protection. For example, EU academics have criticized US law as reflecting a “civil rights” approach that only affords data privacy rights to its own citizens, whereas US commentators have argued that privacy protection in the EU is less effective than its status as a fundamental right would suggest.

I am convinced that the EU and the US do not properly understand each other’s approach to data privacy. This is not surprising, given that a sophisticated understanding of the two legal systems requires language skills and comparative legal knowledge that few people have on either side of the Atlantic. The close cultural and historical ties between the EU and the US may also make mutual understanding more difficult, since concepts that seem similar on the surface may actually be quite different in reality.

I like to think of the difference between the EU and US concepts of data privacy rights as reflecting the differing epistemological views of the rationalist philosophers (e.g., Descartes) versus those of the empiricists (e.g., Hume and Locke) who influenced the development of the legal systems in Europe and the US. EU data protection law derives normative rules mainly from reason and deduction (as the rationalists did), while US privacy law bases legal rules more on evidence drawn from experience (as the empiricists did). It is thus no surprise that the law and economics approach that is so influential in US jurisprudence is largely unknown in EU data protection law, while the more dogmatic, conceptual approach of EU law would seem strange to many US lawyers. An illustration is provided by the recent judgment of the Court of Justice of the European Union dealing with the “right to be forgotten” (C-131/12 Google Spain v AEPD and Mario Costeja Gonzalez), where the Court’s argumentation was largely self-referential and took little notice of the practical implications of its judgment.

Here is a brief discussion of six important areas of difference between data privacy law in the EU and US, with a particular focus on their systems of constitutional rights:

Omnibus vs sectoral approach: The EU has an overarching legal framework for data privacy that covers all areas of data processing, based on EU constitutional law (e.g., the EU Charter of Fundamental Rights), the European Convention on Human Rights, the EU Data Protection Directive, national law, and other sources. In the US, there is no single legal source protecting data privacy at all levels, and legal regulation operates more at a sectoral level (e.g., focusing on specific areas such as children’s privacy, bank data, etc.).

Constitutional rights as the preferred method of protection: The US Supreme Court has interpreted the US Constitution to create a constitutional right to privacy in certain circumstances. However, from a US viewpoint, constitutional rights are only one vehicle to protect data privacy. Commentators have described the strengths of the US system for privacy protection as comprising a myriad of factors, including “an emergent privacy profession replete with a rapidly expanding body of knowledge, training, certification, conferences, publications, web-tools and professional development; self regulatory initiatives; civil society engagement; academic programs with rich, multidisciplinary research agendas; formidable privacy practices in leading law and accounting firms; privacy seals; peaking interest by the national press; robust enforcement by Federal and State regulators, and individual and class litigation”. In contrast, in the EU the key factor underlying data protection is its status as a fundamental right (see, e.g., Article 1 of the EU General Data Protection Regulation proposed by the European Commission in 2012).

Different conceptions of rights: In the US, a constitutional right must by definition derive from the US Constitution, while in the EU, fundamental rights are considered “general principles of law” that apply to all human beings within EU jurisdiction even if they do not derive from a specific constitutional source. The concept of fundamental rights in the EU is thus broader and more universal than that of constitutional rights in the US.

Positive and negative rights: In the US, privacy is generally protected as a “negative” right that obliges the government to refrain from taking actions that would violate constitutional rights. In the EU, the state also has a constitutional obligation to affirmatively protect privacy rights (see the next point below).

Requirement of state action: US law protects constitutional rights only against government action, while in the EU the state also has a duty under certain circumstances to protect the privacy of individuals against violations by nongovernmental actors. An example from outside the area of privacy is provided by the decisions of the European Court of Human Rights (ECtHR) in Case of Z and Others v. United Kingdom (2001) and the US Supreme Court in DeShaney v. Winnebago County (1989). Both cases involved the issue of whether the state has a duty under constitutional law to protect a child against abuse by its parents; in essence, the ECtHR answered “yes” and the US Supreme Court answered “no”.

Requirement of “harm”: In the EU, the processing of personal data is generally prohibited absent a legal basis, and the CJEU has ruled that a data protection violation does not depend on “whether the information communicated is of a sensitive character or whether the persons concerned have been inconvenienced in any way” (para. 75 of the Rechnungshof case of 2003). In the US, data processing is generally allowed unless it causes some harm or is otherwise restricted by law.

The EU and US systems of privacy rights have each developed in a democratic system of government based on the rule of law, and have been shaped by unique cultural and historical factors, so there is little point in debating which one is “better”. However, the fact that the two systems are anchored in their constitutional frameworks does not mean that practical measures cannot be found to bridge some of the differences between them; I am part of a group (the EU-US “Privacy Bridges” project) that is trying to do just that. The two systems may also influence each other and grow closer together over time. For example, the call for enactment of a “consumer privacy bill of rights” in the framework for protection of consumer privacy released by the White House in February 2012 seems to have been inspired in part by the status of data protection as a fundamental right in EU law.

The central role played by constitutional factors in the EU and US systems of data privacy rights means it is essential that more attention be given to the study of privacy law from a comparative constitutional perspective. For example, why is there so little opportunity in US law schools to study EU data protection law, and vice versa? Efforts must be increased on both sides of the Atlantic to better understand each other’s systems for protecting data privacy rights.


The right to be forgotten and the global reach of EU data protection law

It is a pleasure to be a guest blogger on Concurring Opinions during the month of June. I will be discussing issues and developments relating to European data protection and privacy law, from an international perspective.

Let me begin with a recent case of the Court of Justice of the European Union (CJEU) that has received a great deal of attention. In its judgment of May 13 in the case C-131/12 Google Spain v AEPD and Mario Costeja Gonzalez, the Court recognized a “right to be forgotten” with regard to Internet search engine results based on the EU Data Protection Directive 95/46. This judgment by the highest court in the EU demonstrates that, while it is understandable for data protection law to be construed broadly so that individuals are not deprived of protection, it is also necessary to set some boundaries on when it applies, lest EU data protection law become a kind of global law applicable to the entire Internet.

I have already summarized the case elsewhere, and here will only deal with its international jurisdictional aspects. It involved a claim brought by an individual in Spain against both the US parent company Google Inc. and its subsidiary Google Spain. The latter company, which has separate legal personality in Spain, acts as a commercial agent for the Google group in that country, in particular with regard to the sale of online advertising on the search engine web site www.google.com operated by Google Inc. via its servers in California.

The CJEU applied EU data protection law to the Google search engine under Article 4(1)(a) of the Directive, based on its finding that Google Spain was “inextricably linked” to the activities of Google Inc. by virtue of its sale of advertising space on the search engine site provided by Google Inc., even though Google Spain had no direct involvement in running the search engine. In short, the Court found that data processing by the search engine was “carried out in the context of the activities of an establishment of the controller” (i.e., Google Spain).

Since the Court applied EU law based on the activities of Google Spain, it did not discuss the circumstances under which EU data protection law can be applied to processing by data controllers established outside the EU under Article 4(1)(c) of the Directive (see paragraph 61 of the judgment), though the Court did emphasize the broad territorial applicability of EU data protection law (paragraph 54). Since the right to be forgotten has effect on search engines operated from computers located outside the EU, I consider this to be a case of extraterritorial jurisdiction (or extraterritorial application of EU law: I am aware of the distinction between applicable law and jurisdiction, but will use “jurisdiction” here as a shorthand to refer to both).

The Court did not limit its holding to claims brought by EU individuals, or to search engines operated under specific domains. An individual seeking to assert a right under the Directive need not be a citizen of an EU Member State, or have any particular connection with the EU, as long as the act of data processing on which his or her claim is based is subject to EU data protection law under Article 4. The Directive states that EU data protection law applies regardless of an individual’s nationality or residence (see Recital 2), and it is widely recognized that it may apply to entities outside the EU.

Thus, it seems that there would be no impediment under EU law, for example, to a Chinese citizen in China who uses a US-based Internet search engine with a subsidiary in the EU asserting the right to be forgotten against the EU subsidiary with regard to results generated by the search engine (note that Article 3(2) of the proposed EU General Data Protection Regulation would limit the possibility of asserting the right to be forgotten by individuals without any connection to the EU, since the application of EU data protection law would be limited to “data subjects residing in the Union”). Since only the US entity running the search engine would have the power to amend the search results, in effect the Chinese individual would be using EU data protection law as a vehicle to bring a claim against the US entity. The judgment therefore potentially applies EU data protection law to the entire Internet, a situation that was not foreseen when the Directive was enacted (as noted by the Court in paragraphs 69-70 of its 2003 Lindqvist judgment). It could lead to forum shopping and “right to be forgotten tourism” by individuals from around the world (much as UK libel laws have led to criticisms of “libel tourism”).

It is likely that the judgment will be interpreted more restrictively than this. For example, the UK Information Commissioner’s Office has announced that it will focus on “concerns linked to clear evidence of damage and distress to individuals” in enforcing the right to be forgotten. However, if one takes the position that Article 16 of the Treaty on the Functioning of the European Union (TFEU) has direct effect, then the ability of individual DPAs to limit the judgment to situations where some “damage or distress” has occurred seems legally doubtful (see paragraph 96, where the Court remarked that the right to be forgotten applies regardless of whether inclusion of an individual’s name in search results “causes prejudice”). Google has also recently announced a procedure for individuals to request removal of their names from search results under certain circumstances, and the way that online services deal with implementation of the judgment will be crucial in determining its territorial scope in practice.

In any event, the Court’s lack of concern with the territorial application of the judgment demonstrates an inward-looking attitude that fails to take into account the global nature of the Internet. It also increases the need for enactment of the proposed Regulation, in order to provide some territorial limits to the right to be forgotten.


Over-Parenting Goes International

The thought of hiring a private detective in this age of relatively accessible electronic surveillance seems a bit retro, like a black-and-white scene from a smoky film noir. But the practice has been enjoying a surprising comeback in recent years, with parents hiring private investigators to spy on their children.

In an article titled Over-Parenting, my co-author Gaia Bernstein and I identified a trend of legal adoption of intensive parenting norms. We cautioned against society legally sanctioning a single parenting style – namely, intensive parenting – while deeming other, perfectly legitimate parenting styles potentially neglectful. We also pointed out that involved parenting is class-biased, since it is costly, and not all parents can afford the technology that would enable them to be intensive parents, such as GPS-enabled smartphones for their kids. We argued that when intensive parenting is used for children who do not need it, it becomes over-parenting. Not all children need the same level of involvement in their lives; one of the most important roles of parents is to prepare their children for independent life, and over-parenting might thwart that role. Finally, we speculated that the cultural model for intensive parenting originates in media depictions of upper-middle-class families, and that how these families are portrayed in movies and TV shows influences real-life parents.

Well, I’m sad to report that over-parenting is not a uniquely American phenomenon. Last year, for example, a Chinese newspaper reported that parents in China are increasingly becoming more involved in their children’s lives by hiring private investigators to check whether the children use drugs, drink alcohol or have sex. In Israel, some parents are doing the same, especially during the long summer break, during which bored teenagers, many parents fear, are prone to engage in such activities (if you read Hebrew, you can read the story here). I am sure that some American parents do the same.

Leaving aside the class question (are parents who cannot afford a private eye neglectful?), what does this say about parents’ role as educators? Or about the level of trust (or distrust) between those parents and their children? It used to be that a spouse would hire a private investigator because they suspected their partner was having an affair. Nowadays, a growing chunk of a private investigator’s work involves parents spying on their children. Can’t we say that the fact that parents feel they need to spy on their children already testifies to their limited parental skills?


PETs, Law and Surveillance

In Europe, privacy is considered a fundamental human right. Article 8 of the European Convention on Human Rights (ECHR) limits the power of the state to interfere in citizens’ privacy, “except such as is in accordance with the law and is necessary in a democratic society”. Privacy is also granted constitutional protection in the Fourth Amendment to the United States Constitution. Both the ECHR and the US Constitution establish the right to privacy as freedom from government surveillance (I’ll call this “constitutional privacy”). Over the past 40 years, a specific framework has emerged to protect informational privacy (see here and here and here and here); yet this framework (“information privacy”) provides little protection against surveillance by either government or private sector organizations. Indeed, the information privacy framework presumes that a data controller (i.e., a government or business organization collecting, storing and using personal data) is a trusted party, essentially acting as a steward of individual rights. In doing so, it overlooks the fact that organizations often have strong incentives to subject individuals to persistent surveillance; to monetize individuals’ data; and to maximize information collection, storage and use.



More on government access to private sector data

Last week I blogged here about a comprehensive survey on systematic government access to private sector data, which will be published in the next issue of International Data Privacy Law, an Oxford University Press law journal edited by Christopher Kuner. Several readers have asked whether the results of the survey are available online. Well, now they are – even before publication of the special issue. The project, which was organized by Fred Cate and Jim Dempsey and supported by The Privacy Projects, covered government access laws in Australia, Canada, China, Germany, Israel, Japan, the United Kingdom and the United States.

Peter Swire’s thought-provoking piece on the increased importance of government access to the cloud in an age of encrypted communications appears here. Also see the special issue’s editorial, by Fred, Jim and Ira Rubinstein.



On systematic government access to private sector data

The Sixth Circuit Court of Appeals has recently decided in United States v. Skinner that police do not need a warrant to obtain GPS location data for mobile phones. The decision, based on the holding of the Supreme Court in US v. Jones, highlights the need for a comprehensive reform of rules on government access to non-content communications information (“communications data”). Once consisting of only a list of phone numbers dialed by a customer (a “pen register”), communications data have become rife with personal information, including location, clickstream, social contacts and more.

To a non-American, the US v. Jones ruling is truly astounding in its narrow scope. Clearly, the Justices aimed to sidestep the obvious question of expectation of privacy in public spaces. The Court did hold that the attachment of a GPS tracking device to a vehicle and its use to monitor the vehicle’s movements constitutes a Fourth Amendment “search”. But it based its holding not on the persistent surveillance of the suspect’s movements but rather on a “trespass to chattels” inflicted when a government agent ever-so-slightly touched the suspect’s vehicle to attach the tracking device. In the opinion of the Court, it was the clearly insignificant “occupation of property” (touching a car!) rather than the obviously weighty location tracking that triggered constitutional protection.

Suffice it to say that, to an outside observer, the property infringement appears to have been a side issue in both Jones and Skinner. The main issue, of course, is government power to remotely access information about an individual’s life, which is increasingly stored by third parties in the cloud. In most past cases – and certainly in present and future ones – there is little need to trespass on an individual’s property in order to monitor her every move. Our lives are increasingly mediated by technology. Numerous third parties possess volumes of information about our finances, health, online endeavors, geographical movements, etc. For effective surveillance, the government typically just needs to ask.

This is why an upcoming issue of International Data Privacy Law (IDPL) (an Oxford University Press law journal), which is devoted to systematic government access to private sector data, is so timely and important. The special issue covers rules on government access in multiple jurisdictions, including the US, UK, Germany, Israel, Japan, China, India, Australia and Canada.



Big Data for All

Much has been written over the past couple of years about “big data” (see, for example, here and here and here). In a new article, Big Data for All: Privacy and User Control in the Age of Analytics, which will be published in the Northwestern Journal of Technology and Intellectual Property, Jules Polonetsky and I try to reconcile the inherent tension between big data business models and individual privacy rights. We argue that going forward, organizations should provide individuals with practical, easy-to-use access to their information, so they can become active participants in the data economy. In addition, organizations should be required to be transparent about the decisional criteria underlying their data processing activities.

The term “big data” refers to advances in data mining and the massive increase in computing power and data storage capacity, which have expanded by orders of magnitude the scope of information available to organizations. Data are now available for analysis in raw form, escaping the confines of structured databases and enhancing researchers’ abilities to identify correlations and conceive of new, unanticipated uses for existing information. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data.

Data creates enormous value for the world economy, driving innovation, productivity, efficiency and growth. In the article, we flesh out some compelling use cases for big data analysis. Consider, for example, a group of medical researchers who were able to parse out a harmful side effect of a combination of medications, which were used daily by millions of Americans, by analyzing massive amounts of online search queries. Or scientists who analyze mobile phone communications to better understand the needs of people who live in settlements or slums in developing countries.



On Reverse Engineering Privacy Law

Michael Birnhack, a professor at Tel Aviv University Faculty of Law, is one of the leading thinkers about privacy and data protection today (for some of his previous work see here and here and here; he’s also written a deep, thoughtful, innovative book in Hebrew about the theory of privacy. See here). In a new article, Reverse Engineering Informational Privacy Law, which is about to be published in the Yale Journal of Law & Technology, Birnhack sets out to unearth the technological underpinnings of the EU Data Protection Directive (DPD). The DPD, enacted in 1995 and currently undergoing a thorough review, is surely the most influential legal instrument concerning data privacy anywhere in the world. It has been heralded by proponents as “technology neutral” – a recipe for longevity in a world marked by rapid technological change. Alas, Birnhack unveils the highly technology-specific fundamentals of the DPD, thereby casting doubt on its continued relevance.


The first part of Birnhack’s article analyzes what technological neutrality of a legal framework means and why it’s a sought-after trait. He posits that the idea behind it is simple: “the law should not name, specify or describe a particular technology, but rather speak in broader terms that can encompass more than one technology and hopefully, would cover future technologies that are not yet known at the time of legislation.” One big advantage is flexibility (the law can apply to a broad, continuously shifting set of technologies); consider the continued viability of the tech-neutral Fourth Amendment versus the obviously archaic nature of the tech-specific ECPA. Another advantage is the promotion of innovation; tech-specific legislation can lock in a specific technology, thereby stifling innovation.


Birnhack continues by creating a typology of tech-related legislation. He examines factors such as whether the law regulates technology as a means or as an end; whether it actively promotes, passively permits or directly restricts technology; at which level of abstraction it relates to technology; and who is put in charge of regulation. Throughout the discussion, Birnhack’s broad, rich expertise in everything law and technology is evident; his examples range from copyright and patent law to nuclear non-proliferation.



Privacy, Masks and Religion

Photo: Basking & masking. In China, where a suntan carries a negative stigma, beachgoers wear face masks.

One of the most significant developments for privacy law over the past few years has been the rapid erosion of privacy in public. As recently as a decade ago, we benefitted from a fair degree of de facto privacy when walking the streets of a city or navigating a shopping mall. To be sure, we were in plain sight; someone could have seen and followed us; and we would certainly be noticed if we took off our clothes. After all, a public space was always less private than a home. Yet with the notable exception of celebrities, we would have generally benefitted from a fair degree of anonymity or obscurity. A great deal of effort, such as surveillance by a private investigator or team of FBI agents, was required to reverse that. [This, by the way, isn’t a post about US v. Jones, which I will write about later].


Now, with mobile tracking devices always on in our pockets; with GPS-enabled cars; surveillance cameras linked to facial recognition technologies; smart signage (billboards that target passersby based on their gender, age, or eventually identity); and devices with embedded RFID chips – privacy in public is becoming a remnant of the past.


Location tracking is already a powerful tool in the hands of both law enforcement and private businesses, offering a wide array of localized services from restaurant recommendations to traffic reports. Ambient social location apps, such as Glancee and Banjo, are increasingly popular, creating social contexts based on users’ location and enabling users to meet and interact.


Facial recognition is becoming more prevalent. This technology too can be used by law enforcement for surveillance, or by businesses to analyze certain characteristics of their customers, such as their age, gender or mood (facial detection), or to identify them outright (facial recognition). One such service, which was recently tested, allows individuals to check in to a location on Facebook through facial scanning.


Essentially, our face is becoming equivalent to a cookie, the ubiquitous online tracking device. Yet unlike cookies, faces are difficult to erase. And while cellular phones could in theory be left at home, we very rarely travel without them. How will individuals react to a world in which all traces of privacy in public are lost?



There is no new thing under the sun

Photo: Like its namesake, the European Data Protection Directive (“DPD”), this Mercedes is old, German-designed, clunky and noisy – yet effective. [Photo: Omer Tene]


Old habits die hard. Policymakers on both sides of the Atlantic are engaged in a Herculean effort to reform their respective privacy frameworks. While progress has been and will continue to be made for the next year or so, there is cause for concern that at the end of the day, in the words of the prophet, “there is no new thing under the sun” (Ecclesiastes 1:9).

The United States: Self Regulation

The United States legal framework has traditionally been a quiltwork of legislative patches covering specific sectors, such as health, financial, and children’s data. Significantly, information about individuals’ shopping habits and, more importantly, online and mobile browsing, location and social activities, has remained largely unregulated (see overview in my article with Jules Polonetsky, To Track or “Do Not Track”: Advancing Transparency and Individual Control in Online Behavioral Advertising). While increasingly crafty and proactive in its role as a privacy enforcer, the FTC has had to rely on the slimmest of legislative mandates, Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices”.


To be sure, the FTC has had impressive achievements: reaching consent decrees with Google and Facebook, both of which include 20-year privacy audits; launching a serious discussion of a “do-not-track” mechanism; establishing a global network of enforcement agencies; and more. However, there is a limit to the mileage that the FTC can squeeze out of its opaque legislative mandate. Protecting consumers against “deceptive acts or practices” does not amount to protecting privacy: companies remain at liberty to explicitly state that they will do anything and everything with individuals’ data (and thus do not “deceive” anyone when they act on their promise). And prohibiting “unfair acts or practices” is as vague a legal standard as can be; in fact, in some legal systems it might be considered anathema to fundamental principles of jurisprudence (nullum crimen sine lege). While some have heralded an emerging “common law of FTC consent decrees”, such “common law” leaves much to be desired, as it is based on non-transparent negotiations behind closed doors, resulting in short, terse orders.


This is why legislating the fundamental privacy principles, better known as the FIPPs (fair information practice principles), remains crucial. Without them, the FTC cannot do much more than enforce promises made in corporate privacy policies, which are largely acknowledged to be vacuous. Indeed, in its March 2012 “blueprint” for privacy protection, the White House called for legislation codifying the FIPPs (referred to by the White House as a “consumer privacy bill of rights”). Yet Washington insiders warn that the prospects of the FIPPs becoming law are slim, not only in an election year, but also after the elections, without major personnel changes in Congress.
