

Predictive Policing and Technological Due Process

Police departments have been increasingly crunching data to identify criminal hot spots and to allocate policing resources to address them. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals. Might governmental systems someday assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” not worthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems make decisions about our property (perhaps licenses denied due to a poor risk score), liberty (watch-list designations leading to liberty intrusions), and life (who knows, with drones in the picture), due process concerns would be implicated.

What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that very fact amount to reasonable suspicion to stop that person? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer’s prediction about the location may inform an officer’s thinking. An officer might credit the computer’s prediction and view everyone in a particular zone differently. Concerns about automation bias are real. Humans defer to systems: surely a computer’s judgment is more trustworthy, given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with bias, and the systems may be filled with incomplete and erroneous information. Given that reality, police departments would be wise to train officers to guard against automation bias; such training has proven effective in other contexts. In the longer term, making pre-commitments to training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of a stop and frisk would of course be addressed at the retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.
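To make the worry concrete, here is a toy, purely hypothetical sketch of how a hot-spot scorer might work; the zones, reports, threshold, and scoring rule are all invented for illustration. The point is that the output inherits whatever gaps and errors sit in the input reports, while wearing the costume of neutral computation.

```python
from collections import Counter

# Toy hot-spot scorer: rank zones by historical incident reports.
# If reporting is incomplete or skewed, the "prediction" simply
# reproduces that skew with a veneer of computational neutrality.

reports = [  # (zone, reported_incident) -- hypothetical data
    ("A", "theft"), ("A", "theft"), ("A", "assault"),
    ("B", "theft"),
    # Zone C may have as much crime as A, but if residents rarely
    # report it, the algorithm never "sees" it.
]

def hot_zones(reports, threshold=2):
    counts = Counter(zone for zone, _ in reports)
    return {zone for zone, n in counts.items() if n >= threshold}

print(hot_zones(reports))  # {'A'} -- an artifact of the data, not ground truth
```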

H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.


What Is Personally Identifiable Information (PII)? Finding Common Ground in the EU and US

This post was co-authored by Professor Paul Schwartz.

We recently released a draft of our new essay, Reconciling Personal Information in the European Union and the United States, and we want to highlight some of its main points here.

The privacy law of the United States (US) and European Union (EU) differs in many fundamental ways, greatly complicating commerce between the US and EU.  At the broadest level, US privacy law focuses on redressing consumer harm and balancing privacy with efficient commercial transactions.  In the EU, privacy is hailed as a fundamental right that trumps other interests.  The result is that EU privacy protections are much more restrictive on the use and transfer of personal data than US privacy law.

Numerous attempts have been made to bridge the gap between US and EU privacy law, but a very large initial hurdle stands in the way.  The two bodies of law can’t even agree on the scope of protection, let alone the substance of the protections.  The scope of protection of privacy laws turns on the definition of “personally identifiable information” (PII).  If there is PII, privacy laws apply.  If PII is absent, privacy laws do not apply.

In the US, the law provides multiple definitions of PII, most focusing on whether the information pertains to an identified person.  In contrast, the EU has a single definition of personal data that encompasses all information identifiable to a person.  Even if the data alone cannot be linked to a specific individual, if it is reasonably possible to use the data in combination with other information to identify a person, then the data is PII.
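A toy example shows why the EU’s “identifiable” standard sweeps so broadly. The records and field names below are invented, but the linkage technique is the standard one: data that carries no name can still single a person out when joined with an auxiliary source on shared quasi-identifiers.

```python
# Hypothetical "de-identified" records: no names, just quasi-identifiers.
deidentified = [
    {"zip": "20001", "birth": "1970-03-02", "sex": "F", "diagnosis": "..."},
    {"zip": "20001", "birth": "1985-11-30", "sex": "M", "diagnosis": "..."},
]

# Auxiliary public data (e.g., a voter roll) that includes names.
voter_roll = [
    {"name": "Jane Doe", "zip": "20001", "birth": "1970-03-02", "sex": "F"},
]

KEYS = ("zip", "birth", "sex")

def reidentify(records, auxiliary):
    """Join the two datasets on shared quasi-identifiers."""
    index = {tuple(p[k] for k in KEYS): p["name"] for p in auxiliary}
    return [
        {**r, "name": index[tuple(r[k] for k in KEYS)]}
        for r in records
        if tuple(r[k] for k in KEYS) in index
    ]

# Under the EU approach, the "de-identified" records are still personal
# data, because this combination is reasonably possible.
print(reidentify(deidentified, voter_roll))
```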

In our essay, Reconciling Personal Information in the European Union and the United States, we argue that both the US and EU approaches to defining PII are flawed.  We also contend that a tiered approach to the concept of PII can bridge the differences between the US and EU approaches.



Prism and Its Relationship to Clouds, Security, Jurisdiction, and Privacy

In January I wrote a piece, “Beyond Data Location: Data Security in the 21st Century,” for Communications of the ACM. I went into the current facts about data security (basic point: keeping data moving often helps security) and how they clash with jurisdictional needs and interests. As part of that essay I wrote:

A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.
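What might such tagging look like? The sketch below is purely illustrative (the class, field names, toy policy, and audit log are my inventions, not the essay’s design): each record carries provenance metadata that a provider could check before honoring a government request, and the same tag that enables the check is exactly what makes the data easier to surveil.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TaggedRecord:
    payload: bytes
    origin_country: str          # provenance tag: where the data subject resides
    access_log: list = field(default_factory=list)

    def government_request(self, requesting_country: str) -> bool:
        """Honor a request only under some policy; log it either way,
        so the demand is visible later (the transparency point above)."""
        permitted = requesting_country == self.origin_country  # toy policy
        self.access_log.append(
            (datetime.now(timezone.utc).isoformat(), requesting_country, permitted)
        )
        return permitted

record = TaggedRecord(payload=b"...", origin_country="DE")
print(record.government_request("US"))   # False under this toy policy
print(record.access_log)
```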

Prism shows just how much a new balance is needed. There are many areas to sort through to reach that balance, too many to explore in a blog post. But as I argued in the essay, I think the way to proceed is to pull in engineers (not just industry ones), law enforcement, civil society groups, and, oh yes, lawyers to look at what can be done to address the current imbalance.


Employers and Schools that Demand Account Passwords and the Future of Cloud Privacy

In 2012, the media erupted with news about employers demanding employees provide them with their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.

I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.

But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.

The Widespread Hunger for Access

Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
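The USA Today description amounts to a keyword filter wired to an alert. A minimal, hypothetical sketch of such a monitor might look like the following; the flagged-word list and the alert function are invented stand-ins for whatever the vendors actually ship, but they show how crude the matching can be and why innocuous posts can trip it.

```python
import re

# Hypothetical watchlist -- monitoring products ship lists like this.
FLAGGED = {"party", "beer", "fight"}

def check_post(athlete: str, post: str) -> None:
    """Email the coach whenever a post contains a flagged word."""
    hits = {w for w in re.findall(r"\w+", post.lower()) if w in FLAGGED}
    if hits:
        send_alert(coach_email="coach@example.edu",
                   subject=f"Flagged post by {athlete}",
                   body=f"Matched words: {sorted(hits)}\n\n{post}")

def send_alert(coach_email: str, subject: str, body: str) -> None:
    # Stand-in for an email call; prints instead of sending.
    print(f"To: {coach_email}\nSubject: {subject}\n{body}")

# Note the false positive: "fight" here is about a fundraiser.
check_post("jsmith", "Join our food-drive fight against hunger!")
```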

Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses the case of a parent’s outrage over school officials demanding access to a 13-year-old girl’s Facebook account. According to the mother, “The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there’s these anonymous people reading through this information.”

In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”



Harvard Law Review Privacy Symposium Issue

The privacy symposium issue of the Harvard Law Review is hot off the presses.  Here are the articles:

Introduction: Privacy Self-Management and the Consent Dilemma
Daniel J. Solove

What Privacy Is For
Julie E. Cohen

The Dangers of Surveillance
Neil M. Richards

The EU-U.S. Privacy Collision: A Turn to Institutions and Procedures
Paul M. Schwartz

Toward a Positive Theory of Privacy Law
Lior Jacob Strahilevitz


Privacy Self-Management and the Consent Dilemma

I’m pleased to share with you my new article in the Harvard Law Review entitled Privacy Self-Management and the Consent Dilemma, 126 Harvard Law Review 1880 (2013). You can download it for free on SSRN. It is a short piece (24 pages), so you can read it in one sitting.

Here are some key points in the Article:

1. The current regulatory approach for protecting privacy involves what I refer to as “privacy self-management” – the law provides people with a set of rights to enable them to decide how to weigh the costs and benefits of the collection, use, or disclosure of their information. People’s consent legitimizes nearly any form of collection, use, and disclosure of personal data. Unfortunately, privacy self-management is being asked to do work beyond its capabilities. Privacy self-management does not provide meaningful control over personal data.

2. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model.

3. People cannot appropriately self-manage their privacy due to a series of structural problems. There are too many entities collecting and using personal data for it to be feasible for people to manage their privacy separately with each one. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities (see the sketch after this list). It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses.

4. Privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically — not merely at the individual level.

5. In order to advance, privacy law and policy must confront a complex and confounding dilemma with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution – paternalistic measures – even more directly denies people the freedom to make consensual choices about their data.

6. The way forward involves (1) developing a coherent approach to consent, one that accounts for the social science discoveries about how people make decisions about personal data; (2) recognizing that people can engage in privacy self-management only selectively; (3) adjusting privacy law’s timing to focus on downstream uses; and (4) developing more substantive privacy rules.
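To illustrate point 3 above, here is a hypothetical sketch of aggregation. The observations are invented, and each is innocuous on its own; no single collector holds more than one of them. Combined, they invite an inference the person never consented to, which is why costs are better assessed cumulatively.

```python
from collections import defaultdict

# Hypothetical observations, each held by a different entity.
observations = [
    ("user42", "retailer",  "bought prenatal vitamins"),
    ("user42", "search",    "searched 'ob-gyn near me'"),
    ("user42", "location",  "weekly visits to a maternity clinic"),
]

def aggregate(obs):
    profile = defaultdict(list)
    for person, source, fact in obs:
        profile[person].append((source, fact))
    return profile

# No single entity could draw the inference; the aggregate invites it.
for person, facts in aggregate(observations).items():
    print(person, "->", facts)
```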

The full article is here.

Cross-posted on LinkedIn.


Exponential Hacks

As All Things Digital’s Kara Swisher reports, LivingSocial experienced a significant hack the other day: over 50 million users’ email addresses, dates of birth, and encrypted passwords were leaked into the hands of Russian hackers (or so it seems). This hack comes on the heels of data breaches at LinkedIn and Zappos. That the passwords were encrypted (in practice, typically hashed) just means that users had better change their passwords, and fast, because those protections can be cracked in time. A few years ago, I blogged about leaked personal records passing the 500 million mark. Hundreds of millions seems like child’s play today.
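Why the urgency? A leaked password file is rarely reversible “encryption”; it is usually a set of hashes, and weak passwords fall to offline guessing. The sketch below is purely illustrative: a tiny invented wordlist and unsalted SHA-1 for brevity, whereas real dumps are often salted, which slows but does not stop guessing of common passwords.

```python
import hashlib

def sha1(s: str) -> str:
    return hashlib.sha1(s.encode()).hexdigest()

# Hypothetical leaked hashes (unsalted SHA-1 for brevity).
leaked = {"alice": sha1("password1"), "bob": sha1("Tr0ub4dor&3")}

WORDLIST = ["123456", "password", "password1", "letmein"]  # real lists: millions

def crack(leaked_hashes, wordlist):
    # Precompute hash -> plaintext for every candidate, then match.
    table = {sha1(w): w for w in wordlist}
    return {user: table[h] for user, h in leaked_hashes.items() if h in table}

print(crack(leaked, WORDLIST))  # {'alice': 'password1'} -- bob survives this list
```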

This raises some important questions about what we mean when we talk about personally identifiable information (PII). Paul Schwartz and my co-blogger Dan Solove have done terrific work helping legislators devise meaningful definitions of PII in a world of reidentification. Paul Ohm is currently working on an important project providing a coherent account of sensitive information in the context of current data protection laws. Are someone’s password and date of birth sensitive information deserving special privacy protection? Beyond the obvious health, credit, and financial information, what other sorts of data do we consider sensitive, and why? Answers to these questions are crucial to companies formulating best practices, to the FTC as it continues its robust enforcement of privacy promises and its pursuit of deceptive practices, and to legislators considering private-sector privacy regulation of data brokers, as in Senator John Rockefeller’s current efforts.


“Brain Spyware”

As if we don’t have enough to worry about, now there’s spyware for your brain.  Or, there could be.  Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers.


Netflix, Facebook, and Social Sharing

Just as Neil Richards’s The Perils of Social Reading (101 Georgetown Law Journal 689 (2013)) is out in final form, Netflix has released its new social sharing features in partnership with that privacy protector, Facebook. Not that working with Google, Apple, or Microsoft would be much better. There may be things I am missing, but I don’t see how turning on this feature is wise, given that it requires you to remember not to share and thus makes sharing a bit leakier than you may want.

Apparently you have to connect your Netflix account to Facebook to get the feature to work. The way it works after that link is made poses problems.

According to SlashGear, two rows appear. One, called Friends’ Favorites, tells you just that. Now, consider that the recommendation algorithm works in part by your rating movies. So if you want to signal that odd documentaries, disturbing art movies, and guilty pleasures (ranging from The Hangover to Twilight) are of interest, you should rate them highly. If you turn this feature on, are all your old ratings shared? And cool! Now everyone knows that you think March of the Penguins and Die Hard are 5 stars. The other row:

is called “Watched By Your Friends,” and it consists of movies and shows that your friends have recently watched. It provides a list of all your Facebook friends who are on Netflix, and you can cycle through individual friends to see what they recently watched. This is an unfiltered list, meaning that it shows all the movies and TV shows that your friends have agreed to share.

Of course, you can control what you share and what you don’t want to share, so if there’s a movie or TV show that you watch, but you don’t want to share it with your friends, you can simply click on the “Don’t Share This” button under each item. Netflix is rolling out the feature over the next couple of days, and the company says that all US members will have access to Netflix social by the end of the week.

Right. So imagine you forget that your viewing habits are broadcast. And what about Roku and other streaming devices? How does one ensure that the “Don’t Share This” button is used before the word goes out that you watched one, two, or three movies about drugs, sex, gay culture, how great guns are, etc.?

As Richards puts it, “the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition.” So too for video and really any information consumption.
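Richards’s point about defaults can be put in a few lines of hypothetical code. Nothing below is Netflix’s actual implementation; the toy just contrasts share-by-default, where privacy depends on remembering to opt out of each item, with private-by-default, where forgetting costs nothing.

```python
from dataclasses import dataclass

@dataclass
class WatchEvent:
    title: str
    shared: bool

def watch(title: str, share_by_default: bool, opted_out: set) -> WatchEvent:
    # Under share-by-default, every title you forget to opt out of leaks.
    return WatchEvent(title, shared=share_by_default and title not in opted_out)

history = [watch(t, share_by_default=True, opted_out={"Sensitive Documentary"})
           for t in ("Die Hard", "Sensitive Documentary", "Forgotten One")]
print([e.title for e in history if e.shared])
# "Forgotten One" is broadcast, because the burden fell on remembering.
```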


New Edition of Solove & Schwartz’s Privacy Law Fundamentals: Must-Read (and Check out the Video)

Privacy leading lights Dan Solove and Paul Schwartz have recently released the 2013 edition of Privacy Law Fundamentals, a must-have for privacy practitioners, scholars, students, and really anyone who cares about privacy.

Privacy Law Fundamentals is an essential primer on the state of privacy law, capturing up-to-date developments in legislation, FTC enforcement actions, and cases here and abroad.  As Chief Privacy Officers like Intel’s David Hoffman and renowned privacy practitioners like Hogan’s Chris Wolf and Covington’s Kurt Wimmer agree, Privacy Law Fundamentals is an “essential” and “authoritative guide” to privacy law, compact and incredibly useful.  For those of you who know Dan and Paul, their work is not only incredibly wise and helpful but also dispensed in person with serious humor.  Check out this YouTube video, “Privacy Law in 60 Seconds,” to see what I mean.  I think Psy may have a run for his money when it comes to making us smile.