Author: Babak Siavoshy


What the Spokeo argument tells us about the future of information privacy law

This is a guest post. The views expressed are the author’s own and should not be attributed to anyone else.  

Yesterday’s oral argument in Spokeo v. Robins revealed a Court divided on the proper scope and application of standing doctrine. There was one point of agreement amongst the justices, however, with particular relevance to the future of information privacy law.

With the possible exception of Justice Sotomayor, no one on the Court seems ready to hold that a mere violation of the FCRA’s statutory requirements could confer standing on a plaintiff. Even Justice Kagan, who came out firmly in support of Robins, was hesitant to concede that a bare violation of the FCRA’s procedural requirements, such as the requirement that credit agencies post their 1-800 numbers, might constitute a redressable “injury in fact” for an affected consumer. In other words, in order to have an actionable claim under a consumer privacy law like the FCRA, plaintiffs have to do more than show that a defendant deviated from legally mandated data-handling rules (“thou shalt take reasonable steps not to publish misinformation”) with respect to their data; plaintiffs also have to show some additional substantive harm on top of that deviation.

If that’s the Court’s position, then it has broad implications for information privacy. That’s because information privacy law is currently understood and articulated almost exclusively as a set of data handling requirements on data holders, rather than as a list of substantive rights of (or harms to) data subjects.  Indeed, there is arguably no widely accepted theory of substantive information privacy rights.  Our current best understanding of information privacy — of what it means to get privacy right, and what it means to get privacy wrong — is almost entirely procedural, meaning that good and bad acts are primarily understood in terms of data handlers’ compliance with or deviations from well-accepted best practices.

Read More


Will the justices look themselves up on Spokeo?

As always, thanks to CoOp for the opportunity to guest post.  The views expressed are my own.

One of the more interesting cases slated for review by the Supreme Court next term is Spokeo v. Robins (here’s a WSJ blog post with an outline of some of the issues).  First things first: several regular and guest contributors to this blog have written a ‘friend of the court’ brief in the case.  You can find that brief here; SCOTUSblog has the dozens of other briefs supporting one side or the other.

While I’m planning to write more about the case’s substantive legal issues (which concern Article III standing), this post will be dedicated to the small bit of silliness outlined in the title.  Namely, what will the justices’ reactions be when they look themselves up on Spokeo’s service, and find results that may strike them as a bit… revealing?

You have to assume at least a few of the thirty-plus law clerks at the Supreme Court next term will test-run the free “people search” tool with their own names and — why not? — the names of their bosses.  Here’s what they will find displayed:

  • the justices’ various home addresses, home prices, and even a Google Earth photo of their residences;
  • the names of, and information about, the justices’ family members;
  • truncated phone numbers from various phones purported to belong to the justices (full numbers presumably can be unlocked with a subscription);
  • and social media accounts purportedly tied to the justices or their family members.

(I decided against posting screenshots of my test searches, although such screenshots would have undoubtedly been fair use in this context). Much of this information is available for free through Spokeo’s public search tool, with additional details made available with a subscription.  Obviously, I have no idea whether the results posted are accurate.

So what?

Read More


Why privacy matters even if you don’t care about it (or, privacy as a collective good)


“How much do people care about privacy?” This is a key, enduring question in ongoing debates about technological surveillance. As survey after survey on changing privacy attitudes is presented as proof that privacy is dead, one might wonder why we should bother protecting privacy at all.

One common answer is that the privacy surveys are wrong. If survey-makers only asked the right questions, they would see that people do actually care about their privacy. Just look at the most recent Pew Research Survey on privacy and surveillance. We should protect privacy, in other words, because people care about it.

While this answer has some appeal, I find it unsatisfying. For one, it’s hard to draw firm conclusions about privacy attitudes from the surveys I’ve seen (compare the Pew survey linked above to this Pew survey from the year before). Those attitudes might ebb and flow depending on the context and tools being used, and social facts about the people using them. More importantly, though, while privacy surveys can be very valuable, it’s not clear that they are relevant to key policy questions about whether and how we should protect privacy.

This leads to what I think is the better (but perhaps more controversial) answer to the puzzle: privacy is worth protecting even if it turns out most people don’t care about their own privacy. As counterintuitive as it seems, questions about privacy and surveillance don’t–and shouldn’t–hinge on individual privacy preferences.

That’s because questions about privacy rights, like questions about speech or voting or associative rights, are bigger than any individual or group. They are, instead, about the type of society we (including all those survey-takers) want to live in. Or as scholars have suggested, privacy is best thought of as a collective rather than merely an individual good.

Privacy is like voting

Many of our most cherished rights, such as expressive, associational, and voting rights, are understood to protect both individual and collective interests. The right to vote, for example, empowers individuals to cast ballots in presidential elections. But the broader purpose of voting rights–their raison d’être–is to secure collective or systemic goods such as democratic accountability.

Read More


Are Robots and Algorithms Taking Over?

The past half-decade has seen an uptick in thoughtful and influential scholarship on the potential risks — particularly to privacy and civil liberties — of emerging technologies. Regular readers of this blog will not be surprised to find works by several Concurring Opinions bloggers on any list of must-read commentary on the legal, ethical, and political dimensions of new data-driven technologies. Technological progress (or regress, depending on your point of view) has become one of the dominant narratives of our time, and it’s good that critiques of its darker implications have slowly but inexorably entered our political discourse.  

Still, there’s a smallish subset of tech commentary and criticism that is, in my view, overwrought. These are critiques that, on their face, seem to have no particular target other than technology tout court. They often include alarmist headlines that are not supported by their substance. They cite the marketing claims of technology vendors as if they were statistics. Their true targets are generally people, or political ideologies, rather than technology — a critical fact which often remains buried in the work. Sue Halpern isn’t usually guilty of being a part of this subset (for example, Halpern’s work on the surveillance disclosures has been thoughtful and important), but her latest effort, in the pages of the Review, comes close. (Though, as I’ll explain, she gets a lot right as well).

The headline: How Robots and Algorithms Are Taking Over.


Are robots and algorithms really taking over?  Will technological unemployment beget a new era of economic and social disorder? I’m skeptical.

Read More


Why empowering consumers won’t (by itself) stop privacy breaches

Thanks to CoOp for inviting me to guest blog once again. As with my other academic contributions, the views expressed here are my own and don’t necessarily reflect those of my employers past or present.

Who bears the costs of privacy breaches? It’s challenging enough to articulate the nature of privacy harms, let alone determine how the resulting costs should be allocated. Yet the question of “who pays” is an important, unavoidable, and in my view undertheorized one. The current default seems to be something akin to caveat emptor: consumers of services — both individually as data subjects and collectively as taxpayers — bear most of the risks, costs, and burdens of privacy breaches. This default is reflected, for example, in legal rules that place high burdens on consumers seeking legal redress in the wake of enterprise data breaches and liability caps for violations of privacy rules.

Ironically, the “consumer pays” default may also (unwittingly) be reinforced in well-meaning attempts to empower consumers. This has been one of the unintended consequences of decades of advocacy aiming to strengthen notice and consent requirements. These efforts take it for granted that data subjects are best-positioned to make effective data privacy and security decisions, and thus reinforce the idea that data subjects should bear the ultimate costs of failures to do so. (After all, they consented to the use!). And while notice and consent are still the centerpiece of every regulator’s data privacy toolbox, there’s reason to doubt that empowering consumers to make more informed and granular privacy decisions will reduce the incidence or the costs of privacy breaches.

Read More


Would a right to be forgotten survive First Amendment scrutiny? [discuss in the comments!]

I’ve had some interesting discussions with readers following my post on the EU right to be forgotten’s growing pains.  Here’s a question that’s emerged:

Would a right to be forgotten survive First Amendment scrutiny if it were passed under U.S. law?

To be sure, the current EU implementation of the right to be forgotten would almost certainly be vague and overbroad.  But I’m curious whether readers think there is some formulation of a right to be forgotten that would survive First Amendment scrutiny and still be broad enough to achieve the basic purpose of the law, which is to give individuals license to force the removal of online content that’s deemed to be outdated or irrelevant.

There is at least one precedent for this kind of speech regulation in the States: California’s “eraser” law, which requires service providers to give minors the right to delete content they themselves posted.  The right to delete your own content is a pretty narrow application of the right to be forgotten.  Would even that narrow application fail First Amendment analysis?  (Putting aside dormant commerce clause and other constitutional concerns).

I have some thoughts on all this myself, but since the readership and authorship of this blog includes distinguished First Amendment scholars, I’ll leave mine for the comments.


What’s ailing the right to be forgotten (and some thoughts on how to fix it)

The European Court of Justice’s recent “right to be forgotten” ruling is going through growing pains.  “A politician, a pedophile and a would-be killer are among the people who have already asked Google to remove links to information about their pasts.”  Add to that list former Merrill Lynch executive Stan O’Neal, who requested that Google hide links to an unflattering BBC News article about him.

All told, Google “has removed tens of thousands of links—possibly more than 100,000—from its European search results,” encompassing removal requests from 91,000 individuals (apparently about 50% of all requests are granted).  The company has been pulled into discussions with EU regulators about its implementation of the rules, with one regulator opining that the current system “undermines the right to be forgotten.”

The list of questions EU officials recently sent Google suggests they are more or less in the dark about the way providers are applying the ECJ’s ruling.  Meanwhile, European companies are looking to reap a profit from the uncertainty surrounding the application of these new rules.  The quote at the end of the Times article sums up the current state of affairs:

“No one really knows what the criteria is,” he said, in reference to Google’s response to people’s online requests. “So far, we’re getting a lot of noes. It’s a complete no man’s land.”

What (if anything) went wrong? As I’ll argue* below, a major flaw in the current implementation is that it puts the initial adjudication of right to be forgotten decisions in the hands of search engine providers, rather than representatives of the public interest.  This process leads to a lack of transparency and potential conflicts of interest in implementing what may otherwise be sound policy.

The EU could address these problems by reforming the current procedures to limit search engine providers’ discretion in day-to-day right to be forgotten determinations.  Inspiration for such an alternative can be found in other areas of law regulating the conduct of third party service providers, including the procedures for takedown of copyright-infringing content under the DMCA and those governing law enforcement requests for online emails.

I’ll get into more detail about the current implementation of the right to be forgotten and some possible alternatives after the jump.

Read More


Need an alternative to the third party doctrine? Look backwards, not forward. (Part I)


In light of the renewed discussion on the future of the third party doctrine on this blog and elsewhere (much of it attributable to Riley), I’d like to focus my next couple of posts on the oft-criticized rule, with the aim of exploring a few questions that will hopefully be interesting* to readers. For the purpose of these posts, I’m assuming readers are familiar with the third party doctrine and the arguments for and against it.

I’ll start with the following question: Let’s assume the Supreme Court decides to scale back the third party doctrine.  Where in the Court’s Fourth Amendment jurisprudence should the Justices look for an alternative approach?  I think this is an interesting and important question in light of the serious debate, both in academia and on the Supreme Court, about the third party doctrine’s effect on privacy in the information age.

One answer, which may represent the conventional wisdom, is that there simply is nothing in the Supreme Court’s existing precedent that supports a departure from the Court’s all-or-nothing approach to Fourth Amendment rights in Smith and Miller.  According to this answer, the Court’s only choice if it wishes to “reconsider” the third party doctrine is to create new, technology-specific rules that address the problems of the day.  (I’ve argued elsewhere that existing Fourth Amendment doctrine doesn’t bind the Court to rigid applications of its existing rules in the face of new technologies.)

A closer look at the Court’s Fourth Amendment jurisprudence suggests another option, however. The Supreme Court has not applied the underlying rationale from its third party doctrine cases to all forms of government intrusion.  Indeed, for almost a century the Supreme Court has been willing to depart from the all-or-nothing approach in another Fourth Amendment context: government searches of dwellings and homes.  As I’ll discuss below, the Supreme Court has used various tools—including the implied license rule in last year’s Jardines, the standard of “common understandings,” and the scope of consent rules in co-habitant cases—to allow homeowners, cohabitants, tenants, hotel guests, overnight guests, and the like to maintain Fourth Amendment rights against the government even though they have given third parties access to the same space.

In other words, it is both common sense and black letter law that a person can provide third parties access to his home for a particular purpose without losing all Fourth Amendment rights against government intrusion. Letting the landlord or the maid into your home for a limited purpose doesn’t necessarily give the police a license to enter without a warrant—even if the police persuade the landlord or the maid to let them in. Yet the Court has abandoned that type of nuance in the context of informational privacy, holding that sharing information with a third party means forgoing all Fourth Amendment rights against government access to that information (a principle that has eloquently been described as the “secrecy paradigm”). As many have noted, this rule has had a corrosive effect on Fourth Amendment rights in a world where sensitive information is regularly shared with third parties as a matter of course.

Why has the Court applied such a nuanced approach to Fourth Amendment rights when it comes to real property and the home, but not when it comes to informational privacy?  And have changes in technology undermined some of the rationale justifying this divergence? These are questions I’ll explore further in Part II of this post; in the meantime I’d love to hear what readers think about them. I’ll spend the rest of this post providing some additional background on the Court’s approach to privacy in the context of real property searches.

More after the jump.

Read More


Justice Roberts’s wit

One great thing about an opinion by Justice Roberts is, well, Justice Roberts’s writing:

The United States asserts that a search of all data stored on a cell phone is “materially indistinguishable” from searches of these sorts of physical items … That is like saying a ride on horseback is materially indistinguishable from a flight to the moon. Both are ways of getting from point A to point B, but little else justifies lumping them together.

Alternatively, the Government proposes that law enforcement agencies “develop protocols to address” concerns raised by cloud computing. … Probably a good idea, but the Founders did not fight a revolution to gain the right to government agency protocols.

In 1926, Learned Hand observed (in an opinion later quoted in Chimel) that it is “a totally different thing to search a man’s pockets and use against him what they contain, from ransacking his house for everything which may incriminate him.” … If his pockets contain a cell phone, however, that is no longer true.

Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans “the privacies of life,” Boyd, supra, at 630. The fact that technology now allows an individual to carry such information in his hand does not make the information any less worthy of the protection for which the Founders fought. Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple— get a warrant.


US v. Ganias and the Fourth Amendment right to delete

For those with more than a passing interest in the Fourth Amendment, I highly recommend Orin Kerr’s coverage of the very important Second Circuit computer search case US v. Ganias.

The ruling creates a Fourth Amendment right to the deletion of files that are over-collected pursuant to a computer search.  Computer searches often involve over-collection of data.  For example, the government will seize a computer and copy the entire hard drive, even though only a fraction of the files in the drive are responsive to a warrant.  This practice is tolerated because, among other things, it reduces the likelihood that the suspect will destroy evidence, and it tends to be less burdensome than confiscating the computer itself.

The Second Circuit’s decision creates a bookend to that tolerated over-collection.  The decision requires the government to delete computer files that are (1) copied as part of a judicially authorized computer search and (2) found (after the fact) to be unresponsive to the warrant authorizing the search.  It also impliedly requires the government to make reasonable efforts to segregate responsive from unresponsive files. The exact application of these rules — such as how long the government can retain over-collected data before having to purge unresponsive files — is still unclear.

The case could have a significant impact on police investigations in the Second Circuit. In addition to changing the way future computer searches are conducted, the case seems to have immediate implications for data currently sitting in government databases, if that data was collected pursuant to a computer search before the court’s ruling.  (The government previously assumed it had a right to retain those files indefinitely; the Court’s ruling seems to have extinguished that argument).

The case may also spawn a torrent of Rule 41(g) motions for return of “property” — here, the copies of files made by the government pursuant to a computer search. And it will raise questions about whether the Fourth Amendment’s right to delete should extend to cases of government over-seizure / over-copying of data outside the context of computer searches.  I’m hoping to say a bit more about the case in the coming days, so stay tuned.

UPDATE: Orin just posted again on Ganias here.