Author: Babak Siavoshy

Why privacy matters even if you don’t care about it (or, privacy as a collective good)

“How much do people care about privacy?” This is a key and enduring question in ongoing debates about technological surveillance. As survey after survey on changing privacy attitudes is presented as proof that privacy is dead, one might wonder why we should bother protecting privacy at all.

One common answer is that the privacy surveys are wrong. If survey-makers only asked the right questions, they would see that people actually do care about their privacy. Just look at the most recent Pew Research Center survey on privacy and surveillance. On this view, we should protect privacy rights because people care about privacy.

While this answer is fine, I find it unsatisfying. For one, it’s hard to draw firm conclusions about privacy attitudes from the surveys I’ve seen (compare the Pew survey linked above to this Pew survey from the year before). Those attitudes might ebb and flow depending on the context, the tools being used, and social facts about the people using them. More importantly, though, while privacy surveys can be very valuable, it’s not clear that they are relevant to key policy questions about whether and how we should protect privacy.

This leads to what I think is the better (but perhaps more controversial) answer to the puzzle: privacy is worth protecting even if it turns out that most people don’t care about their own privacy. As counterintuitive as it seems, questions about privacy and surveillance don’t–and shouldn’t–hinge on individual privacy preferences.

That’s because questions about privacy rights, like questions about speech or voting or associative rights, are bigger than any individual or group. They are, instead, about the type of society we (including all those survey-takers) want to live in. Or as scholars have suggested, privacy is best thought of as a collective rather than merely an individual good.

Privacy is like voting

Many of our most cherished rights, such as expressive, associational, and voting rights, are understood to protect both individual and collective interests. The right to vote, for example, empowers individuals to cast ballots in presidential elections. But the broader purpose of voting rights–their raison d’être–is to secure collective or systemic goods such as democratic accountability.

Are Robots and Algorithms Taking Over?

The past half-decade has seen an uptick in thoughtful and influential scholarship on the potential risks — particularly to privacy and civil liberties — of emerging technologies. Regular readers of this blog will not be surprised to find works by several Concurring Opinions bloggers on any list of must-read commentary on the legal, ethical, and political dimensions of new data-driven technologies. Technological progress (or regress, depending on your point of view) has become one of the dominant narratives of our time, and it’s good that critiques of its darker implications have slowly but inexorably entered our political discourse.  

Still, there’s a smallish subset of tech commentary and criticism that is, in my view, overwrought. These are critiques that, on their face, seem to have no particular target other than technology tout court. They often run alarmist headlines that their substance doesn’t support. They cite the marketing claims of technology vendors as statistics. Their true targets are generally people, or political ideologies, rather than technology — a critical fact that often remains buried in the work. Sue Halpern isn’t usually part of this subset (her work on the surveillance disclosures, for example, has been thoughtful and important), but her latest effort, in the pages of the New York Review of Books, comes close. (Though, as I’ll explain, she gets a lot right as well.)

The headline: How Robots and Algorithms Are Taking Over.

Are robots and algorithms really taking over?  Will technological unemployment beget a new era of economic and social disorder? I’m skeptical.

Why empowering consumers won’t (by itself) stop privacy breaches

Thanks to CoOp for inviting me to guest blog once again. As with my other academic contributions, the views expressed here are my own and don’t necessarily reflect those of my employers past or present.

Who bears the costs of privacy breaches? It’s challenging enough to articulate the nature of privacy harms, let alone determine how the resulting costs should be allocated. Yet the question of “who pays” is an important, unavoidable, and in my view undertheorized one. The current default seems to be something akin to caveat emptor: consumers of services — both individually as data subjects and collectively as taxpayers — bear most of the risks, costs, and burdens of privacy breaches. This default is reflected, for example, in legal rules that place high burdens on consumers seeking legal redress in the wake of enterprise data breaches, and in liability caps for violations of privacy rules.

Ironically, the “consumer pays” default may also (unwittingly) be reinforced by well-meaning attempts to empower consumers. This has been one of the unintended consequences of decades of advocacy aiming to strengthen notice and consent requirements. These efforts take it for granted that data subjects are best positioned to make effective data privacy and security decisions, and thus reinforce the idea that data subjects should bear the ultimate costs of failures to do so. (After all, they consented to the use!) And while notice and consent are still the centerpiece of every regulator’s data privacy toolbox, there’s reason to doubt that empowering consumers to make more informed and granular privacy decisions will reduce the incidence or the costs of privacy breaches.

Would a right to be forgotten survive First Amendment scrutiny? [discuss in the comments!]

I’ve had some interesting discussions with readers following my post on the EU right to be forgotten’s growing pains.  Here’s a question that’s emerged:

Would a right to be forgotten survive First Amendment scrutiny if it were enacted into U.S. law?

To be sure, the current EU implementation of the right to be forgotten would almost certainly be struck down as vague and overbroad.  But I’m curious whether readers think there is some formulation of a right to be forgotten that would survive First Amendment scrutiny and still be broad enough to achieve the basic purpose of the law, which is to give individuals license to force the removal of online content deemed outdated or irrelevant.

There is at least one precedent for this kind of speech regulation in the States: California’s “eraser” law, which requires service providers to give minors the right to delete content they themselves posted.  The right to delete your own content is a pretty narrow application of the right to be forgotten.  Would even that narrow application fail First Amendment analysis?  (Putting aside dormant commerce clause and other constitutional concerns).

I have some thoughts on all this myself, but since the readership and authorship of this blog includes distinguished First Amendment scholars, I’ll leave mine for the comments.

What’s ailing the right to be forgotten (and some thoughts on how to fix it)

The European Court of Justice’s recent “right to be forgotten” ruling is going through growing pains.  “A politician, a pedophile and a would-be killer are among the people who have already asked Google to remove links to information about their pasts.”  Add to that list former Merrill Lynch executive Stan O’Neal, who requested that Google hide links to an unflattering BBC News article about him.

All told, Google “has removed tens of thousands of links—possibly more than 100,000—from its European search results,” encompassing removal requests from 91,000 individuals (apparently about 50% of all requests are granted).  The company has been pulled into discussions with EU regulators about its implementation of the rules, with one regulator opining that the current system “undermines the right to be forgotten.”

The list of questions EU officials recently sent Google suggests they are more or less in the dark about the way providers are applying the ECJ’s ruling.  Meanwhile, European companies like forget.me are looking to reap a profit from the uncertainty surrounding the application of these new rules.  The quote at the end of the Times article sums up the current state of affairs:

“No one really knows what the criteria is,” he said, in reference to Google’s response to people’s online requests. “So far, we’re getting a lot of noes. It’s a complete no man’s land.”

What (if anything) went wrong? As I’ll argue* below, a major flaw in the current implementation is that it puts the initial adjudication of right to be forgotten decisions in the hands of search engine providers, rather than representatives of the public interest.  This process leads to a lack of transparency and potential conflicts of interest in implementing what may otherwise be sound policy.

The EU could address these problems by reforming the current procedures to limit search engine providers’ discretion in day-to-day right to be forgotten determinations.  Inspiration for such an alternative can be found in other areas of law regulating the conduct of third party service providers, including the procedures for takedown of copyright-infringing content under the DMCA and those governing law enforcement requests for online emails.

I’ll get into more detail about the current implementation of the right to be forgotten and some possible alternatives after the jump.

Need an alternative to the third party doctrine? Look backwards, not forward. (Part I)

In light of the renewed discussion on the future of the third party doctrine on this blog and elsewhere (much of it attributable to Riley), I’d like to focus my next couple of posts on the oft-criticized rule, with the aim of exploring a few questions that will hopefully be interesting* to readers. For the purpose of these posts, I’m assuming readers are familiar with the third party doctrine and the arguments for and against it.

I’ll start with the following question: Let’s assume the Supreme Court decides to scale back the third party doctrine.  Where in the Court’s Fourth Amendment jurisprudence should the Justices look for an alternative approach?  I think this is an interesting and important question in light of the serious debate, both in academia and on the Supreme Court, about the third party doctrine’s effect on privacy in the information age.

One answer, which may represent the conventional wisdom, is that there simply is nothing in the Supreme Court’s existing precedent that supports a departure from the Court’s all-or-nothing approach to Fourth Amendment rights in Smith and Miller.  According to this answer, the Court’s only choice if it wishes to “reconsider” the third party doctrine is to create new, technology-specific rules that address the problems of the day.  (I’ve argued elsewhere that existing Fourth Amendment doctrine doesn’t bind the Court to rigid applications of its existing rules in the face of new technologies.)

A closer look at the Court’s Fourth Amendment jurisprudence suggests another option, however. The Supreme Court has not applied the underlying rationale from its third party doctrine cases to all forms of government intrusion.  Indeed, for almost a century the Supreme Court has been willing to depart from the all-or-nothing approach in another Fourth Amendment context: government searches of dwellings and homes.  As I’ll discuss below, the Supreme Court has used various tools—including the implied license rule in last year’s Jardines, the standard of “common understandings,” and the scope-of-consent rules in cohabitant cases—to allow homeowners, cohabitants, tenants, hotel guests, overnight guests, and the like to maintain Fourth Amendment rights against the government even though they have given third parties access to the same space.

In other words, it is both common sense and black letter law that a person can provide third parties access to his home for a particular purpose without losing all Fourth Amendment rights against government intrusion. Letting the landlord or the maid into your home for a limited purpose doesn’t necessarily give the police a license to enter without a warrant—even if the police persuade the landlord or the maid to let them in. Yet the Court has abandoned that type of nuance in the context of informational privacy, holding that sharing information with a third party means forgoing all Fourth Amendment rights against government access to that information (a principle that has eloquently been described as the “secrecy paradigm”). As many have noted, this rule has had a corrosive effect on Fourth Amendment rights in a world where sensitive information is regularly shared with third parties as a matter of course.

Why has the Court applied such a nuanced approach to Fourth Amendment rights when it comes to real property and the home, but not when it comes to informational privacy?  And have changes in technology undermined some of the rationale justifying this divergence? These are questions I’ll explore further in Part II of this post; in the meantime I’d love to hear what readers think about them. I’ll spend the rest of this post providing some additional background on the Court’s approach to privacy in the context of real property searches.

More after the jump.

Chief Justice Roberts’s wit

One great thing about an opinion by Chief Justice Roberts is, well, the Chief Justice’s writing:

The United States asserts that a search of all data stored on a cell phone is “materially indistinguishable” from searches of these sorts of physical items … That is like saying a ride on horseback is materially indistinguishable from a flight to the moon. Both are ways of getting from point A to point B, but little else justifies lumping them together.

Alternatively, the Government proposes that law enforcement agencies “develop protocols to address” concerns raised by cloud computing. … Probably a good idea, but the Founders did not fight a revolution to gain the right to government agency protocols.

In 1926, Learned Hand observed (in an opinion later quoted in Chimel) that it is “a totally different thing to search a man’s pockets and use against him what they contain, from ransacking his house for everything which may incriminate him.” … If his pockets contain a cell phone, however, that is no longer true.

Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans “the privacies of life,” Boyd, supra, at 630. The fact that technology now allows an individual to carry such information in his hand does not make the information any less worthy of the protection for which the Founders fought. Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple— get a warrant.

US v. Ganias and the Fourth Amendment right to delete

For those with more than a passing interest in the Fourth Amendment, I highly recommend Orin Kerr’s coverage of the very important Second Circuit computer search case US v. Ganias.

The ruling creates a Fourth Amendment right to the deletion of files that are over-collected pursuant to a computer search.  Computer searches often involve over-collection of data.  For example, the government will seize a computer and copy the entire hard drive, even though only a fraction of the files in the drive are responsive to a warrant.  This practice is tolerated because, among other things, it reduces the likelihood that the suspect will destroy evidence, and it tends to be less burdensome than confiscating the computer itself.

The Second Circuit’s decision creates a bookend to that tolerated over-collection.  The decision requires the government to delete computer files that are (1) copied as part of a judicially authorized computer search and (2) found (after the fact) to be unresponsive to the warrant authorizing the search.  It also impliedly requires the government to make reasonable efforts to segregate responsive from unresponsive files. The exact application of these rules — such as how long the government can retain data over-collected pursuant to a computer search before having to purge unresponsive files — is still unclear.

The case could have a significant impact on police investigations in the Second Circuit. In addition to changing the way future computer searches are conducted, the case seems to have immediate implications for data currently sitting in government databases, if that data was collected pursuant to a computer search before the court’s ruling.  (The government previously assumed it had a right to retain those files indefinitely; the Court’s ruling seems to have extinguished that argument.)

The case may also spawn a torrent of Rule 41(g) motions for return of “property” — here, the copies of files made by the government pursuant to a computer search. And it will raise questions about whether the Fourth Amendment’s right to delete should extend to cases of government over-seizure / over-copying of data outside the context of computer searches.  I’m hoping to say a bit more about the case in the coming days, so stay tuned.

UPDATE: Orin just posted again on Ganias here.

Dept. of just-for-fun SCOTUS speculation – Breyer and Roberts are “due”

There are some big Supreme Court decisions coming down in the next few days, and the three remaining cases from the April sitting have a tech angle: Riley / Wurie (search of cell phones incident to arrest) and Aereo (online broadcasting copyright case).

Can we glean anything about who will write the majority opinions in these cases?  Are there any Justices that seem “due” for a majority opinion, based on the assignments so far?  This kind of speculation isn’t usually worth the trouble, but SCOTUSblog’s already done the work, so why not? [image/stats from SCOTUSblog].*

[Image: SCOTUSblog’s October Term 2013 Stat Pack chart of majority opinion assignments by sitting]

You’ll see that only two Justices, Roberts and Breyer, have yet to write a majority opinion in a case argued during the April sitting. Roberts and Breyer also have the fewest authored opinions for the term, at 5 and 4, respectively.

So perhaps Breyer and Roberts are the more likely candidates to write the majority opinions in the only remaining cases from the April sitting: Riley, Wurie, and Aereo. What does that tell us? Again, probably not much — but here goes anyway:

  • Let’s start with cell phone searches.  Justice Breyer is associated with the Court’s liberal wing, but he has a recent trend of voting for the government in big Fourth Amendment cases, most recently Maryland v. King and Florida v. Jardines.  Then again, he voted against the government in another recent technological surveillance case, Clapper v. Amnesty International. So, maybe a wash.
  • While there’s a perception that Chief Justice Roberts tends to be pro-government in Fourth Amendment cases, he can’t be accused of being stuck in the past when it comes to the effects of technological change on Fourth Amendment law.  Here’s his first comment during oral argument in the GPS tracking case US v. Jones, responding to the government’s claim that past rulings regarding beeper surveillance should apply to GPS tracking: “That was 30 years ago. The technology is very different, and you get a lot more information from the GPS surveillance than you do from following a beeper… [GPS tracking] seems to me dramatically different.”  Riley has the government arguing that cases authorizing the police to search a cigarette package incident to arrest should apply to cell phones. Isn’t a cell phone “dramatically different” from a cigarette package in much the same way Roberts described in the Jones quote?
  • What about copyright?  Well, some 40 years ago Justice Breyer wrote a Harvard Law Review article critiquing copyright expansionism.  The article was written before the Copyright Act of 1976 was passed, and … OK, it’s over 40 years old.  But Breyer has continued to exhibit pragmatism (in the linked example, in dissent) when it comes to copyright issues.
  • On the other hand, Chief Justice Roberts’s money quote from the Aereo oral argument has got to make the tech start-up a bit nervous about a Roberts opinion in that case: “Your technological model is based solely on circumventing legal prohibitions that you don’t want to comply with.”

All told, I’m not sure we’re much closer to a prediction (and any such prediction will be moot in a few days).  But perhaps CoOp readers will have additional thoughts.

*As noted, the image/stats are from SCOTUSblog (I added the red boxes).  Here’s SCOTUSblog’s recommended citation for this content: Kedar Bhatia, Updated October Term 2013 Stat Pack, SCOTUSblog (Jun. 18, 2014, 10:00 AM), http://www.scotusblog.com/2014/06/updated-october-term-2013-stat-pack-3/

Does familiarity with technology affect the way judges vote on privacy?

Judges with daughters are more likely to vote in favor of women’s rights than ones with only sons. Or so reports the New York Times, citing a study by Maya Sen and Adam Glynn.  The study, which considered about 2,500 votes by 224 appeals court judges, found that having a daughter “corresponds to a 7 percent increase in the proportion of cases in which a judge will vote in a feminist direction.”

These findings, some would argue, confirm what many already assumed to be true.  Personal experience matters in judicial decision making, at least if by “matters” we mean “has predictive force.”  (Whether it should matter is of course a different issue, which others have written about extensively).

Assuming there’s some truth to this, is there a corollary in the context of privacy rulings?  Is part of the reason the Supreme Court ruled one way in Jones and another in Florence that all the justices ride in cars, while none are likely to be strip-searched in a jail cell?  Are judges who emigrated from totalitarian regimes more sensitive to the perils of government overreach?

What about when the question involves the privacy implications of emerging technologies?   If a judge is an avid smartphone user, does that make her more likely to rule in a way that protects smartphones from warrantless searches? Would a judge that uses the internet be more likely to protect the privacy of online search or browsing history?  What about email or social media use?

Exposure to technology could of course cut both ways.  Perhaps tech-savvy judges will be more used to — and therefore more amenable to — daily tradeoffs between privacy and convenience.  Or perhaps familiarity with technology simply gives judges more nuanced attitudes towards privacy, but does not affect their overall voting pattern on privacy/tech issues one way or another.

If there are law students out there looking for an interesting research project, it would be fascinating to see if there’s a correlation between judicial age, or other factors reasonably associated with tech savvy, and judicial decision making on legal issues involving privacy and emerging technologies — and if so, which way it cuts.  And if readers know of existing work in this area, do share.