Category: Privacy (Consumer Privacy)


The Greatest Threat to Privacy Part II: Why I Worry More About ISPs Than Google

In a prior post, I began to explain why ISPs pose the greatest threat to privacy in modern life. I argued that many ISPs are likely to begin experimenting with new, more invasive forms of surveillance that rely, in part, on so-called deep-packet inspection technology. I am grateful for the vigorous debate that followed in the comments, and I know my article will be much stronger once I incorporate what I have learned reading and responding to them.

The last post led only to the conclusion that ISPs pose a great threat to privacy; to call this the greatest threat in society, I need to answer the question, “compared to what?” In particular, the most common response I have heard to my article is, “Doesn’t Google threaten privacy more?” In this post, let me explain why I worry more about the threat to privacy from ISPs than from Google.

Read More


The Greatest Threat to Privacy: The Internet Service Provider

I have recently posted on SSRN the article that ate my summer, The Rise and Fall of Invasive ISP Surveillance. I make many claims in this article, but the principal one, and the one I want to spend a few posts elaborating and defending, is found in the first sentence of the abstract: “Nothing in society poses as grave a threat to privacy as the Internet Service Provider (ISP).” In this first post, let me explain why ISPs pose an enormous threat to privacy:

Simply put, your ISP has the means, motive, and opportunity to scrutinize nearly every communication departing from and arriving at your Internet-connected computer:

Opportunity: Because your ISP serves as the gateway between your computer and the rest of the Internet, every e-mail message, IM, and tweet you send and receive; every web page and p2p-traded file you download; and every VoIP call you place travels first through your ISP’s routers.

Means: A decade ago, your ISP lacked the tools to efficiently analyze every communication crossing its network, because computers were relatively slow and networks were relatively fast. I use the analogy of the policeman on the side of the road, scrutinizing the passing cars. If the policeman is slow and the road is wide and full of speeding cars, the policeman won’t be able to keep up.

Over the past decade, while network bandwidth has increased, computer processing power has increased at a faster rate, and your ISP can now analyze more information, more inexpensively than before. The roads are wider today, but the policemen are smarter and more efficient. An entire industry–the deep-packet inspection industry–has arisen to provide hardware and software tools for massive, widespread, automated surveillance.
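The difference between the old, cheap monitoring and deep-packet inspection can be illustrated with a toy sketch: shallow inspection reads only the packet headers (who is talking to whom), while deep inspection also reads the payload. This is a purely illustrative Python sketch, not how any vendor’s DPI appliance works; the packet format and keyword list are invented for the example.

```python
# Toy illustration of shallow vs. deep packet inspection.
# The "packet" here is an invented dict, not a real IP/TCP frame;
# real DPI appliances parse actual frames in hardware at line rate.

def shallow_inspect(packet):
    """Header-only inspection: the old, cheap kind of monitoring.
    Sees only who is talking to whom, and on what port."""
    return {
        "src": packet["src_ip"],
        "dst": packet["dst_ip"],
        "port": packet["dst_port"],
    }

def deep_inspect(packet, keywords):
    """Deep-packet inspection: also reads the payload itself,
    e.g. to flag users who have been browsing furniture sites."""
    summary = shallow_inspect(packet)
    payload = packet["payload"].lower()
    summary["flagged"] = [kw for kw in keywords if kw in payload]
    return summary

packet = {
    "src_ip": "203.0.113.7",
    "dst_ip": "198.51.100.2",
    "dst_port": 80,
    "payload": "GET /search?q=leather+sofa HTTP/1.1",
}

print(deep_inspect(packet, ["sofa", "torrent"]))
```

The point of the sketch is the asymmetry: the shallow pass is trivial, while the deep pass must touch every byte of every payload, which is exactly the work that faster processors have made economical.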

Motive: Third parties are placing pressure on ISPs to spy on users in unprecedented ways. Advertisers are willing to pay higher rates for behavioral advertising. For example, Ikea will pay more to place an ad in front of people who have recently been surfing furniture websites. To enable behavioral advertising, companies like NebuAd and Phorm have been trying to convince ISPs to collect user web-surfing data they do not collect today. Similarly, the copyrighted content industries seem willing to pay ISPs to detect, report, and possibly block the transfer of copyrighted works.

Because of these three factors, ISPs are scrutinizing more information–and different forms of information–than they ever have before. AT&T has begun to consider monitoring for copyright violations; Charter Communications signed up with NebuAd, sparking a firestorm of publicity and legislative interest which pushed Charter to abandon the deal; and a few British ISPs have begun to use Phorm’s services. I predict that these examples presage a coming storm of unprecedented, invasive ISP monitoring.

In the next post, I will compare the threat to privacy from ISP monitoring to the threat from other entities, in particular, Google and Microsoft.


How Not to Obtain Online Consent, or Why Panera Bread Owes Me Free Muffins


When I need to edit an article, I will sometimes park myself at a booth at the local Panera Bread, sipping the decent coffee, snacking on the beautiful (notice I didn’t say tasty) pastries, and using the free WiFi. Long ago, I noticed that Panera had made a stupid technological mistake that probably strips it of the right to manage its network lawfully.

Panera tries to extract consent from its users using what is known as a captive portal, the same method used by most hotel and airport WiFi network providers. When a Panera WiFi user first tries to connect to any website, Panera’s computers redirect her instead to its own web page with a link to its terms of service (ToS). Only when the user clicks “I agree” may she start surfing.
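The captive-portal flow can be sketched as a simple gatekeeper: any request from a client that has not yet agreed is redirected to the ToS page, and once the client clicks “I agree,” its address is whitelisted and traffic passes through. Here is a minimal Python sketch of that decision logic (the addresses and ToS URL are invented for illustration; a real portal hooks into the router layer):

```python
# Minimal sketch of a captive portal's decision logic.
# Client addresses and the ToS URL are invented for illustration.

TOS_URL = "http://portal.example/terms"

class CaptivePortal:
    def __init__(self):
        self.agreed = set()  # client addresses that clicked "I agree"

    def handle_request(self, client_addr, requested_url):
        if client_addr in self.agreed:
            return ("PASS", requested_url)  # let traffic through
        # Not yet agreed: redirect every request to the ToS page.
        return ("REDIRECT", TOS_URL)

    def accept_terms(self, client_addr):
        """Called when the client clicks 'I agree' on the portal page."""
        self.agreed.add(client_addr)

portal = CaptivePortal()
print(portal.handle_request("10.0.0.5", "http://news.example/"))
portal.accept_terms("10.0.0.5")
print(portal.handle_request("10.0.0.5", "http://news.example/"))
```

Note that everything legally significant happens in `accept_terms`: the click is the only evidence of assent, which is why what the portal page actually displays at that moment matters so much.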

Compared to some of the other methods Internet providers use for attempting to obtain consent, a captive portal deserves some praise. It is much more likely to be noticed and read than a ToS or privacy policy link buried on a home page (or, as the case may be, not even on the home page). It is better than the paper privacy policies my credit card companies send with their monthly bills, usually along with a half-dozen ads. Unlike either of these methods, a captive portal acts like a virtual stop sign–until you click “I agree,” you can go no further. (Of course, calling even a captive portal meaningful consent seems to stretch things if the ToS offered are dozens of pages long.)

But if Panera ever tried to enforce its WiFi ToS–say it got caught monitoring user communications and had to defend against a wiretapping lawsuit or say it was sued for banning a user suspected of downloading porn in violation of the ToS–a court should probably hold that its ToS are unenforceable. Panera has made a simple web design mistake that introduces doubt about what terms are being agreed to by its users.

Read More


The End of Privacy?

I’ve written an article for the September issue of Scientific American magazine called The End of Privacy? The article is available online here, with a slightly different title: Do Social Networks Bring the End of Privacy?

The entire issue is devoted to privacy, and there are some other really interesting articles. Here are links to the other articles in the issue:

Whitfield Diffie and Susan Landau, Internet Eavesdropping: A Brave New World of Wiretapping

Steven Ashley, Digital Surveillance: Tools of the Spy Trade

Katherine Albrecht, How RFID Tags Could Be Used to Track Unsuspecting People

Anil K. Jain and Sharath Pankanti, Beyond Fingerprinting: Is Biometrics the Best Bet for Fighting Identity Theft?

Mark A. Rothstein, Tougher Laws Needed to Protect Your Genetic Privacy

Simson L. Garfinkel, Data Fusion: The Ups and Downs of All-Encompassing Digital Profiles

Peter Brown, Privacy in an Age of Terabytes and Terror

Esther Dyson, How Loss of Privacy May Mean Loss of Security

Anna Lysyanskaya, Cryptography: How to Keep Your Secrets Safe


Justice Breyer’s Information Available on Limewire

It does not take much to have a security breach; just one person can facilitate it. In this case, someone at a high-end investment firm installed LimeWire at the office. According to the AP, the breach began at the end of last year and continued into June of this year. Justice Breyer’s birth date and Social Security number were part of the breach, and apparently around 2,000 other clients have also had their data shared on LimeWire.

Again, the fact of data leaks or breaches is not new. But given the high profile of the people involved in this one, there may be a movement to have laws passed about the problem. Remember, video rental records matter because of Robert Bork’s encounter with data privacy issues during his Supreme Court nomination. This data problem is different from Bork’s, so a legislative response, if it comes, will likely address identity theft. On the other hand, if senators, representatives, and White House staffers found that even their legal but perhaps interesting surfing habits were part of public knowledge and gossip, maybe the data collection and Internet monitoring that some think is necessary would be seen as a threat. One paper that may be of interest on this idea is Neil Richards’s Intellectual Privacy.


The Privacy Paradox

Over at the New York Times’s Bits blog, Brad Stone writes:

Researchers call this the privacy paradox: normally sane people have inconsistent and contradictory impulses and opinions when it comes to safeguarding their own private information.

Now some new research is beginning to document and quantify the privacy paradox. In a talk presented at the Security and Human Behavior Workshop here in Boston this week, Carnegie Mellon behavioral economist George Loewenstein previewed a soon-to-be-published research study he conducted with two colleagues.

Their findings: Our privacy principles are wobbly. We are more or less likely to open up depending on who is asking, how they ask and in what context.

In one interesting experiment, students who were provided strong promises of confidentiality were less forthcoming about personal details than students who weren’t provided such promises. The researchers explained this behavior as based on the fact that when an issue is raised in people’s minds, they think about it more and are likely to be more concerned about it. Ironically, promising people that their privacy will be protected actually makes them think more about the dangers of their privacy being breached.

There is indeed a growing body of research that examines why people frequently state in polls that they value privacy highly yet in practice trade their privacy away for trinkets or minor increases in convenience. The work of Professor Alessandro Acquisti explores some of the reasons why people might not make rational decisions regarding privacy despite their desire to protect it.

I have also written about this in my new book, UNDERSTANDING PRIVACY (Harvard University Press, May 2008). In particular, I argue that looking at expectations of privacy is the wrong approach toward understanding privacy:

If a more empirical approach to determining reasonable expectations of privacy were employed, how should the analysis be carried out? Reasonable expectations could be established by taking a poll. But there are several difficulties with such an approach. First, should the poll be local or national or worldwide? Different communities will likely differ in their expectations of privacy. Second, people’s stated preferences often differ from their actions. Economists Alessandro Acquisti and Jens Grossklags observe that “recent surveys, anecdotal evidence, and experiments have highlighted an apparent dichotomy between privacy attitudes and actual behavior. . . . [I]ndividuals are willing to trade privacy for convenience or to bargain the release of personal information in exchange for relatively small rewards.” This disjunction leads Strahilevitz to argue that what people say means less than what they do. “Behavioral data,” he contends, “is thus preferable to survey data in privacy.”

But care must be used in interpreting behavior because several factors can affect people’s decisions about privacy. Acquisti and Grossklags point to the problem of information asymmetries, when people lack adequate knowledge of how their personal information will be used, and bounded rationality, when people have difficulty applying what they know to complex situations. Some privacy problems shape behavior. People often surrender personal data to companies because they perceive that they do not have much choice. They might also do so because they lack knowledge about the potential future uses of the information. Part of the privacy problem in these cases involves people’s limited bargaining power respecting privacy and inability to assess the privacy risks. Thus looking at people’s behavior might present a skewed picture of societal expectations of privacy.


Do We Need an Internet Ed. Class?

While I was attending the excellent privacy conference Dan Solove and Chris Hoofnagle organized in D.C. a few days ago, it occurred to me that just as one takes driver’s ed. before being able to drive a car, it might make sense to have a required Internet education class in middle school. Driving is a key way people engage in the economy, and the Internet, especially email and social networking, is becoming as essential, if not more so. Given all the benefits and problems of the Internet, from meeting new people and peer production to unfortunate gossiping and dog-poop incidents, Internet ed. might fill a gap that became apparent as I listened to various speakers at the conference.

Read More


Is the Computer Fraud and Abuse Act Unconstitutionally Vague?

At the National Law Journal, attorney Nick Akerman (Dorsey & Whitney) contends that the Computer Fraud and Abuse Act (CFAA) indictment of Lori Drew (background about the case is here) is an appropriate interpretation of the statute:

While this may be the first prosecution under the CFAA for cyberbullying, the statute neatly fits the facts of this crime. Drew is charged with violating §§ 1030(a)(2)(C), (c)(2)(B)(2) of the CFAA, which make it a felony punishable up to five years imprisonment, if one “intentionally accesses a computer without authorization . . . , and thereby obtains . . . information from any protected computer if the conduct involved an interstate . . . communication” and “the offense was committed in furtherance of any . . . tortious act [in this case intentional infliction of emotional distress] in violation of the . . . laws . . . of any State.”

There is no question that the MySpace network is a “protected” computer as that term is defined by the statute. Indeed, “[e]very cell phone and cell tower is a ‘computer’ under this statute’s definition; so is every iPod, every wireless base station in the corner coffee shop, and many another gadget.” U.S. v. Mitra, 405 F.3d 492, 495 (7th Cir. 2005). There is also no question that a violation of MySpace’s TOS provides a valid predicate for proving that the defendant acted “without authorization.” What the commentators ignored in their critique of this indictment is that the “CFAA . . . is primarily a statute imposing limits on access and enhancing control by information providers.” EF Cultural Travel B.V. v. Zefer Corp., 318 F.3d 58, 63 (1st Cir. 2003). A company “can easily spell out explicitly what is forbidden.” Id. at 63. Thus, companies have the right to post what are in effect “No Trespassing” signs that can form the basis for a criminal prosecution.

If this interpretation of the law is correct, then the law is probably unconstitutionally vague. A vague law is one that either fails to provide the kind of notice that will enable ordinary people to understand what conduct it prohibits; or authorizes or encourages arbitrary and discriminatory enforcement. The CFAA, as construed by the prosecution in the Drew case, will probably be found vague because it authorizes or encourages arbitrary and discriminatory enforcement.

Suppose I put a notice on this post that says: “No attorneys may post a comment to this blog.” Suppose Nick Akerman comes to this site, sees this post, and writes a comment that is defamatory. Under his theory, he can be prosecuted for violating the CFAA. He has “trespassed” on this site. Moreover, if a blog has a policy that it will not tolerate “rude, uncivil, or off-topic comments,” then commenters who make such comments that are tortious (intentional infliction of emotional distress, public disclosure of private facts, false light, defamation, etc.) can be liable for a CFAA violation. Moreover, any use of a website that goes against whatever terms the operator of that site has set forth that constitutes a negligence tort is also criminal.

The problem here is that the CFAA’s applicability would be extremely broad — so broad that the cases likely to be prosecuted would be arbitrary. Since tort law is common law, and is very flexible, broad, and evolving, people would not have adequate notice about what conduct would be legal and not legal. There’s a reason why tort law is different from criminal law — we are willing to accept a lot more ambiguity and uncertainty in tort law than in criminal law, where the stakes involve potential imprisonment.

Moreover, Nick Akerman only focuses on the CFAA § 1030(c)(2)(B)(2), which makes it a felony to exceed authorized access if the offense was committed in furtherance of any tortious act.

The CFAA § 1030(a)(2)(C) makes it a criminal misdemeanor to “intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains . . . information from any protected computer if the conduct involved an interstate or foreign communication.” If I’m interpreting this correctly (and I don’t purport to be an expert on the CFAA), under the Drew prosecutor’s interpretation of the CFAA, any time a person violates a website’s terms of service and accesses any information from the site, there’s a criminal violation. That means that if I post on this blog a notice that says: “No attorneys may access any other parts of this blog other than the front page,” and an attorney accesses any other page on my blog, then there’s a CFAA violation. Could the law possibly be this broad? I think it would require a narrowing interpretation in order to avoid problems of unconstitutional vagueness.

The CFAA strikes me as a very poorly drafted statute. The Drew indictment demonstrates the problems with the law. Either courts should fix the CFAA interpretively by narrowing its scope, or else strike it down as unconstitutionally vague. But what clearly cannot stand is for the law to be interpreted as the Drew prosecutor seeks to interpret it.

Hat tip: Dan Slater at the WSJ Blog


My New Book, Understanding Privacy

I am very happy to announce the publication of my new book, UNDERSTANDING PRIVACY (Harvard University Press, May 2008). There has been a longstanding struggle to understand what “privacy” means and why it is valuable. Professor Arthur Miller once wrote that privacy is “exasperatingly vague and evanescent.” In this book, I aim to develop a clear and accessible theory of privacy, one that will provide useful guidance for law and policy. From the book jacket:

Privacy is one of the most important concepts of our time, yet it is also one of the most elusive. As rapidly changing technology makes information more and more available, scholars, activists, and policymakers have struggled to define privacy, with many conceding that the task is virtually impossible.

In this concise and lucid book, Daniel J. Solove offers a comprehensive overview of the difficulties involved in discussions of privacy and ultimately provides a provocative resolution. He argues that no single definition can be workable, but rather that there are multiple forms of privacy, related to one another by family resemblances. His theory bridges cultural differences and addresses historical changes in views on privacy. Drawing on a broad array of interdisciplinary sources, Solove sets forth a framework for understanding privacy that provides clear, practical guidance for engaging with relevant issues.

Understanding Privacy will be an essential introduction to long-standing debates and an invaluable resource for crafting laws and policies about surveillance, data mining, identity theft, state involvement in reproductive and marital decisions, and other pressing contemporary matters concerning privacy.

Here’s a brief summary of Understanding Privacy. Chapter 1 (available on SSRN) introduces the basic ideas of the book. Chapter 2 builds upon my article Conceptualizing Privacy, 90 Cal. L. Rev. 1087 (2002), surveying and critiquing existing theories of privacy. Chapter 3 contains an extensive discussion (mostly new material) explaining why I chose the approach toward theorizing privacy that I did, and why I rejected many other potential alternatives. It examines how a theory of privacy should account for cultural and historical variation yet avoid being too local in perspective. This chapter also explores why a theory of privacy should avoid being too general or too contextual. I draw significantly from historical examples to illustrate my points. I also discuss why a theory of privacy shouldn’t focus on the nature of the information, the individual’s preferences, or reasonable expectations of privacy. Chapter 4 consists of new material discussing the value of privacy. Chapter 5 builds on my article, A Taxonomy of Privacy, 154 U. Pa. L. Rev. 477 (2006). I’ve updated the taxonomy in the book, and I’ve added a lot of new material about how my theory of privacy interfaces not only with US law, but with the privacy law of many other countries. Finally, Chapter 6 consists of new material exploring the consequences and applications of my theory and examining the nature of privacy harms.

Understanding Privacy is much broader than The Digital Person and The Future of Reputation. Whereas these other two books examined specific privacy problems, Understanding Privacy is a general theory of privacy, and I hope it will be relevant and useful in a wide range of issues and debates.

For more information about the book, please visit its website.


The Digital Person Free Online!

Last month, Yale University Press allowed me to put my book, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet online for free. The experiment has gone quite well. The book’s website received a big bump in traffic, with many people downloading one or more chapters. The book’s sales picked up for several weeks after it was placed online for free. Sales have now returned to about the same level as before the book went online.

I’m delighted to announce that NYU Press has allowed me to put my book, The Digital Person: Technology and Privacy in the Information Age (NYU Press, 2004) online for free.

Here’s a brief synopsis of The Digital Person from the book jacket:

Seven days a week, twenty-four hours a day, electronic databases are compiling information about you. As you surf the Internet, an unprecedented amount of your personal information is being recorded and preserved forever in the digital minds of computers. These databases create a profile of activities, interests, and preferences used to investigate backgrounds, check credit, market products, and make a wide variety of decisions affecting our lives. The creation and use of these databases–which Daniel J. Solove calls “digital dossiers”–has thus far gone largely unchecked. In this startling account of new technologies for gathering and using personal data, Solove explains why digital dossiers pose a grave threat to our privacy.

Digital dossiers impact many aspects of our lives. For example, they increase our vulnerability to identity theft, a serious crime that has been escalating at an alarming rate. Moreover, since September 11th, the government has been tapping into vast stores of information collected by businesses and using it to profile people for criminal or terrorist activity. In THE DIGITAL PERSON, Solove engages in a fascinating discussion of timely privacy issues such as spyware, web bugs, data mining, the USA-Patriot Act, and airline passenger profiling.

THE DIGITAL PERSON not only explores these problems, but provides a compelling account of how we can respond to them. Using a wide variety of sources, including history, philosophy, and literature, Solove sets forth a new understanding of what privacy is, one that is appropriate for the new challenges of the Information Age. Solove recommends how the law can be reformed to simultaneously protect our privacy and allow us to enjoy the benefits of our increasingly digital world.

Book reviews are collected here.