Category: Privacy (Consumer Privacy)


The End of Privacy?

I’ve written an article for the September issue of Scientific American magazine called The End of Privacy? The article is available online here, with a slightly different title: Do Social Networks Bring the End of Privacy?

The entire issue is devoted to privacy, and there are some other really interesting articles. Here are links to the other articles in the issue:

Whitfield Diffie and Susan Landau, Internet Eavesdropping: A Brave New World of Wiretapping

Steven Ashley, Digital Surveillance: Tools of the Spy Trade

Katherine Albrecht, How RFID Tags Could Be Used to Track Unsuspecting People

Anil K. Jain and Sharath Pankanti, Beyond Fingerprinting: Is Biometrics the Best Bet for Fighting Identity Theft?

Mark A. Rothstein, Tougher Laws Needed to Protect Your Genetic Privacy

Simson L. Garfinkel, Data Fusion: The Ups and Downs of All-Encompassing Digital Profiles

Peter Brown, Privacy in an Age of Terabytes and Terror

Esther Dyson, How Loss of Privacy May Mean Loss of Security

Anna Lysyanskaya, Cryptography: How to Keep Your Secrets Safe


Justice Breyer’s Information Available on LimeWire

It does not take much to cause a security breach; a single person can set one in motion. In this case, someone at a high-end investment firm installed LimeWire at the office. According to the AP, the breach began at the end of last year and continued until June of this year. Breyer’s birth date and Social Security number were part of the breach. Apparently around 2,000 other clients have also had their data shared on LimeWire.

Again, the fact of data leaks and breaches is not new. But given the high profile of the people involved in this one, there may be a movement to pass laws addressing the problem. Recall that video rental records matter legally because Robert Bork’s rental history surfaced during his Supreme Court nomination, prompting the Video Privacy Protection Act. This data problem is different from Bork’s, so any legislative response will likely address identity theft. On the other hand, if senators, representatives, and White House staffers found that even their legal but perhaps interesting surfing habits were part of public knowledge and gossip, maybe the data collection and Internet monitoring that some think is necessary would be seen as a threat. One paper that may be of interest on this idea is Neil Richards’s Intellectual Privacy.


The Privacy Paradox

Over at the New York Times’s Bits blog, Brad Stone writes:

Researchers call this the privacy paradox: normally sane people have inconsistent and contradictory impulses and opinions when it comes to safeguarding their own private information.

Now some new research is beginning to document and quantify the privacy paradox. In a talk presented at the Security and Human Behavior Workshop here in Boston this week, Carnegie Mellon behavioral economist George Loewenstein previewed a soon-to-be-published research study he conducted with two colleagues.

Their findings: Our privacy principles are wobbly. We are more or less likely to open up depending on who is asking, how they ask and in what context.

In one interesting experiment, students who were provided strong promises of confidentiality were less forthcoming about personal details than students who weren’t provided such promises. The researchers explained this behavior as based on the fact that when an issue is raised in people’s minds, they think about it more and are likely to be more concerned about it. Ironically, promising people that their privacy will be protected actually makes them think more about the dangers of their privacy being breached.

There is indeed a growing body of research that examines why people frequently state in polls that they value privacy highly yet in practice trade their privacy away for trinkets or minor increases in convenience. The work of Professor Alessandro Acquisti explores some of the reasons why people might not make rational decisions regarding privacy despite their desire to protect it.

I have also written about this in my new book, UNDERSTANDING PRIVACY (Harvard University Press, May 2008). In particular, I argue that looking at expectations of privacy is the wrong approach toward understanding privacy:

If a more empirical approach to determining reasonable expectations of privacy were employed, how should the analysis be carried out? Reasonable expectations could be established by taking a poll. But there are several difficulties with such an approach. First, should the poll be local or national or worldwide? Different communities will likely differ in their expectations of privacy. Second, people’s stated preferences often differ from their actions. Economists Alessandro Acquisti and Jens Grossklags observe that “recent surveys, anecdotal evidence, and experiments have highlighted an apparent dichotomy between privacy attitudes and actual behavior. . . . [I]ndividuals are willing to trade privacy for convenience or to bargain the release of personal information in exchange for relatively small rewards.” This disjunction leads Lior Strahilevitz to argue that what people say means less than what they do. “Behavioral data,” he contends, “is thus preferable to survey data in privacy.”

But care must be used in interpreting behavior because several factors can affect people’s decisions about privacy. Acquisti and Grossklags point to the problem of information asymmetries, when people lack adequate knowledge of how their personal information will be used, and bounded rationality, when people have difficulty applying what they know to complex situations. Some privacy problems shape behavior. People often surrender personal data to companies because they perceive that they do not have much choice. They might also do so because they lack knowledge about the potential future uses of the information. Part of the privacy problem in these cases involves people’s limited bargaining power respecting privacy and inability to assess the privacy risks. Thus looking at people’s behavior might present a skewed picture of societal expectations of privacy.


Do We Need an Internet Ed. Class?

While I was attending the excellent privacy conference Dan Solove and Chris Hoofnagle organized in D.C. a few days ago, it occurred to me that, just as one takes driver’s ed. before being able to drive a car, it might make sense to have a required Internet education class in middle school. Driving is a key way people engage in the economy, and the Internet, especially email and social networking, is becoming just as essential, if not more so. Given all the benefits and problems of the Internet, from meeting new people and peer production to unfortunate gossip and incidents like the “dog poop girl,” Internet Ed. might fill a gap that became apparent as I listened to various speakers at the conference.



Is the Computer Fraud and Abuse Act Unconstitutionally Vague?

At the National Law Journal, attorney Nick Akerman (Dorsey &amp; Whitney) contends that the Computer Fraud and Abuse Act (CFAA) indictment of Lori Drew (background about the case is here) rests on an appropriate interpretation of the statute:

While this may be the first prosecution under the CFAA for cyberbullying, the statute neatly fits the facts of this crime. Drew is charged with violating §§ 1030(a)(2)(C), (c)(2)(B)(2) of the CFAA, which make it a felony punishable up to five years imprisonment, if one “intentionally accesses a computer without authorization . . . , and thereby obtains . . . information from any protected computer if the conduct involved an interstate . . . communication” and “the offense was committed in furtherance of any . . . tortious act [in this case intentional infliction of emotional distress] in violation of the . . . laws . . . of any State.”

There is no question that the MySpace network is a “protected” computer as that term is defined by the statute. Indeed, “[e]very cell phone and cell tower is a ‘computer’ under this statute’s definition; so is every iPod, every wireless base station in the corner coffee shop, and many another gadget.” U.S. v. Mitra, 405 F.3d 492, 495 (7th Cir. 2005). There is also no question that a violation of MySpace’s TOS provides a valid predicate for proving that the defendant acted “without authorization.” What the commentators ignored in their critique of this indictment is that the “CFAA . . . is primarily a statute imposing limits on access and enhancing control by information providers.” EF Cultural Travel B.V. v. Zefer Corp., 318 F.3d 58, 63 (1st Cir. 2003). A company “can easily spell out explicitly what is forbidden.” Id. at 63. Thus, companies have the right to post what are in effect “No Trespassing” signs that can form the basis for a criminal prosecution.

If this interpretation of the law is correct, then the law is probably unconstitutionally vague. A vague law is one that either fails to provide the kind of notice that enables ordinary people to understand what conduct it prohibits, or authorizes or encourages arbitrary and discriminatory enforcement. The CFAA, as construed by the prosecution in the Drew case, will probably be found vague because it authorizes or encourages arbitrary and discriminatory enforcement.

Suppose I put a notice on this post that says: “No attorneys may post a comment to this blog.” Suppose Nick Akerman comes to this site, sees this post, and writes a comment that is defamatory. Under his theory, he can be prosecuted for violating the CFAA, since he has “trespassed” on this site. Moreover, if a blog has a policy that it will not tolerate “rude, uncivil, or off-topic comments,” then commenters who make such comments that are tortious (intentional infliction of emotional distress, public disclosure of private facts, false light, defamation, etc.) can be liable for a CFAA violation. Indeed, any use of a website that violates whatever terms the site’s operator has set forth, and that also constitutes a tort (even mere negligence), would be criminal.

The problem here is that the CFAA’s applicability would be extremely broad — so broad that the cases likely to be prosecuted would be arbitrary. Since tort law is common law, and is very flexible, broad, and evolving, people would not have adequate notice about what conduct would be legal and not legal. There’s a reason why tort law is different from criminal law — we are willing to accept a lot more ambiguity and uncertainty in tort law than in criminal law, where the stakes involve potential imprisonment.

Moreover, Nick Akerman focuses only on CFAA § 1030(c)(2)(B)(2), the provision that elevates the offense to a felony when it is committed in furtherance of a tortious act.

The CFAA § 1030(a)(2)(C) makes it a misdemeanor to “intentionally access[] a computer without authorization or exceed[] authorized access, and thereby obtain[] . . . information from any protected computer if the conduct involved an interstate or foreign communication.” If I’m interpreting this correctly (and I don’t purport to be an expert on the CFAA), under the Drew prosecutor’s interpretation, any time a person violates a website’s terms of service and accesses any information from the site, there is a criminal violation. That means that if I post on this blog a notice that says: “No attorneys may access any other parts of this blog other than the front page,” and an attorney accesses any other page on my blog, then there’s a CFAA violation. Could the law possibly be this broad? I think it would require a narrowing interpretation in order to avoid problems of unconstitutional vagueness.

The CFAA strikes me as a very poorly drafted statute. The Drew indictment demonstrates the problems with the law. Either courts should fix the CFAA interpretively by narrowing its scope, or else strike it down as unconstitutionally vague. But what clearly cannot stand is for the law to be interpreted as the Drew prosecutor seeks to interpret it.

Hat tip: Dan Slater at the WSJ Blog


My New Book, Understanding Privacy

I am very happy to announce the publication of my new book, UNDERSTANDING PRIVACY (Harvard University Press, May 2008). There has been a longstanding struggle to understand what “privacy” means and why it is valuable. Professor Arthur Miller once wrote that privacy is “exasperatingly vague and evanescent.” In this book, I aim to develop a clear and accessible theory of privacy, one that will provide useful guidance for law and policy. From the book jacket:

Privacy is one of the most important concepts of our time, yet it is also one of the most elusive. As rapidly changing technology makes information more and more available, scholars, activists, and policymakers have struggled to define privacy, with many conceding that the task is virtually impossible.

In this concise and lucid book, Daniel J. Solove offers a comprehensive overview of the difficulties involved in discussions of privacy and ultimately provides a provocative resolution. He argues that no single definition can be workable, but rather that there are multiple forms of privacy, related to one another by family resemblances. His theory bridges cultural differences and addresses historical changes in views on privacy. Drawing on a broad array of interdisciplinary sources, Solove sets forth a framework for understanding privacy that provides clear, practical guidance for engaging with relevant issues.

Understanding Privacy will be an essential introduction to long-standing debates and an invaluable resource for crafting laws and policies about surveillance, data mining, identity theft, state involvement in reproductive and marital decisions, and other pressing contemporary matters concerning privacy.

Here’s a brief summary of Understanding Privacy. Chapter 1 (available on SSRN) introduces the basic ideas of the book. Chapter 2 builds upon my article Conceptualizing Privacy, 90 Cal. L. Rev. 1087 (2002), surveying and critiquing existing theories of privacy. Chapter 3 contains an extensive discussion (mostly new material) explaining why I chose the approach toward theorizing privacy that I did, and why I rejected many other potential alternatives. It examines how a theory of privacy should account for cultural and historical variation yet avoid being too local in perspective. This chapter also explores why a theory of privacy should avoid being too general or too contextual. I draw significantly from historical examples to illustrate my points. I also discuss why a theory of privacy shouldn’t focus on the nature of the information, the individual’s preferences, or reasonable expectations of privacy. Chapter 4 consists of new material discussing the value of privacy. Chapter 5 builds on my article A Taxonomy of Privacy, 154 U. Pa. L. Rev. 477 (2006). I’ve updated the taxonomy in the book, and I’ve added a lot of new material about how my theory of privacy interfaces not only with US law but also with the privacy law of many other countries. Finally, Chapter 6 consists of new material exploring the consequences and applications of my theory and examining the nature of privacy harms.

Understanding Privacy is much broader than The Digital Person and The Future of Reputation. Whereas these other two books examined specific privacy problems, Understanding Privacy is a general theory of privacy, and I hope it will be relevant and useful in a wide range of issues and debates.

For more information about the book, please visit its website.


The Digital Person Free Online!

Last month, Yale University Press allowed me to put my book, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet online for free. The experiment has gone quite well. The book’s website received a big bump in traffic, with many people downloading one or more chapters. The book’s sales picked up for several weeks after it was placed online for free. Sales have now returned to about the same level as before the book went online.

I’m delighted to announce that NYU Press has allowed me to put my book, The Digital Person: Technology and Privacy in the Information Age (NYU Press, 2004) online for free.

Here’s a brief synopsis of The Digital Person from the book jacket:

Seven days a week, twenty-four hours a day, electronic databases are compiling information about you. As you surf the Internet, an unprecedented amount of your personal information is being recorded and preserved forever in the digital minds of computers. These databases create a profile of activities, interests, and preferences used to investigate backgrounds, check credit, market products, and make a wide variety of decisions affecting our lives. The creation and use of these databases–which Daniel J. Solove calls “digital dossiers”–has thus far gone largely unchecked. In this startling account of new technologies for gathering and using personal data, Solove explains why digital dossiers pose a grave threat to our privacy.

Digital dossiers impact many aspects of our lives. For example, they increase our vulnerability to identity theft, a serious crime that has been escalating at an alarming rate. Moreover, since September 11th, the government has been tapping into vast stores of information collected by businesses and using it to profile people for criminal or terrorist activity. In THE DIGITAL PERSON, Solove engages in a fascinating discussion of timely privacy issues such as spyware, web bugs, data mining, the USA-Patriot Act, and airline passenger profiling.

THE DIGITAL PERSON not only explores these problems, but provides a compelling account of how we can respond to them. Using a wide variety of sources, including history, philosophy, and literature, Solove sets forth a new understanding of what privacy is, one that is appropriate for the new challenges of the Information Age. Solove recommends how the law can be reformed to simultaneously protect our privacy and allow us to enjoy the benefits of our increasingly digital world.

Book reviews are collected here.


Ranking Banks Based on Incidents of Identity Theft

Chris Hoofnagle just released a new report entitled Measuring Identity Theft at Top Banks. In it, he ranks the top 25 US banks according to their relative incidence of identity theft, based on consumer complaints to the FTC in which the victim identified an institution.

In a previous paper called Identity Theft: Making the Known Unknowns Known, Chris argued that there should be mandatory public disclosure of identity theft statistics by banks. Since financial institutions don’t currently release such data, we have no idea which institutions are more effective than others at reducing identity theft.

For his new paper, Chris made a FOIA request to the FTC last year for two years of consumer complaint data. The FTC found it too burdensome to release two years’ worth of data, so the request was limited to three randomly chosen months in 2006: January, March, and September. Those months covered 88,560 complaints, in 46,262 of which victims identified an institution by name. Chris’s paper is based on an analysis of this data.

From the abstract:

There is no reliable way for consumers, regulators, and businesses to assess the relative incidence of identity fraud at major financial institutions. This lack of information prevents more vigorous competition among institutions to protect accountholders from identity theft. As part of a multiple strategy approach to obtaining more actionable data on identity theft, the Freedom of Information Act was used to obtain complaint data submitted by victims in 2006 to the Federal Trade Commission. This complaint data identifies the institution where impostors established fraudulent accounts or affected existing accounts in the name of the victim. The data show that some institutions have a far greater incidence of identity theft than others. The data further show that the major telecommunications companies had numerous identity theft events, but a metric is lacking to compare this industry with the financial institutions.

This is a first attempt to meaningfully compare institutions on their performance in avoiding identity theft. This analysis faces several challenges that are described in the methods section. The author welcomes constructive criticism, suggestions, and comments in an effort to shine light on the identity theft problem.

This is a fantastic endeavor, as more information on how institutions are protecting against identity theft is sorely needed. Chris admits that his study has some limitations and could be improved if financial institutions supplied more information to the public. But based on the information Chris could obtain, the report is quite revealing. Hopefully, it will spark more transparency from financial institutions in the future.

Here is one of the many charts in the paper, showing incidents of identity theft relative to the size of each institution.
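To make the normalization concrete, here is a minimal sketch of the kind of calculation such a ranking involves. The size proxy (deposits) and all of the figures below are illustrative assumptions on my part, not numbers from the report; see the report itself for the actual metric and data.

```python
# Illustrative sketch: rank institutions by identity theft complaints
# relative to size. The size proxy (deposits) and all figures are
# hypothetical -- consult Hoofnagle's report for the actual method.

complaints = {          # FTC complaints naming each institution (made up)
    "Bank A": 1200,
    "Bank B": 450,
    "Bank C": 300,
}

deposits_billions = {   # size proxy in billions of dollars (made up)
    "Bank A": 700.0,
    "Bank B": 90.0,
    "Bank C": 250.0,
}

# A raw complaint count favors small banks, so normalize by size
# before comparing: complaints per billion dollars of deposits.
rates = {
    bank: complaints[bank] / deposits_billions[bank]
    for bank in complaints
}

for bank, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{bank}: {rate:.2f} complaints per $1B in deposits")
```

On these made-up numbers, Bank B would rank worst despite having the fewest complaints in absolute terms, which is exactly why a relative metric matters.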



Coming Back from the Dead

Lazarus had it easy. Not so for Laura Todd, who has been trying to come back from the dead for nearly a decade. According to WSMV News in Nashville:

According to government paperwork, Laura Todd has been dead off and on for eight years, and Todd said there’s no end to the complications the situation creates.

“One time when I (was) ruled dead, they canceled my health insurance because it got that far,” she said.

Todd’s struggle started with a typo at the Social Security Administration. She said the government has assured her since the problem that they have deleted her death record, but she said the problems keep cropping up.

On Wednesday, the IRS once again rejected her electronic tax return. She said she’s gone through it before.

“I will not be eligible for my refund. I’m not eligible for my rebate. I mean, I can’t do anything with it,” she said.

Channel 4’s Nancy Amons first reported about Todd’s ordeal last week, but Amons has since found out more about how common the problem is.

According to a government audit, Social Security had to resurrect more than 23,000 people in a period of less than two years. The number is the approximate equivalent to the population of Brentwood.

The audit said the lack of documentation in the Social Security computer makes it impossible for the government’s auditors to determine if the people are dead or alive.

But some of those who are alive have found more complications after their resurrection.

Illinois resident Jay Liebenow was also declared dead. He said Todd is now more vulnerable to identity theft because after someone dies, Social Security releases that person’s personal information on computer discs. He said the information is sold to anyone who wants it, like the Web site . . .

One of the problems with modern recordkeeping is that although computers make things more efficient, they compound the effects that errors have on people’s lives. The difficulty is that the law currently does not afford people sufficient power to clean up mistakes in their records. Because information is so readily transferred between entities, an error corrected in one database has often already migrated to another database before the correction. The error doesn’t die. Instead, you do.

Responsibility should be placed on every entity that maintains records to ensure that information is correct and that errors are promptly fixed. Moreover, when information is shared with others, the entity sharing the information should have a duty to inform the recipients of any error, and those receiving the data should have a duty to check the source for corrections.
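As a toy illustration of how such duties might work mechanically, here is a sketch of a record source that remembers who received each record and pushes corrections downstream. All class and method names are hypothetical; this models the proposed policy, not any real government system.

```python
# Toy sketch of the correction-propagation duty described above.
# All names are hypothetical; this models the policy, not a real system.

class Recipient:
    def __init__(self, name):
        self.name = name
        self.records = {}

    def receive(self, record_id, data):
        self.records[record_id] = data

    def receive_correction(self, record_id, data):
        # Duty on the recipient: apply corrections pushed by the source.
        self.records[record_id] = data
        print(f"{self.name}: corrected record {record_id}")

class Source:
    def __init__(self):
        self.records = {}
        self.shared_with = {}   # record_id -> recipients who got a copy

    def share(self, record_id, recipient):
        recipient.receive(record_id, self.records[record_id])
        self.shared_with.setdefault(record_id, []).append(recipient)

    def correct(self, record_id, data):
        # Duty on the sharer: fix the record AND notify every recipient,
        # so the error does not keep migrating between databases.
        self.records[record_id] = data
        for recipient in self.shared_with.get(record_id, []):
            recipient.receive_correction(record_id, data)

ssa = Source()
ssa.records["todd"] = {"status": "deceased"}   # the original typo
ssa.share("todd", Recipient("IRS"))
ssa.share("todd", Recipient("Insurer"))
ssa.correct("todd", {"status": "alive"})       # correction propagates
```

The point of the sketch is the `shared_with` ledger: without a record of who received the data, the sharer cannot discharge any duty to propagate corrections, which is roughly Todd’s predicament.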

Right now, we’re living in a bureaucratic data hell, and that’s because there aren’t sufficient incentives for entities to be careful with the records they keep about people.

Image: The Resurrection of Lazarus by Vincent van Gogh, 1889-90, from Wikimedia Commons.


Facebook Applications: Another Privacy Concern

Recently, I’ve been complaining about Facebook’s mishaps regarding privacy. Back in 2006, Facebook sparked the ire of over 700,000 members when it launched News Feed. In 2007, Facebook launched Beacon and Social Ads, sparking new privacy outcries. An uprising of Facebook users prompted Facebook to change its policies regarding Beacon. For more about Facebook’s recent privacy issues, see my post here.

But that’s not all. Over at CNET, Chris Soghoian reports on some severe privacy concerns with Facebook applications. An application (or “app” for short) is a program created by a third party that adds features to a user’s profile. These apps have become quite popular with Facebook users, but they come with some very serious potential dangers. Soghoian writes:

[A] new study suggests there may be a bigger problem with the applications. Many are given access to far more personal data than they need to in order to run, including data on users who never even signed up for the application. Not only does Facebook enable this, but it does little to warn users that it is even happening, and of the risk that a rogue application developer can pose. . . .

In order to install an application, a Facebook user must first agree to “allow this application to…know who I am and access my information.” Users not willing to permit the application access to all kinds of data from their profile cannot install it onto their Facebook page.

What kind of information does Facebook give the application developer access to? Practically everything. . . .

The applications don’t actually run on Facebook’s servers, but on servers owned and operated by the application developers. Whenever a Facebook user’s profile is displayed, the application servers contact Facebook, request the user’s private data, process it, and send back whatever content will be displayed to the user. As part of its terms of service, Facebook makes the developers promise to throw away any data they received from Facebook after the application content has been sent back for display to the user.

So when you use a third-party application, you must put your trust in that third party to follow Facebook’s rules in good faith. In other words, Facebook users run applications at their own risk.
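Roughly, the round trip Soghoian describes looks like the following sketch, written from the application server’s point of view. The endpoint, parameter, and field names here are hypothetical stand-ins, not Facebook’s actual API.

```python
# Rough sketch of the data flow Soghoian describes, as seen from a
# third-party application server. Endpoint and field names are
# hypothetical stand-ins, not Facebook's actual API.

import requests  # third-party HTTP library: pip install requests

PLATFORM_URL = "https://api.social-network.example/profile"  # hypothetical

def render_app_box(user_id, api_key):
    # 1. When a profile is viewed, the platform hands the app server a
    #    user ID, and the app server requests that user's profile data.
    resp = requests.get(PLATFORM_URL, params={"uid": user_id, "key": api_key})
    profile = resp.json()

    # 2. The app processes the data on ITS OWN servers; the platform
    #    cannot see or enforce what happens here. The terms of service
    #    merely oblige the developer to discard the data afterward.
    html = f"<p>Hello, {profile['name']}! Your horoscope awaits.</p>"

    # 3. Only the rendered content goes back for display. Whether the raw
    #    profile data was actually thrown away is taken entirely on trust.
    return html
```

Nothing in this flow technically prevents step 2 from logging or reselling the data, which is the crux of the concern.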

But what if an application is created by some hacker in Russia? Or is designed by a creepy child molester to harvest people’s personal information? Should Facebook be doing more to protect users against the bad-apple application developers?

Soghoian notes that in many cases, applications are given access to far more personal data than they actually need in order to function.
