

Differential Pricing and Privacy: Good, Bad, or Otherwise?

The vast and ever-increasing collection of information about consumers by search engines, advertisers, data brokers, web merchants, and myriad other online and offline companies raises many concerns. A website that stores (and reads) your emails, records every search you make, knows what addresses you look for on its maps, and holds your documents may know more about you than any other single institution does, perhaps even more than your family members.

Imagine if your email provider reads your email – or some other data accumulator reads your tweets or social network page – and tells the airlines that you are going to a family funeral across the country. Suddenly, you find that the airlines will only offer you seats at a very high price. Think that you can hide your identity by searching before you sign in to buy? Doubtful. Web trackers can likely identify you through IP addresses, cookies, or other tricks invisible to most users.

One of the concerns about this data collection is differential or discriminatory pricing. Consumer advocates and others worry that merchants will use personal information to determine how much each individual consumer is willing to pay for something. That consumer then receives an individual price based on that consumer’s interest, need, income, buying patterns, and other factors. The next consumer pays a different price.

What’s wrong with a merchant charging one consumer a different price than another? This turns out to be a surprisingly complicated question to answer.

Economists call the gap between what consumers are willing to pay and the market price the consumer surplus. If consumers lived in the economist’s hypothetical world of many buyers, many sellers, and a fair and transparent marketplace, they would expect to find prices based on the marginal cost of production, leaving lots of consumer surplus. Differential pricing is a merchant’s dream: each customer pays a price based on willingness to pay rather than a standard price. Differential pricing could eliminate the consumer surplus entirely.
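To make that arithmetic concrete, here is a minimal sketch in Python; the willingness-to-pay figures are hypothetical, invented purely for illustration:

```python
# Hypothetical willingness-to-pay for five consumers (illustrative numbers only).
willingness_to_pay = [20.00, 16.00, 14.00, 13.50, 13.00]
uniform_price = 12.99  # a single posted price, as in the offline examples below

# Under a uniform price, each buyer keeps the gap between what they would
# have paid and what they actually paid: the consumer surplus.
uniform_surplus = sum(wtp - uniform_price for wtp in willingness_to_pay)

# Under perfect differential pricing, the merchant charges each consumer
# exactly their willingness to pay, so the surplus collapses to zero.
differential_surplus = sum(wtp - wtp for wtp in willingness_to_pay)

print(f"Surplus at a uniform price: ${uniform_surplus:.2f}")                       # $11.55
print(f"Surplus under perfect differential pricing: ${differential_surplus:.2f}")  # $0.00
```

The entire $11.55 that buyers would have kept under a single price is transferred to the merchant once each price is tailored to each buyer.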

In the offline world, a merchant typically sets a single price for all consumers. The book is $12.99 to anyone who wants to buy it in the bookstore. Gasoline is $3.25 a gallon no matter how low a car’s gas tank is or how much the car cost.

In reality, things aren’t that simple in the offline world. The bookstore offers consumers a frequent-shopper card (sometimes free, sometimes paid) with a discount on all purchases. The consumer with the card pays less than a consumer without one. The gas station offers a discount on Tuesdays because that’s a slow day. The movie theatre offers lower prices early in the day and higher prices in prime time. Many sellers offer a discount to seniors.



Could Revenge Porn Victims Seek Civil Liability Against Hunter Moore?

Suppose that former revenge porn operator Hunter Moore is convicted of federal crimes of conspiracy to engage in computer hacking. Could individuals whose nude photos appeared on his site next to their home addresses and screenshots of their Facebook profiles sue Moore for intentional infliction of emotional distress and public disclosure of private fact? Probably not, but it’s worth exploring the issue.

The closest case law involves civil penalties provided for under federal criminal law. In M.A. v. Village Voice, a federal district court judge found that the defendant enjoyed Section 230 immunity from civil penalties under the child trafficking statute, 18 U.S.C. 2255. Section 2255 allows victims of child trafficking to recover damages from those who committed or profited from the crimes against them. It provides that “[a]ny person who, while a minor, was a victim of a violation of [criminal statutes concerning child trafficking] and who suffers personal injury as a result of such violation may sue” and “recover actual damages such person sustained.” The representatives of a victim of child trafficking argued that Section 230 immunity was inapplicable because the defendant had profited from the plaintiff’s victimization in violation of Section 2255. As the court held, however, Section 2255 is a “civil damages” provision of Title 18, not federal criminal law.

The only remaining question is whether Moore materially contributed to the contested content–nude photos and Facebook screenshots. If so, he could be found liable as a co-developer of content that often was tantamount to cyber stalking. Of course, the question of liability would remain. Just because a site operator does not enjoy immunity from liability does not mean he would be strictly liable for the tort of intentional infliction of emotional distress, for instance. The question would be whether he intentionally inflicted emotional distress on particular individuals. Recall that Moore boasted to the press that the more embarrassing and destructive the material, the more money he made. When a reporter told him that revenge porn had driven people to commit suicide, Moore said that he did not want anybody to die, but if it happened, he would be grateful for the publicity and advertising revenue it would generate: “Thank you for the money . . . from all of the traffic, Googling, redirects, and press.” Earlier this year, Moore told Betabeat’s Jessica Roy that he was relaunching his site, featuring not just screenshots of people’s Facebook accounts, but their home addresses. “We’re gonna introduce the mapping stuff so you can stalk people,” he told Roy. When talking to Forbes’s Kashmir Hill, Moore backed off his statement, claiming to be drunk, but he had tweeted, “I’m putting people’s house info with google earth directions. Life will be amazing.”

More broadly, sites that principally host revenge porn are making a mockery of Section 230. As Citizen Media Law Project’s Sam Bayard explains, a site operator can enjoy the protection of Section 230 while “building a whole business around people saying nasty things about others, and . . . affirmatively choosing not to track user information that would make it possible for an injured person to go after the person directly responsible.” In my book Hate Crimes in Cyberspace, I explore the possibility of Section 230 reform to ensure that the worst actors don’t enjoy immunity. It’s certainly a perverse result that the “Good Samaritan” provision of the Communications Decency Act immunizes from liability sites that solicit and principally host revenge porn and other forms of cyber stalking. More to come in August, when Harvard University Press publishes the book.



Some Thoughts on Section 230 and Recent Criminal Arrests

We’ve devoted considerable attention on our blog to Section 230 of the Communications Decency Act, which immunizes online service providers/hosts from liability for user-generated content. Site operators are protected from liability even if they know (or should know) that user-generated content contains defamation, privacy invasions, intentional infliction of emotional distress, civil rights violations, and state criminal activity. Providing a safe harbor for ISPs, search engines, and social networks is a good thing. If communication conduits like ISPs did not enjoy Section 230 immunity, they would surely censor much valuable online content to avoid publisher liability. The same is true of search engines that index the vast universe of online content and produce relevant information to users in seconds and, for that matter, social media providers that host millions, and sometimes billions, of users. Without Section 230, search engines like Google and Bing and social media providers like Yelp, Trip Advisor, Facebook, YouTube, and Twitter might not exist. The fear of publisher liability would have inhibited their growth. For that reason, Congress reaffirmed Section 230’s importance in the SPEECH Act of 2010, which requires U.S. courts to apply the First Amendment and Section 230 in assessing foreign defamation judgments.

In the past few months, prosecutors have arrested notorious revenge porn site operators Hunter Moore, Kevin Bollaert, and Casey Meyering. Those arrests have raised the question: what about Section 230? Hunter Moore’s arrest is the least controversial. Although Section 230 immunity is sweeping, it isn’t absolute. It exempts from its reach federal criminal law, intellectual property law, and the Electronic Communications Privacy Act. As Section 230(e) provides, the statute has “[n]o effect” on “any [f]ederal criminal statute” and does not “limit or expand any law pertaining to intellectual property.” Federal prosecutors indicted Moore for conspiring to hack into people’s computers in order to steal their nude images. According to the indictment, Moore paid a computer hacker to access women’s password-protected computers and e-mail accounts to steal nude photos for financial gain—profits for his revenge porn site Is Anyone Up. Site operators may be held accountable for violating federal criminal law.

What about revenge porn operators Bollaert and Meyering, who are facing state criminal charges? Generally speaking, site operators are not transformed into “information content providers” (who are not immunized from liability) unless they co-developed or co-created the allegedly criminal/tortious content, such as by paying for the illegal content and reselling it or drafting some of the contested content themselves. California Attorney General Kamala Harris’s prosecutions of both Bollaert and Meyering press the question whether Section 230’s immunity extends to sites that effectively engage in extortion by encouraging the posting of sensitive private information and profiting from its removal.

Let’s take Bollaert’s case. It rests on a theory similar to the case against Meyering, who ran WinbyState, a private revenge porn site with a connected site that charged for the takedown of photos. In December 2013, Bollaert, operator of revenge porn site UGotPosted, was indicted for extortion, conspiracy, and identity theft. His site featured the nude photos, Facebook screenshots, and contact information of more than 10,000 individuals. The indictment alleged that Bollaert ran the revenge porn site with a companion takedown site, Change My Reputation. According to the indictment, when Bollaert received complaints from individuals, he would send them e-mails directing them to the takedown site, which charged up to $350 for the removal of photos. Attorney General Harris explained that Bollaert “published intimate photos of unsuspecting victims and turned their public humiliation and betrayal into a commodity with the potential to devastate lives.”

Bollaert will surely challenge the state’s criminal charges on Section 230 grounds. His strongest argument is that charging for the removal of user-generated photos is not tantamount to co-developing them. Said another way, charging for the removal of content is not the same as paying for, or helping develop, it. That is especially true of the identity theft charges because Bollaert never personally passed himself off as the subjects depicted in the photos. Nonetheless, the state has a strong argument that the extortion charges fall outside Section 230’s immunity because they hinge on what Bollaert himself did and said, not on what his users posted. Only time will tell whether that sort of argument will prevail. Even if the California AG’s charges are dismissed on Section 230 grounds, federal prosecutors could bring federal extortion charges against Bollaert. Sites that encourage cyber harassment and charge for its removal (or have a financial arrangement with removal services) are engaging in extortion. At the least, they are actively and knowingly conspiring in a scheme of extortion. Of course, this possibility depends on the enforcement of federal criminal law vis-à-vis cyber stalking, which as we have seen is stymied by social attitudes and insufficient training.


4 Points About the Target Breach and Data Security

There seems to be a surge in data security attacks lately. First came news of the Target attack. Then Neiman Marcus. Then the U.S. Courts. Then Michaels. Here are four points to consider about data security:

1. Beware of fraudsters engaging in post-breach fraud.

After the Target breach, fraudsters sent out fake emails purporting to be from Target about the breach, trying to trick people into providing personal data. It can be hard to distinguish a real email from an organization that has suffered a data breach from a fake one sent by fraudsters. People are more likely to fall prey to a phishing scheme after a breach because they are anxious and want to take steps to protect themselves. Post-breach trickery is a growing technique among fraudsters, and people must be educated about it and be on guard.

2. Credit card fraud and identity theft are not the same.

The news media often conflates credit card fraud with identity theft. Although there is one point of overlap, for the most part they are very different. Credit card fraud involving the improper use of credit card data can be stopped when the card is cancelled and replaced. Identity theft differs because it involves the use of personal information such as a Social Security number, birth date, and other data that cannot readily be changed. It is thus much harder to stop identity theft. The point of overlap is when an identity thief uses a person’s data to obtain a credit card. But when a credit card is lost or stolen, or when credit card data is leaked or improperly accessed, that is credit card fraud, not identity theft.

3. Data breaches cause harm.

What’s the harm when data is leaked? This question has confounded courts, which often don’t recognize a harm. If your credit card is just cancelled and replaced, and you don’t pay anything, are you harmed? If your data is leaked, but you don’t suffer from identity theft, are you harmed? I believe that there is a harm. The harm of credit card fraud is that it can take a long time to replace all the credit card information in various accounts. People have card data on file with countless businesses and organizations for automatic charges and other transactions. Replacing all this data can be a major chore. People’s time has a price. That price will vary, but it rarely is zero.
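To illustrate, here is a back-of-the-envelope sketch in Python; every figure in it is hypothetical:

```python
# Back-of-the-envelope estimate of the time cost of replacing card data
# after a breach. Every figure below is hypothetical, for illustration only.
accounts_with_card_on_file = 15   # merchants, utilities, subscriptions, etc.
minutes_per_account = 10          # log in, update the card number, confirm
hourly_value_of_time = 25.00      # dollars; varies by person, but rarely zero

hours_spent = accounts_with_card_on_file * minutes_per_account / 60
cost = hours_spent * hourly_value_of_time
print(f"{hours_spent:.1f} hours of cleanup, worth roughly ${cost:.2f}")
# -> 2.5 hours of cleanup, worth roughly $62.50
```

However one values the time, the point stands: the cost is real and greater than zero.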



10 Reasons Why Privacy Matters

Why does privacy matter? Courts and commentators often struggle to articulate why privacy is valuable. They frequently see privacy violations as slight annoyances. But privacy matters far more than that. Here are 10 reasons why privacy matters.

1. Limit on Power

Privacy is a limit on government power, as well as the power of private sector companies. The more someone knows about us, the more power they can have over us. Personal data is used to make very important decisions in our lives. Personal data can be used to affect our reputations; and it can be used to influence our decisions and shape our behavior. It can be used as a tool to exercise control over us. And in the wrong hands, personal data can be used to cause us great harm.

2. Respect for Individuals

Privacy is about respecting individuals. If a person has a reasonable desire to keep something private, it is disrespectful to ignore that person’s wishes without a compelling reason to do so. Of course, the desire for privacy can conflict with important values, so privacy may not always win out in the balance. Sometimes people’s desires for privacy are just brushed aside because of a view that the harm in doing so is trivial. Even if this doesn’t cause major injury, it demonstrates a lack of respect for that person. In a sense it is saying: “I care about my interests, but I don’t care about yours.”

3. Reputation Management

Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being. Although we can’t have complete control over our reputations, we must have some ability to protect our reputations from being unfairly harmed. Protecting reputation depends on protecting against not only falsehoods but also certain truths. Knowing private details about people’s lives doesn’t necessarily lead to more accurate judgments about them. People judge badly, they judge in haste, they judge out of context, they judge without hearing the whole story, and they judge with hypocrisy. Privacy helps people protect themselves from these troublesome judgments.



With Great Power Comes Great Responsibility

In a sentence, Anupam Chander’s The Electronic Silk Road contains the good, the bad and the ugly of the modern interconnected and globalized world.

How many times do we use terms like “network” and “global”? In Professor Chander’s book you may find not only their meanings, but also the legal, economic, and ethical implications these terms carry today.

It’s well known that we are facing a revolution, despite Bill Gates’ recent remark that “The internet is not going to save the world.” I partly agree with Mr. Gates. The internet probably will not save the world, but it has certainly already changed the world as we know it, making possible the opportunities so well described in The Electronic Silk Road.

However, I would like to use my spot in this Symposium not to write about the wonders of Trade 2.0, but to share some concerns that I have as a privacy scholar.

The problem is well known and concerns the risks posed by big data companies, which base their business model on profiling consumers in order to sell advertising or additional services to other companies.

“[T]he more the network provider knows about you, the more it can earn,” writes Chander, and as V. Mayer-Schönberger and K. Cukier note in their recent book Big Data, the risks related to the “dark side” of big data concern not just the privacy of individuals, but also the processing of those data, with the “possibility of using big data predictions about people to judge and punish them even before they’ve acted.”

This is, perhaps, both the good and the bad of big data companies as the modern caravans of the electronic silk road: they carry enormous amounts of information, and that information can be used – or rather, processed – for so many different purposes that we cannot imagine what will happen tomorrow. Not only is the risk of global surveillance around the corner (on this topic I suggest reading the great post by D. K. Citron and D. Gray, Addressing the Harm of Total Surveillance: A Reply to Professor Neil Richards), but so is the risk of a dictatorship of data.

Such circumstances, as Professor Solove writes in his book Nothing to Hide, “[…] not only frustrate the individual by creating a sense of helplessness and powerlessness, they also affect social structure by altering the kind of relationships people have with the institutions that make important decisions about their lives.”

Thus, I suspect that privacy and data protection may be the real challenge for the electronic silk road.

Professor Chander’s book is full of examples of the misuse of data (see the section “Yahoo! in China”), of the difficulty of protecting sensitive data shared across the world (see the section “Boston Brahmins and Bangalore Doctors”), and of the privacy problems that social networks pose for users (see Chapter 5, “Facebookistan”).

But Professor Chander also sees the possible benefits of big data analysis (see the section “Predictions and Predilections”), for example in healthcare; it is therefore important to find a way to regulate the unstoppable flow of data across the world.

In such a complex debate about a right that carries different meanings and definitions across the world (what counts as “privacy” or “personal data” differs among the USA, Canada, Europe, and China, for example), I find the recipe suggested by Anupam Chander very interesting.

First of all, we have to embrace some ground principles that work both for providers and for law- and policymakers: 1) do no evil; 2) technology is neutral; 3) cyberspace needs a dematerialized architecture.

Using these principles, it will be easy to follow Professor Chander’s fundamental rule: “harmonization where possible, glocalization where necessary”.

A practical implementation of this rule, as described in Chapter 8, would satisfy the differing views of data privacy in highly liberal regimes and in highly repressive ones, setting glocalization (global services adapting to local rules) against deregulation in the former and the “do no evil” principle against oppression in the latter.

This seems reasonable to me, and at the end of my “journey” through Professor Chander’s book, I want to thank him for giving us some fascinating, but above all usable, theories for the forthcoming international cyberlaw.


Opportunities and Roadblocks Along the Electronic Silk Road

Last week, Foreign Affairs posted a note about my book, The Electronic Silk Road, on its Facebook page. In the comments, some clever wag asked, “Didn’t the FBI shut this down a few weeks ago?” In other venues as well, as I have shared portions of my book across the web, individuals across the world have written back, sometimes applauding and at other times challenging my claims. My writing itself has journeyed across the world–when I adapted part of a chapter as “How Censorship Hurts Chinese Internet Companies” for The Atlantic, the China Daily republished it. The Financial Times published its review of the book in both English and Chinese.

Even these posts involved international trade. Much of this activity involved websites—from Facebook to The Atlantic and the Financial Times—each of them earning revenue in part from cross-border advertising (even the government-owned China Daily is apparently under pressure to increase advertising). In the second quarter of 2013, for example, Facebook earned the majority of its revenues outside the United States–$995 million out of a total of $1,813 million, or 55 percent of revenues.

But this trade also brought communication—with ideas and critiques circulated around the world. The old silk roads similarly were passages not only for goods but also for knowledge. They helped shape our world, not only materially but spiritually, just as the mix of commerce and communication on the Electronic Silk Road will reshape the world to come.



Who Is The More Active Privacy Enforcer: FTC or OCR?

Those who follow FTC privacy activities are already aware of the hype that surrounds the FTC’s enforcement actions. For years, American businesses and the Department of Commerce have loudly touted the FTC as a privacy enforcer equivalent to EU Data Protection Authorities. The Commission is routinely cited as providing the enforcement mechanism for commercial privacy self-regulatory activities, for the EU-US Safe Harbor Framework, and for the Department of Commerce-sponsored Multistakeholder process. American business and the Commerce Department have exhausted themselves in international privacy forums promoting the virtues of FTC privacy enforcement.

I want to put FTC privacy activities into perspective by comparing the FTC with the Office for Civil Rights (OCR) at the Department of Health and Human Services. OCR enforces health privacy and security standards under the Health Insurance Portability and Accountability Act (HIPAA).

Let’s begin with the FTC’s statistics. The Commission maintains a webpage with information on all of its cases since 1997, though I’ve found that the link to it does not work consistently or properly at times. I can’t reach some pages to confirm everything I would like to, but I am sure enough of the basics to make these comments.

The Commission reports 153 cases from 1997 through February 2013.  That’s roughly 15 years, with an average of about ten cases a year.  The number of cases for 2012, the last full year, was 24, much higher than the fifteen-year average.  The Commission clearly stepped up its privacy and security enforcement activities of late.  I haven’t reviewed the quality or significance of the cases brought, just the number.



The FTC and the New Common Law of Privacy

I recently posted a draft of my new article, The FTC and the New Common Law of Privacy (with Professor Woodrow Hartzog).

One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States – more so than nearly any privacy statute and any common law tort.

In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. The article explores the following issues:

  • Why did the FTC, and not contract law, come to dominate the enforcement of privacy policies?
  • Why, despite more than 15 years of FTC enforcement, have there been hardly any resulting judicial decisions?
  • Why has FTC enforcement had such a profound effect on company behavior given the very small penalties?
  • Can FTC jurisprudence evolve into a comprehensive regulatory regime for privacy?



The claims we make in this article include:

  • The common view of FTC jurisprudence as thin — as merely enforcing privacy promises — is misguided. The FTC’s privacy jurisprudence is actually quite thick, and it has come to serve as the functional equivalent to a body of common law.
  • The foundations exist in FTC jurisprudence to develop a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves substantive rules that exist independently from a company’s privacy representations.


You can download the article draft here on SSRN.


Brave New World of Biometric Identification

Professor Margaret Hu’s important new article, “Biometric ID Cybersurveillance” (Indiana Law Journal), carefully and chillingly lays out federal and state governments’ increasing use of biometrics for identification and other purposes. These efforts are poised to lead to a national biometric ID with centralized databases of our iris, face, and fingerprint data. Such multimodal biometric IDs ostensibly provide greater security from fraud than our current de facto identifier, the Social Security number. As Professor Hu lays out, biometrics are, and soon will be, gatekeepers to the right to vote, work, fly, drive, and cross our borders. Professor Hu explains that the FBI’s Next Generation Identification project will institute:

a comprehensive, centralized, and technologically interoperable biometric database that spans across military and national security agencies, as well as all other state and federal government agencies. Once complete, NGI will strive to centralize whatever biometric data is available on all citizens and noncitizens in the United States and abroad, including information on fingerprints, DNA, iris scans, voice recognition, and facial recognition data captured through digitalized photos, such as U.S. passport photos and REAL ID driver’s licenses. The NGI Interstate Photo System, for instance, aims to aggregate digital photos from not only federal, state, and local law enforcement, but also digital photos from private businesses, social networking sites, government agencies, and foreign and international entities, as well as acquaintances, friends, and family members.

Such a comprehensive biometric database would surely be accessed and used by our network of fusion centers and other hubs of our domestic surveillance apparatus that Frank Pasquale and I wrote about here.

Biometric ID cybersurveillance might be used to assign risk assessment scores and to take action based on those scores. In a chilling passage, Professor Hu describes one such proposed program:

FAST is currently under testing by DHS and has been described in press reports as a “precrime” program. If implemented, FAST will purportedly rely upon complex statistical algorithms that can aggregate data from multiple databases in an attempt to “predict” future criminal or terrorist acts, most likely through stealth cybersurveillance and covert data monitoring of ordinary citizens. The FAST program purports to assess whether an individual might pose a “precrime” threat through the capture of a range of data, including biometric data. In other words, FAST attempts to infer the security threat risk of future criminals and terrorists through data analysis.

Under FAST, biometric-based physiological and behavioral cues are captured through the following types of biometric data: body and eye movements, eye blink rate and pupil variation, body heat changes, and breathing patterns. Biometric-based linguistic cues include the capture of the following types of biometric data: voice pitch changes, alterations in rhythm, and changes in intonations of speech. Documents released by DHS indicate that individuals could be arrested and face other serious consequences based upon statistical algorithms and predictive analytical assessments. Specifically, projected consequences of FAST ‘can range from none to being temporarily detained to deportation, prison, or death.’

Data mining of our biometrics to predict criminal and terrorist activity, which is then used as a basis for government decision making about our liberty? If this comes to fruition, technological due process would certainly be required.

Professor Hu calls for the Fourth Amendment to evolve to meet the challenge of 24/7 biometric surveillance technologies. David Gray and I hope to answer Professor Hu’s call in our article “The Right to Quantitative Privacy” (forthcoming Minnesota Law Review). Rather than asking how much information is gathered in a particular case, we argue that Fourth Amendment interests in quantitative privacy demand that we focus on how information is gathered. In our view, the threshold Fourth Amendment question should be whether a technology has the capacity to facilitate broad and indiscriminate surveillance that intrudes upon reasonable expectations of quantitative privacy by raising the specter of a surveillance state if deployment and use of that technology is left to the unfettered discretion of government. If it does not, then the Fourth Amendment imposes no limitations on law enforcement’s use of that technology, regardless of how much information officers gather against a particular target in a particular case. By contrast, if it does threaten reasonable expectations of quantitative privacy, then the government’s use of that technology amounts to a “search” and must be subjected to the crucible of Fourth Amendment reasonableness, including judicially enforced constraints on law enforcement’s discretion.