Evolution of Privacy Breach Litigation?
In addition to empirical work on data breaches and breach disclosure laws, I’ve also become very interested in data breach litigation. While plaintiffs have seen very little success with legal actions brought against companies that suffer data breaches, I still believe there is some very interesting empirical work that can be done regarding these lawsuits.
In a recent post, Daniel Solove cited a paper by Andrew Serwin (found here) who described in great detail the legal theories and statutes that plaintiffs use when bringing legal actions against companies that suffer data breaches. It isn’t my purpose to repeat that work, but rather to identify an interesting pattern that appears to have emerged over the past 5 to 10 years of privacy breach litigation. Special thanks to Paul Bond of Reed Smith LLP who first brought this to my attention.
Category 1: You lost my data, now I will sue you.
This first category comprises what is classically considered a data breach: plaintiffs suing a company simply because their personally identifiable information (PII) was lost, stolen, or improperly disposed of. Examples include ChoicePoint, TJX, Hannaford, and Heartland. Plaintiffs claim that this disclosure of data has harmed, or will harm, them, and that they are justified in seeking relief for actual fraud losses, monitoring costs, future expected loss, or emotional distress. Plaintiffs bring these actions under many kinds of tort and contract theories, but generally lose because they are unable to prove a legally recognized harm (as we discuss further below). The defining characteristic of this category is that the burden lies with the alleged victims to show they were harmed in a legally meaningful way.
Category 2: You violated the law, now I will sue you.
The second category comprises legal actions arising from what we might call 'intentional or willful' disclosure of PII, brought under various state and federal statutes. For example, the Driver's Privacy Protection Act (DPPA), the Privacy Act, and the Stored Communications Act. The defining characteristic here is that the legal focus shifts from the plaintiff's harm to the defendant's behavior. That is, mere violation of the statute is justification for plaintiff relief. For example, the DPPA allows recovery of liquidated damages of at least $2,500 for unauthorized disclosure of a driver's personal information, the Privacy Act allows recovery of at least $1,000 for unauthorized disclosure of personal information by a government agency, and the Stored Communications Act allows recovery up to $10,000 for intentional and unauthorized access of an electronic communication.
Category 3: You collected my data without asking me, now I will sue you.
The third category of lawsuits represents what could be considered 'unauthorized collection' of PII; these actions are brought by plaintiffs who claim that organizations knowingly and willfully collected their personal information. For example, in CollegeNet v. XAP Corp., 442 F. Supp. 2d 1070 (D. Or. 2006), the plaintiff (a competitor) brought action against XAP for unfair business practices through the unauthorized collection of personal information of its customers. Further, in Davis et al. v. VideoEgg, Inc., 2010 WL 3839312 (C.D. Cal.), the complaint states that “VideoEgg…set online tracking devices which would allow access to, and disclosure of [PII] …without actual notice, awareness, or consent and choice of its users…” Not surprisingly, these actions are more common in recent years, likely driven by the explosive popularity of social media, behavioral advertising, and Flash cookies. (See also actions against Facebook's Beacon and NebuAd.)
To be clear, these categories are not mutually exclusive, but are relevant because I think they tell an interesting story of how the landscape of privacy breaches and breach litigation is evolving (notice I’m expanding the scope from just ‘data’ breaches to ‘privacy’ breaches). Perhaps this is just a reflection of technology and social change and therefore expected and obvious.
Regardless, this categorization provides a useful model by which to frame empirical work. In a paper with the amazing David Hoffman and Alessandro Acquisti, we're building a database of breach lawsuits and performing some interesting docket analysis on these suits. Once we've gathered sufficient data, we should be able to estimate the probability that a breached firm will be involved in a lawsuit, and identify the characteristics of the breach, parties, court, etc., that lead to different outcomes.
Colleagues who are data breach litigators suggest that plaintiffs are much more successful with Category 2 actions than with the others (the third may just be too new to evaluate). If this is true, then it suggests another alternative for reducing privacy harms from breaches (beyond disclosure and mandated standards): imposing a fine on breached companies. This is a little different from a strict liability solution, in which the company would bear the full cost of consumer loss. Here, the sanction may instead be a function of the size of the breach (not the total harm) and imposed as a fine, or tax. In fact, call it a “data breach tax.” And so, as with Category 2 actions, the plaintiff would only have to prove that the company lost their data. The onus is placed on the company, not the consumer.
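To make the idea concrete, here is a purely hypothetical sketch of how such a tax might be computed. The per-record rate and the cap are invented for illustration only; they are not drawn from any statute, proposal, or the research described above:

```python
def breach_tax(records_lost, rate_per_record=2.50, cap=1_000_000.0):
    """Hypothetical sanction keyed to breach size, not proven consumer harm.

    Both rate_per_record and cap are illustrative numbers, not real policy.
    """
    # The fine scales linearly with the number of records exposed,
    # up to a ceiling, so only breach size must be proven.
    return min(records_lost * rate_per_record, cap)

# A 100,000-record breach under these illustrative parameters:
print(breach_tax(100_000))    # 250000.0
# A very large breach hits the cap:
print(breach_tax(10_000_000)) # 1000000.0
```

The point of the sketch is that, unlike a damages calculation, nothing here depends on showing what any individual consumer actually lost.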
But is this fair? Is it efficient? How would a data breach tax affect the incentives of companies (and consumers) relative to ex ante regulation, information disclosure or ex post liability? This requires some analytical modeling, which I’ll discuss in an upcoming blog post.