Are People Really Harmed By a Data Security Breach?

“It’s just a flesh wound.”

Monty Python and the Holy Grail

Over at Privacy & Security Source, Andrew Serwin, a leading privacy lawyer and author of an excellent treatise on privacy law, has a very thoughtful and informative post about cases where courts found no harm to individuals by data security breaches.  Serwin observes:

Virtually every case supports the view that most privacy breaches will not support civil liability because harm typically does not exist.

There are at least three general bases upon which plaintiffs argue they are injured by a data security breach:

1. The exposure of their data has caused them emotional distress.

2. The exposure of their data has subjected them to an increased risk of harm from identity theft, fraud, or other injury.

3. The exposure of their data has resulted in their having to expend time and money to prevent future fraud, such as signing up for credit monitoring, contacting credit reporting agencies and placing fraud alerts on their accounts, and so on.

Courts have rejected all three of these arguments.  In many data security breach cases, courts are dismissing claims not because companies practiced reasonable security and weren’t negligent — indeed, in many cases, companies were grossly negligent, even reckless.  I’m continually stunned by how often shoddy security practices recur — such as the all-too-common lost laptop with millions of unencrypted consumer records.  Instead, courts are dismissing cases even in the face of negligence (or worse) because they conclude that people aren’t really harmed by the exposure of their data.

Serwin’s post discusses the recently decided case In re Hannaford Bros. Data Security Breach Litigation (Maine Supreme Court, Sept. 21, 2010), where the court examined the third argument above:

The plaintiffs here have suffered no physical harm, economic loss, or identity theft. As the federal district court recognized, actual injury or damage is an element of both negligence and breach of contract claims. . . .

Our case law, therefore, does not recognize the expenditure of time and effort alone as a harm. The plaintiffs contend that because their time and effort represented reasonable efforts to avoid reasonably foreseeable harm, it is compensable. However, we do not attach such significance to mitigation efforts. . . . Unless the plaintiffs’ loss of time reflects a corresponding loss of earnings or earning opportunities, it is not a cognizable injury under Maine law of negligence.

I find it troubling that courts won’t recognize harm in a data security breach.  In an earlier post about the careless granting of credit, I wrote that one of the problems with courts failing to find any harm is that it makes it cost-effective for companies to fail to invest in adequate security practices:

The reason so much identity theft occurs is because it is cheaper to expose people to the risk of identity theft than to exercise more care in vetting credit applications.  Courts and legislatures are also to blame, for they fail to adequately recognize the harm of identity theft (or data breaches) and will not make companies internalize the full costs.   So the companies do their cost-benefit analysis and conclude that they can expose people to the risk of identity theft because many costs are external — and if people sue, courts won’t recognize them.

People really do suffer real emotional distress because of a data security leak.  In other contexts, courts readily recognize emotional distress alone as a cognizable injury.  Suppose instead of leaking a person’s data, it leaked a person’s nude photo.  In many cases under the public disclosure of private facts tort, courts have no trouble at all recognizing an injury when a person’s nude photo is disclosed — even when it causes the person no reputational harm or financial injury.  The harm is merely embarrassment and emotional distress.

A data security breach does make people worse off by subjecting them to future risk.  They are made more vulnerable.  Imagine I own two safety-deposit boxes that I want to rent out.  For Box 1, I have lost the key.  For Box 2, I haven’t.  Is Box 1 really worth the same as Box 2?  If I remove the locks from the doors of your house, but no burglar or intruder has yet appeared, is there no harm to you?  I think there is — you’re clearly worse off.  Or suppose a company harms people by completely removing their immune systems, but those people don’t get sick.  Are they not harmed?

One concern is that setting the standard for harm too low will invite excessive litigation. One of the problems with data security these days is what I call the “multiplier problem.” Entities have data on so many people that when there’s a leak, millions could be affected, and even a small amount of damages for each person might add up to a penalty that threatens a company’s business. We live in a world where it is relatively easy for a company to affect millions of people this way.
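The arithmetic behind the multiplier problem is simple to sketch. The figures below are purely hypothetical, chosen only to illustrate how a trivial per-person award scales to a company-threatening total:

```python
# Hypothetical illustration of the "multiplier problem": even a tiny
# per-person damages award becomes an enormous aggregate penalty when
# millions of records are exposed in a single breach.
affected_people = 10_000_000   # records exposed in one breach (hypothetical)
award_per_person = 50.00       # a modest per-person award, in dollars (hypothetical)

total_liability = affected_people * award_per_person
print(f"Total exposure: ${total_liability:,.0f}")  # Total exposure: $500,000,000
```

Fifty dollars per person is almost nominal, yet the aggregate is half a billion dollars — enough to threaten many businesses.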

But we generally make companies that cause wide-scale harm pay for it.  When BP has an oil leak, we demand that it pay for the cleanup as well as compensate people for its effects.  The sentiment behind this demand is that BP was profiting from its activities and shouldn’t make other people suffer if it didn’t provide adequate safety.  In other words, BP must own the harm it created.

But with a data leak, courts are saying that companies should be off the hook.  They get to use data on millions of people without any consequences if they cause harm.

This is a problem.  Danielle Citron’s thoughtful paper, Reservoirs of Danger, argued that those keeping data should be treated similarly to those engaging in hazardous activities.  I agree.  If you’re going to profit by using people’s data, you should at least be held responsible for compensating people when you fail to keep it secure.

But the multiplier effect is something we can’t ignore.  Data is so easy and cheap to collect today, and so many companies can affect so many people.  We don’t want to put a company out of business for a data security breach that causes only minor harm to each person but affects so many people that the total adds up to a staggering sum.

I don’t think the answer is to make it hard or impossible for people to prove harm.  Our litigation system, unfortunately, has become too costly and unwieldy, and that’s a problem — but there must be a way to compensate people within reason without bankrupting a company.  Perhaps the law should require some kind of limited liquidated damages per person, with an exception for incidents of extraordinary harm.
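One way to picture the liquidated-damages idea is as a simple schedule: a fixed, limited amount per affected person, with actual damages substituted only for claimants who can prove extraordinary harm. The function below is a hypothetical sketch of that structure, not a proposal from the post; the dollar figures are invented.

```python
def breach_liability(num_affected, per_person=25.0, extraordinary_claims=()):
    """Hypothetical capped liquidated-damages scheme (all figures invented).

    Every affected person receives a fixed, limited amount; claimants who
    prove extraordinary harm recover their actual damages instead.
    """
    ordinary = (num_affected - len(extraordinary_claims)) * per_person
    extraordinary = sum(extraordinary_claims)
    return ordinary + extraordinary

# One million people affected; three claimants prove large actual losses.
total = breach_liability(1_000_000,
                         extraordinary_claims=(40_000.0, 12_500.0, 7_500.0))
```

The cap makes the aggregate predictable for the company, while the exception preserves full recovery for the rare person who suffers serious, provable injury.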

Another approach might be to find a way to adjudicate cases that is less cumbersome and expensive, where most of the money goes to compensate people.

Or maybe we should require companies that collect data to pay into a general fund, administered by the government, that would compensate people (something like workers’ compensation).  The payment would work like an insurance premium, which could be higher or lower based on whether a company followed industry standards, how much data it held, how sensitive that data was, and whether the company had suffered a breach in the past.
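Such a premium could be scaled by exactly the factors listed above: volume and sensitivity of the data, adherence to industry standards, and breach history. The sketch below is purely illustrative — every rate, discount, and surcharge in it is an invented placeholder, not an actuarial proposal.

```python
def annual_premium(records_held, sensitivity, follows_standards, past_breaches):
    """Hypothetical risk-scaled premium for a breach-compensation fund.

    records_held:      number of individuals' records the company keeps
    sensitivity:       multiplier >= 1.0 (e.g. 1.0 for emails, 3.0 for SSNs)
    follows_standards: True if the company meets industry security standards
    past_breaches:     number of prior breaches at this company
    """
    base_rate = 0.10  # dollars per record per year (invented figure)
    premium = records_held * base_rate * sensitivity
    if follows_standards:
        premium *= 0.5                    # 50% discount for good practices (invented)
    premium *= 1 + 0.25 * past_breaches   # 25% surcharge per prior breach (invented)
    return premium
```

The point of the structure is the incentive: a company holding a million sensitive records pays meaningfully less if it follows industry standards and has a clean breach history, so the premium itself rewards better security.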

For more on this topic, Serwin’s article, Poised on the Precipice, is well worth reading.


16 Responses

  1. Jim Harper says:

    Serwin’s article is good, and a good touchstone for your commentary, Daniel, on a subject I know you’ve been thinking about for a long time. Legally cognizable harm is the elephant in the room when it comes to privacy. We’d all like people to enjoy privacy at the level they prefer, but if there is no harm, why are we calling on the state to police behavior?

    A couple of thoughts that came to mind as I read your piece might help sharpen the issues:

    You’ve said that companies were negligent or even reckless, but a negligence cause of action lies — as was burned into my brain — when there is a duty, a breach of that duty, causation, and damages. Being “negligent” about something that doesn’t cause a harm is not negligence. It is simply indifference to a priority held by Dan Solove and many others. A segue into the question of…

    What are “adequate security practices”? Perfection in data security would be nearly as bad as total failure. A perfectly secure database is unplugged, encased in concrete, and sunk to the bottom of a deep ocean from a secret location on the surface. Which is to say, it’s useless. You believe — and I don’t know to tell you you’re wrong — that *more* data security would be better. But what if greater investment in data security drives higher costs to consumers without driving their data security or privacy up by an equal or greater amount? Excess security would lower overall consumer welfare, which would be bad.

    I believe that holding data should obligate one to a duty of care toward the data subject. I’m most inclined to disagree with the third category of cases noted above, where a person’s reasonable steps to mitigate likely harm aren’t compensated.

    I’d like to see negligence cases succeed at a rate, and on facts, that drive data holders to optimal security practices. But I don’t know what success rate produces that, and I don’t think equating data holding to engaging in inherently hazardous activity gets you to the right place. It gets you to super-optimal security, which is sub-optimal consumer welfare.

  2. Daniel Solove says:

    Jim — In many cases, the security practices are egregiously bad. Unencrypted data. Employees taking home millions of records on portable devices. And so on. There is a lot of knowledge in the security field about what data security practices are better and worse. Of course, there is no such thing as perfect security, and there may be debates over what “adequate security” means, but I think that in many cases, it is clear that the data security isn’t adequate at all.

    Negligence doesn’t require perfection but the following of reasonable industry standards. Many companies aren’t doing this.

    With regard to negligence, I think we’re arguing past each other. When I said that companies can be negligent or reckless, I was speaking about their degree of fault. There is, of course, a difference between satisfying the fault standard of negligence and having a cause of action in negligence, which is what you’re referring to. The fault standard of negligence involves deviating from a reasonable standard of care; a negligence cause of action requires duty, breach, causation, and damages. Anyway, I think the argument is over semantics. I was referring not to a cause of action but to the fact that companies can deviate from a reasonable standard of care and not be liable.

  3. An excellent post and discussion, thank you.

    One extra thought: in March the UK ICO published a report which included a discussion of the value of personal information. It describes (Vol. 1, p. 8) how that value may look different to the individual than from three other perspectives.

    The Privacy Dividend – the business case for investing in proactive privacy protection

    (I was a co-author of the ICO report together with Dr John Leach)

  4. Bruce Boyden says:

    I think the debate boils down to this: Is dread a harm?

  5. Ken Rhodes says:

    Daniel, the clarification of a reasonable standard for “degree of negligence” is certainly critical, but there is another prior issue that I think is more “black and white.”

    In my home state (Virginia), which is not way out in left field on this issue, it is against the law to expose another individual to the risk of AIDS without prior disclosure:

    Any person who, knowing he or she is infected with HIV, has sexual intercourse…without having previously disclosed the existence of his or her HIV infection to the other person shall be guilty of a class 1 misdemeanor.
    –Va. Code Ann. § 18.2-67.4:1

    The relevance is this: the law in this instance recognizes culpability not only for the infliction of AIDS, but for the *risk* of it. Yet in the situation you’ve described above, the law quite specifically requires that the damage *occur* in order to justify compensation, not merely the exposure to the risk of damage.

    I think if these types of risks were subject to the same treatment as others (AIDS, the publication of nude photos you mentioned, etc.) then we might see a momentary rush on the courts, but that would subside very quickly once the courts then turned to the issue you’ve explicated–a reasonable standard of diligence on the part of the data holders.

    If the holders of the data are subject to a standard of diligence, then an occasional accident which happens in spite of diligence can be seen as just that: an unforeseen accident with no corresponding negligence.

    But that will only happen when the law recognizes the requirement to avoid, not only damage through negligence, but negligent exposure to risk.

  6. Ken Rhodes says:

    @Bruce: I don’t think so. Rather, I think it boils down to: Is risk a harm?

  7. Ryan Calo says:

    Great post, Dan. You know my view: people are harmed to the extent that (1) they experience distress worrying about the possibility of their information being used against them and (2) their information is actually used against them. I believe the court should recognize category (1) for the reasons you stated. Details here:

    This leads me to disagree, though, with your safety deposit box analogy. Why would I be harmed by the loss of a key to a deposit box unless or until I want something in it?


  8. Dissent says:

    Distress could be very time-limited if the consumer (only) has to cancel a credit card or debit card and/or change autopay settings to insert a new card number.

    But suppose that because of the experience, they now find themselves generally anxious about future breaches involving other entities and so they start spending time checking their bank statements every day or are afraid to use their new card as they would normally use it – and, as a result, do not enjoy the same quality of life that they had prior to the breach. Is that “harm?” The courts would seemingly say “no,” but if courts acknowledge lasting psychological impact as a result of other kinds of negligence, why not this kind?

  9. Daniel Solove says:

    Ryan — If the safety deposit boxes were property, wouldn’t the one with the lost key be worth a lot less? Isn’t this diminution in value a harm?

    You ask: “Why would I be harmed by the loss of a key to a deposit box unless or until I want something in it?”

    In other contexts, such as harm to property, we don’t require plaintiffs to prove that they will use the property in order to be damaged. Suppose you crash into my car. The only damage is that my heated seats won’t work. But I’ve never used my heated seats and I don’t know if I ever will. I’m still harmed, right? I can still recover for the loss of my heated seats, and I don’t have to prove I’ll be using them anytime soon.

    A problem often arises with identity theft or a credit report error: courts say a person isn’t harmed unless they actually try to get a loan and are denied. Suppose you’re a victim of an error in a credit report that causes your score to be very low. At the moment, you’re thinking of buying a new house, but you decide that until your credit is fixed, you’d better not do anything. Indeed, trying to do anything would only be a waste of time and money until your credit report is fixed. Have you been harmed? Courts often want people to apply for credit and be denied, but this seems like a pointless requirement to impose. People are harmed regardless of whether they apply for credit, because their freedom to obtain a loan is diminished. Their entire calculus of decisions is affected.

  10. Daniel, nice post. In the stolen credit card context, unless credit is damaged, I think it is difficult to establish any actual harm to the consumer. Consumers can be liable at most for $50 by law, and that amount is routinely waived by the issuing banks. Oftentimes, the consumer is issued a new card, which cuts off the chance of future fraudulent charges. What “harm” is left in this context, then? If you open the door to any increased risk of harm, you open litigation floodgates that will be crippling relative to the inconvenience suffered by the cardholder. Not to mention, if “risk of harm” is cognizable harm, how many new torts have you created (does a reckless driver swerving in and out of traffic increase one’s risk of harm)?

    The analysis may be different if other PII is involved (e.g. SS#), but in the credit card context this seems like the right decision.

    Finally, since this is about the relative societal cost-benefits of the fluidity of data transactions versus potential harm to consumers, isn’t this best left to the legislature? If we, as a society, believe that risk of future harm is worthy of recompense, then let’s pass a law.

    Another parting thought that may reveal some inconsistencies in various courts’ risk-of-harm analysis. If risk of harm is not legally recognized, wouldn’t the same rationale apply if a data subject was subject to identity theft? Let’s say a PII breach occurs that allows an ID thief to open a credit account in a data subject’s name, and the thief racks up thousands in purchases. Our data subject discovers the ID theft and expends time and effort fixing his or her credit record. However, the data subject is never denied credit and, after the record is fixed, enjoys the same credit rating he or she had before. Has there been any harm in this case? Or has the data subject merely reacted to a significantly increased risk of harm? I think most courts would find cognizable harm if actual ID theft occurred post-breach (and I think the Tri-West case indicated just that). So how can you square the existence of harm when ID theft has occurred but has not adversely impacted the data subject, except for the time and effort spent eliminating risk?


    P.S. I have a breakdown of the Hannaford court’s reasoning in my recent blog post on the topic:

  11. Ryan Calo says:

    Dan, thanks, I see where you’re coming from. Certainly if my car loses resale value because the heater doesn’t work, or if I can no longer sell my safety deposit box because I’ve lost the key, then I’ve been harmed. No question. And certainly if someone uses my information against me and steals my identity, thereby lowering my credit score and cutting off loan options, then I’ve suffered a harm–and a privacy harm at that.

    My point is much more modest: it’s not a privacy harm (at least) if I neither feel anxiety around the data loss, nor is it used against me. For instance, suppose I change my name and social security number and, because of horrible negligence on the part of a data custodian, my old name and social security number get out. Then there is only a privacy violation.

    I also agree that courts should recognize the subjective harm associated with having one’s information “out there,” whether or not it gets abused. They should compensate it—just as a court might compensate a person who is threatened but not actually hit (cf. assault without battery). Or, at the very least, they should offer credit monitoring and a guarantee that they will help should some bad actor misuse the leaked data.

    Anyway, great post!

  12. Doug DePeppe says:

    Hello Dan,
    I believe that the economic harm doctrine, which is preventing plaintiff recovery in these data breach litigation cases, will not permanently insulate companies from litigation risk (which, in turn, enables data holders to avoid implementing reasonable security measures). The organized crime element in cyberspace, using enterprise hacking botnets like Zeus/Zbot, is engaged in fraudulent activities involving the international transfer of funds from the banking accounts of small businesses through ‘money mules’ to their bank accounts overseas. This criminal scheme results in clear monetary losses to the small business. In several cases now being litigated, those small businesses are suing the banks with some causes of action sounding in tort, specifically the lack of reasonable security practices of the bank. It would not seem likely that the economic harm doctrine would prevent these cases from proceeding.

    My point is that the current Internet dynamic will likely change the macro environment, where suddenly a number of cases establish precedent for imposing liability on data holders under a negligence theory (lack of reasonable security).

    I tend to agree with you that law has to respond to a societal problem — where poor security exists within a macro cybercrime environment and the market is not adequately addressing the risk to society. Much like the Paisley Snail case stands as a marker for law enabling a needed remedy during societal change (in that case, the changed transactional relationships brought about by the Industrial Revolution), the prevailing Internet dynamic presents a ripe opportunity for the institution of law to begin rebalancing risks and remedies.

  13. clarinette02 says:

    Thanks Dan for your great post and this online symposium.
    I am modestly adding my personal view, on which I have been thinking for some time.
    In terms of data flows, the closest analogy to my mind is the automobile. Obviously, driving has advantages and inconveniences.
    We have driving ‘codes’ and security measures. There are accidents, there are fines, and there are insurance companies to compensate damages.
    From a European perspective, in some countries, like France, car insurance is compulsory.
    Privacy is recognized as a fundamental right, protected by Article 8 of the European Convention on Human Rights.
    The ease of broadcasting, collecting, and creating databases has led to a rise in issues with data traffic.
    The number of incidents where a breach of privacy has caused harm should, in my view, encourage us to think of a code of practice for digital data traffic.
    I have in mind the case of this lady who sued the phone company she held responsible for her broken marriage as they passed onto her husband the log of her ‘private’ conversations with her lover.
    Or the case of medical information leaked either to deny compensation or reveal medical information about Michael Jackson.(UCLA hospital fined over privacy breaches that sources say involve Michael Jackson’s records)
    How many laptops or USB drives with confidential data have been lost?

    According to a recent study by the Ponemon Institute, the ‘actual breach incidents worldwide last year’ cost an average of $3.43 million per organization.

    These incidents of breach of privacy have all caused a harm of various degrees.

    Coming back to the initial analogy, my suggestion is to evaluate the data subject’s rights of compensation according to the harm suffered and the attitude towards the risk.

    – data can be collected with or without the consent, or even the knowledge, of the data subject;
    – the data subject may have suffered an immediate or a potential harm;
    – the data collector/processor’s negligence in securing the data can aggravate liability and thus warrant higher compensation.
    These are some elements for measuring the degree of liability.

    The EU reform of data protection law is considering creating a harmonized data breach penalty and a notification obligation.

    I am wondering if, very much like the driving code, a data handling code could not create a set of rules and a grid of liability to compensate the harm and prejudice suffered by a data subject for intrusion into his or her privacy or more.

    Based on this, a fine could be imposed for non-compliance with the principles of secure data handling, in combination with individual compensation guaranteed by an insurance fund policy.

  14. Omer Tene says:

    Great discussion. I think there’s definitely harm, whether or not id theft occurs. You lose your key holder with your home, office and car keys. Even if no one ever breaks in, you’re harmed (trust me – it happens to me often). Someone loses it for you – someone harmed you. Moreover, US law overemphasizes id theft in privacy matters. It’s the result of security breach notification legislation and there not having been a distinct “data protection” cause of action. Privacy isn’t all about id theft, as Dan explained thoroughly in his taxonomy and elsewhere.

  15. David Paul says:

    The point that you graze close to is this:
    If I have the door locks compromised in my home due to negligence…and I go out and PAY a locksmith to replace the locks, I have suffered an ascertainable loss. This is what happened in the Providence case in Oregon, and we will fix it at the Supreme Court. Many folks paid for credit monitoring, and the theory/proof is great on this.
    Good discussion, in general.

  16. Doug DePeppe says:

    Following up on my post: this Washington Post article provides background for the likely sea change that will soon occur, forcing banks (and perhaps firms outside the financial sector) to implement cybersecurity processes to better secure online banking.
    “Cyberthieves Use Human Money Mules for Risky Work”

    There is too much money being stolen from business accounts – accounts which are not insured – for it to persist without litigation. And, there’s obvious harm here.

    The fundamental problem, in my judgment, is that banks have various security controls and regimes in place that derive from a static compliance-related mindset. In the cybersecurity era, more dynamic controls are needed. The banks are functioning in a Maginot Line era while the threat is a mobile, agile invader.