Drip, Drip, Drip: The Statistical “Certainty” of Data Leaks

Ponemon Institute’s latest survey showed that 90% of businesses of every size reported at least one data breach in the past 12 months. More than half of the respondents said that they suffered more than two breaches in that same period; the same share reported little confidence that they could prevent another cyber attack in the coming months. 77% found the attacks more difficult to contain, while 43% saw an increase in cyber attacks. Only 40% could identify where the breaches originated. The source of the data breaches? According to the report, malicious insiders caused 52% of the incidents, while the rest resulted from malicious software, whether downloaded, embedded on a rogue site, or distributed by social network sites. The Ponemon Institute explains: “The threat from cyber attacks today is nearing statistical certainty and businesses of every type and size are vulnerable to attacks.”

The problem is widespread and expensive. Since 2005, more than 534 million records have been breached, including personal medical records, Social Security numbers, and credit card numbers. According to a 2009 Javelin Strategy & Research study, individuals are four times more likely to be victims of identity theft after receiving a data breach notification letter. Identity theft victims spend more than $1,000 in out-of-pocket expenses and hundreds of hours of personal time cleaning up their credit reports. It’s a mess.

Many companies say there’s little they can do about it, including firms that can afford sophisticated security systems, like Lockheed Martin. Tech departments often blame data breaches on the complexity of their networks and on limited financial resources. Of course, the risk could be better managed by training employees not to download malware onto their laptops and mobile devices.

But the Ponemon study and others like it support the notion that our current tort approach to these problems, a negligence regime, is inadequate. Negligence won’t address the significant number of leaks that will occur despite companies’ exercise of due care over personal data. Security breaches are an inevitable byproduct of collecting sensitive personal information in networked systems. No amount of due care will prevent a substantial amount of sensitive data from escaping into the hands of cyber criminals. Such data leaks constitute the predictable residual risks of what I’ve called “information reservoirs of danger.” As I’ve argued (and here too), negligence won’t efficiently manage the residual risks of hazardous databases. It would neither induce companies to change their activity level nor discourage marginal actors from collecting sensitive information, because such operators need not pay for the accident costs of their residual risk. The high levels of residual risk suggest treating cyber reservoirs as ultrahazardous activities (those with significant social utility and significant risk) that warrant strict liability. Recent Senate and House privacy bill proposals don’t go that far (or even get in the ballpark), as far as I can tell. I will be blogging about the impending Franken geolocation privacy bill that’s soon to drop, though for other reasons: it will have interesting cyber stalking provisions. More soon.


3 Responses

  1. Another great note on the inadequacy of the current legal and enforcement regime with respect to both privacy and data security. I have said in the past that “Data is the new Asbestos” … the only way to make sure that entities properly handle the information is a combination of regulation, oversight, and a private right of action. It simply will not be cleaned up by the current courses of action.

    I am hardly a proponent of the class action bar. But…for once for me…I think there is a real consumer protection benefit to turning the lawyers loose on this.

  2. Hi Danielle,

    Andrew Stewart and I have argued that a missing ingredient in what firms can do about security breaches is an understanding of what went wrong for others. For example, you mention Lockheed Martin. Do you know what their security measures were? Do you know which ones failed?

    Absent such knowledge, we’re unable to determine if “giving employees better training” will actually pay off. We don’t understand if blocking Facebook at work has an impact on breaches. Regarding the Ponemon Institute survey you cite, I frankly don’t believe it’s an accurate portrayal of the underlying reality it attempts to illuminate. New School co-blogger Russell Thomas and I have analyzed their other reports and found them seriously flawed. (I can send you links, but worry that ConOp’s anti-spam will eat my comment if I put them here.) Also, see Cormac Herley & Dinei Florencio’s “Sex, Lies and Cybercrime Surveys,” which I think is very relevant.

    So today, I’m opposed to strict liability, because as a security professional, I can’t say with great certainty what actions would really reduce my organization’s risk. I apply my best professional judgment, and am aware of the limits under which I form those judgments.

    I do believe that more disclosure, and thus more information, will help us better understand what’s going wrong and learn to better address it.


  3. Those are really helpful insights, Adam, and I will check out the cites. Thanks for reading and commenting. Same with Anthony. I appreciate the input.