Sending Out an e-SOS

My colleague Duncan Hollis has a new article up on SSRN, An e-SOS for Cyberspace. In the article, Duncan argues that the “conventional response” to cyberthreats (e.g., hacking, e-espionage, cyberwar, and hacktivism) isn’t working. Though “cybercrime laws proscribe individuals from engaging in unwanted cyberactivities[, such laws fail because] anonymity is built into the very structure of the Internet. As a result, existing rules on cybercrime and cyberwar do little to deter. They may even create new problems, when attackers and victims assume different rules apply to the same conduct.”  Instead of traditional proscriptive approaches, Duncan proposes that “states adopt a duty to assist victims of the most severe cyberthreats. A duty to assist works by giving victims assistance to avoid or mitigate serious harms. At sea, anyone who hears a victim’s SOS must offer whatever assistance they reasonably can. An e-SOS would work in a similar way. It would require assistance for cyberthreat victims without requiring them to know who, if anyone, was threatening them.”

I read e-SOS in draft and found it fascinating, even though I have little intrinsic interest in international law or cybersecurity issues.  Duncan does a terrific job of storytelling – did you know that the CIA allegedly tampered with the computer control system of a Soviet gas pipeline in 1982, causing the largest non-nuclear explosion in history?  Or that the United States recently rescued North Korean sailors from pirates on receipt of an SOS?  The article is full of such nuggets. And I think the proposal is pretty clever, and borderline workable.  That’s high praise for a law review article.

Anyway, I advise that you download it before some cyberbully manages to hack SSRN and replace it with a trojan horse.  And then come back here, follow me after the jump, and enjoy a classic Police video.

9 Responses

  1. Matt Bodie says:

    Are you using the video to make a larger meta-point? I like the a-ha homage, but this is the classic video I remember:

  2. dave hoffman says:

    No larger point. Just idiocy by me in not checking what I was linking to.

  3. Orin Kerr says:

    Thanks for the tip, Dave. I don’t think this approach to deterring computer crimes works, as it seems to be based on assumptions about the physical world that don’t translate to the Internet. I wrote about this approach and its challenges in a short essay: Orin Kerr, “Virtual Crime, Virtual Deterrence: A Skeptical View of Self-Help, Architecture, and Civil Liability,” 1 Journal of Law, Economics & Policy 197-214 (2005).

  4. Duncan Hollis says:

    As the author, I obviously DO think the idea might work. That said, I do not think any of the assumptions about the physical world that Orin critiqued in his 2005 paper are implicated by the e-SOS idea that I am advocating. On the contrary, my paper is a response to one of those very assumptions (the ability to attribute responsibility), which makes it difficult for cybercrime law (or self-help or civil liability) to regulate and deter the most severe cyberthreats. An e-SOS, in contrast, works without the victim having to know who attacked them (or even whether they were attacked at all); the idea is that when really bad things happen due to a computer error, attack, or exploitation, victims can call for help and get it, whether that means added bandwidth, blocked traffic, or patched code. Existing cyberthreats are at such a level that cybercrime law and security measures are plainly inadequate. My e-SOS idea offers a general framework that CAN be tailored to cyberspace to supplement these existing responses to an increasingly hostile environment.

  5. dave hoffman says:

    Can you say a bit more about how the e-SOS assumes or incorporates notions of physical closeness?

    There is sort of an interesting question about the psychology of the SOS (generally). I suspect part of the reason the physical SOS works is that the legal duty counteracts the bystander effect: each listener is specifically charged with the responsibility of aiding the victim. The diffusion-of-responsibility problem is likely to be orders of magnitude more severe online. Unlike sailors, surfers on the web don’t really think of themselves as part of a particular community. So compliance with an e-SOS regime is a difficult problem. Is that what you meant?

  6. Orin Kerr says:

    To explain a bit about what I mean, the argument seems to rest on an analogy between a rescue in cyberspace and a rescue on the high seas — specifically, the case of helping a boat in distress that calls on people nearby to help.

    In that traditional case, the characteristics of the physical world define the duty and tell us whether it is desirable. The issues of who needs to help, and in what cases, and what they have to do, are relatively straightforward. The physical environment provides the answers: It tells us who needs to help (people physically nearby); in what cases (when the threat is serious, which physical clues tell us); and what they have to do (measured based on reasonableness in a physical setting, in which notions of reasonableness are well settled). The physical understandings give us an idea of what the duty is and whether it is desirable and in what circumstances.

    These same questions tend to break down — or at least become extremely complicated — in the digital environment. Take the question of what is a “severe” computer crime. How do you measure that? In the physical world, we know what is a major threat in many cases because we have clear warning signs of cause and effect. That’s particularly true in the traditional “duty to assist” cases. If a boat with 100 people aboard is sinking, we know the severity of the event: 100 lives could be lost if the boat sinks and the people drown. But computer crimes usually don’t give us those sorts of clues. For example, imagine I have a hacker in my network. Is that a severe problem? It’s hard to know. The hacker could be harmless or harmful, and I don’t know his intent unless I know who he is. But I normally won’t know who he is. How do you measure the severity of the event?

    The same goes for who is supposed to help. In the physical world, as in the case of a disabled or sinking boat, the answer is whoever is nearby. Physical proximity is key. But physical proximity is no longer a reliable guide to who can help in the case of a computer crime. Network crimes can come from anywhere and go anywhere; they can involve traffic routed through dozens of countries at once. The paper suggests that the duty could be limited to those in “the territorial jurisdiction(s) within which the threat lies.” But what is the territorial jurisdiction in which the threat lies? Is it where the known victim is? Is it the jurisdiction from which the IP addresses of the attacks seem to originate? And what do we do if the question of who can realistically help as a technological matter is no longer connected to who is physically nearby?

  7. Duncan Hollis says:

    I’ve posted some comments in reaction to Orin and Dave’s questions over at Opinio Juris — see

  8. AnonSecurityGuy says:

    I couldn’t download it from SSRN; SSRN doesn’t work for me, for some reason. (Maybe too much fancy Javascript technology.)

    I’m trying to imagine how this could work. How does this scale? If one victim sends out an e-SOS, do all 4 billion people in the world have to help that victim? If not, how do we tell who is obligated to help? What help are they obligated to provide, and what are the limits on the extent of their obligations? How does it scale when there are hundreds of millions of victims? Keep in mind that some people estimate that roughly 30% of PCs are infected with malware, spyware, or other unwanted software at any given time. That’s an awful lot of victims.

    As Orin says, there is no clear notion of proximity for electronic crime, so it’s not clear how the “SOS on the high seas” analogy transfers to electronic crime.

  9. Duncan Hollis says:

    Obviously, there’s a scaling issue; my paper thus specifically talks about the need to limit
    1) which threats would qualify for an e-SOS (i.e., those that are severe in terms of timing, scale, and indirect effects),
    2) who can invoke the e-SOS (i.e., whether to limit it to certain targets, like hydroelectric dams or hospitals; to certain actors, like nation states; or whether to allow some broader set of victims to appeal for help);
    3) who will bear the duty (i.e., just nation states, or private actors too; and the additional need to limit the set of duty-bearers in relation to victims, whether by jurisdictional ties; what I call technical proximity; or even tiering assistance in terms of defining first responders, second responders, etc.);
    4) how to call for help; and
    5) what assistance has to be provided (i.e., whether to require assistance in terms of effort, or result; whether to mandate precise technological help or to provide a more general standard, etc.).
    For my further reactions to the physical proximity issue, see my post at Opinio Juris —