CCR Symposium: Legal Responses to Online Harassment

Danielle Citron’s article does a great job of reframing the question of online anonymous speech within its appropriate broader context. James has already posted about this, so I won’t reiterate his excellent points. The question it leaves us with – how the law might be adjusted to incorporate these concerns – is a tricky one, however. It is particularly challenging because anonymity both shields anonymous attackers and protects targeted minorities. Similarly, exposing anonymous harassers can both chill their speech and protect the speech of their targets (and their targets’ communities), who might otherwise be silenced by threats. I can think of three broad options for a legal response – leave the system as it is, make individual speakers easier to find, or increase liability for conduit websites. Each of these solutions carries its own risks, and none seems like a perfect response…

First, we could retain the system as we have it now. By this I mean that websites and most ISPs are not required to store identifying information about posters, and targets of online harassment must file John Doe subpoenas to uncover the identity of their attackers (assuming this is possible) before proceeding with a lawsuit. This system provides uncertain protection for the targets of online harassment, meaning that women and minority groups will continue to find themselves threatened and potentially muzzled online. As Citron notes, this environment can lead to very real structural effects, where not merely the targets, but everyone who identifies with the targets, may find the Internet an increasingly inhospitable place.

Further, it provides inconsistent protection for anonymous speech online. Those who know they are engaging in illegal behavior can take steps to ensure that their identity is virtually impossible to determine. If an egregious offender shields himself in this way, he will likely never be held liable. Thus, targets of online harassment can often uncover only those posters who didn’t think their speech was problematic in the first place, or who simply didn’t understand that they could be tracked. The worst violators go free. As more and more lawsuits demonstrate that posters who don’t take such precautions can be caught, the gap between those who are caught and those who escape will only grow. Already, John Doe lawsuits tend to uncover and punish the middling offenders rather than the most extreme. As a result, the current regime provides consistent protection neither for anonymous speech nor for the targets of online harassment.

Second, we could increase the ability to track posters online. If implemented correctly, this would address many of the harms of the current regime, but, I fear, it would add a few of its own. If all posters had their identities logged, then the burden would appropriately fall heaviest on the most egregious offenders. Unfortunately, increased identification on the Internet has a wide range of troubling implications. First, if those identities were exposed, it would hamper the ability of the very threatened minorities that Danielle discusses to use the Internet to express themselves. Second, verifiable IDs that follow web surfers from site to site would only increase corporate monitoring of Internet usage. The resulting databases could be used to invisibly shape user experiences and could be opened to government access.

Finally, it’s worth noting that implementing a truly compulsory Internet ID system may not be practical. Would it be a worldwide requirement? If so, who would impose it? If not, how would we deal with international users? Most politically palatable schemes would likely still have loopholes that technically savvy web surfers could exploit. In other words, this regime, in addition to raising serious concerns about the protection of anonymous speech, might not solve the underlying problem.

Because the previous two scenarios rely on holding posters liable, they are only as effective as the ability of websites or plaintiffs to find those posters. As a third solution, perhaps we should focus – as Citron recommends – on conduit liability: narrowing or reshaping the safe harbor provided by Section 230. Of course, placing too much additional burden on the shoulders of conduits could chill speech just as surely as exposing posters’ identities. This is the greatest risk this third solution poses: by shifting the burden back to the conduits, we might muzzle the very discussion we are trying to protect by shielding speakers’ identities.

We might be able to mitigate this danger by crafting a sufficiently narrow burden of care to lay on top of Section 230 immunity. Citron suggests merging individual liability with conduit liability, requiring websites to retain the IP addresses of their posters in order to keep their Section 230 immunity. This mitigates the concerns of the current system, but I worry that it could raise some of the same concerns as requiring surfers to maintain traceable IDs. It still poses the problem, for instance, of serious violators using anonymizers to shield their IP addresses and protect themselves from prosecution.

One alternative might be to look to the title of Section 230(c): “Protection for ‘Good Samaritan’ blocking and screening of offensive material . . . .” Section 230 could be amended to explicitly include a “Good Samaritan” burden of care for websites. This wouldn’t require websites to take down objectionable material, or to follow a notice-and-takedown style regime. Instead, it would impose a procedural requirement: websites would have to demonstrate that they had given good faith consideration to any request to take down information. If they failed this test, although they would not be considered “publishers,” they could still be held liable for user speech on their sites as conduits.

Good faith could be demonstrated by evidence of a considered response to a request to remove the posting. This would be a minimal burden on most websites. Indeed, a serious weakness of this alternative could prove to be that it is little more than a paper tiger – that websites could easily meet its pro forma requirements and go right on shielding the sort of abuse that Citron’s article details. At the very least, though, it would compel those sites that succeed, at least in part, because of the often harassing attacks they host – Juicy Campus and Auto Admit being two obvious examples – to shoulder some of the responsibility for the discussion itself. It would also serve as a normative signal that websites bear some responsibility for the content that they allow users to post, even if not as full publishers.

Each of the solutions that I mention raises significant concerns. I do think, however, that of the potential options, the final one – a “Good Samaritan” burden of care placed upon websites – could offer a possible first step toward providing some legal recourse for the very real civil rights violations that Citron discusses, while still preserving anonymous and public debate on the Internet. I’d be very interested to hear what everyone else thinks, or any alternative solutions that I haven’t mentioned…
