Advancing the Fight Against Cyber Hate with Greater Transparency and Clarity about Hate Speech Policies

Today, online intermediaries voluntarily seek to combat digital hatred, often addressing hate speech in their Terms of Service Agreements or Community Guidelines.  Those agreements and guidelines tend to include vague prohibitions of hate speech.  The terms of service for Yahoo!, for instance, require users of some services to refrain from generating “hateful or racially, ethnically or otherwise objectionable” content without saying more.  Intermediaries can advance the fight against digital hate with more transparency and clarity about the terms of, and harms to be prevented by, their hate speech policies, as well as the consequences of policy violations.  With more transparency and clarity, intermediaries can make behavioral expectations more understandable and users can more fully appreciate the significance of digital citizenship, see here, here, here, and here.  The better intermediaries and users understand why a particular policy prohibits a certain universe of speech, the more likely they are to implement, and adhere to, that policy in a way that achieves its objectives.

Before seeking to provide guidance on how intermediaries might do that, it is important to recognize that efforts to define hate speech raise at least two significant challenges.  First, many disagree over which, if any, of the harmful effects potentially generated by such speech are sufficiently serious to warrant action.  Second, controversy also remains about the universe of speech that is actually likely to trigger harms deemed important enough to avoid.  So, for example, even if an intermediary defines hate speech as that which tends to incite violence against targeted groups, how do we determine which speech has the propensity to do that?  Much of the difficulty lies in identifying the factors relevant to making such causal predictions.  In Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age (forthcoming BU Law Review 2011), Helen Norton and I don’t pretend that we can make hard choices easy and recognize that intermediaries’ choices among various options may turn on a variety of issues: their assessment of the relative costs of hate speech and its constraint; empirical predictions about what sort of speech is indeed likely to lead to what sorts of harms; the breadth of their business interests, available resources, and the like; and their sense of corporate social responsibility to foster digital citizenship.  Intermediaries’ choices on how to define hate speech and the harms that they seek to avoid — however difficult — can and should be made in a more principled and transparent way.

The Spectrum of Definitions Available to Intermediaries

Narrower Definitions of Hate Speech: More Tangible Harms

1.  Speech that threatens and incites violence

Intermediaries may define prohibited hate speech as that which threatens or encourages violence against individuals or groups.  Calls for violence effectuate a wholesale denial of digital citizenship.  The U.S. Supreme Court has provided guidance for this approach, finding that speech that constitutes a “true threat” or intentional incitement to imminent violence is entirely unprotected by the First Amendment and within the government’s power to regulate.  Whether certain speech is likely to incite imminent violence, or to lead reasonable people to fear violence, will vary with the content and context of the expression.  Key factors in making such evaluations include — but may not be limited to — the clarity with which the speech advocates violence and the specificity with which individuals are identified as potential targets.  For instance, the inclusion of a target’s personally identifying information can contribute to the conclusion that a reasonable person would find the expression to communicate a serious expression of intent to inflict bodily harm upon the target.

These sorts of factors can help intermediaries determine whether certain situations should be characterized as threats of, or incitements to, violence.  Posters on a Yahoo! bulletin board, for instance, listed names of specific Arab-Americans alongside their home addresses, telephone numbers, and the suggestion that they are “Islamic terrorists.”  There, the targeted individuals notified Yahoo!, which immediately took down the postings.  Neo-Nazi Hal Turner’s blog postings offer another illustration of speech that threatens or incites violence.  A jury convicted Turner in a criminal case based on his postings saying that Judges Frank Easterbrook, Richard Posner, and William Bauer “deserve to be killed,” along with the targets’ photographs, work locations, and a picture of their courthouse modified to show the locations of “anti-truck bomb barriers.”

Sometimes hate speakers urge violence against groups rather than specific individuals.  For instance, Turner’s website also urged readers to murder “illegal aliens”: “We’re going to have to start killing these people.  I advocate using extreme violence against illegal aliens.  Clean your guns.  Find out where the largest gathering of illegal aliens will be near you . . .  and then do what has to be done.”  Along these lines, some intermediaries may define hate speech as threats of violence against groups and individuals.  Beliefnet, a website devoted to providing information on a wide variety of topics related to faith and spirituality, offers a helpful definition of hate speech in this vein.

To be sure, definitional challenges remain even under a narrowly drafted policy that constrains only hate speech that threatens or incites violence against specific individuals or groups.  Under what circumstances would a reasonable person understand certain online speech — such as the use of certain cultural symbols, like nooses and burning crosses — to communicate a true, if implied, threat?  With respect to cross burning, the Supreme Court has observed that some symbols in certain contexts — but not in all contexts — effectively express frightening threats.  Timothy Zick has thoughtfully explored the use of context and cultural meaning to determine whether cross-burning communicates threats of violence or instead political protest, as has Alexander Tsesis.  Contextual inquiry is as inevitable as it is difficult under any definition of hate speech.  Focusing on the specific harms to be prevented can help us sharpen and justify our inquiry in a principled way.

2. Speech that intentionally inflicts severe emotional distress

Intermediaries might also define prohibited hate speech to include speech that intentionally inflicts severe emotional distress on its targets.  Although this inquiry too is inevitably context-specific, a body of tort law illuminates the factors that courts use in determining if speech amounts to intentional infliction of emotional distress.  As Benjamin Zipursky explains, “over decades and even centuries, courts recognized clusters of cases” constituting extreme and outrageous behavior outside the norms of decency.  These include behavior that is individually targeted, especially threatening or humiliating, repeated, or reliant on especially sensitive or outrageous material.  Targeted individuals cannot fulfill their potential as digital citizens if they find cyberspace an unsafe environment to express their views.

Recall Bonnie Jouhari’s experience with digital hate, see here.  There, an administrative law judge determined that the website operator intentionally inflicted emotional distress on Jouhari and her daughter through “a relentless campaign of domestic terrorism.”  The postings’ harms could have been mitigated had an intermediary—such as the Internet access provider hosting the site—removed them.  Unfortunately, the postings remained online long after Ms. Jouhari enlisted the help of the FBI and the state Department of Housing to pursue action against the website operator.

3.  Speech that harasses

Intermediaries might choose to define hate speech in terms of longstanding harassment principles, which permit government to regulate harassing speech at work or at school when such harassment is sufficiently severe or pervasive to undermine access to equal employment or educational opportunity.  Courts and enforcement agencies have interpreted statutorily prohibited harassment to include oral or written conduct that is sufficiently severe or pervasive to create a discriminatory educational or workplace environment.  Factors relevant to assessing whether verbal or written conduct meets this standard include “the frequency of the discriminatory conduct; its severity; whether it is physically threatening or humiliating, or a mere offensive utterance;” and whether it inflicts psychological harm.  In the educational context, for example, verbal or written conduct violates statutory prohibitions on discrimination by federally funded educational activities when the “harassment is so severe, pervasive, and objectively offensive that it can be said to deprive the victims of access to the educational opportunities or benefits provided by the school.”

More specifically, Bryn Mawr defines harassment to include “verbal behavior such as unwanted sexual comments, suggestions, jokes or pressure for sexual favors; nonverbal behavior such as suggestive looks or leering” and offers as examples “continuous and repeated sexual slurs or sexual innuendoes,” “offensive and repeated risqué jokes or kidding about sex or gender-specific traits,” and “repeated unsolicited propositions for dates and/or sexual relations.”  The College of William and Mary prohibits “[c]onduct that is sufficiently severe, persistent or pervasive enough so as to threaten an individual or limit the ability of an individual to work, study, or participate in the activities of the College” and defines such conduct to include “making unwanted obscene, abusive or repetitive telephone calls, telephone messages, electronic mail, instant messages using electronic mail programs, or similar communications with intent to harass.”  Although harassment in the employment and education contexts does not parallel that in cyberspace in a number of respects, Internet intermediaries remain free to consider these efforts when crafting their own policies.

Broader Definitions of Hate Speech: Less Tangible Harms

As private actors, intermediaries remain unconstrained by the Constitution and thus are legally free to respond to a wider universe of hate speech than that held to be unprotected by the First Amendment — such as hate speech that inflicts arguably less tangible, yet still substantial harms to digital citizenship.

1.  Speech that silences counter-speech

Intermediaries may define hate speech as including that which silences or devalues its targets’ counter-speech.  They might draw from private universities’ extensive experience in regulating speech of this type, since they—like Internet intermediaries—are unconstrained by the First Amendment yet for institutional reasons generally remain deeply attentive to free speech as well as antidiscrimination concerns.  Some private universities, for example, go beyond the anti-harassment requirements of Titles VI and IX in identifying a certain set of community norms to be protected from disruptive speech.  Such policies often emphasize a spirit of academic freedom that requires not only a commitment to free discourse, but also an understanding that certain expression can actually undermine that discourse.

Colgate University, for example, articulates its commitment to intellectual inquiry and debate by prohibiting “acts of bigotry” because they “are not part of legitimate academic inquiry.”  The university emphasizes the contextual nature of this inquiry, noting that “harassment has occurred if a reasonable person would have found the behavior offensive and his or her living, learning, or working environment would be impaired,” while reserving the right to discipline offensive conduct “that is inconsistent with community standards even if it does not rise to the level of harassment as defined by federal or state law.”

Other proposals would similarly permit private universities to punish slurs, insults, and epithets (normally protected by the First Amendment from regulation by public actors), but would otherwise allow speech that invites a response and rational discourse.  For example, Peter Byrne argues that access to free speech on campus “should be qualified by the intellectual values of academic discourse,” permitting universities to bar racial insults but not “rational but offensive propositions that can be disputed by argument and evidence.”  He argues that “racial insults have no status among discourse committed to truth”; they do not aim to set forth, improve, or critique any proposition.  Indeed, racial insults simply communicate irrational hatred designed to make the target feel less worthy.  Intermediaries might choose to define prohibited hate speech as that which shuts down, rather than facilitates, reasoned discourse—e.g., slurs, insults, and epithets.

2. Speech that exacerbates hatred or prejudice

An intermediary might choose to focus on speech that more broadly contributes to bigotry and prejudice by denigrating or defaming an entire group.  Advocates of such an approach often target inflammatory and virulent rhetoric. Jeremy Waldron, for example, distinguishes between “hateful” and “moderate” forms of a particular message, as well as between “attacks on a person and attacks on a position that they hold.” In so doing, he would prohibit speech that is both hateful and attacks the person (rather than the person’s position). He seeks to return to an understanding of group defamation’s harms as including visible signs that “group members may be subject to abuse, defamation, humiliation, discrimination, and violence.”

Under the title “Don’t be sexist, racist, or a hater,” Digg describes its hate speech policy as: “Would you talk to your mom or neighbor like that?  Digg defines hate speech as speech intended to degrade, intimidate, or incite violence or prejudicial action against members of a protected group. For instance, racist or sexist content may be considered hate speech.”  YouTube appears to take a similar definitional approach, explaining that it is “generally okay to criticize a nation, but it is not okay to make insulting generalizations about people of a particular nationality.”
