Actualizing Digital Citizenship With Transparent TOS Policies: Facebook Style

In “Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age,” 91 B.U. L. Rev. 1435 (2011), Helen Norton and I offered moral and policy justifications in support of intermediaries that choose to engage in voluntary efforts to combat hate speech.  As we noted, many intermediaries, like Facebook, already choose to address online hatred in some way.  We urged intermediaries to think and speak more carefully about the harms they hope to forestall when developing hate speech policies, and we offered an array of definitions of hate speech to help them do so.  We argued for the adoption of a “transparency principle”: intermediaries can, and should, advance the fight against digital hate by clearly and specifically explaining to users the harms that their hate speech policies address as well as the consequences of policy violations.  With more transparency about the specific reasons for choosing to address digital hate, intermediaries can make behavioral expectations more understandable.  Without it, intermediaries will be less effective in conveying what it means to be a responsible user of their services.

Our call for transparency has moved an important step forward, and last night I learned how while discussing anonymity, privacy, and hate speech with CDT’s brilliant Kevin Bankston and Hogan’s privacy luminary Chris Wolf at an event sponsored by the Anti-Defamation League.  Kevin shared with us Facebook’s “Abuse Standards 6.2,” which was first leaked and then explicitly revised and released to the public, and which makes clear what the company treats as abuse standard violations.  Let me back up for a minute: Facebook’s Terms of Service (TOS) prohibit “hate speech,” an ambiguous term with broad and narrow meanings, as Helen and I explored in our article.  But Facebook, like so many intermediaries, didn’t explain to users what it meant when it said it prohibited hate speech: did that cover just explicit demeaning threats to traditionally subordinated groups or demeaning speech that approximates intentional infliction of emotional distress, or did it more broadly cover slurs and epithets and/or group defamation?  Facebook’s leaked “Operation Manual For Live Content Moderators” helpfully explains what it means by “hate content”:

- slurs or racial comments of any kind
- attacking based on protected category
- hate symbols, either out of context or in the context of hate phrases or support of hate groups
- showing support for organizations and people primarily known for violence
- depicting symbols primarily known for hate and violence, unless comments are clearly against them
- photos comparing two people (or an animal and a person that resembles that animal) side by side in a “versus photo”
- photo-shopped images showing the subject in a negative light
- images of drunk and unconscious people, or sleeping people with things drawn on their faces
- videos of street/bar/school yard fights, even if no valid match is found (school fight videos are only confirmed if the video has been posted to continue tormenting the person targeted in the video)

The manual goes on to note that “Hate symbols are confirmed if there’s no context OR if hate phrases are used” and “Humor overrules hate speech UNLESS slur words are present or the humor is not evident.”  That is a helpful guide for safety operators navigating content that reads more like humor than hate, and it recognizes some of the challenges operators surely face in assessing content.

Note too Facebook’s consistency on Holocaust denial: it is not prohibited in the U.S., only IP-blocked for countries that ban such speech.  And Facebook employees have been transparent about why.  As a wise Facebook employee explained (and I’m paraphrasing here): if people want to show their ignorance about the Holocaust, let them do so in front of their friends and colleagues (hence the significance of FB’s real-name policy).  Let their friends counter that speech and embarrass them for being so asinine.

The policy goes on to address bullying and harassment specifically, including barring attacks on anyone based on their status as a sexual assault or rape victim, as well as barring persistent contact with users without prior solicitation or continued contact after the other party has said they want no further contact (which sounds much like many criminal harassment laws, including Maryland’s).  It also bars “credible threats,” defined as including “credible threats or incitement of physical harm against anyone, credible indications of organizing acts of present or future violence,” which seems to cover groups like “Kill a Jew Day” (removed promptly by FB).

The policy also gives examples, another important step, and something we talked about last May at Stanford during a roundtable on our article with safety officers from major intermediaries (I think I can’t say who came, given the Chatham House-type rules of conversation).  See the examples on sexually explicit language and sexual solicitation; they are incredibly helpful and, I think, incredibly important for tackling cyber gender harassment.

As Kevin said, and Chris and I enthusiastically agreed, this memo is significant.  Companies should follow FB’s lead.  Whether you agree or disagree with these definitions, users now know what FB means by hate speech, at least far more than they did before.  And users can debate it and tell FB if they think the policy is wanting and why.  FB can take those conversations into consideration; it certainly has in other instances when users expressed their displeasure about moves FB was making.  Now, let me be a demanding user: I want to know what all of this means in practice.  Does prohibited content get removed, or passed along for further discussion?  Do users get the chance to take down violating content first?  Do they get notice?  Users need to know what happens when they violate the TOS.  That too helps users understand their rights and responsibilities as digital citizens.  In any event, I’m hoping that this encourages FB to release future iterations of its policy to users voluntarily and that it encourages fellow intermediaries to do the same.  Bravo to Facebook.

3 Responses

  1. Woody says:

    Danielle, this is a great post. Thank you for bringing the Abuse Standards to our attention. I agree that having this information is very valuable for users. One of the largest problems with vague boilerplate terms of use is that it is almost impossible to know when you are in breach. For example, Facebook prohibits its users from taking “any action on Facebook that infringes or violates someone else’s rights.” What does that mean? Which rights? Civil rights? Rights under the Facebook Terms of Use? All of them? Thus, I agree that the release of these terms is a significant development.

    Do you think this release ties Facebook’s hands a bit with respect to enforcement of its terms? How much can these standards be used to help a court (in addition to users) interpret the meaning of the previously vague restriction on “hate speech”?

    While terms of use agreements often seem one-sided, they still purportedly bind both parties in a contract. If these standards are used to interpret the limits of what is meant by “hate speech,” then has Facebook lost some of its wiggle room with respect to enforcement of the agreement? I’m guessing not significantly given the vagueness of the rest of the terms, but I just throw it out there as food for thought and a possible disincentive for Facebook to be transparent with respect to its interpretation of its own contractual language.

  2. Chuck Cosson says:

    Danielle, great post, and thanks for an opportunity to comment and, if you will, to continue the discussion that you, Chris, others, and I had a while back. I had a couple of thoughts, which I offer as my own (speaking only for myself and for neither my current nor my former employers).

    I agree with a great deal of what you say, including the value of transparency by service providers. But I think there are important differences between the type of guidelines one develops for content moderators and the type of guidelines one develops for users.

    Guidance for moderators needs to be granular and specific. Moderators work on large volumes of data and need very specific instructions to ensure consistency and comprehensiveness. For users, however, the same approach is often less effective. Detailed service rules can confuse well-meaning users and can be “lawyered up” (exploited for loopholes) by those with less benevolent intent. This could be particularly interesting, as a legal matter, if the policy is part of the service’s terms and conditions.

    The same dilemma, BTW, occurs in other contexts as well. Chinese users seeking to communicate around the confines of official content restrictions have cleverly invented euphemistic terms like “grass mud horse” (in Mandarin, a near-homophone for a Chinese obscenity).

    There are, absolutely, huge differences between the promotion of genocide and the promotion of democracy, but the practical challenge of reducing the targeted content is much the same. And Chinese content policy has found it more effective to formulate very broad rules and then rely on community and “netizen” pressure for enforcement.

    The YouTube Community Guidelines are a specific example of how companies may choose to address this: they provide a short list of common-sense rules and add, whimsically, “don’t try to look for loopholes or try to lawyer your way around the guidelines.” The Community Guidelines put users on notice but don’t block YouTube from making nuanced decisions or responding to changes in social norms, and they reduce litigious exchanges with racist users.

    I fully agree with the aim of more mindful digital citizens, and my point is not an objection to transparency. Rather, it is an observation that, particularly on the Internet, more detailed rules and consequences may at times be less practical to carry out and less effective than we would wish.

  3. Danielle Citron says:

    Thanks to you both! Chuck, this is very helpful, as always, and I’m hoping that Chris and I can keep up this discussion with you as the Inter-Parliamentary Task Force on Cyber Hate moves forward. It is also very helpful as I think about all of this for my book. Woody, thanks so much; your comments are always insightful. I’d love to talk with you on and offline about where your question is going (I know it is Section 230). I think that the immunity is rock solid: those sorts of warnings/prohibitions were at the heart of the Prodigy case and are what Congress responded to in passing the Good Samaritan statute. So I’m assuming that any contract claims against intermediaries related to TOS agreements are off the table. Let’s talk about that at PLSC. In that regard, I don’t think the release ties Facebook’s hands in any legalistic sense. But it does force into the open the conversation about what hate speech means for FB, and it nudges the company to address its process, which the policy does not cover. THANKS!