Actualizing Digital Citizenship With Transparent TOS Policies: Facebook Style

3 Responses

  1. Woody says:

    Danielle, this is a great post. Thank you for bringing the Abuse Standards to our attention. I agree that having this information is very valuable for users. One of the largest problems with vague boilerplate terms of use is that it is almost impossible to know when you are in breach. For example, Facebook prohibits its users from taking “any action on Facebook that infringes or violates someone else’s rights.” What does that mean? Which rights? Civil rights? Rights under the Facebook Terms of Use? All of them? Thus, I agree that the release of these terms is a significant development.

    Do you think this release ties Facebook’s hands a bit with respect to enforcement of its terms? How much can these standards be used to help a court (in addition to users) interpret the meaning of the previously vague restriction on “hate speech?”

    While terms of use agreements often seem one-sided, they still purportedly bind both parties in a contract. If these standards are used to interpret the limits of what is meant by “hate speech,” then has Facebook lost some of its wiggle room with respect to enforcement of the agreement? I’m guessing not significantly given the vagueness of the rest of the terms, but I just throw it out there as food for thought and a possible disincentive for Facebook to be transparent with respect to its interpretation of its own contractual language.

  2. Chuck Cosson says:

    Danielle – great post, and thanks for an opportunity to comment; if you will, to continue the discussion that you, Chris, others, and I had a while back. I had a couple of thoughts, which I offer as my own (speaking only for myself and neither my current nor former employers).

    I agree with a great deal of what you say, including transparency by service providers. But I think there are important differences in the type of guidelines one develops for content moderators, and the type of guidelines one develops for users.

    Guidance for moderators needs to be granular and specific. Moderators work on large volumes of data and need very specific instructions so as to ensure consistency and comprehensiveness. For users, however, the same approach is often less effective. Detailed service rules can be confusing to well-meaning users and “lawyered up” – exploited for loopholes – by those with less benevolent intent. This could be particularly interesting, as a legal matter, if the policy is part of service terms and conditions.

    The same dilemma, BTW, occurs in other contexts as well. Chinese users seeking to communicate around the confines of official content restrictions have cleverly invented euphemistic terms like “grass mud horse” (in Mandarin, a near-homophone for a Chinese obscenity).

    Absolutely, there are huge differences between the promotion of genocide and the promotion of democracy, but the practical challenge of reducing the targeted content is much the same. And Chinese content policy has found it more effective to formulate very broad rules and then rely on community and “netizen” pressure for enforcement.

    The YouTube Community Guidelines are a specific example of how companies may choose to address this – they provide a short list of common-sense rules and add, whimsically, “don’t try to look for loopholes or try to lawyer your way around the guidelines.” The Community Guidelines put users on notice but don’t block YouTube from making nuanced decisions or responding to changes in social norms, and they reduce litigious exchanges with racist users.

    I fully agree with the aim of more mindful digital citizens – and my point is not an objection to transparency. Rather, it is an observation that – particularly on the Internet – more detailed rules and consequences may at times be less practical to carry out and less effective than we would wish.

  3. Danielle Citron says:

    Thanks to you both! Chuck, this is very helpful, as always, and I’m hoping that Chris and I can keep up this discussion with you as the Inter-Parliamentary Task Force on Cyber Hate moves forward. It’s also very helpful as I think about all of this for my book.

    Now, Woody, thanks so much – your comments are always insightful. I’d love to talk to you on and offline about where your question is going – I know it is Section 230. I think that the immunity is rock solid: those sorts of warnings/prohibitions were at the heart of the Prodigy case and what Congress responded to in passing the Good Samaritan statute. So I’m assuming that any contract claims against intermediaries related to TOS agreements are off the table. Let’s talk about that at PLSC. In that regard, I don’t think the release ties their hands in any legalistic sense, but it does force into the open the conversation about what hate speech means for FB, and it nudges them to address their process, which the policy does not cover. THANKS!