FAN 200 (First Amendment News) Jasmine McNealy, Newsworthiness, the First Amendment, and Platform Transparency
Jasmine McNealy is an assistant professor in the Department of Telecommunication, in the College of Journalism and Communications at the University of Florida, where she studies information, communication, and technology with a view toward influencing law and policy. Her research focuses on privacy, online media, and communities. She holds a PhD in Mass Communication with an emphasis in Media Law, and a J.D. from the University of Florida. Her latest article is “Spam and the First Amendment Redux: Free Speech Issues in State Regulation of Unsolicited Email,” Communication Law & Policy (2018).
The controversy attracting the most attention of late, one unrelated to government action, is the banning of Infowars founder and host Alex Jones from various social media sites, including Facebook, YouTube, and Vimeo. Jones, purveyor of all manner of racist, sexist, you-name-it conspiracy theories, has drawn ire for spreading a conspiracy theory about the parents of children and the teachers killed in the Sandy Hook mass shooting. He is currently being sued by a group of parents who assert that Jones defamed them by claiming that they and their children were crisis actors and not actual victims.
The Jones social media content cull, though some say belated, is interesting for sparking a larger discussion. In a decision met with outrage, Twitter, a site now notorious for making controversial decisions about the kinds of content it will allow, had decided not to ban Jones. He would be banned a few days later. Twitter CEO Jack Dorsey explained that Jones had not violated its rules against offensive content, a contention that has been challenged. But of more significance is the lack of definition of what actually is considered offensive content, not just for Twitter, but across the various social media sites.
Of course, Twitter and other social media sites are private organizations; therefore, claims that sites are violating freedom of expression by banning offensive speech are based less in law and more on, at most, ethical considerations. But social platforms play an increasingly significant role in how individuals seek, send, and receive information. In a study of American adults who get news from online sources, published in 2017 by the Pew Research Center, 53% of participants self-reported getting news from social media. Sixty-two percent reported getting news from search engines, which may lead to social sites. These numbers point to social media sources as playing an important role in the information that people encounter.
How much information people encounter, what kind, and in what form are all important for decision-making. Platform decisions about the content users see are an issue of concern as more platforms move to algorithmically generated timelines that curate what we see. Zeynep Tufekçi has written that algorithmic timeline curation disrupts the potential for users to choose for themselves the value of the content they encounter, also asserting that YouTube’s algorithm-based recommendation system could be “one of the most powerful radicalizing instruments of the 21st century,” for its recommendations of extreme content. Companies like YouTube offer little, if any, insight into how their algorithms work.
The decision by social platforms, algorithmic or not, about whether users are able to see posts and about the kinds of content acceptable for posting is a value judgment. Under a traditional rubric, offensive speech, presumably, would have little to no value and could, therefore, be either banned or hidden from other users. Platforms like Facebook and Twitter, however, have rejected offering a concrete definition of what they consider offensive, when said by whom, and in what context. Instead the platforms, though offering written statements as well as having their individual CEOs offer vague explanations, have left offensiveness open to interpretation.
A recent study from Caitlin Carlson and Hayley Rousselle at Seattle University testing Facebook’s offensive speech reporting mechanism found that though Facebook would remove some of the posts reported during their study, a significant number of racist, sexist, and otherwise offensive materials were allowed to remain visible, and that there was no discernible rationale for these content moderation decisions. Even after Facebook revealed, in April 2018, the community standards its content moderators use, investigative reports revealed that moderators have been told to temper their content removal efforts. So while a platform may reveal its objectionable content standards, in practice, offensiveness decisions are a black box, lacking transparency into how both human and algorithmic content moderation value judgments are made.
That an organization would make a judgment about the value of information is not novel. What we consider traditional news organizations have always made judgments about the value of information, and these gatekeeping decisions about what is newsworthy are many times bolstered by First Amendment jurisprudence. The Supreme Court has declined to enforce laws mandating that news organizations (outside of broadcast) publish certain information. In Miami Herald v. Tornillo, for instance, in which the newspaper argued that a Florida statute requiring it to publish candidate responses to criticism infringed on press freedom, the Court agreed, finding that such a requirement was an “intrusion on the function of editors.”
Of course, the judgment of newsworthiness by the press is found most often in cases against news organizations for invasion of privacy. The newsworthiness of information is a First Amendment-based defense against privacy actions seeking redress for the publication of information highly offensive to a reasonable person. In these cases, if the information is of legitimate public interest, the publisher will not be found liable for injury. And the courts have used many different tests for newsworthiness. A prominent newsworthiness test “leaves it to the press” to decide the bounds of what is of legitimate public interest. Perhaps the most common of the tests, used in Virgil v. Time and enshrined in the Restatement of Torts, considers the “customs and conventions of the community” in making a newsworthiness determination. For a news organization, this would mean considering the community in which it is centered. For social media, this could mean the community that it has created.
Therefore, while calls exist for policymakers and legislators to do something about the massive platforms that significantly influence the information individuals encounter, First Amendment jurisprudence demonstrates that such incursions would most likely violate the exercise of freedom of the press. Social media users in the U.S., then, will have to find an alternative way of persuading platforms to act on objectionable content. So far, public outcry is beginning to work, particularly when it targets commercial interests.