Category: Cyberlaw


FAN 200 (First Amendment News) Special 200th Issue: 15 Women & Their Views on Free Speech

To commemorate the 200th issue of First Amendment News, I invited women from various professions (lawyers, law professors, and a journalism professor) to draft original essays on any aspect of free speech law. Why only women? Fair question. My answer has to do with the fact, as I perceive it, that by and large those who receive the most attention in the First Amendment arena are men. I leave it to others to explain if and why that might be so — some of the contributors to this symposium do just that. However that may be, of this I can say with a good measure of certainty: the essays that follow are diverse, thoughtful, sometimes provocative, original, and often mind-opening. I extend my thanks to the 15 contributors for their symposium essays and to Kellye Testy for kindly agreeing to write the Foreword.

→ Related: 38 Women Who Argued First Amendment Free Expression Cases in the Supreme Court: 1880-2018 (Aug. 7, 2018)

→ With this issue First Amendment News ends its long and rewarding affiliation with Concurring Opinions. I want to thank my colleagues here for their valuable and generous support. I especially want to thank Professor Dan Solove who years ago dared to invite me to be a part of his team. Happily, Dan and his colleagues have agreed to allow me to continue to contribute to Concurring Opinions.

Starting sometime in October, FAN’s new host will be the Foundation for Individual Rights in Education (FIRE). Among other things, you can expect more news along with a variety of digital improvements. From time to time, FAN will also host or co-host live and online symposia and may even conduct a study or two. One thing will, however, remain constant: my commitment to being a fair broker of content. So stay tuned — some of the best is yet to come. — RKLC    

_______Symposium_______

Foreword

Kellye Testy, “Prior Restraint: Women’s Voices and the First Amendment”

15 Contributors  

Jane Bambauer, “Diagnosing Donald Trump: Professional Speech in Disorder”

Mary Anne Franks, “The Free Speech Fraternity”

Sarah C. Haan, “Facebook and the Identity Business”

Laura Handman & Lisa Zycherman, “Retaliatory RICO: A Corporate Assault on Speech”

Marjorie Heins, “On ‘Absolutism’ and ‘Frontierism’”

Margot Kaminski, “The First Amendment and Data Privacy: Between Reed and a Hard Place”

Lyrissa Lidsky, “Libel, Lies, and Conspiracy Theories”

Jasmine McNealy, “Newsworthiness, the First Amendment, and Platform Transparency”

Helen Norton, “Taking Listeners’ First Amendment Interests Seriously”

Tamara Piety, “A Constitutional Right to Lie? Again?: National Institute of Family and Life Advocates d/b/a NIFLA v. Becerra”

Ruthann Robson, “The Cyber Company Town”

Kelli Sager & Selina MacLaren, “First Amendment Rights of Access”

Sonja West, “President Trump and the Press Clause: A Cautionary Tale”


FAN 200 (First Amendment News) Jane Bambauer, “Diagnosing Donald Trump: Professional Speech in Disorder”

Jane Bambauer is a Professor of Law at the University of Arizona. Professor Bambauer’s research assesses the social costs and benefits of Big Data, and questions the wisdom of many well-intentioned privacy laws. Her articles have appeared in the Stanford Law Review, the Michigan Law Review, the California Law Review, and the Journal of Empirical Legal Studies. Professor Bambauer’s own data-driven research explores biased judgment, legal education, and legal careers. One of her recent publications is Information Libertarianism, 105 Cal. L. Rev. 335 (2017) (with Derek E. Bambauer).

_________________

Professor Jane Bambauer

It’s obvious to anybody with a passing familiarity with Narcissistic Personality Disorder that President Trump has it. Yet psychiatrists and psychologists have been constrained to some extent by the “Goldwater Rule,” leaving Omarosa to make the most forceful public statements to date about Trump’s mental health.

Section 7.3 of the American Psychiatric Association code of ethics states the rule as follows:

On occasion psychiatrists are asked for an opinion about an individual who is in the light of public attention or who has disclosed information about himself/herself through public media. In such circumstances, a psychiatrist may share with the public his or her expertise about psychiatric issues in general. However, it is unethical for a psychiatrist to offer a professional opinion unless he or she has conducted an examination and has been granted proper authorization for such a statement.

Section 7.1 also advises psychiatrists to refrain from making public statements with authoritative conviction, admonishing them not to use the phrase “psychiatrists know that…”

The American Psychological Association’s code has similar language, and shortly after Donald Trump’s inauguration, that association reminded its members that

When a psychiatrist comments about the behavior, symptoms, diagnosis, etc. of a public figure without consent, that psychiatrist has violated the principle that psychiatric evaluations be conducted with consent or authorization.

Consent is the weasel word. The concept is perfectly applicable to invasive procedures and other direct interventions, but when it is used to constrain other people from talking to each other, it has been stretched beyond its use.

Jeannie Suk Gersen has written an excellent summary of the Goldwater Rule’s origins and constitutional infirmities. As she explained, the AMA’s and APA’s guidelines are explicitly transcribed into some state licensing laws. Other states could very well investigate complaints based on violations of the professional codes of ethics, so the threat of state action is real.

These constraints led some members of the psych professions to propose bold work-arounds. Bandy Lee, organizer of the “Duty to Warn” conference at Yale on the topic of Donald Trump’s mental illness, claimed that members of the profession can and should exercise their duty to warn about Trump’s “dangerousness” without diagnosing him. This proposal is based on a flawed understanding of the Tarasoff doctrine (not to mention a dubious assumption that psychologists’ assessment of a person’s dangerousness is unrelated to an opinion on “behavior, symptoms, diagnosis, etc.”). One contributor to the symposium even suggested that psychologists and psychiatrists should exercise their danger-based powers to detain Trump against his will. Presumably, if a profession cannot even comment on the mental health of a president within the bounds of ethics, it also cannot initiate a coup by overturning a fair, democratic election.

The free speech issues are blatant enough, but the case law on “professional speech” has enough incoherence to make the Goldwater Rule plausibly defensible. Consider Pickup v. Brown, a Ninth Circuit case that decided (wrongly, in my opinion) that a law banning psychologists from practicing “Sexual Orientation Change Efforts” on youth clients was not a speech regulation. Never mind whether the law could pass the requisite level of scrutiny; the court said scrutiny was unnecessary.

The court divided therapists’ communications into two buckets:

(1) doctor-patient communications about medical treatment receive substantial First Amendment protection, but the government has more leeway to regulate the conduct necessary to administering treatment itself;

(2) psychotherapists are not entitled to special First Amendment protection merely because the mechanism used to deliver mental health treatment is the spoken word.

Get it? Dialectical therapy is a “treatment,” not speech.

With the right evidence, I suspect SOCE bans could survive scrutiny, but Pickup is a dangerous case for permitting a restriction on communications to fly under the radar of constitutional review by asserting that some communications get a technical exemption. “Diagnosis,” in the case of the Goldwater Rule, is a good candidate for the same treatment as “treatment.” Indeed, much of the Food & Drug Administration’s authority over information technologies depends on it.

Chief Justice John Roberts

Under Chief Justice John Roberts, the Court has done good work shaping free speech doctrine so that it looks beyond labels. The Court has applied scrutiny to regulations that target communication and influence even when the text of the law avoids using obvious references to speech. Campaign finance laws are a good (if controversial) example—those laws are superficially about money and donations, but the purpose and underlying theory of campaign finance reform is entirely related to managing communications to voters. But the Court has undercut its work by overextending free speech coverage in Janus. That case involved labor laws that compelled all public employees who are represented by unions to pay union fees. In both form and substance, the law addressed an economic free-rider problem, not a communication problem. But the Court treated the law as a regulation on speech because labor contracts require negotiation, and negotiation requires talking. Janus will be a low point in the Roberts Court’s free speech legacy because it provides ammunition to the argument, mostly specious, that since everything is expressive, the First Amendment should be limited to X (to political speech, to vulnerable speakers, to vulnerable listeners, etc.).

(credit: The New York Times)

There could be more fodder from the regulation of products, too. Free speech challenges to bans on readily executable code for 3-D printed guns should lose. Computer code is made up of words, yes, and those words can communicate an idea to other people who read the same programming language. But every object and action has embodied information. A traditionally manufactured gun can also teach. It could be put on display with labels showing how it was made. But gun bans that pass Second Amendment scrutiny could still treat the display gun as contraband. Likewise, code that will be used principally to make guns rather than to engage in the marketplace of ideas can be regulated the same way physical guns are, as long as it is regulated the same way for the same reasons.

This quick survey leaves a lot of nuance out, but to the extent we can agree that the First Amendment applies when communications are targeted by state action, the Goldwater Rule (as incorporated in state licensing laws) should trigger First Amendment scrutiny.

Moreover, the Goldwater Rule should not survive this scrutiny, even at an intermediate level for professional speech. While diagnosing third parties who are not in a direct relationship with a psychologist or psychiatrist could be error-prone in some circumstances, there are plenty of circumstances in which psychiatrists get enough information from a third party’s self-disclosure. Donald Trump is one such case, but he’s not even the archetype. Many non-famous people leave evidence of their delusion and mischief in emails, social media posts, and voicemails. For disorders in the “dark triad,” these may be as useful as or more useful than an in-person session. A patient who is in close contact with a malignant narcissist is better off getting counsel from a psychologist or psychiatrist who does not have to pussy-foot around a clear analysis and remote diagnosis of a client’s septic tormentor.

So, psychologists could successfully challenge any government attempt to punish them for diagnosing Donald Trump. Does that mean they should?

(credit: The Blue Diamond Gallery)

Yes, I think so, but not for the reasons that participants of the “Duty to Warn” conference thought. Their motivation to diagnose Trump was to warn the republic that the president is unstable. But Trump was conspicuously unstable during the election, too. Right now, national politics are controlled by Republicans, and Republican politics are ruled by Trump supporters. And Trump supporters still love their leader because of, not despite, his destructiveness and rancor. You know the fable of the frog and the scorpion? The psychologists at the Duty to Warn conference want to yell, “HE’S A SCORPION! HE’S A SCORPION!” but Trump voters will respond, with a snicker, “damn right; he’s OUR scorpion!” And then they will adjust their little MAGA hats on their little scorpion heads. (By the way, the populist left could develop its own collective narcissism. As with Trump supporters, its prevailing orthodoxy revolves around oppression by power hierarchies, both real and imagined. And they, too, can be played by Putin.)

Instead of diagnosing Trump to issue a warning, psychiatrists and psychologists should do it for another reason. They should do it to help advise people who are in Trump’s sphere of influence. As Vladimir Putin seems to understand, grandiose narcissists can be manipulated because they are so single-minded and exhausted. I suspect the aides who sprinkle Trump’s briefing with his name have gotten some coaching. Indeed, while the popular media criticizes the president for spending his time and energy on cartoonishly grand missions like the Space Force and cartoonishly frivolous things like Twitter flame wars with celebrities, these are exactly the sorts of things that we should hope take up his presidency. They are the presidential equivalent of giving a toddler some pots and pans to bang on.


FAN 200 (First Amendment News) Ruthann Robson, “The Cyber Company Town”

Ruthann Robson is a Professor of Law & University Distinguished Professor at CUNY School of Law. She is the author of Dressing Constitutionally: Hierarchy, Sexuality, and Democracy (2013), as well as the books Sappho Goes to Law School (1998); Gay Men, Lesbians, and the Law (1996); and Lesbian (Out)Law: Survival Under the Rule of Law (1992), and the editor of the three-volume set International Library of Essays in Sexuality & Law (2011). She is a frequent commentator on constitutional and sexuality issues and the co-editor of the Constitutional Law Professors Blog.

_________________

Professor Ruthann Robson

The constitutional chasm between public and private can quickly become a murky swamp when free speech claims arise.  Perhaps this lack of clarity is attributable to the First Amendment’s status as a political and societal concept as well as a legal one, or perhaps it is because the always problematical public-private divide has increasingly been eroded in our era of “public-private partnerships” and “privatization.”  When the free speech involved occurs on social media — which operates currently as our corporate-owned town square — it can seem like a quagmire, especially if the participants are government officials.

Before considering three contemporary examples, a look back at the landmark case of Marsh v. Alabama (1946) is instructive. Marsh, known as the “company town case,” involved Grace Marsh, arrested for trespassing on the Gulf Shipbuilding Corporation’s property, which was the “town” of Chickasaw, Alabama.  Marsh, a Jehovah’s Witness, had stood on the sidewalk near the post office offering literature; when asked to leave she declined. While the Court is somewhat unclear which First Amendment freedom is at issue — speech, press, or religion — the Court’s majority is definite that such an infringement would not be constitutional if committed by a state or municipality. The Court decides that the fact that the corporation owns title to the land is essentially a technicality which should not prevail over the reality that the company town functions like any other town.  This finding of sufficient state action to make the Constitution applicable is supported by the Court’s conclusion on the merits. Justice Black’s opinion for the Court states that “many people” in the United States live in “company-owned towns” and these people, just like others, “must make decisions which affect the welfare of community and nation,” and so must be informed.

In 2018, social media is accessed by more than 70% of the United States population and has largely replaced leaflets distributed on the corner as a source of information that will be used in making “decisions which affect the welfare of community and nation.” According to the Pew Research Center, the most popular sites include YouTube (73%) and Facebook (68%), as well as Instagram (35%), Pinterest (29%), Snapchat (27%), LinkedIn (25%), Twitter (24%), and WhatsApp (22%). In the United States Supreme Court’s unanimous decision last year in Packingham v. North Carolina, the Court found a state statute prohibiting registered sex offenders from accessing social networking sites violated the First Amendment. Justice Anthony Kennedy, writing for the Court, stated that “we now may be coming to the realization that the Cyber Age is a revolution of historic proportions,” but we do not yet appreciate the “full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be.” Concurring, Justice Alito found it important to add that the entirety of the internet or even social media sites are “the 21st century equivalent of public streets and parks.” In Packingham, the state action threshold was easily crossed: there was a state statute with criminal penalties. The more vexing situations occur when these cyber-“streets and parks” are owned and operated by private companies.

Alex Jones (credit: Political Dig)

There is a factional (and presidentially approved) argument that these companies practice “censorship” of “conservative” voices. Recent controversies surrounding “conspiracy theorist” Alex Jones and his platform “InfoWars” are illustrative. YouTube and Facebook removed Jones’ content and terminated his accounts, while Twitter penalized Jones by curtailing some of his “privileges.” While the companies made decisions based on interpretations of their “terms of service,” arguments about whether or not the companies were justified often veered into constitutional doctrine, including whether falsehoods, hate speech, and incitements were protected. When the First Amendment was specifically cited, this provoked a rejoinder of the state action doctrine based on the distinction between the public and private. This in turn was rebutted by the observation that Facebook, for example, is a “public company,” evincing a confusion wrought by the state action doctrine (as well as the law of corporations). But even if one recognized that the First Amendment did not apply to the social media companies because they were private actors, there was an argument it should.

More sophisticated legal thinkers, including law students, would be able to frame arguments extending the Marsh company-town holding. Yet in Marsh, the application of the First Amendment served to protect Grace Marsh and arguably the community living in Chickasaw, while allowing well-funded conspiracy theorists to not only access but potentially overrun our cyber town squares might result in less “information” and “free speech” given our current First Amendment doctrines that presume a level playing field in the “marketplace of ideas.”

Closer and even more doctrinally difficult situations occur when a government official uses social media and the platform functions. Consider a local government official using the functions of Facebook, including the ability to remove comments to one’s own post and to block a person. Last year, in Davison v. Loudoun County Board of Supervisors, a United States District Judge in the Eastern District of Virginia found that these acts constituted sufficient state action and violated the First Amendment. The judge analyzed the elected official’s uses of the Facebook page, noted that she had government staff who assisted with the page, and also had a separate personal Facebook page. Although the politician could “take” the page with her when she left office, the judge concluded she “used it as a tool of governance” and the page reflected her efforts to “swathe” it with “the trappings of her office.” The judge found that this county board supervisor (although not the entire Board of Supervisors) was subject to the First Amendment and had violated it.

Finally, there is the President and his notorious Twitter account and statements. The Department of Justice, representing the President, has appealed a final order finding that the state action requirement was satisfied and that the President did violate the First Amendment when blocking users from viewing or responding to his tweets. In her extensive opinion in Knight First Amendment Institute v. Trump, United States District Judge Naomi Reice Buchwald rejected the argument that blocking was not state action because the blocking functionality was afforded every user. She also rejected the argument that because the Twitter account was begun in 2009 it was not governmental now. Relying on stipulations of the parties, the judge reasoned that together with federal employee Daniel Scavino, the “White House Social Media Director,” “President Trump uses @realDonaldTrump, often multiple times a day, to announce, describe, and defend his policies; to promote his Administration’s legislative agenda; to announce official decisions; to engage with foreign political leaders; to publicize state visits; to challenge media organizations whose coverage of his Administration he believes to be unfair; and for other statements, including on occasion statements unrelated to official government business. President Trump sometimes uses the account to announce matters related to official government business before those matters are announced to the public through other official channels.”  Having cleared the hurdle of state action, the judge found a First Amendment violation, importantly observing that the “audience for a reply extends more broadly than the sender of the tweet being replied to, and blocking restricts the ability of a blocked user to speak to that audience. While the right to speak and the right to be heard may be functionally identical if the speech is directed at only one listener, they are not when there is more than one.”

When government officials, whether the President or a local member of a county board, suppress dissident voices in the virtual public square, there is not only “viewpoint discrimination” under First Amendment doctrine, but also an attempt to manufacture consent so dangerous for democracy.  Their acts should clearly constitute state action and they should be held to the rigors of the First Amendment.  Less clear is whether the multi-billion-dollar companies that presently host our public squares should be subject to constitutional constraints in the same manner as the “company towns” of the last century, especially if the consequences of doing so afford us less free speech and make us less informed as we navigate our cyber sidewalks.


FAN 200 (First Amendment News) Jasmine McNealy, “Newsworthiness, the First Amendment, and Platform Transparency”

Jasmine McNealy is an assistant professor in the Department of Telecommunication, in the College of Journalism and Communications at the University of Florida, where she studies information, communication, and technology with a view toward influencing law and policy. Her research focuses on privacy, online media, and communities. She holds a PhD in Mass Communication with an emphasis in Media Law, and a J.D. from the University of Florida. Her latest article is “Spam and the First Amendment Redux: Free Speech Issues in State Regulation of Unsolicited Email,” Communication Law & Policy (2018).

_________________

Professor Jasmine McNealy

Of late, the controversy drawing the most attention, though one unrelated to the government, is the banning of Infowars founder and host Alex Jones from various social media sites, including Facebook, YouTube, and Vimeo. Jones, purveyor of all manner of racist, sexist, you-name-it conspiracy theories, has drawn ire for spreading a conspiracy theory about the parents of children and teachers killed in the Sandy Hook mass shooting. He is currently being sued by a group of parents who assert that Jones defamed them by claiming that they and their children were crisis actors and not actual victims.

The Jones social media content cull, though some say belated, is interesting for sparking a larger discussion. In a decision met with outrage, Twitter, a site now notorious for making controversial decisions about the kinds of content it will allow, had decided not to ban Jones. He would be banned a few days later. Twitter CEO Jack Dorsey explained that Jones had not violated its rules against offensive content, a contention that has been challenged. But of more significance is the lack of definition of what actually is considered offensive content, not just for Twitter, but across the various social media sites.

Alex Jones (credit: Political Dig)

Of course, Twitter and other social media sites are private organizations, therefore claims that sites are violating freedom of expression by banning offensive speech are based less in law and more on, at most, ethical considerations. But social platforms play an increasingly significant role in how individuals seek, send, and receive information. In a study published in 2017 by Pew Research Center of American adults who get news from online sources, 53% of participants self-reported getting news from social media. Sixty-two percent reported getting news from search engines, which may lead to social sites. These numbers point to social media sources as playing an important role in the information that people encounter.

What information people encounter, how they encounter it, and in what volume are all important for decision-making. Platform decisions about the content users see are an issue of concern as more platforms move to algorithmically generated timelines that curate what we see. Zeynep Tufekçi has written that algorithmic timeline curation disrupts the potential for users to choose for themselves the value of the content they encounter, also asserting that YouTube’s algorithm-based recommendation system could be “one of the most powerful radicalizing instruments of the 21st century,” for its recommendations of extreme content. Companies like YouTube offer little, if any, insight into how their algorithms work.

The decision by social platforms – algorithmically or not – about whether users are able to see posts and the kinds of content acceptable for posting is a value judgment. Under a traditional rubric, offensive speech, presumably, would have little to no value and could, therefore, be either banned or hidden from other users. But platforms like Facebook and Twitter have rejected offering a concrete definition of what they consider offensive, when said by whom, and in what context. Instead the platforms, though offering written statements as well as having their individual CEOs offer vague explanations, have left offensiveness open to interpretation.

A recent study by Caitlin Carlson and Hayley Rousselle at Seattle University testing Facebook’s offensive speech reporting mechanism found that though Facebook would remove some of the posts reported during their study, a significant number of racist, sexist, and otherwise offensive materials were allowed to remain visible, and that there was no discernible rationale for these content moderation decisions. Even after Facebook revealed the community standards its content moderators use in April 2018, investigative reports revealed that moderators have been told to temper their content removal efforts. So while a platform may reveal its objectionable content standards, in practice, offensiveness decisions are a black box, lacking transparency into how both human and algorithmic content moderation value judgments are made.

That an organization would make a judgment about the value of information is not novel. What we consider traditional news organizations have always made judgments about the value of information, and these gatekeeping decisions about what is newsworthy are many times bolstered by First Amendment jurisprudence. The Supreme Court has often declined to enforce laws mandating that news organizations (outside of broadcast) publish certain information. In Miami Herald v. Tornillo, for instance, in which the newspaper argued that a Florida statute requiring it to publish candidate responses to criticism infringed on press freedom, the Court agreed, finding that such a requirement was an “intrusion on the function of editors.”

(credit: Heartland Newsfeed)

Of course, the judgment of newsworthiness by the press is found most often in cases against news organizations for invasion of privacy. The newsworthiness of information is a First Amendment-based defense against privacy actions seeking redress for the publication of information highly offensive to a reasonable person. In these cases, if the information is of a legitimate public interest, the publisher will not be found liable for injury. And the courts have used many different tests for newsworthiness. A prominent newsworthiness test “leaves it to the press” to decide the bounds of what is of a legitimate public interest. Perhaps the most common of the tests, used in Virgil v. Time and enshrined in the Restatement of Torts, considers the “customs and conventions of the community” for a newsworthiness determination. For a news organization this would be a consideration of the community in which it is centered. For social media this could mean the community that it has created.

Therefore, while calls exist for policymakers and legislators to do something about the massive platforms that significantly influence the information that individuals encounter, First Amendment jurisprudence demonstrates that such incursions would most likely violate the exercise of freedom of the press. Social media users in the U.S., then, will have to find an alternative way of persuading platforms to act on objectionable content. So far, public outcry is beginning to work, particularly when it targets commercial interests.


FAN 200 (First Amendment News) Margot E. Kaminski, “The First Amendment and Data Privacy: Between Reed and a Hard Place”

Margot E. Kaminski is an associate professor of law at the University of Colorado Law School. She specializes in the law of new technologies, focusing on information governance, privacy, and freedom of expression. Her forthcoming work on transparency and accountability in the EU’s General Data Protection Regulation (GDPR) stems from her recent Fulbright-Schuman Innovation Grant in the Netherlands and Italy.

________________________________

Professor Margot Kaminski

The Supreme Court’s recent Fourth Amendment cases show a strong awareness that privacy can implicate First Amendment values. In June 2018 in Carpenter v. United States, a case addressing warrantless location tracking through cell phone records, the majority noted that a lack of privacy can reveal (and presumably chill) “familial, political, professional, religious, and sexual associations.” In Riley v. California, a 2014 Fourth Amendment case addressing cell-phone searches, the majority recognized that while “[m]ost people cannot lug around every piece of mail they have received for the past several months, every picture they have taken, or every book or article they have read,” a cell phone can store all of these things. With these comments, the Court observed that free expression often relies on privacy, and implied that absent privacy protections, people may conform in their choice of reading material, their political affiliations, and ultimately, their speech. In other words, privacy protections often also protect First Amendment rights.

But at the same time, the Court’s recent First Amendment decisions have created additional obstacles for those who seek to draft an American data privacy law.

The United States famously does not have omnibus federal privacy protection. Instead, U.S. privacy law is a patchwork of sectoral protections (like protections for video records, consumer protection at the FTC, state privacy torts, and state AG enforcement). Legislators reading Carpenter may conclude that a number of Justices in that case (including Justice Samuel Alito, who explicitly calls for privacy lawmaking in his dissent) understand the need for omnibus data privacy law. But even as the Court in Carpenter seems to point to the need for privacy legislation, its First Amendment decisions in Reed v. Gilbert and NIFLA v. Becerra threaten to tie legislators’ hands.

Reed treats content-based regulation with suspicion; Becerra does the same with disclosure requirements. In Reed, which addressed a town’s rules for the placement of signs, the Court held that “regulation of speech is content based if a law applies to particular speech because of the topic discussed or the idea or message expressed.” All content-based regulation is subject to strict scrutiny. Thus, a regulatory scheme that treated Political Signs differently from Temporary Directional Signs was content-based, subject to strict scrutiny, and, because it failed strict scrutiny, unconstitutional.

Becerra, decided this year, limits legislators’ ability to require truthful disclosures. The Court preliminarily enjoined California’s disclosure requirements for crisis pregnancy centers—centers that often pretend to provide abortion services but in practice discourage women from getting abortions. While claiming to be narrow and fact-bound, the majority in Becerra applied Reed’s broad understanding of content-based regulation to disclosure laws, explaining that California’s disclosure law was “content-based regulation of speech” because “[b]y compelling individuals to speak a particular message, such notices ‘alte[r] the content of [their] speech.’”

Why, in a discussion of data privacy, do I focus on Reed and Becerra and not on an earlier line of cases that directly address privacy laws? Because to an extent many Americans do not realize, data privacy protections are actually about increasing speech, not decreasing it. And at least as enacted elsewhere in the world, the efficacy of data privacy regimes as good policy often depends on being able to calibrate the law differently for different actors or scenarios. The first implicates Becerra on disclosures; the second implicates Reed and content-based analysis.

The Fair Information Practices, which were originally formulated in the United States, are the basis for data privacy laws around the world and are largely built around a concept that should be complementary to the First Amendment: transparency. Take the EU’s General Data Protection Regulation (GDPR) as an example. Individuals are supposed to be notified when companies obtain their information. They have a right to access their data, and to find out to whom it has been disclosed. They have a right to find out where data has come from. Companies have to explain the purpose of data processing, and how profiling and automated decision-making work. All of these transparency rights and obligations attempt to correct, or at least expose, very real power imbalances between individuals and the companies that profit from their data. The GDPR is a disclosure law as much as it is a right to stop other people from speaking about you.

Today’s paradoxical privacy problem, then, is that even as data privacy regimes rely in large part on increasing, not decreasing, speech by requiring disclosures to users, the Court’s recent First Amendment cases now shut down disclosure as a regulatory tool. Under Becerra’s reasoning, any disclosure requirement could potentially be characterized as content-based (or, per Justice Stephen Breyer, “[v]irtually every disclosure law requires individuals ‘to speak a particular message’”). The GDPR’s requirement that companies disclose the source of their data? Content-based compelled speech. The GDPR’s requirement that companies reveal to individuals the information held about them? A “particular message,” and thus content-based compelled speech.

The majority in Becerra attempts to cabin the impact of its opinion both (1) by pointing to the possibility of regulating speech incidental to regulated conduct (as it alleges was done by the majority in Planned Parenthood v. Casey, a case addressing compelled disclosures by doctors to patients seeking abortions), and (2) by carving out existing disclosure laws (“we do not question the legality of health and safety warnings long considered permissible, or purely factual and uncontroversial disclosures about commercial products”). The problem is that data privacy does not fit squarely within either of these potential exceptions. It regulates information flow, not conduct, or at least conduct that’s nearly inextricable from information flow (though I’ve argued elsewhere that some forms of privacy violations are actually framed in First Amendment law as conduct-like). And because the U.S. lacks omnibus data protection law, privacy doesn’t readily fall into the Court’s attempt to exempt existing consumer protection law. By virtue of its very newness, data privacy may be more heavily scrutinized than other accepted areas of consumer protection.

Justice Stephen Breyer (credit: The Nation)

As Justice Breyer notes in his dissent, “in suggesting that heightened scrutiny applies to much economic and social legislation,” Becerra jeopardizes legislators’ judgments in areas long left to legislative discretion. Reed compounds this problem. Some kinds of information, and some behaviors, create greater privacy harms than others. For example, the GDPR, like many American privacy laws, puts in place added protections for “special categories” of data—or what we would call “sensitive information.” Is this content-based discrimination? Does it apply “to particular speech because of the topic discussed?” If so, this would potentially implicate even our current sectoral approach to privacy, not to mention hundreds of behavior- or information-type-specific state privacy laws. The GDPR also, in many places, distinguishes between categories of companies. Take, for example, the GDPR’s derogation for small and medium-sized enterprises, which are subject to less onerous record-keeping provisions, presumably because smaller companies pose less of a risk of inflicting privacy harms. A government may also want to create an exception to, or less onerous version of, privacy law for smaller companies as a matter of innovation or competition policy, to encourage the growth of startups. Under Reed—and its predecessor, Sorrell v. IMS Health—identifying particular topics or speakers, or categories of information flow, could give rise to a challenge of regulation as content-based or even viewpoint-based. On paper at least, as Justice Elena Kagan noted in her concurrence, Reed’s broad take on content-based regulation “cast[s] a constitutional pall on reasonable regulations” and puts in place judicial second-guessing of matters that legislatures are likely institutionally better situated to assess.

One potential loophole, or at least limiting principle, to explore is Justice Samuel Alito’s strangely confident conviction in his concurrence, joined by both Justice Sonia Sotomayor and Justice Anthony Kennedy, that “Rules imposing time restrictions on signs advertising a one-time event” would not be considered content-based. This suggests that it may be possible for legislators to continue to name things in information-related legislation, when the restriction is the kind of restriction (e.g., time, place, and manner) that the First Amendment allows. But how to line-draw between a law that imposes temporal restrictions on “signs advertising a one-time event” and a law that restricts, in other ways, “Temporary Directional Signs” is frankly beyond me.

Thus legislators wanting to write—or in the case of California, that have recently written and passed—data privacy law may find themselves stuck between Reed and a hard place. To some extent, this can be understood as one example of what some have described as the Lochnerization of the First Amendment: its use for deregulatory purposes. But in the context of privacy, things are perhaps uniquely complicated. Speech values fall squarely on both sides. By regulating speech to protect privacy, you both restrict and protect speech. As the Court noted in Bartnicki v. Vopper, “the fear of public disclosure of private conversations might well have a chilling effect on private speech. . . . In a democratic society privacy of communication is essential if citizens are to think and act creatively and constructively.” And as the Court has increasingly recognized in its Fourth Amendment jurisprudence, personal information beyond communicative content—such as location data, or reading material or pictures stored on a cell phone—can implicate First Amendment concerns as well, by revealing your associations, your political affiliations, your opinions, your innermost thoughts.

In some ways, Carpenter and other cases move the United States closer to Europe on privacy. There is increasing convergence on what counts as sensitive information: the GDPR includes location data in its definition of “personal data,” and the Court in both United States v. Jones and Carpenter recognized an expectation of privacy in publicly disclosed location information. The Court in Carpenter continued a recent theme in Fourth Amendment jurisprudence of referring to what might be understood as First Amendment harms; the GDPR, too, addresses speech-related privacy. Even more significantly, Carpenter begins to undermine a central premise of U.S. privacy law: that you don’t have an expectation of privacy in information you have shared. This suggests that privacy protections might travel with private information, and pop up later in information flows—in other words, that a data privacy model may now be more palatable in the United States. And a disclosure-based privacy law targeting third parties (data brokers) is exactly what California recently passed.

But the First Amendment, once again, may be the context that ultimately defines, through constraints, American privacy law. Determining how to navigate the roadblocks of the Court’s recent First Amendment jurisprudence may—even more than legislative inertia—be the central problem U.S. data privacy now faces.


FAN 200 (First Amendment News) Sarah C. Haan, “Facebook and the Identity Business”

Sarah C. Haan is an Associate Professor of Law at Washington and Lee School of Law.  Professor Haan writes about corporate political speech and disclosure. Her most recent article is “The Post-Truth First Amendment,” forthcoming in the Indiana Law Journal.

_______________________

Facebook revealed in September 2017 that Russian-linked groups had waged a disinformation campaign on its platform to influence the 2016 election. The news caused public outcry and led to a series of self-regulatory responses from Facebook and other social media companies. In a new work-in-progress, I will examine Facebook’s regulation of political speech and, more broadly, what it means for political discourse to be regulated through private ordering by a global, profit-seeking, public company. My conclusions are different from those of some other scholars, in part because I give sharp focus to Facebook as a business actor.

Both before and after the revelation about Russian disinformation, public statements by Facebook spokespeople and its CEO, Mark Zuckerberg, have invoked a commitment to “basic principles of free speech.”  Here is a recent example:

This tweet suggests that Facebook seeks to uphold “basic principles of free speech.” The company’s offhand speech is full of such references to “free speech” and “freedom of expression,” but you won’t find those terms in Facebook’s securities filings, its Community Standards, or in the sworn testimony of company executives before Congress. Kate Klonick has argued that to the extent that platforms like Facebook moderate content, they rely on a foundation in American free speech norms. But what, precisely, do Facebook’s executives think that “free speech” means?

Speaker Discrimination

Professor Sarah C. Haan

I will argue that although most scholarly attention has focused on Facebook’s regulation of content, in fact Facebook’s regulation of political speech relies heavily on speaker discrimination. The company regulates content, reluctantly, at the margins; it plainly prefers to regulate identity. It does this by distinguishing between “authentic” and “inauthentic” user identity. Facebook allows speakers to post nearly anything if they present an “authentic” identity, but completely prohibits the speech of “inauthentic” speakers. As Sheryl Sandberg has admitted publicly, virtually none of the offending Russian content would have violated Facebook’s rules if it had been published by an “authentic” speaker.

Facebook explains:

Authenticity is the cornerstone of our community. We believe that people are more accountable for their statements and actions when they use their authentic identities. That’s why we require people to connect on Facebook using the name they go by in everyday life.

There might be another reason, too.

Facebook’s business model focuses on the sale of advertising. Although the company describes its “mission” differently, its business purpose is to profit from selling ads. It is this business model that justifies a preference for regulating identity over content.

Facebook needs to know who its users are for at least two reasons. First, it needs to be able to tell an advertiser how many unique individuals its advertising can reach. Thus, under Facebook’s rules, it is a violation to create multiple accounts or to share accounts, ensuring that each human user has just one account.

Second, Facebook needs to know who you are so that its customization and microtargeting features will work. Those features set Facebook apart from its competitors and justify its ad revenue. This explains why, for example, the company refuses to prohibit false content (“fake news”), yet prohibits the use of a “false date of birth” as an aspect of identity. As part of your expressive identity, you may feel younger than you are, but Facebook prohibits you from actually identifying as a younger person. Your Facebook identity is what distinguishes you from other ad targets.

The public may have wrongly concluded that Facebook’s authentic/inauthentic rules were designed specifically for the purpose of culling foreign propagandists from its platform. This is not so. Since it went public in 2012—long before Russian agents sought to influence the 2016 election—the company’s filings with the U.S. Securities and Exchange Commission consistently have discussed “authentic identity” as a business policy linked to user metrics. Facebook’s business risks associated with user identity go well beyond concerns about electoral integrity.

Mark Zuckerberg

When Facebook determines that a speaker is “inauthentic,” it shuts down the speaker’s account, removes all traces of its speech from Facebook, and prevents the speaker from engaging in future speech on Facebook.

On May 22, Mark Zuckerberg testified to the European Parliament that Facebook shut down about 580 million fake accounts in the first quarter of 2018—nearly six million fake accounts per day. Perhaps the scale of foreign electoral interference around the world is so vast that only algorithmic identity licensing can save us. I am skeptical.

Two additional things are worth noting about Facebook’s identity-based speech regime.

Identity Verification

First, since April, Facebook has doubled down on identity policing, employing identity licensing in a way that, I will argue, is a form of prior restraint. Under new rules, which are already shaping political discourse about the 2018 midterm elections, an individual in the U.S. who wants to use Facebook’s paid tools to communicate about “national issues of public importance” must verify his or her identity ex ante by submitting private information, such as passport, driver’s license, and Social Security information, to the company for approval. In a second step, Facebook mails a special code to the individual at a physical address in the U.S., which must be input into the verification system to confirm that Facebook has that person’s working address.

In May, Facebook clarified that it would apply its identity verification rules to all publishers of paid content, including news publishers. In other words, news outlets that use Facebook’s paid tools to boost content must go through identity verification, and must also label the content with a “paid for by” label. This, of course, represents a clear break between Facebook’s notion of “free speech” and recognized press freedoms in the First Amendment canon. Global media groups have called on Facebook to exempt news publishers from the new rules, but Facebook has so far refused. The company’s speaker discrimination does not go so far as to discriminate between the press and other speakers, even though our Constitution takes this for granted.

In the past few years, Facebook has acquired a number of companies that specialize in biometric identity verification technology, suggesting that Facebook is at least leaving open the possibility of pursuing identity verification as a stand-alone product or feature.  The tech industry press occasionally suggests that Facebook’s end game may be to monetize identity itself. In other words, Facebook’s choice to regulate political speech primarily through identity licensing and verification may be driven, not by “free speech” or democracy concerns at all, but rather by its desire to pursue identity verification as a business opportunity.

No More Pseudonyms

Second, Facebook’s authentic/inauthentic identity rules conflate two important types of identity—false identity and anonymous identity—treating them identically because this is convenient for Facebook’s business. False identity means pretending to be someone you’re not. The Mueller Indictment alleged that Russian actors adopted false identity on Facebook and other social media platforms in order to trick people into thinking they were U.S. citizens. If true, this was a crime.

Anonymous identity is something else. Americans have traditionally used anonymous speech to express unpopular political views; The Federalist Papers, for example, were originally published by Alexander Hamilton, John Jay, and James Madison under a pseudonym, Publius. Had they attempted such a trick in 2018 on Facebook, the company would have faulted them for “inauthentic behavior” and restricted their speech. Facebook’s choice to prohibit speech, including political speech, from individuals who choose anonymity (but do not claim false identities) represents another important break between the company’s concept of “free speech” and the First Amendment’s.

Although Silicon Valley tech companies often embody a libertarian spirit—and Facebook’s resistance to policing content or to distinguishing between the press and other speakers seems consistent with that view—the company’s decision to prohibit both false identity and anonymous identity is decidedly not libertarian. The libertarian view is that speech should be evaluated purely on its merits. A regulatory regime that must authorize your identity before it lets you speak shares little in common with a philosophy that emphasizes freedom and individuality.

* * *

In Citizens United v. FEC, the Supreme Court observed that speaker-based distinctions are often a form of content control. It asserted that the State may not “deprive the public of the right and privilege to determine for itself what speech and speakers are worthy of consideration.” Although scholars were quick to point out that Justice Kennedy’s opinion overstated the First Amendment’s hostility to speaker-based discrimination, these two points resonate when we consider how Facebook is regulating political discourse primarily through identity.  In my view, the issues don’t come fully into focus until we consider Facebook’s business motives.

Even if Facebook eventually loses ground to other speech-regulating competitors, these issues of private ordering are not going away.


FAN 191 (First Amendment News) “Robotica” — First Book on Speech Rights & Artificial Intelligence Published

If any current scholarly work of free speech theory survives into the next century, it will undoubtedly be this book.
Abstract: As more and more communication becomes robotized and/or is driven by artificial intelligence, a variety of questions arise about the relation between the government’s regulation of such communication and First Amendment law. Robotized communication now involves our home appliances, automobiles, phones, computers, and more. Ever more press stories are written by algorithmic design, and stock transfers follow a similar path of communication.

But is such data speech under the First Amendment? Are such transfers even communication within the meaning of the First Amendment? And if so, to what extent, and why, can the government regulate these new technologies? Such questions and others are explored for the first time in book form in the latest work by First Amendment scholars Ronald Collins and David Skover.

Professor David Skover

The book (their tenth) is ROBOTICA: SPEECH RIGHTS & ARTIFICIAL INTELLIGENCE (Cambridge University Press, June 2018).

Following the main text are four commentaries by Ryan Calo, Jane Bambauer, James Grimmelmann, and Bruce E.H. Johnson. The authors thereafter reply to the commentaries.

Advance Praise

“Collins and Skover have produced a wonderfully readable, thorough, and insightful exploration of the intersection of technology and free speech theory, from the beginning of time well into the future. If any current scholarly work of free speech theory survives into the next century, it will undoubtedly be this book.” — Martin Redish, Louis and Harriet Ancel Professor of Law, Northwestern University Law School

“Collins and Skover have long been among the finest minds focused on free expression in America. In this remarkable book, they now turn insightfully to an incredibly complex and timely issue associated with ‘robotic expression’: how should the First Amendment handle contests involving regulation of ‘robot speech’ as artificial intelligence grows rapidly in prominence? This book conveys their deep knowledge – and the knowledge of other noted scholars – of the history, law, and technology that inform the way we should think about this emerging field of constitutional inquiry. ” — John Palfrey, Head of School at Phillips Academy, Massachusetts & former Executive Director of the Berkman Center for Internet and Society, Harvard University

New Book on Right of Publicity 

“Jennifer Rothman has written an important, informative study of the right of publicity as it has developed in the United States and its connections to a robust privacy right. By reexamining the past, she has elaborated principles that will be useful in defining both publicity and privacy rights for the digital age.” — Rebecca Tushnet, Harvard Law School

Abstract: Who controls how one’s identity is used by others? This legal question, centuries old, demands greater scrutiny in the Internet age. Jennifer Rothman uses the right of publicity―a little-known law, often wielded by celebrities―to answer that question, not just for the famous but for everyone.

In challenging the conventional story of the right of publicity’s emergence, development, and justifications, Rothman shows how it transformed people into intellectual property, leading to a bizarre world in which you can lose ownership of your own identity. This shift and the right’s subsequent expansion undermine individual liberty and privacy, restrict free speech, and suppress artistic works.

The Right of Publicity traces the right’s origins back to the emergence of the right of privacy in the late 1800s. The central impetus for the adoption of privacy laws was to protect people from “wrongful publicity.” This privacy-based protection was not limited to anonymous private citizens but applied to famous actors, athletes, and politicians. Beginning in the 1950s, the right transformed into a fully transferable intellectual property right, generating a host of legal disputes, from control of dead celebrities like Prince, to the use of student athletes’ images by the NCAA, to lawsuits by users of Facebook and victims of revenge porn.

The right of publicity has lost its way. Rothman proposes returning the right to its origins and in the process reclaiming privacy for a public world.

→ Related: Rothman’s Roadmap To The Right of Publicity

Steve Brill’s Latest Book Discusses First Amendment Law (among other things)


Calling All SCOTUS Clerks: Illuminating New Book on the Fourth Amendment and Its Original Meaning as a Guide for Carpenter

On June 5, 2017, the Supreme Court announced that it would review United States v. Carpenter, a case involving long-term, retrospective tracking of a person’s movements using information generated by his cell phone. As EFF’s Andrew Crocker and Jennifer Lynch write, “This is very exciting news in the world of digital privacy. With Carpenter, the Court has an opportunity to continue its recent pattern of applying Fourth Amendment protections to sensitive digital data. It may also limit or even reevaluate the so-called ‘Third Party Doctrine,’ which the government relies on to justify warrantless tracking and surveillance in a variety of contexts.”

SCOTUS clerks will surely be reading much Fourth Amendment literature and caselaw in preparation for their work on the Carpenter case. I’d like to nominate David Gray’s brilliant addition to the canon, The Fourth Amendment in an Age of Surveillance (Cambridge University Press, 2017).

From the book jacket:

The Fourth Amendment is facing a crisis. New and emerging surveillance technologies allow government agents to track us wherever we go, to monitor our activities online and offline, and to gather massive amounts of information relating to our financial transactions, communications, and social contacts. In addition, traditional police methods like stop-and-frisk have grown out of control, subjecting hundreds of thousands of innocent citizens to routine searches and seizures. In this work, David Gray uncovers the original meaning of the Fourth Amendment to reveal how its historical guarantees of collective security against threats of ‘unreasonable searches and seizures’ can provide concrete solutions to the current crisis. This important work should be read by anyone concerned with the ongoing viability of one of the most important constitutional rights in an age of increasing government surveillance.

Here is a video of Prof. Gray talking about the book: https://www.youtube.com/watch?v=pHUNRndaYIo

 


FAN (First Amendment News, Special Series #3) Newseum Institute Program on Apple-FBI Encryption Controversy Scheduled for June 15th


“The government [recently] dropped a bid to force Apple to bypass a convicted Brooklyn drug dealer’s pass code so it could read data on his phone.” — Government Technology, April 27, 2016

Headline: “Department of Justice drops Apple case after FBI cracks iPhone” — San Francisco Chronicle, March 28, 2016

The Newseum Institute has just announced its June 15th event concerning the Apple-FBI encryption controversy. Information concerning the upcoming event is set out below:

Date:  June 15th, 2016

Time: 3:00 p.m.

Location: Newseum: 555 Pennsylvania Ave NW, Washington, DC 20001

Register here (free but limited seating):

http://www.newseum.org/events-programs/rsvp1/

The event will be webcast live on the Newseum Institute’s site.


“PEAR” v. THE UNITED STATES

The issues involved in the Apple cell phone controversy will be argued in front of a mock U.S. Supreme Court held at the Newseum as “Pear v. the United States.”

Experts in First Amendment law, cyber security, civil liberties and national security issues will make up the eight-member High Court, and legal teams will represent “Pear” and the government. The oral argument, supported by written briefs, will focus on those issues likely to reach the actual high court, from the power of the government to “compel speech” to the privacy expectations of millions of mobile phone users.

The Justices hearing the case at the Newseum:

  • As Chief Justice: Floyd Abrams, renowned First Amendment lawyer and author; Visiting Lecturer at Yale Law School
  • Harvey Rishikof, most recently dean of faculty at the National War College at the National Defense University and chair of the American Bar Association Standing Committee on Law and National Security
  • Nadine Strossen, former president of the American Civil Liberties Union; the John Marshall Harlan II Professor of Law at New York Law School
  • Linda Greenhouse, the Knight Distinguished Journalist in Residence and Joseph Goldstein Lecturer in Law at Yale Law School; long-time U.S. Supreme Court correspondent for The New York Times
  • Lee Levine, renowned media lawyer; adjunct Professor of Law at the Georgetown University Law Center
  • Stewart Baker, national security law and policy expert and former Assistant Secretary for Policy at the U.S. Department of Homeland Security
  • Stephen Vladeck, Professor of Law at American University Washington College of Law; nationally recognized expert on the role of the federal courts in the war on terrorism
  • The Hon. Robert S. Lasnik, senior judge of the U.S. District Court for the Western District of Washington

Lawyers arguing the case:

  • For Pear: Robert Corn-Revere, who has extensive experience in First Amendment law and communications, media, and information technology law.
    • Co-counsel is Nan Mooney, writer and former law clerk to Chief Judge James Baker of the U.S. Court of Appeals for the Armed Forces.
  • For the U.S. government: Joseph DeMarco, who served from 1997 to 2007 as an Assistant United States Attorney for the Southern District of New York and specializes in issues involving information privacy and security, theft of intellectual property, computer intrusions, on-line fraud, and the lawful use of new technology.
    • Co-counsel is Jeffrey Barnum, a lawyer and legal scholar specializing in criminal law and First Amendment law who argued United States v. Alaa Mohammad Ali before the U.S. Court of Appeals for the Armed Forces while in law school.

Each side will have 25 minutes to argue its position before the Court and an additional five minutes for follow-up comments. Following the session, there will be an opportunity for audience members to ask questions of the lawyers and court members.

The program is organized on behalf of the Newseum Institute by the University of Washington Law School’s Harold S. Shefelman Scholar Ronald Collins and by Nan Mooney.