Category: First Amendment

Stanford Law Review Online: The Dead Past

Stanford Law Review

The Stanford Law Review Online has just published Chief Judge Alex Kozinski’s Keynote from our 2012 Symposium, The Dead Past. Chief Judge Kozinski discusses the privacy implications of our increasingly digitized world and our role as a society in shaping the law:

I must start out with a confession: When it comes to technology, I’m what you might call a troglodyte. I don’t own a Kindle or an iPad or an iPhone or a Blackberry. I don’t have an avatar or even voicemail. I don’t text.

I don’t reject technology altogether: I do have a typewriter—an electric one, with a ball. But I do think that technology can be a dangerous thing because it changes the way we do things and the way we think about things; and sometimes it changes our own perception of who we are and what we’re about. And by the time we realize it, we find we’re living in a different world with different assumptions about such fundamental things as property and privacy and dignity. And by then, it’s too late to turn back the clock.

He concludes:

Judges, legislators and law enforcement officials live in the real world. The opinions they write, the legislation they pass, the intrusions they dare engage in—all of these reflect an explicit or implicit judgment about the degree of privacy we can reasonably expect by living in our society. In a world where employers monitor the computer communications of their employees, law enforcement officers find it easy to demand that internet service providers give up information on the web-browsing habits of their subscribers. In a world where people post up-to-the-minute location information through Facebook Places or Foursquare, the police may feel justified in attaching a GPS to your car. In a world where people tweet about their sexual experiences and eager thousands read about them the morning after, it may well be reasonable for law enforcement, in pursuit of terrorists and criminals, to spy with high-powered binoculars through people’s bedroom windows or put concealed cameras in public restrooms. In a world where you can listen to people shouting lurid descriptions of their gall-bladder operations into their cell phones, it may well be reasonable to ask telephone companies or even doctors for access to their customer records. If we the people don’t consider our own privacy terribly valuable, we cannot count on government—with its many legitimate worries about law-breaking and security—to guard it for us.

Which is to say that the concerns that have been raised about the erosion of our right to privacy are, indeed, legitimate, but misdirected. The danger here is not Big Brother; the government, and especially Congress, have been commendably restrained, all things considered. The danger comes from a different source altogether. In the immortal words of Pogo: “We have met the enemy and he is us.”

Read the full article, The Dead Past by Alex Kozinski, at the Stanford Law Review Online.

Illinois Law Review, Issue 2012:2 (March 2012)

University of Illinois Law Review, Issue 2012:2

Please see our website for past issues

Articles

Homogeneous Rules for Heterogeneous Families: The Standardization of Family Law When There is no Standard Family – Katharine K. Baker (PDF)

Legal Sources of Residential Lock-Ins: Why French Households Move Half as Often as U.S. Households – Robert C. Ellickson (PDF)

Sealand, HavenCo, and the Rule of Law – James Grimmelmann (PDF)

David C. Baum Memorial Lecture on Civil Rights and Civil Liberties

Citizens United and Conservative Judicial Activism – Geoffrey R. Stone (PDF)

Notes

Bargaining for Salvation: How Alternative Auditor Liability Regimes Can Save the Capital Markets – Hassen T. Al-Shawaf (PDF)

Analysis Paralysis: Rethinking the Courts’ Role in Evaluating EIS Reasonable Alternatives – J. Matthew Haws (PDF)

The Real Social Network: How Jurors’ Use of Social Media and Smart Phones Affects a Defendant’s Sixth Amendment Rights – Marcy Zora (PDF)

Actualizing Digital Citizenship With Transparent TOS Policies: Facebook Style

In “Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age,” 91 B.U. L. Rev. 1435 (2011), Helen Norton and I offered moral and policy justifications in support of intermediaries who choose to engage in voluntary efforts to combat hate speech.  As we noted, many intermediaries like Facebook already choose to address online hatred in some way.  We urged intermediaries to think and speak more carefully about the harms they hope to forestall when developing hate speech policies and offered an array of definitions of hate speech to help them do so.  We argued for the adoption of a “transparency principle,” by which we meant that intermediaries can, and should, valuably advance the fight against digital hate by clearly and specifically explaining to users the harms that their hate speech policies address as well as the consequences of policy violations.  With more transparency regarding the specific reasons for choosing to address digital hate, intermediaries can make behavioral expectations more understandable.  Without it, intermediaries will be less effective in expressing what it means to be responsible users of their services.

Our call for transparency has moved an important step forward, and last night I learned how while discussing anonymity, privacy, and hate speech with CDT’s brilliant Kevin Bankston and Hogan’s privacy luminary Chris Wolf at an event sponsored by the Anti-Defamation League.  Kevin shared with us Facebook’s “Abuse Standards 6.2” (first leaked, then revised and explicitly released to the public), which makes clear what the company treats as abuse standard violations.  Let me back up for a minute: Facebook’s Terms of Service (TOS) prohibit “hate speech,” an ambiguous term with broad and narrow meanings, as Helen and I explored in our article.  But Facebook, like so many intermediaries, didn’t explain to users what it meant when it said it prohibited hate speech: did the prohibition cover just explicit demeaning threats to traditionally subordinated groups or demeaning speech that approximates intentional infliction of emotional distress, or did it more broadly cover slurs and epithets and/or group defamation?  Facebook’s leaked “Operation Manual For Live Content Moderators” helpfully explains what it means by “hate content”:

slurs or racial comments of any kind, attacking based on protected category, hate symbols, either out of context or in the context of hate phrases or support of hate groups, showing support for organizations and people primarily known for violence, depicting symbols primarily known for hate and violence, unless comments are clearly against them, photos comparing two people (or an animal and a person that resembles that animal) side by side in a “versus photo,” photo-shopped images showing the subject in a negative light, images of drunk and unconscious people, or sleeping people with things drawn on their faces, and videos of street/bar/school yard fights even if no valid match is found (School fight videos are only confirmed if the video has been posted to continue tormenting the person targeted in the video).

The manual goes on to note that “Hate symbols are confirmed if there’s no context OR if hate phrases are used” and “Humor overrules hate speech UNLESS slur words are present or the humor is not evident.”  That seems a helpful guide for safety operators on how to navigate what seems more like humor than hate, recognizing some of the challenges operators surely face in assessing content.  And note too Facebook’s consistency on Holocaust denial: it isn’t prohibited in the U.S., only IP-blocked for countries that ban such speech.  And Facebook employees have been transparent about why.  As a wise Facebook employee explained (and I’m paraphrasing here): if people want to show their ignorance about the Holocaust, let them do so in front of their friends and colleagues (hence the significance of FB’s real-name policy).  He said, let their friends counter that speech and embarrass them for being so asinine.  The policy goes on to address bullying and harassment specifically, including barring attacks on anyone based on their status as a sexual assault or rape victim, as well as barring persistent contact with users without prior solicitation, or continuing contact after the other party has said they want no further contact (much like many criminal harassment laws, including Maryland’s).  It also bars “credible threats,” defined as including “credible threats or incitement of physical harm against anyone, credible indications of organizing acts of present or future violence,” which seems to cover groups like “Kill a Jew Day” (promptly removed by FB).  The policy also gives examples–another important step, and something we talked about last May at Stanford during a roundtable on our article with safety officers from major intermediaries (I don’t think I can say who attended, given the Chatham House-style rules of the conversation).  See the examples on sexually explicit language and sexual solicitation: they are incredibly helpful and, I think, incredibly important for tackling cyber gender harassment.
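One virtue of writing rules down this precisely is that they start to read like a decision procedure, which is exactly what makes them debatable. As a purely illustrative sketch (the function and field names below are my own invention, not Facebook’s actual tooling), the two quoted rules might look something like this:

```python
def moderate(item):
    """Illustrative sketch of two rules from the leaked manual.

    All field names are hypothetical; this is not Facebook's code,
    just a rendering of the quoted rules as a decision procedure.
    """
    # "Humor overrules hate speech UNLESS slur words are present
    # or the humor is not evident."
    if item.get("is_humor") and item.get("humor_evident") and not item.get("has_slur"):
        return "allow"
    # "Hate symbols are confirmed if there's no context OR if hate
    # phrases are used."
    if item.get("has_hate_symbol") and (not item.get("has_context") or item.get("has_hate_phrase")):
        return "confirm"
    # Anything else would presumably go to a human reviewer.
    return "escalate"
```

Even this toy version shows why transparency matters: once the rules are stated as explicit conditions, users and critics can see exactly where the lines are drawn and argue about whether they are drawn in the right place.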

As Kevin said, and Chris and I enthusiastically agreed, this memo is significant.  Companies should follow FB’s lead.  Whether you agree or disagree with these definitions, users now know what FB means by hate speech, at least far more than they did before.  And users can debate it and tell FB that they think the policy is wanting and why.  FB can take those conversations into consideration–they certainly have in other instances when users expressed their displeasure about moves FB was making. Now, let me be a demanding user: I want to know what this all means.  Does prohibited content get removed, or passed on for further review?  Do users get the choice to take down violating content first?  Do they get notice?  Users need to know what happens when they violate the TOS.  That too helps users understand their rights and responsibilities as digital citizens.  In any event, I’m hoping that this encourages FB to release future iterations of its policy to users voluntarily and that it encourages its fellow intermediaries to do the same.  Bravo to Facebook.

Santorum: Please Don’t Google

If you Google “Santorum,” you’ll find that two of the top three search results take an unusual angle on the Republican candidate, thanks to sex columnist Dan Savage. (I very nearly used “Santorum” as a Google example in class last semester, and only just thought better of it.) Santorum’s supporters want Google to push the, er, less conventional site further down the rankings, and allege that Google’s failure to do so reflects political bias. That claim is obviously a load of Santorum, but the situation has drawn more thoughtful responses. Danny Sullivan argues that Google should implement a disclaimer, because kids may search on “Santorum” and be disturbed by what they find, or because they may think Google has a political agenda. (The site has one for “jew,” for example. For a long time, the first result for that search term was the odious and anti-Semitic JewWatch site.)

This suggestion is well-intentioned but flatly wrong. I’m not an absolutist: I like how Google handled the problem of having a bunch of skinheads show up as a top result for “jew.” But I don’t want Google as the Web police, though many disagree. Should the site implement a disclaimer if you search for “Tommy Lee Pamela Anderson”? (Warning: sex tape.) If you search for “flat earth theory,” should Google tell you that you are potentially a moron? I don’t think so. Disclaimers should be the nuclear option for Google – partly so they continue to attract attention, and partly because they move Google from a primarily passive role as filter to a more active one as commentator. I generally like my Web results without knowing what Google thinks about them.

Evgeny Morozov has made a similar suggestion, though along different lines: he wants Google to put up a banner or signal when someone searches for links between vaccines and autism, or proof that the Pentagon / Israelis / Santa Claus was behind the 9/11 attacks. I’m more sympathetic to Evgeny’s idea, but I would limit banners or disclaimers to situations that meet two criteria. First, the facts of the issue must be clear-cut: pi is not equal to three (and no one really thinks so), and the planet is indisputably getting warmer. And second, the issue must be one that is both currently relevant and has significant consequences. The flat earthers don’t count; the anti-vaccine nuts do. (People who fail to immunize their children not only put them at risk; they put their classmates and friends at risk, too.) Lastly, I think it’s important to have both a sense of humor and a respect for discordant, even false, speech. The Santorum thing is darn funny. And, in the political realm, we have a laudable history of tolerating false or inflammatory speech, because we know the perils of censorship. So, keep spreading Santorum!

Danielle, Frank, and the other CoOp folks have kindly let me hang around their blog like a slovenly houseguest, and I’d like to thank them for it. See you soon!

Cross-posted at Info/Law.

Cyberbullying and the Cheese-Eating Surrender Monkeys

(This post is based on a talk I gave at the Seton Hall Legislative Journal’s symposium on Bullying and the Social Media Generation. Many thanks to Frank Pasquale, Marisa Hourdajian, and Michelle Newton for the invitation, and to Jane Yakowitz and Will Creeley for a great discussion!)

Introduction

New Jersey enacted the Anti-Bullying Bill of Rights (ABBR) in 2011, in part as a response to the tragic suicide of Tyler Clementi at Rutgers University. It is routinely lauded as the country’s broadest, most inclusive, and strongest anti-bullying law. That is not entirely a compliment. In this post, I make two core claims. First, the Anti-Bullying Bill of Rights has several aspects that are problematic from a First Amendment perspective – in particular, the overbreadth of its definition of prohibited conduct, the enforcement discretion afforded school personnel, and the risk of impingement upon religious and political freedoms. I argue that the legislation departs from established precedent on disruptions of the educational environment by regulating horizontal relations between students rather than vertical relations between students and the school as an institution / environment. Second, I believe we should be cautious about statutory regimes that enable government actors to sanction speech based on content. I suggest that it is difficult to distinguish, on a principled basis, between bullying (which is bad) and social sanctions that enforce norms (which are good). Moreover, anti-bullying laws risk displacing effective informal measures that emerge from peer production.

The Memory Hole

On RocketLawyer’s Legally Easy podcast, I talk with Charley Moore and Eva Arevuo about the EU’s proposed “right to be forgotten” and privacy as censorship. I was inspired by Jeff Rosen and Jane Yakowitz’s critiques of the approach, which actually appears to be a “right to lie effectively.” If you can disappear unflattering – and truthful – information, it lets you deceive others – in other words, you benefit and they are harmed. The EU’s approach is a blunderbuss where a scalpel is needed.

Cross-posted at Info/Law.

A More or Less Ambitious Argument about First Amendment Architecture?

Thanks again to all who have participated in the online symposium on First Amendment Architecture and to Danielle Citron for inviting us on.

For this likely last post, I discuss some thoughts on challenging the negative-liberty model and incorporating media and physical spaces. I present these thoughts in light of suggestions by several scholars that Architecture is, in different ways, either too ambitious or not ambitious enough.

Cary Sherman and the Lost Generation

The RIAA’s Cary Sherman had a screed about the Stop Online Piracy and PROTECT IP Acts in the New York Times recently. Techdirt’s Mike Masnick brilliantly gutted it, and I’m not going to pile on – a tour de force requires no augmentation. What I want to suggest is that the recording industry – or, at least, its trade group – is dangerously out of touch.

Contrast this with at least part of the movie industry, as represented by Paramount Pictures. I received a letter from Al Perry, Paramount’s Vice President Worldwide Content Protection & Outreach. He proposed coming here to Brooklyn Law School to

exchange ideas about content theft, its challenges and possible ways to address it. We think about these issues on a daily basis. But, as these last few weeks [the SOPA and PROTECT IP debates] made painfully clear, we still have much to learn. We would love to come to campus and do exactly that.

Jason Mazzone, Jonathan Askin, and I are eagerly working to have Perry come to campus, both to present Paramount’s perspective and to discuss it with him. We’ll have input from students, faculty, and staff, and I expect there to be some pointed debate. We’re not naive – the goal here is to try to win support for Paramount’s position on dealing with IP infringement – but I’m impressed that Perry is willing to listen, and to enter the lion’s den (of a sort).

And that’s the key difference: Perry, and Paramount, recognize that Hollywood has lost a generation. For the last decade or so, students have grown up in a world where content is readily available via the Internet, through both licit and illicit means; where the content industries are the people who sue your friends and force you to watch anti-piracy warnings at the start of the movies you paid for; and where one aspires to be Larry Lessig, not Harvey Weinstein. Those of us who teach IP or Internet law have seen it up close. In another ten years, these young lawyers are going to be key Congressional staffers, think tank analysts, entrepreneurs, and law firm partners. And they think Hollywood is the enemy. I don’t share that view – I think the content industries are amoral profit maximizers, just like any other corporation – but I understand it.

And that’s where Sherman is wrong and Perry is right. The old moves no longer work. Buying Congresspeople to pass legislation drafted behind closed doors doesn’t really work (although maybe we’ll find out when we debate the Copyright Term Extension Act of 2018). Calling it “theft” when someone downloads a song they’d never otherwise pay for doesn’t work (even Perry is still on about this one).

One more thing about Sherman: his op-ed reminded me of Detective John Munch in Homicide, who breaks down and shouts at a suspect, “Don’t you ever lie to me like I’m Montel Williams. I am not Montel Williams.” Sherman lies to our faces and expects us not to notice. He writes, “the Protect Intellectual Property Act (or PIPA) was carefully devised, with nearly unanimous bipartisan support in the Senate, and its House counterpart, the Stop Online Piracy Act (or SOPA), was based on existing statutes and Supreme Court precedents.” Yes, it was carefully devised – by content industries. SOPA was introduced at the end of October, and the single hearing that was held on it was stacked with proponents of the bill. “Carefully devised?” Key proponents didn’t even know how its DNS filtering provisions worked. He argues, “Since when is it censorship to shut down an operation that an American court, upon a thorough review of evidence, has determined to be illegal?” Because censorship is when the government blocks you from accessing speech before a trial. “A thorough review of evidence” is a flat lie: SOPA enabled an injunction filtering a site based on an ex parte application by the government, in contravention of a hundred years of First Amendment precedent. And finally, he notes the massive opposition to SOPA and PROTECT IP, but then insinuates that “many of those e-mails were from the same people who attacked the Web sites of the Department of Justice, the Motion Picture Association of America, my organization and others as retribution for the seizure of Megaupload, an international digital piracy operation.” This is a McCarthyite tactic: associating the remarkable democratic opposition to the bills – in stark contrast to the smoke-filled rooms in which Sherman worked to push this legislation – with Anonymous and other miscreants.

But the risk for Sherman – and Paramount, and Sony, and other content industries – is not that we’ll be angry, or they’ll be opposed. It’s that they’ll be irrelevant. And if Hollywood takes the Sherman approach, rather than the Perry one, deservedly so.

Cross-posted at Info/Law.

Free Speech Architecture: Normative Aspects (#8)

In seven posts (available here), I have set out the arguments in First Amendment Architecture. This post covers arguments made in the last 25 pages of that article, the normative and theoretical arguments.

In doing so, this post examines the implications of these principles both for how courts should decide future speech cases (that is, normative doctrinal implications) and for what the First Amendment “means” (that is, more theoretical implications).

We’ll begin with doctrine.

Private Property and Public Speech

Marc, Zephyr, and Tim (as well as Derek) have presented a number of interesting insights and challenges in the past few days regarding our First Amendment Architecture symposium. On Friday, I debated the article with Lillian BeVier and Yochai Benkler. They raised some other important points, as well as some overlapping concerns—regarding property, negative liberty, and digital communications infrastructures.

I will present some thoughts, first, on the relationship between property and speech. All the posts discuss the relationship between speech and property to some extent. And Lillian BeVier played the role of my article’s “opponent” absolutely perfectly and effortlessly (without even acting) partly because of her defense of property rights against speech trumps.