Category: First Amendment


On the Intersection of Speech and Politics

This will be my last post guest blogging on Concurring Opinions; I am so grateful for the experience.

Almost everyone agrees that university campuses should be bastions of free speech. Fervent disagreement, however, exists just below the surface of that statement. Depending on how values are prioritized, individuals may differ on when speech becomes harassment, when speech becomes punishable conduct, and when speech is too controversial, extreme, or offensive to be permitted in the classroom. What are your first (and then your second, and third) thoughts when you hear about a UC Santa Barbara professor who emailed his students graphic photographs comparing Holocaust victims to Palestinians in Gaza? Or, what is your reaction to students in a Yale fraternity, as part of an initiation, chanting “No means yes, yes means anal” while marching around campus? Do your views change when you hear about Georgetown University denying official recognition to a pro-choice student organization because of its Catholic and Jesuit tradition?

Prior to joining Penn State Law as a Visiting Assistant Professor, I worked at the Foundation for Individual Rights in Education (FIRE), an organization that spoke out against all three universities: the two that sought to punish the UCSB professor and the Yale fraternity, and the one that refused recognition to H*yas for Choice. (The asterisk is there because Georgetown will not permit the group to attach the term Hoyas to its name.) While at FIRE, I, a committed feminist, personally argued that the Yale fraternity’s chants did not constitute actionable harassment. Although Yale, like Georgetown, is a private university, both promise their students free speech rights.

I was constantly disheartened to see FIRE labeled as partisan, because the label reveals how many people tie the speech they seek to protect to their own political beliefs and assume that others do the same. When FIRE staffers write columns for The Huffington Post, the organization is accused of being liberal. In most other circumstances, FIRE is dismissed as a conservative mouthpiece, because much of the speech censored on campuses is viewed as more harmonious with conservative causes.



The Turn to Infrastructure for Internet Governance

In his excellent new book Infrastructure: The Social Value of Shared Resources (Oxford University Press 2012), Brett Frischmann draws from economic theory to craft an elaborate theory of infrastructure, one that creates an intellectual foundation for addressing some of the most critical policy issues of our time: transportation, communication, environmental protection, and beyond. I wish to take the discussion about Frischmann’s book in a slightly different direction, moving away from the question of how infrastructure shapes our social and economic lives to the question of how infrastructure is increasingly co-opted as a form of governance itself.

Arrangements of technical architecture have always inherently been arrangements of power. This is certainly the case for the technologies of Internet governance designed to keep the Internet operational. This governance is not necessarily about governments but about technical design decisions, the policies of private industry, and the decisions of new global institutions. By “infrastructures of Internet governance,” I mean the technologies and processes beneath the layer of content that are inherently designed to keep the Internet operational. Some of these architectures include Internet technical protocols; critical Internet resources like Internet addresses, domain names, and autonomous system numbers; the Internet’s domain name system; and network-layer systems related to access, Internet exchange points (IXPs), and Internet security intermediaries. I have published several books about the politics inherently embedded in the design of this governance infrastructure. But here I wish to address something different: these same Internet governance infrastructures are increasingly being co-opted for political purposes entirely unrelated to their primary Internet governance function.

The most pressing policy debates in Internet governance increasingly involve not governance of the Internet’s infrastructure but governance using the Internet’s infrastructure. Governments and large media companies, having lost control over content through laws and policies, are recognizing infrastructure as a mechanism for regaining that control. This is certainly the case for intellectual property rights enforcement. Copyright enforcement has moved well beyond addressing specific infringing content or individuals into Internet governance-based infrastructural enforcement. The most obvious examples include graduated response methods that terminate the Internet access of individuals who repeatedly violate copyright laws, and domain name seizures that use the Internet’s domain name system (DNS) to redirect queries away from an entire web site rather than just the infringing content. These techniques are ultimately carried out by Internet registries, Internet registrars, or even by non-authoritative DNS operators such as Internet service providers. Domain name seizures in the United States often originate with the Immigration and Customs Enforcement agency. DNS-based enforcement was also at the heart of the controversies and Internet boycotts over the legislative efforts to pass the Protect IP Act (PIPA) and the Stop Online Piracy Act (SOPA).
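To make the mechanism concrete, here is a minimal sketch, viewed from the client side, of what a DNS-based seizure does: the registry or registrar replaces the domain’s address records so that every query for the site resolves to a seizure-notice server rather than the original host. The domain name, the notice-server address, and the check itself are hypothetical illustrations rather than any agency’s actual tooling, and the sketch assumes the third-party dnspython package.

```python
# A hypothetical illustration of DNS-based enforcement, seen from a client.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

# Hypothetical seizure-notice host; 192.0.2.0/24 is a documentation range.
NOTICE_SERVER_IPS = {"192.0.2.1"}

def resolves_to_notice_server(domain: str) -> bool:
    """Return True if the domain's A records now point at the notice server,
    meaning resolution of the entire site, not just one page, is redirected."""
    answers = dns.resolver.resolve(domain, "A")
    return any(record.address in NOTICE_SERVER_IPS for record in answers)

if __name__ == "__main__":
    # "seized-example.com" is a placeholder, not a real seized domain.
    print(resolves_to_notice_server("seized-example.com"))
```

The granularity is the point: because the intervention happens at the level of name resolution, every page, subdomain, and service under the name disappears at once, which is exactly why the technique sweeps more broadly than content-specific takedowns.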

An even more pronounced connection between infrastructure and governance occurs in so-called “kill-switch” interventions, in which governments, via private industry, enact outages of basic telecommunications and Internet infrastructures, whether by manipulating protocols, blocking applications, or terminating entire cell phone or Internet access services. From Egypt to the Bay Area Rapid Transit service blockages, the collateral damage of these outages to freedom of expression and public safety is of great concern. The role of private industry in enacting governance via infrastructure was also plainly visible during the WikiLeaks CableGate saga, during which financial services firms like PayPal, Visa, and MasterCard opted to block the flow of money to WikiLeaks, while Amazon and EveryDNS cut off web hosting and domain name resolution services, respectively.

This turn to governance via infrastructures of Internet governance raises several themes for this online symposium. The first theme relates to the privatization of governance, whereby industry, voluntarily or under legal obligation, plays a heightened role in regulating content and governing expression, as well as in responding to restrictions on expression. The concerns here involve not only legitimacy and public accountability but also the possibly undue economic burden placed on private information intermediaries to carry out this governance. The question of private ordering is not just a question of Internet freedom but of economic freedom for the companies providing basic Internet infrastructures. The second theme relates to the future of free expression. Legal lenses on freedom of expression often miss the infrastructure-based governance sinews that already permeate the Internet’s underlying technical architecture. The third important theme involves what this technique of governance via infrastructure will mean for the technical infrastructure itself. As an engineer as well as a social scientist, my concern is with the effects of these practices on Internet stability and security, particularly the co-opting of the Internet’s domain name system for content mediation functions for which the DNS was never intended. The stability of the Internet’s infrastructure is not a given but something that must be protected from the unintended consequences of these new governance approaches.

I wish to congratulate Brett Frischmann on his new book and thank him for bringing the connection between society and infrastructure to such a broad and interdisciplinary audience.

Dr. Laura DeNardis, American University, Washington, DC.


Introduction: Symposium on Infrastructure: The Social Value of Shared Resources

I am incredibly grateful to Danielle, Deven, and Frank for putting this symposium together, to Concurring Opinions for hosting, and to all of the participants for their time and engagement. It is a great honor to have my book discussed by such an esteemed group of experts.

The book is described here (OUP site) and here (Amazon). The Introduction and Table of Contents are available here.

Abstract:

Shared infrastructures shape our lives, our relationships with each other, the opportunities we enjoy, and the environment we share. Think for a moment about the basic supporting infrastructures that you rely on daily. Some obvious examples are roads, the Internet, water systems, and the electric power grid, to name just a few. In fact, there are many less obvious examples, such as our shared languages, legal institutions, ideas, and even the atmosphere. We depend heavily on shared infrastructures, yet it is difficult to appreciate how much these resources contribute to our lives because infrastructures are complex and the benefits provided are typically indirect.

The book devotes much-needed attention to understanding how society benefits from infrastructure resources and how management decisions affect a wide variety of private and public interests. It links infrastructure, a particular set of resources defined in terms of the manner in which they create value, with commons, a resource management principle by which a resource is shared within a community.

Infrastructure commons are ubiquitous and essential to our social and economic systems. Yet we take them for granted, and frankly, we are paying the price for our lack of vision and understanding. Our shared infrastructures—the lifeblood of our economy and modern society—are crumbling. We need a more systematic, long-term vision that better accounts for how infrastructure commons contribute to social welfare.

In this book, I try to provide such a vision. The first half of the book is general and not focused on any particular infrastructure resource. It cuts across different resource systems and develops a framework for understanding societal demand for infrastructure resources and the advantages and disadvantages of commons management (by which I mean managing the infrastructure resource in a manner that does not discriminate based on the identity of the user or use). The second half of the book applies the theoretical framework to different types of infrastructure—e.g., transportation, communications, environmental, and intellectual resources—and examines different institutional regimes that implement commons management. It then wades deeply into the contentious “network neutrality” debate and ends with a brief discussion of some other modern debates.

Throughout, I raise a host of ideas and arguments that probably deserve (and may require) more sustained attention, but at 436 pages, I had to exercise some restraint, right? Many of the book’s ideas and arguments are bound to be controversial, and I hope some will inspire others. I look forward to your comments, criticisms, and questions.


Why I Don’t Teach the Privacy Torts in My Privacy Law Class

(Partial disclaimer — I do teach the privacy torts for part of one class, just so the students realize how narrow they are.)

I was talking the other day with Chris Hoofnagle, a co-founder of the Privacy Law Scholars Conference and someone I respect very much. He and I have both recently taught Privacy Law using the text by Dan Solove and Paul Schwartz. After the intro chapter, the text has a humongous chapter 2 about the privacy torts, such as intrusion upon seclusion, false light, public disclosure of private facts, and so on. Chris and other profs I have spoken with find that the chapter takes weeks to teach.

I skip that chapter entirely. In talking with Chris, I began to articulate why.  It has to do with my philosophy of what the modern privacy enterprise is about.

For me, the modern project of information privacy is pervasively about IT systems. There are lots of times we allow personal information to flow. There are lots of times where it’s a bad idea. We build our collection and dissemination systems in highly computerized form, trying to gain the advantages while minimizing the risks. Alan Westin got it right when he called his 1970s book “Databanks in a Free Society.” It’s about the data.

Privacy torts aren’t about the data. They usually involve individualized revelations in a one-of-a-kind setting. Importantly, the reasonableness test in tort is a lousy match for the question whether an IT system is well designed. Torts have not done well at building privacy into IT systems, nor have they been of much use for other IT system issues, such as deciding whether a system is unreasonably insecure or holding software manufacturers liable under products liability law. IT systems are complex and evolve rapidly, and they are a terrible match for the common sense of a jury trying to decide whether the defendant did some particular thing wrong.

When privacy torts don’t work, we substitute regulatory systems, such as HIPAA or Gramm-Leach-Bliley.  To make up for the failures of the intrusion tort, we create the Do Not Call list and telemarketing sales rules that precisely define how much intrusion the marketer can make into our time at home with the family.

A second reason for skipping the privacy torts is that the First Amendment has rendered unconstitutional liability for a wide range of the practices that the privacy torts might otherwise have evolved to address. Lots of intrusive publication about an individual is considered “newsworthy” and thus protected speech. The Europeans have narrower free speech rights, so they have somewhat more room to give legal effect to intrusion and public disclosure claims.

It’s about the data. Tort law has almost nothing to say about what data should flow in IT systems. So I skip the privacy torts.

Other profs might have other goals.  But I expect to keep skipping chapter 2.


Bloggers v. Bloggers

I’m truly stumped by this one. On the one hand, there is no better test of a free speech enthusiast’s commitment to principle than a case where a self-proclaimed “journalist” harasses bloggers by creating websites to ruin their Internet footprints. On the other hand, when the tactics of an individual are so corrosive to the free exchange of ideas, can they really be called speech?

A $2.5 million defamation judgment was entered against Crystal Cox after she allegedly set out to destroy the reputation of Obsidian Financial Group, LLC and its principal, Kevin Padrick. She has also targeted popular blogger Marc Randazza (and his daughter), creating websites to affect their Google footprints and then offering her services to undo the very reputational harm she perpetrated.

Because most of what Cox wrote was too hyperbolic and subjective to give rise to a defamation claim, Cox was sued only over a blog post containing specific statements that Padrick and Obsidian committed fraud. Cox claims to have a source for these statements, but she was not able to prove their veracity. Under Oregon’s libel laws, media persons do not have to reveal their sources, and plaintiffs seeking presumed damages against journalists must prove that statements were made with “actual malice.” However, according to the district court, Cox is not a media person: she has no journalistic credentials, does not engage in fact-checking and other journalistic techniques, and does not contact the “other side” to get multiple perspectives on a story.



Stanford Law Review Online: The Dead Past


The Stanford Law Review Online has just published Chief Judge Alex Kozinski’s Keynote from our 2012 Symposium, The Dead Past. Chief Judge Kozinski discusses the privacy implications of our increasingly digitized world and our role as a society in shaping the law:

I must start out with a confession: When it comes to technology, I’m what you might call a troglodyte. I don’t own a Kindle or an iPad or an iPhone or a Blackberry. I don’t have an avatar or even voicemail. I don’t text.

I don’t reject technology altogether: I do have a typewriter—an electric one, with a ball. But I do think that technology can be a dangerous thing because it changes the way we do things and the way we think about things; and sometimes it changes our own perception of who we are and what we’re about. And by the time we realize it, we find we’re living in a different world with different assumptions about such fundamental things as property and privacy and dignity. And by then, it’s too late to turn back the clock.

He concludes:

Judges, legislators and law enforcement officials live in the real world. The opinions they write, the legislation they pass, the intrusions they dare engage in—all of these reflect an explicit or implicit judgment about the degree of privacy we can reasonably expect by living in our society. In a world where employers monitor the computer communications of their employees, law enforcement officers find it easy to demand that internet service providers give up information on the web-browsing habits of their subscribers. In a world where people post up-to-the-minute location information through Facebook Places or Foursquare, the police may feel justified in attaching a GPS to your car. In a world where people tweet about their sexual experiences and eager thousands read about them the morning after, it may well be reasonable for law enforcement, in pursuit of terrorists and criminals, to spy with high-powered binoculars through people’s bedroom windows or put concealed cameras in public restrooms. In a world where you can listen to people shouting lurid descriptions of their gall-bladder operations into their cell phones, it may well be reasonable to ask telephone companies or even doctors for access to their customer records. If we the people don’t consider our own privacy terribly valuable, we cannot count on government—with its many legitimate worries about law-breaking and security—to guard it for us.

Which is to say that the concerns that have been raised about the erosion of our right to privacy are, indeed, legitimate, but misdirected. The danger here is not Big Brother; the government, and especially Congress, have been commendably restrained, all things considered. The danger comes from a different source altogether. In the immortal words of Pogo: “We have met the enemy and he is us.”

Read the full article, The Dead Past by Alex Kozinski, at the Stanford Law Review Online.


Illinois Law Review, Issue 2012:2 (March 2012)


University of Illinois Law Review, Issue 2012:2

Please see our website for past issues

Articles

Homogeneous Rules for Heterogeneous Families: The Standardization of Family Law When There Is No Standard Family – Katharine K. Baker (PDF)

Legal Sources of Residential Lock-Ins: Why French Households Move Half as Often as U.S. Households – Robert C. Ellickson (PDF)

Sealand, HavenCo, and the Rule of Law – James Grimmelmann (PDF)

David C. Baum Memorial Lecture on Civil Rights and Civil Liberties

Citizens United and Conservative Judicial Activism – Geoffrey R. Stone (PDF)

Notes

Bargaining for Salvation: How Alternative Auditor Liability Regimes Can Save the Capital Markets – Hassen T. Al-Shawaf (PDF)

Analysis Paralysis: Rethinking the Courts’ Role in Evaluating EIS Reasonable Alternatives – J. Matthew Haws (PDF)

The Real Social Network: How Jurors’ Use of Social Media and Smart Phones Affects a Defendant’s Sixth Amendment Rights – Marcy Zora (PDF)


Actualizing Digital Citizenship With Transparent TOS Policies: Facebook Style

In “Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age,” 91 B.U. L. Rev. 1435 (2011), Helen Norton and I offered moral and policy justifications in support of intermediaries who choose to engage in voluntary efforts to combat hate speech.  As we noted, many intermediaries like Facebook already choose to address online hatred in some way.  We urged intermediaries to think and speak more carefully about the harms they hope to forestall when developing hate speech policies and offered an array of definitions of hate speech to help them do so.  We argued for the adoption of a “transparency principle,” by which we meant that intermediaries can, and should, valuably advance the fight against digital hate by clearly and specifically explaining to users the harms that their hate speech policies address as well as the consequences of policy violations.  With more transparency regarding the specific reasons for choosing to address digital hate, intermediaries can make behavioral expectations more understandable.  Without it, intermediaries will be less effective in expressing what it means to be responsible users of their services.

Our call for transparency has moved an important step forward, as I learned last night while discussing anonymity, privacy, and hate speech with CDT’s brilliant Kevin Bankston and Hogan’s privacy luminary Chris Wolf at an event sponsored by the Anti-Defamation League. Kevin shared with us Facebook’s “Abuse Standards 6.2,” first leaked and then explicitly revised and released to the public, which makes clear what the company counts as abuse standard violations. Let me back up for a minute: Facebook’s Terms of Service (TOS) prohibit “hate speech,” an ambiguous term with broad and narrow meanings, as Helen and I explored in our article. But Facebook, like so many intermediaries, didn’t explain to users what it meant when it said it prohibited hate speech: did it cover just explicit demeaning threats to traditionally subordinated groups, or demeaning speech that approximates intentional infliction of emotional distress, or, instead, did it more broadly cover slurs and epithets and/or group defamation? Facebook’s leaked “Operation Manual For Live Content Moderators” helpfully explains what it means by “hate content”:

slurs or racial comments of any kind, attacking based on protected category, hate symbols, either out of context or in the context of hate phrases or support of hate groups, showing support for organizations and people primarily known for violence, depicting symbols primarily known for hate and violence, unless comments are clearly against them, photos comparing two people (or an animal and a person that resembles that animal) side by side in a “versus photo,” photo-shopped images showing the subject in a negative light, images of drunk and unconscious people, or sleeping people with things drawn on their faces, and videos of street/bar/school yard fights even if no valid match is found (School fight videos are only confirmed if the video has been posted to continue tormenting the person targeted in the video).

The manual goes on to note that “Hate symbols are confirmed if there’s no context OR if hate phrases are used” and “Humor overrules hate speech UNLESS slur words are present or the humor is not evident.” That seems a helpful guide for safety operators navigating what looks more like humor than hate, recognizing some of the challenges that operators surely face in assessing content. Note too Facebook’s consistency on Holocaust denial: it is not prohibited in the U.S., only IP-blocked for countries that ban such speech. And Facebook employees have been transparent about why. As a wise Facebook employee explained (and I’m paraphrasing here): if people want to show their ignorance about the Holocaust, let them do so in front of their friends and colleagues (hence the significance of FB’s real-name policy). Let their friends counter that speech and embarrass them for being so asinine. The policy goes on to talk specifically about bullying and harassment, including barring attacks on anyone based on their status as a sexual assault or rape victim, and barring persistent unsolicited contact or continued contact after the other party has said they want no further contact (which sounds much like many criminal harassment laws, including Maryland’s). It also bars “credible threats,” defined to include “credible threats or incitement of physical harm against anyone, credible indications of organizing acts of present or future violence,” which seems to cover groups like “Kill a Jew Day” (removed promptly by FB). The policy also gives examples. That is another important step, and something we talked about last May at Stanford during a roundtable on our article with safety officers from major intermediaries (I think I can’t say who came, given the Chatham House-style rules of the conversation). See the examples on sexually explicit language and sexual solicitation; they are incredibly helpful and, I think, incredibly important for tackling cyber gender harassment.
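Those two quoted rules have a crisply boolean structure, which is presumably part of what makes them usable by moderators at scale. As a toy formalization (my own sketch of the quoted sentences, not Facebook’s actual code or complete policy), they might read:

```python
# A toy formalization, mine rather than Facebook's, of the two moderation
# rules quoted above from the leaked manual.

def hate_symbol_confirmed(has_context: bool, hate_phrases_present: bool) -> bool:
    # "Hate symbols are confirmed if there's no context OR if hate phrases are used"
    return (not has_context) or hate_phrases_present

def treated_as_hate_speech(is_humor: bool, humor_evident: bool,
                           slur_present: bool) -> bool:
    # Assuming the content would otherwise count as hate speech:
    # "Humor overrules hate speech UNLESS slur words are present
    # or the humor is not evident"
    humor_overrules = is_humor and humor_evident and not slur_present
    return not humor_overrules

# Example: a post framed as a joke, with the humor evident and no slurs,
# is not treated as hate speech under this reading of the rule.
assert treated_as_hate_speech(is_humor=True, humor_evident=True,
                              slur_present=False) is False
```

The value of stating the rules this precisely is exactly the transparency Helen and I argued for: users and critics can test the policy against concrete cases and say where they think it fails.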

As Kevin said, and Chris and I enthusiastically agreed, this memo is significant. Companies should follow FB’s lead. Whether you agree or disagree with these definitions, users now know what FB means by hate speech, at least far better than they did before. And users can debate it and tell FB that they think the policy is wanting and why. FB can take those conversations into consideration; it certainly has in other instances when users expressed their displeasure about moves FB was making. Now, let me be a demanding user: I want to know what this all means. Does the prohibited content get removed, or passed on for further discussion? Do users get the chance to take down violating content themselves first? Do they get notice? Users need to know what happens when they violate the TOS. That too helps users understand their rights and responsibilities as digital citizens. In any event, I hope this encourages FB to release future iterations of its policy to users voluntarily, and that it encourages fellow intermediaries to do the same. Bravo to Facebook.


Santorum: Please Don’t Google

If you Google “Santorum,” you’ll find that two of the top three search results take an unusual angle on the Republican candidate, thanks to sex columnist Dan Savage. (I very nearly used “Santorum” as a Google example in class last semester, and only just thought better of it.) Santorum’s supporters want Google to push the, er, less conventional site further down the rankings, and allege that Google’s failure to do so reflects political bias. That claim is obviously a load of Santorum, but the situation has drawn more thoughtful responses. Danny Sullivan argues that Google should implement a disclaimer, because kids may search on “Santorum” and be disturbed by what they find, or because they may think Google has a political agenda. (The site has one for “jew,” for example. For a long time, the first result for that search term was the odious and anti-Semitic JewWatch site.)

This suggestion is well-intentioned but flatly wrong. I’m not an absolutist: I like how Google handled the problem of having a bunch of skinheads show up as a top result for “jew.” But I don’t want Google as the Web police, though many disagree. Should the site implement a disclaimer if you search for “Tommy Lee Pamela Anderson”? (Warning: sex tape.) If you search for “flat earth theory,” should Google tell you that you are potentially a moron? I don’t think so. Disclaimers should be the nuclear option for Google – partly so they continue to attract attention, and partly because they move Google from a primarily passive role as filter to a more active one as commentator. I generally like my Web results without knowing what Google thinks about them.

Evgeny Morozov has made a similar suggestion, though along different lines: he wants Google to put up a banner or signal when someone searches for links between vaccines and autism, or for proof that the Pentagon / Israelis / Santa Claus was behind the 9/11 attacks. I’m more sympathetic to Evgeny’s idea, but I would limit banners or disclaimers to situations that meet two criteria. First, the facts of the issue must be clear-cut: pi is not equal to three (and no one really thinks so), and the planet is indisputably getting warmer. Second, the issue must be both currently relevant and carry significant consequences. The flat earthers don’t count; the anti-vaccine nuts do. (People who fail to immunize their children not only put them at risk; they put their classmates and friends at risk, too.) Lastly, I think there is value in having both a sense of humor and a respect for discordant, even false speech. The Santorum thing is darn funny. And, in the political realm, we have a laudable history of tolerating false or inflammatory speech, because we know the perils of censorship. So, keep spreading Santorum!

Danielle, Frank, and the other CoOp folks have kindly let me hang around their blog like a slovenly houseguest, and I’d like to thank them for it. See you soon!

Cross-posted at Info/Law.


Cyberbullying and the Cheese-Eating Surrender Monkeys

(This post is based on a talk I gave at the Seton Hall Legislative Journal’s symposium on Bullying and the Social Media Generation. Many thanks to Frank Pasquale, Marisa Hourdajian, and Michelle Newton for the invitation, and to Jane Yakowitz and Will Creeley for a great discussion!)

Introduction

New Jersey enacted the Anti-Bullying Bill of Rights (ABBR) in 2011, in part as a response to the tragic suicide of Tyler Clementi at Rutgers University. It is routinely lauded as the country’s broadest, most inclusive, and strongest anti-bullying law. That is not entirely a compliment. In this post, I make two core claims. First, the Anti-Bullying Bill of Rights has several aspects that are problematic from a First Amendment perspective, in particular the overbreadth of its definition of prohibited conduct, the enforcement discretion afforded school personnel, and the risk of impingement upon religious and political freedoms. I argue that the legislation departs from established precedent on disruptions of the educational environment by regulating horizontal relations between students rather than vertical relations between students and the school as an institution and environment. Second, I believe we should be cautious about statutory regimes that enable government actors to sanction speech based on content. I suggest that it is difficult to distinguish, on a principled basis, between bullying (which is bad) and social sanctions that enforce norms (which are good). Moreover, anti-bullying laws risk displacing effective informal measures that emerge from peer production.