Category: Privacy (Gossip & Shaming)


Revenge Porn Site Operators and Federal Criminal Liability

My recent post offered a potential amendment to Section 230 of the CDA that would exempt from the safe harbor operators whose sites are primarily designed to host illegal activity. Even without such a legal change, cyber cesspool operators could face criminal liability if prosecutors took these matters seriously. Section 230 provides no safe harbor against federal criminal charges. Consider revenge porn operator Hunter Moore’s statement to the press (Forbes’s Kashmir Hill and Betabeat’s Jessica Roy) that, on his new site, he will display maps of individuals’ homes alongside their naked pictures and social media accounts (if he does not like them). If Moore is serious, he might open himself up to criminal charges of aiding and abetting cyber stalking. Congress, in its 2006 reauthorization of the Violence Against Women Act (VAWA), banned the use of any “interactive computer service” to engage in a “course of conduct” that places a person in another state in reasonable fear of serious bodily injury or death or that is intended to cause, and causes, a victim to suffer substantial emotional distress. 18 U.S.C.A. § 2261A(2) (2012). As the Executive Director of the National Center for Victims of Crime explained in congressional testimony:

[S]talkers are using very sophisticated technology . . . —installing spyware on your computer so they can track all of your interactions on the Internet, your purchases, your e-mails and so forth, and using that against you, forwarding e-mails to people at your job, broadcasting your whereabouts, your purchases, your reading habits and so on, or installing GPS in your car so that you will show up at the grocery store, at your local church, wherever and there is the stalker and you can’t imagine how the stalker knew that you were going to be there. . . . this legislation amends the statute so that prosecutors have more effective tools, I think, to address technology through VAWA.

Congress ought to consider passing laws that criminalize the operation of sites designed to facilitate the posting of nude photographs without subjects’ consent, along the lines of state invasion of privacy laws. States like New Jersey prohibit the posting of someone’s nude or partially nude images without his or her consent if the images were recorded in a place where a reasonable person would enjoy an expectation of privacy. The Senate Judiciary Committee recently approved a bill that makes it a crime to create an online app whose primary use is to facilitate cyber stalking. The next important step is to criminalize sites that do the same.

Of course, laws will have limited coercive and expressive impact if they are never enforced. As the group End Revenge Porn rightly notes, “State police argue that the crime is occurring on the internet, which therefore crosses state lines and is out of their jurisdiction. The FBI claim that these cases are civil and/or do not threaten national security and should therefore be handled solely by lawyers.” Changing those social attitudes and developing legal solutions are key. Advocacy groups like Without My Consent, lawyers, law professors like Mary Anne Franks (see here), Ann Bartow (see here), and Derek Bambauer (see here), activists like Jill Filipovic and Charlotte Laws, and most recently victims behind Women Against Revenge Porn and End Revenge Porn are working hard on this score. One might say that their work is part of an emerging cyber civil rights movement. (Check out Professor Franks’s important commentary about revenge porn on HuffPo Live.) Lucky for us at CoOp, Professor Franks will be joining us next month as a guest blogger. I will be working hard to finish my book Hate 3.0: The Rise of Discriminatory Online Harassment and How to Stop It (forthcoming Harvard University Press) and working with Professor Franks on non-consensual pornography, so more to come.


The Importance of Section 230 Immunity for Most

Why leave the safe harbor provision intact for site operators, search engines, and other online service providers that do not attempt to block offensive, indecent, or illegal activity but that by no means encourage illicit material or are principally used to host it, as cyber cesspools are? If we retain that immunity, some harassment and stalking — including revenge porn — will remain online because the site operators hosting it cannot be legally required to take it down. Why countenance that possibility?

Because of the risk of collateral censorship—blocking or filtering speech to avoid potential liability even if the speech is legally protected. In what is often called the heckler’s veto, people may abuse their ability to complain, using the threat of liability to ensure that site operators block or remove posts for no good reason. They might complain because they disagree with the political views expressed or dislike the posters’ disparaging tone. Providers would be especially inclined to remove content in the face of frivolous complaints where they have little interest in keeping up the complained-about material. Take, as an illustration, the popular newsgathering site Digg. If faced with legal liability, it might automatically take down posts even though they involve protected speech. The site lacks a vested interest in keeping up any particular post given its overall goal of crowdsourcing vast quantities of news that people like. Given the scale of its operation, it may also lack the resources to hire enough people to cull through complaints and weed out the frivolous ones.

Sites like Digg differ from revenge porn sites and other cyber cesspools, whose operators have an incentive to refrain from removing complained-about content. Cyber cesspools obtain economic benefits from hosting harassing material, benefits that may make it worth the risk of continuing to do so. Collateral censorship is far less likely because it is in their economic interest to keep destructive material up. As Slate reporter and cyber bullying expert Emily Bazelon has remarked, concerns about the heckler’s veto get more deference than they should in the context of revenge porn sites and other cyber cesspools. (Read Bazelon’s important new book Sticks and Stones: Defeating the Culture of Bullying and Rediscovering the Power of Character and Empathy.) The heckler’s veto does not justify immunizing cyber cesspool operators from liability.

Let’s be clear about what this would mean. Dispensing with cyber cesspools’ immunity would not mean that they would be strictly liable for user-generated content. A legal theory would still need to sanction remedies against them.


Revenge Porn and the Uphill Battle to Pierce Section 230 Immunity (Part II)

Plaintiffs’ lawyers have some reason to think that they can convince courts to change their sweeping view of Section 230. In rare cases, courts have pierced the safe harbor, though not because the site operators failed to engage in good faith attempts to protect against offensive or indecent material. In 2011, a federal district court permitted a woman to sue the operator of TheDirty.com for defamation on the grounds that Section 230 is forfeited if the site owner “invites the posting of illegal materials or makes actionable postings itself.” Jones v. Dirty World Entertainment Recordings, LLC, 766 F. Supp. 2d 828, 836 (E.D. Ky. 2011).

The trial judge relied on a Ninth Circuit decision, Fair Housing Council v. Roommates.com, which involved a classified ad service that helped people find suitable roommates. To sign up for the site’s service, subscribers had to fill out an online questionnaire that asked about their gender, race, and sexual orientation. One question asked subscribers to choose a roommate preference, such as “Straight or gay males,” only “Gay” males, or “No males.” Fair housing advocates sued the site, arguing that its questionnaires violated federal and state discrimination laws. The Ninth Circuit found that Section 230 did not immunize the defendant site from liability because the site created the questions and choice of answers and thus became an “information content provider.” The court ruled that since the site required users to answer its questions from a list of possible responses of its choosing, the site was “the developer, at least in part, of that information.” Each user’s profile page was partially the defendant’s responsibility because every profile is a “collaborative effort between [the site] and the subscriber.”

As the Ninth Circuit held (and as a few courts have since followed), Section 230 does not grant immunity for helping third parties develop unlawful conduct. The court differentiated the defendant’s site from search engines, whose processes might be seen as contributing to the development of content, namely their search results. According to the court, ordinary search engines “do not use unlawful criteria to limit the scope of searches conducted on them” and thus do not play a part in the development of unlawful searches. The court endorsed the view that sites designed to facilitate illegal activity fall outside Section 230’s safe harbor provision.

Here is the rub. To reach its conclusion, the Ninth Circuit essentially had to rewrite the statute, which defines information content providers as those responsible for the “creation or development of information provided through the Internet,” not the creation or development of illegal information.


Revenge Porn and the Uphill Battle to Sue Site Operators

Last week, a group of women filed a lawsuit against the revenge porn site Texxxan.com as well as its hosting company, GoDaddy. Defendant Texxxan.com invites users to post nude photographs of individuals who never consented to their posting. Revenge porn sites — whether Private Voyeur, Is Anyone Down?, HunterMoore.tv (and the former IsAnyoneUp?), or Texxxan.com — mostly host women’s naked pictures next to their contact information and links to their social media profiles. Much like other forms of cyber stalking, revenge porn ruins individuals’ reputations as the pictures saturate Google searches of their names, incites third parties to email and stalk individuals, causes terrible embarrassment and shame, and risks physical stalking and harm. In the recently filed suit, victims of revenge porn have brought invasion of privacy and civil conspiracy claims against the site operator and the web hosting company, not the posters themselves, who may be difficult to find. More difficult, though, will be getting the case past a Rule 12(b)(6) motion to dismiss.

In this post, I’m going to explain why this lawsuit faces an uphill battle under Section 230 of the Communications Decency Act and why extending Section 230’s safe harbor to sites designed to encourage illicit activity seems out of whack with the broader purpose of the CDA. In my next post, I will talk about cases that seemingly open the door for plaintiffs to bring their suit and why those cases provide a poor foundation for their arguments.

Does Section 230 give revenge porn operators free rein to ruin people’s lives (as revenge porn site operator Hunter Moore proudly describes what he does)? Sad to say, it does.


Ravi Sentenced in Tyler Clementi Case

Dharun Ravi was sentenced today for his violations of Tyler Clementi’s privacy. From Yahoo:

A New Jersey judge sentenced a former Rutgers student to 30 days in jail for using a webcam to spy on his roommate kissing another man.

Dharun Ravi, 20, was convicted on two second-degree bias intimidation charges in a case that garnered national headlines because his roommate, Tyler Clementi, committed suicide after the spying.

Clementi, 18, jumped from the George Washington Bridge three days after learning that a September 2010 encounter with an older man was seen by a computer-mounted camera Ravi had set up in their dorm room. The case highlighted the issues of gay bullying and teen suicide.

The judge also imposed three years of probation. Ravi had faced a maximum sentence of 10 years in prison. The judge spared him that prison time and did not recommend that Ravi be deported to India, where he was born and remains a citizen. Ravi was also ordered to get counseling and to pay $10,000 toward a program to help victims of bias crimes.

Update: Just after I posted this, I saw that Danielle Citron got to this first.  Check out her post here.


Hey Look at Me! I’m Reading! (Or Not): Neil Richards on Social Reading

Do you want everyone to know, automatically, what book you read, what film you watch, what search you perform? No? Yes? Why? Why not? It is odd to me that the ideas behind the Video Privacy Protection Act have not prompted a rather quick extension to these other contexts. But there is a debate about whether our intellectual consumption should have privacy protection, and if so, what that protection should look like. Luckily, Neil Richards has some answers. His post on Social Reading is a good read. His response to the idea that automatic sharing is wise and benefits all captures some core points:

Not so fast. The sharing of book, film, and music recommendations is important, and social networking has certainly made this easier. But a world of automatic, always-on disclosure should give us pause. What we read, watch, and listen to matter, because they are how we make up our minds about important social issues – in a very real sense, they’re how we make sense of the world.

What’s at stake is something I call “intellectual privacy” – the idea that records of our reading and movie watching deserve special protection compared to other kinds of personal information. The films we watch, the books we read, and the web sites we visit are essential to the ways we try to understand the world we live in. Intellectual privacy protects our ability to think for ourselves, without worrying that other people might judge us based on what we read. It allows us to explore ideas that other people might not approve of, and to figure out our politics, sexuality, and personal values, among other things. It lets us watch or read whatever we want without fear of embarrassment or being outed. This is the case whether we’re reading communist, gay teen, or anti-globalization books; or visiting web sites about abortion, gun control, or cancer; or watching videos of pornography, or documentaries by Michael Moore, or even “The Hangover 2.”

And before you go off and say Neil doesn’t get “it,” whatever “it” may be, note that he is making a good distinction: “when we share – when we speak – we should do so consciously and deliberately, not automatically and unconsciously. Because of the constitutional magnitude of these values, our social, technological, professional, and legal norms should support rather than undermine our intellectual privacy.”

I heartily recommend reading the full post. For those interested in a little more on the topic, the full paper is forthcoming in the Georgetown Law Journal and available here. And if you don’t know Neil Richards’s work (SSRN), you should. Even if you disagree with him, Neil’s writing is of that rare sort where you are better off for having read it. The clean style and sharp ideas force one to engage and think, and thus allow one to call out problems so that understanding moves forward. (See Orwell, Politics and the English Language.) Enjoy.


Why I Don’t Teach the Privacy Torts in My Privacy Law Class

(Partial disclaimer — I do teach the privacy torts for part of one class, just so the students realize how narrow they are.)

I was talking the other day with Chris Hoofnagle, a co-founder of the Privacy Law Scholars Conference and someone I respect very much. He and I have both recently taught Privacy Law using the text by Dan Solove and Paul Schwartz. After the intro chapter, the text has a humongous chapter 2 about the privacy torts, such as intrusion upon seclusion, false light, public disclosure of private facts, and so on. Chris and other profs I have spoken with find that the chapter takes weeks to teach.

I skip that chapter entirely. In talking with Chris, I began to articulate why.  It has to do with my philosophy of what the modern privacy enterprise is about.

For me, the modern project of information privacy is pervasively about IT systems. There are lots of times when we allow personal information to flow. There are lots of times when it’s a bad idea. We build our collection and dissemination systems in highly computerized form, trying to gain the advantages while minimizing the risks. Alan Westin got it right when he called his 1970s book “Databanks in a Free Society.” It’s about the data.

Privacy torts aren’t about the data. They usually involve individualized revelations in a one-of-a-kind setting. Importantly, the reasonableness test in tort is a lousy match for whether an IT system is well designed. Torts have not done well at building privacy into IT systems, nor have they been of much use in other IT system issues, such as deciding whether an IT system is unreasonably insecure or holding software manufacturers liable under products liability law. IT systems are complex and evolve rapidly, and they are a terrible match with the common sense of a jury trying to decide if the defendant did some particular thing wrong.

When privacy torts don’t work, we substitute regulatory systems, such as HIPAA or Gramm-Leach-Bliley.  To make up for the failures of the intrusion tort, we create the Do Not Call list and telemarketing sales rules that precisely define how much intrusion the marketer can make into our time at home with the family.

A second reason for skipping the privacy torts is that the First Amendment bars liability for a wide range of the practices that the privacy torts might otherwise have evolved to address. Lots of intrusive publication about an individual is considered “newsworthy” and thus protected speech. The Europeans have narrower free speech rights, so they have somewhat more room to give legal effect to intrusion and public disclosure claims.

It’s about the data. Tort law has almost nothing to say about what data should flow in IT systems. So I skip the privacy torts.

Other profs might have other goals.  But I expect to keep skipping chapter 2.

 


Ravi Trial Verdict for Invading the Privacy of Clementi

Dharun Ravi was found guilty of invasion of privacy when he used a webcam to watch and broadcast online Clementi’s intimate activities with another man in their shared dorm room.  From CNN:

A former Rutgers University student accused of spying on and intimidating his gay roommate by use of a hidden webcam was found guilty on all counts, including invasion of privacy and the more severe charges of bias intimidation, in a case that thrust cyberbullying into the national spotlight.

Dharun Ravi, 20, could now face up to 10 years in jail and deportation to his native India. He was also found guilty of witness tampering, hindering apprehension and tampering of physical evidence.

The jury was confronted with a series of questions on each charge. Though it found Ravi not guilty on several questions within the verdict sheet, because he was found guilty on at least one question on each main count, he could now face the maximum penalty.

From ABC News:

A New Jersey jury today found former Rutgers student Dharun Ravi guilty on all counts for using a webcam to spy on his roommate, Tyler Clementi, having a gay sexual encounter in 2010.

Ravi, 20, was convicted of invasion of privacy, bias intimidation, witness tampering and hindering arrest, stemming from his role in activating the webcam to peek at Clementi’s date with a man in the dorm room on Sept. 19, 2010. Ravi was also convicted of encouraging others to spy during a second date, on Sept. 21, 2010, and intimidating Clementi for being gay.

Ravi was found not guilty of some subparts of the 15 counts of bias intimidation, attempted invasion of privacy, and attempted bias intimidation, but needed only to be found guilty of one part of each count to be convicted.

I blogged about this case here and here and here.

Here is New Jersey’s invasion of privacy statute:

Read More


Cyberbullying and the Cheese-Eating Surrender Monkeys

(This post is based on a talk I gave at the Seton Hall Legislative Journal’s symposium on Bullying and the Social Media Generation. Many thanks to Frank Pasquale, Marisa Hourdajian, and Michelle Newton for the invitation, and to Jane Yakowitz and Will Creeley for a great discussion!)

Introduction

New Jersey enacted the Anti-Bullying Bill of Rights (ABBR) in 2011, in part as a response to the tragic suicide of Tyler Clementi at Rutgers University. It is routinely lauded as the country’s broadest, most inclusive, and strongest anti-bullying law. That is not entirely a compliment. In this post, I make two core claims. First, the Anti-Bullying Bill of Rights has several aspects that are problematic from a First Amendment perspective – in particular, the overbreadth of its definition of prohibited conduct, the enforcement discretion afforded school personnel, and the risk of impingement upon religious and political freedoms. I argue that the legislation departs from established precedent on disruptions of the educational environment by regulating horizontal relations between students rather than vertical relations between students and the school as an institution/environment. Second, I believe we should be cautious about statutory regimes that enable government actors to sanction speech based on content. I suggest that it is difficult to distinguish, on a principled basis, between bullying (which is bad) and social sanctions that enforce norms (which are good). Moreover, anti-bullying laws risk displacing effective informal measures that emerge from peer production.