Category: Web 2.0


The Vexing Problem of Shared Personal Data

I blogged earlier about the recent privacy kerfuffle with Facebook’s potentially permanent control over user data. In that post, I critiqued the “trust us” response that Facebook and so many companies make when responding to issues involving the use of people’s data.

There is, however, another argument Zuckerberg raises in response to Facebook’s data retention policy. He writes:

When a person shares something like a message with a friend, two copies of that information are created—one in the person’s sent messages box and the other in their friend’s inbox. Even if the person deactivates their account, their friend still has a copy of that message.

Zuckerberg is raising a rather thorny issue involving shared data. Although the “trust us” argument is rather specious, the shared-data argument is much more difficult. One of the reasons Facebook wants to retain user data even after a user has left is that much of that data is shared between friends. Facebook claims it cannot let departing users permanently delete their data from all parts of the site, since the data also appears on their friends’ pages. Zuckerberg likewise notes that a message sent from one friend to another leaves a copy in the recipient’s inbox. This is one of the thorny features of digital information: it is shared.
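Zuckerberg’s two-copies point can be illustrated with a toy data model. This is a purely hypothetical sketch for illustration; nothing here reflects Facebook’s actual storage architecture. The point is simply that once a message is duplicated into two accounts, deleting one account does not reach the other copy:

```python
# Hypothetical sketch of the "two copies" problem, not Facebook's real design.
# Sending a message stores independent copies in two accounts, so
# deactivating the sender's account leaves the recipient's copy untouched.

class Account:
    def __init__(self, name):
        self.name = name
        self.sent = []    # copies of messages this user sent
        self.inbox = []   # copies of messages this user received
        self.active = True

def send(sender, recipient, text):
    sender.sent.append(text)      # copy #1: sender's sent-messages box
    recipient.inbox.append(text)  # copy #2: recipient's inbox

def deactivate(account):
    account.active = False
    account.sent.clear()          # only this user's own copies go away

alice, bob = Account("alice"), Account("bob")
send(alice, bob, "hello")
deactivate(alice)
print(bob.inbox)  # bob's copy survives: ['hello']
```

Option (a) below would require `deactivate` to reach into every friend’s inbox as well; option (c) is what the sketch does.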

What should Facebook do when users want to remove their data from all parts of Facebook, including the data that appears on their friends’ pages? There are several ways of dealing with this:

(a) allow the user to remove it completely wherever it is;

(b) notify the people whose profiles contain the information and seek their consent before removing it; or

(c) not allow the user to remove it.

Facebook appears to have chosen (c). Before attacking or praising Facebook’s choice, consider the following questions:



“Please Trust Us”: Facebook and Control of Personal Data

Recently, Facebook changed its Terms of Service (TOS). According to the New York Times:

This month, when Facebook updated its terms, it deleted a provision that said users could remove their content at any time, at which time the license would expire. Further, it added new language that said Facebook would retain users’ content and licenses after an account was terminated. . . .

The changes in the terms of service had gone mostly unnoticed until Sunday, when the blog Consumerist cited them and interpreted them to mean that “anything you upload to Facebook can be used by Facebook in any way they deem fit, forever, no matter what you do later.”

Given the widespread popularity of Facebook — by some measurements the most popular social network with 175 million active users worldwide — that claim attracted attention immediately.

The blog post by Consumerist, part of the advocacy group Consumers Union, received more than 300,000 views. Users created Facebook groups to oppose the changes. To some of the thousands who commented online, the changes meant: “Facebook owns you.”

The new and old TOS both state:

You hereby grant Facebook an irrevocable, perpetual, non-exclusive, transferable, fully paid, worldwide license (with the right to sublicense) to (a) use, copy, publish, stream, store, retain, publicly perform or display, transmit, scan, reformat, modify, edit, frame, translate, excerpt, adapt, create derivative works and distribute (through multiple tiers), any User Content you (i) Post on or in connection with the Facebook Service or the promotion thereof subject only to your privacy settings or (ii) enable a user to Post, including by offering a Share Link on your website and (b) to use your name, likeness and image for any purpose, including commercial or advertising, each of (a) and (b) on or in connection with the Facebook Service or the promotion thereof.

However, the new TOS does not contain the following language that the old TOS contains:



Criminalizing Google’s YouTube in Italy

In Italy, a rather disturbing prosecution is taking place. Google officials, including Chief Privacy Counsel Peter Fleischer, are being criminally prosecuted for a video somebody else uploaded to YouTube. According to an article by Tracey Bentley in the International Association of Privacy Professionals’ The Privacy Advisor:

The video that sparked the investigation was captured in a Turin classroom. Four high school boys were recorded taunting a young man with Down syndrome, and hitting the 17-year-old with a tissue box. One of the boys uploaded the footage to Google Video’s Italian site on September 8, 2006.

According to Google, more than 200,000 videos are uploaded to Google Video each day. Under EU legislation incorporated into Italian law in 2003, Internet service providers are not responsible for monitoring third-party content on their sites, but are required to remove content considered offensive if they receive a complaint about it. Between November 6 and 7, 2006, Google received two separate requests for the removal of the video–one from a user, and one from the Italian Interior Ministry, the authority responsible for investigating Internet-related crimes. Google removed the video on November 7, 2006, within 24 hours of receiving the requests.

Nonetheless, Milan public prosecutor Francesco Cajani decided that by allowing the 191-second clip onto its site, Google executives were in breach of Italian penal code. . . .

Cajani is prosecuting Google as an Internet content provider. Under the Italian penal code, content providers, unlike Internet service providers, are responsible for the third-party content posted to their sites. This is essentially the same law that regulates newspaper and television publishers.

I’ve been quite critical of very broad immunity for websites or ISPs that host defamatory or privacy invasive content of others. See Chapter 6 of The Future of Reputation. However, I find this Italian prosecution extremely troubling. And if I find it troubling, one can only imagine how apoplectic Professor Eric Goldman will be!

First, this is a criminal prosecution, and I’m generally very troubled by criminal prosecutions for defamation or privacy invasions. There might be some limited circumstances where criminal liability is warranted, though I believe the problem is best dealt with through civil liability, not criminal. While the prospect of civil liability can certainly chill speech, criminal law is an even more serious threat, and it shouldn’t be treated in the same way. Free speech protections should therefore be greater when criminal liability is involved.

Second, Google is not the content provider here. It shouldn’t be prosecuted as one. Apparently, from the reports (I haven’t seen the specific Italian law), Italy has a law that resembles the Communications Decency Act (CDA), 47 U.S.C. § 230, which immunizes a website or ISP for content posted by others. I agree with this general immunity. However, I believe that if a website or ISP is on notice that content is defamatory or invasive of privacy, then it must take down that material or lose its immunity from civil liability. Under the CDA, as interpreted by the courts, websites and ISPs are immune even after learning that content posted on their sites is defamatory or invasive of privacy. I’ve argued that immunity under these circumstances goes too far. From what I’ve read, Italian law adopts the position I advocate.

But Google complied with the law and took down the video after being notified. Thus, I don’t understand what Google did wrong. I don’t understand how it can be deemed the content provider. If Google officials can be criminally prosecuted any time a person uploads a defamatory or privacy invasive video to YouTube, it’s hard to see how they could possibly avoid running afoul of the law. YouTube and much of Web 2.0 would pose massive risks of criminal liability.

So as one who has strongly advocated for less immunity for defamatory and privacy invasive material online, even I find Italy’s prosecution of Fleischer and other Google executives to be quite outrageous and unjustified.

If anyone has a link to the Italian ISP immunity legislation in English, as well as more information about the specific criminal charges against Google, please let me know.


The Lori Drew Trial: Verdict

A verdict has been reached in the Lori Drew case. Kim Zetter reports:

Lori Drew, the 49-year-old woman charged in the first federal cyberbullying case, was cleared of felony computer-hacking charges by a jury Wednesday morning, but convicted of three misdemeanors. The jury deadlocked on a remaining felony charge of conspiracy.

After just over a day of deliberation, the six-man, six-woman jury acquitted Drew of three felony charges of violating the federal Computer Fraud and Abuse Act, in an emotionally charged case that stemmed from a 2006 MySpace hoax targeting a 13-year-old girl, who later committed suicide.

Tina Meier, the mother of the girl, shook her head silently from the gallery as the verdict was read.

Prosecutors claimed Drew and others obtained unauthorized access to MySpace by creating a fake profile for a nonexistent 16-year-old boy named “Josh Evans.” The account was used to flirt with, and then reject, 13-year-old Megan Meier. The case hinged on the government’s novel argument that violating MySpace’s terms of service for the purpose of harming another was the legal equivalent of computer hacking, and Drew faced a maximum sentence of five years in prison for each charge.

But on Wednesday, jurors found Drew guilty only of three counts of gaining unauthorized access to MySpace for the purpose of obtaining information on Megan Meier — misdemeanors that potentially carry up to a year in prison, but most likely will result in no time in custody. The jury unanimously rejected the three felony computer hacking charges that alleged the unauthorized access was part of a scheme to intentionally inflict emotional distress on Megan.


The Lori Drew Case: Why Not Rule on the Motions?

According to Kim Zetter’s account of the Lori Drew trial, Judge Wu has postponed ruling on any of the legal issues until after the jury’s verdict:

When the prosecution rested its case Friday at about 2:00 p.m., defense attorney H. Dean Steward moved for an immediate dismissal, based on testimony that proved Drew never saw MySpace’s contract, and wasn’t the one who set up the account and accepted the terms.

U.S. District Judge George Wu asked both sides to file written briefs on the issue over the weekend, and allowed testimony to continue in the case.

Why not rule on it now? Judge Wu hasn’t ruled on the merits of how the CFAA should be interpreted, on whether it is unconstitutionally vague, or now on whether the prosecution, as a matter of law, has failed to prove the requisite mens rea. Why won’t he rule on any of these issues?

The only reason I can think of is that he’s waiting to see if the jury acquits Drew, which then moots the issues. This is the only scenario I can think of in which he won’t eventually have to rule on the motions.

Why not just issue a ruling one way or the other? That’s what I thought judges are supposed to do. Is there something I’m missing here about his judicial strategy?


The Lori Drew Case: Sarah Drew’s Testimony

Over at Wired’s Threat Level blog, Kim Zetter’s excellent coverage of the Lori Drew trial continues. In this post, she discusses the testimony of Lori Drew’s daughter Sarah:

The girl’s testimony, if true, supports the defense’s assertions that Lori Drew was unaware of Meier’s previous suicide attempt until after Meier killed herself in 2006.

The younger Drew, who prosecutors say was involved in the creation of the fake MySpace account through which Meier was bullied, denied playing any role in the creation of the account, although she admitted she was present when many of the messages were written and when the final message was sent to Meier telling her the world “would be a better place without you.” She insisted she told Ashley Grills, who confessed to writing the last message, not to send it, although she didn’t say why she told this to Grills.

She said it was Grills — who has been granted immunity by prosecutors — who devised the plan, created the account and sent the messages. Neither she nor her mother knew the account was created until “after the fact,” and neither one was home when Grills clicked on the terms of service to create the Josh Evans profile. She also said her mother wasn’t home when Grills sent the final message to Meier.

The girl’s words seemed designed to strike at the heart of the conspiracy charge against her mother, which asserts that she conspired with her daughter and Grills to intentionally violate the MySpace terms of service in order to inflict intentional emotional distress on Meier.

Grills had said that both Drews were with her when she created the account, but that none of them had read the terms of service. That testimony raised questions about whether Lori Drew could be convicted of conspiracy if she didn’t click to agree on the terms of service or even know they existed.


The Lori Drew Case: Does the CFAA Require Knowledge?

Over at Wired’s Threat Level Blog, Kim Zetter is providing great coverage of the Lori Drew case.

Here’s her post about Tina Meier’s testimony (the mother of Megan Meier).

Zetter’s most recent post describes the direct examination of Ashley Grills, one of the people who participated with Lori Drew in the creation of the fake MySpace profile.

The young woman who typed the final, cruel message to 13-year-old Megan Meier the day she killed herself took the stand to testify against her former employer and confidant, Lori Drew, on Thursday.

But several moments in 20-year-old Ashley Grills’ 80 minutes of testimony seemed to undermine the government’s case. The most damaging statement: that it was her idea, not Drew’s, to create a fake MySpace account to befriend Megan.

Though the jury doesn’t have to find that Drew instigated the plan to convict her of conspiracy, the revelation is nonetheless at odds with the government’s position that the 39-year-old Drew took a leading role in creating a MySpace account for “Josh Evans,” a purported 16-year-old boy who flirted with the emotionally vulnerable Megan, and ultimately turned on her. The statement came as Grills described the genesis of the hoax, which unfolded at Drew’s home in September 2006.

Grills was in the kitchen with Drew and Sarah, Lori Drew’s daughter, when she proposed creating a fake MySpace account to get information on Megan. Drew applauded the plan, and thought it was funny, but did not herself conceive it, Grills said.

The three of them crowded around Drew’s computer as Grills set up the profile. None of the three read MySpace’s terms of service first. As Ashley began, Lori and Sarah left for soccer practice, urging Grills to finish up in their absence.

There are several interesting things here. First, the hurtful messages, the ones that led to Meier’s suicide, were penned not by Drew but by Grills, the prosecution’s own witness. Second, and more importantly, the government’s witness conceded that Drew had not read MySpace’s terms of service. Unless the CFAA is a strict liability statute, or can be violated negligently or recklessly, the prosecution must prove that Drew knew she was violating the terms of service. Thus far, I haven’t heard anything to indicate she knew it was a violation of MySpace’s terms of service to create a fake profile. If knowledge is required, and if knowledge isn’t proven, then the prosecution’s case shouldn’t survive a directed verdict motion.

I only know a little about the CFAA, so I ask the experts: Am I correct that knowledge that one accesses a site without authorization is required for there to be a CFAA violation, even under the prosecution’s warped interpretation of the statute?


Voting 2.0

A cherished right in the United States is to vote in secrecy. But what if we don’t want to exercise that right in secret? What if in this age of insecure and inaccurate e-voting machines we want to record our votes and our voting experiences, say with cell phones or video cameras? According to The New York Times, many voters plan to do just that, making it likely that this election will be the “most recorded in history.”

Much like the online communities that came together to expose flaws in Diebold’s source code in 2003 after activist Bev Harris discovered the code on an unsecured website, Web 2.0 platforms are emerging for the sole purpose of recording voting problems. Jon Pincus’s Voter Suppression Wiki will let voters collaborate to collect examples of problems with voting, from exceptionally long lines to more direct attempts to intimidate voters. Allison Fine and Nancy Scola are using Twitter to monitor voting problems. YouTube has created a channel, Video Your Vote, to encourage submissions. Even The New York Times has a Polling Place Photo Project on its website. Such public participation will no doubt generate crucial information for states and the Election Assistance Commission to study and may even enhance the legitimacy of this election.


Judge Kozinski: The First Amendment Is Dead

Judge Alex Kozinski came to Temple this afternoon and delivered the Arlin Adams lecture, on “The Late, Great First Amendment.” Typically provocative, Kozinski argued that individuals’ inability to bring effective lawsuits over internet speech renders existing First Amendment doctrine obsolete. In his view, traditional First Amendment doctrine promoted an informed democratic discourse by maintaining the threat – however remote – of recovery for libel, defamation, copyright infringement, trademark infringement, and the spreading of protected national secrets. By contrast, given the Streisand effect and Wikileaks’ portability, and thus its practical immunity, the modern world provides no effective remedies for unprotected speech.

Without liability pressure disciplining the speech market, Kozinski sketched out a dystopian lemons market for speech: untrusted intermediaries, unreported international and national news, and a cacophony of speakers saying little of interest.

I’m running off to class now, so I don’t have time for an extended analysis, but it strikes me that Kozinski’s eulogy for the First Amendment was premature for at least three reasons: (1) the kind of mass media he mourned – protected by a prior restraint doctrine and fattened by classified ads – is the exception and not the norm in our tradition, so any conclusions relying on the Amendment’s relationship to the particular character of the news media seem overdrawn; (2) as my colleague David Post pointed out, there are strong economic reasons for online intermediaries to establish transparent reputations for honesty – that is, technical warranties ought to solve the lemons problem; (3) speech may be governed by law even if plaintiffs can’t effectively enforce available legal rules. Think international law. Or, closer to home, think about the duty of care in Delaware. No one really believes that corporate actors are acting according to their whim and fancy despite facing no remedy for their negligence. If the First Amendment has no downside teeth, it can still create sticky norms.

As I said, a great speech. It featured references to David Lat & the Volokh Conspiracy, among others. But not CoOp. Maybe we ought to be running a hotties contest.

More later (maybe).