CCR Symposium: Balancing Anonymity and Accountability Online

Is cyberspace a “wild west”? Should “anything go” online? Are statements and behavior sharply distinguishable? Are the former “only words” that change the world through the unforced force of persuasion, and the latter the kind of action we rightfully fear and regulate?

Danielle Citron’s work Cyber Civil Rights suggests that the answer to all these questions is no. She both applies and challenges the old doctrinal categories we’ve used to deal with these issues, such as the speech/conduct distinction, the line between threats and mere contempt, and defamation law’s parsing of facts and opinions. Technology has made online reputations persistent, searchable, replicable, and visible. Citron powerfully argues that it’s time for the creators of those reputations to be held accountable as well.

Yet who is the creator of an online reputation? Is it individuals or aggregators? If the first pages of Google results that show up for a query about a certain person are dominated by critics, slanderers, rumormongers, and haters, has each of those individuals created her reputation online? Or should the search engine itself (and those using it) take some responsibility, and agree to “technological due process” for the person whose unflattering portrait they’ve painted and sought out?

If we are to build an economy of accountability online, I believe that the behavior of all these groups will have to change. Citron focuses on the ultimate sources of online attacks, and both her rich account of the harms they create and her creative legal approaches to dealing with those harms are commendable. I propose to complement her work with a broader examination of the way in which undeservedly bad reputations are created and maintained online.

Today the usual libertarian response to online malevolence is to urge its victim to track down the source:

If the defendant is known, pursuing such claims is commonplace. The obstacle facing plaintiffs who do not know the legal identity of those who may have defamed them or intruded upon their privacy is the same facing law enforcement: to “subpoena sites and Internet service providers [and other intermediaries] to learn the original author’s IP address, and from there, his legal identity.”

Yet keeping track of everything everyone does online is not exactly a civil libertarian’s dream. Citron recommends a standard of care that would “require website operators to configure their sites to collect and retain visitors’ IP addresses,” but I think the timing of such tracking is tricky. If it were a constant requirement, it could effectively put the weight of tort law behind the British-style super-surveillance that troubles many in the US. I therefore suggest a slight modification to Citron’s proposal: that IP-address tracking become mandatory only after a complaint is made (analogous to DMCA “notice and takedown” procedures), and that it otherwise be left to website owners to decide whether or not to track.
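To make the mechanics of complaint-triggered tracking concrete, here is a minimal sketch in Python. The class and method names are hypothetical illustrations of my suggestion, not Citron’s proposal or any real site’s software: IP retention is off by default, and a complaint switches it on for the affected thread going forward.

```python
import time


class ComplaintAwareLogger:
    """Toy example: IP retention that is off by default and complaint-triggered."""

    def __init__(self):
        self.flagged_threads = set()   # threads with a pending complaint
        self.ip_log = []               # (timestamp, thread_id, ip) tuples

    def file_complaint(self, thread_id):
        """A complaint switches on IP retention for that thread going forward."""
        self.flagged_threads.add(thread_id)

    def record_post(self, thread_id, ip_address):
        """Called on every post; the IP is retained only if the thread is flagged."""
        if thread_id in self.flagged_threads:
            self.ip_log.append((time.time(), thread_id, ip_address))
        # Otherwise nothing is kept: anonymity is preserved by default.


# Posts made before a complaint leave no trace; posts made after it do.
logger = ComplaintAwareLogger()
logger.record_post("thread-42", "203.0.113.7")   # not retained
logger.file_complaint("thread-42")
logger.record_post("thread-42", "203.0.113.9")   # retained
print(logger.ip_log)
```

The point of the sketch is simply that the legal obligation would attach at the moment of complaint, much as a DMCA notice triggers a takedown obligation, rather than conscripting every site into constant surveillance.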

I admit that this revision would effectively institute an “every dog gets one bite” rule in cyberspace; someone may well broadcast or narrowcast threats, slanders, and rumors from within an anonymity-protecting site with effective impunity until he or she is complained about. There are a few reasons to permit this type of non-accountability.

First, we need to recognize the value of samizdat: material that, for one reason or another, authorities want to see suppressed. Anonymity allows its circulation. Commenting on COINTELPRO, one source estimates that “roughly 20,000 people [were] investigated by the FBI solely on the basis of their political views between 1956-1971.” Jack Balkin has observed the dangers associated with the increasing precision of data collection and retention in a “national surveillance state”:

The National Surveillance State poses . . . major dangers for our freedom. The first danger is that government will create a parallel track of preventative law enforcement that routes around the traditional guarantees of the Bill of Rights. The second danger is that traditional law enforcement and social services will increasingly resemble the parallel track. Once governments have access to powerful surveillance and data mining technologies, there will be enormous political pressure to use them in everyday law enforcement and for delivery of government services. Private power and public-private cooperation pose a third danger. Because the Constitution does not reach private parties, government has increasing incentives to rely on private enterprise to collect and generate information for it, thus circumventing constitutional guarantees.

There need to be safe spaces on the web where those with unpopular views can exchange ideas without fear of retribution . . . until, of course, such exchanges degenerate into slanders, active threats against individuals, or damaging rumors.

Second, the “one bite” rule would put some well-deserved focus on the intermediaries who are often the main reason why some isolated act of “malwebolence” persists in harming the reputation of its victim. While the law requires credit bureaus to remove bankruptcies from credit reports after a decade, new online entities can collect damaging information from public records and across the web and elevate its salience indefinitely. Intermediaries particularly need to be held responsible in an age of “personalized search.” For example, imagine you apply for a job, and the HR department has “personalized” its search results to ensure that the most damaging information available about a person (from its perspective) comes up first. You would need access to personalization software like the department’s own in order to be fully aware of all the negative information such a personalized search was generating. Yet trade secrecy and contracts will likely prevent you from ever accessing an exact replica of the programs used by the educators, employers, landlords, bankers, and others making vital decisions about your future.
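A toy illustration of the salience problem, using entirely hypothetical data and scoring: the same set of results can be ordered very differently depending on a viewer-specific weighting, so the ranking a screener sees need not resemble the one the subject sees.

```python
# Hypothetical data and scoring, for illustration only.
results = [
    {"title": "Award for community service", "damaging_score": 0.1},
    {"title": "Decade-old arrest record",    "damaging_score": 0.9},
    {"title": "Anonymous forum rumor",       "damaging_score": 0.8},
]


def personalize(items, weight_on_damaging):
    """Re-rank the same results under a viewer-specific preference weight."""
    return sorted(items,
                  key=lambda r: weight_on_damaging * r["damaging_score"],
                  reverse=True)


# A screener who weights "risk" heavily sees the damaging items first...
print([r["title"] for r in personalize(results, weight_on_damaging=1.0)])
# ...while an unweighted search leaves the original ordering untouched,
# so the subject may never see the ranking decision-makers actually rely on.
print([r["title"] for r in personalize(results, weight_on_damaging=0.0)])
```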

Citron has argued that “technological due process” should create safeguards for those affected by public agencies’ automated decisionmaking. I believe that a privacy-protecting “Fair Reputation Reporting Act” aimed at important private decisionmakers (like banks, employers, and insurers) could help address the reputational harms caused by those who abuse the technological “Ring of Gyges” that online anonymity creates. Rumors about a person can percolate in blogs and message boards for years. Even if the First Amendment and anonymity offer de jure and de facto protection to the authors of such rumors, affected individuals deserve to know whether certain important decisionmakers rely on them. In limited cases, the intermediary source of the information should also provide the target of a derogatory posting with the opportunity to annotate it. A Fair Reputation Reporting Act would empower individuals to know the basis of adverse employment, credit, and insurance decisions, and to go to their source (and the source of their salience) to demand some relief from digital scarlet letters.

To summarize: Citron’s CCR breaks important new ground in promoting accountability online and protecting the interests of the vulnerable. A slight revision to her proposal’s promotion of mandatory IP-address tracking would better recognize the social value of anonymity. Responsible intermediaries could address the inevitable harms that would result from such tailoring of the proposal, and a Fair Reputation Reporting Act might dampen the types of voyeuristic curiosity that give rumors and innuendoes such an undeservedly long life online.
