CCR Symposium: Screening Software

Yesterday, I introduced some practical considerations that suggest the regime of IP logging proposed in Cyber Civil Rights might be less effective than it sounds as a way to identify anonymous harassers. Today, I want to turn to the second of the three concrete policy elements the paper outlines, namely the use of screening software.

There are many hypothetical kinds of computer filtering software that would, if they existed, be highly valuable. For example, software that could filter out illicit copies of copyrighted works, without impinging on fair use, authorized copying, or the exchange of public domain materials, would be greeted eagerly not only by the content industry but also by ISPs. Software that could protect children from obscene materials online without collateral harm to protected expression (such as health information) would be ideal for libraries. Such software would also, as Danielle writes, be “wholly consistent with the Communications Decency Act’s objectives.” Congress has always been happy to permit the kind of well-done filtering imagined in these hypotheticals.

To her credit, Danielle does not assert that such ideal software exists today, and in that respect she stands above a long and unfortunate tradition of wishful thinkers. In fact, she acknowledges that there will be “inevitable failures of this software to screen out all offensive material.” (I imagine Danielle would also acknowledge the converse: the inevitable failure to let through all of the material that is not offensive in the salient sense.)

Is such software feasible? Danielle’s paper summarizes Susan Freiwald to the effect that “reducing defamation through technological means may be possible if companies invest in code to make it feasible.” Freiwald’s original sentence reads: “If a legal rule demanded it, companies would likely invest in code that made it feasible” (Susan Freiwald, Comparative Institutional Analysis in Cyberspace: The Case of Intermediary Liability for Defamation, 14 Harv. J.L. & Tech. 569, at 629). In other words, if the law required firms to invest in trying to solve this problem, they would invest. Freiwald, like Danielle, is apparently optimistic about the likely results of such investment. But the citation doesn’t offer authoritative grounds for optimism.

There’s no shortage of demand for the platonically ideal filtering software. And there would be plenty of privately profitable uses for it, if it did exist, as well as publicly beneficial ones. Public libraries may not provide much of a financial incentive for software development, but the content industries, as the conflicts over Digital Rights Management have repeatedly shown, certainly do. So why haven’t software companies created it yet? One might argue that the potential market is too small, which does not strike me as plausible. Another theory would be that these firms are so ideologically committed to an unfettered Internet that they all choose, all the time, not to make these profitable investments. Yet another would be that they aren’t judging the technical risks and rewards accurately: the task is easier than they believe, or the market larger.

But the explanation that I find most persuasive may also be the simplest: The best we can hope to do, in filtering, is a crude approximation of the Platonic ideal. When software companies offer frustratingly coarse filters, and when they tell us that better ones are not feasible, they are making an admission against interest, and it deserves to be taken seriously.

It’s true that there is a moderate market for in-home filtering software directed at young children, and for some (but not most) workplace environments. These contexts share two important properties: First, the party purchasing the filtering software (parents, business owners, or IT staff) does not have to live under its restrictions, and therefore may be less sensitive to the coarseness of those restrictions; and second, the harm from overblocking is low, because neither young children nor employees acting in a work capacity have as strong an interest in sending and receiving free expression as the median Internet user does.

If ideal filtering were possible—if computers were, or could become, that good at evaluating human expression—then the technology would have applications far beyond the present case of preventing Internet harassment. But consider how hard it is to tell whether something counts as an instance of harassment. Lawyers and judges debate edge cases. Even an example from Danielle’s paper (suggesting that a harasser should be awarded a “Congressional medal”) could plausibly be read in its context as sarcastic reproach of the harasser, rather than endorsement. A search for antagonizing words might catch harassers, but it would also ensnare Danielle’s paper and this symposium.
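
To make that last point concrete, here is a minimal sketch, in Python, of the kind of crude keyword screening described above. The blocklist and the sample sentences are invented for illustration (nothing here is drawn from an actual filtering product or quoted from Danielle’s paper); the point is only that word matching cannot tell harassment apart from scholarship about harassment, and misses context-dependent harassment entirely.

```python
# A minimal sketch of naive keyword screening, for illustration only.
# The blocklist and sample texts below are invented, not taken from any
# real filtering product or from the paper under discussion.

BLOCKLIST = {"rape", "kill", "whore"}  # hypothetical "antagonizing words"


def naive_screen(text: str) -> bool:
    """Return True if simple keyword matching would block this text."""
    words = {w.strip(".,;:!?\"'()").lower() for w in text.split()}
    return bool(words & BLOCKLIST)


samples = [
    # The sort of threat the filter is meant to stop: blocked.
    "Someone should rape and kill her.",
    # A sentence describing that conduct, as a law review article might: also blocked.
    "The posters threatened to rape and kill the student bloggers.",
    # Sarcasm containing no listed word: passes, even though in context it may be harassing.
    "This man deserves a Congressional medal for his work.",
]

for s in samples:
    print(f"blocked={naive_screen(s)!s:<5}  {s}")
```

Real products are of course more elaborate than this, but the underlying failure mode is the same: overblocking speech about harassment while passing harassment that depends on context for its sting.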


1 Response

  1. It’s a fair point, and one that was on my mind while writing yesterday’s post. The debate on filtering so far has been quite unsatisfactory, as snake-oil merchants try to convince us that they have got The Solution, which usually lasts about five minutes before it’s proven to be anything but. We see a good example of this in the suggested Australian national filtering system supervised by the ACMA. Where filtering has been suggested for the purposes of copyright protection (e.g. ISP-level technologies like Audible Magic, which is magic in the sense that no-one knows how it works and it can’t get it right all the time), it provokes a justifiable outcry, and I’ve been one of the outcriers. Whatever about the technological advances, I think we’re still a very long way from an acceptance that, for legal purposes, filtering is even close to ready.