CCR Symposium: Differences Among Online Entities
I’ve described my reactions to the first prong of Danielle’s proposed standard of care (IP logging) as well as the second prong (filtering). I’ll now complete the project with a brief look at the third and final prong of the proposed standard of care: differentiated expectations for different classes of online entities. Regrettably, these thoughts are composed in haste to get them in under the wire of the symposium’s conclusion.
ISPs differ from web sites in so many respects that it is probably best to deal with each in turn. On the ISP side, a regime of differentiated standards for different classes of service provider could effectively address the concerns raised in my first post by exempting home users, public amenities, and other actors poorly positioned to log and authenticate the people they serve from the IP logging regime. This would make IP logging far from comprehensive, but logging by commercial ISPs could continue, as it already does, to give law enforcement useful information about which broadband customer originated particular traffic.
Danielle’s argument proposes new, harassment-related uses for IP information that is already logged and already routinely used in other legal contexts. This raises the question: In between the IP logging that already does occur, and the IP logging that a real-world implementation of Danielle’s proposal would wisely and reasonably not require, is there any new IP logging that the proposal would introduce for service providers? I’m not sure.
As for web sites: Large and well-established sites could be forced to filter content, though clumsily and with collateral harm. They could be forced to retain logs of each visit, probably without much added cost. But what about the periodic tendency of new web sites to become popular overnight? In some cases, the sites aren’t well engineered for their newfound popularity. In others, the very features that make them popular may inherently make filtering or logging difficult. Twitter may be an example of both phenomena: rather than a well-established site humming along, it has been a growing, unstable, sometimes broken site even with millions of users. Implementing filtering or logging requirements is difficult for any site that struggles with prior questions, like staying online under overwhelming user demand. And at very high message volume, even a tiny added cost per message (CPU cycles to analyze and filter, or a delay while the message waits its turn to be added to a log) could bring the whole system to a grinding halt.
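To make the scale concern concrete, here is a rough back-of-envelope sketch of how a small per-message cost compounds at high volume. Every figure below is an illustrative assumption of mine, not a measurement of Twitter or any real site:

```python
# Hypothetical back-of-envelope arithmetic: how a small per-message
# cost for filtering or logging compounds at high message volume.
# All figures are illustrative assumptions, not measured data.

MESSAGES_PER_SECOND = 5_000    # assumed peak posting rate
FILTER_COST_SECONDS = 0.002    # assumed CPU time to analyze one message
LOG_WRITE_SECONDS = 0.001      # assumed synchronous log-write delay

def added_cpu_seconds_per_second(rate, per_message_cost):
    """CPU-seconds of extra work demanded each wall-clock second."""
    return rate * per_message_cost

extra = added_cpu_seconds_per_second(
    MESSAGES_PER_SECOND, FILTER_COST_SECONDS + LOG_WRITE_SECONDS
)

# At these assumed figures, the site needs about 15 extra CPU-seconds
# of work every second -- roughly 15 additional cores running flat
# out -- just to absorb the per-message overhead.
print(f"Extra CPU-seconds per second: {extra:.0f}")
```

The point of the sketch is not the particular numbers but the multiplication: any fixed per-message cost scales linearly with posting volume, so a site already strained by demand has no slack to absorb it.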
There’s much more to say here, and I hope, in time, to be able to develop it further. I’ll end by recording my gratitude to the organizers, my fellow participants, and our readers.