CCR Symposium: Differences Among Online Entities

I’ve described my reactions to the first prong of Danielle’s proposed standard of care (IP logging) as well as the second prong (filtering). I’ll now complete the project with a brief look at the third and final prong of the proposed standard of care: differentiated expectations for different classes of online entities. Regrettably, these thoughts are composed in haste to get them in under the wire of the symposium’s conclusion.

ISPs differ in so many respects from web sites that it is probably best to deal with each in turn. Looking to the ISP side first, a regime of differentiated standards for different classes of service provider could deal effectively with the concerns raised in my first post by exempting home users, public amenities, and other actors poorly positioned to log and authenticate the people to whom they provide service. This would leave IP logging far from comprehensive, but logging by commercial ISPs could continue, as it already does, to provide useful information to law enforcement about which broadband customer originated certain traffic.
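For readers who haven’t watched that process up close, the lookup law enforcement performs against an ISP’s existing records is simple: an IP address plus a timestamp maps back to a subscriber account. Here is a minimal, purely illustrative Python sketch; the log entries, field names, and the subscriber_for helper are all invented for the example, not drawn from any actual ISP system:

    from datetime import datetime

    # Invented sample of an ISP's IP-assignment log: each row records which
    # subscriber account held a dynamically assigned address, and for how long.
    ASSIGNMENT_LOG = [
        {"ip": "203.0.113.7", "account": "subscriber-1041",
         "start": datetime(2009, 4, 1, 8, 0), "end": datetime(2009, 4, 1, 20, 0)},
        {"ip": "203.0.113.7", "account": "subscriber-2230",
         "start": datetime(2009, 4, 1, 20, 0), "end": datetime(2009, 4, 2, 8, 0)},
    ]

    def subscriber_for(ip, when):
        """Return the account that held `ip` at time `when`, if any was logged."""
        for entry in ASSIGNMENT_LOG:
            if entry["ip"] == ip and entry["start"] <= when < entry["end"]:
                return entry["account"]
        return None  # no entry: e.g., traffic from an exempted home or public network

    # A law-enforcement-style query: which customer originated traffic
    # from this address at 10:30 that evening?
    print(subscriber_for("203.0.113.7", datetime(2009, 4, 1, 22, 30)))
    # -> subscriber-2230

The point of the differentiated regime is simply that the second branch, returning nothing, is an acceptable outcome for the exempted classes of provider.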

Danielle’s argument proposes new, harassment-related uses for IP information that is already logged and already routinely used in other legal contexts. This raises a question: between the logging that already occurs and the logging that any real-world implementation of Danielle’s proposal would wisely and reasonably decline to require, is there any new IP logging left for the proposal to introduce? I’m not sure.

As for web sites: large and well-established sites could be forced to filter content, clumsily and with collateral harm. They could be forced to retain logs of each visit, probably without too much added cost. But what about the periodic tendency of new web sites to become popular overnight? In some cases, the sites aren’t well engineered for their newfound popularity. In others, the very features that make the sites popular may inherently make filtering or logging difficult. Twitter may exemplify both phenomena: rather than a well-established site humming along, it has been a growing, unstable, sometimes broken site even as millions have used it. Implementing filtering or logging requirements is difficult for any site still struggling with prior questions, like staying online while overwhelmed by user demand. And at a very high message volume, even a tiny added cost for each new post (CPU cycles to analyze and filter it, or a delay while it waits its turn to be appended to a log) could bring the whole system to a grinding halt.
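To put a rough number on that intuition, here is a back-of-the-envelope Python sketch; the message rate and per-message costs are invented assumptions for illustration, not measurements of Twitter or requirements from any actual proposal:

    # Hypothetical figures: a popular site's peak load and the marginal cost
    # of filtering and logging each message synchronously, on the posting path.
    MESSAGES_PER_SECOND = 5_000   # assumed peak posting rate
    FILTER_COST_SECONDS = 0.002   # assumed 2 ms of CPU to scan one message
    LOG_COST_SECONDS = 0.001      # assumed 1 ms to append one log entry

    def added_load(rate, per_message_cost):
        """CPU-seconds of new work demanded per wall-clock second."""
        return rate * per_message_cost

    total = added_load(MESSAGES_PER_SECOND, FILTER_COST_SECONDS + LOG_COST_SECONDS)

    # 5,000 msg/s x 3 ms = 15 CPU-seconds of extra work every second: the
    # equivalent of fifteen fully occupied cores added to a system that may
    # already be struggling to stay online at all.
    print(f"added load: {total:.0f} CPU-seconds per second")

None of this makes filtering or logging impossible for a mature site with capacity to spare; it just illustrates why a marginal per-message obligation lands hardest on exactly the overnight successes described above.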

There’s much more to say here, and I hope, in time, to be able to develop it further. I’ll end by recording my gratitude to the organizers, my fellow participants, and our readers.


1 Response

  1. That’s very interesting, David, and I’m glad you’ve approached these points so methodically, so I hope you’ll indulge me butting in again. You’re absolutely right, unfortunately, that the differentiation question is easier as between ISPs and sites than between different kinds of site. I’d suggest that Frank Pasquale’s work on search engines gives us something to think about here, in that he and others have been tackling the power exercised by the search engine as a non-ISP intermediary.

    On my earlier post today I pointed to the EU’s three-way distinction between ISP, cache and host as a possible approach, but something I left out was how this interacts with net neutrality and common carriage. I don’t have a problem with some sort of trade-off, something like (simplified for speed):

    – common carrier-type immunity (absolute or close to it) in return for actual common carrier behaviour (no packet inspection, etc),

    – limited immunity (or, put another way, enhanced obligations of scrutiny) in return for no common carrier-type obligations.

    Even though I see the huge value of going easy on hosts, especially when we see how responding to threats makes them trigger-happy (this DMCA study has lots of useful data on that point), it does still seem puzzling (and, to those identifying threats to participation, troubling) that they can host and profit from stacks of actionable abuse regardless of knowledge, even actual knowledge. They do play a different role to the ISP, particularly in that the ISP’s business activities are more general than an individual site’s, and the relationship between content and conduit is quite different.