CCR Symposium: Practical Aspects of IP Logging

I’m honored to be taking part in the symposium. Danielle’s article illustrates an important problem and does a great job—as this ongoing symposium itself illustrates—of launching a conversation about how that problem may be addressed.

Thus far, the symposium’s discussion of “traceable anonymity” has focused on its legal and normative aspects. Danielle writes that the suitable standard of care for ISPs and web sites has three elements. In this post, I’ll review what those elements are, and discuss the first (mandatory IP address logging, which she calls “traceable anonymity”) in some detail. I’ll save the latter two elements of the proposed standard of care for subsequent posts.

Working at the boundary between policy scholarship and technical scholarship, one frequently observes a kind of “reciprocal optimism,” in which the lawyers make optimistic assumptions about how well technical solutions will work, and the technologists make optimistic assumptions about how well legal solutions will work. IP logging is, I fear, an instance of the former tendency.

Here are the three elements of the standard of care that Danielle proposes (emphases mine):

. . . First, it should require website operators to configure their sites to collect and retain visitors’ IP addresses. In other words, the standard of care should demand “traceable anonymity.” This would allow posters to comment anonymously to the outside world but permit their identity to be traced in the event they engage in unlawful behavior. Requiring traceable anonymity is hardly a burdensome step: some blogs already deny access to anonymous posters. . .

Second, as screening software advances, some classes of online actors may reasonably be expected to deploy the software to limit the amount and kinds of harmful materials on their sites. This certainly is wholly consistent with the Communications Decency Act’s objectives. As Susan Freiwald explains, reducing defamation through technological means may be possible if companies invest in code to make it feasible. Naturally, online actors would not be liable for the inevitable failures of this software to screen out all offensive material as § 230 demands. But making a reasonable, good-faith attempt to conduct cost-effective screening could significantly reduce harm. . .

Third, and more generally, the duty of care should take into account differences among online entities. ISPs and massive blogs with hundreds or thousands of postings a day cannot plausibly monitor the content of all postings. The duty of care will also surely evolve as technology improves. Current screening technology is far more effective against some kinds of abusive material than others; progress may produce cost-effective means of defeating other attacks. Conversely, technological advances will likely offer online mobs new means of carrying out their assaults, creating new risks against which victims can ask website operators to take reasonable precautions.

Suppose that we did implement the first of these three suggestions: U.S. web sites have to log the IP addresses of all their users, and with appropriate court process, those logs are available to people pursuing legal action against harassers. (ISPs, who already do log the IP addresses of their users, would be likewise required to provide their logs.) Would this permit anonymous actors to be unmasked?

As Nathaniel has pointed out, “Those who are aware that they are engaging in illegal behavior can take steps to ensure that their identity is virtually impossible to determine. If an egregious defamer takes steps to shield himself, he will likely not be held liable. Thus, targets of online harassment can often uncover only those posters who didn’t think their speech was problematic in the first place. . .” Nathaniel and other posters have also highlighted the problems of international enforcement.

Both of these issues are significant, particularly since an interest in protecting dissident and other threatened groups has led the open source community to make it extremely easy for even a non-technical Internet user to engage in effectively untraceable speech online.

But there is another, much larger, reason why mandatory IP address logging by web sites and ISPs would fail to make anonymity traceable: Under a surprisingly broad range of routine circumstances, parties who are neither web site operators nor ISPs are providing Internet connectivity in a fashion that breaks the link between user identity and IP address.

When an Internet connection is shared among multiple users, the most common technical setup is called “network address translation” (NAT). Basically, a group of users all connect to one another, and jointly share a single IP address on the Internet. As incoming traffic arrives at the shared IP address, its addressing information is translated so that it can be passed along to the right user.
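To make the mechanics concrete, here is a minimal sketch, in Python, of the translation table a NAT router maintains. All addresses and ports are invented for illustration; real NAT devices also rewrite checksums, expire old entries, and so on. The key point is that outbound traffic from every internal machine is rewritten to carry the same public address, so an outside web server’s logs record only that shared address:

```python
# Toy model of a NAT translation table (illustrative only).

PUBLIC_IP = "203.0.113.7"  # the one address the outside world sees

class NatRouter:
    def __init__(self):
        # public port -> (private IP, private port)
        self.table = {}
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """Rewrite an outgoing packet's source address to the shared one."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (private_ip, private_port)
        return PUBLIC_IP, public_port

    def inbound(self, public_port):
        """Route a reply back to the right internal machine."""
        return self.table[public_port]

router = NatRouter()
# Three different users behind the same router visit the same site:
a = router.outbound("192.168.1.10", 51515)
b = router.outbound("192.168.1.11", 51516)
c = router.outbound("192.168.1.12", 51517)

# The remote web server logs an identical IP address for all three.
assert a[0] == b[0] == c[0] == PUBLIC_IP
```

The router’s in-memory table is what distinguishes the three users, and that table is discarded as connections close, unless someone makes a point of logging it.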

This setup is common in what you might think of as typical local area networks, such as those at businesses and universities. For example, when any member of the Princeton Computer Science department visits a web site, the address he or she is “coming from” is always the same: the address of the department’s firewall, which not only does NAT but also blocks various kinds of malicious traffic.

Of course, one could require (all? some?) businesses and nonprofits that provide Internet access to keep logs identifying which of their users sent which traffic. Some businesses and nonprofits do already log this information, but many do not. Requiring all to maintain such logs would impose new costs on those who do not already do so, mostly by increasing the amount of skilled IT staffing required to operate a local network.

But the NAT problem extends far beyond “enterprise” networks. Any coffee shop or other public amenity that currently provides free public wifi would need to log the connections it provides. Today, a coffee shop can configure a broadband modem and router and simply leave them running; but if we want IP addresses to be traceable back to end users, we’ll need each shop to capture and curate logs of its transient users.

In fact, mere automatic logging might not be enough, since the coffee shop doesn’t know which computer belongs to whom: whatever information about each user we want to be available (name? address?), we would have to require the coffee shop to collect. Nor would it be sufficient to accept just any assertion by a customer about his or her identity. To have a reasonable expectation that the information is right, particularly in the cases of bad actors that we care about, we would want the coffee shops to take steps to verify the identity of their customers. If, as it seems safe to assume, coffee shops don’t want to worry about verifying the identities of their customers, then identifying anonymous Internet users in coffee shops would instead require some kind of national system whose use is compulsory. And in any case, the costs of logging would presumably still be borne by the shop. The coffee shop might pass the costs of such a regime on to consumers, absorb them as diminished profits, or decide, in light of the requirement, to stop providing Internet connectivity.
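To see what “capture and curate” would actually entail, here is a hypothetical sketch, in Python, of the record a coffee shop would need to keep for each connection if IP addresses are to be traceable back to a verified person. Every field name here is my invention, not part of any proposal, and nothing in today’s off-the-shelf wifi gear collects or verifies the identity fields, which is precisely the gap:

```python
# Hypothetical per-connection log record for a shared access point.
# Field names are illustrative, not drawn from any real system.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConnectionRecord:
    start: datetime           # when the device joined the network
    end: datetime             # when it left
    device_mac: str           # hardware address (trivially spoofable)
    private_ip: str           # address assigned on the local network
    public_ports: tuple       # NAT ports used; needed to match server logs
    customer_name: str        # would require collection at the counter
    id_verified: bool         # would require checking some credential

rec = ConnectionRecord(
    start=datetime(2009, 4, 1, 9, 30),
    end=datetime(2009, 4, 1, 10, 15),
    device_mac="aa:bb:cc:dd:ee:ff",
    private_ip="192.168.1.23",
    public_ports=(40000, 40128),
    customer_name="",     # today: unknown to the shop
    id_verified=False,    # today: never checked
)
```

Without the last two fields, the log ties traffic to a device rather than a person, and the device fields themselves can be forged by anyone who expects to be traced.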

The problem also extends beyond even small commercial settings, and into individual homes. If you go to Best Buy to purchase a home router, it will automatically provide NAT in order to let you connect multiple computers over your single home broadband connection, and at your option it will offer publicly available wifi as well. The ISP will know that traffic comes from a given broadband customer, which can be useful information, but the ISP will not know which of the computers connected to a local router originated particular traffic. To get hold of that information, we’d need to require individual home users to log the activity of the several computers that share their home broadband connection. If you offer wifi openly, your neighbor can use it. If you have a friend visiting who keeps an iPhone in her pocketbook, or if you bring home a new TiVo with wireless capabilities, those devices will also look like laptops and need to be logged. Or, they won’t be logged and could be used for anonymous mischief.

Each of these parties might be required to log IP addresses, or we might start making exceptions to the requirement. In the former case, financial costs and foregone utility would be substantial. In the latter, the practical reach of “traceable anonymity” could be substantially diminished, even for domestic users who take no steps to obscure their identities.

6 Responses

  1. Paul Ohm says:

    I think you raise a lot of excellent points, David, and I’m very glad you’re participating. I will counter, however, that you might be overstating the NAT point. Many, many harassers are probably not obscured by NAT in this way. Most home users–even the ones behind NAT routers–are traceable to someone with their IP address and a civil subpoena. If the law does nothing but cause would-be harassers to scurry to open access points, Internet cafes, and the benches outside the Princeton CS building, while forcing them all to stop harassing from home, that’s still a major accomplishment, as far as I am concerned.

    I’d also recommend you look at the growing comment thread to Prof. Froomkin’s post, if you haven’t already, where we’re debating related topics.

  2. All points well taken. There would indeed be some cases, perhaps a large number, where technically unskilled or careless Internet harassers, operating from home, would be easier to find thanks to this policy. And those cases might justify the policy.

    I guess the main thing I was trying to illustrate is that even if Danielle would prefer traceable anonymity to be comprehensive, the policy she has suggested would not come close to making it so. This also responds to Prof. Froomkin’s claim; I’ll take it up over there.

  3. For those who want a detailed look at some of the technical issues in traceability, let me refer them to Richard Clayton’s PhD dissertation, “Anonymity and Traceability in Cyberspace,” especially chapters 2-3.

  4. Note: I just realized that I was making an incorrectly broad use of the term “traceable anonymity,” which, as its name would suggest, applies to only the first prong of Danielle’s three-pronged standard of care. I’ve now corrected that above. I regret the error and won’t repeat it.