Future of the Internet Symposium: The Role of Infrastructure Management in Determining Internet Freedom

Last week, Facebook reportedly blocked users of Apple’s new Ping social networking service from reaching Facebook friends because the company was concerned about the prospect of massive amounts of traffic inundating its servers.  This is precisely the type of architectural lockdown Jonathan Zittrain presciently warns of in The Future of the Internet and How to Stop It.  Contemplating this service blockage and re-reading Jonathan’s book this weekend have me thinking about the role of private industry infrastructure management in shaping Internet freedom.

The Privatization of Internet Governance

I’m heading to the United Nations Internet Governance Forum in Vilnius, Lithuania, where I will be speaking on a panel with Vinton Cerf and members of the Youth Coalition on Internet Governance about “Core Internet Values and the Principles of Internet Governance Across Generations.”  What role will “infrastructure management” values increasingly play in private industry’s ordering of the flow of information on the Internet?  The privatization of Internet governance is an area that has not received enough attention.  Internet scholars are often focused on content.  Internet governance debates often reduce to an exaggerated dichotomy, as Milton Mueller describes it, between the extremes of cyberlibertarianism and cyberconservatism. The former can resemble utopian technological determinism, and the latter is essentially a state sovereignty model that seeks to extend traditional forms of state control to the Internet.

The cyberlibertarian and cyberconservative perspectives are alike in that they both tend to disregard the infrastructure governance sinews already permeating the Internet’s technical architecture.  There is also too much attention to institutional governance battles and to the Internet Governance Forum itself, which is, in my opinion, a red herring because it has no policy-making authority and fails to address important controversies.

Where there is attention to the role of private sector network management and traffic shaping, much analysis has focused on “last mile” issues of interconnection rather than the Internet’s backbone architecture.  Network neutrality debates are a prime example of this.  Another genre of policy attention addresses corporate social responsibility at the content level, such as the Facebook Beacon controversy and the criticism Google initially took for complying with government requests to delete politically sensitive YouTube videos and filter content. These are critical issues, but equally important and less visible decisions occur at the architectural level of infrastructure management.  I’d like to briefly mention two examples of private sector infrastructure management functions that also have implications for Internet freedom and innovation: private sector Internet backbone peering agreements and the use of deep packet inspection for network management.

Private Sector Internet Backbone Peering Agreements

For the Internet to successfully operate, Internet backbones obviously must connect with one another.  These backbone networks are owned and operated primarily by private telecommunications companies such as British Telecom, Korea Telecom, Verizon, AT&T, Internet Initiative Japan and Comcast.  Independent commercial networks conjoin either at private Internet connection points between two companies or at multi-party Internet exchange points (IXPs).

IXPs are the physical junctures where different companies’ backbone trunks interconnect, exchanging Internet packets and routing them toward their appropriate destinations.  One of the largest IXPs (based on throughput of peak traffic) is the Deutscher Commercial Internet Exchange (DE-CIX) in Frankfurt, Germany.  This IXP connects hundreds of Internet providers, including content delivery networks and web hosting services as well as Internet service providers.  Google, Sprint, Level3, and Yahoo all connect through DE-CIX, as well as through many other IXPs.

Other interconnection points involve private contractual arrangements between two telecommunications companies to connect for the purpose of exchanging Internet traffic. Making this connection at private interconnection points requires physical interconnectivity and equipment but it also involves agreements about cost, responsibilities, and performance. There are generally two types of agreements – peering agreements and transit agreements. Peering agreements refer to mutually beneficial arrangements whereby no money is exchanged among companies agreeing to exchange traffic at interconnection points.  In a transit agreement, one telecommunications company agrees to pay a backbone provider for interconnection. There is no standard approach for the actual agreement to peer or transit, with some interconnections involving formal contracts and others based upon verbal agreements between companies’ technical personnel.
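The economic distinction between the two agreement types can be illustrated with a toy model. This is only a sketch; the function name, traffic volumes, and per-gigabyte rate below are hypothetical, and real interconnection agreements involve far more than a settlement formula.

```python
# Toy model of the two interconnection agreement types described above.
# All names, rates, and traffic figures are hypothetical illustrations.

def settlement(agreement_type, gb_exchanged, transit_rate_per_gb=0.0):
    """Return what one network owes the other under a given agreement.

    Under a peering agreement no money changes hands; under a transit
    agreement the customer pays the backbone provider per unit of traffic.
    """
    if agreement_type == "peering":
        return 0.0  # settlement-free: mutual benefit, no payment
    elif agreement_type == "transit":
        return gb_exchanged * transit_rate_per_gb
    raise ValueError(f"unknown agreement type: {agreement_type}")

# A regional ISP peers with a similarly sized network at no charge...
print(settlement("peering", gb_exchanged=50_000))  # 0.0
# ...but buys transit from a large backbone at a (hypothetical) $0.002/GB.
print(settlement("transit", gb_exchanged=50_000, transit_rate_per_gb=0.002))  # 100.0
```

The asymmetry in the toy model mirrors the bargaining dynamic in the text: a network that cannot negotiate settlement-free peering with a dominant backbone must pay for transit, which is one source of the developing-country cost complaints discussed below.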

Interconnection agreements are an unseen regime.  They are governed by few directly relevant statutes, subject to almost no regulatory oversight, and shrouded in the confidentiality of private contracts and agreements.  Yet these interconnection points have important economic implications for the future of the Internet.  They certainly have critical infrastructure implications, depending on whether they provide sufficient redundancy, capacity, and security.  Disputes over peering and transit agreements, not just problems with physical architecture, have created network outages in the past.  The effect on free market competition is another concern, related to the possible lack of competition in Internet backbones, dominance by a small number of companies, and peering agreements among large providers that could disadvantage potential competitors.  Global interconnection disputes have been numerous, and developing countries have complained about the transit costs of connecting to dominant backbone providers.  Interconnection patents are another emerging concern with implications for innovation.  Interconnection points are also obvious potential sites of government filtering and censorship.  Because of the possible implications for innovation and freedom, greater transparency and insight into the arrangements and configurations at these sites would be very helpful.

Network Management via Deep Packet Inspection

Another infrastructure management technique with implications for the future of the Internet is the use of deep packet inspection (DPI) for network management and traffic shaping.  DPI is a capability built into network devices (e.g., firewalls) that scrutinizes the entire contents of a packet, including the payload as well as the packet header.  The payload is the actual information content of the packet.  The bulk of Internet traffic is information payload, versus the small amount of administrative and routing information contained within packet headers.  ISPs and other information intermediaries have traditionally used packet headers to route packets, perform statistical analysis, and carry out routine network management and traffic optimization.  Until recent years, it was not technically viable to inspect the actual content of packets because of the enormous processing speeds and computing resources necessary to perform this function.
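The header-versus-payload distinction can be made concrete with a short sketch. Conventional routing reads only the header; DPI also scans the payload. The packet below is hand-built for illustration (addresses and the signature string are invented), and the parser handles only a simplified IPv4 layout, not real captured traffic.

```python
# Sketch of the header-vs-payload distinction behind deep packet inspection.
# The packet is a hand-built illustration, not real captured traffic.
import struct

def parse_ipv4(packet: bytes):
    """Split a (simplified) IPv4 packet into header fields and payload."""
    ihl = (packet[0] & 0x0F) * 4          # header length in bytes
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    return {"src": src, "dst": dst}, packet[ihl:]

def dpi_scan(payload: bytes, signatures):
    """Flag payloads containing any known signature (e.g. a worm marker)."""
    return [sig for sig in signatures if sig in payload]

# Build a 20-byte IPv4 header (version 4, IHL 5) followed by a payload.
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 0, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([192, 0, 2, 7]))
packet = header + b"GET /index.html EVILWORM"

fields, payload = parse_ipv4(packet)
print(fields)                            # header alone: enough to route
print(dpi_scan(payload, [b"EVILWORM"]))  # DPI finds the embedded signature
```

A router performing ordinary forwarding would stop after `parse_ipv4`; the extra step of running `dpi_scan` over every payload at line rate is what was computationally out of reach until recent years.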

The most publicized instances of DPI have involved the ad-serving practices of service providers wishing to provide highly targeted marketing based on what a customer views or does on the Internet.  Other attention to DPI focuses on concerns about state use of deep packet inspection for Internet censorship.  One of the originally intended uses of DPI, and still an important one, is network security.  DPI can help identify viruses, worms, and other unwanted programs embedded within legitimate information and help prevent denial of service attacks.  What will be the implications of increasingly using DPI for network management functions legitimately concerned with network performance, latency, and other important technical criteria?

Zittrain discusses how the value of trust was designed into the Internet’s original architecture.  The new reality is that the end-to-end architectural principle historically imbued in Internet design has waned considerably over the years with the introduction of network address translation (NAT), firewalls, and other network intermediaries.  Deep packet inspection capability, engineered into routers, will further erode the end-to-end principle, an architectural development with implications for the future of the Internet’s architecture as well as for individual privacy and network neutrality.

As I head to the Internet Governance Forum in Vilnius, Lithuania, Zittrain’s book is a reminder of what is at stake at the intersection of technical expediency and Internet freedom and how private ordering, rather than governments or new Internet governance institutions, will continue to shape the future of the Internet.


3 Responses

  1. Frank Pasq says:

    What an illuminating post–thanks so much for focusing on the interconnection arrangements, which are really under-scrutinized. I also wanted to note other problems with secrecy in the field. For example, a recent article discussed “The Secrecy of FCC Broadband Infrastructure Statistics” (31 Hastings Comm. & Ent L.J. 339 (2009)). Search engine algorithms are also protected by trade secrecy, and according to a recent article even the revenue breakdown for a company like Google is rarely leaked.* I think all these closed systems make it hard to theorize about the “present of the internet,” let alone the future!

    * http://www.fastcompany.com/1687241/what-are-bp-apple-amazon-and-others-spending-on-google-advertising

  2. Steven Bellovin says:

    Private interconnection agreements are indeed very important, but are little known outside of the Internet operational community. Beyond the concerns expressed by Laura, it’s important to note that interconnection agreements contain policy statements. For example, ISP A might buy limited transit from ISP B, but only for certain destinations, certain kinds of traffic, and subject to certain bandwidth constraints. Put another way, if your ISP has purchased transit via two different upstream ISPs, how does it decide which one to use to send your traffic to a given destination? There is a technical answer, spelled out in assorted (very complex) RFCs, but the technical mechanisms set up by network engineers are intended to reflect the business deals reached by management. Setting up these mechanisms is itself a very difficult and challenging task.

    Interconnection policies are quite opaque from the outside. The contracts generally contain confidentiality clauses, and it’s been shown mathematically that it’s infeasible to deduce them from the outside. This raises some interesting legal and societal questions. First, they’re related to the whole network neutrality question. Part of what (some) ISPs want to do involves interconnections: preferred traffic could be routed differently, over a faster but more expensive path. How can, say, the FCC verify that ISP policies are in compliance with some future regulatory scheme? You can’t tell from the outside; you’d have to look at the contracts and at the technical configurations — but the technical configurations are, as I said, quite complex and hard to understand. Auditing, in other words, will be very difficult.

    Another question is a matter of contract law: how does a customer verify that they’re getting what they paid for? Remember that you pay your ISP; you have no business relationship with anyone else. What are the service guarantees you’re receiving? How can you verify that you are indeed receiving that service? You can’t even ask to see the interconnection contracts; they’re confidential.