Future of the Internet Symposium: Generative End Hosts vs. Generative Networks?
Which factors have allowed the Internet to foster application innovation in the past, and how can we maintain the Internet’s ability to serve as an engine of innovation in the future? These questions are central to current engineering and policy debates over the future of the Internet. They are the subject of Jonathan Zittrain’s The Future of the Internet and How to Stop It and of my book Internet Architecture and Innovation, which was published by MIT Press last month.
As I show in Internet Architecture and Innovation, the Internet’s original architecture had two components that jointly created an economic environment that fostered application innovation:
1. A network that was able to support a wide variety of current and future applications (in particular, a network that did not need to be changed to allow a new application to run) and that did not allow network providers to discriminate among applications or classes of applications. As I show in the book, using the broad version of the end-to-end arguments (i.e., the design principle that was used to create the Internet’s original architecture) to design the architecture of a network creates a network with these characteristics.
2. A sufficient number of general-purpose end hosts that allowed their users to install and run any application they liked.
Both are essential components of the architecture that has allowed the Internet to be what Zittrain calls “generative” – “to produce unanticipated change through unfiltered contributions from broad and varied audiences.”
In The Future of the Internet and How to Stop It, Zittrain puts the spotlight on the second component: general-purpose end hosts that allow users to install and run any application they like, and their importance for the generativity of the overall system.
Three trends, he argues, threaten to bring about a world in which users will increasingly access the Internet through information appliances or locked-down PCs, endangering the Internet’s ability to serve as an engine of innovation in the future:
* the emergence of tethered information appliances;
* the move towards software as a service; and
* a misguided focus on end-host-based security measures.
Zittrain’s thought-provoking book draws much-needed attention to a component of the Internet’s architecture that network engineers and Internet policy makers had mostly taken for granted (for an early exception, see Gillett, Lehr, Wroclawski and Clark, 2001, Do Appliances Threaten Internet Innovation?) and to the different ways in which this component may be threatened.
I agree with Zittrain that without a sufficient number of general-purpose end hosts controlled by end users, the Internet’s engine of innovation would start to stutter. The questions of
* what exactly constitutes a sufficient number and
* whether something (and, if so, what) needs to be done to make sure that enough generative end hosts remain
are important questions we need to think about.
I have two concerns:
1. In his desire to increase awareness of the importance of generative end hosts, Zittrain seems to downplay the importance of a generative network infrastructure. The generativity of the overall system, however, rests on both components. It would be a mistake to emphasize one at the expense of the other.
2. I’m not sure we need a new “generativity principle” to address the security problem that Zittrain describes. Instead, Zittrain’s concerns can be addressed within the framework provided by the broad version of the end-to-end arguments. I will tackle this issue in my second post.
Preserving the generativity of the network infrastructure remains important
It’s not entirely clear to me whether Zittrain thinks generative end hosts are more important than a generative network infrastructure. Some parts of the book leave the question open (“So what can generativity contribute to this debate? One lesson is that the endpoints matter at least as much as the network.” Zittrain, p. 180, in the context of the discussion of the network neutrality debate). Other parts seem to suggest that protecting the generativity of the network, for example through network neutrality rules, may be less important. This assertion seems to rest on two arguments: 1) that the generativity of the network is not threatened, and 2) that even if the generativity of the network were compromised, generative end hosts would be able to overcome this problem.
1. The generativity of the network is threatened
In the section on “Network Neutrality and Generativity,” Zittrain argues that “so far, generativity is alive and well at the network level.” (p. 180). I don’t share this assessment.
In the past, the generativity of the network infrastructure resulted from the application of the broad version of the end-to-end arguments. As I show in chapter 7 of my book, an architecture can deviate from the broad version of the end-to-end arguments along two dimensions: It can become more “opaque” by implementing more application-specific functionality in the network’s core, or it can become more “controllable” by increasing network providers’ ability to control applications and content on their networks. The Internet’s architecture currently deviates from the broad version of the end-to-end arguments along both of these dimensions, with negative consequences for application innovation.
On the one hand, the network has become more opaque. The broad version of the end-to-end arguments requires the lower layers of the network to be very general; they should not be optimized in favor of specific applications. In the current Internet, asymmetric bandwidth to and from the home, network address translators and firewalls all implicitly optimize the network for the needs of client-server applications, creating difficulties for applications with different needs. In particular, network address translators and firewalls, taken together, have made it very difficult to develop and deploy new applications whose mode of operation differs from that of client-server applications. This applies, for example, to peer-to-peer applications, applications that use UDP, and applications that use one signaling connection to set up a second connection. Network address translators and firewalls have also made it almost impossible to deploy new transport protocols, leading to “ossification” of the transport layer (for more on this, see my book, pp. 385-386). Thus, deviations from the broad version of the end-to-end arguments create serious problems for innovation in new applications and transport layer protocols today.
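To make concrete why network address translators favor client-server applications, here is a toy sketch of my own (a deliberately simplified model, not real NAT code): the translator delivers an inbound packet only if a mapping was created by earlier outbound traffic, so a client’s request gets a reply through, while an unsolicited connection attempt from a peer is dropped.

```python
# Toy model (my own illustration, not a real NAT implementation) of why
# NATs favor client-server traffic: inbound packets are forwarded only
# if a matching outbound mapping already exists.

class ToyNAT:
    def __init__(self):
        # Set of (internal_host, external_host) mappings created by
        # outbound traffic, standing in for the NAT's translation table.
        self.mappings = set()

    def send_outbound(self, internal_host, external_host):
        # Outbound traffic creates a mapping, as address translation does.
        self.mappings.add((internal_host, external_host))

    def deliver_inbound(self, external_host, internal_host):
        # Inbound traffic is delivered only if the internal host
        # contacted this external host first.
        return (internal_host, external_host) in self.mappings


nat = ToyNAT()

# Client-server: the host behind the NAT initiates, so the reply gets in.
nat.send_outbound("client", "server")
assert nat.deliver_inbound("server", "client")

# Peer-to-peer: an unsolicited inbound attempt finds no mapping and is
# dropped -- the reason P2P apps need extra machinery such as rendezvous
# servers or hole punching.
assert not nat.deliver_inbound("peer", "client")
```

The sketch ignores ports, timeouts, and the many real NAT behaviors, but it captures the structural asymmetry the paragraph describes: the pattern that works by default is exactly the client-server pattern.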
At the same time, the network has become more controllable. A network based on the broad version of the end-to-end arguments is application-blind; as a result, network providers are unable to see which applications are using their networks and to control their execution. By contrast, in the current Internet, devices for deep packet inspection – i.e., devices in the network that can look into data packets, determine the application or content whose data the packets are carrying, and process the packets based on this information – have been widely deployed. Whether network providers have an incentive to use this technology to discriminate against applications on their networks is hotly debated as part of the network neutrality debate. My research and conversations with innovators and venture capitalists (pdf) indicate that the threat of discrimination negatively affects innovation today.
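The contrast between an application-blind network and one with deep packet inspection can be sketched in a few lines (again my own toy illustration; the payload signatures are hypothetical stand-ins, not taken from any real DPI product):

```python
# Toy contrast (my own illustration, not a real DPI engine) between an
# application-blind router and deep packet inspection.

# Hypothetical signature table: payload prefixes mapped to applications.
SIGNATURES = {
    b"GET ": "http",
    b"BitTorrent protocol": "bittorrent",
}

def classify_blind(header):
    # An application-blind network routes on addresses alone and
    # learns nothing about the application.
    return "unknown"

def classify_dpi(payload):
    # A DPI device looks inside the payload, so it can single out
    # particular applications for blocking or slowdown.
    for signature, app in SIGNATURES.items():
        if payload.startswith(signature):
            return app
    return "unknown"


# Same packet, two very different amounts of knowledge in the network:
assert classify_blind({"src": "10.0.0.1", "dst": "10.0.0.2"}) == "unknown"
assert classify_dpi(b"GET /index.html HTTP/1.1") == "http"
assert classify_dpi(b"BitTorrent protocol handshake") == "bittorrent"
```

The point of the sketch is architectural, not technical: once the network can compute `classify_dpi` instead of `classify_blind`, discrimination among applications becomes a policy choice rather than an impossibility.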
2. Competition and generative end hosts won’t be sufficient to solve these problems
According to Zittrain, competition, to the extent it exists, will be able to mitigate this problem. In the absence of competition, “some intervention could be helpful, but in a world of open PCs some users can more or less help themselves, routing around some blockages that seek to prevent them from doing what they want to do online.” (p. 181)
I don’t share Zittrain’s optimism regarding the power of competition and user self-help.
First, as I explain in detail in chapter 6 of my book (pp. 255-264), a number of factors make competition, to the extent it exists, less effective in disciplining providers than is commonly assumed. These factors include the existence of switching costs and network providers’ ability to use discrimination instead of outright blocking.
Second, after consolidation among network providers, individual network providers now cover large, contiguous territories. Under these conditions, “mov[ing] to a new physical location to have better options for Internet access,” as Zittrain suggests on p. 185, will not be an option for most users, if they want to live reasonably close to their work. And what if the new network provider later changes its mind and starts discriminating as well?
Third, Zittrain overestimates the ability of generative end devices to route around discrimination. As Bill Lehr, Marvin Sirbu, Sharon Gillett, and Jon Peha have explained (pp. 637-638), this ability is ultimately limited. For example, using anonymizers, encryption or port switching to evade discrimination will not help users if they want to use a real-time application and the network provider slows down all traffic (Zittrain mentions a similar example on p. 181). To the extent circumventing discrimination is at least possible, it may require a level of technical sophistication that many users do not have, leaving the majority of Internet users unprotected.
Thus, a focus on the importance of generative end hosts should not come at the expense of generative network infrastructure. If we want to maintain the Internet’s generativity, we need to preserve both.
The original architecture of the Internet that governed the Internet from its inception to the early 1990s was based on a design principle called the end-to-end arguments. There are two versions of the end-to-end arguments that both shaped the original architecture of the Internet: what I call “the narrow version”, which was first identified, named and described in a seminal paper by Saltzer, Reed and Clark in 1984 (Saltzer, Reed and Clark, 1984, End-to-End Arguments in System Design, ACM Transactions on Computer Systems, 2(4), 277–288), and what I call “the broad version”, which was the focus of later papers by the same authors (e.g., Reed, Saltzer and Clark, 1998, Commentaries on ‘Active Networking and End-to-End Arguments’, IEEE Network, 12(3), 69–71).

To see that there are two versions, consider the following two statements of “the end-to-end principle”: “A function should only be implemented in a lower layer, if it can be completely and correctly implemented at that layer. Sometimes an incomplete implementation of the function at the lower layer may be useful as a performance enhancement” (first version) and “A function or service should be carried out within a network layer only if it is needed by all clients of that layer, and it can be completely implemented in that layer” (second version). The first version paraphrases the end-to-end principle as presented in the 1984 paper. The second version is taken directly from the paper on active networking and end-to-end arguments. Clearly, the second version establishes much more restrictive requirements for the placement of a function in a lower layer.
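How much more restrictive the second statement is can be made explicit by rendering each statement as a boolean condition (a schematic sketch of my own, not drawn from the papers cited; it encodes only the two quoted statements):

```python
# Schematic rendering (my own sketch) of the two quoted statements of
# "the end-to-end principle" as conditions for placing a function in a
# lower layer of the network.

def narrow_version_allows(completely_implementable_at_layer):
    # First (narrow) version: implement a function in a lower layer only
    # if it can be completely and correctly implemented at that layer.
    # (Performance-enhancing partial implementations are a separate case.)
    return completely_implementable_at_layer

def broad_version_allows(completely_implementable_at_layer,
                         needed_by_all_clients):
    # Second (broad) version adds a further condition: the function must
    # also be needed by ALL clients of that layer.
    return completely_implementable_at_layer and needed_by_all_clients


# A function that is completely implementable at the lower layer but not
# needed by every client passes the narrow test yet fails the broad one:
assert narrow_version_allows(True)
assert not broad_version_allows(True, needed_by_all_clients=False)
```

The broad version’s extra conjunct is exactly what makes it the more restrictive rule: every placement it permits is also permitted by the narrow version, but not vice versa.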
While the authors never explicitly drew attention to the change in definition, there are real differences between the two versions in terms of scope, content and validity that make it preferable to distinguish between the two. At the same time, the silent coexistence of two different design principles under the same name explains some of the confusion surrounding the end-to-end arguments. While both versions shaped the original architecture of the Internet, the broad version is the one that has important policy implications, such as the Internet’s impact on innovation. For a detailed description of the end-to-end arguments and their relationship to the Internet’s original architecture, see Internet Architecture and Innovation, chapters 2 and 3.
End hosts are the devices that use the network, such as the devices that users use to access the Internet, or the servers on which content and application providers make their offerings available to the public.
Zittrain uses a similar argument to alleviate concerns about allowing network providers to filter the network for security purposes: “Moreover, if the endpoints remain free as the network becomes slightly more ordered, they remain as safety valves should network filtering begin to block more than bad code.” (pp. 165-166)