Future of the Internet Symposium: Do we need a new generativity principle?

[This is the second of two posts on Jonathan Zittrain’s book The Future of the Internet and How to Stop It. The first post (on the relative importance of generative end hosts and generative network infrastructure for the Internet’s overall ability to foster innovation) is here.]

In the book’s section on “The Generativity Principle and the Limits of End-to-End Neutrality,” Zittrain calls for a new “generativity principle” to address the Internet’s security problem and prevent the widespread lockdown of PCs in the aftermath of a catastrophic security attack: “Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities.” (p. 165)

Zittrain argues that by assigning responsibility for security to the end hosts, “end-to-end theory” creates challenges for users who have little knowledge of how to best secure their computers. The existence of a large number of unsecured end hosts, in turn, may facilitate a catastrophic security attack that will have widespread and severe consequences for affected individual end users and businesses. In the aftermath of such an attack, Zittrain predicts, users may be willing to completely lock down their computers so that they can run only applications approved by a trusted third party.[1]

Given that general-purpose end hosts controlled by users rather than by third-party gatekeepers are an important component of the mechanism that fosters application innovation in the Internet, Zittrain argues, a strict application of “end-to-end theory” may threaten the Internet’s ability to support new applications more than implementing some security functions in the network – hence the new principle.

This argument relies heavily on the assumption that “end-to-end theory” categorically prohibits the implementation of security-related functions in the core of the network. It is not entirely clear to me what Zittrain means by “end-to-end theory.” As I explain in chapter 9 of my book, Internet Architecture and Innovation (pp. 366-368), the broad version of the end-to-end arguments [2] (i.e., the design principle that was used to create the Internet’s original architecture) does not establish such a rule. The broad version of the end-to-end arguments provides guidelines for the allocation of individual functions between the lower layers (the core of the network) and the higher layers at the end hosts; it does not establish a rule for security-related functions as a group.

For example, if it is true that distributed denial-of-service attacks can be identified and stopped only in the network, the broad version clearly allows the implementation of the associated functions in the network. After all, the broad version allows implementing functions in the network if they cannot be completely and correctly implemented at the end hosts only.

In contrast, according to the broad version, a function (such as encryption) that can only be completely and correctly implemented end-to-end between the original source and ultimate destination of data should not be implemented in the network. This is because “a function or service should be carried out within a network layer only if it is needed by all clients of that layer, and it can be completely implemented in that layer.” (Reed, Saltzer and Clark, 1998, Commentaries on ‘Active Networking and End-to-End Arguments’, p. 69)

Finally, even if the broad version requires a function to be implemented at the end hosts, it is possible to deviate from that default rule, within the overall framework provided by the end-to-end arguments, based on considerations such as those advanced by Zittrain (see Internet Architecture and Innovation, pp. 367-368). This does not mean that any implementation of a security-related function in the network is automatically justified. Given the Internet community’s experience with firewalls described in my last post, the long-term consequences of any such implementation for the evolvability of the network need to be carefully considered first.

Thus, Zittrain and I ultimately agree that it may sometimes be possible and necessary to implement certain security-related functions in the network, even if the broad version of the end-to-end arguments would, by default, require implementing them at the end hosts. Does this insight require a new generativity principle, given that we can get there within the framework of the broad version of the end-to-end arguments? I’m not sure. (Nor am I sure whether Zittrain intends his principle to replace or to complement the broad version of the end-to-end arguments, which may matter in answering the question.)

If Zittrain and I ultimately agree about the possibility of implementing certain security-related functions in the network (of course, we may disagree about specific cases), why does it matter how we get there? It matters because whether the end-to-end arguments, as Zittrain seems to suggest, categorically rule out the implementation of security-related functions in the network bears on the broader debate about the future of the end-to-end arguments as technical design principles.

Most network engineers agree that a number of developments put pressure on the Internet’s technical foundations. These include the Internet’s growing size, its transition from a research network operated by public entities to a commercial network operated by commercial providers who need to make profits, and its transition from a network connecting a small community of users who trust one another to a global network with users who do not know one another and may even intend to harm one another.

When network engineers think about how to address these challenges (whether in the context of incremental modifications to the existing Internet infrastructure or in the context of clean-slate approaches that aim to design a new Internet architecture from scratch), they need to decide whether using the end-to-end arguments as a technical design principle still makes sense. In these discussions, one class of counterarguments comes up again and again: that the end-to-end arguments constrain the development of the Internet’s architecture too much and prevent the network’s core from evolving as it should. For example, researchers advancing this argument assert that the end-to-end arguments prohibit the provision of quality of service [3] in the network, require the network to be simple, or make it impossible to make the network more secure. As I show in my book (pp. 106-107, 366-368), these claims are not correct. The end-to-end arguments allow some, but not all, forms of quality of service; they do not require the network to be simple, or “stupid;” and they do not make it impossible to make the network more secure.

Of course, these insights alone do not imply that the end-to-end arguments should continue to guide the Internet’s evolution in the future (a question I take up in my book). They do mean, though, that the end-to-end arguments are not automatically out of the running on the grounds that they restrict the evolution of the network too much.

[Disclaimer: Some of this post is taken from my book, Internet Architecture and Innovation.]

Footnote 1:
Some have wondered whether the chain of events described by Zittrain is realistic. Nobody knows whether there will be a catastrophic security attack. I’m convinced, though, that in the aftermath of a catastrophic attack neither users’ nor legislators’ desire for a quick solution to the security problem will leave much room for consideration of the consequences of any countermeasures for the Internet’s generativity – just as in the aftermath of 9/11, the desire to prevent another terrorist attack didn’t leave much room for consideration of the impact of the countermeasures on civil liberties. The resulting lockdown may fall on the end hosts or on the network, but either way the Internet’s overall generativity will be affected.

Footnote 2:
The original architecture of the Internet that governed the Internet from its inception to the early 1990s was based on a design principle called the end-to-end arguments. There are two versions of the end-to-end arguments that both shaped the original architecture of the Internet: what I call “the narrow version”, which was first identified, named and described in a seminal paper by Saltzer, Reed and Clark in 1984 (Saltzer, Reed and Clark, 1984, End-to-End Arguments in System Design, ACM Transactions on Computer Systems, 2(4), 277–288), and what I call “the broad version”, which was the focus of later papers by the same authors (e.g., Reed, Saltzer and Clark, 1998, Commentaries on ‘Active Networking and End-to-End Arguments’, IEEE Network, 12(3), 69–71). To see that there are two versions, consider the following two statements of “the end-to-end principle”: “A function should only be implemented in a lower layer, if it can be completely and correctly implemented at that layer. Sometimes an incomplete implementation of the function at the lower layer may be useful as a performance enhancement” (first version) and “A function or service should be carried out within a network layer only if it is needed by all clients of that layer, and it can be completely implemented in that layer” (second version). The first version paraphrases the end-to-end principle as presented in the 1984 paper. The second version is directly taken from the paper on active networking and end-to-end arguments. Clearly, the second version establishes much more restrictive requirements for the placement of a function in a lower layer.
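The difference between the two versions can be made concrete by writing each placement rule as a simple predicate. The following sketch is my own illustrative formalization, not drawn from the papers; the function and parameter names are hypothetical:

```python
# Illustrative formalization of the two versions of the end-to-end
# arguments as placement tests for a single function. The predicate
# names are hypothetical, chosen for this sketch.

def narrow_version_allows_in_network(completely_correct_in_layer: bool,
                                     partial_helps_performance: bool) -> bool:
    """Narrow version (Saltzer, Reed and Clark, 1984): implement a
    function in a lower layer only if it can be completely and
    correctly implemented there; an incomplete in-network
    implementation may still be justified as a performance
    enhancement."""
    return completely_correct_in_layer or partial_helps_performance


def broad_version_allows_in_network(needed_by_all_clients: bool,
                                    completely_implementable: bool) -> bool:
    """Broad version (Reed, Saltzer and Clark, 1998): carry out a
    function within a network layer only if it is needed by all
    clients of that layer AND it can be completely implemented in
    that layer."""
    return needed_by_all_clients and completely_implementable


# Example: encryption can be completely implemented in the network
# layer, but it is not needed by all clients of that layer, so the
# broad version places it at the end hosts.
encryption_in_network = broad_version_allows_in_network(
    needed_by_all_clients=False, completely_implementable=True)
```

The conjunction in the broad version is what makes it more restrictive: a function must clear both hurdles before it may be placed in the network, which is why a function such as encryption fails the test even though it could be implemented in a lower layer.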

While the authors never explicitly drew attention to the change in definition, there are real differences between the two versions in terms of scope, content and validity that make it preferable to distinguish between the two. At the same time, the silent coexistence of two different design principles under the same name explains some of the confusion surrounding the end-to-end arguments. While both versions shaped the original architecture of the Internet, the broad version is the one that has important policy implications, such as the Internet’s impact on innovation. For a detailed description of the end-to-end arguments and their relationship to the Internet’s original architecture, see Internet Architecture and Innovation, chapters 2 and 3.

Footnote 3:
A network that provides “Quality of Service” (QoS) offers different types of service to different data packets. For example, it may guarantee a minimum bandwidth or a maximum delay, or it may give some traffic priority over other traffic without giving absolute guarantees.
