Review of Network Protocols Design
Student Name: Ruofei Zhang Date: Sep. 26th, 1999
After reading four research papers on network protocol design, I think these papers' contentions can be divided into two camps. One camp, represented by "Network Protocols" (Tanenbaum, Andrew S.), "The Design Philosophy of the DARPA Internet Protocols" (Clark, David D.), and "End-to-End Arguments in System Design" (Saltzer, J. H.), supports, and proposes design methodologies for, the existing standardized network architecture: the fixed or static network protocol stack (ISO/OSI or TCP/IP). Although their descriptions differ in concrete details, such as how the layers are partitioned and where certain functions are placed in the network software, their fundamental ideas are the same and they follow the same philosophy of network architecture design. The other camp, represented by "A Dynamic Network Architecture" (O'Malley, S.), advances a relatively revolutionary network software architecture compared with the conventional protocol stack: a new, dynamic way to organize network software. Before describing the differences between these two methodologies in detail, their common features should be mentioned.
The two camps both agree that, to reduce the complexity of network software and to ensure independence as well as flexibility of implementation, network software should be layered into a hierarchy of protocols. Each layer encapsulates certain functions and is built upon the one below it. The purpose of each layer is to offer certain services to the higher layers, shielding those layers from the details of how the offered services are actually implemented. Between each pair of adjacent layers is an interface, which defines which primitive operations and services the lower layer offers to the upper one. To accommodate technological progress and different implementations of the same layer, the interface should be simple and precise. This common ground between the two camps is very important: it constitutes the basis of their research and of communication between them.
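This shared layering idea can be sketched in code. The following is a minimal illustration with hypothetical names (not any real protocol implementation): each layer talks only to the narrow send/recv interface of the layer below it, so either layer can be replaced without the other noticing.

```python
# A minimal sketch of strict layering: each layer exposes only
# send()/recv() to the layer above and knows nothing about how
# lower layers are implemented. Names here are hypothetical.

class PhysicalLayer:
    """Bottom layer: just stores bytes, standing in for a wire."""
    def __init__(self):
        self.wire = []

    def send(self, data: bytes):
        self.wire.append(data)

    def recv(self) -> bytes:
        return self.wire.pop(0)

class FramingLayer:
    """Adds and strips a length header; relies only on the interface below."""
    def __init__(self, lower):
        self.lower = lower

    def send(self, data: bytes):
        self.lower.send(len(data).to_bytes(2, "big") + data)

    def recv(self) -> bytes:
        frame = self.lower.recv()
        length = int.from_bytes(frame[:2], "big")
        return frame[2:2 + length]

# Because both layers share the same narrow interface, either one can
# be swapped for a different implementation without changing the other.
stack = FramingLayer(PhysicalLayer())
stack.send(b"hello")
print(stack.recv())  # b'hello'
```

The point of the example is the shape, not the content: the upper layer never inspects how the lower one delivers bytes, which is exactly the shielding property both camps assume.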
Although the two camps share the ideas described above, their differences are great. The network software architecture of the "static camp" has three important properties: the protocol architecture is simple, each protocol encapsulates complex functionality, and the topology of the protocol graph is relatively static. That is to say, the number of protocols in the architecture is fixed, so every message passed from the top of the protocol stack to the bottom goes through an identical path, no matter which service or application the message belongs to, just as in the ISO/OSI and TCP/IP protocol stacks. The network software architecture of the "dynamic camp" differs greatly. They argue that the protocol graph should be complex, that each individual protocol should encapsulate a single function, and that the topology of the graph should be dynamic. The right number of layers cannot be defined statically, and the number of layers visited by each message should be variable. They describe these ideas and illustrate the advantages of the architecture in their paper. Their most important idea is to represent network software as a graph of two types of protocols: microprotocols and virtual protocols. Microprotocols are like conventional protocols in that they communicate with their peers, but differ in that they implement a single function and do not support options. Virtual protocols, on the other hand, are quite different from conventional protocols: they do not have headers, do not necessarily have a peer on the other machine, and serve only to direct messages through the protocol graph. In fact, the dynamic architecture can be thought of as a programming environment in which the path a message follows through the protocol graph corresponds to the flow of control: microprotocols correspond to simple statements and virtual protocols to conditional statements.
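The statement/conditional analogy can be made concrete with a small sketch. Everything below is hypothetical (invented function names, a toy message format, not the paper's actual API): a microprotocol does one thing to a message, while a virtual protocol adds no header and merely chooses which path the message takes.

```python
# A hypothetical sketch of the dynamic-architecture idea:
# microprotocols act like simple statements (each does exactly one
# thing to the message), while virtual protocols act like
# conditionals (no header, they only route through the graph).

def checksum_micro(msg: dict) -> dict:
    """Microprotocol: single function, attaches a checksum field."""
    msg["checksum"] = sum(msg["payload"]) % 256
    return msg

def sequencing_micro(msg: dict) -> dict:
    """Microprotocol: single function, attaches a sequence number."""
    msg["seq"] = msg.get("seq", 0)
    return msg

def reliability_virtual(msg: dict):
    """Virtual protocol: no header; only picks this message's path."""
    if msg["needs_reliability"]:
        return [sequencing_micro, checksum_micro]  # longer path
    return [checksum_micro]                        # shorter path

def send(msg: dict) -> dict:
    # The path through the graph is chosen per message,
    # not fixed statically for the whole architecture.
    for proto in reliability_virtual(msg):
        msg = proto(msg)
    return msg

reliable = send({"payload": b"abc", "needs_reliability": True})
fast = send({"payload": b"abc", "needs_reliability": False})
print("seq" in reliable, "seq" in fast)  # True False
```

Here two messages entering the same graph traverse different numbers of layers, which is precisely the property the static stack cannot offer.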
In my opinion, virtual protocols are the most novel feature of the dynamic architecture, and they play a critical role in dynamically selecting the right path through the protocol graph. Another important idea the authors present is that of simpler layers and reusable protocols, together with protocol encapsulation, much like the techniques of object-oriented systems and software engineering, I think. One basic difference between the "dynamic architecture" and the "static architecture" is that the dynamic architecture attempts to know a little about all protocols, while the existing architecture attempts to know everything about a small, fixed set of protocols. In my opinion, these features are both the strengths and the weaknesses of the dynamic architecture. First, the dynamic network architecture can be made to perform competitively with the conventional static architecture. Second, microprotocols, because they implement a single function, promote reuse, which in turn makes communication services easier to implement. Third, a dynamic architecture allows application programmers to configure exactly the right combination of protocols for their applications. Fourth, a dynamic architecture adapts more quickly to changes in the underlying network technology. On the other hand, the dynamic architecture has some weaknesses. First, protocols cannot take advantage of exact knowledge of which protocols sit above and below them, which hurts performance. Second, the performance of the dynamic architecture is competitive with standardized architectures only when it is implemented on a special-purpose OS platform (the x-kernel in the paper); so far, the dynamic architecture has not been implemented on a general-purpose platform such as UNIX. Third, although the cost of decomposing existing protocols can be made acceptable by taking certain measures, it is still the case that layering provides ample opportunity for disastrous performance.
So we can see that the traditional network architecture and the novel dynamic architecture are complementary, and the two can coexist. The new approach can be applied to the design of new protocols that augment today's network architectures. It is also possible for a given distributed system to adopt the new approach internally and still interoperate with the rest of the world using existing protocols. We can also use virtual protocols to distinguish between protocols tuned for different network technologies. In any case, the dynamic architecture is a novel idea and a worthwhile exploration of new protocol schemes.
The other three papers (J. H. Saltzer, D. D. Clark, and A. S. Tanenbaum) belong to the camp of the existing fixed (static) network architecture (ISO/OSI or TCP/IP); each presents concepts or principles of, as well as explanations for, the design of existing protocols. Saltzer's paper presents a design principle that helps guide the placement of functions among the modules of a distributed computer system. The principle, called the end-to-end argument, suggests that the functions in question can be completely and correctly implemented only with the knowledge and help of the applications standing at the endpoints of the communication; therefore, providing the questioned function as a feature of the communication system itself is not possible. This argument rests on the observation that many application requirements are beyond the control of the communication subsystem, so it is impossible to ensure data correctness, data security, and so on by relying only on improving the reliability of the low-level communication system. Many factors besides the communication system affect the reliability of a distributed system, so help and information from the applications at the endpoints are needed. I think this is a very important principle for distributed system design: it provides guidance for assigning functions to layers, and that guidance applies not only to communication system design but to other system design as well. In my view, the argument was absolutely right for early communication systems, because at that time the reliability and stability of processors, memory, magnetic disks, and communication lines were not very good, so the end-to-end principle was badly needed. But as the reliability and performance of these devices and of software improve, the argument is no longer an absolute rule; applying it now requires a subtle analysis of application requirements.
In some cases, we can put pertinent functions in the communication subsystem to improve performance and reduce latency without reducing how well applications' requirements are satisfied. Although the importance of the end-to-end principle may have diminished, I still think it is a sound principle that helps in application and protocol design.
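The classic file-transfer illustration of the end-to-end argument can be sketched as follows. This is a hedged toy example (the flaky-storage function is invented for illustration): even if the transport delivers every byte reliably, corruption can still happen outside the network, so only a check performed by the applications at the endpoints confirms the transfer really succeeded.

```python
import hashlib

# End-to-end argument in miniature: a perfectly reliable transport
# cannot guarantee the transfer, because failures can occur before
# sending or after receiving (disk, memory, buggy software).

def sender_prepare(data: bytes):
    # The sending application computes a digest over the original data.
    return data, hashlib.sha256(data).hexdigest()

def flaky_storage_write(data: bytes) -> bytes:
    # Stand-in for everything outside the network's control: here the
    # receiver's storage flips a byte AFTER a "reliable" delivery.
    return b"X" + data[1:]

def receiver_verify(received: bytes, digest: str) -> bool:
    # Only this end-to-end comparison catches the corruption;
    # no lower layer ever saw anything go wrong.
    return hashlib.sha256(received).hexdigest() == digest

data, digest = sender_prepare(b"important file contents")
stored = flaky_storage_write(data)      # transport was fine; storage was not
print(receiver_verify(stored, digest))  # False
print(receiver_verify(data, digest))    # True
```

A lower-layer checksum would have passed here, which is exactly Saltzer's point: correctness of the whole transfer is a property only the endpoints can check.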
From D. D. Clark's paper we learn the early reasoning that shaped the Internet protocols (TCP/IP): why TCP/IP is the way it is, why the protocol stack is based on a connectionless mode of service, why there are two separate layers, TCP and IP, and so on. The paper catalogs one view of the objectives of the Internet architecture and discusses the relation between these goals and the important features of the protocols. The important features of TCP/IP were decided by the early goals of the Internet architecture, and these goals had a set of priorities that strongly colored the design decisions within the architecture. For example, the most important goal, survivability, dictated that the Internet be based on a datagram service mode. To support multiple types of communication services, TCP and IP were separated, and IP became the base protocol of the stack. The goal of interconnectivity also led the architecture toward separate TCP and IP layers. In a word, all the features of the TCP/IP protocol stack were adapted to the goals of the early Internet. This is reasonable, because engineering systems should be application-oriented and should solve the problems we set out to solve. I also learned from this paper that the Internet has been very successful in meeting the most important of the early goals, but still has work to do in satisfying the other goals (accountability, cost effectiveness, distributed management, etc.). We also need an effective formal toolkit for describing performance, not just logical correctness, in order to design protocols that comply with their objectives. This paper was good material for understanding the Internet design philosophy.
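The benefit of separating IP from the services above it can be sketched with a toy example. These are not real packet formats, just an assumed miniature model: IP provides one connectionless datagram base, and different transport behaviors are layered on top of the same base without changing it.

```python
# A simplified sketch (invented formats, not RFC-accurate) of why
# splitting TCP from IP supports multiple service types: every
# transport rides the same connectionless datagram base.

def ip_datagram(payload: bytes, proto: str) -> dict:
    """The common connectionless base that every service shares."""
    return {"proto": proto, "payload": payload}

def reliable_send(data: bytes) -> dict:
    # A TCP-like reliable, ordered service built above IP,
    # sketched here as a fake two-byte sequence header.
    return ip_datagram(b"\x00\x01" + data, proto="tcp-like")

def datagram_send(data: bytes) -> dict:
    # A UDP-like best-effort service above the very same IP base.
    return ip_datagram(data, proto="udp-like")

# Both services use the same base protocol, so a new transport type
# can be added without modifying IP itself.
print(reliable_send(b"hi")["proto"])  # tcp-like
print(datagram_send(b"hi")["proto"])  # udp-like
```

This mirrors the paper's reasoning: survivability argued for the datagram base, and the TCP/IP split let services with different reliability needs share it.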
A. S. Tanenbaum's paper "Network Protocols" is a suitable tutorial on network hierarchies. He uses the ISO/OSI reference model as a guide, and the descriptions of the features of and reasons for each layer's design, as well as of some practical implementations of each layer, are precise and easily understandable. It is a good reference for making our vision and comprehension of network protocol design principles clearer and more penetrating.