Wednesday, September 17, 2008

Fundamental design issues for the future Internet

In this paper, the author argues that, with the emergence of multimedia applications such as voice/video telephony whose delay/bandwidth service requirements differ sharply from those of traditional data applications, the following things need to be done:
  1. IP should extend its service model, since services aligned with application requirements increase the overall utility of the network.
  2. The application should request the appropriate service, rather than have the network infer it, to maintain the separation between the network and application layers.
  3. Admission control, instead of over-provisioning, should be adopted to prevent network overload, since it is economically more sound.
The author looks at the Internet from a high level and uses a macroscopic style of analysis to make his arguments. To do so, a framework is introduced whereby the design objective is to maximize the total utility of the users of the Internet, subject to the physical constraints of the network. This is not meant as a way to optimize resource allocation or choose network parameters, but rather as a tool to characterize various design choices. Unfortunately, the outcome of the analysis sometimes depends on various cost and utility parameters, such as the relative costs of over-provisioning versus changing network mechanisms, and the shape of the utility function when certain service requirements are not met. For example, in arguing that the service model should be extended, the author assumes that over-provisioning cannot be done and that the cost of adding a service model is negligible.
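As I understand it, the framework can be written compactly as follows (my notation, not necessarily the paper's):

```latex
\max_{\{s_i\}} \; V = \sum_i U_i(s_i)
\quad \text{subject to } \{s_i\} \text{ feasible given the network's resources,}
```

where $s_i$ is the service delivered to user $i$ and $U_i$ is that user's utility function. Design choices are then compared by which one achieves a higher total $V$.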

The author argues, on layer-separation grounds, for explicit selection of a service by the application, rather than having the service implicitly assigned by the network. While implicit assignment might be bad, it is not clear that explicit selection would work in practice either, due to issues such as incentives (and the authentication, accounting, and billing needed to establish financial incentives). In other words, even if it were established that separate services are the way to go, there might not be a good way to actually assign a service to a packet.

The discussion on admission control seems to me to be the most sound. I like the use of the utility-maximizing framework here, since the outcome does not depend on particular numerical parameters, only on the shape of the utility function. The conclusion that best-effort service suits data traffic while real-time applications call for admission control was very satisfying. Unfortunately, it is not clear to me how admission control is to be done for the Internet, since the user (or gateway) has to in effect probe the state of the network, which may be neither easy nor stable over time. Also, such a mechanism could be abused to "excommunicate" users - a chilling thought.
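The shape-of-the-utility-function argument can be made concrete with a toy sketch (my own example, not from the paper; the capacity, requirement, and utility functions below are illustrative assumptions). For elastic data traffic with a concave utility, total utility keeps growing as flows are added, so best-effort sharing is fine; for hard real-time traffic whose utility collapses below a bandwidth threshold, admitting one flow too many destroys everyone's utility, which is exactly the case for admission control:

```python
import math

def elastic_total(n, capacity):
    """Total utility when n elastic flows share capacity equally.
    Per-flow utility log(1 + bandwidth) is strictly concave, so
    total utility keeps rising as more flows are admitted."""
    return n * math.log(1 + capacity / n)

def inelastic_total(n, capacity, b_req):
    """Total utility when n hard real-time flows share capacity equally.
    Each flow is worth 1 if it receives at least b_req, else 0."""
    return n if capacity / n >= b_req else 0

CAPACITY = 10.0  # link capacity (hypothetical units)
B_REQ = 1.0      # per-flow bandwidth requirement (hypothetical)

# Elastic traffic: more flows, more total utility -> best effort is fine.
assert elastic_total(20, CAPACITY) > elastic_total(10, CAPACITY)

# Inelastic traffic: the 10th flow still fits, the 11th ruins everyone.
assert inelastic_total(10, CAPACITY, B_REQ) == 10
assert inelastic_total(11, CAPACITY, B_REQ) == 0
```

The asymmetry in the last two assertions is the whole argument: without admission control, total utility for real-time traffic is not monotone in the number of flows, so the network benefits from turning the eleventh flow away.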

I'm not sure how different the Internet was during the 1990s, but with hindsight, it is interesting to note that various things we take for granted today, such as spam, web crawling, and IP (video) telephony, were already mentioned in this paper.
