Monday, September 15, 2008

Congestion control for high bandwidth-delay product networks

This paper proposes the eXplicit Control Protocol (XCP) to address some of TCP's performance shortcomings, particularly as the bandwidth-delay product increases. For example, with high RTT, TCP's additive-increase policy ramps up bandwidth utilization very slowly. Another problem is short flows, which cannot use available bandwidth faster than slow start allows. The authors also point out that TCP's congestion feedback is implicit, via packet loss, which indicates only the presence or absence of congestion and not its extent; sources therefore have to keep pushing the link into congestion before backing off, over and over.
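To see how slowly additive increase recovers on a high bandwidth-delay path, here is a back-of-envelope sketch in Python. The link speed, RTT, and packet size are my own illustrative numbers, not figures from the paper:

```python
# Back-of-envelope: how long TCP's additive increase (1 packet per RTT)
# takes to regain half the window after a single loss on a fast,
# long-delay path. All parameters below are assumed for illustration.
link_bps = 1e9        # 1 Gb/s link
rtt_s = 0.1           # 100 ms round-trip time
pkt_bytes = 1500      # MTU-sized segment

# Bandwidth-delay product: the window (in packets) that fills the pipe.
bdp_pkts = link_bps * rtt_s / (8 * pkt_bytes)

# After a loss, TCP halves cwnd and then adds one packet per RTT.
recovery_s = (bdp_pkts / 2) * rtt_s
print(f"BDP = {bdp_pkts:.0f} packets; recovery takes {recovery_s:.0f} s")
```

With these numbers the pipe holds about 8,300 packets, so climbing back from half the window takes on the order of seven minutes, which is the sluggishness the paper targets.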

As implied by its name, XCP has routers provide explicit feedback about the state of congestion, telling senders how much to adjust their congestion windows by. Two key elements of XCP are the efficiency controller and the fairness controller, which operate independently. The efficiency controller aims to maximize link utilization, using a feedback controller based on spare bandwidth and queue size to set the aggregate change in rate. The parameters of this controller are chosen based on the feedback delay to ensure that the system is stable. The fairness controller then parcels out the aggregate change among packets to achieve fairness, using an additive-increase/multiplicative-decrease mechanism. As the authors note, XCP can converge to fairness much faster, since the adjustment is made every RTT instead of on every packet loss.
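The two controllers can be sketched roughly as follows. The aggregate-feedback formula and the stability constants (0.4 and 0.226) follow the paper's analysis; simplifying the fairness controller to operate per flow rather than per packet is my own shortcut for clarity:

```python
# Rough sketch of XCP's two router-side controllers. The per-flow
# simplification in the fairness controller is an assumption; real XCP
# apportions feedback across individual packet headers.
ALPHA, BETA = 0.4, 0.226   # stability constants from the paper's analysis

def aggregate_feedback(spare_bw, queue_bytes, avg_rtt):
    """Efficiency controller: total desired rate change per control
    interval, driven by spare bandwidth and the persistent queue."""
    return ALPHA * avg_rtt * spare_bw - BETA * queue_bytes

def per_flow_feedback(phi, flow_rates):
    """Fairness controller (simplified): shuffle the aggregate feedback
    phi across flows AIMD-style. Positive feedback is split equally
    (additive increase); negative feedback is charged in proportion to
    each flow's current rate (multiplicative decrease)."""
    if phi >= 0:
        return [phi / len(flow_rates)] * len(flow_rates)
    total = sum(flow_rates)
    return [phi * r / total for r in flow_rates]
```

Note how the decoupling shows up directly: `aggregate_feedback` only cares about total spare capacity and queue, while `per_flow_feedback` only decides how that total is shared, which is why XCP can run any fairness policy over the same efficiency loop.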

This paper is full of useful ideas that are applicable to other problems as well. First, there is the use of control theory to examine how a network (the plant) should be controlled and how its parameters should be chosen. Second, the use of a fluid model allows protocols to be analyzed by approximating a discrete system with a continuous one. (How good the approximation is would be debatable, though.) Third, the mention of a shadow-pricing model is an interesting way of determining how to allocate resources based on how much users are willing to pay for them.
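On the "debatable" point about fluid models, here is a toy illustration of my own (not from the paper) of how a continuous approximation can drift from the discrete protocol it models, using slow start as the example:

```python
# Toy comparison: discrete slow start grows the window by one segment
# per ACK, i.e. it doubles every RTT. The natural fluid model,
# dW/dt = W / RTT, instead grows exponentially in base e. Both are
# "exponential," but the fluid curve runs away from the real one.
import math

def discrete_slow_start(w0, rtts):
    return w0 * 2 ** rtts          # doubles each RTT

def fluid_slow_start(w0, rtts):
    return w0 * math.exp(rtts)     # solution of dW/dt = W / RTT

for t in (1, 3, 5):
    print(t, discrete_slow_start(1, t), round(fluid_slow_start(1, t), 1))
```

After five RTTs the discrete window is 32 segments while the naive fluid model predicts about 148, so the quality of the approximation really does depend on how carefully the continuous dynamics are matched to the protocol's discrete steps.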

On the other hand, the policing of misbehaving sources by border agents sounds like an afterthought to address the system's vulnerability to non-compliant senders. While it is possible (but expensive) to have gateways shut down bad sources, it would be far better to have a scheme that provides no inherent incentive for a source to misbehave - i.e. the compliant sender policy should also be the one that gives the sender the highest throughput.

The experimental results are much as one would expect. If a router can tell sources how to adjust their transmission rates, then it is easy to believe that it can achieve high link utilization.

1 comment:

Randy H. Katz said...

Good comments -- policing bad behavior with low overhead and state is actually a huge research challenge. The paper does present a very interesting conceptual framework for transport protocols -- which is why it is considered such an important paper.