Monday, September 15, 2008

Random early detection gateways for congestion avoidance

This is another paper on congestion avoidance at the gateway. However, here the goal is not so much to enforce fair queuing (although it achieves a degree of this) as to control the average packet queue size, ensuring reasonable latency while maintaining high throughput. The basic idea is to mark/drop packets probabilistically when the average queue size falls within some range (and to mark/drop every packet when it exceeds a maximum threshold). The authors list several main design goals:
  1. Avoid bias against bursty traffic
  2. Avoid global synchronization (i.e. sources reducing windows at the same time)
  3. Control queue size in absence of cooperating sources
There are two key elements in the RED algorithm. The first is to compute the average queue size on each incoming packet. This is done via an exponentially weighted moving average. There is a slight twist: the algorithm also tries to account for idle time when the queue is empty by computing the average as though some number of small packets had arrived at the gateway during the idle period. This seems to me like a hack, since the average is only updated per incoming packet, whereas what was probably intended is an average queue size over time. This could be done in a more principled manner, perhaps using the approach in the "Core-Stateless Fair Queueing" paper, but this method is probably easier and does not make too much of a difference in the end.
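
A minimal sketch of this averaging step, written in Python with my own parameter names (w_q for the EWMA weight, s for the transmission time of a small packet) and illustrative defaults rather than the paper's recommended values:

    def update_avg(avg, queue_len, w_q=0.002, idle_time=0.0, s=0.001):
        """Update the EWMA of the queue size on packet arrival.

        avg        -- current average queue size (packets)
        queue_len  -- instantaneous queue size seen by the arriving packet
        w_q        -- EWMA weight
        idle_time  -- how long the queue has been empty (seconds), 0 if busy
        s          -- nominal transmission time of a small packet (seconds)
        """
        if queue_len == 0 and idle_time > 0:
            # Queue was idle: decay the average as though m small packets
            # had arrived to an empty queue during the idle period.
            m = idle_time / s
            return avg * (1 - w_q) ** m
        # Normal case: standard exponentially weighted moving average.
        return (1 - w_q) * avg + w_q * queue_len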

The second is to determine the probability with which to drop/mark packets. The desired probability is computed as a linear function of the average queue size: intuitively, the larger the queue, the more packets the gateway should mark. The reason for choosing this particular feedback mechanism is not made clear, and I think some control theory could be applied to determine an ideal probability based on the desired time constant. However, again, this need not be an exact science. What is interesting is their next step, in which the actual marking probability is computed from this initial one, with the goal of avoiding both clusters of drops close together and long intervals between drops. Instead, the process is modified so that drops are spread out roughly uniformly over the arriving packets.
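
This two-step computation can be sketched as follows, again in Python, with illustrative threshold values (min_th, max_th, max_p) rather than the paper's recommended settings. The point of dividing by (1 - count * p_b) is that the more packets that have been accepted since the last mark, the more likely the next one is to be marked, which evens out the spacing between marks:

    import random

    def mark_packet(avg, count, min_th=5.0, max_th=15.0, max_p=0.02):
        """Decide whether to mark/drop an arriving packet.

        avg   -- average queue size from the EWMA
        count -- packets accepted since the last mark
        Returns (marked, new_count).
        """
        if avg < min_th:
            return False, 0            # below the range: never mark
        if avg >= max_th:
            return True, 0             # above the range: mark every packet
        count += 1
        # Initial probability grows linearly with the average queue size.
        p_b = max_p * (avg - min_th) / (max_th - min_th)
        # Adjust so inter-mark gaps are roughly uniform rather than geometric.
        p_a = p_b / (1.0 - count * p_b) if count * p_b < 1.0 else 1.0
        if random.random() < p_a:
            return True, 0
        return False, count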

One advantage of this approach is that there is no need to track individual flow rates, so it scales well with the number of connections. Of course, fair queueing is not one of its objectives, so there was no need to do so in the first place. On the other hand, because the dropping decision is applied probabilistically to every incoming packet, it does tend to drop more packets from connections using more bandwidth, although this only ensures that each connection loses roughly the same fraction of its packets.

1 comment:

Randy H. Katz said...

Everyone implements FQ. It is good in terms of fairness and it turns out that it is not hard to implement given modern router processing capabilities.