Monday, October 20, 2008

A first-principles approach to understanding the Internet's router-level topology

This paper tries to articulate the dangers of over-relying on probabilistic network topology models that match macro statistics such as power laws in degree distributions. Its key observations (demonstrated in the experiments) are that very different network topologies can share the same node degree distribution, and that a graph that is "more likely" according to a degree-based probabilistic model often has poor performance because of its highly connected hub nodes. The authors also make the point that even when the core routing topology stays the same, different end-user bandwidth demands drive different edge network topologies, leading to very different degree distributions; thus there may not be a single degree distribution for the Internet.
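
To make the first observation concrete, here is a minimal sketch (my own illustration using networkx, not the paper's tooling, and a hypothetical power-law-ish degree sequence): two graphs realizing the exact same degree sequence can place very different loads on their highest-degree node.

```python
# Sketch: same degree sequence, different wiring, different hub load.
import networkx as nx

# A hypothetical power-law-ish degree sequence, for illustration only.
degrees = [8, 8, 4, 4, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
assert sum(degrees) % 2 == 0  # a graphical sequence needs an even sum

# Realization 1: Havel-Hakimi greedily wires high-degree nodes first,
# yielding a hub-centric graph like the "likely" scale-free topologies.
g1 = nx.havel_hakimi_graph(degrees)

# Realization 2: the same sequence, rewired by degree-preserving
# double-edge swaps, which can spread traffic away from the hubs.
g2 = g1.copy()
nx.double_edge_swap(g2, nswap=100, max_tries=10000, seed=42)

for name, g in [("havel-hakimi", g1), ("swapped", g2)]:
    bc = nx.betweenness_centrality(g)
    hub = max(g.degree, key=lambda nd: nd[1])[0]  # highest-degree node
    same = sorted(d for _, d in g.degree()) == sorted(degrees)
    print(f"{name}: degree sequence preserved = {same}, "
          f"hub betweenness = {bc[hub]:.3f}")
```

Betweenness centrality here is just a cheap proxy for the paper's throughput-style performance metrics, but it is enough to show that the degree distribution alone does not pin down how much traffic the hubs must carry.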

I think the thesis of the paper goes deeper than the above arguments, but I had a hard time grasping it (and still may not). The authors claim to be pushing for a new methodology to study and characterize Internet topology, but they beat around the bush quite a bit. They state that technology and economic constraints are the drivers of network topology, but the assertions are qualitative, and they are only weakly followed through by the design of a "heuristically optimal" network, which performs well but may not be "likely" according to a probabilistic topology model.
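
For intuition, a toy sketch of what such a "heuristically optimal" topology might look like (my own construction based on the paper's qualitative description, with made-up node counts): a sparse, low-degree core of high-bandwidth routers, with high fan-out pushed out to edge access routers that aggregate many low-bandwidth end hosts.

```python
# Toy "heuristically optimal"-style topology: low-degree core,
# high-degree aggregation only at the edge.
import networkx as nx

core = nx.cycle_graph(6)  # low-degree, high-bandwidth core ring
g = nx.relabel_nodes(core, lambda n: f"core{n}")
g.add_edges_from([("core0", "core3"), ("core1", "core4")])  # shortcut links

for i in range(6):
    access = f"access{i}"
    g.add_edge(f"core{i}", access)       # each core router feeds an access router
    for j in range(10):                  # access routers aggregate many end hosts
        g.add_edge(access, f"host{i}_{j}")

# High degree lives at the edge, not in the core:
print(max((d, n) for n, d in g.degree()))                      # an access router
print(max(d for n, d in g.degree() if n.startswith("core")))   # core stays low
```

The degree distribution of such a network can still be heavy-tailed, yet no core router is a hub bottleneck, which is exactly why matching the degree distribution alone says little about performance.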

What I had hoped to see was a paper that says something along the following lines: given these technological constraints, and given certain economic conditions and some (possibly heterogeneous) user demand, this is the kind of Internet topology we should expect to see. Such a model would yield topology metrics that could be predicted and then verified against collected data.
