Researchers have analyzed data on the structure of the internet and concluded that no object from the real world offers a suitable comparison for the network's structure.
In the past, the structure of the internet was compared to that of an onion: at the core sat the tier 1 providers, who owned the international backbones, and around them lay all the other layers, from regional providers down to end users. The orange has also served as a comparison because of its segmented structure. And the temptation to liken a typical graph of internet structure to a plate of spaghetti is not far off either.
All these ideas, however, have little to do with reality, say Israeli researchers in the current issue of the Proceedings of the U.S. National Academy of Sciences (PNAS), after publishing preliminary results a year ago.
As a basis, they made use of the data collected by the European DIMES project. The "Distributed Internet Measurements and Simulations" project has, in nearly two and a half billion measurements, identified over 25,000 so-called autonomous systems, each representing internet providers or other large organizations, which together make up the internet.
Technically, an autonomous system is a group of networks administered by a common management according to common routing rules. The Israeli researchers studied this data using a special mathematical method, k-shell decomposition.
The network is decomposed in such a way that first the nodes with a single link are removed; these are assigned, together with their links, to the first network level, the 1-shell. Then the nodes with two remaining links are removed step by step and assigned to the 2-shell. The process continues recursively with an increasing number of links until every node has been assigned to a group.
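The pruning procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not the researchers' actual code; the graph is given as a plain edge list, and nodes are stripped in rounds until none remain.

```python
from collections import defaultdict

def k_shell_decomposition(edges):
    """Assign each node of an undirected graph to its k-shell.

    For k = 1, 2, ... all nodes whose remaining degree is <= k are
    removed (with their links) and assigned to the k-shell, until
    the graph is empty.
    """
    # Build an undirected adjacency structure.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    shell = {}
    k = 1
    while adj:
        # Strip repeatedly: removing one node can push a neighbour's
        # degree below the current threshold k.
        while True:
            to_remove = [n for n, nbrs in adj.items() if len(nbrs) <= k]
            if not to_remove:
                break
            for n in to_remove:
                shell[n] = k
                for m in adj[n]:
                    if m in adj:  # neighbour may already be removed
                        adj[m].discard(n)
                del adj[n]
        k += 1
    return shell
```

For example, in a triangle with one pendant node, the pendant lands in the 1-shell and the triangle's three nodes in the 2-shell; in the researchers' terminology, the nodes with the maximum k found this way form the core.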
The result: the internet has a very densely packed core of about 100 systems, which includes, for example, Google and AT&T WorldNet. The researchers assigned all autonomous systems with the maximum k-number, k-max, to this core. The shell around it consists of two different components. On the one hand, it contains about 15,000 systems that are "peer-connected", i.e. well connected to each other; information can travel between them without touching the core.
Schematic representation of the MEDUSA model of the internet
In just four hops, the scientists have calculated, data travels from any system in this zone to any other. A further roughly 5,000 systems lack such links. These isolated nodes, connected only to the core, find it harder to exchange data with the rest of the internet. If the core were to fail suddenly, these networks would be completely cut off. The good news is that even in this case, 70 percent of the internet would still be interconnected, namely the group with the most mutual connections.
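The core-failure scenario can be checked on a toy graph with a simple breadth-first search: delete a set of nodes (the core) and see which connected components survive. This is an illustrative sketch under assumed toy data, not the study's methodology.

```python
from collections import deque

def components_without(removed, edges):
    """Connected components of an undirected graph after deleting
    the nodes in `removed` (e.g. the densely connected core)."""
    adj = {}
    for u, v in edges:
        if u in removed or v in removed:
            continue  # drop every edge that touches a removed node
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    # Nodes whose only links went to the removed set no longer appear
    # in `adj` at all: they are completely cut off.
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            n = queue.popleft()
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    comp.add(m)
                    queue.append(m)
        comps.append(comp)
    return comps
```

In a toy graph where node 0 plays the core, the peer-connected nodes 1 and 2 stay linked after its removal, while node 3, which hung only off the core, is disconnected entirely.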
In this respect, the model resembles a small animal, the jellyfish; the authors call it MEDUSA. From their work they derive certain recommendations for the design of data traffic. To prevent data congestion in the center, they argue, more data should be routed through the outer, also well-connected regions, even though the route through the core would be shorter, with fewer intermediate stations.
The Israeli researchers' work is not uncontested in the scientific community. One criticism of the method is that it only counts the number of links, but not their type and importance. In particular, the relationship between provider and customer is not captured by this model.
In addition, the way the data is obtained is called into question: measurements in which route tracing is performed from a small number of observers to a large number of destinations are likely to be biased, an imbalance that could not be eliminated even by increasing the number of observers. The scientists counter this criticism with their now very large number of observers in more than 90 countries.