Structured Peer-To-Peer Systems: The Impact of Lawyers and Music Piracy on Distributed Systems Research
by Sameh El-Ansary
What began as a vehicle for music piracy has evolved, for legal rather than technical reasons, into an interesting research area in distributed systems.
After the decay of Napster as a mainstream peer-to-peer (P2P) system due to legal reasons, the high demand for file-sharing applications led to the appearance of traffic-demanding systems such as Gnutella and Freenet. The large amount of traffic such systems generate, owing to their dependence on flooding algorithms, is a major concern for ISPs. This is the case even for optimised forms like the Kazaa system, which deploys the notion of super peers.
Initially, P2P systems development was driven by widespread demand for the sharing of music files. The costly, flooding-based approach of these systems started to attract the attention of research communities, which resulted in the transition to a new generation of P2P systems offering a more elegant solution to the problem, namely one satisfying the constraints of decentralisation and scalability. The leading systems of that generation include Chord, CAN, Pastry, and Tapestry. Such systems were collectively said to implement a Distributed Hash Table (DHT) data structure, as they provide the simple interface of a hash table: put(key, value), get(key). Unlike flooding-based systems, which construct random overlays, DHT systems build structured overlays with desirable scalability properties.
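The put/get interface can be illustrated with a minimal, single-process sketch. The class and its internals are hypothetical; a real DHT such as Chord distributes the buckets across nodes in an overlay rather than keeping them in one process:

```python
import hashlib

# A minimal sketch of the hash-table interface a DHT exposes.
# Here each "node" is just an in-memory dict; the hashing step
# stands in for routing a key to its responsible node.
class ToyDHT:
    def __init__(self, n_nodes: int = 8):
        self.nodes = [dict() for _ in range(n_nodes)]

    def _node_for(self, key: str) -> dict:
        # hash the key into the identifier space, then map it to a node
        ident = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.nodes[ident % len(self.nodes)]

    def put(self, key: str, value):
        self._node_for(key)[key] = value

    def get(self, key: str):
        return self._node_for(key).get(key)

dht = ToyDHT()
dht.put("song.mp3", "peer-42")
print(dht.get("song.mp3"))  # prints "peer-42"
```

The point of the interface is that a client never names a node: the key alone determines where the value lives.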
A Unified View of Structured P2P
During our study of the leading DHT systems in the European project PEPITO, we observed that while all DHTs have a similar interface, each system introduces a new overlay structure, eg, a ring, a d-dimensional Cartesian space or a torus. Moreover, each system defines its own algorithms for lookup, the joining of new nodes, and the handling of leaving and failing nodes. Having different structures and algorithms for each system makes understanding and reasoning about such systems complicated, and means research results apply only to a particular system rather than to the general concept of structured overlays.
Through close inspection of these systems we have reached two main conclusions:
- there exists a common framework based on the simple principle of k-ary search, which is general enough to explain all DHT structures
- the concept of a structured overlay network could be used as a general technique for building scalable distributed systems even in non-P2P contexts.
For the first observation, the distributed k-ary search principle can be summarised as follows. Given the basic assumptions of DHTs, ie, the mapping of a set of nodes and items into one identifier space of size N, each node needs to retain knowledge (routing information) about other nodes in order to look up and insert items. Each node starts a lookup or insertion process with a 'view' of the identifier space: a division of the space into k equal intervals, each with one 'responsible' node. That is, if a node n wants to insert item x in the DHT, it determines the interval to which x belongs, say interval i, and forwards the item, or a query about it, to node n0, the node responsible for interval i. Node n0, in its turn, has a 'view' of interval i, ie, it divides interval i into k parts as well and behaves like node n. The process ends after log_k(N) hops, when an interval of length 1 is reached; the node responsible for that interval is either the node that stores the item or the one in which the item is to be inserted.

Since intervals shrink by a factor of k at each hop, each node holds log_k(N) nested views of k intervals each. In each of its views, one of the k intervals has n itself as 'responsible', so n keeps k-1 other responsibles per view, making the actual number of routing entries (k-1) log_k(N). Given this distributed k-ary search approach as a meta-model, we can instantiate other systems by choosing the function that selects the responsible of an interval. In Chord, for instance, this function is the Successor function, ie, the first node encountered moving clockwise in a circular identifier space. In Pastry and Tapestry, it is a node whose identifier shares a prefix with one extra digit matching the identifier of the item being looked up or inserted.
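The narrowing of views can be sketched in a few lines. The function below is illustrative (not taken from any particular DHT) and assumes, for simplicity, that N is a power of k; it records the interval chosen at each hop and confirms the log_k(N) bound on path length:

```python
import math

def lookup_path(n_space: int, k: int, key: int) -> list:
    """Narrow an identifier space of size n_space down to the single
    slot `key`, returning the (interval_start, interval_length) view
    chosen at each hop of the k-ary search."""
    start, length = 0, n_space
    path = []
    while length > 1:
        length //= k                        # each view splits the space into k parts
        interval = (key - start) // length  # which of the k sub-intervals holds the key
        start += interval * length          # descend into that interval
        path.append((start, length))
    return path

N, k = 64, 4
path = lookup_path(N, k, key=37)
print(len(path))                        # 3 hops, ie, log_k(N)
print(round(math.log(N, k)))            # 3: matches the analytic bound
print((k - 1) * round(math.log(N, k))) # 9 routing entries: (k-1) log_k(N)
```

Setting k=2 recovers binary-search-style routing with log_2(N) hops, which is the regime classic Chord operates in.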
By using this framework, we were able to optimise a generalised form of the Chord system and to design an optimal broadcast algorithm that takes advantage of the distributed search tree. We also developed a new meta-system based on the distributed k-ary search principle.
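The idea behind broadcasting over the distributed search tree can be sketched as follows. This is a hedged illustration of the general technique, not the published algorithm itself: each node keeps one of its k intervals for itself and delegates each of the other k-1 intervals to that interval's responsible, which recurses within its interval only, so no identifier is contacted twice:

```python
def broadcast(start: int, length: int, k: int, messages: list) -> None:
    """Cover the interval [start, start+length) by recursive delegation.
    Appends one (sender_interval, delegated_interval) pair per message."""
    if length == 1:
        return                      # a single identifier: nothing to delegate
    part = length // k              # split the interval into k sub-intervals
    for i in range(k):
        sub = start + i * part
        if i > 0:                   # the local node covers its own sub-interval
            messages.append((start, sub))  # one message per delegated sub-interval
        broadcast(sub, part, k, messages)

msgs = []
broadcast(0, 64, 4, msgs)
print(len(msgs))  # 63 messages to cover 64 identifiers: N - 1
```

Because each of the N identifiers other than the initiator's is reached by exactly one delegation message, the message count is N-1, which is the minimum for reaching N parties.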
For the second observation, the consideration of structured overlays as a general architecture for scalable systems can be summarised as follows: given a service S, served by a set of resources R, we can connect the set R in any structured overlay following the distributed k-ary search principle. The benefit of such connectivity is twofold:
- no single resource in the set R is crucial to the delivery of S. This is in contrast to a system like DNS, which depends on its root servers for operation
- if S needs to scale, resources can be added to R while each resource's routing state and the lookup path length grow only logarithmically in the size of R, and thus scalability is achieved.
Finally, we can only speculate about what would have happened if Napster had not been shut down due to legal problems. Napster would most probably have continued to operate successfully and solved its scalability problems by adopting a Google-like model, ie, a huge farm of clusters. However, the fall of Napster, combined with the high popularity of music sharing as an application, forced the research community to look at scalable distributed systems in an unprecedented way and to construct the decentralised architectures that now constitute a means of building them.
Sameh El-Ansary, SICS