JOINT ERCIM ACTIONS
ERCIM News No.37 - April 1999

Multiservice Internet: Service Network or Ham Technology?

by Chris Cooper


As people collaborate over greater distances on a regular basis, an increasingly significant, if as yet largely potential, benefit of the global Internet is its support for technical discussion, training, seminars, etc. - in fact the complete range of computer-supported co-operative working made accessible from each participant’s desktop.

In a previous article (ERCIM News No.35, October 1998), David Duce described work undertaken in the European Framework 4 Telematics project MANICORAL to develop and begin initial trial assessment of a co-operative visualization system to support a European community of geodetic scientists and engineers. The experience gained with that project indicated that such desktop-based access to co-operative visualization had substantial potential: it also demonstrated how much still remains to be done to enable the Internet to support such styles of use in practice.

The demands of this sort of application are considerable when compared with, for example, a single user browsing world-wide web services. The latter demands support from the underlying network for only a single type of traffic: reliable transport of traditional data on a point-to-point basis, which the Internet has been providing on a best-effort basis for many years.

Collaborative visualization, however, requires not only traditional reliable data transport (for the visualization) but also continuous media transport for voice (and possibly video) communication. Moreover, both the data and the continuous media support need to be provided on a multipoint basis in order to support (small) groups of more than two people. Experience in the MANICORAL project indicated that the current best-effort Internet fell far short of what routine collaboration requires, partly because multicast is not yet an all-pervasive service, and partly because the quality of voice communication was too often so variable (sometimes incomprehensible, sometimes lost altogether) that a session had to be abandoned. There were a number of contributing causes to the project’s specific experience. Here, a few of the underlying technological issues relating to the evolution of the Internet from a single-service (best-effort data) network to a multiservice network are remarked upon.

The basis for multipoint transport in the Internet has been around for nearly a decade in the form of the multicast overlay network known as the Mbone. Great progress has been made in multicast routing, and this is now available in routers. Nevertheless, the transition from prototype multicast overlay to integrated multicast service still evidently has a long way to go to approach coverage comparable with point-to-point service. Even greater is the problem represented by the need to carry more than a single category of traffic. Continuous media traffic, such as speech, does not require that every bit is delivered correctly (though obviously most of it must be), but it does demand that voice packets are delivered on a regular, timely basis. Packets arriving too late are as good as lost: too late means that no more samples are available to play out to the listener and there is a gap in the sound. Too many such gaps and speech becomes incomprehensible. In the case of conversational speech, the round-trip delay also needs to be short to prevent it interfering with normal conduct of the conversation.
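
To make the playout constraint concrete, the following sketch (an illustration constructed for this article, not code from MANICORAL; the packet interval and buffering delay are assumed values) shows how a receiver playing voice out on a fixed schedule turns any packet that misses its deadline into a gap in the sound:

# Illustrative sketch only: assumed 20 ms voice packets and a 60 ms playout buffer.
PACKET_INTERVAL_MS = 20          # one voice packet every 20 ms (assumed codec framing)
PLAYOUT_DELAY_MS = 60            # receiver buffers 60 ms before starting playback

def playout(arrivals):
    """arrivals: list of (sequence_number, arrival_time_ms) for received packets."""
    arrived = dict(arrivals)
    gaps = 0
    total = max(arrived) + 1 if arrived else 0
    for seq in range(total):
        deadline = seq * PACKET_INTERVAL_MS + PLAYOUT_DELAY_MS
        t = arrived.get(seq)
        if t is None or t > deadline:
            gaps += 1                # lost, or arrived after its playout time
    return gaps, total

# Example: packet 2 is delayed by congestion and misses its playout deadline.
gaps, total = playout([(0, 25), (1, 44), (2, 130), (3, 85)])
print(gaps, "of", total, "packets produced gaps in the played-out speech")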

The traditional approach to the problem of transporting continuous media traffic in networks is to reserve resources for the lifetime of the activity, as is done in the telephone network during a call. This is also part of the approach taken in seeking ways of integrating this traffic with traditional data traffic, whether the network technology is asynchronous transfer mode (ATM) for broadband ISDN or the integrated services Internet. There are two possibilities for reserving resources to handle traffic which cannot tolerate delay: the resources must either be reserved dynamically by some form of signalling or be permanently reserved through a service contract. In either case, some state has to be introduced into the switching elements of the network, the Internet routers. The Resource Reservation Protocol (RSVP) has been developed as a means of letting a host signal its need for resources to be reserved dynamically in the network on a flow-by-flow basis. Considerable progress has also been made in the development of queue management and scheduling disciplines that enable packets to be transmitted on an output link in such a way as to support the quality of service required by the different flows to which the packets belong.
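
As a rough illustration of the per-flow state that such reservation signalling asks a router to hold (a sketch under assumed capacities and flow names, not RSVP itself), consider a reservation table that admits a flow only while capacity remains and from which a rate-proportional scheduler could derive each flow’s share of the output link:

# Illustrative per-flow reservation state; names and rates are assumptions.
class ReservationTable:
    def __init__(self, link_capacity_kbps):
        self.capacity = link_capacity_kbps
        self.reserved = {}                      # flow_id -> reserved rate (kbps)

    def admit(self, flow_id, rate_kbps):
        """Admission control: accept the reservation only if it still fits."""
        if sum(self.reserved.values()) + rate_kbps > self.capacity:
            return False                        # no resources left; request refused
        self.reserved[flow_id] = rate_kbps      # this is the per-flow router state
        return True

    def release(self, flow_id):
        self.reserved.pop(flow_id, None)        # state removed when the flow ends

    def weight(self, flow_id):
        """Share of the link a rate-proportional scheduler would give this flow."""
        total = sum(self.reserved.values())
        return self.reserved[flow_id] / total if total else 0.0

table = ReservationTable(link_capacity_kbps=1000)
print(table.admit("voice-1", 64))    # True  - small voice flow fits
print(table.admit("video-1", 800))   # True  - still within capacity
print(table.admit("video-2", 800))   # False - would exceed the link, so denied
print(round(table.weight("voice-1"), 3))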

So what’s the problem? Well, two really: a scaling problem, and a basic problem related to resource allocation. The scaling problem arises from attempting to associate state with potentially every flow in the Internet. The memory space and packet processing associated with this may be manageable in more or less local or small regional networks, but it cannot be supported in the core or backbone routers of the global Internet. The generally agreed approach to dealing with this problem is to aggregate flows into a few types and to mark packets accordingly at the edges of the network, while at the same time operating admission control. The core then need only perform very simple tests on a small field in each packet to determine which type of behaviour (scheduling) is appropriate for that packet, in the knowledge that admission control will have effectively guaranteed that the resources are available in the network to handle it.
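
The following sketch illustrates that aggregation idea in outline (the class names and the edge policy table are invented for the example): the edge maps each flow to one of a few classes and writes a small mark into the packet, so the core inspects only that one field to choose a queue and keeps no per-flow state:

# Illustrative edge marking and core forwarding; class names are assumptions.
EDGE_POLICY = {                 # assumed service contracts applied at the edge
    "voice": "expedited",
    "visualization-data": "assured",
}

CORE_QUEUES = {                 # per-class behaviour in the core, keyed by the mark
    "expedited": [],            # low-delay queue for continuous media
    "assured": [],              # reliable data with some assurance
    "best-effort": [],          # everything else
}

def mark_at_edge(packet):
    """Edge router: aggregate flows into a few classes and mark the packet."""
    packet["class"] = EDGE_POLICY.get(packet["flow"], "best-effort")
    return packet

def forward_in_core(packet):
    """Core router: a single lookup on the small class field, no per-flow state."""
    CORE_QUEUES[packet["class"]].append(packet)

for pkt in [{"flow": "voice", "payload": b"..."},
            {"flow": "visualization-data", "payload": b"..."},
            {"flow": "web-browsing", "payload": b"..."}]:
    forward_in_core(mark_at_edge(pkt))

print({cls: len(q) for cls, q in CORE_QUEUES.items()})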

The other problem is that any form of resource reservation by one party implies that, in times of high load or potential congestion, some other party is denied service. Without additional criteria, there is no way of deciding who should be given service and who should be denied. One possible way out is to introduce pricing: different qualities of service attract different tariffs, and operators size their networks to carry the expected volumes of each type of traffic, according to observed traffic patterns and specific contracts. One effect of such a mode of operation is that there is a direct incentive to upgrade a network to cope with more traffic of a particular type as soon as traffic projections indicate that denial of service is being approached: suitable investment might not only prevent subscribers moving to another provider but also offer the potential of further profit.
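
A toy allocation (constructed purely for illustration, with invented tariff figures) shows how pricing can supply the missing criterion: when requested reservations exceed capacity, the higher-tariff requests are served and the remainder are denied.

# Toy example only; tariffs and request names are invented for illustration.
def allocate(requests, capacity_kbps):
    """requests: list of (name, rate_kbps, tariff_per_kbps). Highest tariff served first."""
    admitted, denied, used = [], [], 0
    for name, rate, tariff in sorted(requests, key=lambda r: r[2], reverse=True):
        if used + rate <= capacity_kbps:
            admitted.append(name)
            used += rate
        else:
            denied.append(name)          # someone must be refused at high load
    return admitted, denied

admitted, denied = allocate(
    [("premium-video", 600, 0.05), ("standard-video", 600, 0.02), ("voice", 64, 0.08)],
    capacity_kbps=1000)
print("admitted:", admitted, "denied:", denied)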

This article is based in part on a talk given by David Duce last summer at Multi-Service Networks’98, held each year at Cosener’s House, Abingdon, Oxfordshire, UK. Details of this year’s workshop will be available at http://www.acu.rl.ac.uk/msn99/

Please contact:

Chris Cooper - CLRC
Tel: +44 1235 44 6211
E-mail: Chris.Cooper@rl.ac.uk

