JOINT ERCIM ACTIONS
ERCIM News No.37 - April 1999

Control of Network Resources over Multiple Time-Scales

by Matthias Grossglauser


Networked multimedia applications require resource allocation because of their quality of service (QoS) requirements. On the other hand, network efficiency depends crucially on the degree of resource overbooking inside the network. A key obstacle to achieving both goals at once is that the traffic load emitted by multimedia applications fluctuates over multiple time-scales, which makes it hard to predict resource requirements with sufficient accuracy. Control mechanisms must therefore be designed carefully so that they cover all time-scales.

In our work, we examine resource control over three natural time-scales. On the packet time-scale, we evaluate the performance of traffic smoothing as a mechanism to accommodate bandwidth fluctuation. Our interest stems from the mounting experimental evidence that packet arrival processes exhibit ubiquitous properties of self-similarity and long-range dependence (LRD). A random process exhibits long-range dependence if it has a non-summable autocorrelation function, ie its correlations decay so slowly that their sum diverges. Intuitively, this means that the process exhibits fluctuations over a wide range of time-scales. This property is important because it cannot be captured by Markovian traffic models, which have traditionally been the analytical tool of choice in the teletraffic community. However, we show that in the case of traffic smoothing, there exists a correlation horizon that separates relevant from irrelevant fluctuation time-scales for the purpose of performance prediction. This illustrates the general principle that the traffic, system, and performance-metric time-scales together determine the set of candidate traffic models.
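The intuition behind the correlation horizon can be illustrated with a small numerical sketch in Python. The rate trace, time constants, and units below are purely synthetic and invented for illustration (they are not taken from the project): a slow, scene-level component plus fast, frame-level noise. The sketch shows that a smoother absorbs fluctuations shorter than its averaging window, while fluctuations on longer time-scales still dictate the bandwidth that must be reserved.

# Minimal sketch (synthetic trace, illustrative numbers only): per-flow smoothing
# absorbs rate fluctuations shorter than the smoothing delay, but not the
# longer-term ones.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-flow rate trace (Mbit/s) sampled every 10 ms: a slow
# "scene-level" component plus fast "frame-level" noise.
t = np.arange(20_000)
slow = 2.0 + 1.5 * np.sin(2 * np.pi * t / 5_000)   # fluctuations over seconds
fast = rng.exponential(1.0, size=t.size)           # fluctuations over ~10 ms
rate = slow + fast

def peak_after_smoothing(rate, window):
    """Peak of the moving-average rate, ie the bandwidth a smoother with the
    given averaging window would still have to reserve."""
    kernel = np.ones(window) / window
    return np.convolve(rate, kernel, mode="valid").max()

for window in (1, 10, 100, 1_000):                 # 10 ms ... 10 s smoothing delay
    print(f"window {window:5d} samples -> required rate "
          f"{peak_after_smoothing(rate, window):5.2f} Mbit/s")

Once the window exceeds the fast time-scale, further smoothing brings little gain, because the slow component of the fluctuations remains.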

Per-flow smoothing is not effective in removing the longer-term traffic fluctuations. To achieve high utilization, we therefore need a mechanism to share the link bandwidth among multiple flows. We advocate renegotiation as an efficient mechanism to accommodate fluctuations over time-scales beyond the correlation horizon, which we call the burst time-scale. A new network service model called RCBR (Renegotiated Constant Bit Rate) combines network simplicity with desirable quality of service guarantees, while achieving much of the potential statistical multiplexing gain of bursty traffic. With RCBR, the network guarantees a constant bit rate to the application. The application can renegotiate this bit rate, but there is a small probability of renegotiation blocking. A network implementing RCBR is simple because there is no substantial buffering in the network, and therefore no need for elaborate buffer management and packet scheduling mechanisms. The quality of service is determined by the renegotiation blocking probability, which is kept small enough by limiting the number of flows in the system. This is achieved through admission control.
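To make the service model concrete, here is a minimal, hypothetical sketch of the RCBR renegotiation rule at a single link: a flow's reserved rate changes only if the new aggregate still fits the link capacity; otherwise the request is blocked and the old rate stays in force. The class name, interface, and numbers are invented for illustration; the article does not specify an implementation.

# Minimal sketch (hypothetical interface and numbers) of the RCBR idea.
class RCBRLink:
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = {}            # flow_id -> currently reserved rate

    def renegotiate(self, flow_id, new_rate):
        """Grant the new rate only if the aggregate still fits the link."""
        other = sum(r for f, r in self.reserved.items() if f != flow_id)
        if other + new_rate <= self.capacity:
            self.reserved[flow_id] = new_rate
            return True               # request granted
        return False                  # renegotiation blocked; old rate kept

link = RCBRLink(capacity=100.0)
link.renegotiate("flow-1", 40.0)          # initial reservation
link.renegotiate("flow-2", 50.0)
print(link.renegotiate("flow-1", 60.0))   # False: 60 + 50 > 100, blocked
print(link.renegotiate("flow-1", 45.0))   # True: 45 + 50 <= 100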

On the flow time-scale, we discuss measurement-based admission control (MBAC) as a means of relieving the application of the burden of a priori traffic specification. The traditional approach to admission control requires an a priori traffic descriptor in terms of the parameters of a deterministic or stochastic model. However, it is generally hard or even impossible for the user or the application to come up with a tight traffic descriptor before establishing a flow. MBAC avoids this problem by shifting the task of traffic characterization from the user to the network, so that admission decisions are based on traffic measurements instead of an explicit specification. This approach has several important advantages. First, the user-specified traffic descriptor can be trivially simple (eg, peak rate). Second, an overly conservative specification does not result in an over-allocation of resources for the entire duration of the session.
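A well-known example of such a rule (not necessarily the scheme studied in this project) is a 'measured sum' test: admit a new flow only if the measured aggregate load plus the flow's simple descriptor, eg its peak rate, stays below a utilization target on the link. The following Python sketch uses invented numbers purely for illustration.

# Minimal sketch (hypothetical numbers) of a "measured sum" style MBAC rule.
def admit(measured_load, new_peak_rate, capacity, target_utilization=0.9):
    """Admit the new flow if measured load plus its peak rate fits the target."""
    return measured_load + new_peak_rate <= target_utilization * capacity

# Example: 622 Mbit/s link, measured aggregate load, 10 Mbit/s peak-rate flow.
print(admit(measured_load=480.0, new_peak_rate=10.0, capacity=622.0))   # True
print(admit(measured_load=555.0, new_peak_rate=10.0, capacity=622.0))   # False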

Relying on measured quantities for admission control raises a number of issues that have to be understood in order to develop robust schemes.

Estimation Error

Errors are possible with any estimation procedure. In the context of MBAC, estimation errors can translate into erroneous flow admission decisions. The effect of these decision errors has to be studied carefully, because they add a second level of uncertainty to the system, the first level being the stochastic nature of the traffic itself.
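A small numerical illustration (all figures invented) shows how a measurement error can flip an admission decision: a noisy estimate that underestimates the true aggregate load leads the controller to admit a flow it should have rejected.

# Illustrative only: the true aggregate load is 580 Mbit/s, but a noisy
# measurement reports 540 Mbit/s, so the admission test passes even though
# the true load would have failed it.
capacity, target = 622.0, 0.95
true_load, measured_load, new_peak = 580.0, 540.0, 20.0

admit_on_measurement = measured_load + new_peak <= target * capacity   # True
admit_on_true_load = true_load + new_peak <= target * capacity         # False
print(admit_on_measurement, admit_on_true_load)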

Dynamics and Separation of Time-Scales

An MBAC is a dynamical system, with flow arrivals and departures, and parameter estimates that vary with time. Since the estimation process measures the burst-scale statistics of the flows already in progress, while admission decisions are made for each arriving flow, MBAC inherently couples the flow and burst time-scale dynamics. Thus, the question arises of how flow arrivals and departures affect QoS. Intuitively, each arriving flow carries the potential of a wrong decision, and each departing flow gives the system a chance to recover from a past mistake.

Memory

The quality of the estimators can be improved by using more past information about the flows present in the system. However, memory in the estimation process adds another component to the dynamics of an MBAC. Too large a memory window reduces the ability of the MBAC to adapt to non-stationarities in the traffic statistics. A key issue is therefore to determine an appropriate size for the memory window.
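The trade-off can be seen in a small sketch with synthetic data: a short estimation window tracks a shift in the traffic statistics quickly but is noisier in general, while a long window averages out burst-scale noise but lags behind the change. The trace, level shift, and window sizes below are illustrative only.

# Minimal sketch (synthetic data) of the memory trade-off.
import numpy as np

rng = np.random.default_rng(1)
true_mean = np.concatenate([np.full(5_000, 50.0), np.full(5_000, 80.0)])  # level shift
trace = true_mean + rng.normal(0.0, 10.0, size=true_mean.size)

def window_estimate(trace, t, window):
    """Mean rate estimated from the last `window` samples before time t."""
    return trace[max(0, t - window):t].mean()

t = 5_200          # shortly after the shift from 50 to 80
for window in (50, 500, 5_000):
    print(f"window {window:5d}: estimate {window_estimate(trace, t, window):6.1f} "
          f"(true mean is 80.0)")

The short window is already close to the new mean, while the long windows are still dominated by pre-shift samples.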

Using a simple model that captures the impact of measurement uncertainty and the interplay of burst and flow time-scale dynamics, we study all of the above issues in a unified analytical framework. The goal is to provide insight into the design of robust MBAC schemes that can deliver QoS guarantees in the presence of measurement uncertainty, without requiring the tuning of external system parameters. The figure illustrates how the traffic time-scales and control mechanisms considered in our work relate to each other.

In our future work, we hope to address other problems that can affect the quality of service experienced by the user. The Internet is growing rapidly in size and capacity, enabling new services such as virtual private networks (VPN) and voice-over-IP, with more vendors providing its elements and an increasing number of operators competing for market share. The resulting system is of such scale and complexity that one must assume there is always something wrong somewhere. The resulting flood of alarm information makes it difficult for human network operators to detect, isolate, and repair faults efficiently. This underscores the importance of network management. Performance monitoring, fault identification and localization, planning and resource provisioning, and configuration management are important and challenging future research topics, encompassing both architectural and performance issues.

More information about this project is available at: http://www.research.att.com/~mgross/

Please contact:

Matthias Grossglauser - AT&T Labs - Research
Tel: +1 973 360 7172
E-mail: mgross@research.att.com

Matthias Grossglauser was the winner of the 1998 Cor Baayen Award competition (see http://www.ercim.eu/activity/cor-baayen.html). The work for which Grossglauser, an EPFL graduate, received the award was carried out at INRIA Sophia Antipolis under the guidance of Jean Bolot, where he was a member of the RODEO team. He defended his thesis in spring 1998 on the topic 'Control of Network Resources over Multiple Time-Scales'. Grossglauser recently joined AT&T Laboratories in the US.

