
Introduction

Recent studies on real-time multimedia applications transmitted over packet-switched networks [60,81,103,152] have emphasized the difficulties raised by the best-effort Internet service model. Best-effort delivery introduces variable delays and loss patterns that greatly degrade multimedia quality. Since this delivery policy will not change for a long time, obtaining acceptable Quality of Service (QoS) levels requires control mechanisms that eliminate, or at least minimize, the negative effects of specific network parameters on the quality of multimedia signals as perceived by users at the destination [18]. Indeed, the quality perceived by end-users may define the scope of applicability [81], or even the final acceptance, of real-time multimedia services [147]. Several approaches have been developed to address this issue, including:
a) forward error correction (FEC) techniques, which minimize the effect of packet loss by sending additional information to aid packet recovery [18,103,116];
b) control mechanisms at the destination that minimize the effect of delay variations between successive packets [19,37];
c) scalable bit-rate codecs combined with prioritized transmission algorithms at the network layer, used to obtain a graceful degradation of quality during network congestion periods [11];
d) TCP-friendly rate control protocols that avoid congesting the network [22,26,41,42,44,77,107,108].
One of the goals behind the use of TCP in the Internet is to avoid the collapse of the network. TCP has the property of being fair to other TCP-like flows, which avoids congesting the network while providing the maximum possible utilization of it. However, as mentioned in Chapter 3, TCP is not suitable for the transport of real-time multimedia flows.
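The TCP-friendly idea behind approach d) can be made concrete with the TCP throughput equation of Padhye et al., which equation-based rate control protocols use as their target sending rate. The sketch below is an illustration with assumed parameter values (packet size, RTT, loss rate), not the protocol developed in this thesis.

```python
# Illustrative sketch: the TCP throughput equation that TCP-friendly
# rate control protocols use as a target rate. Given the measured loss
# event rate p, round-trip time R and packet size s, a sender throttles
# itself to roughly the rate a TCP connection would obtain on that path.

from math import sqrt

def tcp_friendly_rate(s, rtt, p, t_rto=None, b=1):
    """TCP-friendly sending rate in bytes/second.

    s     -- packet size (bytes)
    rtt   -- round-trip time (seconds)
    p     -- loss event rate, 0 < p <= 1
    t_rto -- retransmission timeout, commonly approximated as 4 * rtt
    b     -- packets acknowledged per ACK (1 with no delayed ACKs)
    """
    if t_rto is None:
        t_rto = 4.0 * rtt
    denom = (rtt * sqrt(2.0 * b * p / 3.0)
             + t_rto * min(1.0, 3.0 * sqrt(3.0 * b * p / 8.0))
             * p * (1.0 + 32.0 * p ** 2))
    return s / denom

# 1000-byte packets, 100 ms RTT, 1% loss: a TCP-friendly source may
# send on the order of a hundred kilobytes per second.
print(round(tcp_friendly_rate(1000, 0.1, 0.01)))
```

Note how the rate falls as the loss event rate grows: this is exactly the back-off behavior that makes such a source fair to competing TCP flows, and also the reason an adaptive multimedia sender may be forced well below the rate it would like to use.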
The most appropriate protocol for these flows is UDP, in conjunction with other protocols (RTP, RTCP, etc.). UDP is unreliable in the sense that the sender cannot guarantee that packets will arrive at the destination in the correct order, or arrive at all; the source sends packets at whatever rate it wants, without regard for whether this congests the network. Allowing sources to send at the rate they need can result in severe network congestion. Hence the idea of replacing open-loop UDP with rate control protocols that regulate the sending rate of all UDP sources, with the goals of avoiding congestion while remaining fair to competing TCP flows. The majority of the existing rate control protocols aim to reach all these goals; they are therefore generally referred to as TCP-Friendly Rate Control (TFRC for short) protocols. These protocols reuse the existing mechanisms of TCP to regulate the sending rate of UDP sources. It is clear from this that the performance of such protocols may, in some situations, exhibit the very problems for which TCP is not used to transport real-time multimedia flows (see Section 3.3 for details). Using TCP-Friendly mechanisms to adapt the sending rate makes a flow behave like a TCP connection, and hence less aggressively than open-loop UDP flows. A TCP-Friendly sender is able to adapt its bandwidth consumption to network conditions, which means that it will send less data than a normal open-loop UDP application. It also means that adapting the flow to the currently measured network conditions is beneficial to the global Internet but may dramatically reduce the perceived quality [129]. It is stated in [126] that ``All of those approaches assume that the sender can adjust its transmission rate in accordance with the values determined by the adaptation scheme and in accordance with the dynamics of the TCP-friendly algorithm with arbitrary granularity and with no boundaries as to the maximum or minimum bandwidth values.
However, in reality there are certain maximum and minimum bandwidth requirements for multimedia contents above and below which the sent data is meaningless for the receiver. Additionally, the used compression style for a multimedia content might dictate that the changes in the transmission rate of a video stream for example can only be realized in a granular way. Also, there might be some restrictions arising from the user himself with regard to the acceptable variations in the perceived QoS over a time interval. Such restrictions depend primarily on the transferred content, the used compression style, the communication scenario and the user himself.'' Therefore, once the sender is obliged to decrease its bandwidth, it has to decide how to decrease it so as to maximize the quality perceived by the end-users. As we will see, this is the goal of the control mechanism that we propose in this Chapter. The diversity of the existing encoding algorithms (and hence of the encoders) makes it possible for one encoder to give better quality than the others under the same conditions (network state, output bit rate, etc.), as we showed in the previous Chapter. In addition, for a given encoder, we can considerably reduce the network utilization (bit rate) by changing some of its parameters (for example, the quantization parameter or the frame rate of a video encoder), at the cost, of course, of some impact on the perceived quality. An example of such a parameter is the frame rate of video streams: it can be reduced from 30 to 15 frames/sec without losing a significant amount of perceived quality. This is because experiments [119] (confirmed in our study) show that the human visual system (HVS) is not very sensitive to changes in frame rate above 16 frames/sec (see Chapter 7 for similar conclusions).
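The decision just described, which encoder configuration to use once the allowed bandwidth drops, can be sketched as a small search over candidate configurations. The candidate table and the quality scores below are made-up illustrations of the idea, not measurements from this thesis.

```python
# Hypothetical sketch: when the rate controller lowers the bandwidth
# budget, the sender picks the encoder configuration (frame rate,
# quantizer) whose bit rate fits the budget while maximizing the
# perceived quality. All numbers below are invented for illustration.

# (frame_rate [fps], quantizer) -> (bit_rate [kb/s], predicted quality [1..5])
CANDIDATES = {
    (30, 4): (512, 4.6),
    (30, 8): (320, 4.1),
    (15, 4): (300, 4.0),   # halving the frame rate costs little quality
    (15, 8): (190, 3.4),
    (10, 8): (140, 2.9),
}

def best_configuration(budget_kbps):
    """Feasible configuration with the highest predicted quality."""
    feasible = {cfg: (rate, q) for cfg, (rate, q) in CANDIDATES.items()
                if rate <= budget_kbps}
    if not feasible:
        return None                    # even the lowest bit rate overshoots
    return max(feasible, key=lambda cfg: feasible[cfg][1])

print(best_configuration(350))   # -> (30, 8): best quality under 350 kb/s
```

The point of the sketch is that two configurations with similar bit rates can differ noticeably in perceived quality, so a perception-blind controller that only matches the budget may pick the worse one.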
These two observations, after a thorough analysis and study of both speech and video streams distorted by several values of network and encoder parameters, led us to design a new protocol that delivers the best possible quality of the stream while maintaining its TCP-friendliness, so as to avoid network collapse during congestion. In this Chapter, we integrate user perception together with network parameters into rate control, instead of basing the rate control only on passive network measurements (loss, delay, etc.) as the traditional protocols do. The neural network approach that we described in Chapter 4 and validated in Chapters 5 and 6 can measure the quality of multimedia flows affected by both network and encoding impairments. As we have seen, the results obtained correlate well with those obtained by human subjects for a wide range of values of both network and encoding distortions. An additional motivation of our work is to lay the basis for designing network protocols that take into account the end-user perception of quality instead of relying only on passive network measurements. Until now, this has not been possible, because no mechanism could measure the quality in real time without access to the original signal, give results that correlate well with human perception, and quantify the direct influence of each quality-affecting parameter while remaining computationally simple. Our tool satisfies all these requirements to a certain extent. This Chapter is organized as follows: in Section 8.2, we provide an overview of the existing rate control mechanisms found in the literature. In Section 8.3, we describe our proposed rate control scheme and list the possible controlling parameters. We validate our protocol and show the obtained results in Section 8.4.
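The overall idea of the proposed scheme, network measurements fixing a TCP-friendly budget and a quality estimator choosing how to spend it, can be summarized as one adaptation step. Everything below is a hypothetical sketch: the square-root rate rule is a simplification of the TCP-friendly equation, and the linear quality stub merely stands in for the neural network estimator of Chapter 4.

```python
# Hypothetical sketch of one adaptation epoch: measured loss and RTT fix
# the TCP-friendly bandwidth budget; a quality predictor (a made-up
# linear stub here, the neural network in the thesis) then ranks the
# feasible encoder configurations. All constants are invented.

from math import sqrt

def tfrc_rate_kbps(pkt_bytes, rtt, loss):
    """Simplified TCP-friendly budget (square-root rule), in kb/s."""
    return pkt_bytes * 8 / 1000 / (rtt * sqrt(2 * loss / 3))

def predicted_quality(frame_rate, quantizer, loss):
    """Stub standing in for the neural-network quality estimator."""
    return 5.0 - 0.05 * quantizer - 0.02 * (30 - frame_rate) - 20 * loss

def control_step(rtt, loss, configs):
    """One epoch: compute the budget, then the best feasible config."""
    budget = tfrc_rate_kbps(1000, rtt, loss)
    feasible = [(fr, q) for fr, q, rate in configs if rate <= budget]
    if not feasible:
        # nothing fits: fall back to the cheapest configuration
        cheapest = min(configs, key=lambda c: c[2])
        return (cheapest[0], cheapest[1])
    return max(feasible, key=lambda c: predicted_quality(c[0], c[1], loss))

# (frame_rate, quantizer, bit_rate kb/s) candidates -- made-up numbers
configs = [(30, 4, 900), (30, 8, 500), (15, 4, 480), (15, 8, 280)]
print(control_step(rtt=0.1, loss=0.01, configs=configs))   # -> (30, 4)
```

As the loss rate grows, the budget shrinks and the controller migrates toward cheaper configurations, but always toward the cheapest one with the best predicted quality rather than an arbitrary one, which is precisely the difference from purely network-driven adaptation.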
A general discussion of some points related to the proposed protocol is given in Section 8.5. Finally, Section 8.6 concludes this part of our work.
Samir Mohamed 2003-01-08