
TECHNICAL RESEARCH REPORT
A Parallel Virtual Queue Structure for Active Queue Management by Jia-Shiang Jou, Xiaobo Tan, John S. Baras

CSHCN TR 2003-18 (ISR TR 2003-36)

The Center for Satellite and Hybrid Communication Networks is a NASA-sponsored Commercial Space Center also supported by the Department of Defense (DOD), industry, the State of Maryland, the University of Maryland and the Institute for Systems Research. This document is a technical report in the CSHCN series originating at the University of Maryland. Web site http://www.isr.umd.edu/CSHCN/


A Parallel Virtual Queue Structure for Active Queue Management
Jia-Shiang Jou, Xiaobo Tan and John S. Baras
Department of Electrical and Computer Engineering and Institute for Systems Research
University of Maryland, College Park, MD 20742 USA
{jsjou, xbtan, baras}@isr.umd.edu

(Research partially supported by DARPA through SPAWAR, San Diego, under contract No. N66001-00-C-8063.)

Abstract— The performance of the Adaptive RED scheme is susceptible to bursty web traffic. In this paper a parallel virtual queue structure is proposed for active queue management at the bottleneck router. Real time connections such as web and UDP, and non-real time connections such as FTP, are served in two different virtual queues with drop-tail and Adaptive RED policies, respectively. Both queues share the same physical buffer memory. Simulation shows that file delivery for the web traffic is greatly improved due to shorter packet delay and lower packet loss rate, and that the queue length variation for the FTP traffic is small. To improve the goodput of the non-real time connections, a modified Adaptive RED scheme with dynamic queue length thresholds is also proposed. This scheme is able to keep the packet dropping probability within a desired small region. Stability of the queue length under this scheme is studied through analysis and numerical computation.

Index Terms— Adaptive RED, Active Queue Management, Virtual Queue, Scheduling.

I. INTRODUCTION

To keep the queuing delay small, the buffer size in a router is in general not large. However, a router with a small buffer often has a high packet dropping rate, since Internet traffic is bursty. When packets are lost, the TCP protocol dramatically reduces the flow rate during the congestion avoidance phase [1]. Therefore, after a buffer overflow event in a drop-tail queue, all connections sense packet loss and slow down their transfer rates simultaneously. In order to prevent this global synchronization phenomenon and increase link utilization, many active queue management schemes such as RED (Random Early Detection) [2] have been proposed and have received increasing attention. The basic idea of RED is to randomly drop packets to prevent buffer overflow and the global synchronization problem.

The dropping probability is a non-decreasing function of the queue length. A TCP connection with a higher flow rate has a better chance of getting packets dropped, and so reduces its rate more rapidly. By dropping packets actively, RED keeps the queue length within a desired region. However, simulation and analysis results [3] [4] [5] have demonstrated that the performance of RED is very sensitive to parameter settings. Based on the original idea of RED, there have been several modifications such as Stabilized RED (SRED) [6], Flow RED (FRED) [7], Weighted RED [8], Random Early Marking (REM) [9], BLUE [10] and Adaptive RED [11] [12]. The Adaptive RED scheme dynamically updates the maximum dropping probability according to the exponentially weighted moving average (EWMA) of the queue length, which makes it more robust with respect to the congestion level (a sketch of this adaptation rule is given at the end of this section). The Adaptive RED policy provides good rate control for TCP connections operating in the congestion avoidance phase [13] [12]. However, a great portion of Internet traffic is web and UDP traffic. Since most web connections involve the transfer of several small files, these connections have a short life and mostly operate in the TCP slow start phase with a small congestion window. Dropping web packets in this phase is not an effective way to control the traffic rate and alleviate the congestion at the bottleneck router. Furthermore, from the viewpoint of a web user, one or several packet losses in the slow start phase lead to extra delay for retransmission or even TCP timeout. They may also force TCP to enter the congestion avoidance phase prematurely with a small congestion window, resulting in low throughput. The delay and low throughput severely degrade the performance of delivering short messages such as web pages, and web browsers experience long waiting times even on a high speed network. On the other hand, the Adaptive RED fails to maintain the queue length within the desired region due to the bursty nature of web traffic. To address


these problems, we propose a parallel virtual queue structure for active queue management in this paper. In this structure, real time (web, UDP) and non-real time (FTP) traffic are separated into two different virtual queues which share the same physical buffer memory. The drop-tail policy is applied at the first virtual queue to serve real time applications. In order to achieve a small mean delay, the service rate of this drop-tail queue is dynamically determined by its virtual queue length. The remaining non-real time traffic is directed to an Adaptive RED virtual queue. Simulation shows that this parallel virtual queue structure not only has the advantages of Adaptive RED, such as high link utilization and small delay, but also greatly reduces the total packet loss rate at the router. Although the bandwidth is shared with the bursty drop-tail virtual queue, the Adaptive RED queue has a small length variation. The original Adaptive RED dynamically changes the maximum dropping probability Pmax to keep the queue length within the thresholds. However, for some non-real time applications, high goodput (low packet dropping rate) is more important than short packet delay. Hence we explore a modified Adaptive RED policy for the non-real time applications at the second virtual queue, where the queue length thresholds are dynamically adjusted to maintain the dropping probability of the Adaptive RED algorithm in a desired range. The remainder of the paper is organized as follows. In Section II, we demonstrate the vulnerability of the Adaptive RED in the presence of web and UDP traffic. The parallel virtual queue structure is described in Section III. Comparison of this approach with the original Adaptive RED scheme is given through simulation in Section IV. In Section V, we present the modified Adaptive RED policy with dynamic queue length thresholds. Performance analysis is provided for both virtual queues (the drop-tail and the modified Adaptive RED queue) in Section VI. Finally, we conclude in Section VII.
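The Adaptive RED adaptation mechanism mentioned above can be summarized in a few lines. The following is a minimal sketch of the AIMD rule of Floyd et al. [12], not the exact code used in our simulations; the target band and the bounds on Pmax follow [12], and the function name is ours:

```python
def adapt_pmax(avg_q, p_max, minth, maxth, alpha=0.01, beta=0.9):
    """One adaptation step of Adaptive RED's maximum drop probability.

    AIMD rule from Floyd et al. [12]: if the EWMA queue length drifts
    above the target band, increase p_max additively; if it falls below,
    decrease it multiplicatively. The target band is the middle of
    (minth, maxth); alpha and beta match Table II.
    """
    target_lo = minth + 0.4 * (maxth - minth)
    target_hi = minth + 0.6 * (maxth - minth)
    if avg_q > target_hi and p_max <= 0.5:
        p_max += alpha            # additive increase
    elif avg_q < target_lo and p_max >= 0.01:
        p_max *= beta             # multiplicative decrease
    return p_max
```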
II. VULNERABILITY OF ADAPTIVE RED TO WEB-MICE

In this section we consider a scenario containing short-life TCP (WEB), UDP (CBR) and long-life TCP (FTP) traffic. The purpose is to demonstrate that the performance of the Adaptive RED scheme is severely degraded by the short-life web traffic. The network in our ns2 experiment has a simple dumbbell topology with the bottleneck link bandwidth C = 3.0 Mbps. One side of the bottleneck consists of 800 web clients. Each client sends a web request and, after the end of each session, has a think time drawn from an Exponential distribution with mean 50 s. The other side contains 800 web servers, running the HTTP 1.1 protocol and having a Pareto [14] file size distribution with parameters (Kp = 2.3 Kbytes, α = 1.3) (mean 10 Kbytes). The round-trip propagation delay of the HTTP connections is uniformly distributed in (16, 240) ms. Note that the mean rate of the aggregate web traffic is around 1.2 Mbps. There is one CBR traffic source which periodically generates a 1 Kbyte UDP packet every 50 ms. Besides these short web connections and UDP traffic, there are 10 persistent FTP connections sharing the bottleneck link, with a round-trip propagation delay of 64 ms.

Figure 1 shows that the Adaptive RED works well with those FTP connections before the web traffic comes in. However, after the CBR source and web servers begin to share the bandwidth at time t = 100 s, the queue length of Adaptive RED deviates dramatically from the desired region. Since the Adaptive RED scheme relies on the average queue length to determine the dropping probability and control the TCP flow rate, the extra queue length perturbation contributed by the bursty web traffic makes the Adaptive RED increase/decrease its dropping probability rapidly. This over-reaction causes a great queue length variation and poor performance in packet delay and loss.
Fig. 1. Queue length of the Adaptive RED: 10 FTPs starting at t = 0; 800 WEBs and 1 CBR coming in at t = 100 s.

Since most web pages contain one or several very small files, these TCP connections mostly operate in the slow start phase during their session life. According to the TCP protocol, the congestion window is then just beginning to increase from its initial value, and the flow rate is low. Dropping packets in the slow start phase cannot efficiently alleviate the congestion level at the bottleneck router. In other words, any random dropping/marking policy such as RED is unable to effectively control the congestion level without considering short-life TCP (and UDP) traffic. Furthermore, losing one or two packets in the slow start phase not only causes a very low throughput and extra delay, but also leads to


a high probability of connection timeout. This is further illustrated below. From the viewpoint of a web browser, a short-life TCP session may only need several round-trip times (RTTs) to finish the whole transmission. When the sender senses a packet loss, the slow start threshold ssthresh is reduced to min(cwnd, rcv_window)/2 [1] and the new congestion window size cwnd is also decreased, depending on the TCP version (for TCP Reno, the new cwnd = ssthresh and TCP enters the fast recovery phase; for TCP Tahoe, cwnd = MSS (maximum segment size) and TCP begins a new slow start phase). Since the original cwnd is just beginning to increase from its initial value MSS in the first slow start phase, one packet loss during the initial several round-trip times leads TCP to enter the congestion avoidance phase with a very small ssthresh and cwnd. In the congestion avoidance phase, TCP slowly increases cwnd (the increment is about one MSS per round-trip time) from the current ssthresh. Therefore, after losing one packet in the slow start phase, TCP takes a long time to complete even a short message. In addition, since the web traffic is short but bursty, these web connections usually experience a higher packet loss rate (see the web packet loss rates of the Adaptive RED and the drop-tail policies in Table III). The default initial value of ssthresh is 64 KB and the packet size is 1 KB in this paper. Assuming a typical packet dropping probability Pd = 0.04 when using the Adaptive RED, the probability of losing one or more packets in the slow start phase is equal to 1 − (1 − Pd)^64 = 0.9267 (assuming that packets are dropped independently). Therefore, most TCP connections have at least one packet dropped in their first slow start phase. For example, assuming that the 15th packet is dropped by the Adaptive RED, ssthresh decreases from 64 KB to 4 KB and the new congestion window cwnd is decreased from 8 KB to 1 KB (Tahoe). The situation gets worse if a packet is dropped earlier (in the first 3 round-trip times): the congestion window at this moment is so small that the sender may not have enough data packets in flight to trigger the receiver to generate three duplicate acknowledgements. If packets cannot be recovered by the fast recovery scheme, TCP has to depend on the protocol timer for error recovery. The default value of the protocol timer is usually large, and the delivery delay can be increased dramatically by timeout events. Moreover, the probability of losing two or more packets of the same congestion window in the slow start phase also cannot be ignored. Such events lead to a high probability of TCP timeout and connection reset. For illustration we conduct a simulation of transferring a

TABLE I
DELIVERY DELAY OF SMALL FILE: MEAN AND STANDARD DEVIATION

File \ Pd    0.00            0.02          0.04          0.08
30KB         0.88 (0.0006)   1.60 (1.88)   2.74 (4.27)   5.88 (7.79)
90KB         1.18 (0.0008)   2.79 (2.39)   4.81 (3.91)   9.24 (6.30)
150KB        1.34 (0.0008)   3.51 (1.90)   6.51 (4.60)   13.38 (8.87)
small web file in a stand-alone, one-hop environment. There is no other traffic sharing the bandwidth, and packets are dropped intentionally. Figure 2 shows the mean delivery delay vs. the dropping probability for file sizes 30 KB to 210 KB, and Table I lists the mean and standard deviation of the delay. For example, TCP takes about 4.81 s to complete transmission of a 90 KB file if Pd = 0.04; in comparison, in the loss-free case, the file can be delivered in 1.18 s. Since most web pages have sizes in the above range, a web browser will experience a long response time when the dropping probability of the Adaptive RED is high.
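The 0.9267 figure quoted above is a one-line computation. A minimal sketch (the 64-packet count corresponds to the 64 KB initial ssthresh with 1 KB packets):

```python
# Probability that a short TCP transfer loses at least one packet during
# slow start, assuming each of the first n packets is dropped
# independently with probability p (as in the text: n = 64, p = 0.04).
def slow_start_loss_prob(p: float, n: int = 64) -> float:
    return 1.0 - (1.0 - p) ** n

for p in (0.02, 0.04, 0.08):
    print(f"Pd = {p:.2f}: P(>=1 loss in slow start) = {slow_start_loss_prob(p):.4f}")
# Pd = 0.04 gives 0.9267, matching the value quoted in the text.
```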
Fig. 2. Mean delivery delay of a small file vs. dropping probability Pd, with file sizes 30, 60, ..., 210 Kbytes, bandwidth 3 Mbps and round-trip time 128 ms.

III. A PARALLEL VIRTUAL QUEUE STRUCTURE

To solve the problem discussed in Section II, we propose a parallel virtual queue structure in the router. The first virtual queue deals with the short-life real-time traffic (web, UDP). Since dropping these packets cannot effectively alleviate the congestion level, but severely increases delivery delay, it is better to keep them in the queue unless the total buffer (shared with the other queue) has overflowed. Hence, the queuing policy of the first virtual queue is chosen to be drop-tail to minimize the packet loss rate. In order to provide a short delivery delay for web browsers and UDP connections, the service rate C1(t) is changed dynamically according to the virtual queue length q1(t).


The second virtual queue serves long-life TCP connections such as FTP with large file sizes, where the Adaptive RED is used. Although the available bandwidth of this queue is determined by C2(t) = C − C1(t), the Adaptive RED scheme is expected (and will be verified by simulation in Section IV) to keep its virtual queue length q2(t) in a desired region, for the following reason. When there is a heavy workload at the drop-tail queue, C2(t) decreases quickly. FTP receivers experience slower packet arrival rates and send acknowledgement packets (ACKs) back more slowly. Without increasing the dropping probability at the Adaptive RED queue, the slower ACK arrival rates make the FTP senders reduce their flow rates automatically, without shrinking their congestion window sizes. On the other hand, when the congestion level is alleviated, the Adaptive RED queue receives more bandwidth. Since the congestion window sizes at the FTP servers are still large, the throughput of the FTP connections is quickly recovered by the faster arrival of ACK packets from the receivers. With this parallel virtual queue structure (which will be called the RED+Tail policy in this paper), we keep the benefits of Adaptive RED such as high (100%) link utilization. Furthermore, the packet loss rate of the short-life TCP and UDP connections is greatly reduced by the drop-tail policy and the shared buffer. The packet loss rate of long-life TCP traffic is also reduced due to the suppressed bandwidth, larger thresholds (longer RTT) and a more stable average virtual queue length at the Adaptive RED queue. We now discuss how to implement the RED+Tail policy. The first problem is how to split the long-life traffic from the short-life web traffic at the router. To this end, the router has to know the age or elapsed time of each TCP connection. Unfortunately, this information is hidden in the TCP header, which is not available to the IP router. However, one may roughly estimate the elapsed time by using the following approach (a sketch combining this classifier with the service-rate algorithm below is given after the algorithm):
• When a packet arrives with a new source-destination pair which has not been seen by the router in the past T sec, we treat it as a new TCP connection and identify it as a short-life connection;
• Send the new connection's packets to the drop-tail queue;
• Set a counter for the number of packets of this connection;
• If the cumulative packet count is greater than a threshold N, we assume that the file size is large enough and this TCP connection has already left its slow start phase. We redirect the subsequent packets of this connection to the Adaptive RED queue;
• Remove the counter if there is no packet arrival in

the last T sec.

Preliminary simulation results show that this approach successfully prevents small web traffic from entering the RED queue. The probability of false alarm is less than 0.02 in our scenario. Since the web traffic has small file sizes and short session times, there is no harm if the short-life connection packets are misdirected to the RED queue after time T. Figure 3 shows the RED+Tail parallel queue structure in the router. Recall that C1(t) and C2(t) denote the service rates of the drop-tail queue and the Adaptive RED queue at time t, respectively. In order to allocate bandwidth dynamically to both queues and assign a desired region of queue length for the Adaptive RED queue, we define the maximum threshold maxth_i and minimum threshold minth_i for i = 1, 2. The service rates C1(t) and C2(t) are given by the following algorithm:
• if q1 = 0, then C1(t) := 0.
• if 0 < q1 < minth1, then C1(t) := C1min.
• if minth1 ≤ q1, then C1(t) := min(C·q1/maxth1, C1max).
• C2(t) := C − C1(t), where C is the link bandwidth.
The variable q1 denotes the queue length of the drop-tail queue. The constant C1max preserves the minimum available bandwidth C − C1max for the RED queue, to prevent FTP connections from timing out.
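The classification heuristic and the service-rate algorithm above combine into a few lines of router-side logic. The following is a minimal sketch, not the authors' implementation; the flow-table structure, names and the parameter values T, N, C1min and C1max are illustrative assumptions (the thresholds follow Table II):

```python
import time

# Illustrative parameters (assumptions, not taken from the paper
# except MINTH1/MAXTH1, which follow Table II).
T = 30.0                        # seconds of inactivity before a flow entry expires
N = 32                          # packets after which a flow is treated as long-life
C = 3.0e6                       # link bandwidth (bits/s)
C1_MIN, C1_MAX = 0.1e6, 2.0e6   # service-rate bounds for the drop-tail queue
MINTH1, MAXTH1 = 2_000, 30_000  # drop-tail thresholds (bytes)

flows = {}                      # (src, dst) -> (packet_count, last_seen)

def classify(src, dst, now=None):
    """Return 'droptail' for (presumed) short-life flows, 'red' otherwise."""
    now = now if now is not None else time.time()
    count, last = flows.get((src, dst), (0, now))
    if now - last > T:          # stale entry: treat as a brand-new connection
        count = 0
    count += 1
    flows[(src, dst)] = (count, now)
    # After N packets we assume the flow has left slow start.
    return "red" if count > N else "droptail"

def service_rates(q1):
    """Bandwidth split derived from the drop-tail queue length q1 (bytes)."""
    if q1 == 0:
        c1 = 0.0
    elif q1 < MINTH1:
        c1 = C1_MIN
    else:
        c1 = min(C * q1 / MAXTH1, C1_MAX)
    return c1, C - c1           # (C1, C2)
```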


Fig. 3. The parallel virtual queue structure for active queue management.

IV. SIMULATION AND COMPARISON

In this section, we compare the RED+Tail scheme with the Adaptive RED on typical TCP performance metrics. For the Adaptive RED, we use the parameter settings suggested by Floyd et al. [12] (the α and β of the AIMD algorithm). Both schemes were implemented in the ns2 simulator. The network topology and scenario are as described in Section II. Table II lists the parameters for the RED+Tail policy and the Adaptive RED policy. Note that the virtual queues of the RED+Tail scheme


share the total physical buffer size, i.e., the packets in the drop-tail virtual queue will not be dropped unless the physical memory is full. The Adaptive RED is set in "gentle" mode, meaning that the dropping probability between (maxth2, 2·maxth2) is linear in (Pmax, 1).
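For reference, the "gentle" dropping curve just described is a piecewise-linear function of the average queue length. A minimal sketch (the function name is ours):

```python
def gentle_red_drop_prob(avg_q, minth, maxth, p_max):
    """RED drop probability in 'gentle' mode.

    Between minth and maxth the probability rises linearly to p_max;
    in gentle mode it then continues linearly from p_max to 1 over
    (maxth, 2*maxth) instead of jumping straight to 1.
    """
    if avg_q < minth:
        return 0.0
    if avg_q < maxth:
        return p_max * (avg_q - minth) / (maxth - minth)
    if avg_q < 2 * maxth:
        return p_max + (1.0 - p_max) * (avg_q - maxth) / maxth
    return 1.0
```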
TABLE II
EXPERIMENT SETTINGS

Virtual Queue    i = 1              i = 2       Adapt. RED
Buffer Size      160KB (shared by i = 1, 2)     160KB
minth_i          2KB                20KB        20KB
maxth_i          30KB               80KB        80KB
α                —                  0.01        0.01
β                —                  0.9         0.9

The performance of a connection is evaluated by its packet loss rate, delay and throughput. However, we are more concerned about packet loss rate and delay for the web (short TCP) and CBR (UDP) connections, and more concerned about throughput for FTP (long TCP). We replaced the Adaptive RED with the RED+Tail scheme at the router and repeated the experiment of Section II. For comparison, an experiment with the drop-tail policy was also conducted. The random seed of the simulator was fixed so that the processes of web requests and file sizes had the same sample paths in all experiments. Table III lists the performance metrics under RED+Tail, the Adaptive RED and the traditional drop-tail scheme, respectively. Figure 4 shows the queue lengths of the RED+Tail scheme, which demonstrates that the virtual queue length q2 is quite stable and stays in the desired region even after the web and CBR traffic begins to share the bandwidth at time t = 100 s. The actual dropping probability for the FTP traffic is reduced from 4.15% to 2.75%, at the cost of a longer queuing delay (184 ms, see Table III). This scheme prevents the over-reaction behavior of RED seen in the original Adaptive RED case and keeps the mean queue length q2 in the desired region (compare with Figure 1).
TABLE III
PERFORMANCE METRICS

Policy            Loss %    Delay (s)    Rate (KB/s)
RED+Tail:FTP      2.747     0.184        209.465
RED+Tail:WEB      1.278     0.114        144.455
RED+Tail:CBR      0.300     0.109        19.867
AdaptRED:FTP      4.149     0.143        217.531
AdaptRED:WEB      4.514     0.143        137.124
AdaptRED:CBR      3.950     0.141        19.140
DropTail:FTP      1.916     0.349        215.243
DropTail:WEB      4.234     0.340        138.983
DropTail:CBR      1.550     0.342        19.601

Fig. 4. Queue lengths of the RED+Tail virtual queues: 10 FTPs starting at t = 0 go to virtual queue 2, and 800 WEBs + 1 CBR starting at t = 100 s go to virtual queue 1.
Fig. 5. Packet losses (packets/sec.) of Adaptive RED and RED+Tail.

Figure 5 shows the packet loss rates of the FTP, web and CBR connections under the Adaptive RED and RED+Tail schemes. We see that RED+Tail provides a great improvement in packet loss for web and CBR connections. The web packet loss rate is reduced from 4.51% to 1.28%, and the CBR packet loss rate is reduced from 3.95% to 0.30%. Figure 6 compares the packet delays. The mean queuing delay of web and CBR packets in the RED+Tail scheme is shortened at the cost of the FTP packet delay. The web and CBR packet delay depends on how much bandwidth is allocated to the drop-tail queue. One can satisfy a mean delay requirement for the web and CBR connections by properly adjusting the parameter maxth1. For example, the maxth1 of the RED+Tail scheme is set to 30 Kbytes so that the estimated mean delay at the drop-tail queue is about 80 ms (30 Kbytes served at the 3 Mbps link rate takes 80 ms). However, the service rate C1 reaches its maximum C1max when q1 > maxth1, so the actual mean delay is larger than this estimate. In our simulation the mean delay of the web and CBR traffic is around 110 ms (refer to the analysis in Section VI-A).

Fig. 6. Packet delays (sec.) of Adaptive RED and RED+Tail.

Fig. 7. Throughputs (KBytes/sec.) of Adaptive RED and RED+Tail.

Fig. 8. Accumulated throughputs of Adaptive RED and RED+Tail.

Fig. 9. Small file delivery delay: mean and standard deviation.

Figures 7 and 8 show the throughputs of the FTP, web and CBR traffic. Both schemes achieve 100% utilization of the link bandwidth. Due to the bandwidth allocation scheme in the RED+Tail scheme, FTP has a slightly smaller throughput. However, the saved bandwidth allows web bursts to pass through the bottleneck link faster. Figure 9 compares the small web file delivery time under the different schemes. Since the RED+Tail policy has a small packet loss rate, its delivery time is almost equal to that of the loss-free case in Table I. On the other hand, the Adaptive RED has a loss rate of 4.5%, and its delivery time is three times longer. Note that the drop-tail queue has a similar loss rate (4.2%) to Adaptive RED for web packets. However, the file delivery time of the drop-tail scheme is about 2.5 times longer than Adaptive RED's. This is mainly due to the long queuing delay (0.340 s) of the drop-tail policy.

V. DYNAMIC THRESHOLDS FOR ADAPTIVE RED AND FAIRNESS

The Adaptive RED relies on adjusting the dropping probability to control the flow rates of TCP connections and keep the average queue length in a desired region.

However, for those applications with large file sizes, the goodput is more important than the packet delay, and the packet loss rate is a key factor in determining the connection goodput. Since the minimum and maximum thresholds of the Adaptive RED scheme are fixed, the dropping probability of Adaptive RED is high when congestion happens. This high dropping probability causes frequent retransmissions and low goodput. To maintain a low packet loss rate, we propose the following modified Adaptive RED scheme for the Adaptive RED virtual queue, where minth2 and maxth2 are dynamically adjusted while D = maxth2 − minth2 is kept constant (a sketch is given after the rule below):
• Pick 0 < δ < 1 (δ = 0.05 in this paper).
• If P̄d > PU, then minth2 := minth2(1 + δ), maxth2 := minth2 + D.
• If P̄d < PL, then minth2 := minth2(1 − δ), maxth2 := minth2 + D.
Here P̄d is the average dropping probability obtained by the EWMA algorithm, and (PL, PU) is the desired region of the dropping probability.
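The update rule above translates directly into code. A minimal sketch, with δ, PL and PU set to the values used in this section (the function and variable names are ours):

```python
def update_thresholds(p_avg, minth2, D, delta=0.05, p_low=0.01, p_high=0.02):
    """One update step of the modified Adaptive RED thresholds.

    p_avg : EWMA-averaged dropping probability (P̄d in the text)
    minth2: current minimum threshold of the RED virtual queue
    D     : fixed distance maxth2 - minth2
    Returns the new (minth2, maxth2) pair.
    """
    if p_avg > p_high:          # too many drops: let the queue grow longer
        minth2 *= (1.0 + delta)
    elif p_avg < p_low:         # very few drops: allow a shorter queue
        minth2 *= (1.0 - delta)
    return minth2, minth2 + D
```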

Note that if we set PU < Pmax, the floating thresholds do not change the current slope of the dropping probability function dramatically, since the distance between the thresholds is fixed. The rationale behind the above scheme is that, by increasing the thresholds (when P̄d > PU), the queuing delay is increased and the flow rates are reduced. Since the average TCP throughput is proportional to 1/(RTT·√Pd), the required rate reduction is achieved without raising the packet loss rate. Figures 10 and 11 compare the Adaptive RED schemes with fixed and dynamic thresholds, respectively. There are 20 persistent FTP servers sharing a 6 Mbps bottleneck link. Another 20 FTP servers arrive at time 100 s and leave at time 300 s. It can be seen that the fixed threshold scheme has a small queue length variation and a large dropping probability (0.05). In contrast, the dynamic threshold scheme has a much lower average dropping probability (0.014 with PL = 0.01, PU = 0.02), but a higher packet delay. Note that both schemes achieve 100% link utilization, so that each FTP connection has the same throughput. However, with a much lower packet loss rate, the dynamic threshold scheme achieves a higher goodput. This dynamic threshold scheme allows us to consider the trade-off between packet loss and queuing delay in an Adaptive RED queue. Dynamically varying the thresholds may also have implications for achieving throughput fairness among multiple Adaptive RED queues. Since the flow rates of TCP connections are determined by the corresponding dropping probabilities and queuing delays at different queues, connections with shorter link propagation delays and higher throughputs can be suppressed by raising the queue length thresholds at the router.
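To see the mechanism numerically, recall the square-root throughput model: rate ∝ 1/(RTT·√Pd). The toy calculation below (constants arbitrary, our own illustration) shows that a 1.5× longer RTT yields the same rate reduction as a 2.25× higher loss rate:

```python
import math

def tcp_rate(rtt, p, k=1.0):
    # Square-root throughput model: rate = k / (rtt * sqrt(p)).
    return k / (rtt * math.sqrt(p))

base = tcp_rate(rtt=0.100, p=0.01)
print(base / tcp_rate(rtt=0.150, p=0.01))    # 1.5x rate cut via a longer RTT
print(base / tcp_rate(rtt=0.100, p=0.0225))  # same 1.5x cut via 2.25x the loss
```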
Fig. 10. Average queue length with fixed and dynamic thresholds: 20 FTPs starting at t = 0, and another 20 FTPs starting at t = 100 s and leaving at t = 300 s, C = 6 Mbps, dk = 64 ms.
Fig. 11. Dropping probability with fixed and dynamic thresholds: 20 FTPs starting at t = 0, and another 20 FTPs starting at t = 100 s and leaving at t = 300 s, C = 6 Mbps, dk = 64 ms (Inst. P: instantaneous dropping probability; Avg. P: EWMA average of Inst. P).

TABLE IV
PERFORMANCE METRICS: RED+TAIL WITH DYNAMIC THRESHOLD SCHEME

Policy             Loss %    Delay (s)    Rate (KB/s)
Dyn. Thres.:FTP    0.899     0.318        209.455
Dyn. Thres.:WEB    2.306     0.093        144.505
Dyn. Thres.:CBR    0.519     0.091        19.827
The ns2 simulation of Section IV is conducted again, with the modified Adaptive RED serving the second virtual queue. The parameters used (except minth2 and maxth2, which are dynamically adjusted) are as listed in Table II. The desired region of dropping probability for the Adaptive RED queue is set to be (PL, PU) = (0.005, 0.010). Figure 12 shows the lengths of both virtual queues and the dropping probability at the Adaptive RED queue. The dropping probability stays in the desired region most of the time, as expected. Note that the flow rates of the FTP connections are reduced without dramatically increasing the queue length q2(t) or the dropping probability when the bursty web traffic arrives at t = 100 s. This is because the available bandwidth for the FTP connections is reduced and the FTP senders see a longer round-trip time (longer packet delay at q2, see Figure 14). Figures 13 and 14 show the packet losses and delays for the FTP, web and CBR connections, respectively. Table IV collects the corresponding performance metrics. Compared to Figures 5 and 6 and Table III, the packet loss rate of the FTP connections is reduced from 2.747% to 0.988%, at the cost of packet delay (increased from 0.184 s to 0.318 s). Since the average queue length at the Adaptive RED queue is about 80 KBytes, instead of 60 KBytes as in the fixed threshold scheme, web and UDP packets see a smaller shared buffer at the drop-tail queue and experience a higher loss rate, increased from 1.278% to 2.306% and from 0.300% to 0.519%, respectively. However, the average delays of the web and UDP packets are slightly shorter, owing to the smaller shared buffer space at the drop-tail queue. The delay and loss at the drop-tail queue can be improved by increasing the buffer size
and using a more aggressive bandwidth allocation policy (smaller maxth1). Finally, Tables III and IV also show that the throughputs for the fixed threshold scheme and the dynamic threshold scheme are almost the same.
Fig. 12. Dynamic threshold scheme: Virtual queue lengths of RED+Tail and dropping probability of the Adaptive RED queue, 10 FTPs starting at t = 0 and 800 WEBs + 1 CBR starting at t = 100 s.

Fig. 13. Dynamic threshold scheme: Packet losses (packets/sec.) of RED+Tail.

Fig. 14. Dynamic threshold scheme: Packet delays (sec.) of RED+Tail.

Fig. 15. Dynamic bandwidth allocation at the drop-tail queue: C1(t) = max(C1min, min(C·q1(t)/maxth1, C1max)).

VI. PERFORMANCE ANALYSIS FOR THE PARALLEL VIRTUAL QUEUES

A. Drop-Tail Queue with Adaptive Service Rate

First, we investigate the queuing delay of the CBR and web traffic at the drop-tail queue. Note that the service rate C1(t) of this queue is a function of minth1, maxth1 and the current length of the drop-tail queue q1(t) (Figure 15). Without loss of generality, we let C1min = 0 and C1max = C in the analysis. When a packet enters the drop-tail queue at time t, it sees an instant queue length q1(t) and a service rate
    C1(t) = max(C1min, min(q1(t)·C/maxth1, C1max)).    (1)

Since C1(t) ≤ q1(t)·C/maxth1, maxth1/C is a lower bound on the average queuing delay at the drop-tail queue. We redo the experiment of Section IV with the parameters listed in Table II, except that maxth1 is varied from 10 KB to 80 KB. Figure 16 shows the mean packet delay of the CBR and web traffic at the drop-tail queue, and that of the FTP traffic at the Adaptive RED queue, as maxth1 is varied. The simple lower bound maxth1/C on the average queuing delay is seen to provide a good approximation. Note that the packet delay at the Adaptive RED queue with fixed thresholds is almost constant even as maxth1 decreases. That is because the Adaptive RED queue has fixed thresholds (minth2, maxth2) and the average queue length of the RED queue is always around q̄2 = (minth2 + maxth2)/2. Since the dynamic bandwidth allocation policy does not dramatically change the long-term average C̄2 of the bandwidth allocation, the mean delay at the Adaptive RED queue is around q̄2/C̄2. For some real time applications such as video conferencing and voice, small delay jitter is very important for the connection quality. Figure 16 also shows that the drop-tail queue has a very small delay variance. Note that the delay variance at the Adaptive RED queue is slightly increased when a smaller value of maxth1 at the

drop-tail queue is applied. According to these results, the mean packet delay requirements at both queues can be satisfied by properly choosing the values of (minth1, maxth1) and (minth2, maxth2).
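The lower bound maxth1/C is easy to evaluate for the sweep in Figure 16. A quick check using the 3 Mbps link rate of the Section IV experiment:

```python
C = 3.0e6  # bottleneck bandwidth, bits/s
for maxth1_kb in range(10, 90, 10):
    bound_ms = maxth1_kb * 8_000 / C * 1_000   # maxth1/C in milliseconds
    print(f"maxth1 = {maxth1_kb:2d} KB -> delay lower bound ~ {bound_ms:.0f} ms")
# e.g. maxth1 = 30 KB gives 80 ms, the estimate quoted in Section IV.
```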
Fig. 16. Mean delays (sec.) of CBR and WEB packets at the drop-tail queue, and mean delay of FTP packets at the Adaptive RED queue (with fixed thresholds), with maxth1 = 10, 20, ..., 80 (KBytes).

B. Adaptive RED Queue with Dynamic Thresholds

In Section V we proposed the modified Adaptive RED scheme with dynamic thresholds in the parallel queue structure for controlling the flow rate of non-real time applications. The maximum threshold maxth2 and minimum threshold minth2 are changed dynamically to keep the packet dropping probability Pd within a desired small region (PL, PU), at the cost of packet delay variation. In this subsection we analyze issues related to the stability of this virtual queue. For ease of analysis, it is assumed that the dropping probability Pd of the Adaptive RED at the bottleneck router is fixed, so that the average flow rate of each TCP connection can be approximated by a simple function of its round-trip time. Note that this assumption is not very restrictive, considering that the interval (PL, PU) is small.

Consider N persistent TCP flows. Define T_k^n as the average flow rate of the k-th TCP connection during time slot n. Let d_k be the link round-trip propagation delay of connection k. At the beginning of time slot n, the k-th connection sees a round-trip time R_k^n, equal to the sum of the link propagation delay, the average queuing delay in the forward direction q^n/C, and the average queuing delay in the backward direction q_b^n/C:

    R_k^n = d_k + q^n/C + q_b^n/C,    (2)

where C is the link bandwidth, and q^n and q_b^n are the forward and backward queue lengths at the beginning of time slot n, respectively. We assume that congestion only happens in the forward direction, so that the queuing delay q_b^n/C in the backward direction is a constant. Hence we can write R_k^n = d_k + q^n/C, where d_k now absorbs the constant backward delay q_b^n/C. Based on the assumption of a fixed dropping probability at the router, each TCP connection experiences a fixed packet loss rate Pd, and the corresponding average congestion window size is assumed to be a constant W̄. Hence, the average flow rate T_k^n of the k-th TCP connection at slot n is

    T_k^n = W̄/R_k^n + E_k^n,    (3)

where E_k^n is a white Gaussian process with zero mean and variance σ², modeling the flow rate perturbation of the k-th connection at slot n. Given the arrival rate of each TCP connection, the dynamics of the queue length q^n follow the Lindley equation:

    q^{n+1} = min{B, max[0, q^n + (Σ_{k=1}^N T_k^n − C)S]},    (4)

where B is the buffer size and S is the duration of one time slot. Since the queue length of the Adaptive RED queue mostly stays in a region far from the boundaries, we first ignore the max and min operations in (4) and obtain a simplified nonlinear dynamic system:

    q^{n+1} = f(q^n) + ξ^n,    (5)

where

    f(q^n) = q^n + S{Σ_{k=1}^N [W̄C/(q^n + d_k C)] − C},    (6)

and

    ξ^n = S Σ_{k=1}^N E_k^n.    (7)

To avoid the trivial case q ≡ 0, we assume that the sum of the possible peak rates of all connections is greater than the link bandwidth at the bottleneck router:

    Σ_{k=1}^N W̄/d_k ≥ C.    (8)

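The model (5)–(7) is straightforward to simulate directly. The sketch below reproduces the kind of trajectory discussed next (Figure 17) for the homogeneous case with N = 20, using the parameters and the measured noise level quoted in this section; the buffer is assumed large, so the min{B, ·} clip is dropped:

```python
import random

# Parameters from Section VI-B (homogeneous case, N = 20).
N, C, d = 20, 6.0e6, 0.064     # flows, link rate (bits/s), propagation delay (s)
W = 6.02e4                     # average congestion window (bits)
sigma = 77021.0                # measured per-flow rate deviation (bits/s)
S = N * W / C                  # update interval = RTT at equilibrium (Prop. 6.2)

def f(q):
    # Deterministic map (6) with homogeneous delays d_k = d.
    return q + S * (N * W * C / (q + d * C) - C)

q = 0.0
for n in range(400):
    xi = S * sum(random.gauss(0.0, sigma) for _ in range(N))  # noise term (7)
    q = max(0.0, f(q) + xi)    # Lindley recursion (5), large buffer assumed
print(f"final queue ~ {q/8/1000:.0f} KB; equilibrium = {(N*W - d*C)/8/1000:.0f} KB")
```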
Figure 17 shows the queue length dynamics (and the throughput of a persistent TCP connection) based on the model (5), where the flow rate deviations σ = 77021 and 128490 bits/s for N = 20 and N = 40, respectively, are measured from the simulation in Section V. For both N = 20 and N = 40, Figure 17 shows steady state behavior consistent with the simulation results in Figure 10. The mapping f(·) is plotted in Figure 18 for N = 20 and N = 40.

Fig. 17. Queue length and TCP throughput (of a single connection) with C = 6 Mbps, dk = 64 ms, W̄ = 6.02×10^4 bits. Compare with the simulation in Fig. 10.

Fig. 18. Mapping functions and equilibrium points for N = 20 and 40 with S = RTT.

We first analyze the stability of the equilibrium of the model (5) when there is no flow disturbance, i.e., E_k^n = 0. An equilibrium q_e of q^{n+1} = f(q^n) should satisfy

    Σ_{k=1}^N W̄/(d_k + q_e/C) = C.    (9)

Since Σ_{k=1}^N W̄/d_k ≥ C by assumption, (9) has a unique solution q_e in [0, ∞); q_e is located at the intersection of the graph of f with the 45° line (see Figure 18). It is well known that q_e is locally asymptotically stable if |f'(q_e)| < 1. In the following we give conditions for q_e to be globally asymptotically stable.

Proposition 6.1: If the rate update interval S satisfies

    S < 2C / (W̄ Σ_{k=1}^N d_k^{−2}),    (10)

then the equilibrium q_e is globally asymptotically stable. Furthermore, |q^n − q_e| < γ^n |q^0 − q_e| for some γ ∈ (0, 1) dependent on q^0.

Proof: First we observe that the function f is convex, since

    f''(q) = Σ_{k=1}^N 2S W̄C/(q + d_k C)³ > 0, ∀q ∈ [0, ∞).    (11)

For any B_0 such that B_0 > q_e and

    B_0 ≥ f(0) = (Σ_{k=1}^N W̄/d_k − C)S,    (12)

one can verify that f maps [0, B_0] to [0, B_0], due to the convexity of f and

    f'(q) = 1 − S Σ_{k=1}^N W̄C/(q + d_k C)² < 1, ∀q ∈ [0, ∞).    (13)

When restricted to [0, B_0], f'(q) ≤ γ_1 with γ_1 ∈ (0, 1). If (10) is satisfied, then f'(q) > −1, ∀q ∈ [0, ∞), and f'(q) ≥ −γ_2, ∀q ∈ [0, B_0], with γ_2 ∈ (0, 1). Hence |f'(q)| ≤ γ = max(γ_1, γ_2) < 1, ∀q ∈ [0, B_0], which implies that f is a contraction mapping on [0, B_0]. By the Contraction Mapping Principle [15],

    |q^n − q_e| < γ^n |q^0 − q_e|, if q^0 ∈ [0, B_0].    (14)

Since B_0 can be arbitrarily large (as long as B_0 < ∞), q_e is globally asymptotically stable. Note that the contraction constant γ depends on B_0 and thus on q^0.

From Proposition 6.1, when the rate update is frequent enough, the equilibrium is asymptotically stable (the equilibrium itself does not depend on S). Another sufficient condition for asymptotic stability is the following:

Proposition 6.2: If f'(q_e) ≥ 0, then q_e is globally asymptotically stable.

Proof: As shown in the proof of Proposition 6.1, f is convex. If f'(q_e) ≥ 0, graphical analysis reveals that

    |q^{n+1} − q_e| ≤ |q^n − q_e|,

where the equality holds if and only if q^n = q_e. The claim thus follows.

For the homogeneous case d_k = d, we have q_e = N W̄ − dC, and the condition f'(q_e) ≥ 0 is equivalent to S ≤ N W̄/C = q_e/C + d. In other words, q_e is asymptotically stable if the rate update interval S is no larger than the round-trip time (RTT). Figure 19 shows the mapping f and the equilibrium q_e for different S. Figure 20 shows the queue length dynamics (noise included) for S = RTT and S = 2RTT, respectively. We can see that in the case S = RTT the queue length stays around q_e with small variation, while in the case S = 2RTT the queue length dynamics are much more chaotic.

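The homogeneous-case formulas are easy to check against Figures 17 and 18. A quick numeric verification of the equilibrium and the two stability conditions, using the parameters of this section with N = 40:

```python
N, C, d, W = 40, 6.0e6, 0.064, 6.02e4

q_e = N * W - d * C                   # equilibrium (bits), homogeneous case
rtt = q_e / C + d                     # equals N*W/C
S_bound = 2 * C / (W * N / d**2)      # condition (10): S < 2C/(W * sum d_k^-2)
print(f"q_e = {q_e/8/1000:.0f} KB")   # ~253 KB, matching Figure 17 for N = 40
print(f"RTT = {rtt*1000:.0f} ms (Prop. 6.2 requires S <= RTT)")
# (10) is a conservative worst-case (q = 0) sufficient condition,
# hence much stricter than the Prop. 6.2 bound:
print(f"(10) requires S < {S_bound*1000:.1f} ms")
```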
For the heterogeneous case, a similar sufficient condition on the update interval S for stability can be derived along the same lines.
Fig. 19. Mapping function and equilibrium point for N = 40 with S = 0.5RTT, 1RTT and 2RTT.

Fig. 20. Queue length with N = 40, S = RTT and 2RTT.

Next we consider the Lindley equation with the random perturbation ξ^n = S Σ_{k=1}^N E_k^n:

    q^{n+1} = g(q^n, ξ^n) = min{B, max[0, f(q^n) + ξ^n]}.    (15)

Note that since {E_k^n} is white and stationary, so is {ξ^n}. It turns out that stability of the equilibrium of the deterministic system q^{n+1} = f(q^n) is closely related to stochastic stability of the system (15).

Proposition 6.3: The stochastic system (15) admits an invariant probability measure µ* for the queue length q^n. Furthermore, if the condition (10) of Proposition 6.1 is satisfied, this system is weakly asymptotically stable, i.e., the queue length distribution µ^n for q^n converges to µ* weakly.

Sketch of Proof: Since f is continuous and {ξ^n} is independently and identically distributed, the system (15) is a regular stochastic dynamic system [16]. Since [0, B] is compact, the system admits an invariant probability measure µ* by the Krylov-Bogolubov Theorem [16]. When condition (10) is satisfied, g is a contraction mapping with respect to its first argument, i.e.,

    |g(x, ξ) − g(y, ξ)| < γ|x − y|, ∀x, y ∈ [0, B], ∀ξ,    (16)

where γ ∈ (0, 1). Hence the system is weakly asymptotically stable by Theorem 12.6.1 of [16].

The invariant probability measure µ* has probability masses at q = 0 and q = B, and has a probability density on (0, B). An approximation to µ* can be obtained by numerically advancing the probability distribution µ^n for the queue length q^n. We have discretized the queue length and consequently obtained a Markov chain for the dynamics of the queue length distribution. Let the packet size have a fixed length L (bits), let z^n := ceil(q^n/L) be the number of packets in the queue at time n, and let π^n = [Pr(z^n = 0), ..., Pr(z^n = B)] denote the corresponding probability vector. We have

    π^{n+1} = π^n T,    (17)
    π* = π* T,    (18)

where π* = lim_{n→∞} π^n is the steady state distribution and T(i, j) = Pr[z^{n+1} = j | z^n = i] is the corresponding transition matrix of the Markov chain. The conditional probability Pr[z^{n+1} = j | z^n = i] is obtained as

    Pr[j ≤ (min{B, max[0, f(iL) + ξ]})/L < j + 1].    (19)

On the other hand, when the buffer size B is far greater than the equilibrium queue length and the perturbation magnitude is small, the transformation g(q, ξ) can be linearized around the equilibrium point q_e. Let Q^n = q^n − q_e. Then

    Q^{n+1} = f'(q_e)Q^n + ξ^n.    (20)

Since {ξ^n} is a white Gaussian process with zero mean and variance NS²σ², {Q^n} is a Gaussian process with zero mean and normalized variance

    Var[Q^n/S] = Nσ² / (1 − |f'(q_e)|²).    (21)

From (21), the normalized queue length variation is minimal if f'(q_e) = 0, which corresponds to S = RTT for the homogeneous case. Figure 21 shows the queue length distributions obtained through empirical estimation from the ns2 simulation, numerical computation based on (17), and the linear approximation based on (21), respectively. The three distributions agree well, which verifies that our nonlinear model (15) captures the queue length dynamics under the Adaptive RED scheme with dynamic thresholds.

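The numerical procedure (17)–(19), used to produce the computed curves in Figure 21, amounts to building the transition matrix from the Gaussian perturbation and iterating the distribution vector. A compact sketch, assuming the homogeneous setup of this section (the discretization and iteration counts are illustrative choices of ours):

```python
import numpy as np
from math import erf, sqrt

# Homogeneous model of Section VI-B, N = 20, S = RTT.
N, C, d, W, sigma = 20, 6.0e6, 0.064, 6.02e4, 77021.0
S = N * W / C                   # update interval = RTT at equilibrium
L = 8000.0                      # packet size in bits (1 KB)
B = 400                         # buffer size in packets

def f(q):                       # deterministic map (6), d_k = d
    return q + S * (N * W * C / (q + d * C) - C)

std = S * sqrt(N) * sigma       # std of the slot noise xi^n from (7)
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF

# Transition probabilities (19): z^{n+1} = j iff f(iL) + xi lands in
# [jL, (j+1)L); the max[0,.] and min{B,.} clips fold the tail mass into
# the boundary states j = 0 and j = B.
T = np.zeros((B + 1, B + 1))
for i in range(B + 1):
    m = f(i * L)
    for j in range(B + 1):
        hi = Phi(((j + 1) * L - m) / std) if j < B else 1.0
        lo = Phi((j * L - m) / std) if j > 0 else 0.0
        T[i, j] = hi - lo

pi = np.full(B + 1, 1.0 / (B + 1))   # start from a uniform distribution
for _ in range(2000):                # iterate (17) toward the fixed point (18)
    pi = pi @ T
# pi now approximates the invariant distribution pi* of (18).
```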
Fig. 21. Steady state queue length distributions for N = 20 and 40, S = RTT.

VII. CONCLUSIONS

In this paper we have first demonstrated the vulnerability of the Adaptive RED scheme to bursty web traffic, and then proposed a parallel virtual queue structure for active queue management at the router. A simple detection algorithm is employed to separate the short-life and long-life TCP connections into different virtual queues. The packet loss rate and mean delay for short-life traffic can be greatly reduced by dynamic bandwidth allocation with this parallel queue structure. This scheme combines the advantages of the drop-tail and Adaptive RED policies. The simulation results in this study show that the scheme achieves a shorter mean delay for real time applications, keeps a high throughput for the best effort connections, and greatly reduces the packet loss rate in both queues. The parallel virtual queue structure also offers more degrees of freedom for AQM, due to its flexibility in accommodating variants of the Adaptive RED scheme and different dynamic bandwidth allocation algorithms. We have explored a modified Adaptive RED scheme with sliding queue length thresholds. This scheme is able to maintain the dropping probability within a small interval and improve the goodput of non-real time connections. The queue length variation under this policy has been analyzed and conditions for its stability have been given. The dynamic threshold Adaptive RED might also be useful for achieving throughput fairness among multiple RED queues. As to the dynamic bandwidth allocation policy for the drop-tail queue, we only used the current virtual queue length information. However, it is well known that web traffic is strongly correlated and has a long range dependency property. Based on observations of the "recent past" traffic, the future bandwidth demand of the web traffic is predictable. In future work, optimal bandwidth allocation based on prediction of the congestion level will be explored.

REFERENCES

[1] W. R. Stevens, TCP/IP Illustrated (Volume 1), Addison-Wesley, 1994.
[2] S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397–413, 1993.
[3] M. Christiansen, K. Jeffay, D. Ott, and F. D. Smith, "Tuning RED for web traffic," in Proceedings of SIGCOMM 00, 2000, pp. 139–150.
[4] V. Misra, W. Gong, and D. F. Towsley, "Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED," in Proceedings of SIGCOMM 00, 2000, pp. 151–160.
[5] P. Ranjan, E. Abed, and R. La, "Nonlinear instabilities in TCP-RED," in Proceedings of INFOCOM 02, 2002, pp. 249–258.
[6] T. J. Ott, T. V. Lakshman, and L. H. Wong, "SRED: Stabilized RED," in Proceedings of INFOCOM 99, 1999, pp. 1346–1355.
[7] D. Lin and R. Morris, "Dynamics of random early detection," in Proceedings of SIGCOMM 97, Cannes, France, September 1997, pp. 127–137.
[8] D. Clark and W. Feng, "Explicit allocation of best-effort packet delivery service," IEEE/ACM Transactions on Networking, vol. 6, no. 4, pp. 362–373, August 1998.
[9] S. Athuraliya, S. Low, V. Li, and Q. Yin, "REM: Active queue management," IEEE Network, vol. 15, no. 3, pp. 48–53, 2001.
[10] W. Feng, D. Kandlur, D. Saha, and K. G. Shin, "BLUE: A new class of active queue management algorithms," Tech. Rep. U. Michigan EECS CSE-TR-387-99, 1999.
[11] W. Feng, D. D. Kandlur, D. Saha, and K. G. Shin, "A self-configuring RED gateway," in Proceedings of INFOCOM 99, 1999, pp. 1320–1328.
[12] S. Floyd, R. Gummadi, and S. Shenker, "Adaptive RED: An algorithm for increasing the robustness of RED," available at http://www.icir.org/floyd/papers/adaptiveRed.pdf, Aug. 2001.
[13] V. Jacobson, "Congestion avoidance and control," in Proceedings of SIGCOMM 88, 1988, pp. 314–329.
[14] M. Taqqu, W. Willinger, and R. Sherman, "Proof of a fundamental result in self-similar traffic modeling," Comput. Comm. Rev., vol. 26, pp. 5–23, 1997.
[15] D. R. Smart, Fixed Point Theorems, Cambridge University Press, London, New York, 1974.
[16] A. Lasota and M. C. Mackey, Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics, Springer-Verlag, second edition, 1994.
