The debate between Loss-based Congestion Avoidance (LCA) and Delay-based Congestion Avoidance (DCA) is almost as old as TCP congestion control itself. Jacobson's original TCP congestion control [1] is the canonical example of LCA: packet losses indicate network congestion, so a TCP transfer should decrease its window after a packet loss to reduce the load on the network. The obvious problem with LCA is that a TCP sender keeps increasing its window until it causes buffer overflows. These "self-induced" packet drops increase the loss rate, decrease throughput, and cause significant delay variations (at least with Drop-Tail queues). To address this issue, DCA schemes attempt to control the send-window of a TCP transfer based on Round-Trip Time (RTT) measurements. The basic idea is that if the send-window is large enough to saturate the available bandwidth, the transfer will cause increasing queueing at the tight link of the network path, and thus increasing RTTs. The sender should therefore decrease the transfer's window when the RTTs start increasing. There are several variations of DCA schemes. Starting with Jain's initial proposal in 1989 [3] and Mitra's fundamental work [4], the networking research community has considered several ways to modify TCP, including TCP Tri-S [5], TCP Vegas [6], TCP BFA [7], and most recently TCP FAST [8]. Interest in DCA algorithms has re-emerged in the last couple of years, as it becomes increasingly clear that Jacobson's LCA-based TCP cannot efficiently use high-bandwidth, long-distance network paths. Recently, however, measurement studies have shown that there is little correlation between increased delays (or RTTs) and congestive losses [9], [10], [11]. This experimental observation raises major doubts about whether DCA algorithms would be effective in practice, as their main assumption is that RTT measurements can be used to predict and avoid network congestion [12].
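To make the DCA idea concrete, the following is a minimal sketch of a Vegas-style window update: the sender estimates the backlog it has queued at the tight link from the gap between its base (uncongested) RTT and its current RTT, and backs off before losses occur. All names and the `alpha`/`beta` thresholds are illustrative assumptions, not the exact algorithm of any cited scheme.

```python
def dca_window_update(cwnd, base_rtt, current_rtt, alpha=2.0, beta=4.0):
    """Vegas-style delay-based window update (illustrative sketch).

    expected = cwnd / base_rtt     throughput with no queueing
    actual   = cwnd / current_rtt  measured throughput
    diff     = (expected - actual) * base_rtt
               ~ packets this transfer has queued at the tight link
    """
    expected = cwnd / base_rtt
    actual = cwnd / current_rtt
    diff = (expected - actual) * base_rtt  # estimated backlog, in packets

    if diff < alpha:        # little queueing: probe for more bandwidth
        return cwnd + 1
    elif diff > beta:       # RTTs rising: back off before a loss occurs
        return cwnd - 1
    else:                   # backlog within target range: hold steady
        return cwnd
```

Note that the whole mechanism hinges on `current_rtt` actually reflecting queueing caused by this transfer; the weak delay-loss correlations reported in [9], [10], [11] are precisely what undermines this assumption.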
Our objective in this short note is to suggest possible reasons for the weak correlations between delays and losses, and to identify conditions under which DCA schemes can fail to provide robust congestion control.
[1] Fernando Paganini et al., "FAST Kernel: Background Theory and Experimental Results," 2003.
[2] Debasis Mitra, "Asymptotically optimal design of congestion control for high speed data networks," IEEE Trans. Commun., 1992.
[3] Manish Jain et al., "End-to-end available bandwidth: measurement methodology, dynamics, and relation with TCP throughput," SIGCOMM 2002.
[4] Raj Jain et al., "A delay-based approach for congestion avoidance in interconnected heterogeneous computer networks," ACM SIGCOMM CCR, 1989.
[5] Injong Rhee et al., "Delay-based congestion avoidance for TCP," IEEE/ACM Trans. Netw., 2003.
[6] V. Jacobson et al., "Congestion avoidance and control," ACM SIGCOMM CCR, 1988.
[7] Jon Crowcroft et al., "Eliminating periodic packet losses in the 4.3-Tahoe BSD TCP congestion control algorithm," ACM SIGCOMM CCR, 1992.
[8] Larry L. Peterson et al., "TCP Vegas: End to End Congestion Avoidance on a Global Internet," IEEE J. Sel. Areas Commun., 1995.
[9] Thomas R. Gross et al., "TCP Vegas revisited," Proc. IEEE INFOCOM 2000.
[10] Amr Awadallah et al., "TCP-BFA: Buffer Fill Avoidance," HPN, 1998.
[11] Darryl Veitch et al., "Understanding end-to-end Internet traffic dynamics," IEEE GLOBECOM 1998.
[12] Nitin H. Vaidya et al., "Is the round-trip time correlated with the number of packets in flight?," IMC '03, 2003.