Computer Science / Computer Networks and Communications

Network Traffic and Congestion Control

Description

This cluster of papers focuses on various aspects of congestion control in computer networks, including TCP performance, active queue management, bandwidth estimation, internet topology, multicast routing, delay analysis, wireless networks, and quality of service (QoS) routing.

Keywords

Congestion Control; TCP; Active Queue Management; Network Performance; Bandwidth Estimation; Internet Topology; Multicast Routing; Delay Analysis; Wireless Networks; QoS Routing

Self-Similar Network Traffic: An Overview (K. Park & W. Willinger). Wavelets for the Analysis, Estimation, and Synthesis of Scaling Data (P. Abry, et al.). Simulations with Heavy-Tailed Workloads (M. Crovella & L. Lipsky). Queueing Behavior Under Fractional Brownian Traffic (I. Norros). Heavy Load Queueing Analysis with LRD On/Off Sources (F. Brichet, et al.). The Single Server Queue: Heavy Tails and Heavy Traffic (O. Boxma & J. Cohen). Fluid Queues, On/Off Processes, and Teletraffic Modeling with Highly Variable and Correlated Inputs (S. Resnick & G. Samorodnitsky). Bounds on the Buffer Occupancy Probability with Self-Similar Input Traffic (N. Likhanov). Buffer Asymptotics for M/G/∞ Input Processes (A. Makowski & M. Parulekar). Asymptotic Analysis of Queues with Subexponential Arrival Processes (P. Jelenković). Traffic and Queueing from an Unbounded Set of Independent Memoryless On/Off Sources (P. Jacquet). Long-Range Dependence and Queueing Effects for VBR Video (D. Heyman & T. Lakshman). Analysis of Transient Loss Performance Impact of Long-Range Dependence in Network Traffic (G.-L. Li & V. Li). The Protocol Stack and Its Modulating Effect on Self-Similar Traffic (K. Park, et al.). Characteristics of TCP Connection Arrivals (A. Feldmann). Engineering for Quality of Service (J. Roberts). Network Design and Control Using On/Off and Multilevel Source Traffic Models with Heavy-Tailed Distributions (N. Duffield & W. Whitt). Congestion Control for Self-Similar Network Traffic (T. Tuan & K. Park). Quality of Service Provisioning for Long-Range-Dependent Real-Time Traffic (A. Adas & A. Mukherjee). Toward an Improved Understanding of Network Traffic Dynamics (R. Riedi & W. Willinger). Future Directions and Open Problems in Performance Evaluation and Control of Self-Similar Network Traffic (K. Park). Index.
This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment generation methods.
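The window dynamics the RFC defines can be illustrated with a minimal sketch. The code below is an illustrative simplification, not the RFC's normative behavior; the variable names (cwnd, ssthresh, MSS) follow the document's terminology, while the event hooks and constants are assumptions made for the example.

```python
MSS = 1460  # sender maximum segment size in bytes (assumed value)

class TcpCongestionState:
    """Illustrative sketch of RFC 5681-style window management (simplified)."""

    def __init__(self):
        self.cwnd = 2 * MSS        # initial window (the RFC bounds this in bytes)
        self.ssthresh = 64 * 1024  # arbitrarily high initial slow-start threshold
        self.dup_acks = 0

    def on_new_ack(self, bytes_acked):
        self.dup_acks = 0
        if self.cwnd < self.ssthresh:
            # slow start: roughly doubles cwnd per round-trip time
            self.cwnd += min(bytes_acked, MSS)
        else:
            # congestion avoidance: roughly one MSS of growth per round-trip time
            self.cwnd += MSS * MSS // self.cwnd

    def on_duplicate_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:
            # fast retransmit / fast recovery: halve the window instead of
            # collapsing to one segment as a timeout would
            self.ssthresh = max(self.cwnd // 2, 2 * MSS)
            self.cwnd = self.ssthresh + 3 * MSS

    def on_timeout(self):
        # retransmission timeout: fall back to slow start from one segment
        self.ssthresh = max(self.cwnd // 2, 2 * MSS)
        self.cwnd = MSS
        self.dup_acks = 0
```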
This document specifies the architecture for Multiprotocol Label Switching (MPLS). [STANDARDS-TRACK]
Despite the apparent randomness of the Internet, we discover some surprisingly simple power-laws of the Internet topology. These power-laws hold for three snapshots of the Internet, between November 1997 and December 1998, despite a 45% growth of its size during that period. We show that our power-laws fit the real data very well resulting in correlation coefficients of 96% or higher. Our observations provide a novel perspective of the structure of the Internet. The power-laws describe concisely skewed distributions of graph properties such as the node outdegree. In addition, these power-laws can be used to estimate important parameters such as the average neighborhood size, and facilitate the design and the performance analysis of protocols. Furthermore, we can use them to generate and select realistic topologies for simulation purposes.
This paper addresses the issues of charging, rate control and routing for a communication network carrying elastic traffic, such as an ATM network offering an available bit rate service. A model is described from which max-min fairness of rates emerges as a limiting special case; more generally, the charges users are prepared to pay influence their allocated rates. In the preferred version of the model, a user chooses the charge per unit time that the user will pay; thereafter the user's rate is determined by the network according to a proportional fairness criterion applied to the rate per unit charge. A system optimum is achieved when users' choices of charges and the network's choice of allocated rates are in equilibrium.
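The proportional fairness criterion referred to here has a standard optimization formulation. The statement below is a sketch in the usual notation (x_s source rates, w_s the charges per unit time users choose, r(s) the set of links on source s's route, c_l link capacities), given for orientation rather than as a reproduction of the paper's exact model.

```latex
% Weighted proportionally fair rate allocation:
\begin{aligned}
\max_{x \ge 0} \quad & \sum_{s} w_s \log x_s \\
\text{subject to} \quad & \sum_{s:\, l \in r(s)} x_s \;\le\; c_l \qquad \text{for every link } l .
\end{aligned}
% At the optimum, no feasible perturbation of the rates can make the aggregate
% of weighted proportional changes, \sum_s w_s (x'_s - x_s)/x_s, positive.
```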
In this paper, we analyze a performance model for the TCP Congestion Avoidance algorithm. The model predicts the bandwidth of a sustained TCP connection subjected to light to moderate packet losses, such as loss caused by network congestion. It assumes that TCP avoids retransmission timeouts and always has sufficient receiver window and sender data. The model predicts the Congestion Avoidance performance of nearly all TCP implementations under restricted conditions and of TCP with Selective Acknowledgements over a much wider range of Internet conditions. We verify the model through both simulation and live Internet measurements. The simulations test several TCP implementations under a range of loss conditions and in environments with both drop-tail and RED queuing. The model is also compared to live Internet measurements using the TReno diagnostic and real TCP implementations. We also present several applications of the model to problems of bandwidth allocation in the Internet. We use the model to analyze networks with multiple congested gateways; this analysis shows strong agreement with prior work in this area. Finally, we present several important implications about the behavior of the Internet in the presence of high load from diverse user communities.
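The macroscopic relationship this model predicts is commonly summarized by the "inverse square-root of p" law. The form below is the widely quoted approximation (MSS the segment size, RTT the round-trip time, p the loss probability, C a constant of order one, roughly sqrt(3/2) in the deterministic periodic-loss derivation); it is given here as an illustration, not a restatement of the paper's full derivation.

```latex
% Approximate steady-state Congestion Avoidance bandwidth:
BW \;\approx\; \frac{MSS}{RTT}\cdot\frac{C}{\sqrt{p}},
\qquad C \approx \sqrt{3/2}\ \ \text{(periodic loss, no delayed ACKs)} .
```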
In this paper we use jump process driven Stochastic Differential Equations to model the interactions of a set of TCP flows and Active Queue Management routers in a network setting. We show how the SDEs can be transformed into a set of Ordinary Differential Equations which can be easily solved numerically. Our solution methodology scales well to a large number of flows. As an application, we model and solve a system where RED is the AQM policy. Our results show excellent agreement with those of similar networks simulated using the well known ns simulator. Our model enables us to get an in-depth understanding of the RED algorithm. Using the tools developed in this paper, we present a critical analysis of the RED algorithm. We explain the role played by the RED configuration parameters on the behavior of the algorithm in a network. We point out a flaw in the RED averaging mechanism which we believe is a cause of tuning problems for RED. We believe this modeling/solution methodology has a great potential in analyzing and understanding various network congestion control algorithms.
CUBIC is a congestion control protocol for TCP (transmission control protocol) and the current default TCP algorithm in Linux. The protocol modifies the linear window growth function of existing TCP standards to be a cubic function in order to improve the scalability of TCP over fast and long-distance networks. It also achieves more equitable bandwidth allocations among flows with different RTTs (round-trip times) by making window growth independent of RTT, so that those flows grow their congestion windows at the same rate. During steady state, CUBIC increases the window size aggressively when the window is far from the saturation point, and slowly when it is close to the saturation point. This feature allows CUBIC to be very scalable when the bandwidth-delay product of the network is large, and at the same time be highly stable and also fair to standard TCP flows. The implementation of CUBIC in Linux has gone through several upgrades. This paper documents its design, implementation, performance, and evolution as the default TCP algorithm of Linux.
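The cubic window growth function mentioned above is usually written as follows, with W_max the window size at the last loss event, β the multiplicative window reduction applied at a loss event, and C a scaling constant; this is the commonly published form of the function, shown as a worked illustration rather than as a Linux implementation detail.

```latex
% CUBIC window as a function of the time t elapsed since the last loss event:
W(t) \;=\; C\,(t - K)^{3} + W_{\max},
\qquad
K \;=\; \sqrt[3]{\frac{W_{\max}\,\beta}{C}} .
% Growth is steep when W(t) is far below W_max, flattens near W_max (the
% saturation point), then probes upward again; t, not the RTT, drives the growth.
```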
This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol.
We propose an optimization approach to flow control where the objective is to maximize the aggregate source utility over their transmission rates. We view network links and sources as processors of a distributed computation system to solve the dual problem using a gradient projection algorithm. In this system, sources select transmission rates that maximize their own benefits, utility minus bandwidth cost, and network links adjust bandwidth prices to coordinate the sources' decisions. We allow feedback delays to be different, substantial, and time varying, and links and sources to update at different times and with different frequencies. We provide asynchronous distributed algorithms and prove their convergence in a static environment. We present measurements obtained from a preliminary prototype to illustrate the convergence of the algorithm in a slowly time-varying environment. We discuss its fairness property.
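A minimal sketch of the price/rate iteration described here, written synchronously for readability (the paper's algorithms are asynchronous and distributed). The log utilities, step size, rate cap, and toy topology are assumptions made purely for illustration.

```python
# Toy network: 2 links, 3 sources; routes[s] lists the links used by source s.
capacity = [10.0, 8.0]
routes = {0: [0], 1: [1], 2: [0, 1]}
weights = {0: 1.0, 1: 2.0, 2: 1.5}   # assumed utilities U_s(x) = w_s * log(x)
prices = [0.0, 0.0]                  # one price (Lagrange multiplier) per link
gamma = 0.01                         # gradient-projection step size (assumed)

def source_rate(s, cap=20.0):
    # Each source maximizes w_s*log(x) - x * (sum of prices on its path),
    # giving x = w_s / path_price; capped to keep the toy example bounded.
    q = sum(prices[l] for l in routes[s])
    return min(cap, weights[s] / max(q, 1e-9))

for _ in range(2000):                # synchronous iteration of the dual algorithm
    rates = {s: source_rate(s) for s in routes}
    for l in range(len(capacity)):
        aggregate = sum(rates[s] for s in routes if l in routes[s])
        # Raise the price when the link is over-subscribed, lower it otherwise,
        # projecting onto the nonnegative orthant.
        prices[l] = max(0.0, prices[l] + gamma * (aggregate - capacity[l]))

print({s: round(source_rate(s), 3) for s in routes},
      [round(p, 3) for p in prices])
```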
A resource reservation protocol (RSVP), a flexible and scalable receiver-oriented simplex protocol, is described. RSVP provides receiver-initiated reservations to accommodate heterogeneity among receivers as well as dynamic membership changes; separates the filters from the reservation, thus allowing channel changing behavior; supports a dynamic and robust multipoint-to-multipoint communication model by taking a soft-state approach in maintaining resource reservations; and decouples the reservation and routing functions. A simple network configuration with five hosts connected by seven point-to-point links and three switches is presented to illustrate how RSVP works. Related work and unresolved issues are discussed.
Several new architectures have been developed for supporting multimedia applications such as digital video and audio. However, quality-of-service (QoS) routing is an important element that is still missing from these architectures. In this paper, we consider a number of issues in QoS routing. We first examine the basic problem of QoS routing, namely, finding a path that satisfies multiple constraints, and its implications on routing metric selection, and then present three path computation algorithms for source routing and for hop-by-hop routing.
Fair queuing is a technique that allows each flow passing through a network device to have a fair share of network resources. Previous schemes for fair queuing that achieved nearly perfect fairness were expensive to implement; specifically, the work required to process a packet in these schemes was O(log(n)), where n is the number of active flows. This is expensive at high speeds. On the other hand, cheaper approximations of fair queuing reported in the literature exhibit unfair behavior. In this paper, we describe a new approximation of fair queuing, that we call deficit round-robin. Our scheme achieves nearly perfect fairness in terms of throughput, requires only O(1) work to process a packet, and is simple enough to implement in hardware. Deficit round-robin is also applicable to other scheduling problems where servicing cannot be broken up into smaller units (such as load balancing) and to distributed queues.
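A minimal sketch of the deficit round-robin idea described above. The queue contents and quantum value are assumptions for illustration, and real implementations add an active-list optimization so that each packet is still handled in O(1) work.

```python
from collections import deque

def deficit_round_robin(queues, quantum, rounds):
    """Serve a list of per-flow packet queues (each packet is a size in bytes)
    for a fixed number of rounds. Each backlogged flow earns `quantum` bytes of
    credit per round and may send packets while its deficit covers their size."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0          # idle flows carry no credit forward
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent

# Example: flow 0 sends large packets, flow 1 small ones; both receive
# roughly equal byte shares per round.
queues = [deque([1500, 1500, 1500]), deque([300] * 10)]
print(deficit_round_robin(queues, quantum=1500, rounds=4))
```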
Vegas is an implementation of TCP that achieves between 37 and 71% better throughput on the Internet, with one-fifth to one-half the losses, as compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study, using both simulations and measurements on the Internet, of the Vegas and Reno implementations of TCP.
Reliable transport protocols such as TCP are tuned to perform well in traditional networks where packet losses occur mostly because of congestion. However, networks with wireless and other lossy links also suffer from significant losses due to bit errors and handoffs. TCP responds to all losses by invoking congestion control and avoidance algorithms, resulting in degraded end-to-end performance in wireless and lossy systems. We compare several schemes designed to improve the performance of TCP in such networks. We classify these schemes into three broad categories: end-to-end protocols, where loss recovery is performed by the sender; link-layer protocols that provide local reliability; and split-connection protocols that break the end-to-end connection into two parts at the base station. We present the results of several experiments performed in both LAN and WAN environments, using throughput and goodput as the metrics for comparison. Our results show that a reliable link-layer protocol that is TCP-aware provides very good performance. Furthermore, it is possible to achieve good performance without splitting the end-to-end connection at the base station. We also demonstrate that selective acknowledgments and explicit loss notifications result in significant performance improvements.
Vegas is a new implementation of TCP that achieves between 40 and 70% better throughput, with one-fifth to one-half the losses, as compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study, using both simulations and measurements on the Internet, of the Vegas and Reno implementations of TCP.
We analyze 20 large sets of actual variable-bit-rate (VBR) video data, generated by a variety of different codecs and representing a wide range of different scenes. Performing extensive statistical and graphical tests, our main conclusion is that long-range dependence is an inherent feature of VBR video traffic, i.e., a feature that is independent of scene (e.g., video phone, video conference, motion picture video) and codec. In particular, we show that the long-range dependence property allows us to clearly distinguish between our measured data and traffic generated by VBR source models currently used in the literature. These findings give rise to novel and challenging problems in traffic engineering for high-speed networks and open up new areas of research in queueing and performance analysis involving long-range dependent traffic models. A small number of analytic queueing results already exist, and we discuss their implications for network design and network control strategies in the presence of long-range dependent traffic.
In this paper we develop a simple analytic characterization of the steady state throughput, as a function of loss rate and round trip time for a bulk transfer TCP flow, i.e., a flow with an unlimited amount of data to send. Unlike the models in [6, 7, 10], our model captures not only the behavior of TCP's fast retransmit mechanism (which is also considered in [6, 7, 10]) but also the effect of TCP's timeout mechanism on throughput. Our measurements suggest that this latter behavior is important from a modeling perspective, as almost all of our TCP traces contained more time-out events than fast retransmit events. Our measurements demonstrate that our model is able to more accurately predict TCP throughput and is accurate over a wider range of loss rates.
This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, we argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. The paper discusses several general approaches for identifying those flows suitable for bandwidth regulation. These approaches are to identify a high-bandwidth flow in times of congestion as unresponsive, "not TCP-friendly", or simply using disproportionate bandwidth. A flow that is not "TCP-friendly" is one whose long-term arrival rate exceeds that of any conformant TCP in the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that uses considerably more bandwidth than other flows in a time of congestion.
The notion of self-similarity has been shown to apply to wide-area and local-area network traffic. We show evidence that the subset of network traffic that is due to World Wide Web (WWW) transfers can show characteristics that are consistent with self-similarity, and we present a hypothesized explanation for that self-similarity. Using a set of traces of actual user executions of NCSA Mosaic, we examine the dependence structure of WWW traffic. First, we show evidence that WWW traffic exhibits behavior that is consistent with self-similar traffic models. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local-area network. To do this, we rely on empirically measured distributions both from client traces and from data independently collected at WWW servers.
RFC 5681 documents the following four intertwined TCP congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. RFC 5681 explicitly allows certain modifications of these algorithms, including modifications that use the TCP Selective Acknowledgment (SACK) option (RFC 2883), and modifications that respond to "partial acknowledgments" (ACKs that cover new data, but not all the data outstanding when loss was detected) in the absence of SACK. This document describes a specific algorithm for responding to partial acknowledgments, referred to as "NewReno". This response to partial acknowledgments was first proposed by Janey Hoe.
The steady-state performance of a bulk transfer TCP flow (i.e., a flow with a large amount of data to send, such as FTP transfers) may be characterized by the send rate, which is the amount of data sent by the sender in unit time. In this paper we develop a simple analytic characterization of the steady-state send rate as a function of loss rate and round trip time (RTT) for a bulk transfer TCP flow. Unlike the models of Lakshman and Madhow (see IEEE/ACM Trans. Networking, vol.5, p.336-50, 1997), Mahdavi and Floyd (1997), Mathis, Semke, Mahdavi and Ott (see Comput. Commun. Rev., vol.27, no.3, 1997) and by Ott et al., our model captures not only the behavior of the fast retransmit mechanism but also the effect of the time-out mechanism. Our measurements suggest that this latter behavior is important from a modeling perspective, as almost all of our TCP traces contained more time-out events than fast retransmit events. Our measurements demonstrate that our model is able to more accurately predict TCP send rate and is accurate over a wider range of loss rates. We also present a simple extension of our model to compute the throughput of a bulk transfer TCP flow, which is defined as the amount of data received by the receiver in unit time.
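The send-rate characterization developed in this paper is usually quoted in the following approximate closed form, with b the number of packets acknowledged per ACK, p the loss event probability, T_0 the retransmission timeout, RTT the round-trip time, and W_max the receiver-window limit. This is the commonly cited approximation, reproduced for orientation rather than as the paper's exact expression.

```latex
% Approximate steady-state TCP send rate with timeouts (packets per unit time):
B(p) \;\approx\;
\min\!\left(
  \frac{W_{\max}}{RTT},\;
  \frac{1}{\;RTT\sqrt{\dfrac{2bp}{3}}
        \;+\; T_0\,\min\!\left(1,\,3\sqrt{\dfrac{3bp}{8}}\right) p\,(1+32p^{2})\;}
\right) .
```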
A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today's wide-area routing protocols that take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves, and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON's routing mechanism was able to detect, recover, and route around all of them, in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5% of the transfers doubled their TCP throughput and 5% of our transfers saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems.
In this paper, we demonstrate the existence of fair end-to-end window-based congestion control protocols for packet-switched networks with first-come-first-served routers. Our definition of fairness generalizes proportional fairness and includes arbitrarily close approximations of max-min fairness. The protocols use only information that is available to end hosts and are designed to converge reasonably fast. Our study is based on a multiclass fluid model of the network. The convergence of the protocols is proved using a Lyapunov function. The technical challenge is in the practical implementation of the protocols.
Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. We evaluate 24 wide area traces, investigating a number of wide area TCP arrival processes (session and connection arrivals, FTP data connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. We find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib interarrivals preserves burstiness over many time scales; and that FTP data connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTP data traffic. Finally, we offer some results regarding how our findings relate to the possible self-similarity of wide area traffic.
The authors propose a computationally simple approximate expression for the equivalent capacity or bandwidth requirement of both individual and multiplexed connections, based on their statistical characteristics and the desired grade-of-service (GOS). The purpose of such an expression is to provide a unified metric to represent the effective bandwidth used by connections and the corresponding effective load of network links. These link metrics can then be used for efficient bandwidth management, routing, and call control procedures aimed at optimizing network usage. While the methodology proposed can provide an exact approach to the computation of the equivalent capacity, the associated complexity makes it infeasible for real-time network traffic control applications. Hence, an approximation is required. The validity of the approximation developed is verified by comparison to both exact computations and simulation results.
The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
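A minimal sketch of the RED marking decision summarized above. The parameter values (averaging weight, thresholds, maximum probability) are illustrative assumptions, and the count-based adjustment that spreads drops more evenly between marking events is included only in simplified form.

```python
import random

class RedQueue:
    """Simplified RED gateway: drop/mark probability grows with the EWMA queue size."""

    def __init__(self, w_q=0.002, min_th=5, max_th=15, max_p=0.1):
        self.w_q, self.min_th, self.max_th, self.max_p = w_q, min_th, max_th, max_p
        self.avg = 0.0       # exponentially weighted average queue size (packets)
        self.count = 0       # packets accepted since the last drop/mark

    def on_arrival(self, current_queue_len):
        """Return True if the arriving packet should be dropped or marked."""
        self.avg = (1 - self.w_q) * self.avg + self.w_q * current_queue_len
        if self.avg < self.min_th:
            self.count = 0
            return False                      # accept: no incipient congestion
        if self.avg >= self.max_th:
            self.count = 0
            return True                       # drop/mark every arrival
        # Linear ramp of the base probability between the two thresholds.
        p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        p_a = p_b / max(1e-9, 1 - self.count * p_b)   # spread drops between events
        self.count += 1
        if random.random() < p_a:
            self.count = 0
            return True
        return False
```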
This paper proposes a mechanism for equation-based congestion control for unicast traffic. Most best-effort traffic in the current Internet is well-served by the dominant transport protocol, TCP. However, traffic such as best-effort unicast streaming multimedia could find use for a TCP-friendly congestion control mechanism that refrains from reducing the sending rate in half in response to a single packet drop. With our mechanism, the sender explicitly adjusts its sending rate as a function of the measured rate of loss events, where a loss event consists of one or more packets dropped within a single round-trip time. We use both simulations and experiments over the Internet to explore performance.
This paper describes scalable reliable multicast (SRM), a reliable multicast framework for light-weight sessions and application level framing. The algorithms of this framework are efficient, robust, and scale well to both very large networks and very large sessions. The SRM framework has been prototyped in wb, a distributed whiteboard application, which has been used on a global scale with sessions ranging from a few to a few hundred participants. The paper describes the principles that have guided the SRM design, including the IP multicast group delivery model, an end-to-end, receiver-based model of reliability, and the application level framing protocol model. As with unicast communications, the performance of a reliable multicast delivery algorithm depends on the underlying topology and operational environment. We investigate that dependence via analysis and simulation, and demonstrate an adaptive algorithm that uses the results of previous loss recovery events to adapt the control parameters used for future loss recovery. With the adaptive algorithm, our reliable multicast delivery algorithm provides good performance over a wide range of underlying topologies.
A number of empirical studies of traffic measurements from a variety of working packet networks have demonstrated that actual network traffic is self-similar or long-range dependent in nature, in sharp contrast to commonly made traffic modeling assumptions. We provide a plausible physical explanation for the occurrence of self-similarity in local-area network (LAN) traffic. Our explanation is based on convergence results for processes that exhibit high variability and is supported by detailed statistical analyses of real-time traffic measurements from Ethernet LANs at the level of individual sources. This paper is an extended version of Willinger et al. (1995). We develop here the mathematical results concerning the superposition of strictly alternating ON/OFF sources. Our key mathematical result states that the superposition of many ON/OFF sources (also known as packet-trains) with strictly alternating ON- and OFF-periods and whose ON-periods or OFF-periods exhibit the Noah effect produces aggregate network traffic that exhibits the Joseph effect. There is, moreover, a simple relation between the parameters describing the intensities of the Noah effect (high variability) and the Joseph effect (self-similarity). An extensive statistical analysis of high time-resolution Ethernet LAN traffic traces confirms that the data at the level of individual sources or source-destination pairs are consistent with the Noah effect. We also discuss implications of this simple physical explanation for the presence of self-similar traffic patterns in modern high-speed network traffic.
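The "simple relation" between the Noah and Joseph effects referred to above is usually stated as follows: if the ON (or OFF) periods are heavy-tailed with tail index α between 1 and 2, the aggregate traffic is asymptotically self-similar with a Hurst parameter determined by the heavier of the two tails. The statement below is the commonly quoted form of that result, given for orientation.

```latex
% Heavy-tailed ON/OFF periods (Noah effect):
P(\text{period} > t) \;\sim\; c\,t^{-\alpha}, \qquad 1 < \alpha < 2 .
% Aggregating many such sources yields asymptotically self-similar traffic
% (Joseph effect) with Hurst parameter
H \;=\; \frac{3 - \alpha_{\min}}{2},
\qquad \alpha_{\min} = \min(\alpha_{\mathrm{ON}}, \alpha_{\mathrm{OFF}}),
\qquad \tfrac{1}{2} < H < 1 .
```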
The problem of allocating network resources to the users of an integrated services network is investigated in the context of rate-based flow control. The network is assumed to be a virtual circuit, connection-based packet network. It is shown that the use of generalized processor sharing (GPS), when combined with leaky bucket admission control, allows the network to make a wide range of worst-case performance guarantees on throughput and delay. The scheme is flexible in that different users may be given widely different performance guarantees and is efficient in that each of the servers is work conserving. The authors present a practical packet-by-packet service discipline, PGPS, that closely approximates GPS. This allows them to relate results for GPS to the packet-by-packet scheme in a precise manner. The performance of a single-server GPS system is analyzed exactly from the standpoint of worst-case packet delay and burstiness when the sources are constrained by leaky buckets. The worst-case session backlogs are also determined.
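The flavor of worst-case guarantee described here can be summarized by the single-node bound for a leaky-bucket constrained session. The form below is the standard single-node statement (session i constrained by burst σ_i and token rate ρ_i, GPS guaranteed rate g_i ≥ ρ_i, maximum packet length L_max, link rate r); it is given as orientation under those assumptions rather than as the paper's full multi-node analysis.

```latex
% Single-node worst-case delay for a (\sigma_i, \rho_i) leaky-bucket session
% served at guaranteed GPS rate g_i \ge \rho_i:
D_i^{\mathrm{GPS}} \;\le\; \frac{\sigma_i}{g_i},
\qquad
D_i^{\mathrm{PGPS}} \;\le\; \frac{\sigma_i}{g_i} + \frac{L_{\max}}{r}
% (the packet-by-packet scheme finishes each packet at most L_max / r
%  later than GPS would).
```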
Congestion Avoidance and Control (V. Jacobson, University of California, Lawrence Berkeley Laboratory). ACM SIGCOMM Computer Communication Review, Volume 25, Issue 1, January 1995, pp. 157–187. https://doi.org/10.1145/205447.205462
In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes". Since that time, we have put seven new algorithms into the 4BSD TCP: (i) round-trip-time variance estimation, (ii) exponential retransmit timer backoff, (iii) slow-start, (iv) more aggressive receiver ack policy, (v) dynamic window sizing on congestion, (vi) Karn's clamped retransmit backoff, and (vii) fast retransmit. Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet. This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC. Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them. By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy? There are only three ways for packet conservation to fail: (1) the connection doesn't get to equilibrium; (2) a sender injects a new packet before an old packet has exited; or (3) the equilibrium can't be reached because of resource limits along the path. In the following sections, we treat each of these in turn.
Danny De Vleeschauwer, Chia-Yu Chang, Koen De Schepper, et al. | EURASIP Journal on Wireless Communications and Networking
Paul Filson Tengangatu | JATISI (Jurnal Teknik Informatika dan Sistem Informasi)
The use of File Transfer at Telkom Gambir is important because this Telkom branch consists of divisions that play a key role in installing and maintaining the GPON network in Central Jakarta. The authors therefore looked for ways to improve the quality of the File Transfer service with respect to LAN performance and the Telkom Gambir network topology by building a simulation in Riverbed Modeler. The simulation was then compared against the traffic-shaping and bandwidth-management techniques known to the authors in order to improve File Transfer service quality on the Telkom Gambir network. The analysis examined the Global Statistics results of the File Transfer scenarios and compared them with the currently running system. Based on this analysis, applying a rate limit at the Access Point is the best scenario for improving File Transfer service quality: in addition to delivering better service quality than the current system, it also yields the best Traffic Received and Traffic Sent of all the scenarios.
A sketch is a probabilistic data structure that can accurately estimate massive network traffic with a small memory overhead. To improve the measurement accuracy, most sketch-based schemes separate elephant flows from mouse flows to accommodate the skewed network traffic. However, the increased algorithmic complexity often results in a sacrifice of measurement throughput. In addition, some improved sketches may be over-reliant on the skewed distribution of traffic, which results in unstable accuracy. To this end, a novel sketch, called Air Sketch, is proposed in this paper. It treats flows of different sizes as air with different temperatures. Meanwhile, a deterministic replacement strategy is applied to elephant flows. In order to improve throughput, an asymmetric insertion and query algorithm with a global hash is designed. The performance of Air Sketch is evaluated using real traffic transaction datasets, anonymized internet traces, and synthetic Zipf datasets. The experimental results demonstrate that Air Sketch can outperform the best typical measurement methods by up to 27 times in flow size measurement and up to 40 times in elephant flow detection. Additionally, Air Sketch achieves high accuracy and stability while achieving high insertion and query throughput.
Efficient traffic signal control plays a critical role in promoting sustainable mobility by reducing congestion and minimizing vehicle emissions. This paper proposes an enhanced max-pressure (MP) signal control strategy that explicitly accounts for phase switching time losses in grid road networks. While the traditional MP control strategy is recognized for its decentralized architecture and simplicity, it often neglects the delays introduced by frequent phase changes, limiting its real-world effectiveness. To address this issue, three key improvements are introduced in this study. First, a redefined phase pressure formulation is presented, which incorporates imbalances in traffic demand across multiple inlet roads within a single phase. Second, a dynamic green phase extension mechanism is developed, which adjusts phase durations in real time based on queue lengths to improve traffic flow responsiveness. Third, a current-phase protection mechanism is implemented by applying an amplification factor to the current-phase pressure calculations, thereby mitigating unnecessary phase switching. Simulation results using SUMO on a grid network demonstrate that the proposed strategy significantly reduces average vehicle delays and queue lengths compared with traditional MP, travel-time based MP, and fixed-time control strategies, leading to improved overall traffic efficiency. Specifically, the proposed method reduces total delay by 24.83%, 26.67%, and 47.11%, and average delay by approximately 16.18%, 18.91%, and 36.22%, respectively, while improving traffic throughput by 2.25%, 2.76%, and 5.84%. These improvements directly contribute to reducing traffic congestion, fuel consumption, and greenhouse gas emissions, thereby reinforcing the role of adaptive signal control in achieving smart and sustainable cities. The proposed approach can serve as a practical reference for improving real-world traffic signal control systems, particularly in regions seeking to improve sustainability and operational efficiency.
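A minimal sketch of the baseline max-pressure idea this paper builds on: at each decision point, activate the phase whose permitted movements have the largest upstream-minus-downstream queue imbalance. The data structures and the toy intersection are assumptions; the paper's refinements (switching-loss awareness, green extension, current-phase protection) are not reproduced here.

```python
def phase_pressure(phase_movements, queue_len, turn_ratio):
    """Pressure of a phase = sum over its permitted movements (i -> j) of the
    upstream queue minus the turn-ratio-weighted downstream queues."""
    pressure = 0.0
    for (i, j) in phase_movements:
        downstream = sum(turn_ratio.get((j, k), 0.0) * queue_len.get((j, k), 0.0)
                         for (jj, k) in turn_ratio if jj == j)
        pressure += queue_len.get((i, j), 0.0) - downstream
    return pressure

def max_pressure_phase(phases, queue_len, turn_ratio):
    """Classic MP control: activate the phase with the highest pressure."""
    return max(phases, key=lambda p: phase_pressure(phases[p], queue_len, turn_ratio))

# Assumed toy intersection: two phases (NS and EW); queues keyed by movement (from, to).
phases = {"NS": [("N", "S"), ("S", "N")], "EW": [("E", "W"), ("W", "E")]}
queues = {("N", "S"): 12, ("S", "N"): 9, ("E", "W"): 4, ("W", "E"): 6}
ratios = {}  # no downstream queues are modeled in this toy example
print(max_pressure_phase(phases, queues, ratios))   # -> "NS"
```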
Mobile Ad Hoc Networks (MANETs) operate autonomously in decentralized configurations for military, emergency, and academic applications. Their adaptable structure and unstable links lead to severe congestion at peak network activity. This research studies congestion in MANETs by combining network simulation with machine learning analytics to detect and control congestion effectively. OPNET 14.5 was used to simulate office scenarios containing five, ten, and fifteen mobile nodes in order to study congestion patterns. Network performance was measured through three metrics: network load (bits per second), media access delay (seconds), and traffic received (bits per second). Congestion increased with node density: the network load was 2.8 Mbps with five nodes and reached 5.2 Mbps with fifteen nodes, and media access delay peaked at 0.0056 seconds under maximum traffic. Decision Trees, Random Forest, and Artificial Neural Networks (ANNs) were applied for congestion detection, achieving accuracies of 98.7%, 99.3%, and 99.8%, respectively. The results show that ML-based adaptive load balancing improves network stability and real-time throughput under congestion, and that real-time predictive analysis improves routing stability and reduces delays in military ad hoc networks. The OPNET simulation platform provides an organized environment for evaluating and enhancing such systems.
Muhammad Aslam Noor, Sofyar Sofyar | Jurnal Teknologi Informasi Universitas Lambung Mangkurat (JTIULM)
The performance of a Local Area Network (LAN) greatly depends on the efficiency of the routing protocol applied. This study aims to optimize the configuration of the Open Shortest Path First (OSPF) protocol to improve LAN performance. OSPF is a dynamic routing protocol based on a link-state algorithm that calculates the shortest path using Dijkstra's algorithm. The research method employed is simulation using Cisco Packet Tracer software. The network topology consists of multiple routers and end-devices, divided into three OSPF areas, each configured with specific IDs, subnets, and IP addresses. The configuration process includes setting IP addresses, assigning OSPF areas, and testing connectivity using ping and traceroute commands. The results demonstrate that OSPF successfully establishes full adjacency between routers, synchronizes the Link-State Advertisement (LSA) database, and ensures optimal routing paths across devices. This implementation proves that OSPF enhances efficiency, convergence speed, and network stability. The study contributes to the development of small to medium-scale LAN networks requiring optimal and reliable data traffic management. It also provides practical insight for network engineers in designing scalable and high-performance routing configurations.
The rapid development of wireless network technology and the continuous evolution of network service demands have raised higher requirements for congestion control algorithms. In 2016, Google proposed the Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control algorithm based on the Transmission Control Protocol (TCP) protocol. While BBR offers lower latency and higher throughput compared to traditional congestion control algorithms, it still faces challenges. These include the periodic triggering of the ProbeRTT phase, which impairs data transmission efficiency, data over-injection caused by the congestion window (CWND) value-setting policy, and the difficulty of coordinating resource allocation across multiple concurrent flows. These limitations make BBR less effective in multi-stream competition scenarios in high-speed wireless networks. This paper analyzes the design limitations of the BBR algorithm from a theoretical perspective and proposes the Adaptive-BBR (Ad-BBR) algorithm. The Ad-BBR algorithm incorporates real-time RTT and link queue-state information, introduces a new RTprop determination mechanism, and implements a finer-grained, RTT-based adaptive transmission rate adjustment mechanism to reduce data over-injection and improve RTT fairness. Additionally, the ProbeRTT phase-triggering mechanism is updated to ensure more stable and smoother data transmission. In the NS3, 5G, and Wi-Fi simulation experiments, Ad-BBR outperformed all comparison algorithms by effectively mitigating data over-injection and minimizing unnecessary entries into the ProbeRTT phase. Compared to the BBRv1 algorithm, Ad-BBR achieved a 17% increase in throughput and a 30% improvement in RTT fairness, along with a 13% reduction in the retransmission rate and an approximate 20% decrease in latency.
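A minimal sketch of the core BBR-style rate computation that the Ad-BBR modifications start from: the sender tracks a windowed maximum of delivery rate (BtlBw) and a windowed minimum of RTT (RTprop), and derives its pacing rate and cwnd from their product. The gain cycling and ProbeRTT scheduling are omitted; the constants and structure are illustrative assumptions, not the Linux implementation.

```python
from collections import deque

class BbrLikeEstimator:
    """Simplified BBR-style model: pacing_rate = gain * BtlBw, cwnd = gain * BDP."""

    def __init__(self, btlbw_window=10, rtprop_window=40):
        self.bw_samples = deque(maxlen=btlbw_window)    # recent delivery-rate samples (bytes/s)
        self.rtt_samples = deque(maxlen=rtprop_window)  # recent RTT samples (seconds)

    def on_ack(self, delivered_bytes, interval_s, rtt_s):
        # Record one delivery-rate sample and one RTT sample per ACK.
        if interval_s > 0:
            self.bw_samples.append(delivered_bytes / interval_s)
        self.rtt_samples.append(rtt_s)

    def control(self, pacing_gain=1.0, cwnd_gain=2.0):
        if not self.bw_samples or not self.rtt_samples:
            return None
        btlbw = max(self.bw_samples)        # bottleneck-bandwidth estimate (windowed max)
        rtprop = min(self.rtt_samples)      # round-trip propagation estimate (windowed min)
        bdp = btlbw * rtprop                # bandwidth-delay product in bytes
        return {"pacing_rate": pacing_gain * btlbw, "cwnd": cwnd_gain * bdp}

est = BbrLikeEstimator()
est.on_ack(delivered_bytes=150_000, interval_s=0.01, rtt_s=0.020)
est.on_ack(delivered_bytes=140_000, interval_s=0.01, rtt_s=0.022)
print(est.control())
```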
MPTCP is rapidly emerging as one of the most advanced networking protocols. Standardized by the IETF as an extension of TCP, it enables seamless communication across multiple interfaces from source to destination. Despite its potential, existing multipath congestion control mechanisms face significant challenges due to the diverse QoS characteristics of heterogeneous interfaces. While recent algorithms primarily emphasize enhancing the growth dynamics of the congestion window (CWND), the reduction mechanisms remain largely overlooked. Furthermore, conventional congestion control approaches often rely on manual adjustments, which are insufficient in highly dynamic network environments. Given the demonstrated success of machine learning algorithms across industries such as IoT, video streaming, and autonomous vehicles, this study introduces the Deep Deterministic Policy Gradient Multi-Path (DDPG-MP) framework. This innovative approach dynamically optimizes congestion control using a balancing factor, enabling adaptive and efficient performance in multipath networking environments.
In the emerging paradigm of embodied intelligence, eVTOL technology holds significant potential to transform the low-altitude economy, particularly in short-distance emergency logistics and urban distribution. Companies like Meituan and Shunfeng (SF) are pioneering fixed low-altitude routes to reduce reliance on human delivery. We first investigate the performance and routing of Meituan's eVTOL system, focusing on the dynamic optimization of eVTOL reserves and total costs at distribution stations under fluctuating order surges and charging constraints. An iterative algorithm is constructed, supported by numerical examples and Monte Carlo simulations. Our results reveal that cost parameters and demand characteristics jointly shape eVTOL incremental decision-making and its economic performance. To optimize costs, strategies like multi-period decentralized scheduling or low-frequency centralized decision-making are proposed. Future research will address limitations such as 2C charging effects and joint battery-eVTOL replenishment to further advance urban logistics and low-altitude economy development.
Ensuring optimal quality of service (QoS) in computer networks requires a detailed assessment of performance metrics, with queuing delay within intermediate devices being a critical parameter. This paper presents a predictive QoS model designed to reduce queuing delays by analyzing traffic patterns in intermediate devices on point-to-point network connections. The proposed Length Packet Queuing (LPQ) model leverages packet length analysis to predict and manage queuing delays without relying on traditional packet marking mechanisms. Network traffic patterns and queuing delays are estimated with a Poisson distribution and polynomial regression models, respectively, showing significant improvements over conventional QoS models. Simulations and experimental scenarios validated the LPQ model's effectiveness, showing lower delays across a range of network loads and traffic conditions. The results highlight the potential of the LPQ model for enhancing QoS in hybrid networks, where user applications generate diverse packets.
The use of containers and Kubernetes in managing hybrid network infrastructure facilitates the scalable and efficient deployment of applications across mixed on-premises and cloud environments. In a hybrid network, the IT infrastructure must be able to manage resources from multiple origins (on-premises and in the cloud). Here, the integration of containers and Kubernetes can optimize and automate the administration of the components that make up a hybrid architecture. The purpose of this article is to analyze the impact and benefits of using containers and Kubernetes in a hybrid network environment through a review of theoretical foundations and prior research; key concepts, recent studies, and emerging challenges are explored, and the relevance of these technologies in the current context of hybrid network infrastructure is discussed.
The widespread adoption of the internet has transformed communication, work, and information access, emphasizing the need for high-speed connectivity. Accurate prediction of network latency, particularly Ping, is essential for enhancing user experiences and optimizing network efficiency. This study focuses on predicting Ping latency using data from ADSL internet speed tests, incorporating variables such as geographical coordinates (longitude and latitude), subscribed package download and upload speeds, and internet provider band. The dataset is split into training and testing sets for model evaluation. Through our analysis of ADSL speed test data, we achieve a Mean Absolute Percentage Error (MAPE) of 11.98% for Ping prediction. These results provide valuable insights for stakeholders aiming to enhance the reliability and efficiency of broadband services. For network operators and service providers, our results provide a roadmap for optimizing infrastructure and refining management approaches to deliver superior service quality. Likewise, end-users stand to benefit from improved network performance, leading to smoother online interactions and heightened satisfaction.
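For reference, the error metric quoted above is the mean of absolute relative errors expressed as a percentage; the formula below is the standard definition of MAPE (y_i measured Ping, ŷ_i predicted Ping, n test samples), not a detail taken from the study itself.

```latex
\mathrm{MAPE} \;=\; \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right| .
```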