For a few rows, such as the first and the last, agreement between the last two columns is quite good. However, there are some decidedly anomalous cases in between, particularly the numbers in bold. As delayB changes from 35 to 36, the goodput ratio jumps abruptly. These values were, admittedly, specially chosen by trial and error to illustrate relatively discontinuous behavior of the goodput ratio, but, still, what is going on? This erratic behavior in the goodput ratio in the table above turns out to be due to what are called phase effects in [FJ92]; transmissions of the two TCP connections become precisely synchronized in some way that involves a persistent negative bias against one of them.
We begin by taking a more detailed look at the bottleneck queue when no competition is involved. Our first observation is that at each instant when a packet from A fully arrives at R, R is always at exactly the same point in forwarding some earlier packet on towards B; the output meter always reads the same percentage. In the absence of other congestion, this R-to-R time includes no variable queuing delays, and so is constant.
The output-meter percentage above is determined by this elapsed time. If we ratchet up the winsize until the queue becomes full when each new packet arrives, then the phase percentage represents the fraction of the time the queue has a vacancy.
Finally, even in the presence of competition through R, the phase of a single connection remains constant provided there are no queuing delays along the bidirectional A—B path except at R itself, and there only in the forward direction towards B. The significance of phase to competition is that whenever A and B send packets that happen to arrive at R in the same ms interval while R is forwarding some other packet, if the queue has only one vacancy then the connection with the smaller phase will always win it.
Imagine that R forwards a B packet and then, four packets (40 ms) later, forwards an A packet. Now consider what happens when a packet is dropped.

We can visualize phase effects with ns-2 by letting delayB range over, say, 0 to 50 in small increments, and plotting the corresponding values of ratio2 above. Classically we expect ratio2 to be close to 1. In the graph below, the blue curve represents the goodput ratio; it shows a marked though not perfect periodicity with period 5 ms.
When the blue curve is high, the slower B—D connection is proportionately at an unexpected disadvantage. In the real world, the kind of precise transmission synchronization that leads to phase effects is seldom evident, though perhaps this has less to do with rarity and more with the fact that head-to-head TCP competitions are difficult to observe intimately.
Usually, however, there seems to be sufficient other traffic present to disrupt the synchronization. How can we break this synchronization in simulations? One way or another, we must inject some degree of randomization into the bulk TCP flows. One of the options, randomizing the queue discipline itself, of course means we are no longer simulating FIFO queues.
A third way of addressing phase effects is to make use of the ns-2 overhead variable, which introduces some modest randomization in packet-departure times at the TCP sender.
Because this technique is simpler, we will start with it. One difference between the use of overhead and telnet traffic is that the latter introduces delays at all nodes of the network that carry the traffic, not just at the TCP sources. The overhead variable is 0 by default and is measured in units of seconds. Packets are always sent in order; if packet 2 is assigned a small overhead delay and packet 1 a large overhead delay, then packet 2 still waits until packet 1 has been sent.
For this reason, it is a good idea to keep the average overhead delay no more than the average packet interval (here 10 ms). The following graph shows four curves, representing an overhead of 0 and three increasing nonzero values. For each curve, ratio1 (not the actual goodput ratio, and not ratio2) is plotted as a function of delayB as the latter ranges from 25 to 55 ms.
We will return to the choice of ratio1 below. We conclude that the latter two values for overhead are quite effective at mitigating phase effects. One crude way to quantify the degree of graph oscillation is by calculating the mean deviation; the respective deviation values for the curves above drop sharply once overhead becomes nonzero. Recall that the time to send one packet on the bottleneck link is 0.01 seconds (10 ms). Compared to this per-packet R—D transmission time, the average added delays of 5 and 10 ms per flow from the two larger overhead values are not out of line. They are, however, quite large when compared to the much smaller per-packet transmission times on the A—R and B—R links. Generally, if the goal is to reduce phase effects then overhead should be comparable to the bottleneck-router per-packet transmission time.
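In ns-2, overhead is a bound variable of the TCP agent. A minimal sketch of setting it, assuming a TCP Reno sender named tcp0 as in scripts like basic2.tcl (the agent name and the specific value are illustrative):

    # add a small random delay (in seconds) to each packet departure at the sender;
    # overhead_ defaults to 0, which disables the randomization entirely
    set tcp0 [new Agent/TCP/Reno]
    $tcp0 set overhead_ 0.01

The same attribute can also be set class-wide, with Agent/TCP set overhead_ 0.01, so that every TCP agent created afterwards picks it up.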
The problem with using overhead this way is that it does not correspond to any physical network delay or other phenomenon. Its use here represents a decidedly ad hoc strategy to introduce enough randomization that phase effects disappear. We can also introduce anti-phase-effect randomization by making use of the ns-2 telnet application to generate low-to-moderate levels of random traffic.
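For reference, a telnet source in ns-2 is an Application/Telnet attached to its own TCP agent; with a nonzero interval_, inter-packet times are drawn from an exponential distribution with that mean. A sketch, with node and agent names (A, D, tcpA, and so on) as placeholders:

    # a low-rate telnet connection from A to D, alongside the bulk ftp traffic
    set tcpA  [new Agent/TCP/Reno]
    set sinkA [new Agent/TCPSink]
    $ns attach-agent $A $tcpA
    $ns attach-agent $D $sinkA
    $ns connect $tcpA $sinkA

    set telnetA [new Application/Telnet]
    $telnetA attach-agent $tcpA
    # mean inter-packet time in seconds; smaller values mean higher telnet density
    $telnetA set interval_ 0.1
    $ns at 0.0 "$telnetA start"

The density of such a connection can then be computed from the mean packet rate, the actual transmitted packet size and the bottleneck bandwidth.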
The telnet packets themselves are small, but the actual transmitted size includes headers; we denote the latter number by actualSize (see exercise 9), so the total telnet packet size is actualSize bytes. The first question we need to address is whether telnet traffic can sufficiently dampen phase-effect oscillations, and, if so, at what densities. The following graph is the telnet version of the overhead graph above. The given telnet density percentages apply to each of the A—D and B—D telnet connections; the total telnet density is thus double the value shown.
So far the use of telnet is very promising. As densities increase, however, phase-effect oscillation returns, and the curves converge towards the heavier red curve at the top. Once the sliding window becomes the limiting factor on telnet packet transmission, the telnet connections behave much like — and become synchronized with — their corresponding bulk-traffic ftp connections.
At that point their ability to moderate phase effects is greatly diminished, as actual packet departures no longer have anything to do with the exponential random distribution that generates the packets.
Despite this last issue, the fact that small levels of random traffic can lead to large reductions in phase effects can be taken as evidence that, in the real world, where other traffic sources are ubiquitous, phase effects will seldom be a problem.
The next step is to compare the effect on the original bulk-traffic flows of overhead randomization and telnet randomization. The graphs below plot ratio1 as a function of delayB, as the latter ranges upward from 0 in increments of 5 ms. The bottleneckBW is 0.8 Mbps, as before. The four reddish-hued curves represent the result of using telnet with a fixed packet size, at a range of densities. These may be compared with the three green curves representing the use of overhead at three different values.
This raises the awkward possibility that the exact mechanism by which we introduce randomization may have a material effect on the fairness ratios that we ultimately observe. The advantage of using telnet for randomization is that it represents an actual network phenomenon, unlike overhead. The drawback to using telnet is that the effect on the bulk-traffic goodput ratio is, as the graph above shows, somewhat sensitive to the exact value chosen for the telnet density.
In the remainder of this chapter, we will continue to use the overhead model, for simplicity, though we do not claim this is a universally appropriate approach. In all three of the preceding graphs, ratio1 falls consistently short of what we might expect. This, however, is in fact an artifact, due to the second flaw in our simulated network: RTT is not very constant. This means that the computed values for RTTratio are too large, and the computed values for ratio1 are thus too small.
While one approach to address this problem is to keep careful track of RTTactual, a simpler strategy is to create a simulated network in which the queuing delay is small compared to the propagation delay and so the RTT is relatively constant. We turn to this next. In modern high-bandwidth networks, queuing delays tend to be small compared to the propagation delay. To simulate such a network here, we simply increase the bandwidth of all the links tenfold, while leaving the existing propagation delays the same.
The value of overhead also needs to be scaled down by a factor of 10. The delayB parameter runs upward from 0 in steps of 1 ms. The first observation to make is that ratio1 is generally too large and ratio2 is generally too small, when compared to 1. In other words, neither is an especially good fit. This appears to be a fairly general phenomenon, in both simulation and the real world: TCP Reno throughput ratios tend to be somewhere between the corresponding RTT ratio and the square of the RTT ratio.
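For concreteness, here is the arithmetic as a small Tcl sketch. It assumes that ratio1 and ratio2 are the measured goodput ratio divided by the RTT ratio and by its square, respectively, which matches the usage above (the synchronized-loss prediction being ratio2 close to 1); the procedure and variable names are ours:

    # goodput0, goodput1: measured goodputs of the shorter-path and longer-path flows
    # rtt0, rtt1: the corresponding no-load RTTs, with rtt1 >= rtt0
    proc ratios {goodput0 goodput1 rtt0 rtt1} {
        set goodput_ratio [expr {double($goodput0)/$goodput1}]
        set rtt_ratio     [expr {double($rtt1)/$rtt0}]
        set ratio1 [expr {$goodput_ratio / $rtt_ratio}]
        set ratio2 [expr {$goodput_ratio / ($rtt_ratio * $rtt_ratio)}]
        return [list $ratio1 $ratio2]
    }

The observation above then says that ratio1 usually comes out above 1 and ratio2 below 1.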
The synchronized-loss hypothesis led to the prediction that the goodput ratio would be close to the square of the RTT ratio, that is, that ratio2 would be close to 1. As this conclusion appears to fail, the hypothesis too must fail, at least to a degree: it must be the case that not all losses are shared.
Most of this variation appears unrelated to the 5 ms period we would expect for phase effects, as in the earlier graph. However, it is important to increment delayB in amounts much smaller than 5 ms in order to rule this out, hence the increment of 1 ms here.
These delayB values correspond to increasing the RTT by integral multiples 2, 3 and 4 respectively, and the peaks are presumably related to some kind of higher-level phase effect. If the synchronized-loss fairness model fails, with what do we replace it? Here are two ad hoc options. First, we can try to fit a curve of a suitable form, for example a power of the RTT ratio, to the data.
We do not, however, possess for either of these formulas a model for the relative losses in the two primary TCP connections that is precise enough to offer an explanation of the formula (though see the final paragraph of this section). The transit capacity here is a fixed number of packets and another 20 can be in the queue at R; thus an ideal sawtooth would oscillate between the sum of these two and half that value.
This low utilization, however, is indeed related to loss and timeouts; the corresponding combined goodput percentage for SACK TCP, which as we shall see below is much better behaved in this regard, is noticeably higher.

If the synchronized-loss model is not entirely accurate, as we concluded from the graph above, what does happen with packet losses when the queue fills? At this point we shift focus from analyzing goodput ratios to analyzing the underlying loss events, using the ns-2 tracefile.
We will look at the synchronization of loss events between different connections and at how many individual packet losses may be involved in a single TCP loss response. The fact that Selective ACKs are nearly universal in the real world is another good reason to enable them here.
Packets are dropped when the queue fills. It may be the case that only a single packet is dropped; it may be the case that multiple packets are dropped from each of multiple connections. We will refer to the set of packets dropped as a drop cluster.
Eventually the TCP senders that have experienced packet loss reduce their cwnd , and thus the queue utilization drops. At that point we would classically not expect more losses until the senders involved have had time to grow their cwnd s through the additive-increase process to the point of again filling the queue.
As we shall see below, it is possible that the packet losses associated with one full-queue period are spread out over sufficient time (more than one RTT, at a minimum) that TCP Reno responds to them separately and halves cwnd more than once in rapid succession. We will refer to this as a loss-response cluster, or sometimes as a tooth cluster. In the ns-2 simulator, counting individual lost packets and TCP loss responses is straightforward enough. For TCP Reno, there are only two kinds of loss responses: Fast Recovery, in which cwnd is halved, and coarse timeout, in which cwnd is set to 1.
Counting clusters, however, is more subjective; we need to decide when two drops or responses are close enough to one another that they should be counted as part of the same cluster. We use the notion of granularity here: two or more losses separated by less time than the granularity time interval are counted as a single event.
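The text's own counting scripts are in Python; the clustering rule itself is simple enough to sketch directly. The following Tcl procedure, with names of our own choosing, counts clusters in a list of event times, merging any two events separated by less than the granularity:

    # count clusters of events: times closer together than $granularity
    # (in seconds) are merged into the same cluster
    proc countclusters {times granularity} {
        set count 0
        set last -1e18
        foreach t [lsort -real $times] {
            if {$t - $last >= $granularity} { incr count }
            set last $t
        }
        return $count
    }
    # eg [countclusters {10.1 10.3 17.2 17.25 30.0} 2.0] returns 3

The same rule, applied to a merged list of events from two connections, decides when their loss responses count as shared.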
We can also use granularity to decide when two loss responses in different connections are to be considered parts of the same event, or to tie loss responses to packet losses. The appropriate length of the granularity interval is not as clear-cut as might be hoped. In order to cover coarse timeouts, a granularity of two to three seconds often seems to work well for packet drops. The difference between one fast-recovery response and two in rapid succession is that in the latter case cwnd is halved twice, to about a quarter of its original value.
In this setting, a granularity of one to two seconds is often sufficient. We next turn to some examples of actual TCP behavior, and present a few hopefully representative cwnd graphs. In each figure, the red line represents the longer-path B—D flow and the green line represents the shorter A—D flow.
Some of the graphs here were prepared with an earlier version of the basic2.tcl script; the differences are minor, but that is sometimes enough to lead to noticeably different sets of packet losses.
While the immediate goal is to illuminate some of the loss-clustering issues above, the graphs serve as well to illustrate the general behavior of TCP Reno competition and its variability.
We will also use one of the graphs to explore the idea of transient queue spikes. In each case we can count teeth visually or via a Python script; in the latter case we must use an appropriate granularity interval (eg 2.0 seconds). Many times the tooth count is quite dependent on the exact value of the granularity.

The two cwnd graphs, though, do not exactly move in lockstep. The red graph gets a little ahead of the green in one interval, despite synchronized losses there.
The time scale for queue-utilization averaging is about one RTT here. The queue graph is scaled vertically by a factor of 8, so that the queue values (maximum 20) are numerically comparable to the cwnd values, which are on the order of the transit capacity. Aside from these two points, representing three isolated green-only losses, the red and green teeth appear quite well synchronized.
We also see evidence of loss-response clusters. Overall, the red path has 11 teeth if we use a tooth-counting granularity of 3 seconds, and 8 teeth if the granularity is 5 seconds. The slope of the green teeth is slightly greater than the slope of the red teeth, representing the longer RTT for the red connection.

Recall that the transit capacity for flow1 here is a fixed number of packets and the queue capacity is about 20; we therefore in general expect the sum of the two cwnds to range between roughly half of the combined transit-plus-queue capacity and the full value, and we expect the queue to remain mostly empty until the sum approaches the transit capacity. We can indeed confirm visually that in most cases the tooth peaks do indeed add up to about the expected maximum. Let us zoom in on one of these points a little more closely, this time without any averaging of the queue-utilization graph.
What is going on? What is happening is that each flow sends a flight of about 20 packets, spaced 1 ms apart, but by coincidence these two flights begin arriving at R at the same moment. During this 21 ms interval, a flight of 20 packets arrives from node A (flow 0), and a flight of 19 packets arrives from node B (flow 1). The ACK flights that triggered these data-packet flights were indeed spaced 1 ms apart, consistent with the bottleneck link, but the ACKs and the data-packet flights that triggered those ACKs passed through R at quite different times, because of the difference in propagation delay on the A—R and B—R links.
Transient queue peaks like this complicate any theory of relative throughput based on gradually filling the queue. Fortunately, while transient peaks themselves are quite common (as can be seen from the zoomed graph above), only occasionally do they amount to enough to overflow the queue. And, in the examples created here, the majority of transient overflow peaks (including the one analyzed above) are within a few seconds of a TCP coarse timeout response and may have some relationship to that timeout.
However, transient peaks are relatively infrequent. The queue graph gives the appearance of much more solid yellow; this is due to a large number of transient queue spikes. Under greater magnification it becomes clear that these spikes are still relatively sparse, however. Here we have (without the queue data) a good example of highly unsynchronized teeth: the green graph has 12 teeth after the start, while the red graph has five. But note that the red-graph loss events are a subset of the green loss events.
It is time to see if we can avoid these anomalies and thus obtain behavior closer to the classic sawtooth; this is taken up in the following subsection. Among the results: there is little if any dependence on the value for delayB.
We get essentially no coarse timeouts. Even long runs, typically involving many cwnd-halving adjustments per flow, almost never have more than one coarse timeout and the majority of the time have none. Clusters of multiple loss responses also vanish almost entirely; in these runs, there are usually only a few cases where two loss responses occur close to one another in time. Total link utilization is also good, and generally better than the corresponding figures for TCP Reno. Here is a graph similar to that above; bottleneckBW is 8 Mbps, and delayB runs upward from 0 in increments of 5 ms.
We did not use an increment of 1 ms here, as in the similar graph earlier. Ratio1 is shown in blue and ratio2 in green. We again try to fit curves as before. The loss-counting analysis uses our earlier Python nstrace module.
Anticipating more complex scenarios, we define a Python class flowstats to hold the per-flow information, though in this particular case we know there are exactly two flows. If we want a count of each separate TCP loss response, we count them both. Losses occurring very early in a connection are excluded; this is to avoid counting losses related to slow start. A more elaborate version of this script, set up to count clusters of teeth and clusters of drops, is available in teeth.py.
Our next experiment is to count the loss events in each flow, and to identify the loss events common to both flows. We use the tooth-counting script from the previous section, with a granularity of 1.0 seconds. With SACK TCP it is rare for two drops in the same flow to occur close to one another; the granularity here is primarily to decide when two teeth in the two different flows are to be counted as shared.
The blue region at the bottom represents the percentage of all loss-response events (teeth) that are shared between both flows. The formula above can be applied to these shared-loss percentages; while it is not at all clear the result will continue to hold as delayB increases beyond the range shown, it does suggest a possible underlying explanation for the second fitted formula above.

Over the lifetime of a connection, the average cwnd is the number of packets sent divided by the number of RTTs.
This simple relationship is slightly complicated by the fact that the RTT varies with the fullness of the bottleneck queue, but, as before, this effect can be minimized if we choose models where RTTnoLoad is large compared with the queuing delay.
We will for both flows use the appropriate RTTnoLoad as an approximation for RTT, and will use the number of packets acknowledged as an approximation for the number transmitted. These calculations give the true average cwnd values in the table below.
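As a sketch of the arithmetic, with made-up numbers purely for illustration: if a flow has N packets acknowledged over a run of T seconds, the estimated number of RTTs is T divided by RTTnoLoad, and the estimated average cwnd is N divided by that.

    # estimate average cwnd as packets acknowledged per RTT,
    # using RTTnoLoad as an approximation for the actual RTT
    proc avgcwnd {acked runtime rtt_noload} {
        set num_rtts [expr {double($runtime) / $rtt_noload}]
        return [expr {$acked / $num_rtts}]
    }
    # eg 30000 packets acked over a 1000-second run with RTTnoLoad = 0.220 sec
    # gives 1000/0.220 = 4545 RTTs and an average cwnd of about 6.6 packets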
With the relatively high value for bottleneckBW it takes a long simulation to get a reasonable number of losses; the simulations used here ran long enough that each connection ended up with a sizable number of losses. The bold columns represent the extent by which the ratio of the estimated average cwnd to the true average cwnd differs from unity. For flow 1, with the longer RTT, the discrepancy arises because the number of packets transmitted drops precipitously (almost sevenfold) while cwnd, and therefore the loss-event probability p, stays relatively constant.
Here we will test that theory. In our standard model, the vertical axis plots the Reno-to-Vegas goodput ratio and the horizontal axis represents the queue capacity. The performance of TCP Vegas falls off quite quickly as the queue size increases; by the time the queue size reaches the high end of the range, the ratio has climbed almost to 22, and TCP Vegas is swamped.
For a rather different outcome using fair queuing rather than FIFO queuing, see the fair-queuing discussion later in this book. In prior simulations we have also adjusted the TCP packetSize_ attribute so that the total TCP packet size, including headers, comes out to a round value; the same setting is used here. The first portion of the run is shown below.
During this period the bandwidth ratio remains modest, not far above 1. The green plot shows some spikes that probably represent implementation artifacts. At the point of packet loss, TCP Vegas begins incrementing cwnd again.
We conclude that, for smaller bottleneck-queue sizes, TCP Vegas does indeed hold its own. Unfortunately, in the scenario here the bottleneck-queue size has to be quite small for this to work; TCP Vegas suffers in competition with TCP Reno even for moderate queue sizes.
That said, queue capacities out there in the real Internet tend to increase much more slowly than bandwidth, and there may be real-world situations where TCP Vegas performs quite well when compared to TCP Reno.
When simulating wireless networks, there are no links; all the configuration work goes into setting up the nodes, the traffic and the wireless behavior itself. Wireless nodes have multiple wireless-specific attributes, such as the antenna type and radio-propagation model. Nodes are also now in charge of packet queuing; before this was the responsibility of the links.
Finally, nodes have coordinates for position and, if mobility is introduced, velocities. For wired links the user must set the bandwidth and delay.
For wireless, these are both generally provided by the wireless model. Propagation delay is simply the distance divided by the speed of light. Ad hoc wireless networks must also be configured with a routing protocol, so that paths may be found from one node to another.
We looked briefly at DSDV earlier; we will use AODV here. The maximum range of a node is determined by its power level; this can be set with node-config (below) using the txPower attribute, but the default is often used. The default transmission parameters can be found in the ns-2 source code, in the file wireless-phy.cc.
We create a simulation here in which one node (mover) moves horizontally above a sequence of fixed-position nodes stored in the Tcl array rownode. The leftmost fixed-position node transmits continuously to the mover node; as the mover node progresses, packets must be routed through other fixed-position nodes. When the mover moves out of range of one fixed-position node, AODV finds a new route (which will be via the next fixed-position node) quite quickly; we return to this below.
DSDV, with its periodic route updates, may not react as quickly. Of course, whether a given routing mechanism is fast enough depends very much on the speed of the mover; the simulation here does not perform nearly as well if the finishing time is set to 10 seconds rather than the value used below, as the mover then moves too fast even for AODV to keep up.
Because there are so many configuration parameters, to keep them together we adopt the common convention of making them all attributes of a single Tcl object, named opt. We list the simulation file itself in pieces, with annotation; the complete file is at wireless.tcl. We begin with the options; a sketch of such an options block appears after the next paragraph. Further details can be found in the Radio Propagation Models chapter of the ns-2 manual. The next values are specific to our particular layout.
The opt(bottomrow) value determines the number of fixed-position nodes in the simulation. The spacing between adjacent bottom-row nodes is opt(spacing) meters. The moving node mover moves at a fixed height above this row, and the fixed row itself is 50 meters above the bottom of the topology. The opt(x) and opt(y) values are the dimensions of the simulation, in meters; the number of bottom-row nodes and their spacing determine opt(x). As mentioned earlier, we use the AODV routing mechanism.
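Here is a sketch of what such an options block might look like. The wireless type strings are the standard ns-2 ones; the numeric values and some key names (eg opt(height)) are illustrative assumptions rather than the exact values in wireless.tcl:

    # ---- wireless configuration, kept together in the opt() array ----
    set opt(chan)   Channel/WirelessChannel   ;# channel type
    set opt(prop)   Propagation/TwoRayGround  ;# radio-propagation model
    set opt(netif)  Phy/WirelessPhy           ;# physical-layer network interface
    set opt(mac)    Mac/802_11                ;# MAC type
    set opt(ifq)    Queue/DropTail/PriQueue   ;# interface queue type
    set opt(ifqlen) 50                        ;# max packets in interface queue
    set opt(ll)     LL                        ;# link layer
    set opt(ant)    Antenna/OmniAntenna       ;# antenna model
    set opt(adhocRouting) AODV                ;# ad hoc routing protocol

    # ---- layout-specific values ----
    set opt(bottomrow) 5                      ;# number of fixed bottom-row nodes
    set opt(spacing)   200                    ;# meters between bottom-row nodes
    set opt(height)    100                    ;# height of mover above the fixed row
    set opt(x) [expr ($opt(bottomrow)-1) * $opt(spacing) + 1]   ;# x dimension
    set opt(y)         300                    ;# y dimension
    set opt(finish)    250                    ;# simulation finishing time (seconds)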
When the mover node moves out of range of the bottom-row node that it is currently in contact with, AODV receives notice of the failed transmission from the Wi-Fi link layer (ultimately this news originates from the absence of the Wi-Fi link-layer ACK). This triggers an immediate search for a new route, which typically takes less than 50 ms to complete.
The finishing time opt(finish) also represents the time the moving node takes to move across all the bottom-row nodes; the necessary speed is calculated in opt(speed). If the finishing time is reduced, the mover speed increases, and so the routing mechanism has less time to find updated routes. The use-newtrace option enables a different tracing mechanism, in which each attribute except the first is prefixed by an identifying tag, so that parsing is no longer position-dependent.
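Because every field after the first carries its own tag, a new-format trace line can be parsed without knowing field positions. A minimal Tcl sketch (the procedure name is ours; it assumes the strict "-tag value" alternation of the new trace format):

    # convert one new-trace line into a dict: the event type (s/r/d/f)
    # plus each "-tag value" pair, with the leading '-' stripped from the tag
    proc parsetraceline {line} {
        set fields [regexp -all -inline {\S+} $line]
        set rec [dict create event [lindex $fields 0]]
        foreach {tag value} [lrange $fields 1 end] {
            dict set rec [string range $tag 1 end] $value
        }
        return $rec
    }
    # eg [dict get [parsetraceline $line] t] retrieves the -t (timestamp) field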
We look at an example below. Note the special option namtrace-all-wireless for tracing for nam, and the dimension parameters opt(x) and opt(y). The next step is to create a Topography object to hold the layout (still to be determined).
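The trace and topography setup might look like the following sketch (the file names are placeholders):

    set ns [new Simulator]

    set tracefd [open wireless.tr w]
    $ns use-newtrace                    ;# enable the tagged trace format
    $ns trace-all $tracefd

    set namtrace [open wireless.nam w]
    $ns namtrace-all-wireless $namtrace $opt(x) $opt(y)

    # the Topography object holds the dimensions of the layout
    set topo [new Topography]
    $topo load_flatgrid $opt(x) $opt(y)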
Finally, we create a General Operations Director, which holds information about the layout not necessarily available to any node. The next step is to call node-config, which passes many of the opt parameters to ns and which influences future node creation:
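A sketch of these two steps, passing along the opt() values from above (the exact set of node-config options varies; the trace flags shown are typical choices):

    # the General Operations Director is told the total number of nodes
    create-god [expr $opt(bottomrow) + 1]

    $ns node-config -adhocRouting $opt(adhocRouting) \
                    -llType $opt(ll) \
                    -macType $opt(mac) \
                    -ifqType $opt(ifq) \
                    -ifqLen $opt(ifqlen) \
                    -antType $opt(ant) \
                    -propType $opt(prop) \
                    -phyType $opt(netif) \
                    -channelType $opt(chan) \
                    -topoInstance $topo \
                    -agentTrace ON \
                    -routerTrace ON \
                    -macTrace OFF \
                    -movementTrace ON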
Finally we create our nodes. The bottom-row nodes are created within a Tcl for-loop, and are stored in a Tcl array rownode. We now make the mover node move, using setdest. If the node reaches the destination supplied in setdest, it stops, but it is also possible to change its direction at later times using additional setdest calls, if a zig-zag path is desired. Various external utilities are available to create a file of Tcl commands to create a large number of nodes each with a designated motion; such a file can then be imported into the main Tcl file.
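A sketch of the node creation and the initial setdest call, using the assumed opt() values from the sketch above:

    # fixed bottom-row nodes, evenly spaced along the line y = 50
    for {set i 0} {$i < $opt(bottomrow)} {incr i} {
        set rownode($i) [$ns node]
        $rownode($i) set X_ [expr $i * $opt(spacing)]
        $rownode($i) set Y_ 50
        $rownode($i) set Z_ 0
    }

    # the moving node starts directly above the leftmost fixed node
    set mover [$ns node]
    $mover set X_ 0
    $mover set Y_ [expr 50 + $opt(height)]
    $mover set Z_ 0

    # speed needed to cross the full row by time opt(finish)
    set opt(speed) [expr 1.0 * $opt(x) / $opt(finish)]
    set ypos [$mover set Y_]
    $ns at 0 "$mover setdest $opt(x) $ypos $opt(speed)"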
The traffic is CBR over UDP, from the leftmost fixed-position node to the mover node; CBR traffic does not use sliding windows. The simulation can be viewed from the nam file, available at wireless.nam. In the simulation, the mover node moves across the topography, over the bottom-row nodes. The CBR traffic reaches mover from rownode(0) first directly, then via rownode(1), then via rownode(1) and rownode(2), etc.
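The CBR source might be set up along these lines (the packet size and interval here are illustrative, not necessarily the values used in wireless.tcl):

    # CBR over UDP from the leftmost fixed node to the mover
    set udp [new Agent/UDP]
    $ns attach-agent $rownode(0) $udp
    set sink [new Agent/Null]
    $ns attach-agent $mover $sink
    $ns connect $udp $sink

    set cbr [new Application/Traffic/CBR]
    $cbr attach-agent $udp
    $cbr set packetSize_ 512     ;# bytes
    $cbr set interval_ 0.05      ;# one packet every 50 ms
    $ns at 0 "$cbr start"
    $ns at $opt(finish) "$cbr stop"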
The motion of the mover node is best seen by speeding up the animation frame rate using the nam control for this, though doing this means that aliasing effects often make the CBR traffic appear to be moving in the opposite direction.
Above is one frame from the animation, with the mover node almost but not quite directly over rownode(3), and so close to losing contact with rownode(2). Two CBR packets can be seen en route; one has almost reached rownode(2) and one is about a third of the way from rownode(2) up to the blue mover node. The packets are not shown to scale; see the exercises below.

The tracefile is specific to wireless networking, and even without the use of use-newtrace has a rather different format from the link-based simulations earlier.
Full details can be found in the ns-2 manual. The first packet drop in this simulation shows up as a drop-event record, indicated by an initial d.

More generally, after the creation of a simulation's mathematical model, the most important step is to create a computer program that updates the state and event variables through time, either by time slicing or by event scheduling.
If such a simulation is carried out on several computers running in parallel, it is called parallel or distributed simulation. ns-2 provides simulation of routing and multicast protocols for both wired and wireless networks. On Debian-based systems it can be installed with the command sudo apt-get install ns2. The nam network animator, an animation tool that graphically represents the network and the packet traces, is installed with sudo apt-get install nam. Some basic OTcl script syntax follows.
In ns-2, the topology consists of a collection of nodes and links, managed by a simulator object:

    set ns [new Simulator]

The simulator object has member functions that create the nodes and define the links between them; the Simulator class contains all the basic functions. The usual steps are to create a simulator object, open the nam trace file, create the nodes and the links between the nodes, and, at the end of the run, close the nam trace file and execute nam on the trace file.
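Putting those steps together, a minimal wired OTcl script might look like the following sketch (the file names, node count and link parameters are arbitrary choices):

    # minimal two-node wired example
    set ns [new Simulator]

    # open the nam trace file
    set nf [open out.nam w]
    $ns namtrace-all $nf

    proc finish {} {
        global ns nf
        $ns flush-trace
        close $nf                ;# close the nam trace file
        exec nam out.nam &       ;# execute nam on the trace file
        exit 0
    }

    # create two nodes and a duplex link between them
    set n0 [$ns node]
    set n1 [$ns node]
    $ns duplex-link $n0 $n1 1Mb 10ms DropTail

    $ns at 5.0 "finish"
    $ns run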