Changes between Version 10 and Version 11 of GENIEducation/SampleAssignments/TcpAssignment/ExerciseLayout/Execute


Timestamp:
05/14/13 11:03:27 (12 years ago)
Author:
shuang@bbn.com
Comment:


  • GENIEducation/SampleAssignments/TcpAssignment/ExerciseLayout/Execute

    v10 v11  
    2020= STEPS FOR EXECUTING EXERCISE =
    2121Now that we have reserved the nodes, let's log in to each node and run the experiment. [[BR]]
    22 Recall that the nodes reserved are pc73, pc81, pc55, pc47, pc84.[[BR]]
    23 pc47 is top; pc84 is bottom; pc81 is left; pc55 is right; pc73 is center. We can figure this out from the output of the "createsliver" step. [[BR]]
    24 As mentioned earlier, we can use "readyToLogin.py" to show the topology as well as the login commands. [[BR]]
    25 Or, if you are a GENI Portal user, use the "details" button to check details of your slice.
     22You can find the nodes you reserved from the output of the "createsliver" step. [[BR]]
     23Or you can use "readyToLogin.py" to show the topology as well as the login commands. [[BR]]
     24Or, if you are a GENI Portal user, use the "details" button to check details of your slice. [[BR]]
    2625
    2726 '''Useful commands:''' [[BR]]
     
    6463}}}
    6564
    66 
    67 '''1. compare cubic and reno under no loss/delay introduced:[[BR]]'''
    68  use default TCP congestion control (cubic) on left and right, run iperf between them (TCP flow comes from right to left): [[BR]]
     65= Exercises =
     66'''3.1 Comparison of Reno and CUBIC: [[BR]]'''
     67 GENI nodes provide two TCP congestion control algorithms, CUBIC and Reno, that can be chosen at run-time. [[BR]]
     68 The available algorithms are listed in the file ''/proc/sys/net/ipv4/tcp_available_congestion_control''. [[BR]]
     69 The “Reno” congestion control provided by the Linux kernel is actually the [http://tools.ietf.org/html/rfc3782 NewReno] algorithm, but we will refer to it as Reno here to be consistent with Linux terminology. [[BR]]
     70 Note that congestion control actions are very similar between Reno and [http://tools.ietf.org/html/rfc3782 NewReno], but [http://tools.ietf.org/html/rfc3782 NewReno] has a more nuanced approach to loss recovery. [[BR]]
     71 These congestion control algorithms can be chosen by placing the keywords ''reno'' or ''cubic'' in the file ''/proc/sys/net/ipv4/tcp_congestion_control''. For example, to configure a host to use the Reno algorithm, use:
     72{{{
     73echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
     74}}}
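 To confirm which algorithm is in effect (and which algorithms are available), the same ''/proc'' files can simply be read back. A quick sanity check, assuming the standard Linux ''/proc/sys/net/ipv4'' layout on these nodes:
{{{
# show the algorithm currently in use
cat /proc/sys/net/ipv4/tcp_congestion_control
# show all algorithms available on this node
cat /proc/sys/net/ipv4/tcp_available_congestion_control
}}}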
     75 The tc command will then be used to set up network conditions for observation and testing. For example, if eth1 is the physical interface representing the link L on the Center node, the following command on the Center node will add a 200 ms delay to all packets leaving the interface:
     76{{{
     77sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 200ms
     78}}}
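 It can be useful to verify which queueing discipline is attached to an interface before and after such changes. A minimal check, assuming ''eth1'' is the interface for link L as above:
{{{
# list the queueing disciplines currently attached to eth1
/sbin/tc qdisc show dev eth1
# remove the netem qdisc again when it is no longer needed
sudo /sbin/tc qdisc del dev eth1 root
}}}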
     79 Specific network setup commands will be provided as needed. [[BR]]
     80 Run an Iperf server on the Left node. The Iperf client will be run on the Right node. The duration for an Iperf session (''-t'' option) is 60 seconds unless otherwise mentioned. Note carefully that some exercises require a much longer duration. Ensure that your sliver lifetimes are long enough to capture the duration of your experiment. All of the experiments should be repeated at least 5 times (especially when the interfaces include random delays or losses) to ensure confidence in the results, as transient conditions can cause significant variations in any individual run.
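 Since each configuration should be measured several times, a simple shell loop can help collect the repetitions. This is only a sketch, assuming the emulab-iperf client invocation shown in the hints below and that 10.10.1.1 is the Left node's address in your experiment:
{{{
# run five back-to-back 60-second Iperf sessions and append the output to a file
for i in 1 2 3 4 5; do
    /usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60 >> iperf_runs.txt
done
}}}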
     81 
     82 - 1. Question: What are the goodputs when the Reno and CUBIC algorithms are used on the network with no emulated delay or loss? Which is better?
     83 - 2. Question: Qualitatively, under what conditions does BIC/CUBIC perform better than Reno’s AIMD?
     84 - 3. Question: Change the delay of interface L to 300 ms using the following command, and run an Iperf session for 1800 seconds.
     85   {{{
     86   sudo /sbin/tc qdisc add dev L root handle 1:0 netem limit 1000000000 delay 300ms
     87   }}}
     88    What are the goodputs of Reno and CUBIC? Which performed better? What do you conclude?
     89 - 4. Question: Repeat the above experiment with 30 parallel connections and 1800 seconds for each algorithm by using the ''-P 30'' option on Iperf. How do CUBIC and Reno differ? What do you conclude?
     90 - 5. Question: Remove the netem queueing discipline which causes delay and add a loss of 5% by using the following commands on the center node. Replace L with the appropriate physical interface. Alternatively, one can change a queueing discipline instead of deleting and adding a new one.
     91   {{{
     92   sudo /sbin/tc qdisc del dev L root
     93   sudo /sbin/tc qdisc add dev L root handle 1:0 netem loss 5%
     94   }}}
     95   How do the goodputs of Reno and CUBIC differ under loss for 60 s Iperf sessions? [[BR]][[BR]]
     96 - '''Some Hint/Guidance on how to run the experiments: '''
      97 [[BR]]Use the default TCP congestion control (cubic) on Left and Right, and run Iperf between them (the TCP flow goes from Right to Left): [[BR]]
    6998 On left, run:
    7099{{{
    71100/usr/local/etc/emulab/emulab-iperf -s
    72101}}}
    73  On right, run (10.10.1.1 is the ip address for left):
      102 On Right, run (10.10.1.1 is the IP address of Left in our case; you need to find the actual IP address used by Left in your own experiment):
    74103{{{
    75104/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60
    76105}}}
    77  Results: 94.2 Mbps for Cubic when there is no delay introduced
    78 {{{
    79 ------------------------------------------------------------
    80 Client connecting to 10.10.1.1, TCP port 5001
    81 TCP window size: 16.0 KByte (default)
    82 ------------------------------------------------------------
    83 [  3] local 10.10.2.1 port 53755 connected with 10.10.1.1 port 5001
    84 [  3]  0.0-60.0 sec    674 MBytes  94.2 Mbits/sec
    85 }}}
    86 
    87106 Let both left and right use reno as the TCP congestion control mechanism, repeat the experiments:
    88107{{{
    89108echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
    90109}}}
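 For Question 4 and Question 5 above, the same pattern applies. A sketch of the additional commands, assuming ''eth1'' is the physical interface for link L on the Center node and 10.10.1.1 is the Left node's address:
{{{
# Question 4 (on Right): 30 parallel connections for 1800 seconds
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 1800 -P 30
# Question 5 (on Center): the "change" alternative, switching the existing netem qdisc to 5% loss
sudo /sbin/tc qdisc change dev eth1 root handle 1:0 netem loss 5%
}}}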
    91  Results: 94.2 Mbps for reno when there is no delay introduced
     110
     111'''3.2 Ensuring Fairness Among Flows [[BR]]'''
     112 Restore the network state with the following command:
    92113{{{
    93 ------------------------------------------------------------
    94 Client connecting to 10.10.1.1, TCP port 5001
    95 TCP window size: 16.0 KByte (default)
    96 ------------------------------------------------------------
    97 [  3] local 10.10.2.1 port 53073 connected with 10.10.1.1 port 5001
    98 [  3]  0.0-60.0 sec    674 MBytes  94.2 Mbits/sec
     114sudo /sbin/tc qdisc del dev L root
    99115}}}
      116 Run an Iperf client on the Right node with 10 parallel TCP connections (use the -P option), connecting to an Iperf server on the Left node for 60 seconds. Simultaneously, run a 20 Mbps UDP Iperf client on the Top node connecting to a UDP Iperf server session running on the Left node for 60 seconds. (A sketch of the commands follows the questions below.)
      117 - 1. Question: What are the throughputs shown by the UDP and TCP Iperf server sessions? Why are they what they are?
     118 - 2. Question: Provide the necessary steps and commands to enable queueing disciplines that enforce fairness among all the 11 flows in the network, and demonstrate that your solution is effective.
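 A sketch of the commands for this setup, assuming the emulab-iperf binary used in the hints above and that 10.10.1.1 is the Left node's address (adjust the addresses to your own experiment):
{{{
# on Left: one TCP server and one UDP server
/usr/local/etc/emulab/emulab-iperf -s &
/usr/local/etc/emulab/emulab-iperf -s -u &
# on Right: 10 parallel TCP connections for 60 seconds
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60 -P 10
# on Top: a 20 Mbps UDP flow for 60 seconds
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -u -b 20M -t 60
}}}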
    100119
    101  '''Answer:''' they are the same under no loss/delay
     120'''3.3 Reordering [[BR]]'''
      121 Delete the previous queueing discipline and use the following ''netem'' configuration on interface L to create a 100 ms delay:
     122{{{
     123sudo /sbin/tc qdisc del dev L root
     124sudo /sbin/tc qdisc add dev L root handle 1:0 netem delay 100ms
     125}}}
      126 As before, run a TCP Iperf client on the Right node connecting to an Iperf server on the Left node for 60 seconds.
     127 - 1. Question: What is the TCP goodput?
      128 - 2. Question: Introduce packet reordering by adding a 75 ms delay variance to interface L with the following command:
     129 {{{
     130 sudo /sbin/tc qdisc change dev L root handle 1:0 netem delay 100ms 75ms
     131 }}}
     132 What is the TCP goodput now?
      133 - 3. Question: By tweaking the parameters in the file ''/proc/sys/net/ipv4/tcp_reordering'', how much can the TCP goodput be improved? What is the best goodput you can show? Why is too high or too low a value bad for TCP?
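 For Question 3, the reordering threshold can be read and changed through ''/proc''. A minimal sketch (the value 10 below is only an example to try):
{{{
# show the current reordering threshold (the default is 3)
cat /proc/sys/net/ipv4/tcp_reordering
# try a larger threshold, then rerun the Iperf session
echo 10 | sudo tee /proc/sys/net/ipv4/tcp_reordering
}}}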
    102134
    103 '''2. add 300ms delay and see how it goes: [[BR]]'''
    104 We can introduce the delay by configuring the interface on Center that is connected to Left, using sudo /sbin/tc qdisc. [[BR]]
    105  With Cubic, here is the result:
     135'''3.4 Performance of SACK under Lossy Conditions'''
     136 Using Cubic as the congestion avoidance algorithm, set the loss characteristics on interface L using the following commands:
    106137{{{
    107 [  3]  0.0-1800.2 sec  6.57 GBytes  31.3 Mbits/sec
    108 [  3]  0.0-60.2 sec    213 MBytes  29.7 Mbits/sec
     138sudo /sbin/tc qdisc del dev L root
     139sudo /sbin/tc qdisc add dev L root handle 1:0 netem loss 10%
     140}}}
     141 - 1. Question: What kind of goodput do you get using CUBIC with SACK (the default configuration)? Why do you see this performance?
     142 - 2. Question: Disable SACK at the sender using this command:
     143{{{
      144echo 0 | sudo tee /proc/sys/net/ipv4/tcp_sack
     145}}}
     146  What is the goodput without SACK? In what circumstances is SACK most beneficial? Remember that, due to the random nature of loss events, these experiments must be repeated at least five times to draw any conclusions.
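 If you want to restore the default behavior before moving on, SACK can be re-enabled through the same ''/proc'' interface:
{{{
echo 1 | sudo tee /proc/sys/net/ipv4/tcp_sack
}}}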
    109147
    110 [  3]  0.0- 1.0 sec  56.0 KBytes    459 Kbits/sec
    111 [  3]  1.0- 2.0 sec    312 KBytes  2.56 Mbits/sec
    112 [  3]  2.0- 3.0 sec    640 KBytes  5.24 Mbits/sec
    113 [  3]  3.0- 4.0 sec  2.67 MBytes  22.4 Mbits/sec
    114 [  3]  4.0- 5.0 sec  3.57 MBytes  29.9 Mbits/sec
    115 [  3]  5.0- 6.0 sec  3.65 MBytes  30.6 Mbits/sec
    116 [  3]  6.0- 7.0 sec  3.70 MBytes  31.1 Mbits/sec
    117 [  3]  7.0- 8.0 sec  3.66 MBytes  30.7 Mbits/sec
     148'''3.5 An Experimental Congestion Avoidance module for Linux'''[[BR]]
      149 Source code needed (to be changed by you): [[BR]]
     150 - [http://www.gpolab.bbn.com/experiment-support/TCPExampleExperiment/Makefile Makefile]
     151 - [http://www.gpolab.bbn.com/experiment-support/TCPExampleExperiment/tcp_exp.c tcp_exp.c]
      152 In this exercise, you will develop and evaluate a TCP congestion control module for the Linux kernel. Linux provides a pluggable interface for TCP congestion control, which allows named congestion control modules to control TCP's sending rate and its reaction to congestion events. You have already used the reno and cubic modules, and in this exercise you will create one named exp. [[BR]]
     153 Linux kernel modules must be compiled against kernel source that matches the kernel into which the module will be loaded. In order to prepare your ProtoGENI host for kernel module development, follow these steps:
     154 1. Comment out the line:
     155 {{{
     156 exclude=mkinitrd* kernel*
     157 }}}
     158 in the file ''/etc/yum.conf'', to allow yum to install kernel headers.
     159 2. Install the required packages with this command:
     160 {{{
     161 sudo yum install kernel-devel kernel-headers
     162 }}}
     163 3. Fix up the kernel version in the installed headers to match the running kernel; this can be tricky, but these steps should handle it.
     164  (a) Find your kernel sources. They are in ''/usr/src/kernel'', in a directory that depends on the installed version. As of the time this handout was created, that directory is ''2.6.27.41-170.2.117.fc10.i686''. We will call this directory ''$KERNELSRC''.
     165  (b) Identify your running kernel version by running ''uname -r''. It will be something like ''2.6.27.5-117.emulab1.fc10.i686''. The first three dotted components (''2.6.27'', in this case) are the major, minor, and micro versions, respectively, and the remainder of the version string (''.5-117.emulab.fc10.i686'') is the extraversion. Note the extraversion of your kernel.
      166  (c) In ''$KERNELSRC/Makefile'', find the line beginning with ''EXTRAVERSION''. Replace its value with the extraversion of your kernel.
     167  (d) Update the kernel header tree to this new version by running the command:
     168{{{
     169sudo make include/linux/utsrelease.h
    118170}}}
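  Steps (b)-(d) can also be scripted. This is only a sketch, assuming a ''2.6.x.y-extraversion'' kernel version layout as in the examples above:
{{{
# derive the extraversion (everything after the major.minor.micro prefix of `uname -r`)
EXTRA=".$(uname -r | cut -d. -f4-)"
# point the header tree's EXTRAVERSION at the running kernel and regenerate utsrelease.h
cd $KERNELSRC
sudo sed -i "s/^EXTRAVERSION.*/EXTRAVERSION = $EXTRA/" Makefile
sudo make include/linux/utsrelease.h
}}}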
    119  With Reno, here is the result:
    120 {{{
    121 [  3]  0.0-1800.1 sec  6.57 GBytes  31.3 Mbits/sec
    122 [  3]  0.0-60.2 sec    214 MBytes  29.8 Mbits/sec
      171  More details on handling version issues are provided in [http://tldp.org/LDP/lkmpg/2.6/html/x380.html Building modules for a precompiled kernel]. [[BR]]
      172 A Makefile for compiling the module and the source for a stub TCP congestion control module are provided above ([http://www.gpolab.bbn.com/experiment-support/TCPExampleExperiment/Makefile Makefile] and [http://www.gpolab.bbn.com/experiment-support/TCPExampleExperiment/tcp_exp.c tcp_exp.c]). [[BR]]
      173 The module is named tcp_exp (for experimental TCP), and the congestion control algorithm is named exp. Comments in the provided source file explain the relationship between the various functions, and more information can be found in [http://lwn.net/Articles/128681/ Pluggable congestion avoidance modules]. [[BR]]
     174 The compiled module (which is built with make and called ''tcp_exp.ko'') can be inserted into the kernel using ''insmod''. It can be removed using the command ''rmmod tcp_exp'' and reloaded with ''insmod'' if changes are required. [[BR]]
     175 Once the module is complete and loaded into the kernel, the algorithm implemented by the module can be selected in the same manner that reno and cubic were selected in previous exercises, by placing the keyword exp in ''/proc/sys/net/ipv4/tcp_congestion_control''.
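 Putting the workflow together, a typical build-load-select cycle looks roughly like this (a sketch, assuming the provided Makefile builds ''tcp_exp.ko'' in the current directory):
{{{
make                             # build tcp_exp.ko against the installed kernel headers
sudo insmod tcp_exp.ko           # load the module
echo exp | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
# ... run your Iperf tests ...
echo cubic | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
sudo rmmod tcp_exp               # unload before editing and rebuilding
}}}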
     176 
     177 '''3.5.1 Algorithm Requirements''' [[BR]]
     178 The experimental congestion control module is based on Reno, but has the following modifications: [[BR]]
     179  • It uses a Slow Start exponential factor of 3. Reno uses 2. [[BR]]
     180  • It cuts ssthresh to 3 × !FlightSize/4 when entering loss recovery. Reno cuts to !FlightSize/2.
    123181
    124 [  3]  0.0- 1.0 sec  56.0 KBytes    459 Kbits/sec
    125 [  3]  1.0- 2.0 sec    232 KBytes  1.90 Mbits/sec
    126 [  3]  2.0- 3.0 sec    680 KBytes  5.57 Mbits/sec
    127 [  3]  3.0- 4.0 sec  2.76 MBytes  23.1 Mbits/sec
    128 [  3]  4.0- 5.0 sec  4.11 MBytes  34.5 Mbits/sec
    129 [  3]  5.0- 6.0 sec  3.68 MBytes  30.9 Mbits/sec
    130 }}}
     182 '''3.5.2 Hints [[BR]]'''
     183 These hints and suggestions may help you get started. [[BR]]
      184 • The existing congestion avoidance modules are a good start. See ''net/ipv4/tcp_cong.c'' in the Linux source for the Linux Reno implementation.[[BR]]
     185 • The file ''net/ipv4/tcp_input.c'' is a good place to learn how the congestion avoidance modules are used and invoked.[[BR]]
     186 • [http://tools.ietf.org/html/rfc5681 RFC 5681] specifies the Reno congestion control actions in detail, and may be helpful in understanding the kernel code.[[BR]]
     187 • The Linux Cross Reference at ''http://lxr.linux.no/linux'' may be useful for navigating and understanding how the code fits together.[[BR]]
     188 • If one of the hosts becomes unresponsive due to a bug in your congestion control module, you can restart the sliver to reboot it.[[BR]]
      189 • [http://tldp.org/LDP/lkmpg/2.6/html/ The Linux Kernel Module Programming Guide] provides a good introduction to kernel module programming in general.[[BR]]
    131190
    132  '''Answer:''' I was hoping to see Cubic out-perform Reno, but they perform about the same in this case.[[BR]]
    133  Over a long run (e.g., 1800 seconds), Cubic and Reno perform similarly under no loss and large delay. [[BR]]
    134  During slow start (1.0 - 2.0 seconds), Cubic out-performs Reno when there is no loss and high delay.
     191 '''3.5.3 Evaluation [[BR]]'''
     192 Once you have implemented the algorithm described above, answer the following questions:
     193  - 1. Question: Discuss the impact of these algorithmic changes in the context of traditional Reno congestion control.
      194  - 2. Question: Compare the convergence time and fairness of your algorithm with Reno and Cubic under (a) high delay (500 ms) and (b) high loss (5%) conditions. Use [http://www1.cse.wustl.edu/~jain/papers/ftp/fairness.pdf Jain’s fairness index], or some other quantitative measure of fairness, in your comparison.
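 For reference, Jain's fairness index for per-flow throughputs x_1, ..., x_n is:
{{{
J(x_1, ..., x_n) = (x_1 + ... + x_n)^2 / (n * (x_1^2 + ... + x_n^2))
}}}
 It ranges from 1/n (one flow gets everything) to 1 (all flows receive an equal share).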
    135195
    136 '''3. repeat the experiments with 30 parallel connections (-P 30 option in iperf) and see how it goes: [[BR]]'''
    137  With Cubic, here is the result:
    138 {{{
    139 [ 12]  0.0-57.0 sec  13.1 MBytes  1.93 Mbits/sec
    140 [ 16]  0.0-57.0 sec  13.8 MBytes  2.03 Mbits/sec
    141 [ 14]  0.0-57.3 sec  20.2 MBytes  2.96 Mbits/sec
    142 [ 18]  0.0-57.4 sec  18.4 MBytes  2.69 Mbits/sec
    143 [  8]  0.0-57.4 sec  20.3 MBytes  2.97 Mbits/sec
    144 [  7]  0.0-57.3 sec  23.7 MBytes  3.48 Mbits/sec
    145 [  6]  0.0-57.3 sec  23.3 MBytes  3.41 Mbits/sec
    146 [  5]  0.0-57.3 sec  29.4 MBytes  4.30 Mbits/sec
    147 [  4]  0.0-57.3 sec  21.0 MBytes  3.07 Mbits/sec
    148 [  3]  0.0-57.5 sec  23.3 MBytes  3.41 Mbits/sec
    149 [ 11]  0.0-57.5 sec  18.5 MBytes  2.70 Mbits/sec
    150 [ 15]  0.0-57.5 sec  23.7 MBytes  3.46 Mbits/sec
    151 [ 13]  0.0-57.6 sec  26.4 MBytes  3.85 Mbits/sec
    152 [ 17]  0.0-57.6 sec  19.3 MBytes  2.81 Mbits/sec
    153 [  9]  0.0-57.8 sec  15.3 MBytes  2.22 Mbits/sec
    154 [ 10]  0.0-57.9 sec  20.5 MBytes  2.97 Mbits/sec
    155 [ 28]  0.0-60.0 sec  23.8 MBytes  3.32 Mbits/sec
    156 [ 30]  0.0-60.0 sec  15.9 MBytes  2.22 Mbits/sec
    157 [ 29]  0.0-60.1 sec  14.7 MBytes  2.05 Mbits/sec
    158 [ 32]  0.0-60.1 sec  27.3 MBytes  3.81 Mbits/sec
    159 [ 19]  0.0-60.1 sec  20.5 MBytes  2.86 Mbits/sec
    160 [ 23]  0.0-60.1 sec  16.2 MBytes  2.25 Mbits/sec
    161 [ 20]  0.0-60.1 sec  30.0 MBytes  4.19 Mbits/sec
    162 [ 26]  0.0-60.1 sec  14.6 MBytes  2.04 Mbits/sec
    163 [ 21]  0.0-60.2 sec  22.1 MBytes  3.07 Mbits/sec
    164 [ 27]  0.0-60.3 sec  19.9 MBytes  2.77 Mbits/sec
    165 [ 22]  0.0-60.4 sec  24.7 MBytes  3.44 Mbits/sec
    166 [ 24]  0.0-60.4 sec  26.1 MBytes  3.62 Mbits/sec
    167 [ 25]  0.0-60.5 sec  28.0 MBytes  3.88 Mbits/sec
    168 [ 31]  0.0-60.5 sec  34.2 MBytes  4.74 Mbits/sec
    169 [SUM]  0.0-60.5 sec    648 MBytes  89.8 Mbits/sec
    170196
    171 [  6]  0.0-1797.2 sec    684 MBytes  3.19 Mbits/sec
    172 [  4]  0.0-1797.3 sec    678 MBytes  3.17 Mbits/sec
    173 [  3]  0.0-1797.3 sec    675 MBytes  3.15 Mbits/sec
    174 [ 10]  0.0-1797.8 sec    602 MBytes  2.81 Mbits/sec
    175 [ 12]  0.0-1797.8 sec    664 MBytes  3.10 Mbits/sec
    176 [ 17]  0.0-1797.9 sec    642 MBytes  3.00 Mbits/sec
    177 [ 13]  0.0-1797.9 sec    686 MBytes  3.20 Mbits/sec
    178 [  9]  0.0-1797.9 sec    707 MBytes  3.30 Mbits/sec
    179 [ 14]  0.0-1798.0 sec    679 MBytes  3.17 Mbits/sec
    180 [  5]  0.0-1798.2 sec    620 MBytes  2.89 Mbits/sec
    181 [  8]  0.0-1798.2 sec    671 MBytes  3.13 Mbits/sec
    182 [  7]  0.0-1798.2 sec    723 MBytes  3.37 Mbits/sec
    183 [ 11]  0.0-1798.3 sec    696 MBytes  3.25 Mbits/sec
    184 [ 16]  0.0-1798.3 sec    657 MBytes  3.07 Mbits/sec
    185 [ 15]  0.0-1798.4 sec    624 MBytes  2.91 Mbits/sec
    186 [ 18]  0.0-1798.8 sec    695 MBytes  3.24 Mbits/sec
    187 [ 28]  0.0-1800.1 sec    705 MBytes  3.29 Mbits/sec
    188 [ 23]  0.0-1800.1 sec    689 MBytes  3.21 Mbits/sec
    189 [ 32]  0.0-1800.1 sec    686 MBytes  3.20 Mbits/sec
    190 [ 31]  0.0-1800.2 sec    703 MBytes  3.28 Mbits/sec
    191 [ 21]  0.0-1800.2 sec    671 MBytes  3.13 Mbits/sec
    192 [ 30]  0.0-1800.4 sec    699 MBytes  3.26 Mbits/sec
    193 [ 20]  0.0-1800.5 sec    668 MBytes  3.11 Mbits/sec
    194 [ 22]  0.0-1800.6 sec    652 MBytes  3.04 Mbits/sec
    195 [ 27]  0.0-1800.6 sec    701 MBytes  3.27 Mbits/sec
    196 [ 19]  0.0-1800.6 sec    594 MBytes  2.77 Mbits/sec
    197 [ 29]  0.0-1800.7 sec    680 MBytes  3.17 Mbits/sec
    198 [ 26]  0.0-1800.8 sec    709 MBytes  3.30 Mbits/sec
    199 [ 25]  0.0-1800.9 sec    646 MBytes  3.01 Mbits/sec
    200 [ 24]  0.0-1801.1 sec    672 MBytes  3.13 Mbits/sec
    201 [SUM]  0.0-1801.1 sec  19.7 GBytes  94.0 Mbits/sec
    202 }}}
    203  With Reno, here is the result:
    204 {{{
    205 [ 17]  0.0-57.1 sec  7.38 MBytes  1.08 Mbits/sec
    206 [ 15]  0.0-57.0 sec  7.33 MBytes  1.08 Mbits/sec
    207 [ 14]  0.0-57.0 sec  7.35 MBytes  1.08 Mbits/sec
    208 [ 18]  0.0-57.0 sec  7.16 MBytes  1.05 Mbits/sec
    209 [ 13]  0.0-57.1 sec  7.31 MBytes  1.08 Mbits/sec
    210 [  3]  0.0-57.2 sec  25.7 MBytes  3.77 Mbits/sec
    211 [ 12]  0.0-57.2 sec  7.33 MBytes  1.08 Mbits/sec
    212 [  5]  0.0-57.2 sec  87.5 MBytes  12.8 Mbits/sec
    213 [  4]  0.0-57.2 sec  26.5 MBytes  3.88 Mbits/sec
    214 [ 11]  0.0-57.2 sec  7.32 MBytes  1.07 Mbits/sec
    215 [ 10]  0.0-57.3 sec  7.38 MBytes  1.08 Mbits/sec
    216 [ 16]  0.0-57.3 sec  7.41 MBytes  1.09 Mbits/sec
    217 [  8]  0.0-57.4 sec  29.6 MBytes  4.33 Mbits/sec
    218 [  7]  0.0-57.7 sec  23.7 MBytes  3.45 Mbits/sec
    219 [  9]  0.0-57.7 sec  23.3 MBytes  3.38 Mbits/sec
    220 [  6]  0.0-58.1 sec  64.6 MBytes  9.33 Mbits/sec
    221 [ 25]  0.0-60.0 sec  43.4 MBytes  6.06 Mbits/sec
    222 [ 21]  0.0-60.0 sec  36.2 MBytes  5.05 Mbits/sec
    223 [ 20]  0.0-60.2 sec  27.3 MBytes  3.81 Mbits/sec
    224 [ 24]  0.0-60.1 sec  28.2 MBytes  3.94 Mbits/sec
    225 [ 23]  0.0-60.1 sec  30.3 MBytes  4.23 Mbits/sec
    226 [ 27]  0.0-60.0 sec  7.80 MBytes  1.09 Mbits/sec
    227 [ 26]  0.0-60.1 sec  7.84 MBytes  1.09 Mbits/sec
    228 [ 30]  0.0-60.1 sec  7.84 MBytes  1.09 Mbits/sec
    229 [ 29]  0.0-60.1 sec  7.74 MBytes  1.08 Mbits/sec
    230 [ 31]  0.0-60.1 sec  7.82 MBytes  1.09 Mbits/sec
    231 [ 19]  0.0-60.3 sec  29.1 MBytes  4.04 Mbits/sec
    232 [ 22]  0.0-60.2 sec  30.9 MBytes  4.31 Mbits/sec
    233 [ 32]  0.0-60.1 sec  32.8 MBytes  4.58 Mbits/sec
    234 [ 28]  0.0-60.1 sec  7.82 MBytes  1.09 Mbits/sec
    235 [SUM]  0.0-60.3 sec    652 MBytes  90.7 Mbits/sec
    236 }}}
    237  '''Answer:''' The above results show that the total performance is about the same for Cubic and Reno. [[BR]]
    238  Apparently, when you use multiple TCP connections, the overall bandwidth utilization is higher. [[BR]]
    239  But the throughput of each individual TCP connection varies. [[BR]]
    240 
    241 '''4. remove 300ms delay, add 5% lossrate and see how it goes: [[BR]]'''
    242  With Cubic, here is the result:
    243 {{{
    244 [  3]  0.0-60.0 sec  73.7 MBytes  10.3 Mbits/sec
    245 
    246 10% lossrate: [  3]  0.0-60.6 sec  17.3 MBytes  2.39 Mbits/sec
    247 }}}
    248  With Reno, here is the result:
    249 {{{
    250 [  3]  0.0-60.0 sec  59.5 MBytes  8.32 Mbits/sec
    251 
    252 10% lossrate: [  3]  0.0-60.2 sec  13.5 MBytes  1.89 Mbits/sec
    253 }}}
    254  '''Answer:''' Apparently Cubic out-performs Reno under a 5% loss rate.
    255 
    256 '''5. restore NIC back to no loss and no delay, run 10 TCP connections from right to left, while running 20Mbps UDP session from top to left'''[[BR]]
    257  UDP throughput:
    258 {{{
    259 [  3]  0.0-60.1 sec    141 MBytes  19.6 Mbits/sec  0.416 ms  431/100735 (0.43%)
    260 }}}
    261  TCP throughput:
    262 {{{
    263 [  5]  0.0-60.1 sec  50.2 MBytes  7.01 Mbits/sec
    264 [  4]  0.0-60.0 sec  78.8 MBytes  11.0 Mbits/sec
    265 [  7]  0.0-60.0 sec  55.0 MBytes  7.69 Mbits/sec
    266 [  6]  0.0-60.0 sec  71.1 MBytes  9.94 Mbits/sec
    267 [  8]  0.0-60.1 sec  39.5 MBytes  5.52 Mbits/sec
    268 [ 10]  0.0-60.0 sec  37.7 MBytes  5.27 Mbits/sec
    269 [ 11]  0.0-60.1 sec  39.5 MBytes  5.51 Mbits/sec
    270 [ 12]  0.0-60.0 sec  73.6 MBytes  10.3 Mbits/sec
    271 [  9]  0.0-60.1 sec  46.8 MBytes  6.54 Mbits/sec
    272 [  3]  0.0-60.3 sec  49.1 MBytes  6.83 Mbits/sec
    273 [SUM]  0.0-60.3 sec    541 MBytes  75.3 Mbits/sec
    274 }}}
    275  '''Answer:''' Apparently UDP does not care about loss: the client keeps sending at a rate of 20Mbps despite the 0.43% loss.[[BR]]
    276  On the other hand, TCP applies its rate control/congestion control mechanisms when facing packet loss and hence achieves lower throughput.
    277 
    278 '''6. follow question 5, how to enforce fairness using tc qdisc for these 11 flows? Prove it'''[[BR]]
    279  Let's try the following command and see how it goes (it simply uses the fair-queueing discipline):
    280 {{{
    281 sudo /sbin/tc qdisc add dev eth2 root handle 1:0 sfq
    282 }}}
    283  UDP throughput:
    284 {{{
    285 [  3]  0.0-60.0 sec    141 MBytes  19.7 Mbits/sec
    286 [  3] Sent 100367 datagrams
    287 [  3] Server Report:
    288 [  3]  0.0-60.0 sec  67.3 MBytes  9.40 Mbits/sec  2.355 ms 52361/100366 (52%)
    289 [  3]  0.0-60.0 sec  1 datagrams received out-of-order
    290 }}}
    291  TCP throughput:
    292 {{{
    293 [  5]  0.0-57.0 sec  58.6 MBytes  8.62 Mbits/sec
    294 [  4]  0.0-57.0 sec  58.7 MBytes  8.63 Mbits/sec
    295 [  3]  0.0-57.0 sec  58.6 MBytes  8.63 Mbits/sec
    296 [  9]  0.0-57.0 sec  58.3 MBytes  8.57 Mbits/sec
    297 [  8]  0.0-57.0 sec  58.6 MBytes  8.63 Mbits/sec
    298 [  7]  0.0-57.0 sec  58.2 MBytes  8.57 Mbits/sec
    299 [ 10]  0.0-57.1 sec  57.4 MBytes  8.44 Mbits/sec
    300 [  6]  0.0-57.0 sec  58.5 MBytes  8.61 Mbits/sec
    301 [ 11]  0.0-57.0 sec  57.4 MBytes  8.44 Mbits/sec
    302 [ 12]  0.0-60.0 sec  90.4 MBytes  12.6 Mbits/sec
    303 [SUM]  0.0-60.0 sec    615 MBytes  86.0 Mbits/sec
    304 }}}
    305  '''Answer:''' It works. UDP throughput is slightly bigger than TCP, probably because of TCP's slow start. [[BR]]
    306  It is a little surprising that one of the TCP flows has much better throughput than the rest. [[BR]]
    307  Maybe it is because I ran both UDP and TCP for 60 seconds and that TCP connection was the last one created.[[BR]]
    308  As a result, when the UDP session ends, the last TCP session is still active for about 1 second, boosting its total throughput. This is just a guess.[[BR]]
    309 
    310 '''7. change NIC delay to 100ms, remove fair queuing, see how it goes:'''[[BR]]
    311  Result: (I am using Cubic)
    312 {{{
    313 [  3]  0.0-60.0 sec    567 MBytes  79.3 Mbits/sec
    314 }}}
    315  '''Now add a 75ms delay variance and see how it goes:'''[[BR]]
    316  Result: (again, using Cubic)
    317 {{{
    318 [  3]  0.0-60.0 sec  24.4 MBytes  3.41 Mbits/sec
    319 }}}
    320  '''Answer: WoW! It surprised me that reordering can affect TCP's performance so much!'''
    321  
    322  Now tweak the parameters in /proc/sys/net/ipv4/tcp_reordering and see what's the best you can get:[[BR]]
    323  The default value is 3, meaning TCP will retransmit when 3 duplicate ACKs are received. [[BR]]
    324  In our case, no packet is actually lost, so no retransmission is needed. [[BR]]
    325  I changed the number to 100, here is the result:
    326 {{{
    327 [  3]  0.0-60.0 sec  32.6 MBytes  4.55 Mbits/sec
    328 }}}
    329  Well, not a big boost. Let me change it to 100000 and here is the result:
    330 {{{
    331 [  3]  0.0-60.3 sec  62.4 MBytes  8.69 Mbits/sec
    332 }}}
    333  Well, let me try a HUGE number 1000000000000000 which basically disables TCP's fast-retransmission and see how it goes:
    334 {{{
    335 [  3]  0.0-60.3 sec  71.0 MBytes  9.88 Mbits/sec
    336 }}}
    337  What if I am using Reno? Just curious.[[BR]]
    338  tcp_reordering = 3, result:
    339 {{{
    340 [  3]  0.0-60.1 sec  40.6 MBytes  5.67 Mbits/sec
    341 }}}
    342  tcp_reordering = 100000000000000, result:
    343 {{{
    344 [  3]  0.0-60.0 sec  71.8 MBytes  10.0 Mbits/sec
    345 }}}
    346  '''Answer: ''' A too high value of tcp_reordering disables TCP's fast retransmission. A too low value will cause unnecessary retransmissions, which is a waste of bandwidth.
    347 
    348 '''8. use Cubic, with SACK on(default), set loss to 10%, see how it goes'''[[BR]]
    349  Result (repeated for 5 times):
    350 {{{
    351 [  3]  0.0-60.9 sec  14.3 MBytes  1.97 Mbits/sec
    352 [  3]  0.0-60.0 sec  15.3 MBytes  2.13 Mbits/sec
    353 [  3]  0.0-60.0 sec  19.3 MBytes  2.70 Mbits/sec
    354 [  3]  0.0-60.2 sec  16.5 MBytes  2.30 Mbits/sec
    355 [  3]  0.0-60.1 sec  19.1 MBytes  2.67 Mbits/sec
    356 }}}
    357  Disable tcp_sack and here is the result:
    358 {{{
    359 [  3]  0.0-60.0 sec  9.91 MBytes  1.39 Mbits/sec
    360 [  3]  0.0-60.1 sec  11.4 MBytes  1.59 Mbits/sec
    361 [  3]  0.0-60.2 sec  13.4 MBytes  1.87 Mbits/sec
    362 [  3]  0.0-60.0 sec  10.0 MBytes  1.40 Mbits/sec
    363 [  3]  0.0-60.1 sec  10.5 MBytes  1.47 Mbits/sec
    364 }}}
    365  '''Answer:''' SACK is most beneficial when the receiver keeps sending duplicate ACKs back to the sender.[[BR]]
    366  So on a long-delay, high-bandwidth, lossy network, SACK will be very useful.
    367 
    368 '''9. compile and use a customized congestion control mechanism exp and see how it goes:'''[[BR]]
    369  In the new exp congestion control module, we use: [[BR]]
    370   a slow start exponential factor of 3 instead of 2 in Reno; [[BR]]
    371   ssthresh x 3 / 4 when entering loss recovery instead of ssthresh/2 as in Reno [[BR]]
    372 
    373  The source code for the congestion control can be found [http://www.gpolab.bbn.com/experiment-support/TCPExampleExperiment/tcp_exp.c tcp_exp.c] . [[BR]]
    374  As well as the [http://www.gpolab.bbn.com/experiment-support/TCPExampleExperiment/Makefile Makefile] [[BR]]
    375  Note: you do need to change the source code in order to make this work. [[BR]]
    376  You can check out [http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/net/ipv4/tcp_cong.c Reno Source Code] for reference [[BR]]
    377  (The answers to the source code can be found [http://www.gpolab.bbn.com/experiment-support/TCPExampleExperiment/tcp_exp_answer.c here]) [[BR]]
    378 
    379  Steps to compile the kernel code: [[BR]]
    380  1. Comment out the line: [[BR]]
    381    ''exclude=mkinitrd* kernel*''[[BR]]
    382    in the file ''/etc/yum.conf'', to allow yum to install kernel headers. [[BR]]
    383  2. Install the required packages with this command: [[BR]]
    384    ''sudo yum install kernel-devel kernel-headers'' [[BR]]
    385  3. Fix up the kernel version in the installed headers to match the running kernel; this can be tricky, but these steps should handle it:[[BR]]
    386    * a). Find your kernel sources. They are in ''/usr/src/kernel'', in a directory that depends on the installed version. As of the time this page was created, [[BR]]
    387    the directory is ''2.6.27.41-170.2.117.fc10.i686''. We call this directory ''$KERNELSRC''.[[BR]]
    388    * b). identify your running kernel version by running ''uname -r''. It will be something like ''2.6.27.5-117.emulab1.fc10.i686''. The first three dotted components [[BR]]
    389    (''2.6.27'', in this case) are the major, minor, and micro versions, respectively, and the remainder of the version string (''.5-117.emulab.fc10.i686'') is the extraversion. [[BR]]
    390    Note the extraversion of your kernel.[[BR]]
    391    * c). In ''$KERNELSRC/Makefile'', find the line beginning with ''EXTRAVERSION''. Replace its value with the extraversion of your kernel.[[BR]]
    392    * d). Update the kernel header tree to this new version by running the command:
    393    {{{
    394    sudo make include/linux/utsrelease.h
    395    }}}
    396  After you compile the source code, you will find a kernel module named ''tcp_exp.ko'' created. [[BR]]
    397  Use "''sudo insmod tcp_exp.ko''" to insert the module into the kernel. [[BR]]
    398  You can use "''sudo rmmod tcp_exp''" to remove the module later on [[BR]]
    399  Once the module is complete and loaded into the kernel, the algorithm implemented by the module can be selected in the same manner that reno and cubic were [[BR]]
    400  selected in previous sections, by placing the keyword ''exp'' in ''/proc/sys/net/ipv4/tcp_congestion_control.''
    401 
    402  '''Comparison: '''Apparently this will increase TCP's sending rate during slow start compared with Reno; [[BR]]
    403  this new mechanism will also cut the slow start threshold less when entering loss recovery. [[BR]]
    404  Thus, it is a more aggressive algorithm and should out-perform Reno for a single connection facing loss/delay. [[BR]]
    405  However, when the number of connections is large, it can be defeated by Reno, simply because its aggressiveness introduces more loss when network conditions are bad. [[BR]]
    406 
    407  Performance Results: [[BR]]
    408  Under 500ms delay: [[BR]]
    409  Single Reno connection:
    410 {{{
    411 [  3]  0.0-60.3 sec    127 MBytes  17.7 Mbits/sec
    412 }}}
    413  Single exp connection:
    414 {{{
    415 [  3]  0.0-60.3 sec  11.1 MBytes  1.54 Mbits/sec
    416 }}}
    417  30 Reno connection:
    418 {{{
    419 [ 12]  0.0-51.0 sec  3.06 MBytes    504 Kbits/sec
    420 [ 15]  0.0-51.0 sec  2.52 MBytes    414 Kbits/sec
    421 [ 10]  0.0-51.0 sec  2.64 MBytes    434 Kbits/sec
    422 [  3]  0.0-51.0 sec  3.00 MBytes    493 Kbits/sec
    423 [  4]  0.0-51.1 sec  4.94 MBytes    811 Kbits/sec
    424 [ 13]  0.0-51.1 sec  2.95 MBytes    485 Kbits/sec
    425 [ 14]  0.0-51.2 sec  2.88 MBytes    471 Kbits/sec
    426 [ 16]  0.0-51.2 sec  2.38 MBytes    390 Kbits/sec
    427 [ 11]  0.0-51.3 sec  2.55 MBytes    418 Kbits/sec
    428 [ 18]  0.0-51.3 sec  3.09 MBytes    505 Kbits/sec
    429 [  7]  0.0-51.3 sec  3.92 MBytes    641 Kbits/sec
    430 [  6]  0.0-51.4 sec  5.17 MBytes    845 Kbits/sec
    431 [ 17]  0.0-51.4 sec  2.41 MBytes    393 Kbits/sec
    432 [  9]  0.0-51.9 sec  5.90 MBytes    954 Kbits/sec
    433 [  8]  0.0-52.3 sec  4.63 MBytes    744 Kbits/sec
    434 [  5]  0.0-52.3 sec  4.33 MBytes    694 Kbits/sec
    435 [ 19]  0.0-54.3 sec  9.04 MBytes  1.40 Mbits/sec
    436 [ 23]  0.0-54.4 sec  6.91 MBytes  1.07 Mbits/sec
    437 [ 22]  0.0-54.4 sec  10.8 MBytes  1.67 Mbits/sec
    438 [ 21]  0.0-54.4 sec  6.48 MBytes  1.00 Mbits/sec
    439 [ 24]  0.0-54.4 sec  5.59 MBytes    862 Kbits/sec
    440 [ 25]  0.0-54.5 sec  9.11 MBytes  1.40 Mbits/sec
    441 [ 20]  0.0-54.9 sec  5.80 MBytes    887 Kbits/sec
    442 [ 32]  0.0-60.0 sec  3.20 MBytes    447 Kbits/sec
    443 [ 31]  0.0-60.1 sec  3.12 MBytes    435 Kbits/sec
    444 [ 27]  0.0-60.1 sec  2.52 MBytes    351 Kbits/sec
    445 [ 28]  0.0-60.1 sec  2.86 MBytes    399 Kbits/sec
    446 [ 30]  0.0-60.2 sec  2.01 MBytes    280 Kbits/sec
    447 [ 29]  0.0-60.3 sec  2.62 MBytes    365 Kbits/sec
    448 [ 26]  0.0-60.4 sec  2.92 MBytes    406 Kbits/sec
    449 [SUM]  0.0-60.4 sec    129 MBytes  18.0 Mbits/sec
    450 }}}
    451  30 exp connection:
    452 {{{
    453 [  5]  0.0-57.1 sec  8.42 MBytes  1.24 Mbits/sec
    454 [ 16]  0.0-57.2 sec  2.67 MBytes    392 Kbits/sec
    455 [ 14]  0.0-57.2 sec  2.63 MBytes    386 Kbits/sec
    456 [ 10]  0.0-57.3 sec  2.60 MBytes    381 Kbits/sec
    457 [  4]  0.0-57.3 sec  7.45 MBytes  1.09 Mbits/sec
    458 [ 11]  0.0-57.3 sec  2.32 MBytes    340 Kbits/sec
    459 [ 17]  0.0-57.3 sec  2.79 MBytes    408 Kbits/sec
    460 [ 12]  0.0-57.3 sec  3.04 MBytes    445 Kbits/sec
    461 [ 15]  0.0-57.4 sec  2.55 MBytes    372 Kbits/sec
    462 [ 13]  0.0-57.4 sec  2.93 MBytes    428 Kbits/sec
    463 [  7]  0.0-57.6 sec  4.09 MBytes    595 Kbits/sec
    464 [  3]  0.0-57.7 sec  9.19 MBytes  1.34 Mbits/sec
    465 [  8]  0.0-57.9 sec  2.77 MBytes    402 Kbits/sec
    466 [  6]  0.0-58.0 sec  28.8 MBytes  4.16 Mbits/sec
    467 [ 18]  0.0-58.7 sec  3.04 MBytes    434 Kbits/sec
    468 [ 31]  0.0-60.0 sec  10.1 MBytes  1.41 Mbits/sec
    469 [ 32]  0.0-60.0 sec  3.24 MBytes    453 Kbits/sec
    470 [ 24]  0.0-60.2 sec  4.41 MBytes    614 Kbits/sec
    471 [ 23]  0.0-60.3 sec  8.37 MBytes  1.16 Mbits/sec
    472 [ 28]  0.0-60.3 sec  3.45 MBytes    480 Kbits/sec
    473 [ 29]  0.0-60.3 sec  2.55 MBytes    356 Kbits/sec
    474 [ 30]  0.0-60.4 sec  3.30 MBytes    459 Kbits/sec
    475 [ 27]  0.0-60.3 sec  2.64 MBytes    367 Kbits/sec
    476 [ 26]  0.0-60.4 sec  2.66 MBytes    370 Kbits/sec
    477 [ 22]  0.0-60.3 sec  3.71 MBytes    516 Kbits/sec
    478 [ 19]  0.0-60.8 sec  3.48 MBytes    480 Kbits/sec
    479 [ 20]  0.0-61.0 sec  3.55 MBytes    489 Kbits/sec
    480 [ 25]  0.0-61.3 sec  4.31 MBytes    590 Kbits/sec
    481 [ 21]  0.0-61.5 sec  5.57 MBytes    759 Kbits/sec
    482 [  9]  0.0-61.9 sec  4.15 MBytes    563 Kbits/sec
    483 [SUM]  0.0-61.9 sec    151 MBytes  20.4 Mbits/sec
    484 }}}
    485  Under 5% loss: [[BR]]
    486  Single Reno connection:
    487 {{{
    488 [  3]  0.0-60.0 sec  64.0 MBytes  8.95 Mbits/sec
    489 }}}
    490  Single exp connection:
    491 {{{
    492 [  3]  0.0-60.0 sec    124 MBytes  17.3 Mbits/sec
    493 }}}
    494  30 Reno connection:
    495 {{{
    496 [ 12]  0.0-51.0 sec  17.8 MBytes  2.92 Mbits/sec
    497 [ 11]  0.0-51.0 sec  18.8 MBytes  3.09 Mbits/sec
    498 [ 10]  0.0-51.0 sec  19.1 MBytes  3.14 Mbits/sec
    499 [  4]  0.0-51.0 sec  16.5 MBytes  2.71 Mbits/sec
    500 [  6]  0.0-51.0 sec  18.6 MBytes  3.06 Mbits/sec
    501 [  8]  0.0-51.0 sec  18.8 MBytes  3.10 Mbits/sec
    502 [  3]  0.0-51.0 sec  19.9 MBytes  3.27 Mbits/sec
    503 [  7]  0.0-51.2 sec  18.3 MBytes  2.99 Mbits/sec
    504 [  9]  0.0-51.3 sec  19.5 MBytes  3.18 Mbits/sec
    505 [ 14]  0.0-54.0 sec  19.3 MBytes  3.00 Mbits/sec
    506 [ 13]  0.0-54.0 sec  19.5 MBytes  3.02 Mbits/sec
    507 [ 17]  0.0-54.0 sec  19.5 MBytes  3.03 Mbits/sec
    508 [ 24]  0.0-54.0 sec  19.8 MBytes  3.07 Mbits/sec
    509 [ 22]  0.0-54.0 sec  19.8 MBytes  3.08 Mbits/sec
    510 [ 23]  0.0-54.0 sec  19.2 MBytes  2.98 Mbits/sec
    511 [ 21]  0.0-54.0 sec  18.8 MBytes  2.91 Mbits/sec
    512 [ 20]  0.0-54.0 sec  19.6 MBytes  3.05 Mbits/sec
    513 [ 19]  0.0-54.1 sec  19.5 MBytes  3.03 Mbits/sec
    514 [ 32]  0.0-54.0 sec  19.5 MBytes  3.03 Mbits/sec
    515 [ 18]  0.0-54.2 sec  19.7 MBytes  3.06 Mbits/sec
    516 [ 15]  0.0-54.2 sec  19.2 MBytes  2.98 Mbits/sec
    517 [  5]  0.0-54.7 sec  19.3 MBytes  2.96 Mbits/sec
    518 [ 27]  0.0-60.0 sec  24.2 MBytes  3.39 Mbits/sec
    519 [ 28]  0.0-60.0 sec  25.7 MBytes  3.59 Mbits/sec
    520 [ 26]  0.0-60.0 sec  25.7 MBytes  3.59 Mbits/sec
    521 [ 25]  0.0-60.1 sec  25.0 MBytes  3.49 Mbits/sec
    522 [ 31]  0.0-60.0 sec  27.3 MBytes  3.82 Mbits/sec
    523 [ 30]  0.0-60.0 sec  24.7 MBytes  3.45 Mbits/sec
    524 [ 16]  0.0-60.0 sec  27.5 MBytes  3.85 Mbits/sec
    525 [ 29]  0.0-60.6 sec  23.4 MBytes  3.24 Mbits/sec
    526 [SUM]  0.0-60.6 sec    623 MBytes  86.3 Mbits/sec
    527 }}}
    528  30 exp connection:
    529 {{{
    530 [ 20]  0.0-39.0 sec  13.9 MBytes  2.99 Mbits/sec
    531 [ 10]  0.0-39.0 sec  13.8 MBytes  2.96 Mbits/sec
    532 [ 14]  0.0-39.0 sec  13.4 MBytes  2.89 Mbits/sec
    533 [  8]  0.0-39.0 sec  12.7 MBytes  2.73 Mbits/sec
    534 [  6]  0.0-39.0 sec  14.7 MBytes  3.15 Mbits/sec
    535 [  4]  0.0-39.1 sec  13.9 MBytes  2.97 Mbits/sec
    536 [  5]  0.0-39.0 sec  13.0 MBytes  2.79 Mbits/sec
    537 [  3]  0.0-39.0 sec  13.1 MBytes  2.81 Mbits/sec
    538 [ 11]  0.0-39.0 sec  14.4 MBytes  3.09 Mbits/sec
    539 [ 12]  0.0-39.0 sec  13.9 MBytes  2.98 Mbits/sec
    540 [  9]  0.0-39.0 sec  13.7 MBytes  2.95 Mbits/sec
    541 [ 13]  0.0-39.0 sec  14.8 MBytes  3.19 Mbits/sec
    542 [ 19]  0.0-39.0 sec  12.7 MBytes  2.73 Mbits/sec
    543 [ 18]  0.0-39.0 sec  12.9 MBytes  2.76 Mbits/sec
    544 [ 17]  0.0-39.0 sec  13.5 MBytes  2.90 Mbits/sec
    545 [  7]  0.0-39.2 sec  14.3 MBytes  3.07 Mbits/sec
    546 [ 23]  0.0-42.0 sec  16.7 MBytes  3.34 Mbits/sec
    547 [ 22]  0.0-42.0 sec  15.9 MBytes  3.18 Mbits/sec
    548 [ 27]  0.0-42.0 sec  16.9 MBytes  3.38 Mbits/sec
    549 [ 26]  0.0-42.0 sec  16.7 MBytes  3.33 Mbits/sec
    550 [ 25]  0.0-42.0 sec  16.6 MBytes  3.32 Mbits/sec
    551 [ 24]  0.0-42.0 sec  15.9 MBytes  3.18 Mbits/sec
    552 [ 28]  0.0-42.0 sec  16.3 MBytes  3.25 Mbits/sec
    553 [ 21]  0.0-42.0 sec  16.5 MBytes  3.28 Mbits/sec
    554 [ 16]  0.0-42.0 sec  16.5 MBytes  3.29 Mbits/sec
    555 [ 30]  0.0-48.0 sec  29.2 MBytes  5.09 Mbits/sec
    556 [ 29]  0.0-48.0 sec  27.8 MBytes  4.86 Mbits/sec
    557 [ 31]  0.0-48.0 sec  29.8 MBytes  5.21 Mbits/sec
    558 [ 32]  0.0-48.1 sec  25.5 MBytes  4.44 Mbits/sec
    559 [ 15]  0.0-60.0 sec  52.9 MBytes  7.40 Mbits/sec
    560 [SUM]  0.0-60.0 sec    532 MBytes  74.3 Mbits/sec
    561 }}}
    562  Under 500ms delay and 5% loss: [[BR]]
    563  Single Reno connection:
    564 {{{
    565 [  3]  0.0-61.0 sec    880 KBytes    118 Kbits/sec
    566 }}}
    567  Single exp connection:
    568 {{{
    569 [  3]  0.0-60.5 sec  1016 KBytes    138 Kbits/sec
    570 }}}
    571  30 Reno connection:
    572 {{{
    573 [ 16]  0.0-39.2 sec    528 KBytes    110 Kbits/sec
    574 [ 13]  0.0-39.4 sec    600 KBytes    125 Kbits/sec
    575 [ 12]  0.0-39.6 sec    368 KBytes  76.1 Kbits/sec
    576 [ 11]  0.0-39.7 sec    584 KBytes    120 Kbits/sec
    577 [ 14]  0.0-39.8 sec    560 KBytes    115 Kbits/sec
    578 [  8]  0.0-39.8 sec    448 KBytes  92.1 Kbits/sec
    579 [ 10]  0.0-40.0 sec    456 KBytes  93.5 Kbits/sec
    580 [ 15]  0.0-40.0 sec    392 KBytes  80.2 Kbits/sec
    581 [  5]  0.0-40.3 sec    448 KBytes  91.0 Kbits/sec
    582 [  6]  0.0-40.5 sec    400 KBytes  80.9 Kbits/sec
    583 [  3]  0.0-40.5 sec    512 KBytes    103 Kbits/sec
    584 [  4]  0.0-40.9 sec    416 KBytes  83.3 Kbits/sec
    585 [ 17]  0.0-41.3 sec    480 KBytes  95.1 Kbits/sec
    586 [  9]  0.0-41.6 sec    536 KBytes    105 Kbits/sec
    587 [ 18]  0.0-42.5 sec    496 KBytes  95.5 Kbits/sec
    588 [ 25]  0.0-42.6 sec    392 KBytes  75.5 Kbits/sec
    589 [ 29]  0.0-42.6 sec    504 KBytes  96.9 Kbits/sec
    590 [ 24]  0.0-42.7 sec    608 KBytes    117 Kbits/sec
    591 [ 19]  0.0-42.7 sec    520 KBytes  99.8 Kbits/sec
    592 [  7]  0.0-43.1 sec    584 KBytes    111 Kbits/sec
    593 [ 26]  0.0-43.1 sec    464 KBytes  88.1 Kbits/sec
    594 [ 23]  0.0-43.2 sec    512 KBytes  97.1 Kbits/sec
    595 [ 30]  0.0-43.2 sec    376 KBytes  71.3 Kbits/sec
    596 [ 32]  0.0-43.2 sec    576 KBytes    109 Kbits/sec
    597 [ 27]  0.0-43.5 sec    584 KBytes    110 Kbits/sec
    598 [ 31]  0.0-43.6 sec    456 KBytes  85.7 Kbits/sec
    599 [ 28]  0.0-43.8 sec    488 KBytes  91.3 Kbits/sec
    600 [ 21]  0.0-49.4 sec    592 KBytes  98.3 Kbits/sec
    601 [ 22]  0.0-51.6 sec    664 KBytes    105 Kbits/sec
    602 [ 20]  0.0-60.8 sec    696 KBytes  93.8 Kbits/sec
    603 [SUM]  0.0-60.8 sec  14.9 MBytes  2.05 Mbits/sec
    604 }}}
    605  30 exp connection:
    606 {{{
    607 [  3]  0.0-51.1 sec    824 KBytes    132 Kbits/sec
    608 [ 19]  0.0-51.2 sec    720 KBytes    115 Kbits/sec
    609 [ 14]  0.0-51.2 sec    816 KBytes    130 Kbits/sec
    610 [  5]  0.0-51.3 sec    888 KBytes    142 Kbits/sec
    611 [  8]  0.0-51.3 sec  1008 KBytes    161 Kbits/sec
    612 [ 13]  0.0-51.3 sec    832 KBytes    133 Kbits/sec
    613 [  6]  0.0-51.4 sec    776 KBytes    124 Kbits/sec
    614 [  4]  0.0-51.5 sec    808 KBytes    129 Kbits/sec
    615 [ 18]  0.0-51.5 sec    664 KBytes    106 Kbits/sec
    616 [  9]  0.0-51.7 sec    712 KBytes    113 Kbits/sec
    617 [ 15]  0.0-51.8 sec    944 KBytes    149 Kbits/sec
    618 [  7]  0.0-51.9 sec    600 KBytes  94.7 Kbits/sec
    619 [ 11]  0.0-51.9 sec    776 KBytes    122 Kbits/sec
    620 [ 17]  0.0-52.0 sec    744 KBytes    117 Kbits/sec
    621 [ 16]  0.0-52.0 sec    824 KBytes    130 Kbits/sec
    622 [ 12]  0.0-52.0 sec    656 KBytes    103 Kbits/sec
    623 [ 22]  0.0-54.4 sec  1.08 MBytes    166 Kbits/sec
    624 [ 25]  0.0-54.4 sec    888 KBytes    134 Kbits/sec
    625 [ 26]  0.0-54.6 sec  1.05 MBytes    161 Kbits/sec
    626 [ 21]  0.0-54.7 sec  1.00 MBytes    153 Kbits/sec
    627 [ 30]  0.0-54.8 sec    952 KBytes    142 Kbits/sec
    628 [ 23]  0.0-55.0 sec    960 KBytes    143 Kbits/sec
    629 [ 20]  0.0-55.0 sec  1008 KBytes    150 Kbits/sec
    630 [ 27]  0.0-55.2 sec  1.04 MBytes    158 Kbits/sec
    631 [ 28]  0.0-55.3 sec    872 KBytes    129 Kbits/sec
    632 [ 24]  0.0-55.5 sec    728 KBytes    107 Kbits/sec
    633 [ 29]  0.0-57.1 sec    848 KBytes    122 Kbits/sec
    634 [ 10]  0.0-60.4 sec    952 KBytes    129 Kbits/sec
    635 [ 31]  0.0-60.8 sec    808 KBytes    109 Kbits/sec
    636 [ 32]  0.0-61.7 sec  1.12 MBytes    152 Kbits/sec
    637 [SUM]  0.0-61.7 sec  25.4 MBytes  3.45 Mbits/sec
    638 }}}
    639197
    640198= [wiki:GENIEducation/SampleAssignments/TcpAssignment/ExerciseLayout/Finish Next: Teardown Experiment] =