TCP ASSIGNMENT
STEPS FOR EXECUTING EXERCISE
Now that we have reserved the nodes, let's log in to each node and run the experiments.
Recall that the nodes reserved are pc73, pc81, pc55, pc47, pc84.
pc47 is top; pc84 is bottom; pc81 is left; pc55 is right; pc73 is center. We can figure this out from the output of the "createsliver" step.
As mentioned earlier, we can use "readyToLogin.py" to show the topology as well as the login commands.
Or, if you are a GENI Portal user, use the "details" button to check details of your slice.
Useful commands:
Change the congestion control algorithm in use:
echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
echo cubic | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
Change the delay/loss of a particular interface:
sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 200ms loss 5%
Restore the network delay/loss on the NIC:
sudo /sbin/tc qdisc del dev eth1 root
It is a little trickier to configure delay/loss on a virtual machine.
Step 1: find your qdisc handle numbers by executing "sudo /sbin/tc qdisc"; the output will look something like the following:
[shufeng@center ~]$ sudo /sbin/tc qdisc
qdisc htb 270: dev mv6.47 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 260: dev mv6.47 parent 270:1 limit 1000
qdisc htb 150: dev mv6.41 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 140: dev mv6.41 parent 150:1 limit 1000
qdisc htb 190: dev mv6.43 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 180: dev mv6.43 parent 190:1 limit 1000
qdisc htb 230: dev mv6.45 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 220: dev mv6.45 parent 230:1 limit 1000
Now if the Ethernet interface you want to change is mv6.43, you can find it in the following lines:
qdisc htb 190: dev mv6.43 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 180: dev mv6.43 parent 190:1 limit 1000
You can then change the delay/loss by executing the following (the first command sets 100ms delay with 5% loss; the second restores the interface to no delay/loss):
sudo /sbin/tc -s qdisc change dev mv6.43 parent 190:1 handle 180: netem limit 1000 delay 100ms loss 5%
sudo /sbin/tc -s qdisc change dev mv6.43 parent 190:1 handle 180: netem limit 1000
1. Compare Cubic and Reno with no loss/delay introduced:
Use the default TCP congestion control (Cubic) on left and right and run iperf between them (the TCP flow goes from right to left):
On left, run:
/usr/local/etc/emulab/emulab-iperf -s
On right, run (10.10.1.1 is the IP address of left):
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60
Results: 94.2 Mbps for Cubic when there is no delay introduced:
------------------------------------------------------------
Client connecting to 10.10.1.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.2.1 port 53755 connected with 10.10.1.1 port 5001
[ 3] 0.0-60.0 sec 674 MBytes 94.2 Mbits/sec
Now let both left and right use Reno as the TCP congestion control algorithm and repeat the experiment:
echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
Results: 94.2 Mbps for Reno when there is no delay introduced:
------------------------------------------------------------
Client connecting to 10.10.1.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.2.1 port 53073 connected with 10.10.1.1 port 5001
[ 3] 0.0-60.0 sec 674 MBytes 94.2 Mbits/sec
Answer: they perform the same under no loss/delay.
2. Add 300ms delay and see how it goes:
We can introduce the delay by configuring the interface on center that connects to left, using sudo /sbin/tc qdisc.
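For example, a sketch for the physical-node case, assuming the left-facing interface on center is eth1 (on a VM, use the "tc ... change" form with your qdisc handles, as shown above):
sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 300ms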
With Cubic, here is the result:
[ 3] 0.0-1800.2 sec 6.57 GBytes 31.3 Mbits/sec
[ 3] 0.0-60.2 sec 213 MBytes 29.7 Mbits/sec
[ 3] 0.0- 1.0 sec 56.0 KBytes 459 Kbits/sec
[ 3] 1.0- 2.0 sec 312 KBytes 2.56 Mbits/sec
[ 3] 2.0- 3.0 sec 640 KBytes 5.24 Mbits/sec
[ 3] 3.0- 4.0 sec 2.67 MBytes 22.4 Mbits/sec
[ 3] 4.0- 5.0 sec 3.57 MBytes 29.9 Mbits/sec
[ 3] 5.0- 6.0 sec 3.65 MBytes 30.6 Mbits/sec
[ 3] 6.0- 7.0 sec 3.70 MBytes 31.1 Mbits/sec
[ 3] 7.0- 8.0 sec 3.66 MBytes 30.7 Mbits/sec
With Reno, here is the result:
[ 3] 0.0-1800.1 sec 6.57 GBytes 31.3 Mbits/sec
[ 3] 0.0-60.2 sec 214 MBytes 29.8 Mbits/sec
[ 3] 0.0- 1.0 sec 56.0 KBytes 459 Kbits/sec
[ 3] 1.0- 2.0 sec 232 KBytes 1.90 Mbits/sec
[ 3] 2.0- 3.0 sec 680 KBytes 5.57 Mbits/sec
[ 3] 3.0- 4.0 sec 2.76 MBytes 23.1 Mbits/sec
[ 3] 4.0- 5.0 sec 4.11 MBytes 34.5 Mbits/sec
[ 3] 5.0- 6.0 sec 3.68 MBytes 30.9 Mbits/sec
Answer: I was hoping to see Cubic outperform Reno, but they perform about the same in this case.
Over a long run (e.g., 1800 seconds), Cubic and Reno perform similarly under no loss and large delay.
During slow start, Cubic outperforms Reno (see the 1.0-2.0 second interval) under no loss and high delay.
3. Repeat the experiments with 30 parallel connections (the -P 30 option in iperf) and see how it goes:
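For reference, the client command becomes (same iperf binary and server address as before):
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60 -P 30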
With Cubic, here is the result:
[ 12] 0.0-57.0 sec 13.1 MBytes 1.93 Mbits/sec
[ 16] 0.0-57.0 sec 13.8 MBytes 2.03 Mbits/sec
[ 14] 0.0-57.3 sec 20.2 MBytes 2.96 Mbits/sec
[ 18] 0.0-57.4 sec 18.4 MBytes 2.69 Mbits/sec
[ 8] 0.0-57.4 sec 20.3 MBytes 2.97 Mbits/sec
[ 7] 0.0-57.3 sec 23.7 MBytes 3.48 Mbits/sec
[ 6] 0.0-57.3 sec 23.3 MBytes 3.41 Mbits/sec
[ 5] 0.0-57.3 sec 29.4 MBytes 4.30 Mbits/sec
[ 4] 0.0-57.3 sec 21.0 MBytes 3.07 Mbits/sec
[ 3] 0.0-57.5 sec 23.3 MBytes 3.41 Mbits/sec
[ 11] 0.0-57.5 sec 18.5 MBytes 2.70 Mbits/sec
[ 15] 0.0-57.5 sec 23.7 MBytes 3.46 Mbits/sec
[ 13] 0.0-57.6 sec 26.4 MBytes 3.85 Mbits/sec
[ 17] 0.0-57.6 sec 19.3 MBytes 2.81 Mbits/sec
[ 9] 0.0-57.8 sec 15.3 MBytes 2.22 Mbits/sec
[ 10] 0.0-57.9 sec 20.5 MBytes 2.97 Mbits/sec
[ 28] 0.0-60.0 sec 23.8 MBytes 3.32 Mbits/sec
[ 30] 0.0-60.0 sec 15.9 MBytes 2.22 Mbits/sec
[ 29] 0.0-60.1 sec 14.7 MBytes 2.05 Mbits/sec
[ 32] 0.0-60.1 sec 27.3 MBytes 3.81 Mbits/sec
[ 19] 0.0-60.1 sec 20.5 MBytes 2.86 Mbits/sec
[ 23] 0.0-60.1 sec 16.2 MBytes 2.25 Mbits/sec
[ 20] 0.0-60.1 sec 30.0 MBytes 4.19 Mbits/sec
[ 26] 0.0-60.1 sec 14.6 MBytes 2.04 Mbits/sec
[ 21] 0.0-60.2 sec 22.1 MBytes 3.07 Mbits/sec
[ 27] 0.0-60.3 sec 19.9 MBytes 2.77 Mbits/sec
[ 22] 0.0-60.4 sec 24.7 MBytes 3.44 Mbits/sec
[ 24] 0.0-60.4 sec 26.1 MBytes 3.62 Mbits/sec
[ 25] 0.0-60.5 sec 28.0 MBytes 3.88 Mbits/sec
[ 31] 0.0-60.5 sec 34.2 MBytes 4.74 Mbits/sec
[SUM] 0.0-60.5 sec 648 MBytes 89.8 Mbits/sec

[ 6] 0.0-1797.2 sec 684 MBytes 3.19 Mbits/sec
[ 4] 0.0-1797.3 sec 678 MBytes 3.17 Mbits/sec
[ 3] 0.0-1797.3 sec 675 MBytes 3.15 Mbits/sec
[ 10] 0.0-1797.8 sec 602 MBytes 2.81 Mbits/sec
[ 12] 0.0-1797.8 sec 664 MBytes 3.10 Mbits/sec
[ 17] 0.0-1797.9 sec 642 MBytes 3.00 Mbits/sec
[ 13] 0.0-1797.9 sec 686 MBytes 3.20 Mbits/sec
[ 9] 0.0-1797.9 sec 707 MBytes 3.30 Mbits/sec
[ 14] 0.0-1798.0 sec 679 MBytes 3.17 Mbits/sec
[ 5] 0.0-1798.2 sec 620 MBytes 2.89 Mbits/sec
[ 8] 0.0-1798.2 sec 671 MBytes 3.13 Mbits/sec
[ 7] 0.0-1798.2 sec 723 MBytes 3.37 Mbits/sec
[ 11] 0.0-1798.3 sec 696 MBytes 3.25 Mbits/sec
[ 16] 0.0-1798.3 sec 657 MBytes 3.07 Mbits/sec
[ 15] 0.0-1798.4 sec 624 MBytes 2.91 Mbits/sec
[ 18] 0.0-1798.8 sec 695 MBytes 3.24 Mbits/sec
[ 28] 0.0-1800.1 sec 705 MBytes 3.29 Mbits/sec
[ 23] 0.0-1800.1 sec 689 MBytes 3.21 Mbits/sec
[ 32] 0.0-1800.1 sec 686 MBytes 3.20 Mbits/sec
[ 31] 0.0-1800.2 sec 703 MBytes 3.28 Mbits/sec
[ 21] 0.0-1800.2 sec 671 MBytes 3.13 Mbits/sec
[ 30] 0.0-1800.4 sec 699 MBytes 3.26 Mbits/sec
[ 20] 0.0-1800.5 sec 668 MBytes 3.11 Mbits/sec
[ 22] 0.0-1800.6 sec 652 MBytes 3.04 Mbits/sec
[ 27] 0.0-1800.6 sec 701 MBytes 3.27 Mbits/sec
[ 19] 0.0-1800.6 sec 594 MBytes 2.77 Mbits/sec
[ 29] 0.0-1800.7 sec 680 MBytes 3.17 Mbits/sec
[ 26] 0.0-1800.8 sec 709 MBytes 3.30 Mbits/sec
[ 25] 0.0-1800.9 sec 646 MBytes 3.01 Mbits/sec
[ 24] 0.0-1801.1 sec 672 MBytes 3.13 Mbits/sec
[SUM] 0.0-1801.1 sec 19.7 GBytes 94.0 Mbits/sec
With Reno, here is the result:
[ 17] 0.0-57.1 sec 7.38 MBytes 1.08 Mbits/sec
[ 15] 0.0-57.0 sec 7.33 MBytes 1.08 Mbits/sec
[ 14] 0.0-57.0 sec 7.35 MBytes 1.08 Mbits/sec
[ 18] 0.0-57.0 sec 7.16 MBytes 1.05 Mbits/sec
[ 13] 0.0-57.1 sec 7.31 MBytes 1.08 Mbits/sec
[ 3] 0.0-57.2 sec 25.7 MBytes 3.77 Mbits/sec
[ 12] 0.0-57.2 sec 7.33 MBytes 1.08 Mbits/sec
[ 5] 0.0-57.2 sec 87.5 MBytes 12.8 Mbits/sec
[ 4] 0.0-57.2 sec 26.5 MBytes 3.88 Mbits/sec
[ 11] 0.0-57.2 sec 7.32 MBytes 1.07 Mbits/sec
[ 10] 0.0-57.3 sec 7.38 MBytes 1.08 Mbits/sec
[ 16] 0.0-57.3 sec 7.41 MBytes 1.09 Mbits/sec
[ 8] 0.0-57.4 sec 29.6 MBytes 4.33 Mbits/sec
[ 7] 0.0-57.7 sec 23.7 MBytes 3.45 Mbits/sec
[ 9] 0.0-57.7 sec 23.3 MBytes 3.38 Mbits/sec
[ 6] 0.0-58.1 sec 64.6 MBytes 9.33 Mbits/sec
[ 25] 0.0-60.0 sec 43.4 MBytes 6.06 Mbits/sec
[ 21] 0.0-60.0 sec 36.2 MBytes 5.05 Mbits/sec
[ 20] 0.0-60.2 sec 27.3 MBytes 3.81 Mbits/sec
[ 24] 0.0-60.1 sec 28.2 MBytes 3.94 Mbits/sec
[ 23] 0.0-60.1 sec 30.3 MBytes 4.23 Mbits/sec
[ 27] 0.0-60.0 sec 7.80 MBytes 1.09 Mbits/sec
[ 26] 0.0-60.1 sec 7.84 MBytes 1.09 Mbits/sec
[ 30] 0.0-60.1 sec 7.84 MBytes 1.09 Mbits/sec
[ 29] 0.0-60.1 sec 7.74 MBytes 1.08 Mbits/sec
[ 31] 0.0-60.1 sec 7.82 MBytes 1.09 Mbits/sec
[ 19] 0.0-60.3 sec 29.1 MBytes 4.04 Mbits/sec
[ 22] 0.0-60.2 sec 30.9 MBytes 4.31 Mbits/sec
[ 32] 0.0-60.1 sec 32.8 MBytes 4.58 Mbits/sec
[ 28] 0.0-60.1 sec 7.82 MBytes 1.09 Mbits/sec
[SUM] 0.0-60.3 sec 652 MBytes 90.7 Mbits/sec
Answer: The above results show that the total throughput is about the same for Cubic and Reno.
Apparently, when you use multiple TCP connections, the overall bandwidth utilization is higher.
However, the throughput of each individual TCP connection varies.
4. Remove the 300ms delay, add a 5% loss rate, and see how it goes:
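A sketch of the corresponding tc commands, again assuming the physical-node case with interface eth1 (on a VM, use the "change" form with your own handles):
sudo /sbin/tc qdisc del dev eth1 root
sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem loss 5%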
With Cubic, here is the result:
[ 3] 0.0-60.0 sec 73.7 MBytes 10.3 Mbits/sec
With a 10% loss rate:
[ 3] 0.0-60.6 sec 17.3 MBytes 2.39 Mbits/sec
With Reno, here is the result:
[ 3] 0.0-60.0 sec 59.5 MBytes 8.32 Mbits/sec
With a 10% loss rate:
[ 3] 0.0-60.2 sec 13.5 MBytes 1.89 Mbits/sec
Answer: Cubic clearly outperforms Reno under a 5% (and 10%) loss rate.
5. Restore the NIC to no loss and no delay, run 10 TCP connections from right to left, while running a 20Mbps UDP session from top to left:
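A sketch of the client commands, assuming left is reachable at 10.10.1.1 from both right and top (-u and -b are standard iperf options; the UDP flow needs its own "emulab-iperf -s -u" server instance on left):
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60 -P 10      (on right: 10 parallel TCP flows)
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60 -u -b 20M  (on top: 20Mbps UDP session)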
UDP throughput:
[ 3] 0.0-60.1 sec 141 MBytes 19.6 Mbits/sec 0.416 ms 431/100735 (0.43%)
TCP throughput:
[ 5] 0.0-60.1 sec 50.2 MBytes 7.01 Mbits/sec
[ 4] 0.0-60.0 sec 78.8 MBytes 11.0 Mbits/sec
[ 7] 0.0-60.0 sec 55.0 MBytes 7.69 Mbits/sec
[ 6] 0.0-60.0 sec 71.1 MBytes 9.94 Mbits/sec
[ 8] 0.0-60.1 sec 39.5 MBytes 5.52 Mbits/sec
[ 10] 0.0-60.0 sec 37.7 MBytes 5.27 Mbits/sec
[ 11] 0.0-60.1 sec 39.5 MBytes 5.51 Mbits/sec
[ 12] 0.0-60.0 sec 73.6 MBytes 10.3 Mbits/sec
[ 9] 0.0-60.1 sec 46.8 MBytes 6.54 Mbits/sec
[ 3] 0.0-60.3 sec 49.1 MBytes 6.83 Mbits/sec
[SUM] 0.0-60.3 sec 541 MBytes 75.3 Mbits/sec
Answer: UDP does not back off in the face of loss: the client keeps sending at 20Mbps despite the 0.43% loss.
TCP, on the other hand, applies its rate/congestion control mechanisms when facing packet loss, and hence achieves lower throughput.
6. Following question 5: how can we enforce fairness among these 11 flows using tc qdisc? Prove it.
Let's try the following command and see how it goes (it simply applies the stochastic fair queuing (sfq) discipline):
sudo /sbin/tc qdisc add dev eth2 root handle 1:0 sfq
UDP throughput:
[ 3] 0.0-60.0 sec 141 MBytes 19.7 Mbits/sec
[ 3] Sent 100367 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 67.3 MBytes 9.40 Mbits/sec 2.355 ms 52361/100366 (52%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order
TCP throughput:
[ 5] 0.0-57.0 sec 58.6 MBytes 8.62 Mbits/sec
[ 4] 0.0-57.0 sec 58.7 MBytes 8.63 Mbits/sec
[ 3] 0.0-57.0 sec 58.6 MBytes 8.63 Mbits/sec
[ 9] 0.0-57.0 sec 58.3 MBytes 8.57 Mbits/sec
[ 8] 0.0-57.0 sec 58.6 MBytes 8.63 Mbits/sec
[ 7] 0.0-57.0 sec 58.2 MBytes 8.57 Mbits/sec
[ 10] 0.0-57.1 sec 57.4 MBytes 8.44 Mbits/sec
[ 6] 0.0-57.0 sec 58.5 MBytes 8.61 Mbits/sec
[ 11] 0.0-57.0 sec 57.4 MBytes 8.44 Mbits/sec
[ 12] 0.0-60.0 sec 90.4 MBytes 12.6 Mbits/sec
[SUM] 0.0-60.0 sec 615 MBytes 86.0 Mbits/sec
Answer: It works. The UDP throughput (9.40 Mbps delivered) is slightly higher than each TCP flow's, probably because of TCP's slow start.
It is a little surprising that one of the TCP flows gets much higher throughput than the rest.
Maybe it is because I ran both UDP and TCP for 60 seconds and that TCP connection was the last one created.
As a result, when the UDP session ends, the last TCP session is still active for about a second, boosting its total throughput. Just a guess.
7. Change the NIC delay to 100ms, remove fair queuing, and see how it goes:
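A sketch of the commands, assuming the same eth2 interface used for sfq above (the first removes the sfq qdisc, the second installs the 100ms delay):
sudo /sbin/tc qdisc del dev eth2 root
sudo /sbin/tc qdisc add dev eth2 root handle 1:0 netem delay 100ms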
Result (I am using Cubic):
[ 3] 0.0-60.0 sec 567 MBytes 79.3 Mbits/sec
Now add a 75ms delay variance and see how it goes:
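In netem, a second time value after the delay specifies the random variance; a sketch, again assuming eth2:
sudo /sbin/tc qdisc change dev eth2 root netem delay 100ms 75ms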
Result (again, using Cubic):
[ 3] 0.0-60.0 sec 24.4 MBytes 3.41 Mbits/sec
Answer: Wow! It surprised me that reordering can affect TCP's performance so much.
Now tweak the parameter in /proc/sys/net/ipv4/tcp_reordering and see what's the best you can get.
The default value is 3, meaning TCP retransmits once 3 duplicate ACKs are received.
In our case no packet is actually lost, so no retransmission is needed; the duplicate ACKs are caused purely by reordering.
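The value can be changed with the same echo/tee pattern used for the congestion control sysctl, e.g.:
echo 100 | sudo tee /proc/sys/net/ipv4/tcp_reordering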
I changed the number to 100; here is the result:
[ 3] 0.0-60.0 sec 32.6 MBytes 4.55 Mbits/sec
Well, not a big boost. Let me change it to 100000; here is the result:
[ 3] 0.0-60.3 sec 62.4 MBytes 8.69 Mbits/sec
Well, let me try a HUGE number, 1000000000000000, which basically disables TCP's fast retransmission, and see how it goes:
[ 3] 0.0-60.3 sec 71.0 MBytes 9.88 Mbits/sec
What if I use Reno? Just curious.
tcp_reordering = 3, result:
[ 3] 0.0-60.1 sec 40.6 MBytes 5.67 Mbits/sec
tcp_reordering = 100000000000000, result:
[ 3] 0.0-60.0 sec 71.8 MBytes 10.0 Mbits/sec
Answer: Too high a value of tcp_reordering disables TCP's fast retransmission; too low a value causes unnecessary retransmissions, which waste bandwidth.
8. Use Cubic with SACK on (the default), set loss to 10%, and see how it goes.
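SACK can be toggled through its sysctl with the usual echo/tee pattern:
echo 0 | sudo tee /proc/sys/net/ipv4/tcp_sack    (disable SACK)
echo 1 | sudo tee /proc/sys/net/ipv4/tcp_sack    (restore the default)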
Result (repeated 5 times):
[ 3] 0.0-60.9 sec 14.3 MBytes 1.97 Mbits/sec
[ 3] 0.0-60.0 sec 15.3 MBytes 2.13 Mbits/sec
[ 3] 0.0-60.0 sec 19.3 MBytes 2.70 Mbits/sec
[ 3] 0.0-60.2 sec 16.5 MBytes 2.30 Mbits/sec
[ 3] 0.0-60.1 sec 19.1 MBytes 2.67 Mbits/sec
Disable tcp_sack and here is the result:
[ 3] 0.0-60.0 sec 9.91 MBytes 1.39 Mbits/sec
[ 3] 0.0-60.1 sec 11.4 MBytes 1.59 Mbits/sec
[ 3] 0.0-60.2 sec 13.4 MBytes 1.87 Mbits/sec
[ 3] 0.0-60.0 sec 10.0 MBytes 1.40 Mbits/sec
[ 3] 0.0-60.1 sec 10.5 MBytes 1.47 Mbits/sec
Answer: SACK is most beneficial when the receiver keeps sending duplicate ACKs back to the sender.
So on a long-delay, high-bandwidth, lossy network, SACK is very useful.
9. Compile and use a customized congestion control mechanism, exp, and see how it goes:
In the new exp congestion control module, we use:
a slow start exponential factor of 3, instead of 2 as in Reno;
ssthresh x 3/4 when entering loss recovery, instead of ssthresh/2 as in Reno.
The source code for the congestion control is in the tar ball we previously downloaded.
Note: you do need to change the source code to make this happen.
You can check out the Reno source code for reference.
Steps to compile the kernel code:
- Comment out the line:
exclude=mkinitrd* kernel*
in the file /etc/yum.conf, to allow yum to install kernel headers.
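For example, this one-liner comments the line out (assuming it appears exactly as shown above):
sudo sed -i 's/^exclude=mkinitrd\* kernel\*/#&/' /etc/yum.conf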
- Install the required packages with this command:
sudo yum install kernel-devel kernel-headers
- Fix up the kernel version in the installed headers to match the running kernel; this can be tricky, but these steps should handle it:
- a). Find your kernel sources. They are in /usr/src/kernel, in a directory that depends on the installed version. As of the time this page was created,
the directory is 2.6.27.41-170.2.117.fc10.i686. We call this directory $KERNELSRC.
- b). Identify your running kernel version by running uname -r. It will be something like 2.6.27.5-117.emulab1.fc10.i686. The first three dotted components
(2.6.27, in this case) are the major, minor, and micro versions, respectively, and the remainder of the version string (.5-117.emulab1.fc10.i686) is the extraversion.
Note the extraversion of your kernel.
- c). In $KERNELSRC/Makefile, find the line beginning with EXTRAVERSION. Replace its value with the extraversion of your kernel.
- d). Update the kernel header tree to this new version by running the command:
sudo make include/linux/utsrelease.h
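Putting steps (a)-(d) together, a sketch using the example versions from the text (substitute your own $KERNELSRC directory and extraversion; the sed one-liner is just one way to do the edit in step (c)):
cd /usr/src/kernel/2.6.27.41-170.2.117.fc10.i686
uname -r    (e.g. 2.6.27.5-117.emulab1.fc10.i686)
sudo sed -i 's/^EXTRAVERSION.*/EXTRAVERSION = .5-117.emulab1.fc10.i686/' Makefile
sudo make include/linux/utsrelease.h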
After you compile the source code, you will find a kernel module named 'tcp_exp.ko' has been created. Use "sudo insmod tcp_exp.ko" to insert the module into the kernel.
You can use "sudo rmmod tcp_exp" to remove the module later on.
Once the module is complete and loaded into the kernel, the algorithm implemented by the module can be selected in the same manner that reno and cubic were
selected in previous sections, by placing the keyword exp in /proc/sys/net/ipv4/tcp_congestion_control.
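That is, once tcp_exp.ko is loaded:
echo exp | sudo tee /proc/sys/net/ipv4/tcp_congestion_control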
Comparison: this mechanism increases TCP's sending rate during slow start compared with Reno;
it also cuts the slow start threshold less when entering loss recovery.
Thus it is a more aggressive algorithm and should outperform Reno for a single connection facing loss/delay.
However, when the number of connections is large, it can be beaten by Reno, simply because its aggressiveness introduces more loss when the network condition is bad.
Performance Results:
Under 500ms delay:
Single Reno connection:
[ 3] 0.0-60.3 sec 127 MBytes 17.7 Mbits/sec
Single exp connection:
[ 3] 0.0-60.3 sec 11.1 MBytes 1.54 Mbits/sec
30 Reno connections:
[ 12] 0.0-51.0 sec 3.06 MBytes 504 Kbits/sec
[ 15] 0.0-51.0 sec 2.52 MBytes 414 Kbits/sec
[ 10] 0.0-51.0 sec 2.64 MBytes 434 Kbits/sec
[ 3] 0.0-51.0 sec 3.00 MBytes 493 Kbits/sec
[ 4] 0.0-51.1 sec 4.94 MBytes 811 Kbits/sec
[ 13] 0.0-51.1 sec 2.95 MBytes 485 Kbits/sec
[ 14] 0.0-51.2 sec 2.88 MBytes 471 Kbits/sec
[ 16] 0.0-51.2 sec 2.38 MBytes 390 Kbits/sec
[ 11] 0.0-51.3 sec 2.55 MBytes 418 Kbits/sec
[ 18] 0.0-51.3 sec 3.09 MBytes 505 Kbits/sec
[ 7] 0.0-51.3 sec 3.92 MBytes 641 Kbits/sec
[ 6] 0.0-51.4 sec 5.17 MBytes 845 Kbits/sec
[ 17] 0.0-51.4 sec 2.41 MBytes 393 Kbits/sec
[ 9] 0.0-51.9 sec 5.90 MBytes 954 Kbits/sec
[ 8] 0.0-52.3 sec 4.63 MBytes 744 Kbits/sec
[ 5] 0.0-52.3 sec 4.33 MBytes 694 Kbits/sec
[ 19] 0.0-54.3 sec 9.04 MBytes 1.40 Mbits/sec
[ 23] 0.0-54.4 sec 6.91 MBytes 1.07 Mbits/sec
[ 22] 0.0-54.4 sec 10.8 MBytes 1.67 Mbits/sec
[ 21] 0.0-54.4 sec 6.48 MBytes 1.00 Mbits/sec
[ 24] 0.0-54.4 sec 5.59 MBytes 862 Kbits/sec
[ 25] 0.0-54.5 sec 9.11 MBytes 1.40 Mbits/sec
[ 20] 0.0-54.9 sec 5.80 MBytes 887 Kbits/sec
[ 32] 0.0-60.0 sec 3.20 MBytes 447 Kbits/sec
[ 31] 0.0-60.1 sec 3.12 MBytes 435 Kbits/sec
[ 27] 0.0-60.1 sec 2.52 MBytes 351 Kbits/sec
[ 28] 0.0-60.1 sec 2.86 MBytes 399 Kbits/sec
[ 30] 0.0-60.2 sec 2.01 MBytes 280 Kbits/sec
[ 29] 0.0-60.3 sec 2.62 MBytes 365 Kbits/sec
[ 26] 0.0-60.4 sec 2.92 MBytes 406 Kbits/sec
[SUM] 0.0-60.4 sec 129 MBytes 18.0 Mbits/sec
30 exp connections:
[ 5] 0.0-57.1 sec 8.42 MBytes 1.24 Mbits/sec
[ 16] 0.0-57.2 sec 2.67 MBytes 392 Kbits/sec
[ 14] 0.0-57.2 sec 2.63 MBytes 386 Kbits/sec
[ 10] 0.0-57.3 sec 2.60 MBytes 381 Kbits/sec
[ 4] 0.0-57.3 sec 7.45 MBytes 1.09 Mbits/sec
[ 11] 0.0-57.3 sec 2.32 MBytes 340 Kbits/sec
[ 17] 0.0-57.3 sec 2.79 MBytes 408 Kbits/sec
[ 12] 0.0-57.3 sec 3.04 MBytes 445 Kbits/sec
[ 15] 0.0-57.4 sec 2.55 MBytes 372 Kbits/sec
[ 13] 0.0-57.4 sec 2.93 MBytes 428 Kbits/sec
[ 7] 0.0-57.6 sec 4.09 MBytes 595 Kbits/sec
[ 3] 0.0-57.7 sec 9.19 MBytes 1.34 Mbits/sec
[ 8] 0.0-57.9 sec 2.77 MBytes 402 Kbits/sec
[ 6] 0.0-58.0 sec 28.8 MBytes 4.16 Mbits/sec
[ 18] 0.0-58.7 sec 3.04 MBytes 434 Kbits/sec
[ 31] 0.0-60.0 sec 10.1 MBytes 1.41 Mbits/sec
[ 32] 0.0-60.0 sec 3.24 MBytes 453 Kbits/sec
[ 24] 0.0-60.2 sec 4.41 MBytes 614 Kbits/sec
[ 23] 0.0-60.3 sec 8.37 MBytes 1.16 Mbits/sec
[ 28] 0.0-60.3 sec 3.45 MBytes 480 Kbits/sec
[ 29] 0.0-60.3 sec 2.55 MBytes 356 Kbits/sec
[ 30] 0.0-60.4 sec 3.30 MBytes 459 Kbits/sec
[ 27] 0.0-60.3 sec 2.64 MBytes 367 Kbits/sec
[ 26] 0.0-60.4 sec 2.66 MBytes 370 Kbits/sec
[ 22] 0.0-60.3 sec 3.71 MBytes 516 Kbits/sec
[ 19] 0.0-60.8 sec 3.48 MBytes 480 Kbits/sec
[ 20] 0.0-61.0 sec 3.55 MBytes 489 Kbits/sec
[ 25] 0.0-61.3 sec 4.31 MBytes 590 Kbits/sec
[ 21] 0.0-61.5 sec 5.57 MBytes 759 Kbits/sec
[ 9] 0.0-61.9 sec 4.15 MBytes 563 Kbits/sec
[SUM] 0.0-61.9 sec 151 MBytes 20.4 Mbits/sec
Under 5% loss:
Single Reno connection:
[ 3] 0.0-60.0 sec 64.0 MBytes 8.95 Mbits/sec
Single exp connection:
[ 3] 0.0-60.0 sec 124 MBytes 17.3 Mbits/sec
30 Reno connections:
[ 12] 0.0-51.0 sec 17.8 MBytes 2.92 Mbits/sec
[ 11] 0.0-51.0 sec 18.8 MBytes 3.09 Mbits/sec
[ 10] 0.0-51.0 sec 19.1 MBytes 3.14 Mbits/sec
[ 4] 0.0-51.0 sec 16.5 MBytes 2.71 Mbits/sec
[ 6] 0.0-51.0 sec 18.6 MBytes 3.06 Mbits/sec
[ 8] 0.0-51.0 sec 18.8 MBytes 3.10 Mbits/sec
[ 3] 0.0-51.0 sec 19.9 MBytes 3.27 Mbits/sec
[ 7] 0.0-51.2 sec 18.3 MBytes 2.99 Mbits/sec
[ 9] 0.0-51.3 sec 19.5 MBytes 3.18 Mbits/sec
[ 14] 0.0-54.0 sec 19.3 MBytes 3.00 Mbits/sec
[ 13] 0.0-54.0 sec 19.5 MBytes 3.02 Mbits/sec
[ 17] 0.0-54.0 sec 19.5 MBytes 3.03 Mbits/sec
[ 24] 0.0-54.0 sec 19.8 MBytes 3.07 Mbits/sec
[ 22] 0.0-54.0 sec 19.8 MBytes 3.08 Mbits/sec
[ 23] 0.0-54.0 sec 19.2 MBytes 2.98 Mbits/sec
[ 21] 0.0-54.0 sec 18.8 MBytes 2.91 Mbits/sec
[ 20] 0.0-54.0 sec 19.6 MBytes 3.05 Mbits/sec
[ 19] 0.0-54.1 sec 19.5 MBytes 3.03 Mbits/sec
[ 32] 0.0-54.0 sec 19.5 MBytes 3.03 Mbits/sec
[ 18] 0.0-54.2 sec 19.7 MBytes 3.06 Mbits/sec
[ 15] 0.0-54.2 sec 19.2 MBytes 2.98 Mbits/sec
[ 5] 0.0-54.7 sec 19.3 MBytes 2.96 Mbits/sec
[ 27] 0.0-60.0 sec 24.2 MBytes 3.39 Mbits/sec
[ 28] 0.0-60.0 sec 25.7 MBytes 3.59 Mbits/sec
[ 26] 0.0-60.0 sec 25.7 MBytes 3.59 Mbits/sec
[ 25] 0.0-60.1 sec 25.0 MBytes 3.49 Mbits/sec
[ 31] 0.0-60.0 sec 27.3 MBytes 3.82 Mbits/sec
[ 30] 0.0-60.0 sec 24.7 MBytes 3.45 Mbits/sec
[ 16] 0.0-60.0 sec 27.5 MBytes 3.85 Mbits/sec
[ 29] 0.0-60.6 sec 23.4 MBytes 3.24 Mbits/sec
[SUM] 0.0-60.6 sec 623 MBytes 86.3 Mbits/sec
30 exp connections:
[ 20] 0.0-39.0 sec 13.9 MBytes 2.99 Mbits/sec
[ 10] 0.0-39.0 sec 13.8 MBytes 2.96 Mbits/sec
[ 14] 0.0-39.0 sec 13.4 MBytes 2.89 Mbits/sec
[ 8] 0.0-39.0 sec 12.7 MBytes 2.73 Mbits/sec
[ 6] 0.0-39.0 sec 14.7 MBytes 3.15 Mbits/sec
[ 4] 0.0-39.1 sec 13.9 MBytes 2.97 Mbits/sec
[ 5] 0.0-39.0 sec 13.0 MBytes 2.79 Mbits/sec
[ 3] 0.0-39.0 sec 13.1 MBytes 2.81 Mbits/sec
[ 11] 0.0-39.0 sec 14.4 MBytes 3.09 Mbits/sec
[ 12] 0.0-39.0 sec 13.9 MBytes 2.98 Mbits/sec
[ 9] 0.0-39.0 sec 13.7 MBytes 2.95 Mbits/sec
[ 13] 0.0-39.0 sec 14.8 MBytes 3.19 Mbits/sec
[ 19] 0.0-39.0 sec 12.7 MBytes 2.73 Mbits/sec
[ 18] 0.0-39.0 sec 12.9 MBytes 2.76 Mbits/sec
[ 17] 0.0-39.0 sec 13.5 MBytes 2.90 Mbits/sec
[ 7] 0.0-39.2 sec 14.3 MBytes 3.07 Mbits/sec
[ 23] 0.0-42.0 sec 16.7 MBytes 3.34 Mbits/sec
[ 22] 0.0-42.0 sec 15.9 MBytes 3.18 Mbits/sec
[ 27] 0.0-42.0 sec 16.9 MBytes 3.38 Mbits/sec
[ 26] 0.0-42.0 sec 16.7 MBytes 3.33 Mbits/sec
[ 25] 0.0-42.0 sec 16.6 MBytes 3.32 Mbits/sec
[ 24] 0.0-42.0 sec 15.9 MBytes 3.18 Mbits/sec
[ 28] 0.0-42.0 sec 16.3 MBytes 3.25 Mbits/sec
[ 21] 0.0-42.0 sec 16.5 MBytes 3.28 Mbits/sec
[ 16] 0.0-42.0 sec 16.5 MBytes 3.29 Mbits/sec
[ 30] 0.0-48.0 sec 29.2 MBytes 5.09 Mbits/sec
[ 29] 0.0-48.0 sec 27.8 MBytes 4.86 Mbits/sec
[ 31] 0.0-48.0 sec 29.8 MBytes 5.21 Mbits/sec
[ 32] 0.0-48.1 sec 25.5 MBytes 4.44 Mbits/sec
[ 15] 0.0-60.0 sec 52.9 MBytes 7.40 Mbits/sec
[SUM] 0.0-60.0 sec 532 MBytes 74.3 Mbits/sec
Under 500ms delay and 5% loss:
Single Reno connection:
[ 3] 0.0-61.0 sec 880 KBytes 118 Kbits/sec
Single exp connection:
[ 3] 0.0-60.5 sec 1016 KBytes 138 Kbits/sec
30 Reno connections:
[ 16] 0.0-39.2 sec 528 KBytes 110 Kbits/sec
[ 13] 0.0-39.4 sec 600 KBytes 125 Kbits/sec
[ 12] 0.0-39.6 sec 368 KBytes 76.1 Kbits/sec
[ 11] 0.0-39.7 sec 584 KBytes 120 Kbits/sec
[ 14] 0.0-39.8 sec 560 KBytes 115 Kbits/sec
[ 8] 0.0-39.8 sec 448 KBytes 92.1 Kbits/sec
[ 10] 0.0-40.0 sec 456 KBytes 93.5 Kbits/sec
[ 15] 0.0-40.0 sec 392 KBytes 80.2 Kbits/sec
[ 5] 0.0-40.3 sec 448 KBytes 91.0 Kbits/sec
[ 6] 0.0-40.5 sec 400 KBytes 80.9 Kbits/sec
[ 3] 0.0-40.5 sec 512 KBytes 103 Kbits/sec
[ 4] 0.0-40.9 sec 416 KBytes 83.3 Kbits/sec
[ 17] 0.0-41.3 sec 480 KBytes 95.1 Kbits/sec
[ 9] 0.0-41.6 sec 536 KBytes 105 Kbits/sec
[ 18] 0.0-42.5 sec 496 KBytes 95.5 Kbits/sec
[ 25] 0.0-42.6 sec 392 KBytes 75.5 Kbits/sec
[ 29] 0.0-42.6 sec 504 KBytes 96.9 Kbits/sec
[ 24] 0.0-42.7 sec 608 KBytes 117 Kbits/sec
[ 19] 0.0-42.7 sec 520 KBytes 99.8 Kbits/sec
[ 7] 0.0-43.1 sec 584 KBytes 111 Kbits/sec
[ 26] 0.0-43.1 sec 464 KBytes 88.1 Kbits/sec
[ 23] 0.0-43.2 sec 512 KBytes 97.1 Kbits/sec
[ 30] 0.0-43.2 sec 376 KBytes 71.3 Kbits/sec
[ 32] 0.0-43.2 sec 576 KBytes 109 Kbits/sec
[ 27] 0.0-43.5 sec 584 KBytes 110 Kbits/sec
[ 31] 0.0-43.6 sec 456 KBytes 85.7 Kbits/sec
[ 28] 0.0-43.8 sec 488 KBytes 91.3 Kbits/sec
[ 21] 0.0-49.4 sec 592 KBytes 98.3 Kbits/sec
[ 22] 0.0-51.6 sec 664 KBytes 105 Kbits/sec
[ 20] 0.0-60.8 sec 696 KBytes 93.8 Kbits/sec
[SUM] 0.0-60.8 sec 14.9 MBytes 2.05 Mbits/sec
30 exp connections:
[ 3] 0.0-51.1 sec 824 KBytes 132 Kbits/sec
[ 19] 0.0-51.2 sec 720 KBytes 115 Kbits/sec
[ 14] 0.0-51.2 sec 816 KBytes 130 Kbits/sec
[ 5] 0.0-51.3 sec 888 KBytes 142 Kbits/sec
[ 8] 0.0-51.3 sec 1008 KBytes 161 Kbits/sec
[ 13] 0.0-51.3 sec 832 KBytes 133 Kbits/sec
[ 6] 0.0-51.4 sec 776 KBytes 124 Kbits/sec
[ 4] 0.0-51.5 sec 808 KBytes 129 Kbits/sec
[ 18] 0.0-51.5 sec 664 KBytes 106 Kbits/sec
[ 9] 0.0-51.7 sec 712 KBytes 113 Kbits/sec
[ 15] 0.0-51.8 sec 944 KBytes 149 Kbits/sec
[ 7] 0.0-51.9 sec 600 KBytes 94.7 Kbits/sec
[ 11] 0.0-51.9 sec 776 KBytes 122 Kbits/sec
[ 17] 0.0-52.0 sec 744 KBytes 117 Kbits/sec
[ 16] 0.0-52.0 sec 824 KBytes 130 Kbits/sec
[ 12] 0.0-52.0 sec 656 KBytes 103 Kbits/sec
[ 22] 0.0-54.4 sec 1.08 MBytes 166 Kbits/sec
[ 25] 0.0-54.4 sec 888 KBytes 134 Kbits/sec
[ 26] 0.0-54.6 sec 1.05 MBytes 161 Kbits/sec
[ 21] 0.0-54.7 sec 1.00 MBytes 153 Kbits/sec
[ 30] 0.0-54.8 sec 952 KBytes 142 Kbits/sec
[ 23] 0.0-55.0 sec 960 KBytes 143 Kbits/sec
[ 20] 0.0-55.0 sec 1008 KBytes 150 Kbits/sec
[ 27] 0.0-55.2 sec 1.04 MBytes 158 Kbits/sec
[ 28] 0.0-55.3 sec 872 KBytes 129 Kbits/sec
[ 24] 0.0-55.5 sec 728 KBytes 107 Kbits/sec
[ 29] 0.0-57.1 sec 848 KBytes 122 Kbits/sec
[ 10] 0.0-60.4 sec 952 KBytes 129 Kbits/sec
[ 31] 0.0-60.8 sec 808 KBytes 109 Kbits/sec
[ 32] 0.0-61.7 sec 1.12 MBytes 152 Kbits/sec
[SUM] 0.0-61.7 sec 25.4 MBytes 3.45 Mbits/sec