TCP ASSIGNMENT
STEPS FOR EXECUTING EXERCISE
Now that we have reserved the nodes, let's log on to each node and run the experiment.
Recall that the nodes reserved are pc73, pc81, pc55, pc47, pc84.
pc47 is top; pc84 is bottom; pc81 is left; pc55 is right; pc73 is center. We can figure this out from the output of the "createsliver" step.
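We log in to each node with ssh, using the login information from the createsliver/omni output. A minimal sketch, assuming an Emulab-style hostname and your own GENI username (both are assumptions; substitute whatever the sliver output reports):
ssh yourusername@pc73.emulab.net  # hostname form and username are assumptions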
I typically draw the topology, labeling the nodes, for quick reference; but we can also use the GENI Portal to see a graphical representation of the topology with named nodes.
However, the Flack interface (which we can launch via the GENI Portal) still does not show the interface numbers of each node (we might need those interface numbers when we debug our experiment).
For example, if we want to see the traffic sent from center to top, we need to run "tcpdump" on the interface that center uses to connect to top.
Right now we cannot find that interface via the Flack interface, but we can see it in the output of the "createsliver" step (another reason I prefer the command-line omni :-)).
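As a sketch of that debugging step, assuming the createsliver output tells us that center reaches top via eth1 (the interface name here is just an assumed example):
sudo tcpdump -i eth1 -n  # eth1 is assumed; use the interface from the createsliver output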
Useful commands:
Change the TCP congestion control algorithm in use:
echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
echo cubic | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
Change the delay/loss of a particular interface:
sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 200ms loss 5%
Restore the default delay/loss on the NIC:
sudo /sbin/tc qdisc del dev eth1 root
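To double-check what is currently in effect before running a test, we can print the active congestion control algorithm and list the qdisc configuration (eth1 again is an assumed interface name):
cat /proc/sys/net/ipv4/tcp_congestion_control
sudo /sbin/tc qdisc show dev eth1  # eth1 is assumed; use the actual experiment interface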
1. Compare Cubic and Reno with no loss/delay introduced:
Use the default TCP congestion control (Cubic) on left and right, and run iperf between them:
On left, run:
/usr/local/etc/emulab/emulab-iperf -s
On right, run (10.10.1.1 is the IP address of left):
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60
Results: 94.2 Mbps for Cubic when there is no delay introduced:
------------------------------------------------------------
Client connecting to 10.10.1.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.2.1 port 53755 connected with 10.10.1.1 port 5001
[  3]  0.0-60.0 sec   674 MBytes  94.2 Mbits/sec
Now let both left and right use Reno as the TCP congestion control mechanism, and repeat the experiment:
echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
Results: 94.2 Mbps for Reno when there is no delay introduced:
------------------------------------------------------------
Client connecting to 10.10.1.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.2.1 port 53073 connected with 10.10.1.1 port 5001
[  3]  0.0-60.0 sec   674 MBytes  94.2 Mbits/sec
Answer: Cubic and Reno perform the same when no loss/delay is introduced.
2. Add 300ms delay (using netem, as shown below) and see how it goes:
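A minimal sketch of introducing the delay with netem, reusing the syntax from the useful-commands section above; the interface name and the node being shaped are assumptions (shape whichever interface the createsliver output identifies for this link):
sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 300ms  # eth1 is assumed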
With Cubic, here is the result:
[  3]  0.0-1800.2 sec  6.57 GBytes  31.3 Mbits/sec
[  3]  0.0-60.2 sec    213 MBytes   29.7 Mbits/sec
With Reno, here is the result:
[  3]  0.0-1800.1 sec  6.57 GBytes  31.3 Mbits/sec
[  3]  0.0-60.2 sec    214 MBytes   29.8 Mbits/sec
Answer: I was hoping to see Cubic outperform Reno, but they perform about the same in this case.
3. Repeat the experiments with 30 parallel connections (the -P 30 option in iperf; see the client command below) and see how it goes:
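The client side would look like this, reusing the earlier command with the parallel option added (-P 30 opens 30 simultaneous connections; the server on left runs with -s as before):
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60 -P 30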
With Cubic, here is the result:
[ 12]  0.0-57.0 sec  13.1 MBytes  1.93 Mbits/sec
[ 16]  0.0-57.0 sec  13.8 MBytes  2.03 Mbits/sec
[ 14]  0.0-57.3 sec  20.2 MBytes  2.96 Mbits/sec
[ 18]  0.0-57.4 sec  18.4 MBytes  2.69 Mbits/sec
[  8]  0.0-57.4 sec  20.3 MBytes  2.97 Mbits/sec
[  7]  0.0-57.3 sec  23.7 MBytes  3.48 Mbits/sec
[  6]  0.0-57.3 sec  23.3 MBytes  3.41 Mbits/sec
[  5]  0.0-57.3 sec  29.4 MBytes  4.30 Mbits/sec
[  4]  0.0-57.3 sec  21.0 MBytes  3.07 Mbits/sec
[  3]  0.0-57.5 sec  23.3 MBytes  3.41 Mbits/sec
[ 11]  0.0-57.5 sec  18.5 MBytes  2.70 Mbits/sec
[ 15]  0.0-57.5 sec  23.7 MBytes  3.46 Mbits/sec
[ 13]  0.0-57.6 sec  26.4 MBytes  3.85 Mbits/sec
[ 17]  0.0-57.6 sec  19.3 MBytes  2.81 Mbits/sec
[  9]  0.0-57.8 sec  15.3 MBytes  2.22 Mbits/sec
[ 10]  0.0-57.9 sec  20.5 MBytes  2.97 Mbits/sec
[ 28]  0.0-60.0 sec  23.8 MBytes  3.32 Mbits/sec
[ 30]  0.0-60.0 sec  15.9 MBytes  2.22 Mbits/sec
[ 29]  0.0-60.1 sec  14.7 MBytes  2.05 Mbits/sec
[ 32]  0.0-60.1 sec  27.3 MBytes  3.81 Mbits/sec
[ 19]  0.0-60.1 sec  20.5 MBytes  2.86 Mbits/sec
[ 23]  0.0-60.1 sec  16.2 MBytes  2.25 Mbits/sec
[ 20]  0.0-60.1 sec  30.0 MBytes  4.19 Mbits/sec
[ 26]  0.0-60.1 sec  14.6 MBytes  2.04 Mbits/sec
[ 21]  0.0-60.2 sec  22.1 MBytes  3.07 Mbits/sec
[ 27]  0.0-60.3 sec  19.9 MBytes  2.77 Mbits/sec
[ 22]  0.0-60.4 sec  24.7 MBytes  3.44 Mbits/sec
[ 24]  0.0-60.4 sec  26.1 MBytes  3.62 Mbits/sec
[ 25]  0.0-60.5 sec  28.0 MBytes  3.88 Mbits/sec
[ 31]  0.0-60.5 sec  34.2 MBytes  4.74 Mbits/sec
[SUM]  0.0-60.5 sec   648 MBytes  89.8 Mbits/sec
With Reno, here is the result:
[ 17]  0.0-57.1 sec  7.38 MBytes  1.08 Mbits/sec
[ 15]  0.0-57.0 sec  7.33 MBytes  1.08 Mbits/sec
[ 14]  0.0-57.0 sec  7.35 MBytes  1.08 Mbits/sec
[ 18]  0.0-57.0 sec  7.16 MBytes  1.05 Mbits/sec
[ 13]  0.0-57.1 sec  7.31 MBytes  1.08 Mbits/sec
[  3]  0.0-57.2 sec  25.7 MBytes  3.77 Mbits/sec
[ 12]  0.0-57.2 sec  7.33 MBytes  1.08 Mbits/sec
[  5]  0.0-57.2 sec  87.5 MBytes  12.8 Mbits/sec
[  4]  0.0-57.2 sec  26.5 MBytes  3.88 Mbits/sec
[ 11]  0.0-57.2 sec  7.32 MBytes  1.07 Mbits/sec
[ 10]  0.0-57.3 sec  7.38 MBytes  1.08 Mbits/sec
[ 16]  0.0-57.3 sec  7.41 MBytes  1.09 Mbits/sec
[  8]  0.0-57.4 sec  29.6 MBytes  4.33 Mbits/sec
[  7]  0.0-57.7 sec  23.7 MBytes  3.45 Mbits/sec
[  9]  0.0-57.7 sec  23.3 MBytes  3.38 Mbits/sec
[  6]  0.0-58.1 sec  64.6 MBytes  9.33 Mbits/sec
[ 25]  0.0-60.0 sec  43.4 MBytes  6.06 Mbits/sec
[ 21]  0.0-60.0 sec  36.2 MBytes  5.05 Mbits/sec
[ 20]  0.0-60.2 sec  27.3 MBytes  3.81 Mbits/sec
[ 24]  0.0-60.1 sec  28.2 MBytes  3.94 Mbits/sec
[ 23]  0.0-60.1 sec  30.3 MBytes  4.23 Mbits/sec
[ 27]  0.0-60.0 sec  7.80 MBytes  1.09 Mbits/sec
[ 26]  0.0-60.1 sec  7.84 MBytes  1.09 Mbits/sec
[ 30]  0.0-60.1 sec  7.84 MBytes  1.09 Mbits/sec
[ 29]  0.0-60.1 sec  7.74 MBytes  1.08 Mbits/sec
[ 31]  0.0-60.1 sec  7.82 MBytes  1.09 Mbits/sec
[ 19]  0.0-60.3 sec  29.1 MBytes  4.04 Mbits/sec
[ 22]  0.0-60.2 sec  30.9 MBytes  4.31 Mbits/sec
[ 32]  0.0-60.1 sec  32.8 MBytes  4.58 Mbits/sec
[ 28]  0.0-60.1 sec  7.82 MBytes  1.09 Mbits/sec
[SUM]  0.0-60.3 sec   652 MBytes  90.7 Mbits/sec
The above results show that the aggregate throughput is about the same for Cubic and Reno.
Apparently, when you use multiple TCP connections, the overall bandwidth utilization is higher.
But the throughput of each individual TCP connection varies.