
<TCP ASSIGNMENT>


STEPS FOR EXECUTING EXERCISE

Now that we have reserved the nodes, let's log in to each node and run the experiment.
You can find the nodes you reserved in the output of the "createsliver" step.
Alternatively, you can use "readyToLogin.py" to show the topology as well as the login commands.
Or, if you are a GENI Portal user, use the "details" button to check the details of your slice.
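For example, if you used Omni to reserve your resources, a command along these lines should print the login command for each node; the slice name myslice is only a placeholder, and your tool setup may differ:

readyToLogin.py myslice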

Useful commands:
Change the congestion control algorithm in use:

echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
echo cubic | sudo tee /proc/sys/net/ipv4/tcp_congestion_control

Change the delay/loss of a particular interface:

sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 200ms loss 5%

Remove the emulated delay/loss from an interface (restoring its default behavior):

sudo /sbin/tc qdisc del dev eth1 root
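To check which queueing discipline is currently installed on an interface (eth1 is used here only as an example; substitute your actual interface name), you can run:

sudo /sbin/tc qdisc show dev eth1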

Exercises

3.1 Comparison of Reno and CUBIC:

GENI nodes provide two TCP congestion control algorithms, CUBIC and Reno, that can be chosen at run-time.
The available algorithms are listed in the file /proc/sys/net/ipv4/tcp_available_congestion_control.
The “Reno” congestion control provided by the Linux kernel is actually the NewReno algorithm, but we will refer to it as Reno here to be consistent with Linux terminology.
Note that the congestion control actions are very similar between Reno and NewReno, but NewReno has a more nuanced approach to loss recovery.
These congestion control algorithms can be chosen by placing the keywords reno or cubic in the file /proc/sys/net/ipv4/tcp_congestion_control. For example, to configure a host to use the Reno algorithm, use:

echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
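To confirm which algorithm is currently in effect, read the same file back:

cat /proc/sys/net/ipv4/tcp_congestion_control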

The tc command will then be used to set up network conditions for observation and testing. For example, if eth1 is the physical interface representing the link L on the Center node, the following command on the Center node will add a 200 ms delay to all packets leaving the interface:

sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 200ms
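As a quick sanity check, you can ping across the link before and after applying the rule and compare the round-trip times; 10.10.1.1 is used here as a placeholder for the IP address of a node on the far side of link L (it is the Left node's address in the Hint/Guidance section below):

ping -c 5 10.10.1.1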

Specific network setup commands will be provided as needed.
Run an Iperf server on the Left node (see the Hint/Guidance section for the location of iperf). The Iperf client will be run on the Right node. The duration of an Iperf session (-t option) is 60 seconds unless otherwise mentioned. Note carefully that some exercises require a much longer duration; ensure that your sliver lifetimes are long enough to cover the duration of your experiment. All of the experiments should be repeated at least 5 times (especially when the interfaces include random delays or losses) to ensure confidence in the results, since transient conditions can cause significant variations in any individual run. A sketch for scripting the repeated runs is shown below.
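The following loop is one possible way to script the repetitions; it assumes the emulab-iperf path given in the Hint/Guidance section and that 10.10.1.1 is the Left node's IP address, both of which you should adjust for your own slice:

for i in 1 2 3 4 5; do
    # append each run's report to a log file for later comparison
    /usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60 >> iperf_runs.log
done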

  • 1. Question: What are the goodputs when the reno and cubic algorithms are used on the network with no emulated delay or loss? Which is better?
  • 2. Question: Qualitatively, under what conditions does bic/cubic perform better than Reno’s AIMD?
  • 3. Question: Change the delay of interface L to 300 ms using the following command, and run an Iperf session for 1800 seconds.
    sudo /sbin/tc qdisc add dev L root handle 1:0 netem limit 1000000000 delay 300ms
    

What are the goodputs of reno and cubic? Which performed better? What do you conclude?

  • 4. Question: Repeat the above experiment with 30 parallel connections and 1800 seconds for each algorithm by using the -P 30 option on Iperf. How do CUBIC and Reno differ? What do you conclude?
  • 5. Question: Remove the netem queueing discipline that causes the delay and add a loss of 5% by using the following commands on the Center node. Replace L with the appropriate physical interface. Alternatively, you can change the existing queueing discipline in place instead of deleting it and adding a new one (see the sketch at the end of the Hint/Guidance section below).
    sudo /sbin/tc qdisc del dev L root
    sudo /sbin/tc qdisc add dev L root handle 1:0 netem loss 5%
    
    How do the goodputs of Reno and CUBIC differ under loss for 60 s Iperf sessions?

  • Some Hint/Guidance on how to run the experiments:
    Use the default TCP congestion control (cubic) on Left and Right, and run iperf between them (the TCP flow goes from Right to Left):
    On left, run:
    /usr/local/etc/emulab/emulab-iperf -s 
    
    On Right, run the following (10.10.1.1 is the IP address of Left in our case; you need to find the actual IP address used by Left in your own experiment):
    /usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60
    
    Then let both Left and Right use reno as the TCP congestion control mechanism and repeat the experiments:
    echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
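
    For the alternative mentioned in question 5, an existing netem queueing discipline can be modified in place with "tc qdisc change" rather than deleted and re-added; the command below is a sketch that uses eth1 as a placeholder for the physical interface L:
    sudo /sbin/tc qdisc change dev eth1 root handle 1:0 netem loss 5%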
    

3.2 Ensuring Fairness Among Flows

Restore the network state with the following command:

sudo /sbin/tc qdisc del dev L root

Run an Iperf client on the Right node with 10 parallel TCP connections (use the -P option), connecting to an Iperf server on the Left node for 60 seconds. Simultaneously, run a 20 Mbps UDP Iperf client on the Top node connecting to a UDP Iperf server running on the Left node for 60 seconds. A sketch of the commands is shown below.
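One possible way to set this up is sketched below; it assumes the emulab-iperf location from the Hint/Guidance section above and that 10.10.1.1 is the Left node's IP address, both of which you should adjust for your own slice:

On Left (TCP server): /usr/local/etc/emulab/emulab-iperf -s
On Left (UDP server, in a second terminal): /usr/local/etc/emulab/emulab-iperf -s -u
On Right (10 parallel TCP connections): /usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -P 10 -t 60
On Top (20 Mbps UDP): /usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -u -b 20M -t 60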

  • 1. Question: What are the throughputs shown by the UDP and TCP Iperf server sessions? Why are they what they are?
  • 2. Question: Provide the necessary steps and commands to enable queueing disciplines that enforce fairness among all the 11 flows in the network, and demonstrate that your solution is effective.

3.3 Reordering

Delete the previous queueing discipline and use the following netem configuration on interface L to create a 100 ms delay:

sudo /sbin/tc qdisc del dev L root
sudo /sbin/tc qdisc add dev L root handle 1:0 netem delay 100ms

As before, run a TCP Iperf client on the Right node connecting to an Iperf server on the Left node for 60 seconds.

  • 1. Question: What is the TCP goodput?
  • 2. Question: Introduce packet reordering by adding a 75 ms delay variance to interface L with the following command:
    sudo /sbin/tc qdisc change dev L root handle 1:0 netem delay 100ms 75ms
    
    What is the TCP goodput now?
  • 3. Question: By tweaking the value in the file /proc/sys/net/ipv4/tcp_reordering (see the command sketch below), how much can the TCP goodput be improved? What is the best goodput you can show? Why is a value that is too high or too low bad for TCP?
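The reordering threshold can be changed with the same echo/tee pattern used earlier; the value 10 below is only an illustration, not a recommended setting:

echo 10 | sudo tee /proc/sys/net/ipv4/tcp_reordering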

3.4 Performance of SACK under Lossy Conditions

Using CUBIC as the congestion control algorithm, set the loss characteristics on interface L using the following commands:

sudo /sbin/tc qdisc del dev L root
sudo /sbin/tc qdisc add dev L root handle 1:0 netem loss 10%

  • 1. Question: What kind of goodput do you get using CUBIC with SACK (the default configuration)? Why do you see this performance?
  • 2. Question: Disable SACK at the sender using this command:
    echo 0 | sudo tee /proc/sys/net/ipv4/tcp_sack
    
    What is the goodput without SACK? In what circumstances is SACK most beneficial? Remember that, due to the random nature of loss events, these experiments must be repeated at least five times to draw any conclusions.
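When you have finished, SACK can be re-enabled by writing a 1 back to the same file:

echo 1 | sudo tee /proc/sys/net/ipv4/tcp_sack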

3.5 An Experimental Congestion Avoidance module for Linux

Note: The instructions for step 3.5 have not been tested with Xen VMs.

Next: Teardown Experiment