= [wiki:GENIEducation/SampleAssignments/TcpAssignment/ExerciseLayout ] =
= STEPS FOR EXECUTING EXERCISE =
Now that we have reserved the nodes, let's log in to each node and run the experiment. [[BR]]
You can find the nodes you reserved in the output of the "createsliver" step. [[BR]]
Or you can use `readyToLogin` to show the topology as well as the login commands. [[BR]]
Or, if you are a GENI Portal user, use the "details" button to check the details of your slice. [[BR]]

'''Useful commands:''' [[BR]]
Change the congestion control algorithm in use:
{{{
echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
echo cubic | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
}}}
Change the delay/loss of a particular interface:
{{{
sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 200ms loss 5%
}}}
Remove the emulated delay/loss from an interface (restore the default):
{{{
sudo /sbin/tc qdisc del dev eth1 root
}}}
{{{
#!comment
'''It is a little bit tricky to configure delay/loss on a virtual machine.'''[[BR]]
Step 1: find the qdisc handles by executing "sudo /sbin/tc qdisc"; sample output looks like the following:
{{{
[shufeng@center ~]$ sudo /sbin/tc qdisc
qdisc htb 270: dev mv6.47 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 260: dev mv6.47 parent 270:1 limit 1000
qdisc htb 150: dev mv6.41 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 140: dev mv6.41 parent 150:1 limit 1000
qdisc htb 190: dev mv6.43 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 180: dev mv6.43 parent 190:1 limit 1000
qdisc htb 230: dev mv6.45 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 220: dev mv6.45 parent 230:1 limit 1000
}}}
If the interface you want to change is mv6.43, you can find it in the following lines:
{{{
qdisc htb 190: dev mv6.43 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 180: dev mv6.43 parent 190:1 limit 1000
}}}
You can then change the delay/loss (and later restore it) by executing the following:
{{{
sudo /sbin/tc -s qdisc change dev mv6.43 parent 190:1 handle 180: netem limit 1000 delay 100ms loss 5%
sudo /sbin/tc -s qdisc change dev mv6.43 parent 190:1 handle 180: netem limit 1000
}}}
}}}

= Exercises =
'''3.1 Comparison of Reno and CUBIC: [[BR]]'''
GENI nodes provide two TCP congestion control algorithms, CUBIC and Reno, that can be chosen at run time. [[BR]]
The available algorithms are listed in the file ''/proc/sys/net/ipv4/tcp_available_congestion_control''. [[BR]]
The “Reno” congestion control provided by the Linux kernel is actually the [http://tools.ietf.org/html/rfc3782 NewReno] algorithm, but we will refer to it as Reno here to be consistent with Linux terminology. [[BR]]
Note that the congestion control actions are very similar between Reno and [http://tools.ietf.org/html/rfc3782 NewReno], but [http://tools.ietf.org/html/rfc3782 NewReno] has a more nuanced approach to loss recovery. [[BR]]
These congestion control algorithms can be chosen by placing the keyword ''reno'' or ''cubic'' in the file ''/proc/sys/net/ipv4/tcp_congestion_control''. For example, to configure a host to use the Reno algorithm, use:
{{{
echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
}}}
The tc command will then be used to set up network conditions for observation and testing. For example, if eth1 is the physical interface representing the link L on the Center node, the following command on the Center node will add a 200 ms delay to all packets leaving the interface:
{{{
sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 200ms
}}}
Specific network setup commands will be provided as needed.
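After changing the congestion control algorithm or the netem configuration, it can help to confirm that the settings actually took effect before you start measuring. A minimal check is sketched below, assuming eth1 is the interface you modified (substitute the interface from your own topology):
{{{
# Show the congestion control algorithm currently in use
cat /proc/sys/net/ipv4/tcp_congestion_control

# Show the algorithms available on this kernel
cat /proc/sys/net/ipv4/tcp_available_congestion_control

# Show the queueing disciplines (and any netem parameters) attached to eth1
sudo /sbin/tc -s qdisc show dev eth1
}}}
If the output lists a netem qdisc with the delay/loss you configured, the emulation is in place; after a "tc qdisc del ... root", only the default queueing discipline should remain.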
[[BR]]
Run an Iperf server on the Left node (see the Hint/Guidance section for the location of iperf). The Iperf client will be run on the Right node. The duration of an Iperf session (''-t'' option) is 60 seconds unless otherwise mentioned. Note carefully that some exercises require a much longer duration. Ensure that your sliver lifetimes are long enough to cover the duration of your experiment. All of the experiments should be repeated at least 5 times (especially when the interfaces include random delays or losses) to ensure confidence in the results, as transient conditions can cause significant variations in any individual run.
 - 1. Question: What are the goodputs when the reno and cubic algorithms are used on the network with no emulated delay or loss? Which is better?
 - 2. Question: Qualitatively, under what conditions does BIC/CUBIC perform better than Reno's AIMD?
 - 3. Question: Change the delay of interface L to 300 ms using the following command, and run an Iperf session for 1800 seconds.
{{{
sudo /sbin/tc qdisc add dev L root handle 1:0 netem limit 1000000000 delay 300ms
}}}
 What are the goodputs of reno and cubic? Which performed better? What do you conclude?
 - 4. Question: Repeat the above experiment with 30 parallel connections and 1800 seconds for each algorithm by using the ''-P 30'' option on Iperf. How do CUBIC and Reno differ? What do you conclude?
 - 5. Question: Remove the netem queueing discipline that causes the delay and add a loss of 5% by using the following commands on the Center node. Replace L with the appropriate physical interface. Alternatively, one can change the queueing discipline instead of deleting it and adding a new one.
{{{
sudo /sbin/tc qdisc del dev L root
sudo /sbin/tc qdisc add dev L root handle 1:0 netem loss 5%
}}}
 How do the goodputs of Reno and CUBIC differ under loss for 60 s Iperf sessions? [[BR]][[BR]]
 - '''Some Hint/Guidance on how to run the experiments:''' [[BR]]
Use the default TCP congestion control (cubic) on left and right, and run iperf between them (the TCP flow goes from right to left). [[BR]]
On left, run:
{{{
/usr/local/etc/emulab/emulab-iperf -s
}}}
On right, run (10.10.1.1 is the IP address of left in our case; you need to find the actual IP address used by left in your own experiment):
{{{
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60
}}}
To let both left and right use reno as the TCP congestion control mechanism, run the following on each node and repeat the experiments:
{{{
echo reno | sudo tee /proc/sys/net/ipv4/tcp_congestion_control
}}}
To find the network interface on the center node that is connected to, say, the left node:
 * Use {{{/sbin/ifconfig}}} on the left node to find the IP address associated with the data interface. Let's say the IP address is 10.10.1.1.
 * Use {{{/sbin/ifconfig}}} on the center node to find the IP addresses associated with its four data interfaces. One of the interfaces will have an IP address of the form 10.10.1.x. This is the interface that connects to the left node.

'''3.2 Ensuring Fairness Among Flows [[BR]]'''
Restore the network state with the following command:
{{{
sudo /sbin/tc qdisc del dev L root
}}}
Run an Iperf client on the Right node with 10 parallel TCP connections (use the -P option), connecting to an Iperf server on the Left node for 60 seconds. Simultaneously, run a 20 Mbps UDP Iperf client on the Top node connecting to a UDP Iperf server running on the Left node for 60 seconds.
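One possible way to launch these simultaneous sessions is sketched below, assuming (as in the 3.1 hints) that 10.10.1.1 is the Left node's data-plane IP address and that iperf is at the same path; substitute the address and path from your own slice:
{{{
# On left: start a TCP Iperf server and a UDP Iperf server (e.g., in two terminals)
/usr/local/etc/emulab/emulab-iperf -s
/usr/local/etc/emulab/emulab-iperf -s -u

# On right: 10 parallel TCP connections to left for 60 seconds
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -t 60 -P 10

# On top: a 20 Mbps UDP stream to left for 60 seconds
/usr/local/etc/emulab/emulab-iperf -c 10.10.1.1 -u -b 20M -t 60
}}}
Start the two servers before the clients so that the 60-second TCP and UDP sessions overlap.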
 - 1. Question: What are the throughputs shown by the UDP and TCP Iperf server sessions? Why do they take these values?
 - 2. Question: Provide the necessary steps and commands to enable queueing disciplines that enforce fairness among all 11 flows in the network, and demonstrate that your solution is effective.

'''3.3 Reordering [[BR]]'''
Delete the previous queueing discipline and use the following ''netem'' configuration on interface L to create a 100 ms delay:
{{{
sudo /sbin/tc qdisc del dev L root
sudo /sbin/tc qdisc add dev L root handle 1:0 netem delay 100ms
}}}
As before, run a TCP Iperf client on the Right node connecting to an Iperf server on the Left node for 60 seconds.
 - 1. Question: What is the TCP goodput?
 - 2. Question: Introduce packet reordering by adding a 75 ms delay variance to interface L with the following command:
{{{
sudo /sbin/tc qdisc change dev L root handle 1:0 netem delay 100ms 75ms
}}}
 What is the TCP goodput now?
 - 3. Question: By tweaking the parameter in the file ''/proc/sys/net/ipv4/tcp_reordering'', how much can the TCP goodput be improved? What is the best goodput you can show? Why is too high or too low a value bad for TCP?

'''3.4 Performance of SACK under Lossy Conditions'''
Using CUBIC as the congestion avoidance algorithm, set the loss characteristics on interface L using the following commands:
{{{
sudo /sbin/tc qdisc del dev L root
sudo /sbin/tc qdisc add dev L root handle 1:0 netem loss 10%
}}}
 - 1. Question: What kind of goodput do you get using CUBIC with SACK (the default configuration)? Why do you see this performance?
 - 2. Question: Disable SACK at the sender using this command:
{{{
echo 0 | sudo tee /proc/sys/net/ipv4/tcp_sack
}}}
 What is the goodput without SACK? In what circumstances is SACK most beneficial? Remember that, due to the random nature of loss events, these experiments must be repeated at least five times to draw any conclusions.

'''3.5 An Experimental Congestion Avoidance module for Linux'''[[BR]]
[wiki:GENIEducation/SampleAssignments/TcpAssignment/ExerciseLayout/KernelMod Instructions for step 3.5].

= [wiki:GENIEducation/SampleAssignments/TcpAssignment/ExerciseLayout/Finish Next: Teardown Experiment] =