Sample Experiment: UDT Evaluation
In this experiment, you will compare the performance of UDT (UDP-based Data Transfer) with that of TCP (Transmission Control Protocol) in a simple network under varying conditions.
This experiment is based on work carried out at the University of Missouri - Kansas City by Sunae Shin, Kaustubh Dhondge, and Baek-Young Choi. Their paper "Understanding the Performance of TCP and UDP-based Data Transfer Protocols using EMULAB" was presented at the First GENI Research and Educational Experiment Workshop (GREE2012), March 15-16, 2012, Los Angeles, CA.
Before beginning this experiment, you should be prepared with the following.
- You have GENI credentials to obtain GENI resources. (If not, see SignMeUp).
- You are able to use Flack to request GENI resources. (If not, see the Flack tutorial).
- You are comfortable using ssh and executing basic commands in a UNIX shell. (See the tips on how to log in to GENI hosts.)
- Download the attached rspec file and save it on your machine. (Make sure to save in raw format.)
- Start Flack, create a new slice, load the rspec file udt.rspec, and submit it for sliver creation (it's also fine to use omni, if you prefer). Your sliver should include pc1, pc2, and a delay node.
You will use the following techniques during this experiment.
File Transfer Using UDT
Follow these steps to perform a file transfer using UDT.
- Log into pc1 and pc2 in separate windows.
- On pc1, start a UDT file transfer server, using this command:
pc1:~% /local/udt4/app/sendfile
server is ready at port: 9000
- On pc2, start a UDT file transfer client, using this command:
pc2:~% /local/udt4/app/recvfile pc1 9000 /local/datafiles/sm.10M /dev/null
You should see output like the following in your pc1 window, showing the results of the file transfer. Note the transfer rate.
new connection: 192.168.2.2:55839
speed = 7.14472Mbits/sec
- There are three data files available for transfer tests: /local/datafiles/sm.10M is 10MB, /local/datafiles/med.100M is 100MB, and /local/datafiles/lg.1G is 1000MB. Leave your transfer server running on pc1, and try transferring each of these files in turn by typing the appropriate commands on pc2. Keep track of the transfer rates in each case.
- You can leave your UDT server running, or stop it with Ctrl-C.
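If you'd rather not type each transfer by hand, the three test transfers can be scripted. The sketch below is my addition, not part of the original lab: it runs the same recvfile commands from pc2 in sequence, and falls back to simply echoing the commands when the UDT binary isn't present (handy for previewing off the testbed).

```shell
#!/bin/sh
# Sketch: run all three UDT test transfers from pc2 in sequence.
# Assumes the sendfile server from the previous step is still running on pc1.

RECV=${RECV:-/local/udt4/app/recvfile}
# Off the testbed (no recvfile binary), fall back to a dry run that
# just prints the commands instead of executing them.
[ -x "$RECV" ] || RECV="echo recvfile"

for f in sm.10M med.100M lg.1G; do
    echo "=== transferring $f ==="
    $RECV pc1 9000 "/local/datafiles/$f" /dev/null
done
```

Record the transfer rate reported on pc1 after each file completes.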
File Transfer Using FTP
Follow these steps to perform a file transfer using FTP.
- For a TCP-based (FTP) transfer, there's already an FTP server running on pc1. Log into pc2 and start an FTP client:
(You type ftp pc1, the user name anonymous, and any password you want, although your e-mail address is traditional.)
pc2:~% ftp pc1
Connected to PC1-lan1.
220 (vsFTPd 2.3.2)
Name (pc1:mberman): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>
- Still on pc2, request a file transfer. Note the reported file size, transfer time, and transfer rate.
ftp> get med.100M /dev/null
local: /dev/null remote: med.100M
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for med.100M (104857600 bytes).
226 Transfer complete.
104857600 bytes received in 8.91 secs (11491.9 kB/s)
- You can perform additional transfers with additional get commands. When you're done, exit the ftp client with the quit command.
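For repeated measurements, the whole FTP session can be scripted rather than typed interactively. The sketch below is my addition, not part of the lab handout (the guest@example.com password is an arbitrary placeholder): it only generates the FTP command script; on pc2 you would pipe it into `ftp -n pc1`, where -n suppresses auto-login so the `user` command can supply the anonymous credentials.

```shell
#!/bin/sh
# Sketch: generate an FTP command script that logs in anonymously,
# fetches each test file, discards it, and quits.
gen_ftp_cmds() {
    cat <<'EOF'
user anonymous guest@example.com
binary
get sm.10M /dev/null
get med.100M /dev/null
get lg.1G /dev/null
quit
EOF
}

# On pc2 you would run:  gen_ftp_cmds | ftp -n pc1
# Here, just print the generated command script:
gen_ftp_cmds
```

The client reports the size, time, and rate for each transfer, so one scripted run yields all three data points.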
Viewing and Adjusting Link Characteristics
In this experiment, you'll be changing the characteristics of the link and measuring how they affect UDT and TCP performance.
- Log into your delay node as you do with any other node. Then, on your delay node, use this command:
sudo ipfw pipe show
You'll get something like this:
60111: 100.000 Mbit/s 1 ms 50 sl. 1 queues (1 buckets) droptail
mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip 220.127.116.11/0 18.104.22.168/6 7 1060 0 0 0
60121: 100.000 Mbit/s 1 ms 50 sl. 1 queues (1 buckets) droptail
mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip 22.214.171.124/0 126.96.36.199/6 8 1138 0 0 0
This information shows the internal configuration of the "pipes" used to emulate network characteristics. (Your output may look different, depending on the version of ipfw installed on your delay node. In any case, the information you need is on the first line of output for each pipe.)
You'll want to make note of the two pipe numbers, one for each direction of traffic along your link. In the example above, they are 60111 and 60121.
There are three link characteristics we'll manipulate in this experiment: bandwidth, delay, and packet loss rate. You'll find their values listed in the ipfw output above. The link bandwidth appears on the first line immediately after the pipe number. It's 100Mbps in the example shown above. The next value shown is the delay, 1 ms in the example above. The packet loss rate (PLR) is omitted if it's zero, as shown above. If non-zero, you'll see something like plr 0.000100 immediately after the "50 sl." on the first output line.
It is possible to adjust the parameters of the two directions of your link separately, to emulate asymmetric links. In this experiment, however, we are looking at symmetric links, so we'll always change the settings on both pipes together.
Here are the command sequences you'll need to change your link parameters. In each case, you'll need to provide the correct pipe numbers, if they're different from the example.
- To change bandwidth (100M means 100Mbits/s):
sudo ipfw pipe 60111 config bw 100M
sudo ipfw pipe 60121 config bw 100M
- Request a bandwidth of zero to use the full capacity of the link (unlimited):
sudo ipfw pipe 60111 config bw 0
sudo ipfw pipe 60121 config bw 0
- To change link delay (delays are measured in ms):
sudo ipfw pipe 60111 config delay 10
sudo ipfw pipe 60121 config delay 10
- To change packet loss rate (the rate is a probability, so .0001 means 0.01% packet loss):
sudo ipfw pipe 60111 config plr .0001
sudo ipfw pipe 60121 config plr .0001
- You can combine settings for bandwidth, delay, and loss by specifying more than one in a single ipfw command. We'll use this form in the procedure below.
- Set your link parameters to use maximum bandwidth, no delay, no packet loss:
sudo ipfw pipe 60111 config bw 0 delay 0 plr 0
sudo ipfw pipe 60121 config bw 0 delay 0 plr 0
- Verify with:
sudo ipfw pipe show
60111: unlimited 0 ms 50 sl. 1 queues (1 buckets) droptail
mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip 188.8.131.52/0 184.108.40.206/6 7 1060 0 0 0
60121: unlimited 0 ms 50 sl. 1 queues (1 buckets) droptail
mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip 220.127.116.11/0 18.104.22.168/6 8 1138 0 0 0
Note that bandwidth is set to unlimited, delay to 0 ms, and no PLR is shown.
- Using this initial setting, try a few UDT transfers, including the larger files. Now try FTP transfers. Record the transfer sizes and rates.
- Now change the link parameters to reduce the available bandwidth to 10Mbps:
sudo ipfw pipe 60111 config bw 10M delay 0 plr 0
sudo ipfw pipe 60121 config bw 10M delay 0 plr 0
- Repeat your file transfers with the new settings. As before, note the transfer sizes and rates, as well as the link settings.
- Continue with additional trials, varying each of the three link parameters over a range sufficient to observe meaningful performance differences. Record your data.
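Since every trial changes both pipes with identical arguments, the repetitive configuration commands can be wrapped in a small helper. This is a sketch of my own, not part of the lab handout: substitute your actual pipe numbers, and note the optional runner argument, which lets you preview the generated commands (with `echo ipfw`) before running them for real with sudo on the delay node.

```shell
#!/bin/sh
# Sketch: apply identical bandwidth/delay/plr settings to both pipes
# of the symmetric emulated link.

PIPE_A=60111   # replace with your first pipe number
PIPE_B=60121   # replace with your second pipe number

# set_link BW DELAY_MS PLR [runner]
# runner defaults to "sudo ipfw"; pass "echo ipfw" to preview the commands.
set_link() {
    bw=$1; delay=$2; plr=$3; runner=${4:-"sudo ipfw"}
    for pipe in "$PIPE_A" "$PIPE_B"; do
        $runner pipe "$pipe" config bw "$bw" delay "$delay" plr "$plr"
    done
}

# Example bandwidth sweep, previewed with "echo ipfw" instead of sudo:
for bw in 0 100M 10M 1M; do
    set_link "$bw" 0 0 "echo ipfw"
done
```

A helper like this also makes it harder to forget one direction of the link, which would silently turn a symmetric trial into an asymmetric one.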
What to hand in
- Your raw data and appropriate graphs illustrating changes in performance for the two transfer protocols with differing link parameters.
- Your analysis. Here are some questions to consider.
- Does one protocol outperform the other?
- Under what conditions are performance differences most clearly seen? Why?
- What shortcomings in the experiment design may affect your results? How might you improve the experiment design?
- What interesting characteristics of the transfer protocols are not measured in this experiment? How might you design an experiment to investigate these?