wiki:MEBSandbox/UDTExample

Version 9 (modified by Mark Berman, 8 years ago) (diff)


Holding Bin for UDT Example Stuff

Steps

  1. Create a slice in Flack.
  2. Load rspec udt.rspec and submit for sliver creation. (Include a picture here.)
  3. Log into pc1 and pc2 in separate windows.
  4. On pc1, start a UDT file transfer server, using this command:

    pc1:~% /local/udt4/app/sendfile
    server is ready at port: 9000


  5. On pc2, start a UDT file transfer client, using this command:

pc2:~% /local/udt4/app/recvfile pc1 9000 /local/datafiles/sm.10M /dev/null

You should see output like the following in your pc1 window, showing the results of the file transfer. Note the transfer rate.

new connection: 192.168.2.2:55839
speed = 7.14472Mbits/sec
  6. There are three data files available for transfer tests: /local/datafiles/sm.10M is 10MB, /local/datafiles/med.100M is 100MB, and /local/datafiles/lg.1G is 1GB. Leave your transfer server running on pc1, and transfer each of these files in turn by typing the appropriate commands on pc2. Keep track of the transfer rate in each case.
  7. Now let's compare the results to a TCP-based transfer using FTP. On pc2, start an FTP client:


(At the prompts, type ftp pc1, then the user name anonymous, and any password you want; your e-mail address is traditional.)

pc2:~% ftp pc1
Connected to PC1-lan1.
220 (vsFTPd 2.3.2)
Name (pc1:mberman): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> 

Still on pc2, request a file transfer. Note the reported file size, transfer time, and transfer rate.

ftp> get med.100M
local: med.100M remote: med.100M
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for med.100M (104857600 bytes).
226 Transfer complete.
104857600 bytes received in 1.75 secs (58508.9 kB/s)
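To compare the FTP result directly with UDT's Mbits/sec figure, convert the byte count and elapsed time that ftp reports into the same units. A quick sketch in plain Python, using the sample numbers from the transcript above:

```python
# Convert ftp's report (bytes received, elapsed seconds) into Mbits/sec,
# the unit that UDT's sendfile prints, so the two transfers are comparable.
def to_mbits_per_sec(num_bytes, secs):
    return num_bytes * 8 / secs / 1e6  # decimal megabits, as in "Mbits/sec"

# Sample figures from the ftp transcript above: 104857600 bytes in 1.75 secs.
rate = to_mbits_per_sec(104857600, 1.75)
print(f"{rate:.1f} Mbits/sec")  # roughly 479.3 Mbits/sec
```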

Experiment

Now try changing the characteristics of the link and measuring how the changes affect UDT and TCP performance. You will need to log into your delay node to change the link characteristics. Once logged in, use this command:

sudo ipfw pipe show

You'll get something like this:

60111:   4.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 udp          0.0.0.0/68    255.255.255.255/67    170991 230659480  0    0 149
60121:   4.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip   207.167.231.176/0       195.123.192.8/6     20538  1663157  0    0   0

This information shows the internal configuration of the "pipes" used to emulate network characteristics. You'll want to make note of the two pipe numbers, one for each direction of traffic along your link. In the example above, they are 60111 and 60121.
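If you'd rather not pick the pipe numbers out of that output by eye, they can be extracted programmatically. A small sketch in Python; the sample text mirrors the output shown above, and on the real delay node you would capture the live output of sudo ipfw pipe show instead:

```python
import re

# Sample of the pipe-config lines from `ipfw pipe show` (copied from above).
sample = """\
60111:   4.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
60121:   4.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
"""

def pipe_numbers(show_output):
    # Pipe-config lines start with the pipe number followed by a colon.
    return [int(m.group(1)) for m in re.finditer(r"^(\d+):", show_output, re.M)]

print(pipe_numbers(sample))  # [60111, 60121]
```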

There are three link characteristics we'll manipulate in this experiment: bandwidth, delay, and packet loss rate. You'll find the current bandwidth and delay values listed in the ipfw pipe show output.

To change bandwidth (100M means 100Mbits/s):

sudo ipfw pipe 60111 config bw 100M
sudo ipfw pipe 60121 config bw 100M

To change link delay (delays are measured in ms):

sudo ipfw pipe 60111 config delay 10
sudo ipfw pipe 60121 config delay 10
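Delay matters to throughput because of the bandwidth-delay product: a sender can keep at most one window's worth of data in flight per round trip, so longer links need bigger buffers to stay full. A back-of-the-envelope sketch in plain Python; the 100 Mbit/s and 10 ms figures are just the example settings above (10 ms in each pipe gives a 20 ms round trip):

```python
# Bandwidth-delay product: bytes "in flight" on the link at full utilization.
def bdp_bytes(bandwidth_bits_per_sec, rtt_secs):
    return bandwidth_bits_per_sec * rtt_secs / 8

# 100 Mbit/s link, 10 ms one-way delay in each pipe => 20 ms round trip.
bdp = bdp_bytes(100e6, 0.020)
print(f"{bdp:.0f} bytes in flight at full utilization")
```

If the sender's window (or the pipe's queue) is smaller than this, throughput drops below the configured bandwidth even with zero loss.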

To change packet loss rate (rate is a probability, so .0001 means 0.01% packet loss):

sudo ipfw pipe 60111 config plr .0001
sudo ipfw pipe 60121 config plr .0001
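Packet loss hits TCP especially hard, because TCP halves its sending rate on every loss. The well-known Mathis et al. approximation bounds steady-state TCP throughput at roughly MSS / (RTT · √p). A sketch in plain Python; the 1460-byte MSS and 20 ms RTT are illustrative assumptions, not values taken from the experiment, and the constant factor in the formula is dropped:

```python
import math

# Simplified Mathis et al. bound: TCP throughput is limited to roughly
# MSS / (RTT * sqrt(loss_probability)), constant factor omitted.
def tcp_throughput_mbits(mss_bytes, rtt_secs, loss_prob):
    return (mss_bytes * 8) / (rtt_secs * math.sqrt(loss_prob)) / 1e6

# Illustrative: 1460-byte MSS, 20 ms RTT, 0.01% loss (the plr .0001 above).
print(f"{tcp_throughput_mbits(1460, 0.020, 0.0001):.1f} Mbits/sec")
```

Rerunning it with larger loss rates or delays shows the bound falling quickly, which is the kind of difference between TCP and UDT that this experiment lets you measure.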
