Continuation Round 9
In this round, the plan was to run all ten slices on all seven sites for a week.
Wisconsin suffered a major outage in the middle of the week, affecting all of the slices that include a Wisconsin host.
The raw logs are at http://www.gpolab.bbn.com/plastic-slices/continuation/round-9/.
plastic-101
SteadyPing, using interval=.005, and this table of client/server pairs:
client | server | server address |
ganel.gpolab.bbn.com | planetlab5.clemson.edu | server=10.42.101.105 |
planetlab4.clemson.edu | plnode2.cip.gatech.edu | server=10.42.101.101 |
plnode1.cip.gatech.edu | pl5.myplc.grnoc.iu.edu | server=10.42.101.73 |
pl4.myplc.grnoc.iu.edu | of-planet2.stanford.edu | server=10.42.101.91 |
of-planet1.stanford.edu | pl02.cs.washington.edu | server=10.42.101.81 |
pl01.cs.washington.edu | wings-openflow-3.wail.wisc.edu | server=10.42.101.96 |
wings-openflow-2.wail.wisc.edu | gardil.gpolab.bbn.com | server=10.42.101.52 |
Commands run on each client
echo "server=$server"
sudo ping -q -i .005 -s $((1500-8-20)) $server
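The `-s $((1500-8-20))` argument is payload-size arithmetic: a full 1500-byte Ethernet MTU minus the 8-byte ICMP header and the 20-byte IPv4 header. A minimal sanity check of that arithmetic:

```shell
# Payload size for a full-MTU ping: Ethernet MTU minus the
# 8-byte ICMP header and the 20-byte IPv4 header.
mtu=1500
icmp_hdr=8
ipv4_hdr=20
payload=$((mtu - icmp_hdr - ipv4_hdr))
echo "payload=$payload bytes"   # payload=1472 bytes
```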
Results
Generated from the logs with
subnet=101
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 2 statistics pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
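For a quicker scan across many logs, the loss figure alone can be pulled out of each summary; a small sketch, assuming the standard Linux ping summary format shown below and GNU grep's `-o`:

```shell
# Pull just the "N% packet loss" figure out of a Linux ping summary line.
summary='60616348 packets transmitted, 60101723 received, +4555 errors, 0% packet loss, time 603873971ms'
echo "$summary" | grep -o '[0-9]*% packet loss'   # prints: 0% packet loss
```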
ganel.gpolab.bbn.com:
--- 10.42.101.105 ping statistics ---
60616348 packets transmitted, 60101723 received, +4555 errors, 0% packet loss, time 603873971ms
rtt min/avg/max/mdev = 137.979/138.803/29421.397/74.601 ms, pipe 3868
planetlab4.clemson.edu:
--- 10.42.101.101 ping statistics ---
60428426 packets transmitted, 54618968 received, +4576 errors, 9% packet loss, time 604322150ms
rtt min/avg/max/mdev = 5.320/56.341/24832.987/38.795 ms, pipe 2486
plnode1.cip.gatech.edu:
--- 10.42.101.73 ping statistics ---
63749841 packets transmitted, 63177150 received, +4782 errors, 0% packet loss, time 603992529ms
rtt min/avg/max/mdev = 37.118/38.150/42375.436/75.287 ms, pipe 4211
pl4.myplc.grnoc.iu.edu:
--- 10.42.101.91 ping statistics ---
60563865 packets transmitted, 59943291 received, +5993 errors, 1% packet loss, time 603979365ms
rtt min/avg/max/mdev = 102.673/102.975/14960.036/33.122 ms, pipe 1414
of-planet1.stanford.edu:
--- 10.42.101.81 ping statistics ---
59627849 packets transmitted, 58755149 received, +6237 errors, 1% packet loss, time 603972071ms
rtt min/avg/max/mdev = 153.075/154.404/13256.334/43.909 ms, pipe 1356
pl01.cs.washington.edu:
--- 10.42.101.96 ping statistics ---
57466511 packets transmitted, 32971249 received, +257541 errors, 42% packet loss, time 603931891ms
rtt min/avg/max/mdev = 59.706/61.876/10494.960/37.911 ms, pipe 1009
wings-openflow-2.wail.wisc.edu:
Comments
We didn't get any log data from wings-openflow-2.wail.wisc.edu, and we saw high packet loss from pl01.cs.washington.edu to 10.42.101.96 (on wings-openflow-3.wail.wisc.edu); both were due to the outage at Wisconsin.
The 9% packet loss from planetlab4.clemson.edu to 10.42.101.101 (on plnode2.cip.gatech.edu) is unexpectedly high; we haven't investigated the cause.
plastic-102
SteadyPing, using interval=.005, and this table of client/server pairs:
client | server | server address |
navis.gpolab.bbn.com | planetlab4.clemson.edu | server=10.42.102.104 |
planetlab5.clemson.edu | plnode1.cip.gatech.edu | server=10.42.102.100 |
plnode2.cip.gatech.edu | pl4.myplc.grnoc.iu.edu | server=10.42.102.72 |
pl5.myplc.grnoc.iu.edu | of-planet2.stanford.edu | server=10.42.102.91 |
of-planet1.stanford.edu | pl01.cs.washington.edu | server=10.42.102.80 |
pl02.cs.washington.edu | wings-openflow-2.wail.wisc.edu | server=10.42.102.95 |
wings-openflow-3.wail.wisc.edu | bain.gpolab.bbn.com | server=10.42.102.54 |
Commands run on each client
echo "server=$server"
sudo ping -q -i .005 -s $((1500-8-20)) $server
Results
Generated from the logs with
subnet=102
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 2 statistics pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
navis.gpolab.bbn.com:
--- 10.42.102.104 ping statistics ---
60388197 packets transmitted, 59890888 received, +4716 errors, 0% packet loss, time 603564330ms
rtt min/avg/max/mdev = 165.433/165.715/19394.280/32.904 ms, pipe 1793
planetlab5.clemson.edu:
--- 10.42.102.100 ping statistics ---
99111985 packets transmitted, 98010857 received, +4419 errors, 1% packet loss, time 603621429ms
rtt min/avg/max/mdev = 4.902/6.045/4318.493/7.668 ms, pipe 687
plnode2.cip.gatech.edu:
--- 10.42.102.72 ping statistics ---
36404450 packets transmitted, 35859984 received, +4773 errors, 1% packet loss, time 603602380ms
rtt min/avg/max/mdev = 143.039/175.294/17996.136/56.039 ms, pipe 1080
pl5.myplc.grnoc.iu.edu:
--- 10.42.102.91 ping statistics ---
62114299 packets transmitted, 61520938 received, +5814 errors, 0% packet loss, time 603590568ms
rtt min/avg/max/mdev = 77.454/77.687/16509.553/22.526 ms, pipe 1475
of-planet1.stanford.edu:
--- 10.42.102.80 ping statistics ---
57598200 packets transmitted, 56930142 received, +5607 errors, 1% packet loss, time 603580960ms
rtt min/avg/max/mdev = 20.442/21.395/13855.638/14.768 ms, pipe 1274
pl02.cs.washington.edu:
--- 10.42.102.95 ping statistics ---
56998165 packets transmitted, 31861651 received, +257001 errors, 44% packet loss, time 603573426ms
rtt min/avg/max/mdev = 59.711/61.799/17626.355/39.431 ms, pipe 1535
wings-openflow-3.wail.wisc.edu:
Comments
We didn't get any log data from wings-openflow-3.wail.wisc.edu, and we saw high packet loss from pl02.cs.washington.edu to 10.42.102.95 (on wings-openflow-2.wail.wisc.edu); both were due to the outage at Wisconsin.
The absence of significant packet loss between Clemson and Georgia Tech here, compared to plastic-101, is somewhat unexpected; we haven't investigated it.
plastic-103
SteadyPerf TCP, using port=5103, time=518400, and this table of client/server pairs:
client | server | server address |
sardis.gpolab.bbn.com | of-planet1.stanford.edu | server=10.42.103.90 |
of-planet2.stanford.edu | pl01.cs.washington.edu | server=10.42.103.80 |
pl02.cs.washington.edu | ganel.gpolab.bbn.com | server=10.42.103.51 |
One-time prep commands run on each client and server
sudo yum -y install iperf
Commands run on each server
echo "server=$server"
nice -n 19 iperf -B $server -p 5103 -s -i 1
Commands run on each client
echo "server=$server"
nice -n 19 iperf -c $server -p 5103 -t 518400
Results
Generated with
subnet=103
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -a -A 5 -B 1 "Client connecting" pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
and then edited slightly to remove artifacts (like control characters, my prompt, etc).
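That cleanup can be partly automated; a hedged sketch using GNU sed to strip ANSI escape sequences and carriage returns from a captured log (prompt lines would still need a per-host pattern):

```shell
# Strip ANSI escape sequences and stray carriage returns from a
# captured terminal log (GNU sed; \x1b is the escape character).
clean_log() {
    sed -e 's/\x1b\[[0-9;]*[A-Za-z]//g' -e 's/\r$//' "$1"
}
```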
sardis.gpolab.bbn.com:
------------------------------------------------------------
Client connecting to 10.42.103.90, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.103.53 port 60939 connected with 10.42.103.90 port 5103
write2 failed: No route to host
[ 3] 0.0-359374.0 sec 104 GBytes 2.48 Mbits/sec
--
------------------------------------------------------------
Client connecting to 10.42.103.90, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.103.53 port 48795 connected with 10.42.103.90 port 5103
write2 failed: No route to host
[ 3] 0.0-62326.2 sec 17.6 GBytes 2.43 Mbits/sec
of-planet2.stanford.edu:
------------------------------------------------------------
Client connecting to 10.42.103.80, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.103.91 port 36826 connected with 10.42.103.80 port 5103
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-423883.1 sec 17561894255500267520 bits 0.00 (null)s/sec
--
------------------------------------------------------------
Client connecting to 10.42.103.80, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.103.91 port 34411 connected with 10.42.103.80 port 5103
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-179565.0 sec 13485676214369468416 bits 0.00 (null)s/sec
pl02.cs.washington.edu:
------------------------------------------------------------
Client connecting to 10.42.103.51, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.103.81 port 46683 connected with 10.42.103.51 port 5103
write2 failed: No route to host
[ 3] 0.0-486223.7 sec 127 GBytes 2.23 Mbits/sec
Comments
We've seen the "0.00 (null)s/sec" business from iperf TCP summaries before; traffic is clearly flowing, so we're not sure what causes that.
None of these ran to completion, due to temporary network failures (the "no route to host" errors). It'd be nice to have a more resilient TCP traffic generator.
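In the meantime, a crude workaround (a hypothetical wrapper, not something run this round) would be to restart the client whenever it exits early, so a transient "no route to host" costs one connection rather than the rest of the week:

```shell
# Hypothetical stopgap: keep rerunning a client command until the
# requested wall-clock duration has elapsed, so a transient network
# failure only ends one connection, not the whole run.
run_until() {
    end=$(( $(date +%s) + $1 ))
    shift
    while [ "$(date +%s)" -lt "$end" ]; do
        "$@"        # e.g. iperf -c $server -p 5103 -t 518400
        sleep 1     # short pause before reconnecting
    done
}
```

Something like `run_until 518400 iperf -c $server -p 5103 -t 518400`; ideally the inner `-t` would shrink to the time remaining, a refinement this sketch leaves out.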
plastic-104
SteadyPerf UDP, using port=5104, time=518400, rate=11, and this table of client/server pairs:
client | server | server address |
ganel.gpolab.bbn.com | wings-openflow-2.wail.wisc.edu | server=10.42.104.95 |
pl5.myplc.grnoc.iu.edu | sardis.gpolab.bbn.com | server=10.42.104.53 |
One-time prep commands run on each client and server
sudo yum -y install iperf
Commands run on each server
echo "server=$server"
nice -n 19 iperf -u -B $server -p 5104 -s -i 1
Commands run on each client
echo "server=$server"
nice -n 19 iperf -u -c $server -p 5104 -t 518400 -b 11M
Results
Generated with
subnet=104
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 10 -B 1 "Client connecting" pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
and then edited slightly to remove artifacts (like control characters, my prompt, etc).
ganel.gpolab.bbn.com:
------------------------------------------------------------
Client connecting to 10.42.104.95, UDP port 5104
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.104.51 port 41464 connected with 10.42.104.95 port 5104
[ 3] 0.0-518400.0 sec 664 GBytes 11.0 Mbits/sec
[ 3] Sent 484931597 datagrams
[ 3] WARNING: did not receive ack of last datagram after 10 tries.
pl5.myplc.grnoc.iu.edu:
------------------------------------------------------------
Client connecting to 10.42.104.53, UDP port 5104
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.104.73 port 54328 connected with 10.42.104.53 port 5104
[ 3] 0.0-518400.0 sec 664 GBytes 11.0 Mbits/sec
[ 3] Sent 484932280 datagrams
[ 3] Server Report:
[ 3] 0.0-518390.2 sec 657 GBytes 10.9 Mbits/sec 0.017 ms 5368914/484932279 (1.1%)
[ 3] 0.0-518390.2 sec 16126 datagrams received out-of-order
Comments
We didn't get back a server report from ganel.gpolab.bbn.com, due to the outage at Wisconsin. (It was connecting to 10.42.104.95.)
We haven't analyzed whether the packet loss from Indiana to BBN was consistent throughout the run, or due to a single outage, or what.
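The per-second server-side reports (from `-i 1`) would answer that: each interval line ends with a loss percentage in parentheses, so the distribution of loss over time can be pulled out directly. A sketch, assuming a hypothetical `server.log` capture in the standard iperf UDP report format:

```shell
# List the per-interval loss percentages from an iperf UDP server log
# (the trailing "(N.N%)" field on each "-i 1" report line); a steady
# series points to constant loss, a few spikes to discrete outages.
# "server.log" is a hypothetical capture of the server's output.
grep -o '([0-9.]*%)' server.log | tr -d '()%'
```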
plastic-105
SteadyPerf UDP, using port=5105, time=518400, rate=6, and this table of client/server pairs:
client | server | server address |
wings-openflow-2.wail.wisc.edu | planetlab5.clemson.edu | server=10.42.105.105 |
planetlab4.clemson.edu | navis.gpolab.bbn.com | server=10.42.105.55 |
bain.gpolab.bbn.com | plnode2.cip.gatech.edu | server=10.42.105.101 |
plnode1.cip.gatech.edu | wings-openflow-3.wail.wisc.edu | server=10.42.105.96 |
One-time prep commands run on each client and server
sudo yum -y install iperf
Commands run on each server
echo "server=$server"
nice -n 19 iperf -u -B $server -p 5105 -s -i 1
Commands run on each client
echo "server=$server"
nice -n 19 iperf -u -c $server -p 5105 -t 518400 -b 6M
Results
Generated with
subnet=105
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 10 -B 1 "Client connecting" pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
and then edited slightly to remove artifacts (like control characters, my prompt, etc).
wings-openflow-2.wail.wisc.edu:
------------------------------------------------------------
Client connecting to 10.42.105.105, UDP port 5105
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.105.95 port 60158 connected with 10.42.105.105 port 5105
planetlab4.clemson.edu:
------------------------------------------------------------
Client connecting to 10.42.105.55, UDP port 5105
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.105.104 port 41408 connected with 10.42.105.55 port 5105
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-518400.0 sec 362 GBytes 6.00 Mbits/sec
[ 3] Sent 264489797 datagrams
[ 3] Server Report:
[ 3] 0.0-518394.5 sec 358 GBytes 5.94 Mbits/sec 0.010 ms 2818608/264489796 (1.1%)
[ 3] 0.0-518394.5 sec 8277 datagrams received out-of-order
bain.gpolab.bbn.com:
------------------------------------------------------------
Client connecting to 10.42.105.101, UDP port 5105
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.105.54 port 37129 connected with 10.42.105.101 port 5105
[ 3] 0.0-518400.0 sec 362 GBytes 6.00 Mbits/sec
[ 3] Sent 264487534 datagrams
[ 3] Server Report:
[ 3] 0.0-518396.4 sec 357 GBytes 5.92 Mbits/sec 3.197 ms 3455085/264487534 (1.3%)
[ 3] 0.0-518396.4 sec 7492 datagrams received out-of-order
plnode1.cip.gatech.edu:
------------------------------------------------------------
Client connecting to 10.42.105.96, UDP port 5105
Sending 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.105.100 port 47829 connected with 10.42.105.96 port 5105
[ 3] 0.0-518400.0 sec 362 GBytes 6.00 Mbits/sec
[ 3] Sent 264473352 datagrams
[ 3] WARNING: did not receive ack of last datagram after 10 tries.
Comments
We didn't get any log data from wings-openflow-2.wail.wisc.edu, due to the outage at Wisconsin.
We didn't get back a server report from plnode1.cip.gatech.edu, due to the outage at Wisconsin. (It was connecting to 10.42.105.96.)
We haven't analyzed whether the packet loss in the others was consistent throughout the run, or due to a single outage, or what.
plastic-106
SteadyPerf TCP, using port=5106, time=518400, and this table of client/server pairs:
client | server | server address |
planetlab5.clemson.edu | wings-openflow-2.wail.wisc.edu | server=10.42.106.95 |
wings-openflow-3.wail.wisc.edu | plnode1.cip.gatech.edu | server=10.42.106.100 |
plnode2.cip.gatech.edu | bain.gpolab.bbn.com | server=10.42.106.54 |
navis.gpolab.bbn.com | planetlab4.clemson.edu | server=10.42.106.104 |
One-time prep commands run on each client and server
sudo yum -y install iperf
Commands run on each server
echo "server=$server"
nice -n 19 iperf -B $server -p 5106 -s -i 1
Commands run on each client
echo "server=$server"
nice -n 19 iperf -c $server -p 5106 -t 518400
Results
Generated with
subnet=106
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 5 -B 1 "Client connecting" pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
and then edited slightly to remove artifacts (like control characters, my prompt, etc).
planetlab5.clemson.edu:
------------------------------------------------------------
Client connecting to 10.42.106.95, TCP port 5106
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.106.105 port 55591 connected with 10.42.106.95 port 5106
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-518400.0 sec 14017702187097243648 bits 0.00 (null)s/sec
wings-openflow-3.wail.wisc.edu:
------------------------------------------------------------
Client connecting to 10.42.106.100, TCP port 5106
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.106.96 port 32950 connected with 10.42.106.100 port 5106
plnode2.cip.gatech.edu:
------------------------------------------------------------
Client connecting to 10.42.106.54, TCP port 5106
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.106.101 port 46041 connected with 10.42.106.54 port 5106
write2 failed: No route to host
[ 3] 0.0-486087.3 sec 16.7 GBytes 296 Kbits/sec
navis.gpolab.bbn.com:
------------------------------------------------------------
Client connecting to 10.42.106.104, TCP port 5106
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.106.55 port 60960 connected with 10.42.106.104 port 5106
write2 failed: No route to host
[ 3] 0.0-486070.0 sec 171 GBytes 3.02 Mbits/sec
Comments
We've seen the "0.00 (null)s/sec" business from iperf TCP summaries before; traffic is clearly flowing, so we're not sure what causes that.
We didn't get any log data from wings-openflow-3.wail.wisc.edu, due to the outage at Wisconsin.
Neither of the others ran to completion, due to temporary network failures (the "no route to host" errors). It'd be nice to have a more resilient TCP traffic generator.
plastic-107
SteadyPerf TCP, using port=5107, time=518400, and this table of client/server pairs:
client | server | server address |
planetlab5.clemson.edu | pl4.myplc.grnoc.iu.edu | server=10.42.107.72 |
pl5.myplc.grnoc.iu.edu | plnode2.cip.gatech.edu | server=10.42.107.101 |
plnode1.cip.gatech.edu | pl02.cs.washington.edu | server=10.42.107.81 |
pl01.cs.washington.edu | planetlab4.clemson.edu | server=10.42.107.104 |
One-time prep commands run on each client and server
sudo yum -y install iperf
Commands run on each server
echo "server=$server"
nice -n 19 iperf -B $server -p 5107 -s -i 1
Commands run on each client
echo "server=$server"
nice -n 19 iperf -c $server -p 5107 -t 518400
Results
Generated with
subnet=107
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 5 -B 1 "Client connecting" pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
and then edited slightly to remove artifacts (like control characters, my prompt, etc).
planetlab5.clemson.edu:
------------------------------------------------------------
Client connecting to 10.42.107.72, TCP port 5107
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.107.105 port 50196 connected with 10.42.107.72 port 5107
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-518400.0 sec 9618427753724442624 bits 0.00 (null)s/sec
pl5.myplc.grnoc.iu.edu:
------------------------------------------------------------
Client connecting to 10.42.107.101, TCP port 5107
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.107.73 port 38928 connected with 10.42.107.101 port 5107
write2 failed: No route to host
[ 3] 0.0-486026.4 sec 143 GBytes 2.53 Mbits/sec
plnode1.cip.gatech.edu:
------------------------------------------------------------
Client connecting to 10.42.107.81, TCP port 5107
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.107.100 port 55218 connected with 10.42.107.81 port 5107
write2 failed: No route to host
[ 3] 0.0-486028.4 sec 255 GBytes 4.51 Mbits/sec
pl01.cs.washington.edu:
------------------------------------------------------------
Client connecting to 10.42.107.104, TCP port 5107
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.107.80 port 57380 connected with 10.42.107.104 port 5107
write2 failed: No route to host
[ 3] 0.0-486023.2 sec 279 GBytes 4.92 Mbits/sec
Comments
We've seen the "0.00 (null)s/sec" business from iperf TCP summaries before; traffic is clearly flowing, so we're not sure what causes that.
None of the others ran to completion, due to temporary network failures (the "no route to host" errors). It'd be nice to have a more resilient TCP traffic generator.
plastic-108
SteadyPerf UDP, using port=5108, time=518400, rate=8, and this table of client/server pairs:
client | server | server address |
wings-openflow-3.wail.wisc.edu | of-planet2.stanford.edu | server=10.42.108.91 |
of-planet1.stanford.edu | pl5.myplc.grnoc.iu.edu | server=10.42.108.73 |
pl4.myplc.grnoc.iu.edu | wings-openflow-2.wail.wisc.edu | server=10.42.108.95 |
One-time prep commands run on each client and server
sudo yum -y install iperf
Commands run on each server
echo "server=$server"
nice -n 19 iperf -u -B $server -p 5108 -s -i 1
Commands run on each client
echo "server=$server"
nice -n 19 iperf -u -c $server -p 5108 -t 518400 -b 8M
Results
Generated with
subnet=108
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 10 -B 1 "Client connecting" pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
and then edited slightly to remove artifacts (like control characters, my prompt, etc).
wings-openflow-3.wail.wisc.edu:
------------------------------------------------------------
Client connecting to 10.42.108.91, UDP port 5108
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.108.96 port 46435 connected with 10.42.108.91 port 5108
of-planet1.stanford.edu:
------------------------------------------------------------
Client connecting to 10.42.108.73, UDP port 5108
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.108.90 port 60229 connected with 10.42.108.73 port 5108
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-518400.0 sec 483 GBytes 8.00 Mbits/sec
[ 3] Sent 352626691 datagrams
[ 3] Server Report:
[ 3] 0.0-518393.7 sec 476 GBytes 7.88 Mbits/sec 0.027 ms 5105757/352626690 (1.4%)
[ 3] 0.0-518393.7 sec 16667 datagrams received out-of-order
pl4.myplc.grnoc.iu.edu:
------------------------------------------------------------
Client connecting to 10.42.108.95, UDP port 5108
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.108.72 port 46159 connected with 10.42.108.95 port 5108
[ 3] 0.0-518400.0 sec 483 GBytes 8.00 Mbits/sec
[ 3] Sent 352649481 datagrams
[ 3] WARNING: did not receive ack of last datagram after 10 tries.
Comments
We didn't get any log data from wings-openflow-3.wail.wisc.edu, due to the outage at Wisconsin.
We didn't get back a server report from pl4.myplc.grnoc.iu.edu, due to the outage at Wisconsin. (It was connecting to 10.42.108.95.)
We haven't analyzed whether the packet loss in the other was consistent throughout the run, or due to a single outage, or what.
plastic-109
SteadyPerf UDP, using port=5109, time=518400, rate=4, and this table of client/server pairs:
client | server | server address |
gardil.gpolab.bbn.com | pl5.myplc.grnoc.iu.edu | server=10.42.109.73 |
pl4.myplc.grnoc.iu.edu | pl02.cs.washington.edu | server=10.42.109.81 |
pl01.cs.washington.edu | planetlab5.clemson.edu | server=10.42.109.105 |
planetlab4.clemson.edu | of-planet1.stanford.edu | server=10.42.109.90 |
of-planet2.stanford.edu | wings-openflow-3.wail.wisc.edu | server=10.42.109.96 |
wings-openflow-2.wail.wisc.edu | ganel.gpolab.bbn.com | server=10.42.109.51 |
One-time prep commands run on each client and server
sudo yum -y install iperf
Commands run on each server
echo "server=$server"
nice -n 19 iperf -u -B $server -p 5109 -s -i 1
Commands run on each client
echo "server=$server"
nice -n 19 iperf -u -c $server -p 5109 -t 518400 -b 4M
Results
Generated with
subnet=109
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 10 -B 1 "Client connecting" pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
and then edited slightly to remove artifacts (like control characters, my prompt, etc).
gardil.gpolab.bbn.com:
------------------------------------------------------------
Client connecting to 10.42.109.73, UDP port 5109
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.109.52 port 34116 connected with 10.42.109.73 port 5109
[ 3] 0.0-518400.0 sec 241 GBytes 4.00 Mbits/sec
[ 3] Sent 176326532 datagrams
[ 3] Server Report:
[ 3] 0.0-518397.1 sec 239 GBytes 3.96 Mbits/sec 0.005 ms 1921335/176326532 (1.1%)
[ 3] 0.0-518397.1 sec 8568 datagrams received out-of-order
pl4.myplc.grnoc.iu.edu:
------------------------------------------------------------
Client connecting to 10.42.109.81, UDP port 5109
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.109.72 port 42414 connected with 10.42.109.81 port 5109
[ 3] 0.0-518400.0 sec 241 GBytes 4.00 Mbits/sec
[ 3] Sent 176326102 datagrams
[ 3] Server Report:
[ 3] 0.0-518397.2 sec 239 GBytes 3.96 Mbits/sec 0.891 ms 1925666/176326102 (1.1%)
[ 3] 0.0-518397.2 sec 79755 datagrams received out-of-order
pl01.cs.washington.edu:
------------------------------------------------------------
Client connecting to 10.42.109.105, UDP port 5109
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.109.80 port 57597 connected with 10.42.109.105 port 5109
[ 3] 0.0-518400.0 sec 241 GBytes 4.00 Mbits/sec
[ 3] Sent 176247368 datagrams
[ 3] Server Report:
[ 3] 0.0-518395.7 sec 238 GBytes 3.95 Mbits/sec 0.100 ms 2084033/176247368 (1.2%)
[ 3] 0.0-518395.7 sec 24113 datagrams received out-of-order
planetlab4.clemson.edu:
------------------------------------------------------------
Client connecting to 10.42.109.90, UDP port 5109
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.109.104 port 38981 connected with 10.42.109.90 port 5109
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-518400.0 sec 241 GBytes 4.00 Mbits/sec
[ 3] Sent 176326532 datagrams
[ 3] Server Report:
[ 3] 0.0-518397.4 sec 238 GBytes 3.94 Mbits/sec 0.019 ms 2472993/176326531 (1.4%)
[ 3] 0.0-518397.4 sec 10171 datagrams received out-of-order
of-planet2.stanford.edu:
------------------------------------------------------------
Client connecting to 10.42.109.96, UDP port 5109
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.109.91 port 43669 connected with 10.42.109.96 port 5109
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-518400.0 sec 241 GBytes 4.00 Mbits/sec
[ 3] Sent 176324610 datagrams
[ 3] WARNING: did not receive ack of last datagram after 10 tries.
wings-openflow-2.wail.wisc.edu:
------------------------------------------------------------
Client connecting to 10.42.109.51, UDP port 5109
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.109.95 port 59778 connected with 10.42.109.51 port 5109
Comments
We didn't get back a server report from of-planet2.stanford.edu, due to the outage at Wisconsin. (It was connecting to 10.42.109.96.)
We didn't get any log data from wings-openflow-2.wail.wisc.edu, due to the outage at Wisconsin.
We haven't analyzed whether the packet loss in the others was consistent throughout the run, or due to a single outage, or what.
plastic-110
SteadyPerf TCP, using port=5110, time=518400, and this table of client/server pairs:
client | server | server address |
bain.gpolab.bbn.com | pl01.cs.washington.edu | server=10.42.110.80 |
pl02.cs.washington.edu | of-planet1.stanford.edu | server=10.42.110.90 |
of-planet2.stanford.edu | pl4.myplc.grnoc.iu.edu | server=10.42.110.72 |
pl5.myplc.grnoc.iu.edu | plnode1.cip.gatech.edu | server=10.42.110.100 |
plnode2.cip.gatech.edu | navis.gpolab.bbn.com | server=10.42.110.55 |
One-time prep commands run on each client and server
sudo yum -y install iperf
Commands run on each server
echo "server=$server"
nice -n 19 iperf -B $server -p 5110 -s -i 1
Commands run on each client
echo "server=$server"
nice -n 19 iperf -c $server -p 5110 -t 518400
Results
Generated with
subnet=110
for host in $(awk 'NR%2==1' ~/plastic-slices/logins/logins-plastic-$subnet.txt | sed -r -e 's/.+@//') ; do echo -e "$host:\n\n{{{" ; grep -A 5 -B 1 "Client connecting" pgenigpolabbbncom_plastic$subnet\@$host.log ; echo -e "}}}\n" ; done
and then edited slightly to remove artifacts (like control characters, my prompt, etc).
bain.gpolab.bbn.com:
------------------------------------------------------------
Client connecting to 10.42.110.80, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.54 port 59225 connected with 10.42.110.80 port 5110
write2 failed: No route to host
[ 3] 0.0-485821.8 sec 346 GBytes 6.12 Mbits/sec
pl02.cs.washington.edu:
------------------------------------------------------------
Client connecting to 10.42.110.90, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.81 port 46991 connected with 10.42.110.90 port 5110
write2 failed: No route to host
[ 3] 0.0-359034.3 sec 937 GBytes 22.4 Mbits/sec
--
------------------------------------------------------------
Client connecting to 10.42.110.90, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.81 port 55111 connected with 10.42.110.90 port 5110
write2 failed: No route to host
[ 3] 0.0-63692.8 sec 170 GBytes 22.9 Mbits/sec
of-planet2.stanford.edu:
------------------------------------------------------------
Client connecting to 10.42.110.72, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.91 port 59670 connected with 10.42.110.72 port 5110
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-422191.7 sec 17205709671801458688 bits 0.00 (null)s/sec
--
------------------------------------------------------------
Client connecting to 10.42.110.72, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.91 port 57556 connected with 10.42.110.72 port 5110
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-179228.7 sec 13026921957609984000 bits 0.00 (null)s/sec
pl5.myplc.grnoc.iu.edu:
------------------------------------------------------------
Client connecting to 10.42.110.100, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.73 port 34143 connected with 10.42.110.100 port 5110
write2 failed: No route to host
[ 3] 0.0-134175.6 sec 45.2 GBytes 2.89 Mbits/sec
--
------------------------------------------------------------
Client connecting to 10.42.110.100, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.73 port 55400 connected with 10.42.110.100 port 5110
write2 failed: No route to host
[ 3] 0.0-240750.2 sec 83.3 GBytes 2.97 Mbits/sec
plnode2.cip.gatech.edu:
------------------------------------------------------------
Client connecting to 10.42.110.55, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.101 port 52109 connected with 10.42.110.55 port 5110
write2 failed: No route to host
[ 3] 0.0-134135.9 sec 8.72 GBytes 559 Kbits/sec
--
------------------------------------------------------------
Client connecting to 10.42.110.55, TCP port 5110
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.110.101 port 35891 connected with 10.42.110.55 port 5110
write2 failed: No route to host
[ 3] 0.0-240675.2 sec 14.1 GBytes 503 Kbits/sec
Comments
We've seen the "0.00 (null)s/sec" business from iperf TCP summaries before; traffic is clearly flowing, so we're not sure what causes that.
None of the others ran to completion, due to temporary network failures (the "no route to host" errors). It'd be nice to have a more resilient TCP traffic generator.