
Plastic Slices Baseline Plan

The GENI Plastic Slices baseline evaluations capture progress throughout the Plastic Slices project. Each planned baseline will capture information in the following major areas:

  • Environment: System configuration, software versions, and resources for compute, OF, VLAN, and monitoring.
  • Evaluation details: Detailed description of the topology, traffic profiles, and tools used in the evaluation.
  • Evaluation criteria: Detailed description of the criteria used to determine success, along with any assumptions made in defining success.
  • Baseline Evaluation Report (BER): Report capturing all results and analysis for each baseline, plus an overall cumulative final version.

In addition to the above areas, the GPO will actively conduct project-level coordination, including requirements tracking, experiment design, and GMOC monitoring data reviews. The GPO will also provide campuses with support for OpenFlow resources, MyPLC resources, and campus GMOC monitoring data feeds. Additionally, the GPO will provide and support a GENI AM API compliant ProtoGENI aggregate with four hosts.

Baseline Areas

This section captures the implementation details for all major areas in each of the baselines planned.

Environment

Capturing the definition of the environment used to validate the baseline scenarios is a crucial part of this activity. A complete definition will be captured so that each baseline scenario can be repeated in the Quality Assurance step to be performed by Georgia Tech. Environment captures will detail:

  • Configuration for all compute resources:
    • Prerequisite environment changes (e.g. ports to be opened in the firewall)
    • Software versions
    • Configuration settings (ports)
    • OS versions and hardware platforms
  • Configuration for all OF and VLAN resources:
    • Firmware versions
    • Hardware platforms and device models
    • FlowVisor and Expedient controllers (OF)
  • Configuration for all management and monitoring resources:
    • MyPLC (OS, software version, settings)
    • GMOC API version and data monitoring host
    • GMOC Emergency Stop procedure definition
    • NCSA security plan versions

Evaluation details

  • Definition of each entity in test and of its role.
  • Detailed topology of each test.
  • Tools used for the evaluation (traffic generation, site monitoring, test scripts?)
  • Traffic profiles (iperf?, traffic type, packet rates, packet sizes, duration, etc.)
  • Result gathering (collect logs (OF, Expedient, MyPLC), collect monitoring statistics, collect traffic generation results, other?); see the sketch below.
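
The sketch below is purely illustrative of the result-gathering step; the host names, paths, and log locations are placeholders, not the project's actual layout.

#!/bin/bash
# Hypothetical collection script for one baseline run; adjust hosts and paths.
baseline=baseline1
dest=~/results/$baseline/$(date -u +%Y%m%d)
mkdir -p "$dest"
for host in ganel.gpolab.bbn.com planetlab5.clemson.edu ; do   # example subset of slice hosts
    mkdir -p "$dest/$host"
    # Pull back whatever the experiment left behind (ping/iperf output, downloaded files).
    scp -r "$host:~/gigaweb" "$host:~/giganetcat" "$dest/$host/" 2>/dev/null
done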

Evaluation criteria

Each baseline run will be evaluated to determine success; the following will be considered:

  • Traffic sent was successfully received.
  • All nodes are up and available throughout the baseline time frame.
  • Logs and statistics are in line with the expected results, and no FATAL or CRITICAL failures are found. Lower-priority issues, such as WARNINGs, are acceptable as long as they can be shown not to impact the ability to communicate between endpoints (see the log-scan sketch after this list).
  • Runtime assumptions can be made as long as they are reviewed and deemed reasonable. For example, assuming that ?? (need relevant example).
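
As a minimal illustration of the log criterion above (not part of the original plan), a check along these lines could be run over the collected logs; the directory name is a placeholder:

# Fail the check if any FATAL or CRITICAL entries appear in the collected logs.
if grep -rE 'FATAL|CRITICAL' collected-logs/ ; then
    echo "Baseline check FAILED: fatal or critical entries found (printed above)."
else
    echo "No FATAL/CRITICAL entries; WARNINGs can be reviewed separately:"
    grep -rE 'WARN(ING)?' collected-logs/ | head
fi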

Baseline Evaluation Report

As each baseline is completed, a Baseline Evaluation Report is generated that captures the results of the baseline, an analysis of the results, and any impact that the findings may have on the current Plastic Slices Plan.

Baseline evaluation reports will also capture progress for the requirements being tracked, as well as progress towards the overall goals. A final version of the BER is generated to capture a cumulative representation of all Plastic Slices baseline assessments.

Question: Does it make sense to report by major areas, e.g. OF, compute resources, monitoring, etc.?

Baseline Detailed Plans

This section provides a detailed description of each baseline and its respective Evaluation details (topology, tools, traffic profiles, etc.), Evaluation Criteria, and reporting.

Baseline 1

FIXME: How much of this next section makes sense as a per-baseline thing, and how much needs to be added to each slice as a per-slice thing?

Due 2011-05-09
Completed 2011-05-19
Summary Descr Ten slices, each moving at least 1 GB of data per day, for 24 hours.

Detailed Description:

Cause the experiment running in each slice to move at least 1 GB of data over the course of a 24-hour period. Multiple slices should be moving data simultaneously, but it can be slow, or bursty, as long as it reaches 1 GB total over the course of the day.

The purpose of this baseline is to confirm basic functionality of the experiments, and stability of the aggregates.

Summary of overall results:

  • All slices had at least one client/server pair complete successfully, with results consistent with what we'd expect.
  • Most slices had all client/server pairs complete successfully.

plastic-101

GigaPing, using count=120000, and this table of client/server pairs:

client server server address
ganel.gpolab.bbn.com planetlab5.clemson.edu server=10.42.101.105
planetlab4.clemson.edu pl5.myplc.grnoc.iu.edu server=10.42.101.73
pl4.myplc.grnoc.iu.edu of-planet2.stanford.edu server=10.42.101.91
of-planet1.stanford.edu pl02.cs.washington.edu server=10.42.101.81
pl01.cs.washington.edu wings-openflow-3.wail.wisc.edu server=10.42.101.96
wings-openflow-2.wail.wisc.edu gardil.gpolab.bbn.com server=10.42.101.52

Commands run on each client

server=<ipaddr>
sudo ping -i .001 -s $((1500-8-20)) -c 120000 $server
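
For reference, a rough estimate of the data volume implied by these parameters (our arithmetic, not stated on the original page): each ping carries a 1472-byte payload in a 1500-byte IP packet, so one client/server pair sends about 120,000 x 1500 bytes = 180 MB of echo requests, and the six pairs in the table above together send roughly 1 GB (plus a similar volume of replies), in line with the baseline target.

echo $(( 120000 * 1500 ))       # 180000000 bytes (~180 MB) per pair, one direction
echo $(( 6 * 120000 * 1500 ))   # 1080000000 bytes (~1.08 GB) across the six pairs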

Results

ganel.gpolab.bbn.com:

--- 10.42.101.105 ping statistics ---
120000 packets transmitted, 119971 received, 0% packet loss, time 970088ms
rtt min/avg/max/mdev = 59.356/59.555/617.615/6.479 ms, pipe 76

planetlab4.clemson.edu:

--- 10.42.101.73 ping statistics ---
120000 packets transmitted, 120000 received, 0% packet loss, time 834526ms
rtt min/avg/max/mdev = 39.670/39.861/626.202/6.815 ms, pipe 62

pl4.myplc.grnoc.iu.edu:

--- 10.42.101.91 ping statistics ---
120000 packets transmitted, 119945 received, 0% packet loss, time 996653ms
rtt min/avg/max/mdev = 102.824/103.855/2228.383/32.965 ms, pipe 373

of-planet1.stanford.edu:

--- 10.42.101.81 ping statistics ---
120000 packets transmitted, 117874 received, 1% packet loss, time 893630ms
rtt min/avg/max/mdev = 152.657/177.853/30634.399/631.352 ms, pipe 3394

pl01.cs.washington.edu:

--- 10.42.101.96 ping statistics ---
120000 packets transmitted, 0 received, 100% packet loss, time 1357084ms

wings-openflow-2.wail.wisc.edu:

--- 10.42.101.52 ping statistics ---
120000 packets transmitted, 120000 received, 0% packet loss, time 944479ms
rtt min/avg/max/mdev = 29.548/29.607/276.653/2.024 ms, pipe 32

Analysis

pl01.cs.washington.edu was unable to ping 10.42.101.96 (on wings-openflow-3.wail.wisc.edu) at all.

The other results seem consistent with what we'd expect.

plastic-102

GigaPing, using count=120000, and this table of client/server pairs:

client server server address
sardis.gpolab.bbn.com planetlab4.clemson.edu server=10.42.102.104
planetlab5.clemson.edu pl4.myplc.grnoc.iu.edu server=10.42.102.72
pl5.myplc.grnoc.iu.edu of-planet4.stanford.edu server=10.42.102.93
of-planet3.stanford.edu pl01.cs.washington.edu server=10.42.102.80
pl02.cs.washington.edu wings-openflow-2.wail.wisc.edu server=10.42.102.95
wings-openflow-3.wail.wisc.edu bain.gpolab.bbn.com server=10.42.102.54

Commands run on each client

server=<ipaddr>
sudo ping -i .001 -s $((1500-8-20)) -c 120000 $server

Results

sardis.gpolab.bbn.com:

--- 10.42.102.104 ping statistics ---
120000 packets transmitted, 119950 received, 0% packet loss, time 950547ms
rtt min/avg/max/mdev = 164.723/165.996/1954.900/39.054 ms, pipe 202

planetlab5.clemson.edu:

--- 10.42.102.72 ping statistics ---
120000 packets transmitted, 119865 received, 0% packet loss, time 922229ms
rtt min/avg/max/mdev = 145.015/146.233/2030.306/37.677 ms, pipe 204

pl5.myplc.grnoc.iu.edu:

--- 10.42.102.93 ping statistics ---
120000 packets transmitted, 0 received, 100% packet loss, time 1227139ms

of-planet3.stanford.edu:

--- 10.42.102.80 ping statistics ---
120000 packets transmitted, 0 received, 100% packet loss, time 1349857ms

pl02.cs.washington.edu:

--- 10.42.102.95 ping statistics ---
120000 packets transmitted, 119888 received, 0% packet loss, time 861916ms
rtt min/avg/max/mdev = 59.672/60.787/1546.134/20.695 ms, pipe 151

wings-openflow-3.wail.wisc.edu:

--- 10.42.102.54 ping statistics ---
120000 packets transmitted, 119808 received, 0% packet loss, time 940112ms
rtt min/avg/max/mdev = 29.551/29.701/554.670/6.144 ms, pipe 48

Analysis

pl5.myplc.grnoc.iu.edu was unable to ping 10.42.102.93 (on of-planet4.stanford.edu) at all, and of-planet3.stanford.edu was unable to ping 10.42.102.80 (on pl01.cs.washington.edu) at all.

The other results seem consistent with what we'd expect.

plastic-103

GigaPerf TCP, using port=5103, size=350, and this table of client/server pairs:

client server server address
of-planet1.stanford.edu navis.gpolab.bbn.com server=10.42.103.55
ganel.gpolab.bbn.com pl01.cs.washington.edu server=10.42.103.80
pl02.cs.washington.edu of-planet2.stanford.edu server=10.42.103.91

One-time prep commands run on each client and server

sudo yum -y install iperf

Commands run on each server

server=<ipaddr>
nice -n 19 iperf -B $server -p 5103 -s -i 1

Commands run on each client

server=<ipaddr>
nice -n 19 iperf -c $server -p 5103 -n 350M
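
The -n 350M flag tells iperf to send 350 MiB and report the average rate for the whole transfer; the bandwidth figures in the results below are simply that volume divided by the elapsed time (our arithmetic, shown here for the first result):

echo $(( 350 * 1024 * 1024 * 8 ))   # 2936012800 bits in 350 MiB
echo $(( 2936012800 / 708 ))        # ~4146910 bits/s over 708 seconds, i.e. about 4.15 Mb/s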

Results

of-planet1.stanford.edu:

------------------------------------------------------------
Client connecting to 10.42.103.55, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.103.90 port 54079 connected with 10.42.103.55 port 5103
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-708.0 sec   350 MBytes  4.15 Mbits/sec

ganel.gpolab.bbn.com:

connect failed: Connection timed out
write1 failed: Broken pipe
write2 failed: Broken pipe
------------------------------------------------------------
Client connecting to 10.42.103.80, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 0.0.0.0 port 48977 connected with 10.42.103.80 port 5103
[  3]  0.0- 0.0 sec  0.00 Bytes  0.00 bits/sec

pl02.cs.washington.edu:

------------------------------------------------------------
Client connecting to 10.42.103.91, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.103.81 port 38247 connected with 10.42.103.91 port 5103
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-918.1 sec   350 MBytes  3.20 Mbits/sec

Analysis

ganel.gpolab.bbn.com was unable to connect to 10.42.103.80 (on pl01.cs.washington.edu).

The other results seem consistent with what we'd expect.

plastic-104

GigaPerf UDP, using port=5104, size=1000, rate=100, and this table of client/server pairs:

client server server address
planetlab4.clemson.edu gardil.gpolab.bbn.com server=10.42.104.52

One-time prep commands run on each client and server

sudo yum -y install iperf

Commands run on each server

server=<ipaddr>
nice -n 19 iperf -u -B $server -p 5104 -s -i 1

Commands run on each client

server=<ipaddr>
nice -n 19 iperf -u -c $server -p 5104 -n 1000M -b 100M
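
The datagram count reported below follows directly from these parameters (our arithmetic): -n 1000M is 1000 MiB of payload carried in 1470-byte datagrams, so iperf needs 1,048,576,000 / 1470 datagrams, rounded up:

echo $(( 1000 * 1024 * 1024 / 1470 ))   # 713317; rounding up gives 713318, matching "Sent 713318 datagrams" below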

Results

planetlab4.clemson.edu:

------------------------------------------------------------
Client connecting to 10.42.104.52, UDP port 5104
Sending 1470 byte datagrams
UDP buffer size:  109 KByte (default)
------------------------------------------------------------
[  3] local 10.42.104.104 port 33857 connected with 10.42.104.52 port 5104
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-83.8 sec  1000 MBytes   100 Mbits/sec
[  3] Sent 713318 datagrams
[  3] Server Report:
[  3]  0.0-83.1 sec  8.05 MBytes   812 Kbits/sec  27.059 ms 707567/713308 (99%)

Analysis

99% packet loss suggests a lot of network congestion. We didn't attempt to discover where the congestion was (e.g. on the host interface, a switch between the host and the backbone, within the backbone, in the OpenFlow control path, etc).

plastic-105

GigaPerf TCP, using port=5105, size=350, and this table of client/server pairs:

client server server address
wings-openflow-2.wail.wisc.edu planetlab5.clemson.edu server=10.42.105.105
planetlab4.clemson.edu sardis.gpolab.bbn.com server=10.42.105.53
bain.gpolab.bbn.com wings-openflow-3.wail.wisc.edu server=10.42.105.96

One-time prep commands run on each client and server

sudo yum -y install iperf

Commands run on each server

server=<ipaddr>
nice -n 19 iperf -B $server -p 5103 -s -i 1

Commands run on each client

server=<ipaddr>
nice -n 19 iperf -c $server -p 5103 -n 350M

Results

wings-openflow-2.wail.wisc.edu:

------------------------------------------------------------
Client connecting to 10.42.105.105, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.105.95 port 49534 connected with 10.42.105.105 port 5103
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-249.4 sec   350 MBytes  11.8 Mbits/sec

planetlab4.clemson.edu:

------------------------------------------------------------
Client connecting to 10.42.105.53, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.105.104 port 40099 connected with 10.42.105.53 port 5103
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-348.4 sec   350 MBytes  8.43 Mbits/sec

bain.gpolab.bbn.com:

------------------------------------------------------------
Client connecting to 10.42.105.96, TCP port 5103
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.105.54 port 35328 connected with 10.42.105.96 port 5103
[  3]  0.0-676.9 sec    350 MBytes  4.34 Mbits/sec

Analysis

We used the wrong port for this slice by mistake, but we don't think this caused any problems, since the traffic also goes to a unique IP address.

The other results seem consistent with what we'd expect.

plastic-106

GigaPerf UDP, using port=5106, size=1000, rate=100, and this table of client/server pairs:

client server server address
planetlab5.clemson.edu wings-openflow-2.wail.wisc.edu server=10.42.106.95

One-time prep commands run on each client and server

sudo yum -y install iperf

Commands run on each server

server=<ipaddr>
nice -n 19 iperf -u -B $server -p 5104 -s -i 1

Commands run on each client

server=<ipaddr>
nice -n 19 iperf -u -c $server -p 5104 -n 1000M -b 100M

Results

planetlab5.clemson.edu:

------------------------------------------------------------
Client connecting to 10.42.106.95, UDP port 5104
Sending 1470 byte datagrams
UDP buffer size:  109 KByte (default)
------------------------------------------------------------
[  3] local 10.42.106.105 port 43743 connected with 10.42.106.95 port 5104
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-83.6 sec  1000 MBytes   100 Mbits/sec
[  3] Sent 713318 datagrams
[  3] Server Report:
[  3]  0.0-79.1 sec  9.71 MBytes  1.03 Mbits/sec  21.837 ms 706350/713275 (99%)
[  3]  0.0-79.1 sec  76 datagrams received out-of-order

Analysis

We used the wrong port for this slice by mistake, but we don't think this caused any problems, since the traffic also goes to a unique IP address.

99% packet loss suggests a lot of network congestion. We didn't attempt to discover where the congestion was (e.g. on the host interface, a switch between the host and the backbone, within the backbone, in the OpenFlow control path, etc).

plastic-107

GigaWeb, using count=40, port=4107, file=substrate.doc, md5sum=d4fcf71833327fbfef98be09deef8bfb, and this table of client/server pairs:

client server server address
planetlab5.clemson.edu pl4.myplc.grnoc.iu.edu server=10.42.107.72
pl5.myplc.grnoc.iu.edu pl02.cs.washington.edu server=10.42.107.81
pl01.cs.washington.edu planetlab4.clemson.edu server=10.42.107.104

One-time prep commands run on each server

sudo yum -y install pyOpenSSL patch
rm -rf ~/gigaweb
mkdir -p ~/gigaweb/docroot
cd ~/gigaweb
wget http://code.activestate.com/recipes/442473-simple-http-server-supporting-ssl-secure-communica/download/1/ -O httpsd.py
wget http://groups.geni.net/geni/attachment/wiki/PlasticSlices/Experiments/httpsd.py.patch?format=raw -O httpsd.py.patch
patch httpsd.py httpsd.py.patch
rm httpsd.py.patch

openssl genrsa -passout pass:localhost -des3 -rand /dev/urandom -out localhost.localdomain.key 1024
openssl req -subj /CN=localhost.localdomain -passin pass:localhost -new -key localhost.localdomain.key -out localhost.localdomain.csr
openssl x509 -passin pass:localhost -req -days 3650 -in localhost.localdomain.csr -signkey localhost.localdomain.key -out localhost.localdomain.crt
openssl rsa -passin pass:localhost -in localhost.localdomain.key -out decrypted.localhost.localdomain.key
mv decrypted.localhost.localdomain.key localhost.localdomain.key
cat localhost.localdomain.key localhost.localdomain.crt > localhost.localdomain.pem
rm localhost.localdomain.key localhost.localdomain.crt localhost.localdomain.csr

Commands run on each server

server=<ipaddr>
cd ~/gigaweb/docroot
python ../httpsd.py $server 4107

Commands run on each client

server=<ipaddr>
rm -rf ~/gigaweb
mkdir ~/gigaweb
cd ~/gigaweb
for i in {1..40} ; do wget --no-check-certificate https://$server:4107/substrate.doc -O substrate.doc.$i ; done
du -sb .
md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
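
The du and md5sum lines are the pass/fail check: 40 identical downloads should total exactly 40 times the size of substrate.doc (plus the directory entry itself, which du -sb counts), and the grep -v pipeline falls through to "All checksums match." only when no md5sum line differs from the expected hash. Working backwards from the results below, and assuming the directory entry accounts for 4096 bytes, the implied file size is (our arithmetic; the page doesn't state it):

echo $(( (339501056 - 4096) / 40 ))   # 8487424 bytes per copy of substrate.doc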

Results

planetlab5.clemson.edu:

[pgenigpolabbbncom_plastic107@planetlab5 gigaweb]$ du -sb .
339501056       .
[pgenigpolabbbncom_plastic107@planetlab5 gigaweb]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

pl5.myplc.grnoc.iu.edu:

[pgenigpolabbbncom_plastic107@pl5 gigaweb]$ du -sb .
339501056       .
[pgenigpolabbbncom_plastic107@pl5 gigaweb]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

pl01.cs.washington.edu:

[pgenigpolabbbncom_plastic107@pl01 gigaweb]$ du -sb .
339501056       .
[pgenigpolabbbncom_plastic107@pl01 gigaweb]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

Analysis

All results seem consistent with what we'd expect.

plastic-108

GigaWeb, using count=40, port=4108, file=substrate.doc, md5sum=d4fcf71833327fbfef98be09deef8bfb, and this table of client/server pairs:

client server server address
wings-openflow-3.wail.wisc.edu of-planet3.stanford.edu server=10.42.108.92
of-planet4.stanford.edu pl5.myplc.grnoc.iu.edu server=10.42.108.73
pl4.myplc.grnoc.iu.edu wings-openflow-2.wail.wisc.edu server=10.42.108.95

One-time prep commands run on each server

sudo yum -y install pyOpenSSL patch
rm -rf ~/gigaweb
mkdir -p ~/gigaweb/docroot
cd ~/gigaweb
wget http://code.activestate.com/recipes/442473-simple-http-server-supporting-ssl-secure-communica/download/1/ -O httpsd.py
wget http://groups.geni.net/geni/attachment/wiki/PlasticSlices/Experiments/httpsd.py.patch?format=raw -O httpsd.py.patch
patch httpsd.py httpsd.py.patch
rm httpsd.py.patch

openssl genrsa -passout pass:localhost -des3 -rand /dev/urandom -out localhost.localdomain.key 1024
openssl req -subj /CN=localhost.localdomain -passin pass:localhost -new -key localhost.localdomain.key -out localhost.localdomain.csr
openssl x509 -passin pass:localhost -req -days 3650 -in localhost.localdomain.csr -signkey localhost.localdomain.key -out localhost.localdomain.crt
openssl rsa -passin pass:localhost -in localhost.localdomain.key -out decrypted.localhost.localdomain.key
mv decrypted.localhost.localdomain.key localhost.localdomain.key
cat localhost.localdomain.key localhost.localdomain.crt > localhost.localdomain.pem
rm localhost.localdomain.key localhost.localdomain.crt localhost.localdomain.csr

Commands run on each server

server=<ipaddr>
cd ~/gigaweb/docroot
python ../httpsd.py $server 4108

Commands run on each client

server=<ipaddr>
rm -rf ~/gigaweb
mkdir ~/gigaweb
cd ~/gigaweb
for i in {1..40} ; do wget --no-check-certificate https://$server:4108/substrate.doc -O substrate.doc.$i ; done
du -sb .
md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."

Results

wings-openflow-3.wail.wisc.edu:

[pgenigpolabbbncom_plastic108@wings-openflow-3 gigaweb]$ du -sb .
339501056       .
[pgenigpolabbbncom_plastic108@wings-openflow-3 gigaweb]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

of-planet4.stanford.edu:

--2011-05-18 21:22:39--  https://10.42.108.73:4108/substrate.doc
Connecting to 10.42.108.73:4108... connected.
WARNING: cannot verify 10.42.108.73's certificate, issued by `/CN=localhost.localdomain':
  Self-signed certificate encountered.
WARNING: certificate common name `localhost.localdomain' doesn't match requested host name `10.42.108.73'.
HTTP request sent, awaiting response... Read error (Connection timed out) in headers.
Retrying.

pl4.myplc.grnoc.iu.edu:

[pgenigpolabbbncom_plastic108@pl4 gigaweb]$ du -sb .
339501056       .
[pgenigpolabbbncom_plastic108@pl4 gigaweb]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

Analysis

of-planet4.stanford.edu timed out trying to download from 10.42.108.73 (on pl5.myplc.grnoc.iu.edu).

The other results seem consistent with what we'd expect.

plastic-109

GigaNetcat, using count=20, port=6109, file=substrate.doc, and this table of client/server pairs:

client server server address
navis.gpolab.bbn.com pl5.myplc.grnoc.iu.edu server=10.42.109.73
pl4.myplc.grnoc.iu.edu pl02.cs.washington.edu server=10.42.109.81
pl01.cs.washington.edu planetlab5.clemson.edu server=10.42.109.105
planetlab4.clemson.edu of-planet3.stanford.edu server=10.42.109.92
of-planet4.stanford.edu wings-openflow-3.wail.wisc.edu server=10.42.109.96
wings-openflow-2.wail.wisc.edu ganel.gpolab.bbn.com server=10.42.109.51

One-time prep commands run on each client and server

sudo yum -y install nc

Commands run on each server

server=<ipaddr>
for i in {1..20} ; do nc -l $server 6109 < substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; done

Commands run on each client

server=<ipaddr>
rm -rf ~/giganetcat
mkdir ~/giganetcat
cd ~/giganetcat
for i in {1..20} ; do nc $server 6109 > substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; mv substrate.doc substrate.doc.$i ; done
du -sb .
md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
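
A note on how the final check works (a restatement of the pipeline above, not a new step): grep -v filters out every md5sum line containing the expected hash, so it produces output, and exits zero, only if some copy has a different checksum; when everything matches, grep exits non-zero and the || branch prints the confirmation. An equivalent, more verbose form:

if md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb ; then
    echo "At least one checksum differs (mismatching lines printed above)."
else
    echo "All checksums match."
fi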

Results

navis.gpolab.bbn.com:

[pgenigpolabbbncom_plastic109@navis giganetcat]$ du -sb .
169752576       .
[pgenigpolabbbncom_plastic109@navis giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

pl4.myplc.grnoc.iu.edu:

[pgenigpolabbbncom_plastic109@pl4 giganetcat]$ du -sb .
169752576       .
[pgenigpolabbbncom_plastic109@pl4 giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

pl01.cs.washington.edu:

[pgenigpolabbbncom_plastic109@pl01 giganetcat]$ du -sb .
169752576       .
[pgenigpolabbbncom_plastic109@pl01 giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

planetlab4.clemson.edu:

[pgenigpolabbbncom_plastic109@planetlab4 giganetcat]$ du -sb .
169752576       .
[pgenigpolabbbncom_plastic109@planetlab4 giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

of-planet4.stanford.edu (and wings-openflow-3.wail.wisc.edu, the server):

[pgenigpolabbbncom_plastic109@of-planet4 giganetcat]$ for i in {1..20} ; do nc $server 6109 > substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; mv substrate.doc substrate.doc.$i ; done

[pgenigpolabbbncom_plastic109@wings-openflow-3 ~]$ for i in {1..20} ; do nc -l $server 6109 < substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; done
completed transfer #1
d4fcf71833327fbfef98be09deef8bfb  substrate.doc

wings-openflow-2.wail.wisc.edu:

[pgenigpolabbbncom_plastic109@wings-openflow-2 giganetcat]$ du -sb .
169752576       .
[pgenigpolabbbncom_plastic109@wings-openflow-2 giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

Analysis

of-planet4.stanford.edu failed to download from 10.42.109.96 (on wings-openflow-3.wail.wisc.edu).

The other results seem consistent with what we'd expect.

plastic-110

GigaNetcat, using count=30, port=6110, file=substrate.doc, and this table of client/server pairs:

client server server address
gardil.gpolab.bbn.com pl01.cs.washington.edu server=10.42.110.80
pl02.cs.washington.edu of-planet1.stanford.edu server=10.42.110.90
of-planet2.stanford.edu pl4.myplc.grnoc.iu.edu server=10.42.110.72
pl5.myplc.grnoc.iu.edu sardis.gpolab.bbn.com server=10.42.110.53

One-time prep commands run on each client and server

sudo yum -y install nc

Commands run on each server

server=<ipaddr>
for i in {1..30} ; do nc -l $server 6110 < substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; done

Commands run on each client

server=<ipaddr>
rm -rf ~/giganetcat
mkdir ~/giganetcat
cd ~/giganetcat
for i in {1..30} ; do nc $server 6110 > substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; mv substrate.doc substrate.doc.$i ; done
du -sb .
md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."

Results

gardil.gpolab.bbn.com:

[pgenigpolabbbncom_plastic110@gardil giganetcat]$ du -sb .
254626816       .
[pgenigpolabbbncom_plastic110@gardil giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

pl02.cs.washington.edu:

[pgenigpolabbbncom_plastic110@pl02 giganetcat]$ du -sb .
254626816       .
[pgenigpolabbbncom_plastic110@pl02 giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

of-planet2.stanford.edu:

[pgenigpolabbbncom_plastic110@of-planet2 giganetcat]$ du -sb .
254626816       .
[pgenigpolabbbncom_plastic110@of-planet2 giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

pl5.myplc.grnoc.iu.edu:

[pgenigpolabbbncom_plastic110@pl5 giganetcat]$ du -sb .
254626816       .
[pgenigpolabbbncom_plastic110@pl5 giganetcat]$ md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
All checksums match.

Analysis

All results seem consistent with what we'd expect.

Baseline 2

Due 2011-05-16
Completed
Summary Descr Ten slices, each moving at least 1 GB of data per day, for 72 hours.

Detailed Description:

Similar to the previous baseline, cause the experiment running in each slice to move at least 1 GB of data per day, but do so repeatedly for 72 hours.

The purpose of this baseline is to confirm longer-term stability of the aggregates.

Baseline 3

Due 2011-05-23
Completed
Summary Descr Ten slices, each moving at least 1 GB of data per day, for 144 hours.

Detailed Description:

Similar to the previous baseline, cause the experiment running in each slice to move at least 1 GB of data per day, but do so repeatedly for 144 hours. The purpose of this baseline is to confirm even longer-term stability of the aggregates.

Baseline 4

Due 2011-05-30
Completed
Summary Descr Ten slices, each moving at least 1 Mb/s continuously, for 24 hours.

Detailed Description:

Cause the experiment running in each slice to move at least 1 Mb/s continuously over the course of a 24-hour period (approximately 10 GB total).

The purpose of this baseline is to confirm that an experiment can send data continuously without interruption.
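
For reference, the daily volume implied by a sustained 1 Mb/s (our arithmetic): 1 Mb/s is 125,000 bytes/s, and over 86,400 seconds that comes to about 10.8 GB, which is where the "approximately 10 GB total" figure above comes from.

echo $(( 1000000 / 8 * 86400 ))   # 10800000000 bytes/day (~10.8 GB) at 1 Mb/s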

Baseline 5

Due 2011-06-06
Completed
Summary Descr Ten slices, each moving at least 10 Mb/s continuously, for 24 hours.

Detailed Description:

Similar to the previous baseline, cause the experiment running in each slice to move at least 10 Mb/s continuously over the course of a 24-hour period (approximately 100 GB total).

The purpose of this baseline is to confirm that an experiment can send a higher volume of data continuously without interruption.

Baseline 6

Due 2011-06-13
Completed
Summary Descr Ten slices, each moving at least 10 Mb/s continuously, for 144 hours.

Detailed Description:

Similar to the previous baseline, cause the experiment running in each slice to move at least 10 Mb/s continuously over the course of a 144-hour period.

The purpose of this baseline is to confirm that an experiment can send a higher volume of data continuously without interruption, for several days running.

Baseline 7

Due 2011-06-20
Completed
Summary Descr Perform an Emergency Stop test while running ten slices, each moving at least 10 Mb/s continuously, for 144 hours.

Detailed Description:

Repeat the previous baseline, but call an Emergency Stop while it's running, once per slice for each of the ten slices. Campuses will not be informed in advance about when each Emergency Stop will be called. There will be at least one instance of two simultaneous Emergency Stops, and at least one instance of a single campus being asked to respond to two simultaneous Emergency Stops. After each Emergency Stop, verify that all resources have been successfully restored to service.

GMOC will define precisely how each Emergency Stop test will be conducted, and what resources will be stopped, presumably selecting a combination of campus resources (e.g. disconnecting an on-campus network connection) and backbone resources (e.g. disabling a campus's connection to an inter-campus VLAN).

The purpose of this baseline is to test Emergency Stop procedures.

Baseline 8

Due 2011-06-20
Completed
Summary Descr Create one slice per second for 1000 seconds; then create and delete one slice per second for 24 hours.

Detailed Description:

This baseline uses new temporary slices rather than the existing ten slices: it creates a thousand slices at the rate of one slice per second, and then continues to delete and create a new slice every second, for 24 hours. Each slice will include resources at three campuses, selected randomly for each slice. Automated tools will confirm that the resources are available, e.g. by logging in to a host and running 'uname -a'.

The purpose of this baseline is to confirm that many users can create slices and allocate resources at the same time.
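
A minimal sketch of the churn loop described above, offered purely as an assumption about how such automation might look: create_slice, delete_slice, and slice_host are hypothetical site-specific wrappers (not actual Plastic Slices or GENI tools), random selection of three campuses per slice is omitted, and only ssh and uname are standard commands.

#!/bin/bash
# Hypothetical Baseline 8 churn loop: keep ~1000 temporary slices alive,
# creating and deleting one per second for 24 hours, and verify each new
# slice by logging in and running 'uname -a'.
seconds=$(( 24 * 60 * 60 ))
for (( i = 1; i <= seconds; i++ )); do
    new="tmp-slice-$i"
    create_slice "$new"                               # assumed wrapper around the AM API
    if (( i > 1000 )); then
        delete_slice "tmp-slice-$(( i - 1000 ))"      # retire the oldest temporary slice
    fi
    host=$(slice_host "$new")                         # assumed: returns a login host in the slice
    if ssh "$host" 'uname -a' ; then
        echo "$(date -u +%FT%TZ) $new OK"   >> churn.log
    else
        echo "$(date -u +%FT%TZ) $new FAIL" >> churn.log
    fi
    sleep 1
done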

Future Baselines

Due 2011-06-27
Summary Descr Define additional July baselines.