
Plastic Slices Baseline Plan

The GENI Plastic Slices baseline evaluations capture progress throughout the Plastic Slices project. Each planned baseline will capture the following major areas:

  • Environment: System configuration, software versions, and resources for compute, OpenFlow (OF), VLAN, and monitoring.
  • Evaluation details: Detailed description of the topology, traffic profiles, and tools used in the evaluation.
  • Evaluation criteria: Detailed description of the criteria used to determine success, along with any assumptions made in defining success.
  • Baseline Evaluation Report (BER): Report capturing all results and analysis for each Baseline and an overall cumulative final version.

In addition to the above areas, the GPO will also actively conduct project-level coordination that will include requirements tracking, experiment design, and GMOC monitoring data reviews. The GPO will also provide campuses with support for OpenFlow resources, MyPLC resources, and campus GMOC monitoring data feeds. Additionally, the GPO will provide and support a GENI AM API-compliant ProtoGENI aggregate with four hosts.

Baseline Areas

This section captures the implementation details for all major areas in each of the baselines planned.

Environment

Capturing the definition of the environment used to validate the baseline scenarios is a crucial part of this activity. A complete definition will be captured to facilitate the repeatability of the baseline scenario in the Quality Assurance step to be performed by Georgia Tech. Environment captures will detail the following (a sketch of a capture script follows the list):

  • Configuration for all compute resources:
    • Prerequisite environment changes (e.g. ports to be opened in firewalls).
    • Software versions.
    • Configuration settings (ports).
    • OS versions and hardware platforms.
  • Configuration for all OF and VLAN resources:
    • Firmware versions.
    • Hardware platforms and device models.
    • FlowVisor and Expedient controllers (OF).
  • Configuration for all management and monitoring resources:
    • MyPLC (OS, software version, settings).
    • GMOC API version and data monitoring host.
    • GMOC STOP procedure definition.
    • NCSA security plan versions.
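
For example, the compute-resource portion of an environment capture could be gathered on each host with a few standard commands (a minimal sketch, assuming RPM-based hosts; the package list is illustrative):

uname -a                    # kernel version and hardware platform
cat /etc/redhat-release     # OS distribution and release
rpm -q iperf nc wget        # versions of the tools used in the baselines
/sbin/ip addr show          # interfaces and data-plane addresses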

Evaluation details

  • Definition of each entity under test and its role.
  • Detailed topology of each test.
  • Tools used for the evaluation (traffic generation, site monitoring, test scripts?).
  • Traffic profiles (iperf?, traffic type, packet rates, packet sizes, duration, etc.).
  • Result gathering (collect logs (OF, Expedient, MyPLC), collect monitoring statistics, collect traffic generation results, other?); a sketch follows this list.
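
As a rough sketch of the result-gathering step, the logs and command output on each host could be bundled for later analysis (the paths and file names below are placeholders, not the actual FlowVisor, Expedient, or MyPLC log locations):

mkdir -p ~/baseline-results
cp ~/iperf-*.log ~/ping-*.log ~/baseline-results/       # placeholder log names
tar czf ~/baseline-results-$(date +%Y%m%d).tar.gz -C ~ baseline-results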

Evaluation criteria

Each baseline run will be evaluated to determine success; the following will be considered:

  • Traffic sent was successfully received.
  • All nodes are up and available throughout the baseline time frame.
  • Logs and statistics are in line with the expected results, and no FATAL or CRITICAL failures are found. Lower-priority issues, such as WARNINGs, are acceptable as long as they can be shown not to impact the ability to communicate between endpoints (a first-pass check is sketched below).
  • Runtime assumptions can be made as long as they are reviewed and deemed reasonable. For example, assuming that ?? (need relevant example).
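
A first-pass check of the collected logs against the FATAL/CRITICAL criterion might look like the following (the log path is a placeholder):

grep -Ei 'fatal|critical' ~/baseline-results/*.log && echo "Review required" || echo "No FATAL or CRITICAL entries found"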

Baseline Evaluation Report

As each baseline is completed, a Baseline Evaluation Report is generated that captures the results of the baseline, an analysis of the results, and any impact that the findings may have on the current Plastic Slices Plan.

Baseline evaluation reports will also capture progress for requirements being tracked, as well as progress towards the overall goals. A final version of the BER is generated to capture a cumulative representation of all Plastic Slices baseline assessments.

Question: Does it make sense to report by major areas? E.g. OF, compute resources, monitoring, etc.?

Baseline Detailed Plans

This section provides a detailed description of each baseline and its respective evaluation details (topology, tools, traffic profiles, etc.), evaluation criteria, and reporting.

Baseline 1

FIXME: This all needs to be cleaned up; I'm just capturing data hastily as I run things.

  • Due: 2011-05-09
  • Summary Description: Ten slices, each moving at least 1 GB of data per day, for 24 hours.
  • Traffic Model: TBD
  • Tools: TBD
  • Results: TBD
  • Monitoring: TBD
  • Assumptions:

Detailed Description:

Cause the experiment running in each slice to move at least 1 GB of data over the course of a 24-hour period. Multiple slices should be moving data simultaneously, but it can be slow, or bursty, as long as it reaches 1 GB total over the course of the day.

The purpose of this baseline is to confirm basic functionality of the experiments, and stability of the aggregates.
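
One simple way to confirm that a host actually moved its share of the 1 GB is to snapshot the data-plane interface byte counters at the start and end of the 24-hour window (the interface name eth1 is a placeholder for whichever interface carries the slice's traffic):

cat /sys/class/net/eth1/statistics/tx_bytes /sys/class/net/eth1/statistics/rx_bytes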

plastic-101

GigaPing, using count=120000, and this table of client/server pairs:

client server server address
ganel.gpolab.bbn.com planetlab5.clemson.edu server=10.42.101.105
planetlab4.clemson.edu pl5.myplc.grnoc.iu.edu server=10.42.101.73
pl4.myplc.grnoc.iu.edu of-planet2.stanford.edu server=10.42.101.91
of-planet1.stanford.edu pl02.cs.washington.edu server=10.42.101.81
pl01.cs.washington.edu wings-openflow-3.wail.wisc.edu server=10.42.101.96
wings-openflow-2.wail.wisc.edu gardil.gpolab.bbn.com server=10.42.101.52

Commands to run on each client:

server=<ipaddr>
sudo ping -i .001 -s $((1500-8-20)) -c 120000 $server
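# -i .001 sends one packet per millisecond (intervals this short require root, hence sudo);
# -s $((1500-8-20)) sets a 1472-byte payload (1500-byte MTU minus 20-byte IP and 8-byte ICMP headers);
# -c 120000 sends 120000 packets, roughly 180 MB of ICMP traffic in each direction per run.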

plastic-102

GigaPing, using count=120000, and this table of client/server pairs:

client server server address
sardis.gpolab.bbn.com planetlab4.clemson.edu server=10.42.102.104
planetlab5.clemson.edu pl4.myplc.grnoc.iu.edu server=10.42.102.72
pl5.myplc.grnoc.iu.edu of-planet4.stanford.edu server=10.42.102.93
of-planet3.stanford.edu pl01.cs.washington.edu server=10.42.102.80
pl02.cs.washington.edu wings-openflow-2.wail.wisc.edu server=10.42.102.95
wings-openflow-3.wail.wisc.edu bain.gpolab.bbn.com server=10.42.102.54

Commands to run on each client:

server=<ipaddr>
sudo ping -i .001 -s $((1500-8-20)) -c 120000 $server

plastic-103

GigaPerf TCP, using port=5103, size=350, and this table of client/server pairs:

client server server address
of-planet1.stanford.edu navis.gpolab.bbn.com server=10.42.103.55
ganel.gpolab.bbn.com pl01.cs.washington.edu server=10.42.103.80
pl02.cs.washington.edu of-planet2.stanford.edu server=10.42.103.91

One-time prep to run on each server:

sudo yum -y install iperf

Commands to run on each server:

server=<ipaddr>
nice -n 19 iperf -B $server -p 5103 -s -i 1

Commands to run on each client:

server=<ipaddr>
nice -n 19 iperf -c $server -p 5103 -n 350M
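# -c runs iperf as a client toward the server address; -p must match the port the server is listening on;
# -n 350M stops after transferring 350 MBytes; nice -n 19 minimizes CPU impact on shared nodes.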

plastic-104

GigaPerf UDP, using port=5104, size=1000, rate=100, and this table of client/server pairs:

client server server address
planetlab4.clemson.edu gardil.gpolab.bbn.com server=10.42.104.52

One-time prep to run on each server:

sudo yum -y install iperf

Commands to run on each server:

server=<ipaddr>
nice -n 19 iperf -u -B $server -p 5104 -s -i 1

Commands to run on each client:

server=<ipaddr>
nice -n 19 iperf -u -c $server -p 5104 -n 1000M -b 100M
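# -u selects UDP; -b 100M paces the stream at roughly 100 Mbit/s;
# -n 1000M stops after sending 1000 MBytes; nice -n 19 minimizes CPU impact on shared nodes.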

plastic-105

GigaPerf TCP, using port=5105, size=350, and this table of client/server pairs:

client server server address
wings-openflow-2.wail.wisc.edu planetlab5.clemson.edu server=10.42.105.105
planetlab4.clemson.edu sardis.gpolab.bbn.com server=10.42.105.53
bain.gpolab.bbn.com wings-openflow-3.wail.wisc.edu server=10.42.105.96

One-time prep to run on each server:

sudo yum -y install iperf

Commands to run on each server:

server=<ipaddr>
nice -n 19 iperf -B $server -p 5105 -s -i 1

Commands to run on each client:

server=<ipaddr>
nice -n 19 iperf -c $server -p 5105 -n 350M

plastic-106

GigaPerf UDP, using port=5106, size=1000, rate=100, and this table of client/server pairs:

client server server address
planetlab5.clemson.edu wings-openflow-2.wail.wisc.edu server=10.42.106.95

One-time prep to run on each server:

sudo yum -y install iperf

Commands to run on each server:

server=<ipaddr>
nice -n 19 iperf -u -B $server -p 5106 -s -i 1

Commands to run on each client:

server=<ipaddr>
nice -n 19 iperf -u -c $server -p 5106 -n 1000M -b 100M

plastic-107

GigaWeb, using count=40, port=4107, file=substrate.doc, md5sum=d4fcf71833327fbfef98be09deef8bfb, and this table of client/server pairs:

client server server address
planetlab5.clemson.edu pl4.myplc.grnoc.iu.edu server=10.42.107.72
pl5.myplc.grnoc.iu.edu pl02.cs.washington.edu server=10.42.107.81
pl01.cs.washington.edu planetlab4.clemson.edu server=10.42.107.104

One-time prep to run on each server:

sudo yum -y install pyOpenSSL patch
rm -rf ~/gigaweb
mkdir -p ~/gigaweb/docroot
cd ~/gigaweb
wget http://code.activestate.com/recipes/442473-simple-http-server-supporting-ssl-secure-communica/download/1/ -O httpsd.py
wget http://groups.geni.net/geni/attachment/wiki/PlasticSlices/Experiments/httpsd.py.patch?format=raw -O httpsd.py.patch
patch httpsd.py httpsd.py.patch
rm httpsd.py.patch

openssl genrsa -passout pass:localhost -des3 -rand /dev/urandom -out localhost.localdomain.key 1024
openssl req -subj /CN=localhost.localdomain -passin pass:localhost -new -key localhost.localdomain.key -out localhost.localdomain.csr
openssl x509 -passin pass:localhost -req -days 3650 -in localhost.localdomain.csr -signkey localhost.localdomain.key -out localhost.localdomain.crt
openssl rsa -passin pass:localhost -in localhost.localdomain.key -out decrypted.localhost.localdomain.key
mv decrypted.localhost.localdomain.key localhost.localdomain.key
cat localhost.localdomain.key localhost.localdomain.crt > localhost.localdomain.pem
rm localhost.localdomain.key localhost.localdomain.crt localhost.localdomain.csr
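# The commands above create a throwaway self-signed certificate: a 1024-bit RSA key and CSR for
# CN=localhost.localdomain, self-signed for 10 years, with the passphrase stripped and the key and
# certificate concatenated into localhost.localdomain.pem for httpsd.py to use.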

Commands to run on each server:

server=<ipaddr>
cd ~/gigaweb/docroot
python ../httpsd.py $server 4107

Commands to run on each client:

server=<ipaddr>
rm -rf ~/gigaweb
mkdir ~/gigaweb
cd ~/gigaweb
for i in {1..40} ; do wget --no-check-certificate https://$server:4107/substrate.doc -O substrate.doc.$i ; done
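# Fetches substrate.doc 40 times over HTTPS, saving copies as substrate.doc.1 through substrate.doc.40;
# --no-check-certificate is needed because the server presents the self-signed certificate generated above.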

Check results on each client:

du -sb .
md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."
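# grep -v prints only checksums that do NOT match the expected value, so a clean run produces no
# grep output and the echo fires; any other output indicates a corrupted transfer.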

Clean up on each client:

cd
rm -rf ~/gigaweb

plastic-108

GigaWeb, using count=40, port=4108, file=substrate.doc, md5sum=d4fcf71833327fbfef98be09deef8bfb, and this table of client/server pairs:

client server server address
wings-openflow-3.wail.wisc.edu of-planet3.stanford.edu server=10.42.108.92
of-planet4.stanford.edu pl5.myplc.grnoc.iu.edu server=10.42.108.73
pl4.myplc.grnoc.iu.edu wings-openflow-2.wail.wisc.edu server=10.42.108.95

One-time prep to run on each server:

sudo yum -y install pyOpenSSL patch
rm -rf ~/gigaweb
mkdir -p ~/gigaweb/docroot
cd ~/gigaweb
wget http://code.activestate.com/recipes/442473-simple-http-server-supporting-ssl-secure-communica/download/1/ -O httpsd.py
wget http://groups.geni.net/geni/attachment/wiki/PlasticSlices/Experiments/httpsd.py.patch?format=raw -O httpsd.py.patch
patch httpsd.py httpsd.py.patch
rm httpsd.py.patch

openssl genrsa -passout pass:localhost -des3 -rand /dev/urandom -out localhost.localdomain.key 1024
openssl req -subj /CN=localhost.localdomain -passin pass:localhost -new -key localhost.localdomain.key -out localhost.localdomain.csr
openssl x509 -passin pass:localhost -req -days 3650 -in localhost.localdomain.csr -signkey localhost.localdomain.key -out localhost.localdomain.crt
openssl rsa -passin pass:localhost -in localhost.localdomain.key -out decrypted.localhost.localdomain.key
mv decrypted.localhost.localdomain.key localhost.localdomain.key
cat localhost.localdomain.key localhost.localdomain.crt > localhost.localdomain.pem
rm localhost.localdomain.key localhost.localdomain.crt localhost.localdomain.csr

Commands to run on each server:

server=<ipaddr>
cd ~/gigaweb/docroot
python ../httpsd.py $server 4108

Commands to run on each client:

server=<ipaddr>
rm -rf ~/gigaweb
mkdir ~/gigaweb
cd ~/gigaweb
for i in {1..40} ; do wget --no-check-certificate https://$server:4108/substrate.doc -O substrate.doc.$i ; done

Check results on each client:

du -sb .
md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."

Clean up on each client:

cd
rm -rf ~/gigaweb

plastic-109

GigaNetcat, using count=20, port=6109, file=substrate.doc, and this table of client/server pairs:

client server server address
navis.gpolab.bbn.com pl5.myplc.grnoc.iu.edu server=10.42.109.73
pl4.myplc.grnoc.iu.edu pl02.cs.washington.edu server=10.42.109.81
pl01.cs.washington.edu planetlab5.clemson.edu server=10.42.109.105
planetlab4.clemson.edu of-planet3.stanford.edu server=10.42.109.92
of-planet4.stanford.edu wings-openflow-3.wail.wisc.edu server=10.42.109.96
wings-openflow-2.wail.wisc.edu ganel.gpolab.bbn.com server=10.42.109.51

Prep to run on each server and client:

sudo yum -y install nc

Commands to run on each server:

server=<ipaddr>
cd ~/plastic-slices
for i in {1..20} ; do nc -l $server 6109 < substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; done
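# nc exits after each connection, so the loop re-listens for each of the 20 transfers; the
# "nc -l <address> <port>" listen syntax assumes the OpenBSD-style netcat installed by the nc
# package above (the traditional netcat would use "nc -l -p <port>" instead).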

Commands to run on each client:

server=<ipaddr>
rm -rf ~/giganetcat
mkdir ~/giganetcat
cd ~/giganetcat
for i in {1..20} ; do nc $server 6109 > substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; mv substrate.doc substrate.doc.$i ; done

Check results on each client:

du -sb .
md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."

Clean up on each client:

cd
rm -rf ~/giganetcat

plastic-110

GigaNetcat, using count=30, port=6110, file=substrate.doc, and this table of client/server pairs:

client server server address
gardil.gpolab.bbn.com pl01.cs.washington.edu server=10.42.110.80
pl02.cs.washington.edu of-planet1.stanford.edu server=10.42.110.90
of-planet2.stanford.edu pl4.myplc.grnoc.iu.edu server=10.42.110.72
pl5.myplc.grnoc.iu.edu sardis.gpolab.bbn.com server=10.42.110.53

Prep to run on each server and client:

sudo yum -y install nc

Commands to run on each server:

server=<ipaddr>
cd ~/plastic-slices
for i in {1..30} ; do nc -l $server 6110 < substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; done

Commands to run on each client:

server=<ipaddr>
rm -rf ~/giganetcat
mkdir ~/giganetcat
cd ~/giganetcat
for i in {1..30} ; do nc $server 6110 > substrate.doc ; echo "completed transfer #$i" ; md5sum substrate.doc ; mv substrate.doc substrate.doc.$i ; done

Check results on each client:

du -sb .
md5sum * | grep -v d4fcf71833327fbfef98be09deef8bfb || echo "All checksums match."

Clean up on each client:

cd
rm -rf ~/giganetcat

Baseline 2

  • Due: 2011-05-16
  • Summary Description: Ten slices, each moving at least 1 GB of data per day, for 72 hours.
  • Traffic Model: TBD
  • Tools: TBD
  • Results: TBD
  • Monitoring: TBD
  • Assumptions:

Detailed Description:

Similar to the previous baseline, cause the experiment running in each slice to move at least 1 GB of data per day, but do so repeatedly for 72 hours.

The purpose of this baseline is to confirm longer-term stability of the aggregates.

Baseline 3

  • Due: 2011-05-23
  • Summary Description: Ten slices, each moving at least 1 GB of data per day, for 144 hours.
  • Traffic Model: TBD
  • Tools: TBD
  • Results: TBD
  • Monitoring: TBD
  • Assumptions:

Detailed Description:

Similar to the previous baseline, cause the experiment running in each slice to move at least 1 GB of data per day, but do so repeatedly for 144 hours. The purpose of this baseline is to confirm even longer-term stability of the aggregates.

Baseline 4

  • Due: 2011-05-30
  • Summary Description: Ten slices, each moving at least 1 Mb/s continuously, for 24 hours.
  • Traffic Model: TBD
  • Tools: TBD
  • Results: TBD
  • Monitoring: TBD
  • Assumptions:

Detailed Description:

Cause the experiment running in each slice to move at least 1 Mb/s continuously over the course of a 24-hour period (approximately 10 GB total; 1 Mb/s for 86,400 seconds is about 10.8 GB).

The purpose of this baseline is to confirm that an experiment can send data continuously without interruption.
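
One way to generate this kind of load, reusing the iperf tooling from Baseline 1, would be a long-running UDP stream paced at roughly 1 Mb/s (this is a sketch; the port and exact invocation are illustrative, not the final traffic model):

server=<ipaddr>
nice -n 19 iperf -u -c $server -p 5104 -b 1M -t 86400   # ~1 Mb/s for 86400 seconds (24 hours) ≈ 10.8 GB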

Baseline 5

  • Due: 2011-06-06
  • Summary Description: Ten slices, each moving at least 10 Mb/s continuously, for 24 hours.
  • Traffic Model: TBD
  • Tools: TBD
  • Results: TBD
  • Monitoring: TBD
  • Assumptions:

Detailed Description:

Similar to the previous baseline, cause the experiment running in each slice to move at least 10 Mb/s continuously over the course of a 24-hour period (approximately 100 GB total).

The purpose of this baseline is to confirm that an experiment can send a higher volume of data continuously without interruption.

Baseline 6

  • Due: 2011-06-13
  • Summary Description: Ten slices, each moving at least 10 Mb/s continuously, for 144 hours.
  • Traffic Model: TBD
  • Tools: TBD
  • Results: TBD
  • Monitoring: TBD
  • Assumptions:

Detailed Description:

Similar to the previous baseline, cause the experiment running in each slice to move at least 10 Mb/s continuously over the course of a 144-hour period.

The purpose of this baseline is to confirm that an experiment can send a higher volume of data continuously without interruption, for several days running.

Baseline 7

  • Due: 2011-06-20
  • Summary Description: Perform an Emergency Stop test while running ten slices, each moving at least 10 Mb/s continuously, for 144 hours.
  • Traffic Model: TBD
  • Tools: TBD
  • Results: TBD
  • Monitoring: TBD
  • Assumptions:

Detailed Description:

Repeat the previous baseline, but call an Emergency Stop while it's running, once per slice for each of the ten slices. Campuses will not be informed in advance about when each Emergency Stop will be called. There will be at least one instance of two simultaneous Emergency Stops, and at least one instance of a single campus being asked to respond to two simultaneous Emergency Stops. After each Emergency Stop, verify that all resources have been successfully restored to service.
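
As a quick post-restoration check (a minimal sketch, not a full verification procedure), each client could re-ping its server's data-plane address from the Baseline 1 tables, for example:

ping -c 10 10.42.101.105    # first plastic-101 pair; repeat for each client/server pair in the affected slices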

GMOC will define precisely how each Emergency Stop test will be conducted, and what resources will be stopped, presumably selecting a combination of campus resources (e.g. disconnecting an on-campus network connection) and backbone resources (e.g. disabling a campus's connection to an inter-campus VLAN).

The purpose of this baseline is to test Emergency Stop procedures.

Baseline 8

  • Due: 2011-06-20
  • Summary Description: Create one slice per second for 1000 seconds; then create and delete one slice per second for 24 hours.
  • Traffic Model: TBD
  • Tools: TBD
  • Results: TBD
  • Monitoring: TBD
  • Assumptions:

Detailed Description:

This baseline uses new temporary slices rather than the existing ten slices: it creates a thousand slices at a rate of one slice per second, and then continues to delete and create a new slice every second for 24 hours. Each slice will include resources at three campuses, selected randomly for each slice. Automated tools will confirm that the resources are available, e.g. by logging in to a host and running 'uname -a'.
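
A sketch of such an availability check, assuming the temporary slices' hosts are listed in a hypothetical hosts.txt and are reachable with the experimenter's ssh key:

for host in $(cat hosts.txt) ; do ssh -i ~/.ssh/geni_key -o ConnectTimeout=10 $host 'uname -a' || echo "$host unavailable" ; done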

The purpose of this baseline is to confirm that many users can create slices and allocate resources at the same time.

Future Baselines

  • Due: 2011-06-27
  • Summary Description: Define additional July baselines.