
IG-EXP-5: InstaGENI Network Resources Acceptance Test

This page captures status for the test case IG-EXP-5, which verifies the ability to support OpenFlow operations and integration with meso-scale compute resources and other compute resources external to the InstaGENI rack. For overall status see the InstaGENI Acceptance Test Status page.

Last Update: 08/30/12

Test Status

This section captures the status for each step in the acceptance test plan.

Step      State         Ticket   Comments
Step 1    Complete               Experiment modified to use Utah rack
Step 2    Complete
Step 3    Complete
Step 4    Complete
Step 5    Complete
Step 6    Complete
Step 7    Complete
Step 8    Complete
Step 9    Complete
Step 10   Complete
Step 11   Complete
Step 12   Complete
Step 13   Complete
Step 14   Complete
Step 15   Complete
Step 16   Complete
Step 17   Complete
Step 18   Complete
Step 19   Complete
Step 20   Complete
Step 21   Complete
Step 22   Complete
Step 23   Complete
Step 24   Complete
Step 25   Complete
Step 26   Complete
Step 27   In Progress
Step 28   Complete
Step 29   Complete


State Legend          Description
Pass                  Test completed and met all criteria
Pass: most criteria   Test completed and met most criteria; exceptions documented
Fail                  Test completed and failed to meet criteria
Complete              Test completed but will require re-execution due to expected changes
Blocked               Blocked by ticketed issue(s)
In Progress           Currently under test


Test Plan Steps

This procedure was executed at the Utah InstaGENI rack rather than at BBN as originally planned, due to rack delivery delays. Additionally, the initial run-through of this procedure was modified to use one set of user credentials, running two slices for one experiment.

The following aggregate manager nicknames are defined in the omni_config file used for this test:

ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am
pg=,http://www.emulab.net/protogeni/xmlrpc/am
pg2=,https://www.emulab.net:12369/protogeni/xmlrpc/am/2.0
of-bbn=,https://foam.gpolab.bbn.com:3626/foam/gapi/1
of-clemson=,https://foam.clemson.edu:3626/foam/gapi/1
of-i2=,https://foam.net.internet2.edu:3626/foam/gapi/1
of-ig=,https://foam.utah.geniracks.net:3626/foam/gapi/1
of-uen=,https://foamyflow.chpc.utah.edu:3626/foam/gapi/1
of-rutgers=,https://nox.orbit-lab.org:3626/foam/gapi/1
plc-bbn=,http://myplc.gpolab.bbn.com:12346/
plc-clemson=,http://myplc.clemson.edu:12346/
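
These entries live in the omni_config file's aggregate_nicknames section, in the form nickname=URN,URL (the URN is left blank here). A minimal sketch, assuming the standard omni_config layout:

 [aggregate_nicknames]
 # nickname = URN,URL (the URN may be left empty)
 ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am
 of-ig=,https://foam.utah.geniracks.net:3626/foam/gapi/1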

1. As Experimenter1 (lnevers@bbn.com), determine PG site compute resources and define RSpec

Collected the list of available resources from the compute and OpenFlow aggregate managers:

$ omni.py -a ig-utah listresources -o
$ omni.py -a pg listresources -o
$ omni.py -a of-ig listresources -V1 -o
$ omni.py -a of-i2 listresources -V1 -o

Defined the IG-EXP-5-exp1-pg-site.rspec file to request the ProtoGENI site compute resources.
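
The contents of that RSpec are not reproduced on this page. As a minimal sketch only, assuming the GENI v3 request schema and illustrative node, interface, and address values (not the actual file contents), a compute request of this kind looks roughly like:

 <?xml version="1.0" encoding="UTF-8"?>
 <rspec type="request" xmlns="http://www.geni.net/resources/rspec/3"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.geni.net/resources/rspec/3
                            http://www.geni.net/resources/rspec/3/request.xsd">
   <!-- One PG site node with a dataplane interface; the actual request also
        attaches this interface to the shared OpenFlow VLAN (omitted here). -->
   <node client_id="host1" component_manager_id="urn:publicid:IDN+emulab.net+authority+cm"
         exclusive="true">
     <sliver_type name="raw-pc"/>
     <interface client_id="host1:if0">
       <ip address="10.42.11.32" netmask="255.255.255.0" type="ipv4"/>
     </interface>
   </node>
 </rspec>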

2. Determine remote meso-scale compute resources and define RSpec

The RSpecs for experiment 1 are defined for the meso-scale site at Rutgers, where one WAPG node (pg51) is used via Internet2. Note: WAPG node pg51 stopped appearing in the Emulab listresources output in early August; this was reported but never resolved.

3. Define a request RSpec for OpenFlow network resources at the InstaGENI AM

Defined the RSpec IG-EXP-5-exp1-openflow-ig.rspec for the Utah InstaGENI FOAM aggregate to allow the PG site compute resources to access core VLAN 3716.
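
The file itself is not reproduced here. As a rough sketch only, assuming the GENI OpenFlow v3 RSpec extension used by FOAM, with an illustrative controller URL, group name, and URNs (the datapath ID is the InstaGENI rack switch listed in step 26), such a request looks approximately like:

 <?xml version="1.0" encoding="UTF-8"?>
 <rspec xmlns="http://www.geni.net/resources/rspec/3"
        xmlns:openflow="http://www.geni.net/resources/rspec/ext/openflow/3"
        type="request">
   <openflow:sliver email="lnevers@bbn.com" description="IG-EXP-5 exp1 OpenFlow resources">
     <!-- Controller URL is illustrative -->
     <openflow:controller url="tcp:controller.example.net:6633" type="primary"/>
     <openflow:group name="ig-rack">
       <!-- URNs are illustrative; the dpid is the rack's OpenFlow switch -->
       <openflow:datapath component_id="urn:publicid:IDN+openflow:foam:utah.geniracks.net+datapath+06:d6:00:24:a8:5d:0b:00"
                          component_manager_id="urn:publicid:IDN+openflow:foam:utah.geniracks.net+authority+am"/>
     </openflow:group>
     <openflow:match>
       <openflow:use-group name="ig-rack"/>
       <openflow:packet>
         <openflow:dl_type value="0x800,0x806"/>
         <openflow:nw_dst value="10.42.11.0/24"/>
       </openflow:packet>
     </openflow:match>
   </openflow:sliver>
 </rspec>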

4. Define a request RSpec for OpenFlow network resources at the remote I2 Meso-scale site

RSpecs were defined for the Rutgers FOAM aggregate and the WAPG compute resource at the meso-scale site.

5. Define a request RSpec for the OpenFlow Core resources

The file IG-EXP-5-exp1-openflow-i2.rspec defines the network resources request RSpec for the Internet2 core FOAM aggregate.

6. Create the first slice

Created the first slice with GPO ProtoGENI credentials:

 $ omni.py createslice IG-EXP-5-exp1

7. Create a sliver for the BBN compute resources

Created a sliver at the Utah PG compute resource aggregate:

 $ omni.py -a pg createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-pg-site.rspec

8. Create a sliver at the I2 meso-scale site using FOAM at site

Created a sliver at the Rutgers FOAM aggregate for VLAN 3716:

 $ omni.py -a of-rutgers createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-rutgers.rspec -V1

9. Create a sliver at the Utah InstaGENI AM

Created slivers at the InstaGENI rack FOAM network resource aggregate and at the UEN regional aggregate:

 $ omni.py -a of-ig createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-ig.rspec -V1
 $ omni.py -a of-uen createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-uen.rspec -V1

10. Create a sliver for the OpenFlow resources in the core

Created a sliver at the Internet2 FOAM network resource aggregate:

 $ omni.py -a of-i2 createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-i2.rspec -V1

10a. Create a sliver for all remaining Meso-scale compute and network resources

WAPG nodes are part of the Utah PG aggregate, so a second request to add a resource to the existing sliver does not work; sliver modification is also not an available feature at this time. This step was therefore modified to create a second slice for experiment 1. To issue a second request at the PG site, the following slice and sliver were created for the Rutgers WAPG node:

 $ omni.py createslice IG-EXP-5-exp1a 
 $ omni.py -a pg2 createsliver IG-EXP-5-exp1a --api-version 2 -t GENI 3 IG-EXP-5-exp1-rutgers-wapg.rspec

11. Log in to each of the compute resources and send traffic to the other end-point

Verify that all slivers are ready:

 $ omni.py -a pg sliverstatus IG-EXP-5-exp1 
 $ omni.py -a of-rutgers sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a of-ig sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a of-i2 sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a pg2 sliverstatus IG-EXP-5-exp1a

When all slivers are ready, determine which nodes are assigned for compute resources:

$ omni.py -a pg  sliverstatus IG-EXP-5-exp1 -o
$ omni.py -a pg2  sliverstatus IG-EXP-5-exp1a --api-version 2 -t GENI 3 -o

The above commands created output files from which the assigned hosts can be determined:

 $ egrep "hostname" IG-EXP-5-sliverstatus-SITENAME.json

Alternatively, the gcf/examples/readyToLogin.py script may be used:

$ ./examples/readyToLogin.py -a pg  IG-EXP-5-exp1
...
================================================================================
Aggregate [http://www.emulab.net/protogeni/xmlrpc/am] has a ProtoGENI sliver.

pc444.emulab.net's geni_status is: ready
Login using:
	xterm -e ssh -i ~/.ssh/id_rsa lnevers@pc444.emulab.net -p 31802 &
================================================================================

And for the Rutgers WAPG node:

$ ./examples/readyToLogin.py -a pg2  IG-EXP-5-exp1a -V2 
================================================================================
Aggregate [https://www.emulab.net:12369/protogeni/xmlrpc/am/2.0] has a ProtoGENI sliver.

pg51.emulab.net's geni_status is: ready
Login using:
	xterm -e ssh -i ~/.ssh/id_rsa lnevers@pg51.emulab.net &
================================================================================

12. Verify that traffic is delivered to target

Logged in with the following:

$ ssh lnevers@pc444.emulab.net -p 31802 &

Exchanged traffic with the remote meso-scale resource at Rutgers:

$ ssh lnevers@pg51.emulab.net
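
The traffic exchange itself was a simple dataplane ping between the endpoints. For example, from the PG site node, assuming pg51's dataplane interface carries the address 10.42.11.151 shown in step 26:

 $ ping -c 5 10.42.11.151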

13. Review baseline, GMOC, and meso-scale monitoring statistics

Iperf tests were run for a few scenarios and are captured in this step. The Iperf server is run on the InstaGENI VM for each scenario.
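
The exact Iperf invocations were not recorded on this page; a typical run, assuming default TCP mode and an illustrative server address, is:

 # on the InstaGENI VM (server side)
 $ iperf -s
 # on the remote endpoint (client side), 10-second test toward the server
 $ iperf -c 10.42.11.32 -t 10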

Two InstaGENI VMs on the Shared VLAN 1750

In a scenario where two VMs were reserved on the shared VLAN 1750, Iperf captured the following statistics:

 [ ID] Interval       Transfer     Bandwidth
 [  3]  0.0-10.0 sec  1.10 GBytes  941 Mbits/sec

One InstaGENI VM and one Emulab VM node, both on shared VLAN 1750

In a scenario where one VM was reserved on shared VLAN 1750 at the InstaGENI rack and one VM node was reserved on VLAN 1750 at the Emulab aggregate, Iperf captured the following statistics:

 [ ID] Interval       Transfer     Bandwidth
 [  3]  0.0-10.0 sec  1.10 GBytes  941 Mbits/sec

One InstaGENI VM and one Rutgers WAPG node, both on shared VLAN 1750

In a scenario where one VM was reserved on shared VLAN 1750 at the InstaGENI rack and one WAPG node at Rutgers was reserved on VLAN 1750, Iperf captured the following statistics:

[ ID] Interval Transfer Bandwidth

14. As Experimenter2, determine Utah site compute resources and define RSpec

As Experimenter2 (lnevers2@bbn.com) determined available resources:

$ omni.py -a plc-clemson listresources -o
$ omni.py -a of-clemson listresources -V1 -o
$ omni.py -a of-nlr listresources -V1 -o
$ omni.py -a of-ig listresources -V1 -o

15. Determine remote meso-scale NLR compute resources and define RSpec

The Clemson site was used as the remote meso-scale site; the file IG-EXP-5-myplc-clemson.rspec captures the compute resource request for the Clemson MyPLC node planetlab4.

16. Define a request RSpec for OpenFlow network resources at the Utah InstaGENI AM

Defined the IG-EXP-5-exp2-pg-site.rspec file to request the ProtoGENI site compute resources for experiment 2.

17. Define a request RSpec for OpenFlow network resources at the remote NLR Meso-scale site

Defined the file IG-EXP-5-openflow-clemson.rspec to capture the Clemson FOAM Aggregate network resource request needed to allow the MyPLC node (planetlab4.clemson.edu) access to the OpenFlow network core.

18. Define a request RSpec for the OpenFlow Core resources

Defined IG-EXP-5-openflow-nlr.rspec to capture the NLR Core FOAM Aggregate network resources request.

19. Create the second slice

As experimenter2 (lnevers2@bbn.com) created a slice:

$ omni.py createslice IG-EXP-5-exp2

20. Create a sliver for the Utah compute resources

Created a sliver for the site resources at the Utah PG as follows:

$ omni.py -a pg createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-pg-site.rspec

21. Create a sliver at the meso-scale site using FOAM at site

Created a sliver at the Clemson FOAM aggregate requesting the network resources required to allow the MyPLC host to access the OpenFlow Backbone VLAN 3716:

$ omni.py -a of-clemson createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-clemson.rspec -V1

22. Create a sliver at the Utah InstaGENI AM

Created slivers at the InstaGENI rack FOAM network resource aggregate to allow the PG site resources to access the OpenFlow backbone:

 $ omni.py -a of-ig createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-ig.rspec -V1

23. Create a sliver for the OpenFlow resources in the core

Created a sliver at the NLR FOAM network resource aggregate:

 $ omni.py -a of-nlr createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-nlr.rspec -V1

24. Create a sliver for the meso-scale compute resources

Requested the MyPLC node at the Clemson resource aggregate:

$ omni.py -a plc-clemson createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-myplc-clemson.rspec -V1 

25. Log in to each of the compute resources and send traffic to the other endpoint

Verify that each sliver is ready by checking the "geni_status" for each sliver:

 $ omni.py -a pg sliverstatus IG-EXP-5-exp2 
 $ omni.py -a of-ig sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a of-nlr sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a of-clemson sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a plc-clemson sliverstatus IG-EXP-5-exp2

Verify the status for each compute resource sliver, and use login information when ready:

$ omni.py -a plc-clemson  sliverstatus IG-EXP-5-exp2 -o
$ omni.py -a pg sliverstatus IG-EXP-5-exp2 -o

Used the readyToLogin.py script to determine the state of the sliver and the login information for the PG compute resource sliver:

$ ./examples/readyToLogin.py -a pg IG-EXP-5-exp2
....
================================================================================
Aggregate [http://www.emulab.net/protogeni/xmlrpc/am] has a ProtoGENI sliver.

pc515.emulab.net's geni_status is: ready
Login using:
	xterm -e ssh -i /home/lnevers2/.ssh/geni_key lnevers2@pc515.emulab.net -p 35386 &
================================================================================

Used the readyToLogin.py script to determine the state of the sliver and the login information for the MyPLC compute resource sliver at Clemson:

$ ./examples/readyToLogin.py -a plc-clemson IG-EXP-5-exp2
...
================================================================================
Aggregate [http://myplc.clemson.edu:12346/] has a PlanetLab sliver.

planetlab4.clemson.edu's geni_status is: ready (pl_boot_state:boot) 
Login using:
	xterm -e ssh -i /home/lnevers2/.ssh/geni_key pgenigpolabbbncom_IGEXP5exp2@planetlab4.clemson.edu &
================================================================================

Was able to exchange ping traffic between each of the compute resources.

At PG site resource:

[lnevers2@utah-pg ~]$ ping 10.42.18.104

At Clemson MyPLC site:

[pgenigpolabbbncom_IGEXP5exp2@planetlab4 ~]$ ping 10.42.18.34

26. As Experimenter1, insert flowmods and send packet-outs only for traffic assigned to the slivers

For this portion of testing, the FloodLight OpenFlow controller was used and these nodes were reserved:

  • At the InstaGENI rack, two nodes using the addresses "10.42.11.32" and "10.42.11.33"
  • At Emulab, one node on shared VLAN 1750 using address "10.42.11.34"
  • At Rutgers, WAPG node pg51 using address "10.42.11.151"

First, checked the existing switches:

 $ curl  http://localhost:9090/wm/core/controller/switches/json 
  [{"dpid":"00:00:0e:84:40:39:18:1b"},  ## Internet2 New York
   {"dpid":"06:d6:ac:16:2d:f5:2d:00"},  ## UEN OF Switch
   {"dpid":"00:00:0e:84:40:39:19:96"},  ## Internet2 Atlanta (VLAN 1750 to 3716)
   {"dpid":"00:00:0e:84:40:39:1b:93"},  ## Internet2 Los Angeles OF Switch (VLAN 3716)
   {"dpid":"00:00:06:d6:40:39:1b:93"},  ## Internet2 Los Angeles OF Switch (VLAN 1750 to 3716)
   {"dpid":"00:00:0e:84:40:39:1a:57"},  ## Internet2 OF Houston Switch 
   {"dpid":"00:00:0e:84:40:39:18:58"},  ## Internet2 OF Washington Switch 
   {"dpid":"06:d6:00:24:a8:5d:0b:00"},   ## InstaGENI Rack OF Switch 
   {"dpid":"00:00:00:10:10:17:50:01"}]   ## Rutgers OF Switch

For this section of the test, each of the four hosts ("10.42.11.32" (InstaGENI), "10.42.11.33" (InstaGENI), "10.42.11.34" (Emulab), and "10.42.11.151" (WAPG)) pings the others to generate flows. The following is the list of flows on the FloodLight controller:

$ curl http://localhost:9090/wm/core/switch/06:d6:00:24:a8:5d:0b:00/flow/json
{"06:d6:00:24:a8:5d:0b:00":
[{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"00:1f:29:32:72:b5",
"dataLayerSource":"02:3a:2d:fc:da:7a","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":10,"networkDestination":"10.42.11.151",
"networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.32","networkSourceMaskLen":32,
"networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},"durationSeconds":842,
"durationNanoseconds":387000000,"packetCount":836,"byteCount":0,"tableId":0,"actions":
[{"maxLength":0,"port":19,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"00:1f:29:32:72:b5",
"dataLayerSource":"02:fc:91:ac:8d:d8","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":12,"networkDestination":"10.42.11.151","networkDestinationMaskLen":32,
"networkProtocol":0,"networkSource":"10.42.11.33","networkSourceMaskLen":32,"networkTypeOfService":0,"transportDestination":0,
"transportSource":0,"wildcards":3145952},"durationSeconds":842,"durationNanoseconds":179000000,"packetCount":836,"byteCount":0,
"tableId":0,"actions":[{"maxLength":0,"port":19,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"02:3a:2d:fc:da:7a",
"dataLayerSource":"00:1f:29:32:72:b5","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":19,"networkDestination":"10.42.11.32","networkDestinationMaskLen":32,
"networkProtocol":0,"networkSource":"10.42.11.151","networkSourceMaskLen":32,"networkTypeOfService":0,"transportDestination":0,
"transportSource":0,"wildcards":3145952},"durationSeconds":842,"durationNanoseconds":500000000,"packetCount":836,"byteCount":0,
"tableId":0,"actions":[{"maxLength":0,"port":10,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"02:fc:91:ac:8d:d8",
"dataLayerSource":"00:1f:29:32:72:b5","dataLayerType":"0x0800","dataLayerVirtualLan":-1,"dataLayerVirtualLanPriorityCodePoint":0,
"inputPort":19,"networkDestination":"10.42.11.33","networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.151",
"networkSourceMaskLen":32,"networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},
"durationSeconds":841,"durationNanoseconds":277000000,"packetCount":837,"byteCount":0,"tableId":0,"actions":
[{"maxLength":0,"port":12,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0}

Attempted to insert a flow modification affecting traffic to the WAPG node pg51 (10.42.11.151), but the request was rejected:

$ curl -d '{"switch": "06:d6:00:24:a8:5d:0b:00","name":"flow-mod-1", "cookie":"0", "match":{"dataLayerDestination":"02:e9:c1:7d:03:c7","dataLayerSource":"02:43:26:66:f8:20","dataLayerType":"0x0800", "inputPort":7,"networkDestination":"10.42.11.37","networkDestinationMaskLen":32,"networkSource":"10.42.11.38","networkSourceMaskLen":32,"action":[{"port":7,"type":"OUTPUT"}]  }' http://localhost:9090/wm/staticflowentrypusher/json
{"success":false,"informational":false,"reasonPhrase":"Not Found","uri":"http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5","serverError":false,"connectorError":false,"clientError":true,"globalError":false,"redirection":false,"recoverableError":false,"name":"Not Found","error":true,"throwable":null,"description":"The server has not found anything matching the request URI","code":404}
$ 
 
$ curl -X DELETE -d '{"switch": "06:d6:00:24:a8:5d:0b:00","name":"flow-mod-1", "cookie":"0", "match":{"dataLayerDestination":"02:e9:c1:7d:03:c7","dataLayerSource":"02:43:26:66:f8:20","dataLayerType":"0x0800", "inputPort":7,"networkDestination":"10.42.11.37","networkDestinationMaskLen":32,"networkSource":"10.42.11.38","networkSourceMaskLen":32,"action":[{"port":7,"type":"OUTPUT"}]  }' http://localhost:9090/wm/staticflowentrypusher/json
{"success":false,"informational":false,"reasonPhrase":"Not Found","uri":"http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5","serverError":false,"connectorError":false,"clientError":true,"globalError":false,"redirection":false,"recoverableError":false,"name":"Not Found","error":true,"throwable":null,"description":"The server has not found anything matching the request URI","code":404}
$
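
The 404 "Not Found" responses above suggest that the Static Flow Pusher REST module may not have been enabled in this controller build. If it were available, Floodlight's static flow entries use flat field names rather than the OFMatch-style JSON returned by the flow query. A sketch only, assuming the Static Flow Pusher field names of that era, with port numbers taken from the flow dump above and other values illustrative:

 $ curl -d '{"switch":"06:d6:00:24:a8:5d:0b:00", "name":"flow-mod-1", "priority":"32768",
    "ingress-port":"10", "ether-type":"0x800", "src-ip":"10.42.11.32", "dst-ip":"10.42.11.151",
    "active":"true", "actions":"output=19"}' http://localhost:9090/wm/staticflowentrypusher/json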

27. Verify that traffic is delivered to target according to the flowmods settings

Unable to determine how to define a valid flow-mod; this step remains incomplete.

28. Review baseline, GMOC, and monitoring statistics

Insert Iperf results here:

Reviewed the monitoring information available for this experiment. First, found the sliver in the list of slices:

No image "IG-EXP-5.jpg" attached to GENIRacksHome/InstageniRacks/AcceptanceTestStatus/IG-EXP-5 The selected the detail panel:

No image "IG-EXP-5-detail.jpg" attached to GENIRacksHome/InstageniRacks/AcceptanceTestStatus/IG-EXP-5

Then selected the sliver resources panel:

No image "IG-EXP-5-sliver-resources.jpg" attached to GENIRacksHome/InstageniRacks/AcceptanceTestStatus/IG-EXP-5

Also checked the sliver measurements:

No image "IG-EXP-5-sliver-measuraments.jpg" attached to GENIRacksHome/InstageniRacks/AcceptanceTestStatus/IG-EXP-5

29. Stop traffic and delete slivers

Stopped traffic, and deleted slivers.

As Experimenter1:

 $ omni.py -a pg deletesliver IG-EXP-5-exp1 
 $ omni.py -a of-rutgers deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a of-ig deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a of-i2 deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a pg2 deletesliver IG-EXP-5-exp1a

As Experimenter2:

 $ omni.py -a pg deletesliver IG-EXP-5-exp2 
 $ omni.py -a of-clemson deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a of-ig deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a of-nlr deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a plc-clemson deletesliver IG-EXP-5-exp2
