
IG-EXP-5: InstaGENI Network Resources Acceptance Test

This page captures status for the test case IG-EXP-5, which verifies the ability to support OpenFlow operations and integration with meso-scale compute resources and other compute resources external to the InstaGENI rack. For overall status see the InstaGENI Acceptance Test Status page.

Last Update: 08/30/12

Test Status

This section captures the status for each step in the acceptance test plan.

|| Step || State || Ticket || Comments ||
|| Step 1 || Complete || || Experiment modified to use Utah rack ||
|| Step 2 || Complete || || ||
|| Step 3 || Complete || || ||
|| Step 4 || Complete || || ||
|| Step 5 || Complete || || ||
|| Step 6 || Complete || || ||
|| Step 7 || Complete || || ||
|| Step 8 || Complete || || ||
|| Step 9 || Complete || || ||
|| Step 10 || Complete || || ||
|| Step 11 || Complete || || ||
|| Step 12 || Complete || || ||
|| Step 13 || Complete || || ||
|| Step 14 || Complete || || ||
|| Step 15 || Complete || || ||
|| Step 16 || Complete || || ||
|| Step 17 || Complete || || ||
|| Step 18 || Complete || || ||
|| Step 19 || Complete || || ||
|| Step 20 || Complete || || ||
|| Step 21 || Complete || || ||
|| Step 22 || Complete || || ||
|| Step 23 || Complete || || ||
|| Step 24 || Complete || || ||
|| Step 25 || Complete || || ||
|| Step 26 || In Progress || || ||
|| Step 27 || In Progress || || ||
|| Step 28 || Complete || || ||
|| Step 29 || Complete || || ||


|| State Legend || Description ||
|| Pass || Test completed and met all criteria ||
|| Pass: most criteria || Test completed and met most criteria; exceptions documented ||
|| Fail || Test completed and failed to meet criteria ||
|| Complete || Test completed but will require re-execution due to expected changes ||
|| Blocked || Blocked by ticketed issue(s) ||
|| In Progress || Currently under test ||


Test Plan Steps

This procedure was executed at the Utah InstaGENI rack rather than at BBN as originally planned, due to rack delivery delays. Additionally, the initial run-through of this procedure was modified to use one set of user credentials, running two slices for one experiment.

The following aggregate manager nicknames are defined in the omni_config used for this test:

{{{
ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am
pg=,http://www.emulab.net/protogeni/xmlrpc/am
pg2=,https://www.emulab.net:12369/protogeni/xmlrpc/am/2.0
of-bbn=,https://foam.gpolab.bbn.com:3626/foam/gapi/1
of-clemson=,https://foam.clemson.edu:3626/foam/gapi/1
of-i2=,https://foam.net.internet2.edu:3626/foam/gapi/1
of-ig=,https://foam.utah.geniracks.net:3626/foam/gapi/1
of-uen=,https://foamyflow.chpc.utah.edu:3626/foam/gapi/1
of-rutgers=,https://nox.orbit-lab.org:3626/foam/gapi/1
plc-bbn=,http://myplc.gpolab.bbn.com:12346/
plc-clemson=,http://myplc.clemson.edu:12346/
}}}
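
Each entry maps a short name to an aggregate URN and URL, with the URN left empty here. As a sketch of the surrounding file layout, assuming the stock gcf/omni configuration structure, these lines would sit in an [aggregate_nicknames] section of omni_config:

{{{
[aggregate_nicknames]
# nickname = URN,URL  (the URN may be left empty, as in this test)
ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am
of-ig=,https://foam.utah.geniracks.net:3626/foam/gapi/1
}}}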

1. As Experimenter1 (lnevers@bbn.com), determine PG site compute resources and define RSpec

Collected the list of available resources from the compute and network aggregate managers:

$ omni.py -a ig-utah listresources -o
$ omni.py -a pg listresources -o
$ omni.py -a of-ig listresources -V1 -o
$ omni.py -a of-i2 listresources -V1 -o

Defined the IG-EXP-5-exp1-pg-site.rspec file to capture the ProtoGENI site compute resources request.

2. Determine remote meso-scale compute resources and define RSpec

The RSpecs for experiment 1 are defined for the meso-scale site at Rutgers, where one WAPG node (pg51) is used via Internet2. Note: WAPG node pg51 stopped appearing in the listresources output for Emulab in early August; this was reported, but never resolved.
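
To confirm whether pg51 is being advertised, the saved advertisement can be searched directly; a sketch, assuming the listresources output was written to a file named pg-listresources.xml (the actual filename omni generates will differ):

{{{
 $ grep "pg51" pg-listresources.xml
}}}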

3. Define a request RSpec for OpenFlow network resources at the InstaGENI AM

Defined the RSpec IG-EXP-5-exp1-openflow-ig.rspec for the Utah InstaGENI FOAM aggregate to allow the PG site compute resources to access the core VLAN 3716.

4. Define a request RSpec for OpenFlow network resources at the remote I2 Meso-scale site

RSpecs were defined for the Rutgers FOAM aggregate and the WAPG compute resource used at the meso-scale site.

5. Define a request RSpec for the OpenFlow Core resources

The file IG-EXP-5-exp1-openflow-i2.rspec defines the Internet2 Core FOAM Aggregate network resources request RSpec.

6. Create the first slice

Created the first slice with GPO ProtoGENI credentials:

 $ omni.py createslice IG-EXP-5-exp1

7. Create a sliver for the BBN compute resources

Created a sliver at the PG Utah compute resource aggregate:

 $ omni.py -a pg createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-pg-site.rspec

8. Create a sliver at the I2 meso-scale site using FOAM at site

Created a sliver at the Rutgers FOAM for VLAN 3716:

 $ omni.py -a of-rutgers createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-rutgers.rspec -V1

9. Create a sliver at the Utah InstaGENI AM

Created slivers at the InstaGENI rack FOAM network resource aggregate and at the UEN Regional:

 $ omni.py -a of-ig createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-ig.rspec -V1
 $ omni.py -a of-uen createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-uen.rspec -V1

10. Create a sliver for the OpenFlow resources in the core

Created a sliver at the Internet2 FOAM network resource aggregate:

 $ omni.py -a of-i2 createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-i2.rspec -V1

10a. Create a sliver for all remaining Meso-scale compute and network resources

WAPG nodes are part of the Utah PG aggregate, so a second request to add a resource to the existing sliver does not work; sliver modification is also not an available feature at this time. This step was therefore modified to create a second slice for experiment 1. In order to issue a second request at the PG site, the following slice and sliver were created for the Rutgers WAPG node:

 $ omni.py createslice IG-EXP-5-exp1a 
 $ omni.py -a pg2 createsliver IG-EXP-5-exp1a --api-version 2 -t GENI 3 IG-EXP-5-exp1-rutgers-wapg.rspec

11. Log in to each of the compute resources and send traffic to the other end-point

Verify that all slivers are ready:

 $ omni.py -a pg sliverstatus IG-EXP-5-exp1 
 $ omni.py -a of-rutgers sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a of-ig sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a of-i2 sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a pg2 sliverstatus IG-EXP-5-exp1a

When all slivers are ready, determine which nodes are assigned for compute resources:

$ omni.py -a pg  sliverstatus IG-EXP-5-exp1 -o
$ omni.py -a pg2  sliverstatus IG-EXP-5-exp1a --api-version 2 -t GENI 3 -o

The commands above created output files from which the assigned hosts can be determined:

 $ egrep "hostname" IG-EXP-5-sliverstatus-SITENAME.json

Alternatively, the gcf/examples/readyToLogin.py script may be used:

$ ./examples/readyToLogin.py -a pg  IG-EXP-5-exp1
...
================================================================================
Aggregate [http://www.emulab.net/protogeni/xmlrpc/am] has a ProtoGENI sliver.

pc444.emulab.net's geni_status is: ready
Login using:
	xterm -e ssh -i ~/.ssh/id_rsa lnevers@pc444.emulab.net -p 31802 &
================================================================================

And for the Rutgers WAPG node:

$ ./examples/readyToLogin.py -a pg2  IG-EXP-5-exp1a -V2 
================================================================================
Aggregate [https://www.emulab.net:12369/protogeni/xmlrpc/am/2.0] has a ProtoGENI sliver.

pg51.emulab.net's geni_status is: ready
Login using:
	xterm -e ssh -i ~/.ssh/id_rsa lnevers@pg51.emulab.net &
================================================================================

12. Verify that traffic is delivered to target

Logged in with the following:

$ ssh lnevers@pc444.emulab.net -p 31802 &

Exchanged traffic with the remote meso-scale resource at Rutgers:

$ ssh lnevers@pg51.emulab.net
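
With logins on both end-points, dataplane connectivity can be exercised directly; a minimal sketch, assuming the dataplane interface addresses assigned in the request RSpecs (the address below is a placeholder, not from the test record):

{{{
 # from pc444, ping the dataplane interface of the Rutgers WAPG node
 $ ping -c 5 10.42.11.101
}}}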

13. Review baseline, GMOC, and meso-scale monitoring statistics

Iperf tests were run for a few scenarios and are captured in this step. The Iperf server was run on the InstaGENI VM in each scenario.
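
The measurements below follow the usual Iperf client/server pattern; a sketch of the commands, assuming the default 10-second TCP test (the server address shown is a placeholder):

{{{
 # on the InstaGENI VM (server side)
 $ iperf -s
 # on the remote end-point (client side)
 $ iperf -c 10.42.11.32 -t 10
}}}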

Two InstaGENI VMs on the Shared VLAN 1750

In a scenario where two VMs were reserved on the shared VLAN 1750, Iperf captured the following statistics:

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

One InstaGENI VM and one Emulab VM node, both on shared VLAN 1750

In a scenario where one VM was reserved on the shared VLAN 1750 and one VM node was reserved on VLAN 1750 at the Emulab aggregate, Iperf captured the following statistics:

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

One InstaGENI VM and one Rutgers WAPG node, both on shared VLAN 1750

In a scenario where one VM was reserved on the shared VLAN 1750 and one WAPG node at Rutgers was reserved on VLAN 1750, Iperf was run to capture the following statistics:

[ ID] Interval       Transfer     Bandwidth

14. As Experimenter2, determine Utah site compute resources and define RSpec

As Experimenter2 (lnevers2@bbn.com) determined available resources:

$ omni.py -a plc-clemson listresources -o
$ omni.py -a of-clemson listresources -V1 -o
$ omni.py -a of-nlr listresources -V1 -o
$ omni.py -a of-ig listresources -V1 -o

15. Determine remote meso-scale NLR compute resources and define RSpec

The Clemson site was used as a remote meso-scale site and the file IG-EXP-5-myplc-clemson.rspec captures the Clemson MyPLC node planetlab4 compute resource request.

16. Define a request RSpec for OpenFlow network resources at the Utah InstaGENI AM

Defined the IG-EXP-5-exp2-pg-site.rspec file to capture the ProtoGENI site compute resources request for experiment 2.

17. Define a request RSpec for OpenFlow network resources at the remote NLR Meso-scale site

Defined the file IG-EXP-5-openflow-clemson.rspec to capture the Clemson FOAM Aggregate network resource request needed to allow the MyPLC node (planetlab4.clemson.edu) access to the OpenFlow network core.

18. Define a request RSpec for the OpenFlow Core resources

Defined IG-EXP-5-openflow-nlr.rspec to capture the NLR Core FOAM Aggregate network resources request.

19. Create the second slice

As experimenter2 (lnevers2@bbn.com) created a slice:

$ omni.py createslice IG-EXP-5-exp2

20. Create a sliver for the Utah compute resources

Created a sliver for the site resources at the Utah PG as follows:

$ omni.py -a pg createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-pg-site.rspec

21. Create a sliver at the meso-scale site using FOAM at site

Created a sliver at the Clemson FOAM aggregate requesting the network resources required to allow the MyPLC host to access the OpenFlow Backbone VLAN 3716:

$ omni.py -a of-clemson createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-clemson.rspec -V1

22. Create a sliver at the Utah InstaGENI AM

Created a sliver at the InstaGENI rack FOAM network resource aggregate to allow the PG site resources to access the OpenFlow backbone:

 $ omni.py -a of-ig createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-ig.rspec -V1

23. Create a sliver for the OpenFlow resources in the core

Created a sliver at the NLR FOAM network resource aggregate:

 $ omni.py -a of-nlr createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-nlr.rspec -V1

24. Create a sliver for the meso-scale compute resources

Requested the MyPLC node at the Clemson resource aggregate:

$ omni.py -a plc-clemson createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-myplc-clemson.rspec -V1 

25. Log in to each of the compute resources and send traffic to the other endpoint

Verify that each sliver is ready by checking the "geni_status" for each sliver:

 $ omni.py -a pg sliverstatus IG-EXP-5-exp2 
 $ omni.py -a of-ig sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a of-nlr sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a of-clemson sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a plc-clemson sliverstatus IG-EXP-5-exp2

Verify the status for each compute resource sliver, and use login information when ready:

$ omni.py -a plc-clemson  sliverstatus IG-EXP-5-exp2 -o
$ omni.py -a pg sliverstatus IG-EXP-5-exp2 -o
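
As in step 11, the saved output files can be searched for the status field; a sketch, assuming filenames following the pattern seen earlier (the actual names may differ):

{{{
 $ egrep "geni_status" IG-EXP-5-exp2-sliverstatus-SITENAME.json
}}}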

Used the readyToLogin.py script to determine the state of the sliver and the login information for the PG compute resource sliver:

$ ./examples/readyToLogin.py -a pg IG-EXP-5-exp2
....
================================================================================
Aggregate [http://www.emulab.net/protogeni/xmlrpc/am] has a ProtoGENI sliver.

pc515.emulab.net's geni_status is: ready
Login using:
	xterm -e ssh -i /home/lnevers2/.ssh/geni_key lnevers2@pc515.emulab.net -p 35386 &
================================================================================

Used the readyToLogin.py script to determine the state of the sliver and the login information for the MyPLC compute resource sliver at Clemson:

$ ./examples/readyToLogin.py -a plc-clemson IG-EXP-5-exp2
...
================================================================================
Aggregate [http://myplc.clemson.edu:12346/] has a PlanetLab sliver.

planetlab4.clemson.edu's geni_status is: ready (pl_boot_state:boot) 
Login using:
	xterm -e ssh -i /home/lnevers2/.ssh/geni_key pgenigpolabbbncom_IGEXP5exp2@planetlab4.clemson.edu &
================================================================================

Was able to exchange ping traffic between each of the compute resources.

At PG site resource:

[lnevers2@utah-pg ~]$ ping 10.42.18.104

At Clemson MyPLC site:

[pgenigpolabbbncom_IGEXP5exp2@planetlab4 ~]$ ping 10.42.18.34
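
Successful replies confirm dataplane connectivity through the OpenFlow path; illustrative output only (the timing value is not from the test record):

{{{
 64 bytes from 10.42.18.34: icmp_seq=1 ttl=64 time=45.2 ms
}}}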

26. As Experimenter2, insert flowmods and send packet-outs only for traffic assigned to the slivers

For this portion of testing, the FloodLight OpenFlow controller was used, and additional nodes were reserved within the InstaGENI rack; the two nodes used the addresses "10.42.11.32" and "10.42.11.33". A node on the Emulab shared VLAN 1750 was also part of this experiment, using the address "10.42.11.34".

First, checked the existing switches:

{{{
 $ curl  http://localhost:9090/wm/core/controller/switches/json 
[{"dpid":"00:00:0e:84:40:39:18:1b"},
 {"dpid":"06:d6:ac:16:2d:f5:2d:00"},
 {"dpid":"00:00:0e:84:40:39:19:96"},
 {"dpid":"00:00:0e:84:40:39:1b:93"},
 {"dpid":"00:00:0e:84:40:39:1a:57"},
 {"dpid":"00:00:0e:84:40:39:18:58"},
 {"dpid":"06:d6:00:24:a8:5d:0b:00"},
 {"dpid":"00:01:08:17:f4:b5:2a:00"},
 {"dpid":"00:00:00:10:10:17:50:01"}]
}}}

The InstaGENI !OpenFlow switch is "06:d6:00:24:a8:5d:0b:00"; determined the flows for this DPID:
{{{
$ curl  http://localhost:9090/wm/core/switch/06:d6:00:24:a8:5d:0b:00/flow/json
{"06:d6:00:24:a8:5d:0b:00":
[{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"02:fc:91:ac:8d:d8",
"dataLayerSource":"02:3a:2d:fc:da:7a","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":10,"networkDestination":"10.42.11.33",
"networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.32","networkSourceMaskLen":32,
"networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},"durationSeconds":1259,
"durationNanoseconds":858000000,"packetCount":2493,"byteCount":0,"tableId":0,"actions":
[{"maxLength":0,"port":12,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"02:a5:09:f2:32:bb",
"dataLayerSource":"02:3a:2d:fc:da:7a","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":10,"networkDestination":"10.42.11.34",
"networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.32","networkSourceMaskLen":32,
"networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},"durationSeconds":1229,
"durationNanoseconds":137000000,"packetCount":1222,"byteCount":0,"tableId":0,"actions":
[{"maxLength":0,"port":19,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"02:3a:2d:fc:da:7a",
"dataLayerSource":"02:a5:09:f2:32:bb","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":19,"networkDestination":"10.42.11.32",
"networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.34","networkSourceMaskLen":32,
"networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},
"durationSeconds":1229,"durationNanoseconds":50000000,"packetCount":1221,"byteCount":0,"tableId":0,
"actions":[{"maxLength":0,"port":10,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"02:3a:2d:fc:da:7a","dataLayerSource":"02:fc:91:ac:8d:d8","dataLayerType":"0x0800",
"dataLayerVirtualLan":-1,"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":12,"networkDestination":"10.42.11.32",
"networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.33","networkSourceMaskLen":32,
"networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},"durationSeconds":1259,
"durationNanoseconds":773000000,"packetCount":2492,"byteCount":0,"tableId":0,"actions":
[{"maxLength":0,"port":10,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},
}}}
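
The raw flow dump is hard to read as a single line; it can be piped through Python's built-in JSON pretty-printer for inspection:

{{{
 $ curl http://localhost:9090/wm/core/switch/06:d6:00:24:a8:5d:0b:00/flow/json | python -m json.tool
}}}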

Inserted an invalid flow modification, which was rejected. The flow-mod was then deleted after the failure:
{{{
$ curl -d '{"switch": "06:d6:00:24:a8:5d:0b:00","name":"flow-mod-1", "cookie":"0", "match":{"dataLayerDestination":"02:e9:c1:7d:03:c7","dataLayerSource":"02:43:26:66:f8:20","dataLayerType":"0x0800", "inputPort":7,"networkDestination":"10.42.11.37","networkDestinationMaskLen":32,"networkSource":"10.42.11.38","networkSourceMaskLen":32,"action":[{"port":7,"type":"OUTPUT"}]  }' http://localhost:9090/wm/staticflowentrypusher/json
{"success":false,"informational":false,"reasonPhrase":"Not Found","uri":"http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5","serverError":false,"connectorError":false,"clientError":true,"globalError":false,"redirection":false,"recoverableError":false,"name":"Not Found","error":true,"throwable":null,"description":"The server has not found anything matching the request URI","code":404}lnevers@mallorea:~$ 
 
$ curl -X DELETE -d '{"switch": "06:d6:00:24:a8:5d:0b:00","name":"flow-mod-1", "cookie":"0", "match":{"dataLayerDestination":"02:e9:c1:7d:03:c7","dataLayerSource":"02:43:26:66:f8:20","dataLayerType":"0x0800", "inputPort":7,"networkDestination":"10.42.11.37","networkDestinationMaskLen":32,"networkSource":"10.42.11.38","networkSourceMaskLen":32,"action":[{"port":7,"type":"OUTPUT"}]  }' http://localhost:9090/wm/staticflowentrypusher/json
{"success":false,"informational":false,"reasonPhrase":"Not Found","uri":"http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5","serverError":false,"connectorError":false,"clientError":true,"globalError":false,"redirection":false,"recoverableError":false,"name":"Not Found","error":true,"throwable":null,"description":"The server has not found anything matching the request URI","code":404}
$
}}} 
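
For comparison, a well-formed insertion in the Floodlight Static Flow Pusher syntax of that era would look roughly like the following; this is a sketch based on Floodlight documentation, not a command captured during the test, and the port numbers are placeholders:

{{{
 $ curl -d '{"switch":"06:d6:00:24:a8:5d:0b:00", "name":"flow-mod-1", "priority":"32768", "ingress-port":"10", "active":"true", "actions":"output=12"}' http://localhost:9090/wm/staticflowentrypusher/json
}}}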


27. Verify that traffic is delivered to target according to the flowmod settings
See step 26.

28. Review baseline, GMOC, and monitoring statistics

Insert Iperf results here:

Reviewed the monitoring information available for this experiment. First, found the sliver in the list of slices:

[[Image(IG-EXP-5.jpg)]]
 
Then selected the detail panel:

[[Image(IG-EXP-5-detail.jpg)]]

Then selected the sliver resources panel:

[[Image(IG-EXP-5-sliver-resources.jpg)]]


Also checked the sliver measurements:

[[Image(IG-EXP-5-sliver-measuraments.jpg)]]

29. Stop traffic and delete slivers

Stopped traffic, and deleted slivers.

As Experimenter1:
{{{
 $ omni.py -a pg deletesliver IG-EXP-5-exp1 
 $ omni.py -a of-rutgers deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a of-ig deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a of-i2 deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a pg2 deletesliver IG-EXP-5-exp1a
}}}

As Experimenter2:
{{{
 $ omni.py -a pg deletesliver IG-EXP-5-exp2 
 $ omni.py -a of-clemson deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a of-ig deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a of-nlr deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a plc-clemson deletesliver IG-EXP-5-exp2
}}}
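
Deletion can be confirmed by re-running sliverstatus against each aggregate, which should now fail to find a sliver for the slice; for example:

{{{
 $ omni.py -a pg sliverstatus IG-EXP-5-exp2
}}}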
