
Version 46 (modified by lnevers@bbn.com, 7 years ago)


IG-EXP-5: InstaGENI Network Resources Acceptance Test

This page captures status for the test case IG-EXP-5, which verifies the ability to support OpenFlow operations and integration with meso-scale compute resources and other compute resources external to the InstaGENI rack. For overall status see the InstaGENI Acceptance Test Status page.

Last Update: 2013/01/15

Test Status

This section captures the status for each step in the acceptance test plan.

Step State Ticket Comments
Steps 1-26 Complete
Step 27 In Progress
Steps 28-29 Complete


State Legend Description
Pass Test completed and met all criteria.
Pass: most criteria Test completed and met most criteria; exceptions documented.
Fail Test completed and failed to meet criteria.
Complete Test completed but will require re-execution due to expected changes.
Blocked Blocked by ticketed issue(s).
In Progress Currently under test.


Test Plan Steps

This procedure is executed at the GPO InstaGENI rack and uses three sets of credentials to set up three separate experiments. The PG Utah aggregate is used to reserve two WAPG nodes, one at Rutgers (Internet2) and one at Indiana (NLR).

The following aggregate manager nicknames are defined in the omni_config used for this test:

ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am/2.0
of-ig=,https://foam.utah.geniracks.net:3626/foam/gapi/1
pg-utah=,https://www.emulab.net:12369/protogeni/xmlrpc/am/2.0
pg-gpo=,https://pgeni.gpolab.bbn.com/protogeni/xmlrpc/am/2.0 # WAPG nodes


#OpenFlow Aggregates
ig-of-gpo=,https://foam.instageni.gpolab.bbn.com:3626/foam/gapi/1  #GPO  IG
of-gpo=,https://foam.gpolab.bbn.com:3626/foam/gapi/1               #GPO Site
of-nlr=,https://foam.nlr.net:3626/foam/gapi/1                      
of-i2=,https://foam.net.internet2.edu:3626/foam/gapi/1
of-indiana=,https://foam.noc.iu.edu:3626/foam/gapi/1
of-rutgers=,https://nox.orbit-lab.org:3626/foam/gapi/1

1. As Experimenter1 (lnevers@bbn.com), determine PG site compute resources and define RSpec

Collect resource lists from the InstaGENI compute and network aggregate managers for the first experiment:

$ omni.py -a ig-gpo listresources -o
$ omni.py -a ig-of-gpo listresources -V1 -o
$ omni.py -a of-i2 listresources -V1 -o
$ omni.py -a of-nlr listresources -V1 -o
$ omni.py -a of-rutgers listresources -V1 -o

2. Determine remote Meso-scale compute resources and define RSpec

For Experiment 1, a Rutgers WAPG node is used as the remote Meso-scale site. RSpecs were defined for the Rutgers FOAM aggregate and the WAPG compute resource. The following RSpec is used for the Rutgers Meso-scale site:

3. Define a request RSpec for OpenFlow network resources at the InstaGENI AM

The GPO campus resources RSpec IG-EXP-5-exp1-openflow-ig-gpo.rspec accesses OpenFlow VLAN 1750 via the OpenFlow switch in the GPO InstaGENI rack.

The InstaGENI rack accesses the backbone via the GPO OpenFlow switch, which requires a separate request, IG-EXP-5-exp1-openflow-gpo.rspec, to the GPO site FOAM aggregate.

4. Define a request RSpec for OpenFlow network resources at the remote I2 Meso-scale site

The following RSpec is used for the Rutgers FOAM Aggregate:

5. Define a request RSpec for the OpenFlow Core resources

The RSpec IG-EXP-5-exp1-openflow-i2.rspec defines the Internet2 Core FOAM Aggregate network resources request.

The RSpec IG-EXP-5-exp1-openflow-nlr.rspec defines the NLR Core FOAM Aggregate network resources request.

6. Create the first slice

Created the first slice with the GPO ProtoGENI credentials for lnevers@bbn.com:

 $ omni.py createslice IG-EXP-5-exp1

7. Create a sliver for the GPO compute resources

The compute resources at the BBN campus require only a FOAM request to allow connections through the OpenFlow switch in the InstaGENI rack.

 $ omni.py -a of-gpo createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-gpo.rspec -V1

8. Create a sliver at the I2 meso-scale site using FOAM at site

Created a sliver at the Rutgers FOAM for VLAN 3716:

 $ omni.py -a of-rutgers createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-rutgers.rspec -V1

9. Create a sliver at the GPO InstaGENI AM

The GPO InstaGENI FOAM requires a sliver to allow the InstaGENI campus traffic to the core:

 $ omni.py -a ig-of-gpo createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-ig-gpo.rspec -V1

10. Create a sliver for the OpenFlow resources in the core

Created slivers at the Internet2 and NLR FOAM network resource aggregates:

 $ omni.py -a of-i2 createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-i2.rspec -V1
 $ omni.py -a of-nlr createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-openflow-nlr.rspec -V1

10a. Create a sliver for all remaining Meso-scale compute and network resources

WAPG nodes are part of the Utah PG aggregate; created a sliver at the PG site to request the Rutgers WAPG node:

 $ omni.py -a pg createsliver IG-EXP-5-exp1 IG-EXP-5-exp1-rutgers-wapg.rspec

11. Log in to each of the compute resources and send traffic to the other end-point

Verify that all slivers are ready:

 $ omni.py -a ig-of-gpo sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a of-gpo sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a of-i2 sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a of-nlr sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a of-rutgers sliverstatus IG-EXP-5-exp1 -V1
 $ omni.py -a pg sliverstatus IG-EXP-5-exp1 
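The readiness checks above can be scripted. A minimal sketch, assuming the GENI AM API v2 SliverStatus structure (a top-level "geni_status" plus per-resource "geni_resources" entries); the sample document below is illustrative, not output from this run:

```python
import json

# Hedged sample of an AM API v2 SliverStatus result; the URNs are
# placeholders, not values from this test.
sample = json.loads("""
{
  "geni_urn": "urn:publicid:IDN+emulab.net+slice+IG-EXP-5-exp1",
  "geni_status": "ready",
  "geni_resources": [
    {"geni_urn": "urn:publicid:IDN+emulab.net+sliver+1234",
     "geni_status": "ready", "geni_error": ""}
  ]
}
""")

def all_ready(status):
    """True when the sliver and every contained resource report 'ready'."""
    if status.get("geni_status") != "ready":
        return False
    return all(r.get("geni_status") == "ready"
               for r in status.get("geni_resources", []))

print(all_ready(sample))  # True for the sample above
```

The same check can be applied to each file written by `sliverstatus ... -o` before proceeding to login.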

The site campus resources are not allocated by the InstaGENI compute resource aggregate; they are therefore set up before the experiment with interfaces on VLAN 1750 via a port on the GPO InstaGENI switch.

When all slivers are ready, determine login information for the Rutgers node:

$ readyToLogin.py -a pg  IG-EXP-5-exp1

12. Verify that traffic is delivered to target

Exchanged traffic between the remote meso-scale resource at Rutgers and the GPO campus resource siovale:

$ ssh lnevers@pg51.emulab.net

13. Review baseline, GMOC, and meso-scale monitoring statistics

Iperf tests were run for a few scenarios and are captured in this step. The Iperf server is run on the InstaGENI VM for each scenario.

Two InstaGENI VMs on the Shared VLAN 1750

In a scenario where two VMs were reserved on the shared VLAN 1750, Iperf was run to capture the following statistics:

  [ ID] Interval       Transfer     Bandwidth
  [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

One InstaGENI VM and one Emulab VM node, both on shared VLAN 1750

In a scenario where one VM was reserved on the shared VLAN 1750 and one VM node was reserved on VLAN 1750 at the Emulab aggregate, Iperf was run to capture the following statistics:

  [ ID] Interval       Transfer     Bandwidth
  [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

One InstaGENI VM and one Rutgers WAPG node, both on shared VLAN 1750

In a scenario where one VM was reserved on the shared VLAN 1750 and one WAPG node at Rutgers was reserved on VLAN 1750, Iperf was run to capture the following statistics:

  [ ID] Interval       Transfer     Bandwidth
  [  3]  0.0-10.6 sec  3.29 MBytes  2.61 Mbits/sec
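As a quick sanity check on the line-rate scenarios above: iperf reports transfer in binary GBytes (2^30 bytes) but bandwidth in decimal Mbits/sec, so the first two tables can be cross-checked arithmetically:

```python
# 1.10 GBytes transferred over a nominal 10-second interval.
transfer_bits = 1.10 * 2**30 * 8          # iperf GByte = 2**30 bytes
rate_mbps = transfer_bits / 10.0 / 1e6    # decimal Mbit/s, as iperf reports
print(round(rate_mbps, 1))  # 944.9
```

This is within 1% of the reported 941 Mbits/sec (the actual interval is slightly longer than 10 s), consistent with a saturated 1 Gb/s path; the Rutgers WAPG scenario is clearly limited elsewhere.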

14. As Experimenter2, determine Utah site compute resources and define RSpec

As Experimenter2 (lnevers2@bbn.com), determined available resources:

$ omni.py -a plc-clemson listresources -o
$ omni.py -a of-clemson listresources -V1 -o
$ omni.py -a of-nlr listresources -V1 -o
$ omni.py -a of-ig listresources -V1 -o

15. Determine remote meso-scale NLR compute resources and define RSpec

The Clemson site was used as a remote meso-scale site and the file IG-EXP-5-myplc-clemson.rspec captures the Clemson MyPLC node planetlab4 compute resource request.

16. Define a request RSpec for OpenFlow network resources at the Utah InstaGENI AM

Defined the IG-EXP-5-exp2-pg-site.rspec file to request the ProtoGENI site compute resources for Experiment 2.

17. Define a request RSpec for OpenFlow network resources at the remote NLR Meso-scale site

Defined the file IG-EXP-5-openflow-clemson.rspec to capture the Clemson FOAM Aggregate network resource request needed to allow the MyPLC node (planetlab4.clemson.edu) access to the OpenFlow network core.

18. Define a request RSpec for the OpenFlow Core resources

Defined IG-EXP-5-openflow-nlr.rspec to capture the NLR Core FOAM Aggregate network resources request.

19. Create the second slice

As experimenter2 (lnevers2@bbn.com) created a slice:

$ omni.py createslice IG-EXP-5-exp2

20. Create a sliver for the Utah compute resources

Created a sliver for the site resources at the Utah PG as follows:

$ omni.py -a pg createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-pg-site.rspec

21. Create a sliver at the meso-scale site using FOAM at site

Created a sliver at the Clemson FOAM aggregate requesting the network resources required to allow the MyPLC host to access the OpenFlow Backbone VLAN 3716:

$ omni.py -a of-clemson createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-clemson.rspec -V1

22. Create a sliver at the Utah InstaGENI AM

Created slivers at the InstaGENI rack FOAM network resource aggregate to allow the PG site resources to access the OpenFlow backbone:

 $ omni.py -a of-ig createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-ig.rspec -V1

23. Create a sliver for the OpenFlow resources in the core

Created a sliver at the NLR FOAM network resource aggregate:

 $ omni.py -a of-nlr createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-openflow-nlr.rspec -V1

24. Create a sliver for the meso-scale compute resources

Requested the MyPLC node at the Clemson resource aggregate:

$ omni.py -a plc-clemson createsliver IG-EXP-5-exp2 IG-EXP-5-exp2-myplc-clemson.rspec -V1 

25. Log in to each of the compute resources and send traffic to the other endpoint

Verify that each sliver is ready by checking the "geni_status" for each sliver:

 $ omni.py -a pg sliverstatus IG-EXP-5-exp2 
 $ omni.py -a of-ig sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a of-nlr sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a of-clemson sliverstatus IG-EXP-5-exp2 -V1
 $ omni.py -a plc-clemson sliverstatus IG-EXP-5-exp2

Verify the status for each compute resource sliver, and use login information when ready:

$ omni.py -a plc-clemson  sliverstatus IG-EXP-5-exp2 -o
$ omni.py -a pg sliverstatus IG-EXP-5-exp2 -o

Used the readyToLogin.py script to determine the state of the sliver and the login information for the PG compute resource sliver:

$ ./examples/readyToLogin.py -a pg IG-EXP-5-exp2
....
================================================================================
Aggregate [http://www.emulab.net/protogeni/xmlrpc/am] has a ProtoGENI sliver.

pc515.emulab.net's geni_status is: ready
Login using:
	xterm -e ssh -i /home/lnevers2/.ssh/geni_key lnevers2@pc515.emulab.net -p 35386 &
================================================================================

Used the readyToLogin.py script to determine the state of the sliver and the login information for the MyPLC compute resource sliver at Clemson:

$ ./examples/readyToLogin.py -a plc-clemson IG-EXP-5-exp2
...
================================================================================
Aggregate [http://myplc.clemson.edu:12346/] has a PlanetLab sliver.

planetlab4.clemson.edu's geni_status is: ready (pl_boot_state:boot) 
Login using:
	xterm -e ssh -i /home/lnevers2/.ssh/geni_key pgenigpolabbbncom_IGEXP5exp2@planetlab4.clemson.edu &
================================================================================

Was able to exchange ping traffic between each of the compute resources.

At PG site resource:

[lnevers2@utah-pg ~]$ ping 10.42.18.104

At Clemson MyPLC site:

pgenigpolabbbncom_IGEXP5exp2@planetlab4 ~]$ ping 10.42.18.34 

26. As Experimenter1, insert flowmods and send packet-outs only for traffic assigned to the slivers

For this portion of testing, the FloodLight OpenFlow controller was used and these nodes were reserved:

  • At InstaGENI Rack, the two nodes using the addresses "10.42.11.32" and "10.42.11.33"
  • At Emulab on Shared VLAN 1750 one node using address "10.42.11.34"
  • At Rutgers WAPG node pg51 using address "10.42.11.151".

First, checked the existing switches:

 $ curl  http://localhost:9090/wm/core/controller/switches/json 
  [{"dpid":"00:00:0e:84:40:39:18:1b"},  ## Internet2 New York
   {"dpid":"06:d6:ac:16:2d:f5:2d:00"},  ## UEN OF Switch
   {"dpid":"00:00:0e:84:40:39:19:96"},  ## Internet2 Atlanta (VLAN 1750 to 3716)
   {"dpid":"00:00:0e:84:40:39:1b:93"},  ## Internet2 Los Angeles OF Switch (VLAN 3716)
   {"dpid":"00:00:06:d6:40:39:1b:93"},  ## Internet2 Los Angeles OF Switch (VLAN 1750 to 3716)
   {"dpid":"00:00:0e:84:40:39:1a:57"},  ## Internet2 OF Houston Switch 
   {"dpid":"00:00:0e:84:40:39:18:58"},  ## Internet2 OF Washington Switch 
   {"dpid":"06:d6:00:24:a8:5d:0b:00"},   ## InstaGENI Rack OF Switch 
   {"dpid":"00:00:00:10:10:17:50:01"}]   ## Rutgers OF Switch

For this section of the test, each of the four hosts ("10.42.11.32" (InstaGENI), "10.42.11.33" (InstaGENI), "10.42.11.34" (Emulab), and "10.42.11.151" (WAPG)) pings the others to generate flows. The following is a list of flows on the FloodLight controller:

$ curl http://localhost:9090/wm/core/switch/06:d6:00:24:a8:5d:0b:00/flow/json
{"06:d6:00:24:a8:5d:0b:00":
[{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"00:1f:29:32:72:b5",
"dataLayerSource":"02:3a:2d:fc:da:7a","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":10,"networkDestination":"10.42.11.151",
"networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.32","networkSourceMaskLen":32,
"networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},"durationSeconds":842,
"durationNanoseconds":387000000,"packetCount":836,"byteCount":0,"tableId":0,"actions":
[{"maxLength":0,"port":19,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"00:1f:29:32:72:b5",
"dataLayerSource":"02:fc:91:ac:8d:d8","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":12,"networkDestination":"10.42.11.151","networkDestinationMaskLen":32,
"networkProtocol":0,"networkSource":"10.42.11.33","networkSourceMaskLen":32,"networkTypeOfService":0,"transportDestination":0,
"transportSource":0,"wildcards":3145952},"durationSeconds":842,"durationNanoseconds":179000000,"packetCount":836,"byteCount":0,
"tableId":0,"actions":[{"maxLength":0,"port":19,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"02:3a:2d:fc:da:7a",
"dataLayerSource":"00:1f:29:32:72:b5","dataLayerType":"0x0800","dataLayerVirtualLan":-1,
"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":19,"networkDestination":"10.42.11.32","networkDestinationMaskLen":32,
"networkProtocol":0,"networkSource":"10.42.11.151","networkSourceMaskLen":32,"networkTypeOfService":0,"transportDestination":0,
"transportSource":0,"wildcards":3145952},"durationSeconds":842,"durationNanoseconds":500000000,"packetCount":836,"byteCount":0,
"tableId":0,"actions":[{"maxLength":0,"port":10,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0},

{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"02:fc:91:ac:8d:d8",
"dataLayerSource":"00:1f:29:32:72:b5","dataLayerType":"0x0800","dataLayerVirtualLan":-1,"dataLayerVirtualLanPriorityCodePoint":0,
"inputPort":19,"networkDestination":"10.42.11.33","networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.151",
"networkSourceMaskLen":32,"networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},
"durationSeconds":841,"durationNanoseconds":277000000,"packetCount":837,"byteCount":0,"tableId":0,"actions":
[{"maxLength":0,"port":12,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0}
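The flow listing above can be reduced to a readable summary. A sketch over an abbreviated copy of the first entry, keeping only the fields used (field names as reported by Floodlight's /wm/core/switch/&lt;dpid&gt;/flow/json):

```python
# Abbreviated copy of the first flow entry shown above.
flows = {"06:d6:00:24:a8:5d:0b:00": [
    {"match": {"networkSource": "10.42.11.32",
               "networkDestination": "10.42.11.151",
               "inputPort": 10},
     "actions": [{"type": "OUTPUT", "port": 19}],
     "packetCount": 836},
]}

# Print one line per flow: src -> dst, input port, output port(s), packets.
for dpid, entries in flows.items():
    for f in entries:
        m = f["match"]
        out = [a["port"] for a in f["actions"] if a["type"] == "OUTPUT"]
        print(f'{m["networkSource"]} -> {m["networkDestination"]}: '
              f'in {m["inputPort"]}, out {out}, {f["packetCount"]} pkts')
```

Run against the full JSON, this shows the four ping flows above as two symmetric pairs between ports 10/12 and port 19.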

Attempted to insert a flow modification for traffic to the WAPG node pg51 (10.42.11.151), but the request was rejected:

$ curl -d '{"switch": "06:d6:00:24:a8:5d:0b:00","name":"flow-mod-1", "cookie":"0", "match":{"dataLayerDestination":"02:e9:c1:7d:03:c7","dataLayerSource":"02:43:26:66:f8:20","dataLayerType":"0x0800", "inputPort":7,"networkDestination":"10.42.11.37","networkDestinationMaskLen":32,"networkSource":"10.42.11.38","networkSourceMaskLen":32,"action":[{"port":7,"type":"OUTPUT"}]  }' http://localhost:9090/wm/staticflowentrypusher/json
{"success":false,"informational":false,"reasonPhrase":"Not Found","uri":"http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5","serverError":false,"connectorError":false,"clientError":true,"globalError":false,"redirection":false,"recoverableError":false,"name":"Not Found","error":true,"throwable":null,"description":"The server has not found anything matching the request URI","code":404}
$ 
 
$ curl -X DELETE -d '{"switch": "06:d6:00:24:a8:5d:0b:00","name":"flow-mod-1", "cookie":"0", "match":{"dataLayerDestination":"02:e9:c1:7d:03:c7","dataLayerSource":"02:43:26:66:f8:20","dataLayerType":"0x0800", "inputPort":7,"networkDestination":"10.42.11.37","networkDestinationMaskLen":32,"networkSource":"10.42.11.38","networkSourceMaskLen":32,"action":[{"port":7,"type":"OUTPUT"}]  }' http://localhost:9090/wm/staticflowentrypusher/json
{"success":false,"informational":false,"reasonPhrase":"Not Found","uri":"http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.5","serverError":false,"connectorError":false,"clientError":true,"globalError":false,"redirection":false,"recoverableError":false,"name":"Not Found","error":true,"throwable":null,"description":"The server has not found anything matching the request URI","code":404}
$

27. Verify that traffic is delivered to target according to the flowmods settings

Unable to determine how to define a valid flow-mod; this step is incomplete.
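One possible cause of the 404 responses above, offered as an assumption rather than a confirmed diagnosis: Floodlight's Static Flow Pusher expects its own flat, string-valued key names (e.g. "src-ip", "dst-ip", "ingress-port", and an "actions" string), not the camelCase match names returned by the flow listing, and the /wm/staticflowentrypusher/json endpoint only exists when the StaticFlowEntryPusher module is loaded in the controller. A hedged sketch of a payload in that documented format, reusing the switch DPID and addresses from the attempt above:

```python
import json

# Hedged sketch: Static Flow Pusher (0.9-era Floodlight) payload format.
# All values are strings; "actions" is a comma-separated action string.
flow_mod = {
    "switch": "06:d6:00:24:a8:5d:0b:00",
    "name": "flow-mod-1",
    "cookie": "0",
    "priority": "32768",
    "ingress-port": "7",
    "ether-type": "0x800",
    "src-ip": "10.42.11.38",
    "dst-ip": "10.42.11.37",
    "active": "true",
    "actions": "output=7",
}
payload = json.dumps(flow_mod)
print(payload)
```

If correct for this controller build, the payload would be posted with `curl -d '<payload>' http://localhost:9090/wm/staticflowentrypusher/json`; this was not verified against the controller used in this test.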

28. Review baseline, GMOC, and monitoring statistics

Insert Iperf results here:

Reviewed the monitoring information available for this experiment. First found the sliver in the list of slices:

(Screenshot "IG-EXP-5.jpg" is not attached.) Then selected the detail panel:

(Screenshot "IG-EXP-5-detail.jpg" is not attached.)

Then selected the sliver resources panel:

(Screenshot "IG-EXP-5-sliver-resources.jpg" is not attached.)

Also checked the sliver measurements:

(Screenshot "IG-EXP-5-sliver-measuraments.jpg" is not attached.)

29. Stop traffic and delete slivers

Stopped traffic, and deleted slivers.

As Experimenter1:

 $ omni.py -a pg deletesliver IG-EXP-5-exp1 
 $ omni.py -a of-rutgers deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a of-ig deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a of-i2 deletesliver IG-EXP-5-exp1 -V1
 $ omni.py -a pg2 deletesliver IG-EXP-5-exp1a

As Experimenter2:

 $ omni.py -a pg deletesliver IG-EXP-5-exp2 
 $ omni.py -a of-clemson deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a of-ig deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a of-nlr deletesliver IG-EXP-5-exp2 -V1
 $ omni.py -a plc-clemson deletesliver IG-EXP-5-exp2

Attachments (7)