
Version 6 (modified by lnevers@bbn.com, 11 years ago)

IG-EXP-6: InstaGENI and Meso-scale Multi-site OpenFlow Acceptance Test

This page captures status for the test case IG-EXP-6, which verifies InstaGENI rack interoperability with other meso-scale GENI sites. For overall status see the InstaGENI Acceptance Test Status page.

Last Update: 2013/01/07

Test Status

This section captures the status for each step in the acceptance test plan.

|| Step || State || Date completed || Ticket || Comments ||
|| Step 1 ||  ||  ||  ||  ||
|| Step 2 ||  ||  ||  ||  ||
|| Step 3 ||  ||  ||  ||  ||
|| Step 4 ||  ||  ||  ||  ||
|| Step 5 ||  ||  ||  ||  ||
|| Step 6 ||  ||  ||  ||  ||
|| Step 7 ||  ||  ||  ||  ||
|| Step 8 ||  ||  ||  ||  ||
|| Step 9 ||  ||  ||  ||  ||
|| Step 10 ||  ||  ||  ||  ||
|| Step 11 ||  ||  ||  ||  ||
|| Step 12 ||  ||  ||  ||  ||
|| Step 13 ||  ||  ||  ||  ||
|| Step 14 ||  ||  ||  ||  ||
|| Step 15 ||  ||  ||  ||  ||
|| Step 16 ||  ||  ||  ||  ||
|| Step 17 ||  ||  ||  ||  ||
|| Step 18 ||  ||  ||  ||  ||
|| Step 19 ||  ||  ||  ||  ||
|| Step 20 ||  ||  ||  ||  ||
|| Step 21 ||  ||  ||  ||  ||
|| Step 22 ||  ||  ||  ||  ||
|| Step 23 ||  ||  ||  ||  ||
|| Step 24 ||  ||  ||  ||  ||
|| Step 25 ||  ||  ||  ||  ||
|| Step 26 ||  ||  ||  ||  ||
|| Step 27 ||  ||  ||  ||  ||
|| Step 28 ||  ||  ||  ||  ||
|| Step 29 ||  ||  ||  ||  ||
|| Step 30 ||  ||  ||  ||  ||
|| Step 31 ||  ||  ||  ||  ||
|| Step 32 ||  ||  ||  ||  ||
|| Step 33 ||  ||  ||  ||  ||
|| Step 34 ||  ||  ||  ||  ||
|| Step 35 ||  ||  ||  ||  ||
|| Step 36 ||  ||  ||  ||  ||
|| Step 37 ||  ||  ||  ||  ||
|| Step 38 ||  ||  ||  ||  ||
|| Step 39 ||  ||  ||  ||  ||
|| Step 40 ||  ||  ||  ||  ||
|| Step 41 ||  ||  ||  ||  ||


|| State Legend || Description ||
|| Pass (green) || Test completed and met all criteria ||
|| Pass: most criteria (light green) || Test completed and met most criteria; exceptions documented ||
|| Fail (red) || Test completed and failed to meet criteria ||
|| Complete (yellow) || Test completed but will require re-execution due to expected changes ||
|| Blocked (orange) || Blocked by ticketed issue(s) ||
|| In Progress (blue) || Currently under test ||


Test Plan Steps

The tests described on this page are executed on the Utah and GPO InstaGENI racks, as planned. Three user credentials are used to execute the experiments: lnevers@bbn.com, lnevers1@bbn.com, and lnevers2@bbn.com. Additionally, the following aggregate manager nicknames are used:

ig-utah=,https://utah.geniracks.net/protogeni/xmlrpc/am/2.0
ig-gpo=,https://instageni.gpolab.bbn.com/protogeni/xmlrpc/am/2.0
pg-utah=,https://www.emulab.net:12369/protogeni/xmlrpc/am/2.0
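These entries follow omni's `aggregate_nicknames` configuration format (`nickname=urn,url`, with the URN left empty). A minimal `omni_config` sketch using the three URLs above; the note about FOAM nicknames is an assumption based on the commands that follow, since their AM URLs are not listed on this page:

```ini
[aggregate_nicknames]
# One "nickname=urn,url" pair per line; the URN portion may be left empty.
ig-utah=,https://utah.geniracks.net/protogeni/xmlrpc/am/2.0
ig-gpo=,https://instageni.gpolab.bbn.com/protogeni/xmlrpc/am/2.0
pg-utah=,https://www.emulab.net:12369/protogeni/xmlrpc/am/2.0
# The FOAM nicknames used below (ig-of-utah, of-uen, of-nlr, of-gpo,
# ig-of-gpo, ...) would follow the same pattern with their FOAM AM URLs.
```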

Note: The WAPG node at Indiana and the PG Utah node on shared VLAN 1750 are reserved with the same RSpec.

1. As Experimenter1, request ListResources

As user lnevers@bbn.com, request ListResources from the Utah and GPO InstaGENI racks and from FOAM at the Utah InstaGENI rack, the UEN regional, NLR, the GPO site, and the GPO InstaGENI rack:

$ omni.py listresources -a ig-utah -o          # InstaGENI Utah
$ omni.py listresources -a ig-of-utah -V1 -o   # InstaGENI FOAM 
$ omni.py listresources -a of-uen -V1 -o       # FOAM UEN Regional 
$ omni.py listresources -a of-nlr -V1 -o       # FOAM NLR
$ omni.py listresources -a of-gpo -V1 -o       # GPO SITE FOAM
$ omni.py listresources -a ig-of-gpo -V1 -o    # InstaGENI FOAM
$ omni.py listresources -a ig-gpo -o           # InstaGENI GPO 

2. Review ListResources

Output files for all AMs are reviewed to determine resources.
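One quick way to review the saved advertisements is to count the `<node>` elements in each output file. A minimal sketch; the sample RSpec and its component IDs are fabricated for illustration, and in practice the loop runs over the files written by `omni.py ... -o`:

```shell
#!/bin/sh
# Sketch: summarize saved advertisement RSpecs by counting <node> elements.
# Create a tiny sample advertisement so the sketch is self-contained;
# normally these files come from "omni.py listresources ... -o".
cat > rspec-ig-utah.xml <<'EOF'
<rspec type="advertisement">
  <node component_id="urn:publicid:IDN+utah.geniracks.net+node+pc1"/>
  <node component_id="urn:publicid:IDN+utah.geniracks.net+node+pc2"/>
</rspec>
EOF

for f in rspec-*.xml; do
  count=$(grep -c '<node ' "$f")
  echo "$f: $count node(s) advertised"
done
```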

3. Define a request RSpec for a VM at the BBN InstaGENI

Defined an RSpec that requests one VM on shared VLAN 1750 in the GPO rack and one VM on shared VLAN 1750 in the Utah rack.
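A request for one VM attached to a shared VLAN can be sketched as below. This is an assumption-laden illustration, not the RSpec used in the test: the client IDs, sliver type, component manager URN, and the shared-VLAN name `mesoscale-openflow` are placeholders to check against the aggregate's advertisement.

```shell
#!/bin/sh
# Sketch: write a request RSpec for one VM on a shared VLAN.
# ASSUMPTIONS: client_id, sliver_type, component_manager_id, and the
# shared-vlan name are illustrative, not taken from this page.
cat > vlan1750-request.rspec <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<rspec type="request"
       xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:s="http://www.protogeni.net/resources/rspec/ext/shared-vlan/1">
  <node client_id="vm-gpo" exclusive="false"
        component_manager_id="urn:publicid:IDN+instageni.gpolab.bbn.com+authority+cm">
    <sliver_type name="emulab-openvz"/>
    <interface client_id="vm-gpo:if0"/>
  </node>
  <link client_id="vlan1750">
    <interface_ref client_id="vm-gpo:if0"/>
    <s:link_shared_vlan name="mesoscale-openflow"/>
  </link>
</rspec>
EOF
echo "wrote vlan1750-request.rspec"
```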

4. Define a request RSpec for a VM at the Utah InstaGENI

  1. Define request RSpecs for OpenFlow resources from BBN FOAM to access GENI OpenFlow core resources.
  2. Define request RSpecs for OpenFlow core resources at I2 FOAM
  3. Define request RSpecs for OpenFlow core resources at NLR FOAM.
  4. Create the first slice.
  5. Create a sliver in the first slice at each AM, using the RSpecs defined above.
  6. Log in to each of the systems and verify IP address assignment. Send traffic to the other system; leave traffic running.
  7. As Experimenter2, define a request RSpec for one VM and one physical node at BBN InstaGENI.

As user lnevers1@bbn.com, request ListResources from BBN InstaGENI, Utah InstaGENI, and from FOAM at the I2 and NLR sites:

$ omni.py listresources -a ig-utah -o     # InstaGENI Utah 
$ omni.py listresources -a ig-of-utah -V1 -o  # InstaGENI Utah FOAM
$ omni.py listresources -a of-uen -V1 -o      # FOAM UEN Regional   
$ omni.py listresources -a of-nlr -V1 -o      # FOAM NLR
$ omni.py listresources -a of-indiana -V1 -o  # FOAM Indiana 
$ omni.py listresources -a of-i2 -V1 -o       # FOAM Internet2 
$ omni.py listresources -a of-gpo -V1 -o       # GPO SITE FOAM
$ omni.py listresources -a pg-utah  -o
  1. Define a request RSpec for two VMs on the same experiment node at Utah InstaGENI.
  2. Define request RSpecs for OpenFlow resources from BBN FOAM to access GENI OpenFlow core resources.
  3. Define request RSpecs for OpenFlow core resources at I2 FOAM.
  4. Define request RSpecs for OpenFlow core resources at NLR FOAM.
  5. Create a second slice.
  6. Create a sliver in the second slice at each AM, using the RSpecs defined above.
  7. Log in to each of the systems in the slice and send traffic to the other systems; leave traffic running.
  8. As Experimenter3, request ListResources from BBN InstaGENI, BBN meso-scale FOAM, and FOAM at the meso-scale sites (Internet2 site BBN; NLR site TBD).
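The log-in and traffic steps above can be sketched as follows. The interface name, the 10.42.11.0/24 subnet, and the peer address are illustrative assumptions, and the stubbed `ifconfig` output stands in for the real command run on the node:

```shell
#!/bin/sh
# Sketch: after logging in, check that the dataplane interface got an
# address, then leave traffic running in the background.
# ASSUMPTIONS: interface name, subnet, and peer address are placeholders;
# use the values from your sliver manifest.
ifconfig_out=$(cat <<'EOF'
eth1      Link encap:Ethernet
          inet addr:10.42.11.96  Bcast:10.42.11.255  Mask:255.255.255.0
EOF
)

echo "$ifconfig_out" | grep -q 'inet addr:10\.42\.11\.' \
  && echo "dataplane address assigned" > dataplane-check.txt \
  || echo "no dataplane address on eth1" > dataplane-check.txt
cat dataplane-check.txt

# Leave traffic running (run this on the node itself, not in this sketch):
# nohup ping -i 5 10.42.11.97 > ping-peer.log 2>&1 &
```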

As user lnevers2@bbn.com, request ListResources from BBN InstaGENI, Utah PG, and from FOAM at the I2 and NLR sites:

$ omni.py listresources -a pg-utah -o          # PG Utah
$ omni.py listresources -a ig-of-utah -V1 -o   # InstaGENI Utah FOAM
$ omni.py listresources -a of-uen -V1 -o       # FOAM UEN Regional
$ omni.py listresources -a of-nlr -V1 -o       # FOAM NLR
$ omni.py listresources -a of-indiana -V1 -o   # FOAM Indiana
$ omni.py listresources -a of-i2 -V1 -o        # FOAM Internet2
$ omni.py listresources -a of-gpo -V1 -o       # GPO site FOAM
$ omni.py listresources -a eg-of-gpo -V1 -o    # ExoGENI GPO FOAM
$ omni.py listresources -a eg-of-renci -V1 -o  # ExoGENI RENCI FOAM
$ omni.py listresources -a ig-of-gpo -V1 -o    # InstaGENI GPO FOAM
$ omni.py listresources -a pg-utah -o          # PG Utah
$ omni.py listresources -a ig-utah -o          # InstaGENI Utah
$ omni.py listresources -a ig-gpo -o           # InstaGENI GPO
  1. Review ListResources output from all AMs.
  2. Define a request RSpec for a VM at the BBN InstaGENI.
  3. Define a request RSpec for a compute resource at the BBN meso-scale site.
  4. Define a request RSpec for a compute resource at a meso-scale site.
  5. Define request RSpecs for OpenFlow resources to allow connections from the BBN InstaGENI OpenFlow resources to the meso-scale OpenFlow sites (BBN and a second site TBD) over I2 and NLR.
  6. If PG access to OpenFlow is available, define a request RSpec for the PG OpenFlow resource.
  7. Create a third slice.
  8. Create slivers that connect the TBD Internet2 meso-scale OpenFlow site to the BBN InstaGENI site and the BBN meso-scale site, and, if available, to the PG node.
  9. Log in to each of the compute resources in the slice, configure data plane network interfaces on any non-InstaGENI resources as necessary, and send traffic to the other systems; leave traffic running.
  10. Verify that all three experiments continue to run without impacting each other's traffic, and that data is exchanged over the path along which it is supposed to flow.
  11. Review baseline monitoring statistics and checks.
  12. As site administrator, identify all controllers that the BBN InstaGENI OpenFlow switch is connected to.
  13. As Experimenter3, verify that traffic only flows on the network resources assigned to slivers as specified by the controller.
  1. Verify that no default controller, switch fail-open behavior, or other resource other than experimenters' controllers, can control how traffic flows on network resources assigned to experimenters' slivers.
  2. Set the hard and soft timeouts of flowtable entries.
  3. Get switch statistics and flowtable entries for slivers from the OpenFlow switch.
  4. Get layer 2 topology information about slivers in each slice.
  5. Install flows that match only on layer 2 fields, and confirm whether the matching is done in hardware.
  6. If supported, install flows that match only on layer 3 fields, and confirm whether the matching is done in hardware.
  7. Run test for at least 4 hours.
  8. Review monitoring statistics and checks as above.
  9. Delete slivers.
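For the flowtable-statistics step above, the kind of parsing involved can be sketched against Open vSwitch-style `dump-flows` text output. This is an assumption for illustration only: the actual command and output format depend on the rack switch and FOAM, and the sample entries below are fabricated.

```shell
#!/bin/sh
# Sketch: extract hard timeouts and packet counts from flowtable entries.
# The sample imitates "ovs-ofctl dump-flows" text output; the real data
# source and format on the rack switch may differ.
cat > flows.txt <<'EOF'
 cookie=0x0, duration=120.5s, hard_timeout=300, idle_timeout=60, n_packets=4200, dl_vlan=1750 actions=output:2
 cookie=0x0, duration=80.1s, hard_timeout=300, idle_timeout=60, n_packets=17, dl_vlan=1750 actions=output:3
EOF

# Split each entry on "=" and "," and report the fields of interest.
awk -F'[=,]' '{
  for (i = 1; i <= NF; i++) {
    if ($i ~ /hard_timeout/) hard = $(i+1)
    if ($i ~ /n_packets/)    pkts = $(i+1)
  }
  printf "flow %d: hard_timeout=%s n_packets=%s\n", NR, hard, pkts
}' flows.txt
```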
