Version 11 (modified by lnevers@bbn.com, 7 years ago)

Detailed test plan for IG-MON-2: GENI Software Configuration Inspection Test

This page captures status for the test case IG-MON-2, which verifies the ability to determine software configuration in the InstaGENI racks. For overall status see the InstaGENI Acceptance Test Status page.

Last update: 2013-03-05

Test Status

Step  State               Tickets  Notes
1     Color(green,Pass)?
2     Color(green,Pass)?
3
4


State                                Description
Color(green,Pass)?                   Test completed and met all criteria
Color(#98FB98,Pass: most criteria)?  Test completed and met most criteria; exceptions documented
Color(red,Fail)?                     Test completed and failed to meet criteria
Color(yellow,Complete)?              Test completed but will require re-execution due to expected changes
Color(orange,Blocked)?               Blocked by ticketed issue(s)
Color(#63B8FF,In Progress)?          Currently under test


Test Plan Steps

Step 1: determine experimental node allocations

This step verifies the ability to determine which VMs are in use and which are available, as follows:

  • On boss and ops, use available system data sources (process listings, monitoring output, system logs, etc) and/or AM administrative interfaces (Emulab UI, testbed database) to determine the experimental state of each node.
  • For each OpenVZ node found, determine what operating system that node makes available to users.

As a site admin, log in to https://boss.instageni.gpolab.bbn.com, select "red dot" mode, and choose "Experimentation" -> "Node Status" from the pulldown, which shows this page:

[[Image(IG-MON-2-nodes.jpg)]]

On the resulting page, select "physical" from the tabular view listing across the top of the panel. This shows all physical systems in the rack, along with the OS running on each node.

The above shows:

  • pc3 is in use by an experiment (EID) named "jbstmp"
  • pc4 and pc5 are available
  • pc1 and pc2 are part of the "shared-nodes" experiment used to reserve the PCs that provide VMs.

To determine which VMs are in use, while still on the "Experimentation" -> "Node Status" page (in red dot mode), select "virtual" from the tabular view across the top of the panel, which provides a list of VMs in use in the rack:

If further information is needed for any of the VMs on the "virtual" table, simply click on the VM "ID" and all VM information will be shown.
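The same allocation summary can be tallied from any listing of node/experiment pairs obtained from the data sources mentioned above (process listings, testbed database queries, etc). A minimal shell sketch; the input format and helper name are hypothetical, and the sample input reproduces the allocations observed above:

```shell
# Tally node allocation states from "node experiment" pairs on stdin
# (input format and helper name are hypothetical).
classify_nodes() {
  awk '$2 == "free"                          { f++ }
       $2 == "shared-nodes"                  { s++ }
       $2 != "free" && $2 != "shared-nodes"  { e++ }
       END { printf "free=%d shared=%d experiment=%d\n", f, s, e }'
}

# Sample reproducing the allocations observed above:
printf 'pc1 shared-nodes\npc2 shared-nodes\npc3 jbstmp\npc4 free\npc5 free\n' \
  | classify_nodes
# free=2 shared=2 experiment=1
```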

Step 2: determine rack VLAN configuration

Determine all VLANs available to experimenters. For each available VLAN, determine whether it is available for exclusive OpenFlow control. This step verifies that the site administrator can determine how many VLANs are available for use and which are for OpenFlow only.

$ omni.py listresources -a ig-gpo -o

The output file shows the following OpenFlow VLANs:

  <rspec_shared_vlan xmlns="http://www.geni.net/resources/rspec/ext/shared-vlan/1">    
      <available name="mesoscale-openflow"/>    
      <available name="exclusive-openflow-1755"/>    
      <available name="exclusive-openflow-1756"/>    
      <available name="exclusive-openflow-1757"/>    
      <available name="exclusive-openflow-1758"/>    
      <available name="exclusive-openflow-1759"/>    
      <available name="L2-ping-tutorial"/>    
  </rspec_shared_vlan>  
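As a quick sanity check, the exclusive OpenFlow VLANs can be counted directly from the saved listresources output. A minimal shell sketch; the helper name and file path are illustrative, and the sample input recreates the advertisement shown above:

```shell
# Count the exclusive OpenFlow VLANs advertised in a saved listresources
# output file (helper name and file path are illustrative).
count_exclusive_vlans() {
  # Each exclusive VLAN appears as <available name="exclusive-openflow-NNNN"/>
  grep -c 'name="exclusive-openflow-' "$1"
}

# Recreate the shared-vlan advertisement from above as sample input:
cat > /tmp/ig-gpo-rspec.xml <<'EOF'
<rspec_shared_vlan xmlns="http://www.geni.net/resources/rspec/ext/shared-vlan/1">
    <available name="mesoscale-openflow"/>
    <available name="exclusive-openflow-1755"/>
    <available name="exclusive-openflow-1756"/>
    <available name="exclusive-openflow-1757"/>
    <available name="exclusive-openflow-1758"/>
    <available name="exclusive-openflow-1759"/>
    <available name="L2-ping-tutorial"/>
</rspec_shared_vlan>
EOF

count_exclusive_vlans /tmp/ig-gpo-rspec.xml   # prints 5
```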

The following VLANs are available on the switch "procurve2" (dataplane switch) for stitching use:

      <node id="urn:publicid:IDN+instageni.gpolab.bbn.com+node+procurve2">
          <vlanRangeAvailability>3747-3749</vlanRangeAvailability>
          <vlanTranslation>false</vlanTranslation>
      </node>
      <node id="urn:publicid:IDN+instageni.gpolab.bbn.com+node+procurve2">
          <vlanRangeAvailability>2644-2649</vlanRangeAvailability>
          <vlanTranslation>false</vlanTranslation>
      </node>

Additional information can be determined by logging into the dataplane switch and showing VLAN information:

$ ssh boss.instageni.gpolab.bbn.com
[lnevers@boss ~]$ sudo more /usr/testbed/etc/switch.pswd
XXXXX
[lnevers@boss ~]$ ssh manager@procurve2

manager@procurve2's password: 

HP-E5406zl# show vlans 

 Status and Counters - VLAN Information

  Maximum VLANs to support : 256                  
  Primary VLAN : DEFAULT_VLAN    
  Management VLAN : control-hardware

  VLAN ID Name                             | Status     Voice Jumbo
  ------- -------------------------------- + ---------- ----- -----
  1       DEFAULT_VLAN                     | Port-based No    No   
  10      control-hardware                 | Port-based No    No   
  257     _8                               | Port-based No    No   
  1750    _11                              | Port-based No    No   
  1755    _347                             | Port-based No    No   
  1756    _348                             | Port-based No    No   
  1757    _349                             | Port-based No    No   
  1758    _350                             | Port-based No    No   
  1759    _351                             | Port-based No    No   
  3705    _222                             | Port-based No    No   
  3742    _481                             | Port-based No    No   
 
HP-E5406zl# show vlans 1750

 Status and Counters - VLAN Information - VLAN 1750

  VLAN ID : 1750   
  Name : _11                             
  Status : Port-based
  Voice : No 
  Jumbo : No 

  Port Information Mode     Unknown VLAN Status    
  ---------------- -------- ------------ ----------
  E1               Tagged   Learn        Up        
  E4               Tagged   Learn        Up        
  E5               Tagged   Learn        Up        
  E23              Tagged   Learn        Up        
  E24              Tagged   Learn        Up        

The running configuration can also be displayed on the switch to see all configured VLANs as well as details for the OpenFlow VLANs:

HP-E5406zl# show running-config 
<...>
vlan 1 
   name "DEFAULT_VLAN" 
   forbid E3,E6 
   untagged A1-A24,E7-E19,E21-E22 
   no untagged E1-E6,E20,E23-E24 
   no ip address 
   exit 
vlan 10 
   name "control-hardware" 
   untagged E20 
   ip address 10.2.1.253 255.255.255.0 
   ip address 10.3.1.253 255.255.255.0 
   exit 
vlan 1750 
   name "_11" 
   tagged E1,E4-E5,E23-E24 
   no ip address 
   exit 
vlan 3705 
   name "_222" 
   tagged E23-E24 
   no ip address 
   exit 
vlan 1755 
   name "_347" 
   tagged E23-E24 
   no ip address 
   exit 
vlan 1756 
   name "_348" 
   tagged E23-E24 
   no ip address 
   exit 
vlan 1757 
   name "_349" 
   tagged E23-E24 
   no ip address 
   exit 
vlan 1758 
   name "_350" 
   tagged E23-E24 
   no ip address 
   exit 
vlan 1759 
   name "_351" 
   tagged E23-E24 
   no ip address 
   exit 
vlan 257 
   name "_8" 
   untagged E3,E6 
   tagged E1-E2,E4-E5 
   no ip address 
   exit 
vlan 3742 
   name "_481" 
   tagged E1,E4,E24 
   no ip address 
   exit 
<...>
openflow
   vlan 1750
      enable
      controller "tcp:10.3.1.7:6633" fail-secure on
      exit
   vlan 1755
      enable
      controller "tcp:10.3.1.7:6633"
      exit
   vlan 1756
      enable
      controller "tcp:10.3.1.7:6633"
      exit
   vlan 1757
      enable
      controller "tcp:10.3.1.7:6633"
      exit
   vlan 1758
      enable
      controller "tcp:10.3.1.7:6633"
      exit
   vlan 1759
      enable
      controller "tcp:10.3.1.7:6633"
      exit
   exit
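The OpenFlow-enabled VLANs and their controllers can be pulled out of a saved copy of the running-config. A minimal shell sketch; the awk helper and file path are illustrative, and the sample input reproduces part of the running-config above:

```shell
# List "VLAN controller" pairs from a saved ProCurve running-config
# (helper name and file path are illustrative).
parse_openflow_vlans() {
  awk '
    /^openflow/            { in_of = 1 }   # only look inside the openflow stanza
    in_of && /vlan [0-9]+/ { vlan = $2 }
    in_of && /controller/  { gsub(/"/, "", $2); print vlan, $2 }
  ' "$1"
}

# Sample input reproducing part of the running-config above:
cat > /tmp/procurve-config.txt <<'EOF'
vlan 1750
   name "_11"
   exit
openflow
   vlan 1750
      enable
      controller "tcp:10.3.1.7:6633" fail-secure on
      exit
   vlan 1755
      enable
      controller "tcp:10.3.1.7:6633"
      exit
   exit
EOF

parse_openflow_vlans /tmp/procurve-config.txt
# 1750 tcp:10.3.1.7:6633
# 1755 tcp:10.3.1.7:6633
```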

Step 3: determine which GENI SAs are trusted by InstaGENI AM

This step verifies that an experimenter can use the trusted SAs and that the site administrator can determine the full set of trusted GENI Slice Authorities:

Use Omni tools with pgeni.gpolab.bbn.com credentials to query the GPO rack. The omni_config is defined as follows:

[omni]
default_cf = pg
users = lnevers
# ---------- Users ----------
[lnevers]
urn = urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers
keys = /home/lnevers/.ssh/id_rsa.pub
# ---------- Frameworks ----------
[pg]
type = pg
ch = https://www.emulab.net:12369/protogeni/xmlrpc/ch
sa = https://www.pgeni.gpolab.bbn.com:443/protogeni/xmlrpc/sa
cert = /home/lnevers/.ssl/pgeni/encrypted-cleartext.pem
key = /home/lnevers/.ssl/pgeni/encrypted-cleartext.pem

Create a slice and a sliver at the GPO InstaGENI:

$ omni.py createslice ln-pgeni-cred
$ omni.py createsliver  ln-pgeni-cred -a ig-gpo  ./insta-gpo-1vm.rspec 
<...>
INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ln-pgeni-cred expires on 2013-03-06 21:58:49 UTC
<..>

The slice URN shows the pgeni.gpolab.bbn.com SA was used to reserve resources within the rack. Before proceeding, delete the sliver:

$ omni.py deletesliver  ln-pgeni-cred -a ig-gpo 

To verify support for the PG Utah SA, modify the omni_config to use emulab as the default framework and select the URN for the emulab account:

[omni]
default_cf = emulab 
users = lnevers
# ---------- Users ----------
[lnevers]
urn = urn:publicid:IDN+emulab.net+user+lnevers
keys = /home/lnevers/.ssh/id_rsa.pub
# ---------- Frameworks ----------
[emulab]
type = pg
ch = https://www.emulab.net:12369/protogeni/xmlrpc/ch
sa = https://www.emulab.net:12369/protogeni/xmlrpc/sa
cert = ~/.ssl/protogeni/encrypted-cleartext.pem
key = ~/.ssl/protogeni/encrypted-cleartext.pem
verbose=false

Create a new slice with the PG Utah SA credentials and sliver:

$ omni.py createslice ln-pgutah-cred
$ omni.py createsliver ln-pgutah-cred -a ig-gpo ./insta-gpo-1vm.rspec 
<...>
INFO:omni:Slice urn:publicid:IDN+emulab.net+slice+ln-pgutah-cred expires within 1 day on 2013-03-06 03:06:59 UTC
<...>

The slice URN shows the emulab.net SA was used to reserve resources within the rack.
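In both cases the SA in use can be read directly from the slice URN: splitting on '+', the second field names the authority. A small shell sketch (the helper name is illustrative):

```shell
# Extract the slice authority from a GENI slice URN of the form
# urn:publicid:IDN+<authority>+slice+<name> (helper name is illustrative).
sa_from_slice_urn() {
  printf '%s\n' "$1" | cut -d'+' -f2
}

sa_from_slice_urn "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ln-pgeni-cred"
# pgeni.gpolab.bbn.com
sa_from_slice_urn "urn:publicid:IDN+emulab.net+slice+ln-pgutah-cred"
# emulab.net
```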

TO BE DONE:

  • On boss, use available system data sources and/or AM administrative interfaces to determine which GENI slice authorities the InstaGENI AM trusts.
  • On foam, use available system data sources and/or AM administrative interfaces to determine which GENI slice authorities the FOAM AM trusts.

Step 4: determine rack OpenFlow state

Using:

  • From a login to the dataplane switch, view the OpenFlow configuration.
  • On flowvisor, use fvctl to view the set of devices reporting to the FlowVisor
  • Use the GENI AM API to view the set of datapaths advertised by FOAM
  • On boss or ops, use available system or AM tools to determine the configuration which the InstaGENI AM will use to install OpenFlow configuration on the switch and share it with FOAM

Verify:

  • All datapaths on the rack switch report either to FlowVisor or directly to experimental controllers
  • All datapaths on the rack switch which are shared with FlowVisor are advertised by FOAM
  • All datapaths reporting to FlowVisor or to FOAM come from the rack switch
  • A site administrator can look at flowvisor's state using fvctl
  • A site administrator can look at FOAM's state using foamctl
  • A site administrator can look at InstaGENI's OpenFlow configuration
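The "all datapaths match" checks above amount to comparing sorted DPID lists gathered from the switch, FlowVisor, and FOAM. A bash sketch, assuming each tool's DPID list has been saved to a file; the filenames and the sample DPIDs are hypothetical:

```shell
# Print DPIDs that appear in one list but not the other; empty output
# means the two views agree (filenames are hypothetical). Requires bash
# for process substitution.
check_dpid_sets() {
  comm -3 <(sort -u "$1") <(sort -u "$2")
}

# Hypothetical sample DPID lists:
cat > /tmp/fv_dpids.txt <<'EOF'
06:d6:00:01:02:03:04:05
06:d6:00:01:02:03:04:06
EOF
cp /tmp/fv_dpids.txt /tmp/foam_dpids.txt

check_dpid_sets /tmp/fv_dpids.txt /tmp/foam_dpids.txt   # no output: views agree
```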
