
EG-EXP-3: ExoGENI Single Site 100 VM Test

This page captures status for test case EG-EXP-3, which verifies the ability to support 100 VMs in one rack. For overall status, see the ExoGENI Acceptance Test Status page.

Test Status

This section captures the status for each step in the acceptance test plan.

Step    | State       | Tickets | Comments
Step 1  | Complete    |         | 50/50 allocation forces use of both the ExoSM and the BBN SM
Step 2  | Complete    |         | To reach 100 VMs, resources from both the ExoSM and the BBN SM are used
Step 3  | In Progress |         | Writing RSpecs
Step 4  |             |         |
Step 5  |             |         |
Step 6  |             |         |
Step 7  |             |         |
Step 8  |             |         |
Step 9  |             |         |
Step 10 |             |         |
Step 11 |             |         |


State Legend        | Description
Pass                | Test completed and met all criteria
Pass: most criteria | Test completed and met most criteria; exceptions documented
Fail                | Test completed and failed to meet criteria
Complete            | Test completed but will require re-execution due to expected changes
Blocked             | Blocked by ticketed issue(s)
In Progress         | Currently under test


Test Plan Steps

The steps in this test case review baseline monitoring, but monitoring features are still being implemented, so the results reported here are not yet final.

The omni_config file used in this test case includes the following aggregate nicknames:

exobbn=,https://bbn-hn.exogeni.net:11443/orca/xmlrpc
exorci=,https://rci-hn.exogeni.net:11443/orca/xmlrpc
exosm=,https://geni.renci.org:11443/orca/xmlrpc
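
These nicknames let the -a option refer to each aggregate manager by a short name. As a point of reference only, they are assumed to live in the aggregate_nicknames section of the omni_config file, in nickname=urn,url form with the URN field left empty:

[aggregate_nicknames]
exobbn=,https://bbn-hn.exogeni.net:11443/orca/xmlrpc
exorci=,https://rci-hn.exogeni.net:11443/orca/xmlrpc
exosm=,https://geni.renci.org:11443/orca/xmlrpc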


Step 1. Request ListResources from BBN ExoGENI.

The following command is executed to collect the list of resources advertised by the BBN ExoGENI rack:

$ omni.py -a exobbn listresources -o

The rack's resources are allocated 50/50 between the local BBN rack SM and the ExoSM, so listresources is run against both aggregate managers to obtain the complete list of resources available within the BBN rack:

$ omni.py -a exosm listresources -o
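
Before pulling the advertisements, each aggregate manager can optionally be checked for reachability with a getversion call:

$ omni.py -a exobbn getversion
$ omni.py -a exosm getversion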

Step 2. Review ListResources output, and identify available resources.

The listresources output file for the ExoSM named rspec-geni-renci-org-11443-orca.xml shows:

<node component_id="urn:publicid:IDN+exogeni.net:bbnvmsite+node+orca-vm-cloud" component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" component_name="orca-vm-cloud" exclusive="false">
            <hardware_type name="orca-vm-cloud">
                  <ns3:node_type type_slots="51"/>
            </hardware_type>
            <available now="true"/>

The listresources output file for the BBN SM named rspec-bbn-hn-exogeni-net-11443-orca.xml shows:

      <node component_id="urn:publicid:IDN+exogeni.net:bbnvmsite+node+orca-vm-cloud" component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" component_name="orca-vm-cloud" exclusive="false">
            <hardware_type name="orca-vm-cloud">
                  <ns3:node_type type_slots="52"/>
            </hardware_type>
            <available now="true"/>
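
The advertised VM counts can also be pulled directly from the saved advertisement files, for example:

$ grep type_slots rspec-geni-renci-org-11443-orca.xml
$ grep type_slots rspec-bbn-hn-exogeni-net-11443-orca.xml

Taken together, the two advertisements show 51 + 52 = 103 available VM slots, which is enough for the 100 VM target.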

Step 3. Write a RSpec that requests 100 VMs evenly distributed across the worker nodes using the default image.

This step cannot be executed for 100 VMs in one sliver, so the step was modified to create two RSpecs of 50 VMs each. The RSpecs are being written.
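
As an illustration of the intended structure only (not the exact RSpecs used in this test), each 50-VM request repeats one node element per VM, bound to the BBN VM site; the schema version, client_id values, and sliver type below are placeholders:

<rspec type="request"
       xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd">
   <!-- one node element per VM; repeated for vm-1 through vm-50 -->
   <node client_id="vm-1" component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" exclusive="false">
      <sliver_type name="m1.small"/>
   </node>
</rspec>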

Step 4. Create a slice.

The slice is created as follows:

$ omni.py createslice EG-EXP-3
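
If the slice needs to outlive its default expiration, it can be renewed (the date below is only an example):

$ omni.py renewslice EG-EXP-3 2012-06-01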

Step 5. Create a sliver in the slice, using the RSpec defined in step 3.

This step cannot be executed for 100 VMs in one sliver, so the step was modified to create two slivers of 50 VMs each. One 50-node sliver is created via the ExoSM:

$ omni.py -a exosm createsliver EG-EXP-3 EG-EXP-3-exosm.rspec

The second 50-node sliver is created via the BBN SM:

$ omni.py -a exobbn createsliver EG-EXP-3 EG-EXP-3-exobbn.rspec
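
The state of each 50-node sliver can then be tracked with sliverstatus until all VMs report ready:

$ omni.py -a exosm sliverstatus EG-EXP-3
$ omni.py -a exobbn sliverstatus EG-EXP-3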

Step 6. Log into several of the VMs, and send traffic to several other systems.
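
As a sketch of how this step can be carried out (the login account and addresses below are placeholders; actual login information comes from the sliverstatus output), a VM is reached with ssh and light traffic is sent to another VM:

$ ssh root@<vm1-address>
vm1# ping -c 10 <vm2-address>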

Step 7. Step up traffic rates to verify VMs continue to operate with realistic traffic loads.
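
One way to raise the traffic level, assuming a traffic generator such as iperf is available on the VM image, is to run a longer multi-stream transfer between VM pairs:

vm2# iperf -s                              # receiver
vm1# iperf -c <vm2-address> -t 60 -P 4     # sender: 60 seconds, 4 parallel streams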

Step 8. Verify that several VMs running on the same worker node have a distinct MAC address for their interface.
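
The interface MAC address can be read on each VM and compared across VMs on the same worker node (interface names may differ by image):

vm1# ifconfig eth0 | grep HWaddr
vm2# ifconfig eth0 | grep HWaddr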

Step 9. Verify for several VMs running on the same worker node, that their MAC addresses are learned on the data plane switch.
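
An indirect, experimenter-side confirmation is that data plane traffic flows between the VMs and that each peer's MAC appears in the other's ARP table; the authoritative check is the forwarding table on the data plane switch, which requires administrative switch access:

vm1# ping -c 3 <vm2-dataplane-address>
vm1# arp -n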

Step 10. Review monitoring statistics and system statistics, including VM isolation (does not include network isolation), and check resource status for CPU, disk, and memory utilization, interface counters, uptime, process counts, and active user counts.
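
On the VM side, the same resource categories can be spot-checked with standard tools, for example:

$ uptime                  # uptime and load
$ vmstat 5 3              # CPU and memory utilization samples
$ df -h                   # disk utilization
$ cat /proc/net/dev       # interface counters
$ ps aux | wc -l          # process count
$ who | wc -l             # active user count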

Step 11. Stop traffic and delete sliver.
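
Once traffic is stopped, the two slivers are deleted, one at each aggregate manager:

$ omni.py -a exosm deletesliver EG-EXP-3
$ omni.py -a exobbn deletesliver EG-EXP-3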