EG-EXP-3: ExoGENI Single Site 100 VM Test

This page captures status for test case EG-EXP-3, which verifies the ability to support 100 VMs in one ExoGENI rack. For overall status see the ExoGENI Acceptance Test Status page. Last update: 12/06/12

Due to the rack configuration, a set of scenarios is being tested to capture scaling to 100 VMs. The following table captures the scenarios tested:

Test Scenario | Results | Notes
Test 1: 10 slivers with 10 VMs each | In Progress | 89 VMs ok, 11 VMs failed, exoticket:134
Test 2: 4 slivers with 20 VMs each + 2 slivers with 10 VMs each | In Progress | 88 VMs ok, 12 VMs failed, exoticket:134
Test 3: 20 slivers with 5 VMs each | In Progress | 90 VMs ok, 10 VMs failed, exoticket:134
Test 4: 100 slivers with 1 VM each | In Progress | 41 VMs ok, BBN SM failed, exoticket:135
Test 5: 1 sliver with 100 VMs | Fail | Not allowed with current rack configuration
Test 6: 2 slivers with 50 VMs each | In Progress | Allowed with current rack configuration, but untested

Test Status

This section captures the status for each step in the acceptance test plan.

Step | State | Tickets | Comments
Step 1 | Complete | | Rack 50/50 resource allocation forces use of ExoSM and BBN SM
Step 2 | Complete | |
Step 3 | Complete | |
Step 4 | Complete | |
Step 5 | Blocked | |
Step 6 | | |
Step 7 | | |
Step 8 | | |
Step 9 | | |
Step 10 | | |
Step 11 | | |


State Legend | Description
Pass | Test completed and met all criteria
Pass: most criteria | Test completed and met most criteria. Exceptions documented
Fail | Test completed and failed to meet criteria
Complete | Test completed but will require re-execution due to expected changes
Blocked | Blocked by ticketed issue(s)
In Progress | Currently under test


Test Plan Steps

The Omni configuration file used in this test case includes the following nicknames for the ExoGENI aggregates:

eg-gpo=,https://bbn-hn.exogeni.net:11443/orca/xmlrpc
eg-renci=,https://rci-hn.exogeni.net:11443/orca/xmlrpc
eg-sm=,https://geni.renci.org:11443/orca/xmlrpc
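
These entries follow the omni_config aggregate nickname format (nickname=urn,url, with the URN left empty). As a rough sketch of where they sit in the configuration file, assuming an otherwise standard omni_config (the [omni] values below are placeholders, not taken from the test setup):

[omni]
# placeholder framework and user entries
default_cf = my_framework
users = experimenter1

[aggregate_nicknames]
eg-gpo=,https://bbn-hn.exogeni.net:11443/orca/xmlrpc
eg-renci=,https://rci-hn.exogeni.net:11443/orca/xmlrpc
eg-sm=,https://geni.renci.org:11443/orca/xmlrpc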

Testing uses <sliver_type name="m1.small"/> for the resource requests, which generates a 1-core VM with 128 MB of RAM and 3 GB of disk space. For a full list of available resource types and their associated core, RAM, and disk allocations, see the resource type table on the ExoGENI wiki site.
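
For illustration, a single VM request for this sliver type in a GENI v3 request RSpec looks roughly like the following sketch (the client_id is an arbitrary label chosen for this example; the component_manager_id is the BBN VM site that appears in the Step 2 listresources output):

<?xml version="1.0" encoding="UTF-8"?>
<rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">
  <node client_id="vm-0" component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" exclusive="false">
    <sliver_type name="m1.small"/>
  </node>
</rspec>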

Step 1. Request ListResources from BBN ExoGENI.

The following command is executed to collect the resource list from the BBN ExoGENI rack:

$ omni.py -a eg-gpo listresources -o

The rack resource allocation is 50% of resources for the local rack SM and 50% for the ExoSM; therefore, listresources must be collected from both to get the complete list of resources available within an ExoGENI rack.

$ omni.py -a eg-sm listresources -o

Note: Some of the tests described on this page were executed on the RENCI ExoGENI rack.

Step 2. Review ListResources output, and identify available resources.

The ExoGENI advertisement RSpec indicates the number of potentially available VMs in the "type_slots" element. The listresources output file for the ExoSM, generated in the previous step and named rspec-geni-renci-org-11443-orca.xml, shows:

      <node component_id="urn:publicid:IDN+exogeni.net:bbnvmsite+node+orca-vm-cloud" component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" component_name="orca-vm-cloud" exclusive="false">
            <hardware_type name="orca-vm-cloud">
                  <ns3:node_type type_slots="54"/>
            </hardware_type>
            <available now="true"/>

The listresources output file for the BBN SM named rspec-bbn-hn-exogeni-net-11443-orca.xml shows:

      <node component_id="urn:publicid:IDN+exogeni.net:bbnvmsite+node+orca-vm-cloud" component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" component_name="orca-vm-cloud" exclusive="false">
            <hardware_type name="orca-vm-cloud">
                  <ns3:node_type type_slots="48"/>
            </hardware_type>
            <available now="true"/>
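
A quick way to pull these counts out of the saved advertisement files is to grep for the node_type element; a minimal sketch, assuming the two output files from Step 1 are in the current directory:

$ grep -o 'type_slots="[0-9]*"' rspec-geni-renci-org-11443-orca.xml rspec-bbn-hn-exogeni-net-11443-orca.xml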

Step 3. Write an RSpec that requests 100 VMs evenly distributed across the worker nodes using the default image.

This step cannot be executed for 100 VMs in one sliver due to the current rack resource allocation, which assigns 50% of the resources to the local aggregate manager (BBN SM) and 50% to the global aggregate manager (ExoSM). The test case step was therefore modified to create RSpecs that are used at both the local and the global aggregate manager.
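
As a sketch of how such a request can be assembled, the following shell loop writes a 50-VM request RSpec for the BBN VM site (the node count, client_id pattern, and output file name are illustrative; the component_manager_id is the one advertised in the Step 2 output):

cat > EG-EXP-3.rspec <<'HEADER'
<?xml version="1.0" encoding="UTF-8"?>
<rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">
HEADER
for i in $(seq 0 49); do
  cat >> EG-EXP-3.rspec <<NODE
  <node client_id="vm-$i" component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" exclusive="false">
    <sliver_type name="m1.small"/>
  </node>
NODE
done
echo '</rspec>' >> EG-EXP-3.rspec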

Step 4. Create a slice.

The slice is created as follows:

$ omni.py createslice EG-EXP-3

Step 5. Create a sliver in the slice, using the RSpec defined in step 3.

This step cannot be executed for 100 VMs in one sliver due to the 50/50 resource allocation between the local BBN SM and the global ExoSM. Based on this allocation, the test case description was modified to create two slivers of 50 VMs each in the BBN rack, with one request sent to the BBN SM and one to the ExoSM.

NOTE: RSpecs with 50 VMs have not been tested, but other scenarios have been tested and are covered on this page.

To set up a sliver in the BBN rack via the ExoSM, the following type of command is used:

$ omni.py -a eg-sm createsliver EG-EXP-3 EG-EXP-3.rspec

To set up a sliver in the BBN rack that is created via the BBN SM:

$ omni.py -a eg-gpo createsliver EG-EXP-3 EG-EXP-3.rspec

Note: The same slice name and RSpec can be used concurrently via the BBN SM and the ExoSM; this is the approach used in all tests described on this page.

Step 6. Log into several of the VMs, and send traffic to several other systems.

It was possible to log in to some of the nodes after the slivers were ready.
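
For example, a login and a dataplane ping of this form can be used (the login account and the addresses are placeholders to be taken from the sliver manifest, not values recorded during the test):

$ ssh root@<public-IP-of-VM>                  # address and account from the sliver manifest
$ ping -c 5 <dataplane-IP-of-another-VM>      # run from within the VM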

Step 7. Step up traffic rates to verify VMs continue to operate with realistic traffic loads.
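
Traffic rates can be stepped up with iperf between VM pairs; a minimal sketch, with the receiver address as a placeholder:

$ iperf -s                                         # on the receiving VM
$ iperf -c <dataplane-IP-of-receiver> -t 60        # on the sending VM
$ iperf -c <dataplane-IP-of-receiver> -t 60 -P 4   # repeat with more parallel streams to raise the load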

Step 8. Review system statistics and VM isolation (does not include network isolation)
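
System statistics can be spot-checked on each VM with standard Linux tools, for example:

$ uptime                  # uptime and load averages
$ vmstat 5 5              # CPU and memory utilization samples
$ df -h                   # disk utilization
$ cat /proc/net/dev       # interface counters
$ ps aux | wc -l          # process count
$ who | wc -l             # active user count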

Step 9. Verify that several VMs running on the same worker node have a distinct MAC address for their interface.

Allocated VMs have unique MAC addresses.
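
The check amounts to reading the dataplane interface MAC address on each VM placed on the same worker node and comparing them; a sketch, assuming the dataplane interface is eth1 (the interface name may differ per image):

$ ip link show eth1 | grep link/ether         # run on each VM and compare the reported MACs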

Step 10. Verify for several VMs running on the same worker node, that their MAC addresses are learned on the data plane switch.

The successful exchange of traffic on the dataplane interface shows that the MAC addresses are learned on the data plane switch.

Step 11. Review monitoring statistics and check for resource status for CPU, disk, memory utilization, interface counters, uptime, process counts, and active user counts.

This step is replaced by iperf for the ExoGENI test cases. It has not yet been completed during the scalability tests.

Step 12. Stop traffic and delete sliver.

Slivers are deleted as follows throughout testing:

$ omni.py deletesliver -a eg-sm EG-EXP-3
$ omni.py deletesliver -a eg-gpo EG-EXP-3

Step 13. Re-execute the procedure in steps 1-12 with the changes required for Test 1: 10 slivers with 10 VMs each
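
One way Test 1 can be driven is with five slices, each holding one 10-VM sliver at the ExoSM and one at the BBN SM, for 10 slivers in total; a sketch, where the slice names and the 10-VM RSpec file name are illustrative:

for i in $(seq 1 5); do
  omni.py createslice EG-EXP-3-$i
  omni.py -a eg-sm  createsliver EG-EXP-3-$i EG-EXP-3-10vm.rspec
  omni.py -a eg-gpo createsliver EG-EXP-3-$i EG-EXP-3-10vm.rspec
done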

Step 14. Re-execute the procedure in steps 1-12 with the changes required for Test 2: 4 slivers with 20 VMs each + 2 slivers with 10 VMs each

Step 15. Re-execute the procedure in steps 1-12 with the changes required for Test 3: 20 slivers with 5 VMs each

Step 16. Re-execute the procedure in steps 1-12 with the changes required for Test 4: 100 slivers with 1 VM each

Step 17. Re-execute the procedure in steps 1-12 with the changes required for Test 5: 1 sliver with 100 VMs

Step 18. Re-execute the procedure in steps 1-12 with the changes required for Test 6: 2 slivers with 50 VMs each