IG-EXP-3: InstaGENI Single Site 100 VM Test
This page captures status for the test case IG-EXP-3, which verifies the ability to support 100 VMs in one rack. For overall status, see the InstaGENI Acceptance Test Status page.
Last update: 08/20/12
Due to the current rack configuration, a set of scenarios was tested to capture the findings for each 100 VM approach. The following table summarizes each scenario tested:
Test Scenario | Results | Notes |
Scenario 1: 1 Slice with 100 VMs | Color(red,Fail)? | instaticket:32, Not allowed with current rack configuration |
Scenario 2: 2 Slices with 50 VMs each | Color(red,Fail)? | instaticket:32, Not allowed with current rack configuration |
Scenario 3: 4 Slices with 25 VMs each | Color(red,Fail)? | instaticket:32, Not allowed with current rack configuration |
Scenario 7: 5 Slices with 20 VMs each | Color(green,Pass)? | Allocation: pc3=50 VMs, pc5=40 VMs, pc1=10 VMs |
Scenario 6: 10 Slices with 10 VMs each | Color(green,Pass)? | Allocation: pc3=90 VMs, pc5=10 VMs |
Scenario 4: 50 Slices with 2 VMs each | Color(green,Pass)? | Allocation: pc3=58 VMs, pc5=42 VMs |
Scenario 5: 100 Slices with 1 VM each | Color(green,Pass)? | Allocation: pc3=61 VMs, pc5=39 VMs |
Test Status
This section captures the status for each step in the acceptance test plan.
Step | State | Date completed | Ticket | Comments |
Step 1 | Color(green,Pass)? | |||
Step 2 | Color(green,Pass)? | |||
Step 3 | Color(green,Pass)? | |||
Step 4 | Color(green,Pass)? | |||
Step 5 | Color(red,Fail)? | | instaticket:32 | Cannot create 1 exp w/100 VMs with current rack configuration |
Step 6 | Color(orange,Blocked)? | | | Cannot execute due to step 5 |
Step 7 | Color(orange,Blocked)? | | | Cannot execute due to step 5 |
Step 8 | Color(orange,Blocked)? | | | Cannot execute due to step 5 |
Step 9 | Color(orange,Blocked)? | | | Cannot execute due to step 5 |
Step 10 | Color(orange,Blocked)? | | | Cannot execute due to step 5 |
Step 11 | Color(orange,Blocked)? | | | Cannot execute due to step 5 |
Step 12 | Color(orange,Blocked)? | | | Cannot execute due to step 5 |
Step 13 | Color(red,Fail)? | | instaticket:32 | Cannot create 2 exp w/50 VMs with current rack configuration |
Step 14 | Color(red,Fail)? | | instaticket:32 | Cannot create 4 exp w/25 VMs with current rack configuration |
Step 15 | Color(green,Pass)? | |||
Step 16 | Color(green,Pass)? |
State Legend | Description |
Color(green,Pass)? | Test completed and met all criteria |
Color(#98FB98,Pass: most criteria)? | Test completed and met most criteria. Exceptions documented |
Color(red,Fail)? | Test completed and failed to meet criteria. |
Color(yellow,Complete)? | Test completed but will require re-execution due to expected changes |
Color(orange,Blocked)? | Blocked by ticketed issue(s). |
Color(#63B8FF,In Progress)? | Currently under test. |
Test Plan Steps
The Experimenter1 account uses credentials for lnevers@bbn.com from the GPO PG.
The following nickname is used in the omni_config:
ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am
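For context, this nickname lives in the aggregate_nicknames section of the omni_config. A minimal sketch of such a configuration follows; the framework and user settings shown are illustrative assumptions, not the actual test configuration:

[omni]
# Illustrative framework/user settings; only the nickname below is from this test
default_cf = pg
users = lnevers

[aggregate_nicknames]
ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am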
Step 1. As Experimenter1, request ListResources from Utah InstaGENI
Issue the following:
$ omni.py -a ig-utah listresources --api-version 2 -t GENI 3 --available -o
Step 2. Review ListResources output, and identify available resources
The RSpec shows, for each non-exclusive node, how many VMs the node can provide. For example, the excerpt below:
<node component_id="urn:publicid:IDN+utah.geniracks.net+node+pc5" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_name="pc5" exclusive="false">
  <hardware_type name="pcvm">
    <emulab:node_type type_slots="99"/>
  </hardware_type>
The above states that node pc5 has 99 VM slots available.
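To tally the advertised VM slots across all shared nodes, the saved advertisement can be searched for these attributes. A sketch, assuming the Step 1 output was written to a file named rspec-ig-utah.xml (the actual file name chosen by omni's -o option varies):

$ grep -oE 'component_name="pc[0-9]+"|type_slots="[0-9]+"' rspec-ig-utah.xml

Each component_name line is followed by the type_slots count for that node.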
Step 3. Write the Scenario 1 RSpec that requests 100 VMs evenly distributed across the experiment nodes using the default image
Created a 100 VM grid scenario.
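The full request RSpec is not reproduced here. As a sketch of its structure, a script along the following lines emits the 100 VM node elements; the grid's link elements are omitted, and emulab-openvz is an assumption for the rack's default VM sliver type:

#!/bin/sh
# Sketch: emit 100 VM node elements for a GENI v3 request RSpec.
echo '<?xml version="1.0" encoding="UTF-8"?>'
echo '<rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">'
i=1
while [ $i -le 100 ]; do
  echo "  <node client_id=\"vm-$i\" exclusive=\"false\">"
  # emulab-openvz is an assumed default InstaGENI VM type
  echo "    <sliver_type name=\"emulab-openvz\"/>"
  echo "  </node>"
  i=`expr $i + 1`
done
echo '</rspec>'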
Step 4. Create a slice
$ omni.py -a ig-utah createslice 100vmgrid
Step 5. Create a sliver in the slice, using the RSpec defined in step 3
Attempted to create the 100 VM sliver with the following command:
$ omni.py -a ig-utah createsliver 100vmgrid insta-100vm-grid.rspec --api-version 2 -t GENI 3
FAILED: See instaticket:32
The Request RSpec for the 100VM sliver can be found here
Step 6. Log into several of the VMs, and send traffic to several other systems
FAILED: See instaticket:32
Step 7. Step up traffic rates to verify VMs continue to operate with realistic traffic loads
FAILED: See instaticket:32
Step 8. Review system statistics and VM isolation (does not include network isolation)
FAILED: See instaticket:32
Step 9. Verify that several VMs running on the same experiment node have a distinct MAC address for their interface
FAILED: See instaticket:32
Step 10. Verify for several VMs running on the same experiment node, that their MAC addresses are learned on the data plane switch
FAILED: See instaticket:32
Step 11. Review monitoring statistics and check for resource status for CPU, disk, memory utilization, interface counters, uptime, process counts, and active user counts
FAILED: See instaticket:32
Step 12. Stop traffic and delete sliver
FAILED: See instaticket:32
Step 13. Re-execute the procedure described in steps 1-12 with changes required for Scenario 2 (2 Slices with 50 VMs each)
Commands used:
$ omni.py -a ig-utah createslice 2exp-50vm
$ omni.py -a ig-utah createsliver 2exp-50vm insta-50vm-grid.rspec --api-version 2 -t GENI 3
The first sliver completed. Node allocation was checked with:
$ omni.py -a ig-utah sliverstatus 2exp-50vm --api-version 2 -t GENI 3 -o
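The per-node allocation can also be read from the manifest RSpec returned by createsliver, where each VM's component_id names its physical host. A sketch, assuming the manifest was saved (via -o) as 2exp-50vm-manifest-rspec.xml (the actual file name varies):

$ grep -oE 'component_id="[^"]*\+node\+pc[0-9]+"' 2exp-50vm-manifest-rspec.xml | sort | uniq -c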
Attempted to create a second experiment with 50 VMs:
FAILED: See instaticket:32
The Request RSpec for the 50VM sliver can be found here
Step 14. Re-execute the procedure described in steps 1-12 with changes required for Scenario 3 (4 Slices with 25 VMS each)
With the current rack configuration this scenario is not allowed; only 3 slices could be set up:
$ omni.py -a ig-utah createslice 4exp-25vm
$ omni.py -a ig-utah createsliver 4exp-25vm --api-version 2 -t GENI 3 ./instarspec/insta-25vm-grid.rspec
$ omni.py -a ig-utah createslice 4exp-25vma
$ omni.py -a ig-utah createsliver 4exp-25vma --api-version 2 -t GENI 3 ./instarspec/insta-25vm-grid.rspec

ASSIGN FAILED:
*** 25 nodes of type pcvm requested, but only 20 available nodes of type pcvm found
*** Type precheck failed!
The above experiments resulted in the following allocation:
- pc3=30 VMs
- pc5=30 VMs
- pc1=5 VMs
- pc4=5 VMs
- pc2=5 VMs
The Request RSpec for the 25VM slivers can be found here
Step 15. Re-execute the procedure described in steps 1-12 with changes required for Scenario 4 (50 Slices with 2 VMs each)
Successfully executed this scenario on 06/01/12 and was able to log in to several of the VMs and exchange traffic. The final allocation distribution on the two pcshared nodes was:
- 58 VMs on pc3
- 42 VMs on pc5
The Request RSpec for the 2VM slivers can be found here
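Creating 50 slices and slivers one at a time is tedious; a loop along these lines can automate it. This is a sketch: the slice names match the 2vmslice prefix used in the 08/21/12 run below, while the RSpec file name insta-2vm.rspec is illustrative:

i=1
while [ $i -le 50 ]; do
  omni.py -a ig-utah createslice 2vmslice$i
  omni.py -a ig-utah createsliver 2vmslice$i insta-2vm.rspec --api-version 2 -t GENI 3
  i=`expr $i + 1`
done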
08/21/12 test run
Re-executed the procedure on 08/21/12 and was once again able to reserve 100 VMs (50 slices with 2 VMs each). The final allocation distribution on the three pcshared nodes was:
- 38 VMs on pc5
- 34 VMs on pc3
- 28 VMs on pc1
Extended the test to verify that 150 VMs could be reserved across 75 slices by adding 25 more slices to the 08/21/12 run. The resulting allocation:
- 42 VMs on pc5
- 50 VMs on pc3
- 58 VMs on pc1
Ran the GCF script readyToLogin.py to determine access information for the 150 VMs allocated:
j=1
while [ $j -le 75 ]; do
  ./examples/readyToLogin.py 2vmslice$j -a ig-utah
  sleep 2
  j=`expr $j + 1`
done
The readyToLogin script is part of the GCF examples directory and provides login information for active slivers, which looks as follows:
$ ./examples/readyToLogin.py 2vmslice1 -a ig-utah -q
================================================================================
Aggregate [http://utah.geniracks.net/protogeni/xmlrpc/am] has a ProtoGENI sliver.
pc3.utah.geniracks.net's geni_status is: ready
Login using: xterm -e ssh -i ~/.ssh/id_rsa lnevers@pc3.utah.geniracks.net -p 30778 &
pc3.utah.geniracks.net's geni_status is: ready
Login using: xterm -e ssh -i ~/.ssh/id_rsa lnevers@pc3.utah.geniracks.net -p 30779 &
================================================================================
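The printed commands can be used directly to reach a VM; for example, dropping the xterm wrapper to run a single command on the first VM (port number taken from the output above):

$ ssh -i ~/.ssh/id_rsa lnevers@pc3.utah.geniracks.net -p 30778 uname -a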
Step 16. Re-execute the procedure described in steps 1-12 with changes required for Scenario 5 (100 Slices with 1 VM each)
Successfully executed this scenario and was able to log in to several of the VMs. The final allocation distribution on the two pcshared nodes was:
- 39 VMs on pc5
- 61 VMs on pc3
The Request RSpec for the 1 VM slivers can be found here
NOTE:
An additional scenario, "Scenario 7: 5 Slices with 20 VMs each", was executed and completed successfully. The Request RSpec for the 20 VM slivers can be found here