Opened 12 years ago
Closed 12 years ago
#46 closed (fixed)
Unable to determine number of available resources from listresources output
Reported by: | lnevers@bbn.com | Owned by: | somebody |
---|---|---|---|
Priority: | major | Milestone: | EG-EXP-3 |
Component: | Experiment | Version: | SPIRAL4 |
Keywords: | vm support | Cc: | |
Dependencies: | | |
Description
Test scenario:
- Collected listresources from BBN SM before test start.
- Created 9 slivers, each with two VMs, at the BBN SM, which used up all available compute resources.
- Collected listresources from BBN SM after resources were exhausted.
- Compared the listresources output collected in steps 1 and 3; the only differences were timestamps (see the sketch below).
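For reference, a minimal sketch of the comparison in the last step, assuming the two advertisements were saved to local files (the filenames and the timestamp attribute names are assumptions, not from the ticket):

    import re, difflib

    def normalize(path):
        text = open(path).read()
        # mask generated/expires timestamps so only substantive changes show up
        return re.sub(r'(generated|expires)="[^"]*"', r'\1="..."', text).splitlines()

    diff = difflib.unified_diff(normalize("bbn-ad-before.xml"),
                                normalize("bbn-ad-after.xml"), lineterm="")
    print("\n".join(diff))   # prints nothing once timestamps are masked, matching what was observed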
I had assumed that the following listresources information was describing how many VMs are available at the BBN SM:
<node component_id="urn:publicid:IDN+geni-orca.renci.org+node+bbnvmsite.rdf#bbnvmsite/Domain/vm"
      component_manager_id="urn:publicid:IDN+geni-orca.renci.org+authority+bbnvmsite.rdf#bbnvmsite/Domain/vm+orca-sm"
      component_name="bbnvmsite.rdf#bbnvmsite/Domain/vm"
      exclusive="false">
  <available now="true"/>
  <hardware_type name="orca-vm-cloud">
    <ns3:node_type type_slots="36"/>   <<<<==== Available VM count???
  </hardware_type>
This does not seem to be the case.
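As a side note, the fields in question can be pulled out of a saved advertisement with a short script; a minimal sketch, assuming the Ad was written to bbn-ad.xml (the filename is a placeholder, not from the ticket):

    import xml.etree.ElementTree as ET

    def local(tag):
        # strip the '{namespace}' prefix that ElementTree keeps on every tag
        return tag.rsplit("}", 1)[-1]

    tree = ET.parse("bbn-ad.xml")
    for node in tree.iter():
        if local(node.tag) != "node":
            continue
        avail = [e.get("now") for e in node.iter() if local(e.tag) == "available"]
        slots = [e.get("type_slots") for e in node.iter() if local(e.tag) == "node_type"]
        print(node.get("component_id"), "available:", avail, "type_slots:", slots)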
Q1. How do I tell from listresources how many VMs can be reserved?
Q2. Only 18 VMs could be reserved at each of the ExoSM and the BBN SM. This is far fewer resources than I expected. Is there a per-user limit on access to compute resources?
Change History (4)
comment:1 Changed 12 years ago by
comment:2 Changed 12 years ago by
This ticket actually contains two issues:
- Cannot determine how many VMs are available for allocation.
- Only able to get 18 VMs at each of the BBN SM and the ExoSM. Overall, the BBN rack has only 36 VMs (18 for the local SM and 18 for the ExoSM); this number is lower than expected.
comment:3 Changed 12 years ago by
Capturing comments from the email exchange prior to the 6/6 meeting:
On 6/5/12 7:56 PM, Luisa Nevers wrote:
- Ticket 46 - This ticket contains several issues:
- Cannot determine how many VMs are available for allocation from the listresources information. There is a type_slots="36" attribute on the site's ns3 node_type element. The type_slots value seems to indicate the overall rack resources (local SM + ExoSM), but this counter never changes; even when resources are exhausted, it still shows 36.
- Overall, only able to reserve 18 VMs (whether from the ExoSM or from the BBN SM; 36 total for the BBN rack). 18 VMs is less than the number expected for the BBN rack; is there a per-user upper limit, or is 18 VMs what I should be getting?
- The number of VMs that I am able to reserve does not change based on the sliver type that is requested: I always get 18 VMs, whether using m1.small or m1.large. There was an expectation, perhaps incorrect, that the sliver type affects the number of VMs that can be reserved. Does it?
On 6/6/12 10:28 AM, Ilia Baldine wrote:
Ticket 46.1: that is the correct behavior for the current implementation.
46.2 is also the correct behavior, as resources are split 50/50 between the rack SM and the ExoSM.
46.3 is also correct: currently all VM types have only 1 core, and there are a total of 36 cores available in the rack. We will be adding multi-core VMs this summer (probably not before GEC14).
On 6/6/12 10:47 AM, Aaron Helsinger wrote:
I think the confusion, however, is that you advertise 36 slots as available. However, due to the 50/50 split, only 18 are available from a given Orca Broker or GENI AM. I would have expected the Ad RSpec to reflect the split.
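Putting the numbers from this exchange together, a quick back-of-the-envelope check (the arithmetic below is illustrative, not from the ticket):

    total_slots = 36     # slots advertised in the Ad RSpec = cores in the rack
    cores_per_vm = 1     # per the reply above: all current VM types use a single core
    sm_share = 0.5       # resources are split 50/50 between the rack SM and the ExoSM
    vms_per_sm = int(total_slots * sm_share) // cores_per_vm
    print(vms_per_sm)    # 18, matching what was observed at each SM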
comment:4 Changed 12 years ago by
Resolution: | → fixed |
---|---|
Status: | new → closed |
The current output from listresources at the BBN SM, the RENCI SM, and the ExoSM reflects the resources available. The issue is addressed; closing ticket.
The request for compute resources that allowed 18 VMs to be allocated used "<sliver_type name="m1.small">".
Re-ran the test scenario with "<sliver_type name="m1.large">", which had the same result: only 9 slivers with 2 nodes each were set up successfully.
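For context, the sliver type is set per node in the request RSpec; an illustrative fragment is below (the client_id is a placeholder, and the component_manager_id is copied from the advertisement quoted in the description):

    <node client_id="vm-0"
          component_manager_id="urn:publicid:IDN+geni-orca.renci.org+authority+bbnvmsite.rdf#bbnvmsite/Domain/vm+orca-sm">
      <sliver_type name="m1.large"/>
    </node>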