OG-MON-2: GENI Software Configuration Inspection Test
This page captures status for the test case OG-MON-2. For additional information see the Acceptance Test Status - May 2013 page for overall status, or the OpenGENI Acceptance Test Plan for details about the planned evaluation.
Last Update: 2013/05/15
Step   | State | Notes | Tickets
Step 1 | Pass  |       |
Step 2 | Pass  |       |
State Legend        | Description
Pass                | Test completed and met all criteria.
Pass: most criteria | Test completed and met most criteria; exceptions documented.
Fail                | Test completed and failed to meet criteria.
Complete            | Test completed but will require re-execution due to expected changes.
Blocked             | Blocked by ticketed issue(s).
In Progress         | Currently under test.
Test Plan Steps
Step 1. Review resource allocation
A site administrator uses available system data sources (process listings, monitoring output, system logs, etc.) and/or AM administrative interfaces to determine the configuration of OpenGENI resources:
- How many experimental nodes are available for bare metal use, how many are configured as OpenStack containers, and how many are configured as PlanetLab containers.
- What operating system each OpenStack container makes available for experimental VMs.
- How many unbound VLANs are in the rack's available pool.
A list of experiments and experimenters can be obtained on the control node:
lnevers@boscontroller:~$ python /etc/gram/dump_gram_snapshot.py --directory ./output/ --snapshot /etc/gram/snapshots/gram/2013_05_15_09_34_40_0.json
Dumping snapshot /etc/gram/snapshots/gram/2013_05_15_09_34_40_0.json:
Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-1
Sliver urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm35d7da5c-b898-4794-a7c6-d25ea1d339cd
User: urn:publicid:IDN+geni:bos:gcf+user+lnevers
lnevers@boscontroller:~$
To determine a list of current or past experiments, an administrator can review the contents of the /etc/gram/snapshots/gram directory, where the following types of information can be found:
[{"tenant_router_uuid": "831f26f9-3cb3-48d9-8475-ded73a5336f1", "manifest_rspec": " <?xml version=\"1.0\" ?> <rspec type=\"manifest\" xmlns=\"http://www.geni.net/resources/rspec/3\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd\"> <node client_id=\"My-node-name\" component_manager_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm\" exclusive=\"false\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm35d7da5c-b898-4794-a7c6-d25ea1d339cd\"> <sliver_type name=\"m1.small\"> <disk_image name=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+imageubuntu-12.04\" os=\"Linux\" version=\"12\"/> </sliver_type> <host name=\"My-node-name\"/> </node> </rspec> ", "controller_url": null, "user_urn": null, "tenant_admin_pwd": "sliceMaster:-)", "tenant_name": "geni:bos:gcf+slice+OG-EXP-1", "last_subnet_assigned": 2, "slice_urn": "urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-1", "__type__": "Slice", "tenant_router_name": "externalRouter", "slivers": ["urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm35d7da5c-b898-4794-a7c6-d25ea1d339cd"], "request_rspec": " <rspec type=\"request\" \txmlns=\"http://www.geni.net/resources/rspec/3\" \txmlns:flack=\"http://www.protogeni.net/resources/rspec/ext/flack/1\" \txmlns:planetlab=\"http://www.planet-lab.org/resources/sfa/ext/planetlab/1\" \txmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \txsi:schemaLocation=\"http://www.geni.net/resources/rspec/3 \thttp://www.geni.net/resources/rspec/3/request.xsd\"> <node client_id=\"My-node-name\" component_manager_id=\"urn:publicid:geni:bos:gcf+authority+am\" > <sliver_type name=\"m1.small\"> <disk_image description=\"\" name=\"ubuntu-12.04\" os=\"Linux\" version=\"12\"/> </sliver_type> </node> </rspec> ", "tenant_admin_uuid": "7e0c8b1f1b2848abba483e627fad3446", "expiration": 1368639858.0, "next_vm_num": 100, "tenant_uuid": "14574e422f8a4903b78edb1ad10342ab", "tenant_admin_name": "admin-geni:bos:gcf+slice+OG-EXP-1"}, {"user_urn": "urn:publicid:IDN+geni:bos:gcf+user+lnevers", "slice": "14574e422f8a4903b78edb1ad10342ab", "name": "My-node-name", "vm_flavor": "m1.small", "installs": [], "request_rspec": " <rspec type=\"request\" \txmlns=\"http://www.geni.net/resources/rspec/3\" \txmlns:flack=\"http://www.protogeni.net/resources/rspec/ext/flack/1\" \txmlns:planetlab=\"http://www.planet-lab.org/resources/sfa/ext/planetlab/1\" \txmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" \txsi:schemaLocation=\"http://www.geni.net/resources/rspec/3 \thttp://www.geni.net/resources/rspec/3/request.xsd\"> <node client_id=\"My-node-name\" component_manager_id=\"urn:publicid:geni:bos:gcf+authority+am\" > <sliver_type name=\"m1.small\"> <disk_image description=\"\" name=\"ubuntu-12.04\" os=\"Linux\" version=\"12\"/> </sliver_type> </node> </rspec> ", "network_interfaces": [], "__type__": "VirtualMachine", "last_octet": "100", "operational_state": "geni_notready", "os_version": "12", "mgmt_net_addr": "192.168.10.7", "manifest_rspec": " <node client_id=\"My-node-name\" component_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+node+boscompute4\" component_manager_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm\" exclusive=\"false\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm35d7da5c-b898-4794-a7c6-d25ea1d339cd\"> <sliver_type name=\"m1.small\"> <disk_image name=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+imageubuntu-12.04\" os=\"Linux\" 
version=\"12\"/> </sliver_type> <services> <login authentication=\"ssh-keys\" hostname=\"boscontroller\" port=\"3003\" username=\"lnevers\"/> </services> <host name=\"My-node-name\"/> </node>", "executes": [], "expiration": 1368646161.0, "host": "boscompute4", "os_image": "ubuntu-12.04", "os_type": "Linux", "sliver_urn": "urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm35d7da5c-b898-4794-a7c6-d25ea1d339cd", "allocation_state": "geni_provisioned", "uuid": "c41599a1-3316-4687-9db0-934c8d2ea50e"}]
It is also possible to get a list of slivers and how they map to OpenStack containers:
lnevers@boscontroller:/etc/gram$ source /etc/novarc
lnevers@boscontroller:/etc/gram$ nova list --all-tenants
+--------------------------------------+--------------+--------+------------------------------------------------+
| ID                                   | Name         | Status | Networks                                       |
+--------------------------------------+--------------+--------+------------------------------------------------+
| 2be4a562-66ca-4604-9519-44084833ff3d | My-node-name | ACTIVE | GRAM-mgmt-net=192.168.10.8                     |
| 6fbbfc19-f85b-454b-aa82-e04f892b2231 | My-node-name | ACTIVE | GRAM-mgmt-net=192.168.10.4                     |
| c41599a1-3316-4687-9db0-934c8d2ea50e | My-node-name | ACTIVE | GRAM-mgmt-net=192.168.10.7                     |
| 5aeb7be4-2547-47af-9c33-acd32fb28300 | exp1-host1   | ACTIVE | link-0=10.0.36.100; GRAM-mgmt-net=192.168.10.5 |
| de0e444c-cbe0-4c23-acda-4ef4b2be7bca | exp1-host2   | ACTIVE | link-0=10.0.36.101; GRAM-mgmt-net=192.168.10.6 |
| eed27e53-3041-431d-b4aa-da92b3d34d1b | johren-test1 | ACTIVE | GRAM-mgmt-net=192.168.10.3                     |
+--------------------------------------+--------------+--------+------------------------------------------------+
lnevers@boscontroller:/etc/gram$ keystone tenant-list
+----------------------------------+----------------------------------+---------+
| id                               | name                             | enabled |
+----------------------------------+----------------------------------+---------+
| 00a5763513d5466795560ede0a9093ab | demo                             | True    |
| 10d7ab4e360947cfbd23f5214452962d | geni:bos:gcf+slice+OG-EXP-2-exp1 | True    |
| 14574e422f8a4903b78edb1ad10342ab | geni:bos:gcf+slice+OG-EXP-1      | True    |
| 1d778c9ae30141299a70a0ba82c1a079 | geni:bos:gcf+slice+OG-EXP-13     | True    |
| de55883bcddf4e6581aa4874aea08801 | admin                            | True    |
| e46be309ac9f4e21a265b9b9022cbe0e | invisible_to_admin               | True    |
| f4ee28be1b2746ff85826598dd3bfcfd | service                          | True    |
| fc6e82e446ad4c6ca234a098fd877358 | geni:bos:gcf+slice+lngram        | True    |
+----------------------------------+----------------------------------+---------+
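The sliver-to-instance mapping can also be cross-checked automatically. The following sketch is illustrative only; it assumes the credentials from /etc/novarc are already in the environment (as above) and that the nova CLI prints the table layout shown above:

#!/usr/bin/env python
# Illustrative sketch: map GRAM slivers to the OpenStack instances backing them.
# Assumes "source /etc/novarc" has been done in this shell, so the nova CLI
# can authenticate, and that "nova list" prints the table shown above.
import glob
import json
import subprocess

# Parse the "nova list --all-tenants" table: instance UUID -> Networks column.
nova_networks = {}
output = subprocess.check_output(['nova', 'list', '--all-tenants']).decode()
for line in output.splitlines():
    cols = [c.strip() for c in line.split('|')]
    if len(cols) >= 6 and cols[1] not in ('', 'ID'):
        nova_networks[cols[1]] = cols[4]

# Use the newest GRAM snapshot to tie sliver URNs to instance UUIDs.
latest = sorted(glob.glob('/etc/gram/snapshots/gram/*.json'))[-1]
for rec in json.load(open(latest)):
    if rec.get('__type__') == 'VirtualMachine':
        print('%s' % rec.get('sliver_urn'))
        print('  host %s, networks: %s' % (
            rec.get('host'),
            nova_networks.get(rec.get('uuid'), 'not found in nova list')))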
Available operating systems are as follows:
lnevers@arendia:~$ omni.py listresources -a gram2
...
INFO:omni:
<rspec type="advertisement" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/ad.xsd http://www.geni.net/resources/rspec/ext/opstate/1 http://www.geni.net/resources/rspec/ext/opstate/1/ad.xsd">
  <node client_id="VM" component_id="urn:public:geni:gpo:vm+3a619e5c-82d9-4bb4-a62e-e4d7bcc81e38" component_manager_id="urn:publicid:geni:bos:gcf+authority+am" component_name="3a619e5c-82d9-4bb4-a62e-e4d7bcc81e38" exclusive="False">
    <node_type type_name="m1.tiny"/>
    <node_type type_name="m1.small"/>
    <node_type type_name="m1.medium"/>
    <node_type type_name="m1.large"/>
    <node_type type_name="m1.xlarge"/>
    <disk_image description="" name="ubuntu-12.04" os="Linux" version="12"/>
    <sliver_type name="m1.small"/>
    <available now="True"/>
  </node>
</rspec>
...
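The advertised flavors and images can be extracted from the RSpec with a short script. The sketch below is illustrative; it assumes the advertisement returned by omni.py has been saved locally as ad.xml (for example with omni's -o option):

#!/usr/bin/env python
# Illustrative sketch: list node types and disk images from an advertisement
# RSpec. Assumes the RSpec returned by "omni.py listresources -a gram2" has
# been saved locally as ad.xml.
import xml.etree.ElementTree as ET

NS = '{http://www.geni.net/resources/rspec/3}'

root = ET.parse('ad.xml').getroot()
for node in root.findall(NS + 'node'):
    print(node.get('component_id'))
    for node_type in node.findall(NS + 'node_type'):
        print('  flavor: %s' % node_type.get('type_name'))
    for image in node.findall(NS + 'disk_image'):
        print('  image: %s (%s %s)' % (
            image.get('name'), image.get('os'), image.get('version')))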
The nodes configured as compute nodes are listed in /etc/gram/config.json.
Step 2. Review OpenFlow resource configuration
A site administrator uses available system data sources to determine the configuration of OpenFlow resources according to VMOC and OpenGENI.
The VMOC controller service starts the following processes:
lnevers@boscontroller:/opt/pox$ ps -eaf|grep pox
gram     18680     1  0 May10 ?        00:05:55 python2.7 -u -O /opt/pox/pox.py log.level --DEBUG openflow.of_01 --port=9000 vmoc.l2_simple_learning
gram     18694     1  0 May10 ?        00:10:57 python2.7 -u -O /opt/pox/pox.py log.level --DEBUG vmoc.VMOC --management_port=7001 --default_controller_url=https://localhost:9000
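A quick way to confirm that both listeners are up is to try connecting to the ports shown in the ps output (9000 for the OpenFlow listener, 7001 for VMOC management). The check below is an illustrative sketch to be run on the controller node; it is not part of the VMOC software:

#!/usr/bin/env python
# Illustrative sketch: verify the VMOC ports from the ps output above are
# accepting connections. Run on the controller node (boscontroller).
import socket

PORTS = [('OpenFlow listener (openflow.of_01)', 9000),
         ('VMOC management port', 7001)]

for label, port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        sock.connect(('localhost', port))
        print('%s: port %d is accepting connections' % (label, port))
    except socket.error as err:
        print('%s: port %d not reachable (%s)' % (label, port, err))
    finally:
        sock.close()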
Available VLANs are captured in OG-ADM-2 Step 2.