 * The status chart summarizes the state of this test
 * The high-level description from the test plan contains text copied exactly from the public test plan and acceptance criteria pages.
 * The steps contain things I will actually do or verify:
  * Steps may be composed of related substeps where I find this useful for clarity
  * Each step is either a preparatory step (identified by "(prep)") or a verification step (the default):
   * Preparatory steps are just things we have to do. They are not tests of the rack, but are prerequisites for subsequent verification steps.
   * Verification steps are steps in which we actually look at rack output and make sure it is as expected. Each contains a '''Using:''' block, which lists the steps to run the verification, and an '''Expect:''' block, which lists what outcome is expected for the test to pass.
== Test Status ==
|| '''Step''' || '''State''' || '''Date completed''' || '''Open Tickets''' || '''Closed Tickets/Comments''' ||
|| 1 || [[Color(green,Pass)]] || 2012-05-17 || || ||
|| 2 || || || || ready to test now that !OpenFlow and shared VLANs are available ||
|| 3 || || || || ready to test now that FOAM is available ||
|| 4 || || || || ready to test now that FOAM is available ||
[[BR]]
|| '''State Legend''' || '''Description''' ||
|| [[Color(green,Pass)]] || Test completed and met all criteria ||
|| [[Color(#98FB98,Pass: most criteria)]] || Test completed and met most criteria; exceptions documented ||
|| [[Color(red,Fail)]] || Test completed and failed to meet criteria ||
|| [[Color(yellow,Complete)]] || Test completed but will require re-execution due to expected changes ||
|| [[Color(orange,Blocked)]] || Blocked by ticketed issue(s) ||
|| [[Color(#63B8FF,In Progress)]] || Currently under test ||
[[BR]]
== High-level description from test plan ==

This test inspects the state of the GENI AM software in use on the rack.

=== Procedure ===

 * A site administrator uses available system data sources (process listings, monitoring output, system logs, etc.) and/or AM administrative interfaces to determine the configuration of InstaGENI resources:
  * How many experimental nodes are available for bare metal use, how many are configured as OpenVZ containers, and how many are configured as !PlanetLab containers.
  * What operating system each OpenVZ container makes available for experimental VMs.
  * How many unbound VLANs are in the rack's available pool.
  * Whether the ProtoGENI and FOAM AMs trust the pgeni.gpolab.bbn.com slice authority, which will be used for testing.
 * A site administrator uses available system data sources to determine the configuration of !OpenFlow resources according to FOAM, InstaGENI, and !FlowVisor.

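As a concrete illustration of using process listings as a system data source, here is a minimal sketch. The service names matched below (`flowvisor`, `foam`) are assumptions and may not match the actual daemon names on the rack's control node:

{{{#!python
import subprocess

def find_services(patterns=("flowvisor", "foam")):
    """Scan the process table for candidate OpenFlow service processes.

    The names in `patterns` are guesses; adjust them to whatever the
    FOAM and FlowVisor daemons are actually called on this rack.
    """
    ps = subprocess.run(["ps", "-e", "-o", "args="],
                        capture_output=True, text=True).stdout
    return [line.strip() for line in ps.splitlines()
            if any(p in line.lower() for p in patterns)]

print(find_services() or "no FOAM/FlowVisor processes found")
}}}

System logs and monitoring output would be checked analogously; this only covers the process-listing portion of the step above.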
=== Criteria to verify as part of this test ===

 * VI.12. A public document describes all the GENI experimental resources within the rack, and explains what policy options exist for each, including: how to configure rack nodes as bare metal vs. VM server, what options exist for configuring automated approval of compute and network resource requests and how to set them, how to configure rack aggregates to trust additional GENI slice authorities, whether it is possible to trust local users within the rack. (F.7)
 * VI.13. A public document describes the expected state of all the GENI experimental resources in the rack, including how to determine the state of an experimental resource and what state is expected for an unallocated bare metal node. (F.5)
 * VII.11. A site administrator can locate current configuration of !FlowVisor, FOAM, and any other !OpenFlow services, and find logs of recent activity and changes. (D.6.a)
= Test Plan Steps =
 * As a site admin:
  * Browse to [https://boss.utah.geniracks.net/nodecontrol_list.php3?showtype=dl360]
  * Login
  * Enter red dot mode
  * The white dots in the "Up?" column and the lack of Name/PID/EID info show that pc1, pc2, and pc4 are available for experimenter use, and are not in use right now
  * pc3 and pc5 are in use. Of these:
   * pc5 is in a special experiment, `emulab-ops/shared-nodes`, and is running the OS `FEDORA15-OPENVZ-STD`. It is a reasonable guess that this is the OpenVZ shared node, which provides Fedora 15 VMs to experimenters. Browsing to [https://boss.utah.geniracks.net/showexp.php3?pid=emulab-ops&eid=shared-nodes#nsfile] corroborates this via the setting:
{{{
tb-set-node-sharingmode $vhost1 "shared_local"
}}}
   * pc3 is not special. It is an experimental node which is in use.
  * Thus, a site admin can learn that the current allocation is 1 OpenVZ node, 0 !PlanetLab nodes, and 4 experimental nodes.
 * As an experimenter, run:
{{{
omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am listresources -o
}}}
 and view the output. Learn the following things:
  * the node entry for pc3 has a field showing it is unavailable (for an available node such as pc1, this value is "true"):
{{{
<available now="false"/>
}}}
  * the node entry for pc5 contains these lines (10034 is the OSID for FEDORA15-OPENVZ-STD, though I don't know how an experimenter would discover that; "pcshared" seems like a clue):
{{{
<emulab:fd name="pcshared" violatable="true" weight="1.0"/>
<emulab:fd name="OS-10034" weight="0.5"/>
}}}
  * also, the node entry for pc5 advertises 49 slots of type "pcvm", whereas the other nodes advertise 50. That is likely a clue to how many VMs pc5 thinks it can run, since I have one VM allocated there right now (that is, pc5 presumably thinks it can allocate 50 VMs in total):
{{{
<hardware_type name="pcvm">
  <emulab:node_type type_slots="49"/>
</hardware_type>
}}}
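The manual inspection above can also be scripted by parsing the advertisement RSpec. This is a hedged sketch: the XML fragment and namespace URIs below are illustrative stand-ins modeled on the snippets above, not genuine ProtoGENI output:

{{{#!python
import xml.etree.ElementTree as ET

# Illustrative fragment modeled on the advertisement snippets above;
# the namespace URIs are placeholders, not the real ProtoGENI ones.
AD = """\
<rspec xmlns="http://example.net/rspec" xmlns:emulab="http://example.net/emulab">
  <node component_id="pc3">
    <available now="false"/>
  </node>
  <node component_id="pc5">
    <available now="true"/>
    <emulab:fd name="pcshared" violatable="true" weight="1.0"/>
    <hardware_type name="pcvm">
      <emulab:node_type type_slots="49"/>
    </hardware_type>
  </node>
</rspec>
"""

NS = {"r": "http://example.net/rspec", "e": "http://example.net/emulab"}

def summarize(ad_xml):
    """Map node name -> (available, shared, remaining pcvm slots)."""
    out = {}
    for node in ET.fromstring(ad_xml).findall("r:node", NS):
        avail = node.find("r:available", NS).get("now") == "true"
        # An <emulab:fd name="pcshared"> entry marks the OpenVZ shared host.
        shared = any(fd.get("name") == "pcshared"
                     for fd in node.findall("e:fd", NS))
        slots = None
        for hw in node.findall("r:hardware_type", NS):
            if hw.get("name") == "pcvm":
                slots = int(hw.find("e:node_type", NS).get("type_slots"))
        out[node.get("component_id")] = (avail, shared, slots)
    return out

print(summarize(AD))
}}}

To use this against a real advertisement (e.g. the file written by the `omni ... listresources -o` command above), substitute the genuine RSpec namespace URIs for the placeholders.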
On the resulting page select "physical" from the tabular view listing across the top of the panel. This will show all physical systems in the rack, along with the OS running on those nodes.

[[Image(IG-MON-2-physical.jpg)]]

The above shows:
 * pc3 is in use by an experiment (EID) named "jbstmp"
 * pc4 and pc5 are available
 * pc1 and pc2 are part of the "shared-nodes" experiment used to reserve the PCs that provide VMs.

To determine which VMs are in use, while still on the "Experimentation" -> "Node Status" page (in red dot mode), select "virtual" from the tabular view across the top of the panel, which will provide a list of VMs in use in the rack:

[[Image(IG-MON-2-VMs.jpg)]]
