= GRAM Rack Power Up Sequence =

GRAM rack devices should be powered up in the following order:

1. UPS (if included)
2. Management Switch - Dell PowerConnect 7048
3. !OpenFlow Switch - Dell Force10
4. Control Node
5. Compute Nodes

Follow the above sequence when starting the rack devices; if it is not followed, GRAM services may fail to start.


== Checking GRAM Services ==

Once the devices have powered up, the local administrator should check the status of the services and restore any suspended VMs. This section outlines the procedure to be used:
1. Log in to the Control Node as an admin. For example, type "ssh -Y -i ~/.ssh/id_rsa ''admin_user''@<control addr>"
2. Run the GRAM tool healthcheck.py as directed in the [wiki:GENIRacksHome/GRAMRacks/AdministrationTools Administration Tools] page.
3. If the processes have started properly, you should see output similar to the following, with names specific to your site:
{{{
gram-mon - running
gram-ch - running
checking OpenStack services...
nova-api - running
nova-cert - running
nova-conductor - running
nova-consoleauth - running
nova-novncproxy - running
nova-scheduler - running
quantum-dhcp-agent - running
quantum-metadata-agent - running
quantum-server - running
quantum-l3-agent - running
quantum-plugin-openvswitch-agent - running
glance-registry - running
glance-api - running
keystone - running
WARNING: Management namespace NOT found
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 0
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 1
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 2
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 3
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 4
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 5
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 6
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 7
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 8
Restarting Quantum-L3 service to attempt to recover the namespace - attempt 9
Found management namespace and it matches config
Checking the status of the compute hosts:

Binary Host Zone Status State Updated_At
nova-compute boscompute2 nova enabled :-) 2014-04-15 13:39:58
nova-compute boscompute1 nova enabled :-) 2014-04-15 13:39:58

Checking status of Openstack networking software modules:

+--------------------------------------+--------------------+-----------------------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------------------+-------+----------------+
| 97c56061-2dc7-4760-90c4-77fffea0e10b | Open vSwitch agent | boscompute1.gram.gpolab.bbn.com | :-) | True |
| a6b774d4-1e9d-4db3-b7f7-5ed39324598d | DHCP agent | boscontroller.gram.gpolab.bbn.com | :-) | True |
| c3cfb1ce-1cb0-4651-854c-65a1280a54d5 | L3 agent | boscontroller.gram.gpolab.bbn.com | :-) | True |
| cae16682-40db-488b-8b43-12812c8618a7 | Open vSwitch agent | boscompute2.gram.gpolab.bbn.com | :-) | True |
| d2387a09-1767-4798-a754-196bd2e842cc | Open vSwitch agent | boscontroller.gram.gpolab.bbn.com | :-) | True |
+--------------------------------------+--------------------+-----------------------------------+-------+----------------+

Keystone - pass
Nova - pass
Glance - pass
Quantum - pass
AM is up : Get-Version succeeded at AM
Allocate - success
Provision - success
Delete - success
AM is functioning
}}}
''Note: '''AM is functioning''' signifies that the rack is operational. If the AM is not functioning, the healthcheck script prints warnings and attempts to fix the problems.''
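For unattended boots, the final-line check can be scripted. The sketch below assumes only that healthcheck.py prints '''AM is functioning''' on success, as in the sample output above; the helper name is ours, not part of GRAM:

```shell
#!/bin/sh
# gram_ok: succeed only when healthcheck output (read from stdin) contains
# the success line "AM is functioning". The function name is a local
# convention; the success line is taken from the sample output above.
gram_ok() {
    grep -q "AM is functioning"
}

# Typical use after boot (healthcheck.py invocation is site-specific):
#   healthcheck.py | tee /tmp/gram-healthcheck.log
#   gram_ok < /tmp/gram-healthcheck.log && echo "rack operational"
```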
4. Resume any VMs that were suspended when the rack was powered down; they must be restored to their original state. For each suspended ID, issue "nova resume ''ID''", for example "nova resume b840058e-4511-420c-aaf4-577562b2dce6". Afterwards, "nova list --all-tenants" should show that ID back in the '''ACTIVE''' state:

{{{
+--------------------------------------+------+--------+--------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------+--------+--------------------------------------------+
| b840058e-4511-420c-aaf4-577562b2dce6 | VM-1 | ACTIVE | GRAM-mgmt-net=192.168.10.3; lan0=10.0.37.1 |
| cfa1aa58-e68f-4176-beed-60e9a4257ab3 | VM-2 | ACTIVE | GRAM-mgmt-net=192.168.10.4; lan0=10.0.37.2 |
+--------------------------------------+------+--------+--------------------------------------------+
}}}
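Step 4 can also be done in bulk. The sketch below parses the table that "nova list --all-tenants" prints and picks out every ID whose Status column reads SUSPENDED; the helper name and the column positions (splitting on "|" makes $2 the ID and $4 the Status) are assumptions based on the table layout shown above:

```shell
#!/bin/sh
# suspended_ids: read "nova list" table output on stdin and print the ID
# of every row whose Status column is SUSPENDED. With "|" as the field
# separator, $2 is the ID column and $4 is the Status column; header and
# +---+ separator rows never match.
suspended_ids() {
    awk -F'|' '$4 ~ /SUSPENDED/ { gsub(/ /, "", $2); print $2 }'
}

# Typical use on the Control Node:
#   nova list --all-tenants | suspended_ids | while read -r id; do
#       nova resume "$id"
#   done
```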