
Version 12 (modified by mbrinn@bbn.com, 10 years ago)


OpenGENI Administrative Frequently Asked Questions

This page collects frequently asked questions from administrators of OpenGENI racks and is part of the GENI Rack Administration pages.

User Administration

Q. I am the administrator. What can I access in my rack?

  • Local Administrator: Each OpenGENI rack is initially configured to allow ssh access for one administrator account to all rack devices. If you are this person, then you have access to the control node, compute nodes, and switches in the rack.
  • Additional administrative accounts: Any existing administrative account can be used to create additional administrative accounts. The procedure is detailed in the OpenGENI Rack Administration instructions.

Q. I tried logging into the boss node with my administrative ID and got a "permission denied" error. What is wrong?

Access to the hosts and devices in the OpenGENI rack requires SSH public keys. Your SSH public key must be installed in the authorized_keys file to access the control and compute nodes.

If you want to access the rack switches, you must first log in to the control node and then ssh to the switches as detailed in your rack's control plane details table. For example, see the Clemson rack's details for its switches.
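The two-hop login can be recorded in an ssh_config fragment so a single command reaches a switch. This is a sketch: the host aliases, user name, and switch management IP below are placeholders to be replaced with the values from your rack's control plane details table.

```
# ~/.ssh/config sketch -- all names and addresses here are placeholders
Host opengeni-control
    HostName control.example.edu      # your rack's control node
    User admin                        # your administrative account

Host opengeni-switch
    HostName 192.0.2.10               # switch management IP (control plane table)
    User admin
    ProxyCommand ssh -W %h:%p opengeni-control
```

With this in place, `ssh opengeni-switch` tunnels through the control node automatically.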

Rack Resources

Q. What types of servers are in the racks?

The OpenGENI rack is implemented on Dell hardware; see the parts list.

Q. How can I tell if experiments are running in my OpenGENI rack?

On the control node you can check the list of experiments that are currently active by issuing an OpenStack command as follows:

 $ source /etc/novarc && keystone tenant-list|egrep slice 
 | 8f38b505cbef4c4a82463f250d3f9ad6 |    ch.geni.net:ln-test+slice+OG-EXP-1    |   True  |
 | cc16776ac7b44341a2430546c2907684 | ch.geni.net:ln-test+slice+gr-clem-ig-gpo |   True  |
 | 559624c7d7b043aeb3a3128bb393e9d3 |    ch.geni.net:ln-test+slice+lnexp23     |   True  |

The list returned shows three experiments with active resource allocations; the experiment names are OG-EXP-1, gr-clem-ig-gpo, and lnexp23.
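To pull out only the experiment names, the tenant-list output can be filtered; in the tenant name, the experiment name is the last '+'-separated field. A small sketch, where the helper name is ours and not part of GRAM:

```shell
# Extract slice (experiment) names from `keystone tenant-list` output on stdin.
# Tenant names look like "ch.geni.net:ln-test+slice+OG-EXP-1"; the experiment
# name is the last '+'-separated field.  The helper name is hypothetical.
slice_names() {
  # keep slice rows, take the tenant-name column, keep the last '+'-field,
  # and strip the table padding
  grep slice | awk -F'|' '{print $3}' | awk -F'+' '{print $NF}' | tr -d ' '
}
# Usage on the control node:
#   source /etc/novarc && keystone tenant-list | slice_names
```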

Q. Are there OpenGENI logs?

The OpenGENI rack runs GENI Rack Aggregate Manager (GRAM) software to manage rack compute and network resources. Four GRAM processes provide these functions; their logs are:

  • /var/log/upstart/gram-am.log - The aggregate manager log, which records the AM API requests and the quantum commands used to allocate the requested resources.
  • /var/log/upstart/gram-ctrl.log - The OpenFlow controller running on the OpenGENI rack, which captures requests for OpenFlow resources.
  • /var/log/upstart/gram-opsmon.log - The monitoring log created by the gram-mon daemon; it captures relevant information from the GRAM snapshots in a form that is easier to read.
  • /var/log/upstart/gram-vmoc.log - The VLAN-based Multiplexing OpenFlow Controller (VMOC), a proxy between the switches and experimenter-provided controllers that multiplexes based on VLAN tag and switch DPID/port. A default Layer 2 learning controller is used for slices that do not specify a controller.
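A quick way to triage these logs is to scan all four for recent errors; a sketch, where the helper name and the 200-line window are our choices:

```shell
# Scan the four GRAM logs for recent ERROR entries.  The log directory is a
# parameter so the function can be pointed elsewhere for testing; it defaults
# to the upstart location used on the control node.
scan_gram_logs() {
  logdir="${1:-/var/log/upstart}"
  for name in gram-am gram-ctrl gram-opsmon gram-vmoc; do
    echo "== $name =="
    # look at the last 200 lines of each log; report if nothing matched
    tail -n 200 "$logdir/$name.log" 2>/dev/null | grep -i error \
      || echo "  no recent errors"
  done
}
# Usage:  scan_gram_logs             # control node default location
#         scan_gram_logs /tmp/logs   # some other directory
```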

Network Requirements

Q. Which ports will need to be open for my OpenGENI rack?

The list of required ports for the OpenGENI rack can be found in the OpenGENI Checklist. If your rack is inside a campus firewall, then the following ports must be allowed through your campus network firewall to the entire rack subnet:

  • 22 - SSH
  • 25 - SMTP (outbound connections only, from control node)
  • 80 - HTTP (must also allow outbound connections from control node)
  • 443 - HTTPS (must also allow outbound connections from control node)
  • 843 - Flash Policy Server
  • 3000-3300 - SSH access for experimenter resources
  • 5001 - GRAM AM API V3
  • 5002 - GRAM AM API V2
  • 9000 - VMOC Controller
  • 30000-65535 - General application traffic such as connections to OpenFlow controllers.
  • ICMP - ping
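On a Linux-based campus firewall, the list above can be turned into candidate iptables rules. The sketch below only prints the rules for review rather than applying them; the subnet value and the FORWARD-chain form are assumptions to adapt to your own firewall, and outbound-only ports such as 25 need separate rules.

```shell
# Print candidate iptables FORWARD rules for the OpenGENI port list.
# This is a sketch: it emits the commands for review instead of running them.
emit_rack_rules() {
  subnet="$1"   # rack subnet, e.g. 192.0.2.0/24 (placeholder)
  # single inbound TCP ports
  for port in 22 80 443 843 5001 5002 9000; do
    echo "iptables -A FORWARD -d $subnet -p tcp --dport $port -j ACCEPT"
  done
  # experimenter SSH range and general application traffic range
  echo "iptables -A FORWARD -d $subnet -p tcp --dport 3000:3300 -j ACCEPT"
  echo "iptables -A FORWARD -d $subnet -p tcp --dport 30000:65535 -j ACCEPT"
  # ICMP for ping
  echo "iptables -A FORWARD -d $subnet -p icmp -j ACCEPT"
}
# Usage:  emit_rack_rules 192.0.2.0/24    # review the output, then apply
```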

Q. Which ports will GENI experimenters need on my OpenGENI rack?

Ports used by experimenters depend on the type of experiments run. For a list of current experimenter ports see the Known GENI Ports page.

Monitoring

Q. Can I check the status for the OpenGENI software components?

Yes, the GRAM software that runs on the control node includes a healthcheck.py script which can be used to check the status of all aggregate components. Following is an example:

# source /etc/novarc ; export PYTHONPATH=$PYTHONPATH:/home/gram/gram/src
# python healthcheck.py
Starting healthcheck
Checking GRAM services...
gram-am - running
gram-ctrl - running
gram-vmoc - running
gram-mon - running
gram-ch - running
checking OpenStack services...
nova-api - running
nova-cert - running
nova-conductor - running
nova-consoleauth - running
nova-novncproxy - running
nova-scheduler - running
quantum-dhcp-agent - running
quantum-metadata-agent - running
quantum-server - running
quantum-l3-agent - running
quantum-plugin-openvswitch-agent - running
glance-registry - running
glance-api - running
keystone - running
Found management namespace and it matches config
Checking the status of the compute hosts: 

Binary           Host                                 Zone             Status     State Updated_At
nova-compute     clemson-clemson-compute-1            nova             enabled    :-)   2014-05-28 18:42:41
nova-compute     clemson-clemson-compute-2            nova             enabled    :-)   2014-05-28 18:42:41

Checking status of Openstack networking software modules: 
+--------------------------------------+--------------------+------------------------+-------+----------------+
| id                                   | agent_type         | host                   | alive | admin_state_up |
+--------------------------------------+--------------------+------------------------+-------+----------------+
| 47648f6d-4b89-457a-b9e8-61b48f7a6e49 | DHCP agent         | bbn-cam-ctrl-1         | :-)   | True           |
| 54ad9b0e-a92e-4ab0-bf99-55aabcc86237 | L3 agent           | bbn-cam-ctrl-1.bbn.com | :-)   | True           |
| 74426394-c899-4632-9591-0afd7a004e14 | Open vSwitch agent | bbn-cam-ctrl-1         | :-)   | True           |
| 7e198c11-253b-448f-b6b2-b288446be95e | Open vSwitch agent | bbn-cam-cmpe-1         | :-)   | True           |
| 83c68b64-2223-47d3-b4d7-6b4c94e251e9 | Open vSwitch agent | bbn-cam-cmpe-2         | :-)   | True           |
+--------------------------------------+--------------------+------------------------+-------+----------------+

Keystone - pass
Nova - pass
Glance - pass
Quantum - pass
AM is up : Get-Version succeeded at AM
Allocate - success
Provision - success
Delete - success
AM is functioning
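Since the script ends with "AM is functioning" on success, it lends itself to a cron wrapper that alerts only on failure. A sketch in which ADMIN_EMAIL, the success check, and the use of mail are our assumptions:

```shell
# Sketch of a cron wrapper around healthcheck.py.  ADMIN_EMAIL is a
# placeholder; the "AM is functioning" line is the script's success marker.
ADMIN_EMAIL="admin@example.edu"

healthcheck_ok() {
  # succeed if the healthcheck output (stdin) reports a functioning AM
  grep -q "AM is functioning"
}

run_check() {
  # same environment the manual invocation uses
  . /etc/novarc
  export PYTHONPATH=$PYTHONPATH:/home/gram/gram/src
  python healthcheck.py 2>&1
}

# Usage (e.g. from a cron job):
#   out=$(run_check)
#   printf '%s\n' "$out" | healthcheck_ok \
#     || printf '%s\n' "$out" | mail -s "OpenGENI healthcheck failed" "$ADMIN_EMAIL"
```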

Q. How can I tell what VLAN tags are currently allocated?

Use the following command from the control node. The last section of the output lists all VLAN tags currently in use and the slice that holds each one.

gram@boscontroller:~$ echo dump | nc localhost 7001
VMOCSwitchControllerMap:
   Switches:
   Switches(unindexed):
   Switches (by Controller):
   Controllers (by VLAN) 

VMOC Slice Registry: 
urn:publicid:IDN+ch-mb.gpolab.bbn.com:VEEP+slice+QUAYLE: {'vlan_configurations': [{'vlan': 1002, 'controller_url': 'https://localhost:9000'}], 'slice_id': u'urn:publicid:IDN+ch-mb.gpolab.bbn.com:VEEP+slice+QUAYLE'}
urn:publicid:IDN+ch-ph.gpolab.bbn.com:GRAM+slice+rrh27: {'vlan_configurations': [{'vlan': 1001, 'controller_url': 'https://localhost:9000'}], 'slice_id': u'urn:publicid:IDN+ch-ph.gpolab.bbn.com:GRAM+slice+rrh27'}
https://localhost:9000
    {'vlan_configurations': [{'vlan': 1001, 'controller_url': 'https://localhost:9000'}], 'slice_id': u'urn:publicid:IDN+ch-ph.gpolab.bbn.com:GRAM+slice+rrh27'}
   {'vlan_configurations': [{'vlan': 1002, 'controller_url': 'https://localhost:9000'}], 'slice_id': u'urn:publicid:IDN+ch-mb.gpolab.bbn.com:VEEP+slice+QUAYLE'}
1001: {'vlan_configurations': [{'vlan': 1001, 'controller_url': 'https://localhost:9000'}], 'slice_id': u'urn:publicid:IDN+ch-ph.gpolab.bbn.com:GRAM+slice+rrh27'}
1002: {'vlan_configurations': [{'vlan': 1002, 'controller_url': 'https://localhost:9000'}], 'slice_id': u'urn:publicid:IDN+ch-mb.gpolab.bbn.com:VEEP+slice+QUAYLE'}
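The dump above can be reduced to one line per allocation by parsing the numbered entries at the end; a sketch, where the helper name and output format are our choices:

```shell
# Summarize the VMOC dump: one "vlan N -> slice NAME" line per allocation.
# Parses the trailing "1001: {...'slice_id': u'urn:...+slice+rrh27'}" entries.
vlan_allocations() {
  grep -E '^[0-9]+:' |
    sed -E "s/^([0-9]+):.*\+slice\+([^']+)'.*/vlan \1 -> slice \2/"
}
# Usage on the control node:
#   echo dump | nc localhost 7001 | vlan_allocations
```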