Changes between Version 13 and Version 14 of GENIRacksHome/OpenGENIRacks/InstallationGuideJunoOffline


Timestamp: 10/05/15 14:16:21[[BR]]
Author: rrhain@bbn.com

== Introduction ==

This document describes the procedures and context for installing GRAM software. The following aspects are covered individually:
* Configuration Overview
* Hardware Requirements
* Software Requirements
* Network Configuration
* !OpenStack Installation and Configuration
* GRAM Installation and Configuration

== Hardware Requirements ==
The minimum requirements are:
* 1 Control Server
* 1 Compute Server (can be more)
* 1 Switch with at least (number of servers) * 3 ports [for non-dataplane traffic]
* 1 !OpenFlow Switch with at least (number of servers) ports [for dataplane traffic]
* Each server should have at least 4 Ethernet ports
== Software Requirements ==

==== Ports ====
The following ports will be used by GRAM components. Verify that these ports are not already in use (a quick check is sketched below). If a port is taken, change the configuration of the corresponding GRAM component to use a different one.

   * Controller node
      * 8000: GRAM Clearinghouse (unless you are using a different clearinghouse). See [wiki:InstallationGuide#ConfigureandStartClearinghouseontheControlNode this section] to change this port.
      * 8001: GRAM Aggregate Manager. See [wiki:InstallationGuide#ConfigureandStartAggregateManager this section] to change this port.
      * 9000: VMOC Default Controller
      * 7001: VMOC Management. See [wiki:InstallationGuide#Setup this section] to change this port.
      * 6633: VMOC

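One quick way to check for conflicts (a minimal sketch; assumes standard Ubuntu networking tools are available) is to list listening sockets and grep for the GRAM ports:

{{{
# No output means the ports are free
sudo netstat -tlnp | egrep ':(8000|8001|9000|7001|6633) '
# or, with iproute2:
sudo ss -tlnp | egrep ':(8000|8001|9000|7001|6633) '
}}}
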
=== !OpenStack Requirements ===

* This guide was written for Ubuntu 12.04
* All dependencies will be downloaded from the Ubuntu repository.

==== Image requirements ====
   * Currently, nova images must meet the following requirements for GRAM:
      1. Must have the following packages installed (a quick check is sketched below):
          * cloud-utils
          * openssh-server
          * bash
          * apt

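One way to verify a candidate image (a sketch; assumes you can boot the image and log in to the resulting VM):

{{{
# Run inside a VM booted from the candidate image
dpkg -s cloud-utils openssh-server bash apt | grep -E '^(Package|Status)'
}}}
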
== Configuration Overview ==

!OpenStack and GRAM present software layers on top of rack hardware. It is expected that rack nodes are broken into two categories:
* '''Controller Node''': The central management and coordination point for !OpenStack and GRAM operations and services. There is one of these per rack.
* '''Compute Node''': The resource from which VMs and network connections are sliced and allocated on request. There are many of these per rack.

* !OpenStack and GRAM require establishing four distinct networks among the different nodes of the rack:
  * '''Control Network''': The network over which !OpenFlow and GRAM commands flow between control and compute nodes. This network is NOT !OpenFlow controlled and has internal IP addresses for all nodes.
  * '''Data Network''': The allocated network and associated interfaces between created VMs representing the requested compute/network resource topology. This network IS !OpenFlow controlled.
  * '''External Network''': The network connecting the controller node to the external internet. The compute nodes may or may not also have externally visible addresses on this network, for convenience.
  * '''Management Network''': Enables SSH entry into and between created VMs. This network is NOT !OpenFlow controlled and has internal IP addresses for all nodes.

The mapping of the networks to interfaces is arbitrary and can be changed by the installer. For this document we assume the following convention:
 * eth0: Control network
 * eth1: Data network
 * eth2: External network
 * eth3: Management network

The Controller node will have four interfaces, one for each of the above networks. The Compute nodes will have three (Control, Data and Management) with one (External) optional. An illustrative interface configuration is sketched below.

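For illustration only, a controller's /etc/network/interfaces might follow the convention above as shown here. All addresses are placeholders, and the GRAM install scripts perform the actual network configuration from config.json, so treat this purely as a picture of the convention:

{{{
# Control network (internal addressing)
auto eth0
iface eth0 inet static
    address 10.10.8.100
    netmask 255.255.255.0

# Data network (OpenFlow controlled; addressing handled by OpenStack/GRAM)
auto eth1
iface eth1 inet manual
    up ip link set eth1 up

# External network (internet-facing)
auto eth2
iface eth2 inet static
    address 128.89.72.112
    netmask 255.255.255.0
    gateway 128.89.72.1

# Management network (internal addressing)
auto eth3
iface eth3 inet static
    address 10.10.9.100
    netmask 255.255.255.0
}}}
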
More details on the network configuration are provided in wiki:"ArchitectureDescription".

== Network Configuration ==

The first step of !OpenStack/GRAM configuration is establishing the networks described above.

We need to define a range of VLANs for the data network (say, 1000-2000) and separate VLANs for the external, control, and management networks (say, 5, 6, and 7) on the management switch.
The external and control network ports should be configured untagged and the management port should be configured tagged.

The Control, External and Management networks are connected between the rack management switch and Ethernet interfaces on the Controller or Compute nodes.

The Data network is connected between the rack !OpenFlow switch and an Ethernet interface on the Control and Compute nodes.


[[Image(wiki:GENIRacksHome/OpenGENIRacks/InstallationGuideJuno:GRAMSwitchDiag.jpg)]]

=== !OpenFlow Switch for the Data Network ===

The ports on the !OpenFlow switch to which data network interfaces have been connected need to be configured to ''trunk'' the VLANs of the data network. How this is done varies from switch to switch, but typical commands look something like:

{{{
conf t
  vlan <vlanid>
    tagged <ports>
    exit
  exit
write memory
}}}

On the !OpenFlow switch, for each VLAN used in the data network (1000-2000), set the controller to point to the VMOC running on the control node. The command will vary from switch to switch but this is typical:

{{{
conf t
  vlan <vlanid>
    openflow controller tcp:<controller_addr>:6633 enable
    openflow fail-secure on
    exit
  exit
write memory
}}}

For the Dell Force10 switch, the following lines set up the VLAN trunking on the data network and set the default !OpenFlow controller to the VMOC.
{{{
!
interface Vlan 1001 of-instance 1
 no ip address
 tagged TenGigabitEthernet 0/0-2
 no shutdown
!
.........
!
openflow of-instance 1
 controller 1 128.89.72.112  tcp
 flow-map l2 enable
 flow-map l3 enable
 interface-type vlan
 multiple-fwd-table enable
 no shutdown
!
}}}
The above snippet assumes that the controller node, running VMOC, is at 128.89.72.112.

For a sample configuration file for the Dell Force10, see attachment:force10-running:wiki:GENIRacksHome/OpenGENIRacks/InstallationGuideGrizzly

=== Management Switch ===

The ports on the management switch to which management network interfaces have been connected need to be configured to ''trunk'' the VLAN of the management network. How this is done varies from switch to switch, but typical commands look something like:

{{{
conf t
  int gi0/1/3
  switchport mode trunk
  switchport trunk native vlan 1
  switchport trunk allowed vlan add 7
  no shutdown
  end
write memory
}}}

Here is a config file for a Dell Powerconnect 7048: attachment:powerconnect-running:wiki:GENIRacksHome/OpenGENIRacks/InstallationGuideGrizzly. We use VLANs 200, 300 and 2500 for the control plane, management plane and external network respectively.

Refer to the Online Installation guide. This document only describes what is different when you are attempting an installation where you are not connected to the internet.
It is important that NTP is configured on your rack and that DNS or the hosts files are functioning correctly on the nodes.

== GRAM and !OpenStack Installation and Configuration ==

GRAM provides a custom installation script for installing and configuring !OpenStack/Folsom particularly for GRAM requirements, as well as GRAM itself.

1. '''Install a fresh Ubuntu 14.04 image on the control, network and N compute nodes'''

}}}

2. '''Install mysql on the control node'''

{{{
sudo apt-get install mariadb-server python-mysqldb
}}}

- You will be prompted for the password of the mysql admin. Type it in (twice) and remember it: it will be needed in the config.json file as the value of mysql_password.
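
To confirm that the database server is running and that the admin password works, a quick sanity check (the mariadb-server package provides the standard mysql client):

{{{
mysql -u root -p -e "SELECT VERSION();"
# Enter the admin password chosen above; a version string indicates success.
}}}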

Continue with the instructions on the [InstallationGuideJuno] page.

3. '''Installing OS Images: Only on the Control Node'''

At this point, OS images must be placed in !OpenStack Glance (the image repository service) to support creation of virtual machines. Since you are offline, you must be able to provide the images via USB or CD.

3. '''Install !OpenStack and GRAM on the control and compute nodes'''

- Get the DEBIAN files gram_control.deb and gram_compute.deb. These are not available on an apt server currently but can be obtained by request from '''gram-dev@bbn.com'''.

{{{
sudo apt-get install -y ubuntu-cloud-keyring --force-yes
sudo apt-get install gdebi-core
}}}

- Install the gram package (where <type> is ''control'', ''network'', or ''compute'' depending on what machine type is being installed):
{{{
sudo gdebi gram_<control/network/compute>.deb
}}}

- '''Edit /etc/gram/config.json'''. ''NOTE: This is the most critical step of the process. This file specifies your passwords and network configuration so that !OpenStack will be configured properly.'' [See the section "Configuring config.json" below for details on the variables in that file.]

- Placeholder: discuss editing l2_simple_learning.py; different lines are needed for an HP vs. a Dell switch.

- Run the GRAM installation script (where <type> is ''control'', ''network'' or ''compute'' depending on what machine type is being installed):
{{{
sudo /etc/gram/install_gram.sh <control/network/compute>
}}}

- Configure the OS and network. You will lose network connectivity in this step, so it is recommended that the following command be run directly on the machine or under the Linux 'screen' program (see the sketch below).
{{{
sudo /tmp/install/install_operating_system_[control/network/compute].sh
}}}

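If you do run it over SSH, a detachable session helps you survive the loss of connectivity; a minimal sketch with GNU screen (the control node is used as the example):

{{{
screen -S gram-install                                   # start a named session
sudo /tmp/install/install_operating_system_control.sh
# Detach with Ctrl-a d; after logging back in, reattach with:
screen -r gram-install
}}}
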
- Configure everything else. Use a '''root shell'''.
{{{
/tmp/install/install_[control/network/compute].sh
}}}

This last command will do a number of things:
- Pull in all required apt dependencies
- Configure the !OpenStack configuration files based on values set in config.json
- Start all !OpenStack services
- Start all GRAM services

If something goes wrong (you'll see errors in the output stream), the scripts it is running are in /tmp/install/install*.sh (install_compute.sh or install_control.sh). You can usually run the commands by hand and get things to work, or at least see where things went wrong (often a problem in the configuration file).

4. Set up the namespace, only on the control node. Use a '''root shell'''.
     1. Check that sudo ip netns has two entries - the qrouter-* one is the important one.
     1. If the qdhcp-* namespace is not there, restart the DHCP agent: sudo service quantum-dhcp-agent restart
     1. If you still cannot get 2 entries, try restarting all the quantum services:
            * sudo service quantum-server restart
            * sudo service quantum-plugin-openvswitch-agent restart
            * sudo service quantum-dhcp-agent restart
            * sudo service quantum-l3-agent restart


     On the '''control node ONLY'''. In the root shell, type:

{{{
export PYTHONPATH=$PYTHONPATH:/opt/gcf/src:/home/gram/gram/src:/home/gram/gram/src/gram/am/gram
python /home/gram/gram/src/gram/am/gram/set_namespace.py
}}}

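To confirm the namespaces afterwards (a quick check; the IDs will differ per installation):

{{{
sudo ip netns list
# Expect two entries, roughly like:
#   qrouter-<id>
#   qdhcp-<id>
}}}
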
5. '''Installing OS Images: Only on the Control Node'''

At this point, OS images must be placed in !OpenStack Glance (the image repository service) to support creation of virtual machines.

The choice of images is installation-specific, but these commands are provided as a reasonable example of a first image, a 64-bit Ubuntu 12.04 server in qcow2 format (http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img)

{{{
wget http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img
glance image-create --name "ubuntu-12.04" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
ubuntu-12.04-server-cloudimg-amd64-disk1.img
#Make sure your default_OS_image in /etc/gram/config.json is set to
# the name of an existing image
}}}

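You can typically confirm the upload with the Glance client (assuming the admin credentials are in your environment):

{{{
glance image-list
# The new image should be listed with status "active"
}}}
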
Another image, a 64-bit Fedora 19 in qcow2 format (http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2)

{{{
wget http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2
glance image-create --name "fedora-19" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
Fedora-x86_64-19-20130627-sda.qcow2
}}}

Another image, a 64-bit CentOS 6.5 in qcow2 format (http://repos.fedorapeople.org/repos/openstack/guest-images/centos-6.5-20140117.0.x86_64.qcow2)

{{{
wget http://repos.fedorapeople.org/repos/openstack/guest-images/centos-6.5-20140117.0.x86_64.qcow2
glance image-create --name "centos-6.5" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
centos-6.5-20140117.0.x86_64.qcow2
}}}

* In the event these links no longer work, copies of the images have been placed in an internal projects directory in the GPO infrastructure.
6. '''Edit gcf_config'''
If using the GENI Portal as the clearinghouse:
- Copy the cert:
{{{
cp nye-ca-cert.pem /etc/gram/certs/trusted_roots/
sudo service gram-am restart
}}}
- Install user certs and configure omni (instructions: http://trac.gpolab.bbn.com/gcf/wiki/OmniConfigure/Automatic )

If using the local gcf clearinghouse, set up gcf_config:
In ~/.gcf/gcf_config, change hostname to be the fully qualified domain name of the control host for both the clearinghouse portion and the aggregate manager portion (2x), e.g.,
{{{
host=boscontroller.gram.gpolab.bbn.com
}}}
Change the base_name to reflect the service token (the same service token used in config.json). Use the FQDN of the control node for the token.
{{{
base_name=geni//boscontroller.gram.gpolab.bbn.com//gcf
}}}
Generate new credentials:
{{{
cd /opt/gcf/src
./gen-certs.py --exp -u <username>
./gen-certs.py --exp -u <username> --notAll
}}}
This has to be done twice: the first run creates certificates for the aggregate manager and the clearinghouse, and the second creates the user certificates based on the previous certificates.

Generate a public key pair:
{{{
ssh-keygen -t rsa -C "gram@bbn.com"
}}}

Modify ~/.gcf/omni_config to reflect the service token used in config.json (currently using the FQDN as the token):
{{{
authority=geni:boscontroller.gram.gpolab.bbn.com:gcf
}}}
Set the IP addresses of the ch and sa to the external IP address of the controller:
{{{
ch = https://128.89.91.170:8000
sa = https://128.89.91.170:8000
or
ch = https://boscontroller.gram.gpolab.bbn.com:8000
sa = https://boscontroller.gram.gpolab.bbn.com:8000
}}}

=== Configuring ''config.json'' ===

The config.json file (in /etc/gram) is a JSON file that is parsed
by GRAM code at configure/install time as well as at run time.

JSON is a format for expressing dictionaries of name/value
pairs where the values can be constants, lists or dictionaries. There are
no comments, per se, in JSON, but the file as provided has some 'dummy'
variables (e.g. "000001") against which comments can be added.

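For orientation, a heavily trimmed fragment of such a file might look like the following (the field names come from the table below; all values are placeholders, and the shipped /etc/gram/config.json should be taken as the authoritative template):

{{{
{
    "external_interface": "eth2",
    "external_address": "128.89.72.112",
    "external_netmask": "255.255.255.0",
    "control_interface": "eth0",
    "control_address": "10.10.8.100",
    "data_interface": "eth1",
    "management_interface": "eth3",
    "mysql_user": "admin",
    "mysql_password": "<password chosen during the mysql install>",
    "service_token": "boscontroller.gram.gpolab.bbn.com"
}
}}}
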
The following is a list of all the configuration variables that can
be set in the config.json file. For some, defaults are provided in the
code, but it is advised that the values of these parameters be explicitly set.

|| '''parameter''' || '''definition''' ||
|| default_VM_flavor || Name of the default VM flavor (if not provided in the request RSpec), e.g. 'm1.small' ||
|| default_OS_image || Name of the default VM image (if not provided in the request RSpec), e.g. 'ubuntu-12.04' ||
|| default_OS_type || Name of the OS of the default VM image, e.g. 'Linux' ||
|| default_OS_version || Version of the OS of the default VM image, e.g. '12' ||
|| external_interface || Name of the NIC connected to the external network (internet), e.g. eth0. GRAM configures this interface with a static IP address to be specified by the user ||
|| external_address || IP address of the interface connected to the external network ||
|| external_netmask || Netmask associated with the above IP address ||
|| control_interface || Name of the NIC that is to be on the control plane ||
|| control_address || IP address of the control interface. This should be a private address ||
|| data_interface || Name of the NIC that is to be on the data plane ||
|| data_address || IP address of the data interface ||
|| internal_vlans || Set of VLAN tags for internal links and networks (not for stitching); this must match the !OpenFlow switch configuration ||
|| management_interface || Name of the NIC that is to be on the management plane ||
|| management_address || IP address of the management interface ||
|| management_network_name || Quantum will create a network with this name to provide an interface to the VMs through the controller ||
|| management_network_cidr || The CIDR of the quantum management network. It is recommended that this address space be different from the addresses used on the physical interfaces (control, management, data interfaces) of the control and compute nodes ||
|| management_network_vlan || The VLAN used on the management switch to connect the management interfaces of the compute/control nodes ||
|| mysql_user || The name of the mysql user for !OpenStack operations ||
|| mysql_password || The password of the mysql user for !OpenStack operations (the mysql admin password chosen above) ||
|| rabbit_password || The password for the RabbitMQ interface for !OpenStack operations ||
|| nova_password || The password of the nova mysql database for the nova user ||
|| glance_password || The password of the glance mysql database for the glance user ||
|| keystone_password || The password of the keystone mysql database for the keystone user ||
|| quantum_password || The password of the quantum mysql database for the quantum user ||
|| os_tenant_name || The name of the !OpenStack admin tenant (e.g. admin) ||
|| os_username || The name of the !OpenStack admin user (e.g. admin) ||
|| os_password || The password of the !OpenStack admin user ||
|| os_auth_url || The URL for accessing !OpenStack authorization services ||
|| os_region_name || The name of the !OpenStack region namespace (default = RegionOne) ||
|| os_no_cache || Whether to enable/disable caching (default = 1) ||
|| service_token || The unique token identifying this rack, shared by all control and compute nodes of the rack in the same !OpenStack instance (i.e. the ''name'' of the rack; the FQDN of the host is suggested) ||
|| service_endpoint || The URL by which !OpenStack services are identified within keystone ||
|| public_gateway_ip || The address of the default gateway on the external network interface ||
|| public_subnet_cidr || The range of addresses from which quantum may assign addresses on the external network ||
|| public_subnet_start_ip || The first address of the public addresses available on the external network ||
|| public_subnet_end_ip || The last address of the public addresses available on the external network ||
|| metadata_port || The port on which !OpenStack shares meta-data (default 8775) ||
|| backup_directory || The directory in which the GRAM install process places original versions of config files in case of the need to roll back to a previous state ||
|| allocation_expiration_minutes || Time at which allocations expire (in minutes), default = 10 ||
|| lease_expiration_minutes || Time at which provisioned resources expire (in minutes), default = 7 days ||
|| gram_snapshot_directory || Directory of GRAM snapshots, default '/etc/gram/snapshots' ||
|| recover_from_snapshot || Whether GRAM should, on initialization, reinitialize from a particular snapshot (default = None or "", meaning no file provided) ||
|| recover_from_most_recent_snapshot || Whether GRAM should, on initialization, reinitialize from the most recent snapshot (default = True) ||
|| snapshot_maintain_limit || Number of most recent snapshots maintained by GRAM (default = 10) ||
|| subnet_numfile || File where GRAM stores the subnet number of the last allocated subnet, default = '/etc/gram/GRAM-next-subnet.txt'. ''Note: This is temporary until we have namespaces working.'' ||
|| port_table_file || File where GRAM stores the SSH proxy port state table, default = '/etc/gram/gram-ssh-port-table.txt' ||
|| port_table_lock_file || File where the SSH port table lock state is stored, default = '/etc/gram/gram-ssh-port-table.lock' ||
|| ssh_proxy_exe || Location of the GRAM SSH proxy utility, which enables GRAM to create and delete proxies for each user requested, default = '/usr/local/bin/gram_ssh_proxy' ||
|| ssh_proxy_start_port || Start of SSH proxy ports, default = 3000 ||
|| ssh_proxy_end_port || End of SSH proxy ports, default = 3999 ||
|| vmoc_interface_port || Port on which to communicate with the VMOC interface manager, default = 7001 ||
|| vmoc_slice_autoregister || Should GRAM automatically register slices with VMOC? Default = True ||
|| vmoc_set_vlan_on_untagged_packet_out || Should VMOC set the VLAN on an untagged outgoing packet, default = False ||
|| vmoc_set_vlan_on_untagged_flow_mod || Should VMOC set the VLAN on an untagged outgoing flowmod, default = True ||
|| vmoc_accept_clear_all_flows_on_startup || Should VMOC clear all flows on startup, default = True ||
|| control_host_address || The IP address of the controller node's control interface (used to set /etc/hosts on the compute nodes) ||
|| mgmt_ns || DO NOT set this field; it is set during installation and is the name of the namespace containing the Quantum management network. This namespace can be used to access the VMs using their management address ||
|| disk_image_metadata || This provides a dictionary mapping names of images (as registered in Glance) to tags for 'os' (operating system of the image), 'version' (version of the OS of the image) and 'description' (human readable description of the image), e.g. ||

{{{
   {
   "ubuntu-2nic":
       {
        "os": "Linux",
        "version": "12.0",
        "description": "Ubuntu image with 2 NICs configured"
        },
   "cirros-2nic-x86_64":
       {
        "os": "Linux",
        "version": "12.0",
        "description": "Cirros image with 2 NICs configured"
        }
   }
}}}
|| control_host || The name or IP address of the control node host ||
|| compute_hosts || The names/addresses of the compute node hosts, e.g. ||
{{{
{
   "boscompute1": "10.10.8.101",
   "boscompute2": "10.10.8.102",
   "boscompute4": "10.10.8.104"
}
}}}
|| host_file_entries || The names/addresses of machines to be included in /etc/hosts, e.g. ||
{{{
{
   "boscontrol": "128.89.72.112",
   "boscompute1": "128.89.72.113",
   "boscompute2": "128.89.72.114"
}
}}}
||= stitching_info =|| Information necessary for the stitching infrastructure ||
   || aggregate_id || The URN of this AM ||
   || aggregate_url || The URL of this AM ||
   || edge_points || A list of dictionaries, each with the following keys (a sketch follows this table): ||
      || local_switch || URN of the local switch || mandatory ||
      || port || URN of the port on the local switch leading to the remote switch || mandatory ||
      || remote_switch || URN of the remote switch || mandatory ||
      || vlans || VLAN tags configured on this port || mandatory ||
      || traffic_engineering_metric || Configurable metric for traffic engineering || optional, default value = 10 (no units) ||
      || capacity || Capacity of the link between endpoints || optional, default value = 1000000000 (bytes/sec) ||
      || interface_mtu || MTU of the interface || optional, default value = 900 (bytes) ||
      || maximum_reservable_capacity || Maximum reservable capacity between endpoints || optional, default value = 1000000000 (bytes/sec) ||
      || minimum_reservable_capacity || Minimum reservable capacity between endpoints || optional, default value = 1000000 (bytes/sec) ||
      || granularity || Increments for reservations || optional, default value = 1000000 (bytes/sec) ||

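A sketch of how these stitching fields might be laid out in config.json; the URNs, URL, and VLAN range shown are placeholders only, and the exact value formats should be taken from the shipped config.json:

{{{
"stitching_info": {
    "aggregate_id": "urn:publicid:IDN+gram+authority+am",
    "aggregate_url": "https://boscontroller.gram.gpolab.bbn.com:5001",
    "edge_points": [
        {
            "local_switch": "urn:publicid:IDN+gram+node+dataplane-switch",
            "port": "urn:publicid:IDN+gram+interface+dataplane-switch:port0",
            "remote_switch": "urn:publicid:IDN+remote-site+node+core-switch",
            "vlans": "1000-1010",
            "traffic_engineering_metric": 10,
            "capacity": 1000000000
        }
    ]
}
}}}
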
== Installing Operations Monitoring ==
Monitoring can be installed after testing the initial installation of GRAM. Most supporting infrastructure was installed by
the steps above. Some steps, however, still need to be done by hand and the instructions can be found here: [wiki:InstallOpsMon Installing Monitoring on GRAM]

== Testing GRAM installation ==
This simple RSpec can be used to test the GRAM installation: attachment:2n-1l.rspec


{{{
# Restart gram-am and clearinghouse
sudo service gram-am restart
sudo service gram-ch restart

# check omni/gcf config
cd /opt/gcf/src
./omni.py getusercred

# allocate and provision a slice
# I created an rspec in /home/gram called 2n-1l.rspec
./omni.py -V 3 -a http://130.127.39.170:5001 allocate a1 ~/2n-1l.rspec
./omni.py -V 3 -a http://130.127.39.170:5001 provision a1 ~/2n-1l.rspec

# check that the VMs were created
nova list --all-tenants

# check that the VMs booted, using the VM IDs from the above command:
nova console-log <ID>

# look at the 192.x.x.x IP in the console log

# find the namespace for the management plane:
sudo ip netns list
     # look at each qrouter-.... for one that has the external (130) and management (192)
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ifconfig

# using this namespace, ssh into the VM:
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ssh -i ~/.ssh/id_rsa gramuser@192.168.10.4

# verify that the data plane is working by pinging across VMs on the 10.x.x.x addresses
# The above VM has 10.0.21.4 and the other VM I created has 10.0.21.3
ping 10.0.21.3
}}}

== Turn off Password Authentication on the Control and Compute Nodes ==

1.  Generate an rsa ssh key pair on the control node for the gram user, or use the one previously generated if it exists, i.e. ~gram/.ssh/id_rsa and ~gram/.ssh/id_rsa.pub:
    ssh-keygen -t rsa -C "gram@address"
2.  Generate a dsa ssh key pair on the control node for the gram user, or use the one previously generated if it exists, i.e. ~gram/.ssh/id_dsa and ~gram/.ssh/id_dsa.pub. Some components only deal well with dsa keys, so
access from the control node to other resources on the rack should use the dsa key.
    ssh-keygen -t dsa -C "gram@address"
3.  Copy the public key to the compute nodes, i.e. id_dsa.pub
4.  On the control and compute nodes, cat id_rsa.pub >> ~/.ssh/authorized_keys (a consolidated sketch of steps 1-4 follows this list)
5.  As sudo, edit /etc/ssh/sshd_config and ensure that these entries are set this way:
     RSAAuthentication yes[[BR]]
     !PubkeyAuthentication yes[[BR]]
     !PasswordAuthentication no[[BR]]
6.  Restart the ssh service: sudo service ssh restart.
7.  Verify by logging in using the key: ssh -i ~/.ssh/id_dsa gram@address

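A consolidated sketch of steps 1-4 (one way to realize them; the compute node name is a placeholder, and the commands are run as the gram user on the control node):

{{{
# Steps 1-2: generate the key pairs on the control node
ssh-keygen -t rsa -C "gram@address"
ssh-keygen -t dsa -C "gram@address"

# Step 3: copy the dsa public key to each compute node
scp ~/.ssh/id_dsa.pub gram@boscompute1:~/

# Step 4: authorize the keys on the control and compute nodes
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh gram@boscompute1 'cat ~/id_dsa.pub >> ~/.ssh/authorized_keys'
}}}
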
== TODO ==
* Need to make a link to /opt/gcf on the compute nodes

* Make sure that the RabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node (broken sed in OpenVSwitch.py)

* Service token not set in keystone.conf

* Add a step to the installation process that checks the status of the services before we start our installation scripts - check dependencies

* Fix the installation so that gcf_config has the proper entry for host in the aggregate and clearinghouse portions - also need to check where the port number for the AM is actually read from, as it is not in gcf_config

== DEBUGGING NOTES ==
* If it gets stuck at provisioning, you may have lost connectivity with one or more compute nodes. Check that network-manager is removed.

* If IP addresses are not being assigned and the VMs stall on boot: quantum port-delete 192.168.10.2 (the dhcp agent) and restart the quantum-* services

* To create the deb package, check the Software Release Page for instructions: [wiki:SoftwareReleaseNotes Software Release Procedure]

* To create a tar file for offline Ubuntu installation, follow the instructions on this page: [wiki:MakeLocalUbuntuRepo Create Local Ubuntu Repository]

* To create a tar file for offline CentOS installation, follow the instructions on this page: [wiki:MakeLocalCentOSRepo Create Local CentOS Repository]

* To create a tar file for offline python package installation, follow the instructions on this page: [wiki:MakeLocalPythonRepo Create Local Python Repository]

[https://superior.bbn.com/trac/bbn-rack/wiki/DebuggingNotes More Debugging Notes][[BR]]