= OpenGENI GRAM Installation Guide =

[[PageOutline]]

== Introduction ==

This document describes the procedures and context for installing GRAM software. The following aspects are covered individually:
  * Configuration Overview
  * Hardware Requirements
  * Software Requirements
  * Network Configuration
  * !OpenStack Installation and Configuration
  * GRAM Installation and Configuration

== Hardware Requirements ==
The minimum requirements are:
  * 1 Control Server
  * 1 Compute Server (can be more)
  * 1 Switch with at least (number of servers)*3 ports  [for non-dataplane traffic]
  * 1 !OpenFlow Switch with at least (number of servers) ports  [for dataplane traffic]
  * Each server should have at least 4 Ethernet ports
  * Each server should have internet connectivity for downloading packages

== Software Requirements ==

=== Packages ===
The following Debian packages are required on the controller node:
   * git

=== Ports ===
The following ports will be used by GRAM components.  Verify that these ports are not already in use; if one is, change the configuration of the corresponding GRAM component to use a different port.  A quick check is sketched after the list.

   * Controller node
      * 8000: GRAM Clearinghouse (unless you are using a different clearinghouse).  See [wiki:GENIRacksHome/OpenGENIRacks/InstallationGuide#ConfigureandStartClearinghouseontheControlNode this section] to change this port.
      * 8001: GRAM Aggregate Manager.  See [wiki:GENIRacksHome/OpenGENIRacks/InstallationGuide#ConfigureandStartAggregateManager this section] to change this port.
      * 9000: VMOC Default Controller
      * 7001: VMOC Management.  See [wiki:GENIRacksHome/OpenGENIRacks/InstallationGuide#Setup this section] to change this port.
      * 6633: VMOC
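
One way to check for conflicts (a sketch; adjust the port list if you change any of the defaults above):

{{{
# any output means the port is already in use by another service
sudo netstat -lntp | egrep ':(8000|8001|9000|7001|6633)[[:space:]]'
}}}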

=== !OpenStack Requirements ===

  * This guide was written for Ubuntu 12.04.
  * All dependencies will be downloaded from the Ubuntu repository.



==== Image requirements ====
   * Currently, nova images must meet the following requirements for GRAM (a sketch for adding the packages to an image follows this list):
      1.  Must have the following packages installed:
          * cloud-utils
          * openssh-server
          * bash
          * apt
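
One simple way to satisfy these requirements is to boot the candidate image once and install the packages inside the guest (a sketch, assuming a Debian/Ubuntu-based image with network access):

{{{
# run inside the booted guest, then snapshot/re-upload the image to Glance
sudo apt-get update
sudo apt-get install -y cloud-utils openssh-server bash apt
}}}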


== Configuration Overview ==

!OpenStack and GRAM present software layers on top of rack hardware. It is expected that rack compute nodes are broken into two categories:
  * '''Controller Node''': The central management and coordination point for !OpenStack and GRAM operations and services. There is one of these per rack.
  * '''Compute Node''': The resource from which VMs and network connections are sliced and allocated on request. There are many of these per rack.

  * !OpenStack and GRAM require establishing four distinct networks among the different nodes of the rack:
    * '''Control Network''': The network over which !OpenFlow and GRAM commands flow between control and compute nodes. This network is NOT !OpenFlow controlled and has internal IP addresses for all nodes.
    * '''Data Network''': The allocated network and associated interfaces between created VMs representing the requested compute/network resource topology. This network IS !OpenFlow controlled.
    * '''External Network''': The network connecting the controller node to the external internet. The compute nodes may or may not also have externally visible addresses on this network, for convenience.
    * '''Management Network''': Enables SSH entry into and between created VMs. This network is NOT !OpenFlow controlled and has internal IP addresses for all nodes.

The mapping of the networks to interfaces is arbitrary and can be changed by the installer. For this document we assume the following convention:
  * eth0: Control network
  * eth1: Data network
  * eth2: External network
  * eth3: Management network

The Controller node will have four interfaces, one for each of the above networks. The Compute nodes will have three (Control, Data and Management), with one (External) optional. A sketch for verifying which physical port corresponds to each interface follows.
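
If you are unsure which physical port carries which ethN interface on a server, one way to check (a sketch; assumes the ethtool package is installed) is to blink the port LED and compare MAC addresses:

{{{
sudo apt-get install -y ethtool
# blink the LED of the physical port behind eth1 for 10 seconds
sudo ethtool -p eth1 10
# list MAC addresses and link state for all interfaces
ifconfig -a
}}}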

More details on the network configuration are provided in wiki:"GENIRacksHome/OpenGENIRacks/ArchitectureDescription".


== Network Configuration ==

The first step of !OpenStack/GRAM configuration is establishing the networks described above.

We need to define a range of VLANs for the data network (say, 1000-2000) and separate VLANs for the external, control, and management networks (say, 5, 6, and 7) on the management switch.
The external and control network ports should be configured untagged and the management port should be configured tagged.

The Control, External and Management networks are connected between the rack management switch and Ethernet interfaces on the Controller or Compute nodes.

The Data network is connected between the rack !OpenFlow switch and an Ethernet interface on the Control and Compute nodes.


[[Image(GRAMSwitchDiag.jpg)]]

=== OpenFlow Switch for the Data Network ===

The ports on the !OpenFlow switch to which data network interfaces have been connected need to be configured to ''trunk'' the VLANs of the data network. How this is done varies from switch to switch, but typical commands look something like:

{{{
conf t
  vlan <vlanid>
    tagged <ports>
    exit
  exit
write memory
}}}


On the !OpenFlow switch, for each VLAN used in the data network (1000-2000), set the controller to point to the VMOC running on the control node. The command will vary from switch to switch, but this is typical:

{{{
conf t
  vlan <vlanid>
    openflow controller tcp:<controller_addr>:6633 enable
    openflow fail-secure on
    exit
  exit
write memory
}}}

For the Dell Force10 switch, the following lines set up the VLAN trunking on the data network and point the default !OpenFlow controller at the VMOC.
{{{
!
interface Vlan 1001 of-instance 1
 no ip address
 tagged TenGigabitEthernet 0/0-2
 no shutdown
!
.........
!
openflow of-instance 1
 controller 1 128.89.72.112  tcp
 flow-map l2 enable
 flow-map l3 enable
 interface-type vlan
 multiple-fwd-table enable
 no shutdown
!
}}}
The above snippet assumes that the controller node, running VMOC, is at 128.89.72.112.

For a sample configuration file for the Dell Force10, see attachment:force10-running

=== Management Switch ===

The ports on the management switch to which management network interfaces have been connected need to be configured to ''trunk'' the VLAN of the management network. How this is done varies from switch to switch, but typical commands look something like:

{{{
conf t
  int gi0/1/3
  switchport mode trunk
  switchport trunk native vlan 1
  switchport trunk allowed vlan add 7
  no shutdown
  end
write memory
}}}

Here is a config file for a Dell Powerconnect 7048: attachment:powerconnect-running. We use VLANs 200, 300 and 2500 for the control plane, management plane and external network respectively.

== GRAM and !OpenStack Installation and Configuration ==

GRAM provides a custom installation script that installs and configures !OpenStack/Folsom specifically for GRAM's requirements, as well as GRAM itself.

1. '''Install fresh Ubuntu 12.04 image on control and N compute nodes'''

  - From among the rack nodes, select one to be the 'control' node and the others to be the compute nodes. The control node should have at least 4 NICs and the compute nodes should have at least 3 NICs.
  - Install an Ubuntu 12.04 image on each selected node. The server edition is preferred.
  - Create a 'gram' user with sudo/admin privileges (a sketch follows this list).
  - If there are additional admin accounts, you must manually install omni for each of these accounts.
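
A minimal sketch of creating the 'gram' user with sudo privileges on Ubuntu 12.04 (run on each node; the account details are up to the installer):

{{{
sudo adduser gram         # prompts for a password and user details
sudo adduser gram sudo    # add the user to the sudo group
}}}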


2. '''Install mysql on the control node'''

{{{
sudo apt-get install mysql-server python-mysqldb
}}}

  - You will be prompted for the password of the mysql admin. Type it in (twice) and remember it: it will be needed in the config.json file as the value of mysql_password.



3. '''Install !OpenStack and GRAM on the control and compute nodes'''

  - Get the Debian files gram_control.deb and gram_compute.deb. These are not currently available on an apt server but can be obtained by request from '''gram-dev@bbn.com'''.
  - Set up the APT repository to read the correct version of the grizzly packages:
{{{
sudo apt-get install -y ubuntu-cloud-keyring
echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> grizzly.list
sudo mv grizzly.list /etc/apt/sources.list.d/
}}}
  - Update the current repository:
{{{
sudo apt-get -y update && sudo apt-get -y dist-upgrade
}}}

  - Get the gdebi package for direct installation of deb files:
{{{
sudo apt-get install gdebi-core
}}}
  - Install the gram package (substitute ''control'' or ''compute'' depending on which machine type is being installed):
{{{
sudo gdebi gram_<control/compute>.deb
}}}

  - '''Edit /etc/gram/config.json'''. ''NOTE: This is the most critical step of the process. It specifies your passwords and network configuration so that !OpenStack will be configured properly.'' [See the section "Configuring config.json" below for details on the variables in that file.]

  - Run the GRAM installation script (substitute ''control'' or ''compute'' depending on which machine type is being installed):
{{{
sudo /etc/gram/install_gram.sh <control/compute>
}}}

  - Configure the OS and network. You will lose network connectivity during this step, so it is recommended that the following command be run directly on the machine console or inside the Linux 'screen' program (a sketch follows the command):
{{{
sudo /tmp/install/install_operating_system_[control/compute].sh
}}}
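
If you cannot work on the console, a sketch of using 'screen' so the script survives the loss of connectivity (shown for a control node):

{{{
sudo apt-get install -y screen
screen -S gram-install
sudo /tmp/install/install_operating_system_control.sh   # or ..._compute.sh
# detach with Ctrl-a d; reattach later with:
screen -r gram-install
}}}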


  - Configure everything else. Use a '''root shell'''.
{{{
/tmp/install/install_[control/compute].sh
}}}

This last command will do a number of things:
  - Install all required apt dependencies
  - Configure the !OpenStack configuration files based on values set in config.json
  - Start all !OpenStack services
  - Start all GRAM services

If something goes wrong (you'll see errors in the output stream), the scripts being run are in /tmp/install/install*.sh (install_compute.sh or install_control.sh). You can usually run the commands by hand and get things to work, or at least see where things went wrong (often a problem in the configuration file).

4. '''Set up the namespace''' - only on the control node. Use a '''root shell'''.
    1. Check that sudo ip netns has two entries - the qrouter-* entry is the important one (see the example output after this list).
    1. If the qdhcp-* namespace is not there, type sudo service quantum-dhcp-agent restart
    1. If you still cannot get 2 entries, try restarting all the quantum services:
      * sudo service quantum-server restart
      * sudo service quantum-plugin-openvswitch-agent restart
      * sudo service quantum-dhcp-agent restart
      * sudo service quantum-l3-agent restart
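
The output should look roughly like the following sketch (the namespace suffixes are placeholder UUIDs; yours will differ):

{{{
$ sudo ip netns
qdhcp-<uuid>
qrouter-<uuid>
}}}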

On the control node ONLY: in the root shell, type

{{{
export PYTHONPATH=$PYTHONPATH:/opt/gcf/src:/home/gram/gram/src:/home/gram/gram/src/gram/am/gram
python /home/gram/gram/src/gram/am/gram/set_namespace.py
}}}

5. '''Edit /etc/hosts''' - Not clear that this is necessary anymore.
Each control/compute node must be associated with its external IP address.
It should look similar to:
{{{
127.0.0.1       localhost
128.89.72.112   bbn-cam-ctrl-1
128.89.72.113   bbn-cam-cmpe-1
128.89.72.114   bbn-cam-cmpe-2
}}}



6. '''Installing OS Images: Only on the Control Node'''

At this point, OS images must be placed in !OpenStack Glance (the image repository service) to support creation of virtual machines.

The choice of images is installation-specific, but these commands are provided as a reasonable example of a first image, a 64-bit Ubuntu 12.04 server in qcow2 format (http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img):

{{{
wget http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img
glance image-create --name "ubuntu-12.04" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
ubuntu-12.04-server-cloudimg-amd64-disk1.img
# Make sure your default_OS_image in /etc/gram/config.json is set to
# the name of an existing image
}}}
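
After loading images, you can confirm that Glance registered them (and that the name given to default_OS_image in /etc/gram/config.json matches one of them):

{{{
glance image-list
}}}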

Another image, a 64-bit Fedora 19 in qcow2 format (http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2):

{{{
wget http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2
glance image-create --name "fedora-19" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
Fedora-x86_64-19-20130627-sda.qcow2
}}}

Another image, a 64-bit CentOS 6.5 in qcow2 format (http://repos.fedorapeople.org/repos/openstack/guest-images/centos-6.5-20140117.0.x86_64.qcow2):

{{{
wget http://repos.fedorapeople.org/repos/openstack/guest-images/centos-6.5-20140117.0.x86_64.qcow2
glance image-create --name "centos-6.5" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
centos-6.5-20140117.0.x86_64.qcow2
}}}

In the event these links no longer work, copies of the images have been put on an internal projects directory in the GPO infrastructure.[[BR]]

7. '''Adding Another !OpenStack OS Flavor'''
Several flavors are created by default by !OpenStack; here we also wanted a larger 'super' flavor. As sudo, type:

{{{
nova flavor-create m1.super 7 32768 160 16
nova flavor-list
}}}

The arguments to nova flavor-create are the flavor name, ID, RAM (MB), disk (GB) and number of vCPUs, so this creates a flavor with 32 GB of RAM, a 160 GB disk and 16 vCPUs; 7 is the flavor ID. Generally, only 5 flavors are installed by default, so using 7 should be safe. Otherwise pick an ID one larger than the number of flavors you have. Check using nova flavor-list.

8. '''Edit gcf_config'''
If using the GENI Portal as the clearinghouse:
  - Copy the cert:
{{{
cp nye-ca-cert.pem /etc/gram/certs/trusted_roots/
sudo service gram-am restart
}}}
  - Install user certs and configure omni (instructions: http://trac.gpolab.bbn.com/gcf/wiki/OmniConfigure/Automatic )


If using the local gcf clearinghouse, set up gcf_config:
In ~/.gcf/gcf_config change hostname to be the fully qualified domain name of the control host for both the clearinghouse portion and the aggregate manager portion (2x), e.g.,
{{{
host=boscontroller.gram.gpolab.bbn.com
}}}
Change the base_name to reflect the service token (the same service token used in config.json).  Use the FQDN of the control node for the token.
{{{
base_name=geni//boscontroller.gram.gpolab.bbn.com//gcf
}}}
Generate new credentials:
{{{
cd /opt/gcf/src
./gen-certs.py --exp -u <username>
./gen-certs.py --exp -u <username> --notAll
}}}
This has to be done twice: the first run creates certificates for the aggregate manager and the clearinghouse; the second creates the username certificates appropriately based on the previous certificates.

Generate a public key pair:
{{{
ssh-keygen -t rsa -C "gram@bbn.com"
}}}

Modify ~/.gcf/omni_config to reflect the service token used in config.json (currently using the FQDN as the token):
{{{
authority=geni:boscontroller.gram.gpolab.bbn.com:gcf
}}}
Set the IP addresses of the ch and sa to the external IP address of the controller:
{{{
ch = https://128.89.91.170:8000
sa = https://128.89.91.170:8000
or
ch = https://boscontroller.gram.gpolab.bbn.com:8000
sa = https://boscontroller.gram.gpolab.bbn.com:8000

}}}


9. '''Enable Flash for Flack'''

Install xinetd:
{{{
apt-get install xinetd
}}}

Add this line to /etc/services:
{{{
flashpolicy     843/tcp    # ProtoGENI flashpolicy service
}}}

Add this file to /etc/xinetd.d as flashpolicy:

{{{
# The flashpolicy service allows connections to ports 443 (HTTPS) and 8443
# (geni-pgch), as well as ports 8001-8002 which may be used by gcf-am
# or related local services.  It is harmless to allow these ports via
# flashpolicy if they are closed in the firewall.
service flashpolicy
{
       disable         = no
       id              = flashpolicy
       protocol        = tcp
       user            = root
       wait            = no
       server          = /bin/echo
       server_args     = <cross-domain-policy> <site-control permitted-cross-domain-policies="master-only"/> <allow-access-from domain="*" to-ports="80,443,5001,5002"/> </cross-domain-policy>
}
}}}

Restart xinetd:
{{{
sudo service xinetd restart
}}}
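
To verify that xinetd is now serving the policy, you can request it by hand (a sketch; the Flash client terminates its request with a NUL byte, though /bin/echo replies regardless):

{{{
printf '<policy-file-request/>\0' | nc -w 3 localhost 843
# expect the <cross-domain-policy> line configured above
}}}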



=== Configuring ''config.json'' ===

The config.json file (in /etc/gram) is a JSON file that is parsed
by GRAM code at configure/install time as well as at run time.

JSON is a format for expressing dictionaries of name/value
pairs where the values can be constants, lists or dictionaries. There are
no comments, per se, in JSON, but the file as provided has some 'dummy'
variables (e.g. "000001") against which comments can be added.

The following is a list of all the configuration variables that can
be set in the config.json JSON file. For some, defaults are provided in the
code, but it is advised that the values of these parameters be explicitly set. A minimal sketch of the network-related portion of the file appears after the table.

|| '''parameter''' || '''definition''' ||
|| default_VM_flavor || Name of the default VM flavor (if not provided in request RSpec), e.g. 'm1.small' ||
|| default_OS_image ||  Name of default VM image (if not provided in request RSpec), e.g. 'ubuntu-12.04' ||
|| default_OS_type ||  Name of OS of default VM image, e.g. 'Linux' ||
|| default_OS_version ||  Version of OS of default VM image, e.g. '12' ||
|| external_interface || Name of the NIC connected to the external network (internet), e.g. eth0. GRAM configures this interface with a static IP address to be specified by the user ||
|| external_address || IP address of the interface connected to the external network ||
|| external_netmask || Netmask associated with the above IP address ||
|| control_interface || Name of the NIC that is to be on the control plane ||
|| control_address || IP address of the control interface. This should be a private address ||
|| data_interface || Name of the NIC that is to be on the data plane ||
|| data_address || IP address of the data interface ||
|| internal_vlans || Set of VLAN tags for internal links and networks (not for stitching); this must match the !OpenFlow switch configuration ||
|| management_interface || Name of the NIC that is to be on the management plane ||
|| management_address || IP address of the management interface ||
|| management_network_name || Quantum will create a network with this name to provide an interface to the VMs through the controller ||
|| management_network_cidr || The CIDR of the quantum management network. It is recommended that this address space be different from the addresses used on the physical interfaces (control, management, data interfaces) of the control and compute nodes ||
|| management_network_vlan || The VLAN used on the management switch to connect the management interfaces of the compute/control nodes ||
|| mysql_user ||  The name of the mysql user for !OpenStack operations ||
|| mysql_password ||  The password of the mysql user for !OpenStack operations (the password entered when installing mysql-server in step 2 above) ||
|| rabbit_password ||  The password for the RabbitMQ interface for !OpenStack operations ||
|| nova_password ||  The password of the nova mysql database for the nova user ||
|| glance_password ||  The password of the glance mysql database for the glance user ||
|| keystone_password ||  The password of the keystone mysql database for the keystone user ||
|| quantum_password ||  The password of the quantum mysql database for the quantum user ||
|| os_tenant_name ||  The name of the !OpenStack admin tenant (e.g. admin) ||
|| os_username ||  The name of the !OpenStack admin user (e.g. admin) ||
|| os_password ||  The password of the !OpenStack admin user ||
|| os_auth_url ||  The URL for accessing !OpenStack authorization services ||
|| os_region_name ||  The name of the !OpenStack region namespace (default = !RegionOne) ||
|| os_no_cache ||  Whether to enable/disable caching (default = 1) ||
|| service_token ||  The unique token for identifying this rack, shared by all control and compute nodes of the rack in the same !OpenStack instance (i.e. the ''name'' of the rack; the FQDN of the control host is suggested) ||
|| service_endpoint ||  The URL by which !OpenStack services are identified within keystone ||
|| public_gateway_ip ||  The address of the default gateway on the external network interface ||
|| public_subnet_cidr ||  The range of addresses from which quantum may assign addresses on the external network ||
|| public_subnet_start_ip ||  The first address of the public addresses available on the external network ||
|| public_subnet_end_ip ||  The last address of the public addresses available on the external network ||
|| metadata_port ||  The port on which !OpenStack shares meta-data (default 8775) ||
|| backup_directory ||  The directory in which the GRAM install process places original versions of config files in case of the need to roll back to a previous state ||
|| allocation_expiration_minutes ||  Time at which allocations expire (in minutes), default = 10 ||
|| lease_expiration_minutes ||  Time at which provisioned resources expire (in minutes), default = 7 days ||
|| gram_snapshot_directory ||  Directory of GRAM snapshots, default '/etc/gram/snapshots' ||
|| recover_from_snapshot ||  Whether GRAM should, on initialization, reinitialize from a particular snapshot (default = None or "" meaning no file provided) ||
|| recover_from_most_recent_snapshot ||  Whether GRAM should, on initialization, reinitialize from the most recent snapshot (default = True) ||
|| snapshot_maintain_limit ||  Number of most recent snapshots maintained by GRAM (default = 10) ||
|| subnet_numfile ||  File where GRAM stores the subnet number of the last allocated subnet, default = '/etc/gram/GRAM-next-subnet.txt'. ''Note: This is temporary until we have namespaces working.'' ||
|| port_table_file ||  File where GRAM stores the SSH proxy port state table, default = '/etc/gram/gram-ssh-port-table.txt' ||
|| port_table_lock_file ||  File where SSH port table lock state is stored, default = '/etc/gram/gram-ssh-port-table.lock' ||
|| ssh_proxy_exe ||  Location of the GRAM SSH proxy utility, which enables GRAM to create and delete proxies for each user requested, default = '/usr/local/bin/gram_ssh_proxy' ||
|| ssh_proxy_start_port ||  Start of SSH proxy ports, default = 3000 ||
|| ssh_proxy_end_port ||  End of SSH proxy ports, default = 3999 ||
|| vmoc_interface_port ||  Port on which to communicate with the VMOC interface manager, default = 7001 ||
|| vmoc_slice_autoregister ||  Should GRAM automatically register slices with VMOC? Default = True ||
|| vmoc_set_vlan_on_untagged_packet_out ||  Should VMOC set the VLAN on an untagged outgoing packet, default = False ||
|| vmoc_set_vlan_on_untagged_flow_mod ||  Should VMOC set the VLAN on an untagged outgoing flowmod, default = True ||
|| vmoc_accept_clear_all_flows_on_startup ||  Should VMOC clear all flows on startup, default = True ||
|| control_host_address || The IP address of the controller node's control interface (used to set /etc/hosts on the compute nodes) ||
|| mgmt_ns || DO NOT set this field; it will be set during installation and is the name of the namespace containing the Quantum management network. This namespace can be used to access the VMs using their management address ||
|| disk_image_metadata ||  This provides a dictionary mapping names of images (as registered in Glance) to tags for 'os' (operating system of image), 'version' (version of OS of image) and 'description' (human-readable description of image), e.g. ||

{{{
   {
   "ubuntu-2nic":
       {
        "os": "Linux",
        "version": "12.0",
        "description":"Ubuntu image with 2 NICs configured"
        },
   "cirros-2nic-x86_64":
       {
        "os": "Linux",
        "version": "12.0",
        "description":"Cirros image with 2 NICs configured"
        }
   }
}}}
|| control_host ||  The name or IP address of the control node host ||
|| compute_hosts ||  The names/addresses of the compute node hosts, e.g. ||
{{{
{
   "boscompute1": "10.10.8.101",
   "boscompute2": "10.10.8.102",
   "boscompute4": "10.10.8.104"
}
}}}
|| host_file_entries ||  The names/addresses of machines to be included in /etc/hosts, e.g. ||
{{{
{
   "boscontrol": "128.89.72.112",
   "boscompute1": "128.89.72.113",
   "boscompute2": "128.89.72.114"
}
}}}
|| stitching_info ||  Information necessary for the stitching infrastructure ||
   || aggregate_id  || The URN of this AM ||
   || aggregate_url || The URL of this AM ||
   || edge_points   || A list of dictionaries for which: ||
      || local_switch               || URN of the local switch                               || mandatory ||
      || port                       || URN of the port on the local switch leading to the remote switch || mandatory ||
      || remote_switch              || URN of the remote switch                              || mandatory ||
      || vlans                      || VLAN tags configured on this port                     || mandatory ||
      || traffic_engineering_metric || Configurable metric for traffic engineering           || optional, default value = 10 (no units) ||
      || capacity                   || Capacity of the link between endpoints                || optional, default value = 1000000000 (bytes/sec) ||
      || interface_mtu              || MTU of the interface                                  || optional, default value = 900 (bytes) ||
      || maximum_reservable_capacity || Maximum reservable capacity between endpoints        || optional, default value = 1000000000 (bytes/sec) ||
      || minimum_reservable_capacity || Minimum reservable capacity between endpoints        || optional, default value = 1000000 (bytes/sec) ||
      || granularity                || Increments for reservations                           || optional, default value = 1000000 (bytes/sec) ||
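
For orientation, here is a minimal, hypothetical sketch of the network-related portion of config.json. The field names come from the table above, but the interface names, addresses and VLAN numbers are placeholders, and the exact format of each value (for example, whether internal_vlans is a string or a list) should follow the dummy entries in the config.json shipped with the package:

{{{
{
  "external_interface": "eth2",
  "external_address": "128.89.72.112",
  "external_netmask": "255.255.255.0",
  "control_interface": "eth0",
  "control_address": "10.10.8.100",
  "data_interface": "eth1",
  "data_address": "10.10.9.100",
  "internal_vlans": "1000-2000",
  "management_interface": "eth3",
  "management_address": "10.10.10.100",
  "management_network_vlan": "7"
}
}}}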

== Installing Operations Monitoring ==
Monitoring can be installed after testing the initial installation of GRAM. Most supporting infrastructure was installed by
the steps above. Some steps, however, still need to be done by hand; the instructions can be found here: [wiki:InstallOpsMon Installing Monitoring on GRAM]


== Testing GRAM installation ==
This simple rspec can be used to test the GRAM installation - attachment:2n-1l.rspec


{{{
# Restart gram-am and clearinghouse
sudo service gram-am restart
sudo service gram-ch restart

# check omni/gcf config
cd /opt/gcf/src
./omni.py getusercred

# allocate and provision a slice
# I created an rspec in /home/gram called 2n-1l.rspec
./omni.py -V 3 -a http://130.127.39.170:5001 allocate a1 ~/2n-1l.rspec
./omni.py -V 3 -a http://130.127.39.170:5001 provision a1 ~/2n-1l.rspec

# check that the VMs were created
nova list --all-tenants

# check that the VMs booted, using the VM IDs from the above command:
nova console-log <ID>

# look at the 192.x.x.x IP in the console log

# find the namespace for the management plane:
sudo ip netns list
     # look at each qrouter-.... for one that has the external (130.x.x.x) and management (192.x.x.x) addresses
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ifconfig

# using this namespace, ssh into the VM:
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ssh -i ~/.ssh/id_rsa gramuser@192.168.10.4

# verify that the data plane is working by pinging across VMs on the 10.x.x.x addresses
# The above VM has 10.0.21.4 and the other VM I created has 10.0.21.3
ping 10.0.21.3
}}}

== Turn off Password Authentication on the Control and Compute Nodes ==

1.  Generate an RSA ssh key pair on the control node for the gram user, or use the one previously generated if it exists (i.e. ~gram/.ssh/id_rsa and ~gram/.ssh/id_rsa.pub):
    ssh-keygen -t rsa -C "gram@address"
2.  Generate a DSA ssh key pair on the control node for the gram user, or use the one previously generated if it exists (i.e. ~gram/.ssh/id_dsa and ~gram/.ssh/id_dsa.pub).  Some components only deal well with DSA keys,
so access from the control node to other resources on the rack should use the DSA key.
    ssh-keygen -t dsa -C "gram@address"
3.  Copy the public keys (id_rsa.pub and id_dsa.pub) to the compute nodes.
4.  On the control and compute nodes, append them to the authorized keys file, e.g. cat id_rsa.pub >> ~/.ssh/authorized_keys
5.  As sudo, edit /etc/ssh/sshd_config and ensure that these entries are set this way:
     RSAAuthentication yes[[BR]]
     !PubkeyAuthentication yes[[BR]]
     !PasswordAuthentication no[[BR]]
6.  Restart the ssh service: sudo service ssh restart.
7.  Verify by logging in using the key: ssh -i ~/.ssh/id_dsa gram@address (a consolidated sketch of these steps follows).
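
A consolidated sketch of the steps above, run as the gram user on the control node (compute1 is a placeholder for each compute node):

{{{
# steps 1-2: generate the key pairs if they do not already exist
ssh-keygen -t rsa -C "gram@address"
ssh-keygen -t dsa -C "gram@address"

# steps 3-4: authorize the keys locally and on each compute node
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_dsa.pub gram@compute1

# steps 5-6: on every node, edit /etc/ssh/sshd_config as described above, then
sudo service ssh restart

# step 7: verify key-based login
ssh -i ~/.ssh/id_dsa gram@compute1
}}}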

== TODO ==
  * Need to make a link to /opt/gcf on the compute nodes

  * Make sure that the rabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node: (broken sed in OpenVSwitch.py)

  * Service token not set in keystone.conf

  * Add a step in the installation process that checks the status of the services before we start our installation scripts - check dependencies

  * Fix installation so that gcf_config has the proper entry for host in the aggregate and clearinghouse portions - also need to check where the port number for the AM is actually read from, as it is not gcf_config


== DEBUGGING NOTES ==
  * If it gets stuck at provisioning, you may have lost connectivity with one or more compute nodes. Check that network-manager is removed.

  * If IP addresses are not being assigned and the VMs stall on boot: quantum port-delete 192.168.10.2 (the dhcp agent) and restart the quantum-* services.

  * To create the deb package, check the Software Release Page for instructions: [wiki:GENIRacksHome/OpenGENIRacks/SoftwareReleaseNotes Software Release Procedure]


[wiki:GENIRacksHome/OpenGENIRacks/DebuggingNotes More Debugging Notes][[BR]]