= GRAM Offline Installation Guide =


[[PageOutline(2-100,Table of Contents,inline,unnumbered)]]

== Introduction ==

This document describes the procedures and context for installing GRAM software. The following aspects are covered individually:
* Configuration Overview
* Hardware Requirements
* Software Requirements
* Network Configuration
* !OpenStack Installation and Configuration
* GRAM Installation and Configuration
== Hardware Requirements ==
The minimum requirements are:
* 1 Control Server
* 1 Compute Server (can be more)
* 1 Switch with at least (number of servers)*3 ports [for non-dataplane traffic]
* 1 !OpenFlow Switch with at least (number of servers) ports [for dataplane traffic]
* Each server should have at least 4 Ethernet ports

== Software Requirements ==

=== Ports ===
The following ports will be used by GRAM components. Verify that these ports are not already in use; if any are, change the configuration of the corresponding GRAM component to use a different port.

   * Controller node
      * 8000: GRAM Clearinghouse (unless you are using a different clearinghouse). See [wiki:InstallationGuide#ConfigureandStartClearinghouseontheControlNode this section] to change this port.
      * 8001: GRAM Aggregate Manager. See [wiki:InstallationGuide#ConfigureandStartAggregateManager this section] to change this port.
      * 9000: VMOC Default Controller
      * 7001: VMOC Management. See [wiki:InstallationGuide#Setup this section] to change this port.
      * 6633: VMOC
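
To check whether any of these ports are already taken on the controller node, something like the following will work (a minimal sketch; the port list is taken from above):

{{{
# List listening TCP sockets and flag any GRAM ports already in use
sudo netstat -tlnp | egrep ':(8000|8001|9000|7001|6633) '
}}}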

=== !OpenStack Requirements ===

* This guide was written for Ubuntu 12.04
* All dependencies will be downloaded from the Ubuntu repository.

==== Image requirements ====
   * Currently, nova images must meet the following requirements for GRAM:
      1. Must have the following packages installed:
          * cloud-utils
          * openssh-server
          * bash
          * apt
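
To check whether a candidate image qualifies, boot it once and verify the packages are present (a minimal sketch run inside the VM):

{{{
# dpkg -s exits non-zero if any listed package is not installed
dpkg -s cloud-utils openssh-server bash apt > /dev/null && echo "image OK"
}}}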


== Configuration Overview ==

!OpenStack and GRAM present software layers on top of rack hardware. It is expected that rack compute nodes are broken into two categories:
* '''Controller Node''': The central management and coordination point for !OpenStack and GRAM operations and services. There is one of these per rack.
* '''Compute Node''': The resource from which VMs and network connections are sliced and allocated on request. There are many of these per rack.

!OpenStack and GRAM require establishing four distinct networks among the different nodes of the rack:
  * '''Control Network''': The network over which !OpenFlow and GRAM commands flow between control and compute nodes. This network is NOT !OpenFlow controlled and has internal IP addresses for all nodes.
  * '''Data Network''': The allocated network and associated interfaces between created VMs representing the requested compute/network resource topology. This network IS !OpenFlow controlled.
  * '''External Network''': The network connecting the controller node to the external internet. The compute nodes may or may not also have externally visible addresses on this network, for convenience.
  * '''Management Network''': Enables SSH entry into and between created VMs. This network is NOT !OpenFlow controlled and has internal IP addresses for all nodes.

The mapping of the networks to interfaces is arbitrary and can be changed by the installer. For this document we assume the following convention:
 * eth0: Control network
 * eth1: Data network
 * eth2: External network
 * eth3: Management network

The Controller node will have four interfaces, one for each of the above networks. The Compute nodes will have three (Control, Data and Management), with the fourth (External) optional.
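
For illustration, with this convention a compute node's resulting network configuration might look something like the sketch below (the install scripts described later generate the real configuration from config.json; all addresses here are placeholders):

{{{
# /etc/network/interfaces (illustrative sketch only)
auto eth0
iface eth0 inet static      # Control network (internal addresses)
    address 10.10.8.101
    netmask 255.255.255.0

auto eth1
iface eth1 inet static      # Data network (OpenFlow controlled)
    address 10.10.10.101
    netmask 255.255.255.0

auto eth2
iface eth2 inet static      # External network
    address 128.89.72.113
    netmask 255.255.255.0
    gateway 128.89.72.1     # placeholder gateway

auto eth3
iface eth3 inet static      # Management network (internal addresses)
    address 10.10.9.101
    netmask 255.255.255.0
}}}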

More details on the network configuration are provided in wiki:"ArchitectureDescription".


== Network Configuration ==

The first step of !OpenStack/GRAM configuration is establishing the networks described above.

We need to define a range of VLANs for the data network (say, 1000-2000) and separate VLANs for the external, control, and management networks (say, 5, 6, and 7) on the management switch.
The external and control network ports should be configured untagged, and the management port should be configured tagged.

The Control, External and Management networks are connected between the rack management switch and Ethernet interfaces on the Controller or Compute nodes.

The Data network is connected between the rack !OpenFlow switch and an Ethernet interface on the Control and Compute nodes.


[[Image(GRAMSwitchDiag.jpg)]]

=== !OpenFlow Switch for the Data Network ===

The ports on the !OpenFlow switch to which data network interfaces have been connected need to be configured to ''trunk'' the VLANs of the data network. How this is done varies from switch to switch, but typical commands look something like:

{{{
conf t
  vlan <vlanid>
    tagged <ports>
    exit
  exit
write memory
}}}


On the !OpenFlow switch, for each VLAN used in the data network (1000-2000), set the controller to point to the VMOC running on the control node. The command will vary from switch to switch, but this is typical:

{{{
conf t
  vlan <vlanid>
    openflow controller tcp:<controller_addr>:6633 enable
    openflow fail-secure on
    exit
  exit
write memory
}}}

For the Dell Force10 switch, the following lines set up the VLAN trunking in the data network and set up the default !OpenFlow controller pointing at the VMOC:
{{{
!
interface Vlan 1001 of-instance 1
 no ip address
 tagged TenGigabitEthernet 0/0-2
 no shutdown
!
.........
!
openflow of-instance 1
 controller 1 128.89.72.112  tcp
 flow-map l2 enable
 flow-map l3 enable
 interface-type vlan
 multiple-fwd-table enable
 no shutdown
!
}}}
The above snippet assumes that the controller node, running VMOC, is at 128.89.72.112.

For a sample configuration file for the Dell Force10, see attachment:force10-running

=== Management Switch ===

The ports on the management switch to which management network interfaces have been connected need to be configured to ''trunk'' the VLAN of the management network. How this is done varies from switch to switch, but typical commands look something like:

{{{
conf t
  int gi0/1/3
  switchport mode trunk
  switchport trunk native vlan 1
  switchport trunk allowed vlan add 7
  no shutdown
  end
write memory
}}}

Here is a config file for a Dell Powerconnect 7048: attachment:powerconnect-running. We use VLANs 200, 300 and 2500 for the control plane, management plane and external network, respectively.

== GRAM and !OpenStack Installation and Configuration ==

GRAM provides a custom installation script that installs and configures !OpenStack/Folsom specifically for GRAM requirements, as well as GRAM itself.

1. '''Install a fresh Ubuntu 12.04 image on the control and N compute nodes'''

- From among the rack nodes, select one to be the 'control' node and the others to be compute nodes. The control node should have at least 4 NICs and the compute nodes should have at least 3 NICs.
- Install an Ubuntu 12.04 image on each selected node. The server edition is preferred.
- Create a 'gram' user with sudo/admin privileges.
- If there are additional admin accounts, you must manually install omni for each of these accounts.

1a. '''Set up a local repository with the packages and dependencies'''
- Unpack gram_pkgs.tar.gz:
{{{
cd /home/gram/
tar -zxvf gram_pkgs.tar.gz
}}}

- Add the unpacked gram_pkgs directory to the list of repositories by adding the line
{{{ deb file:/home/gram/gram_pkgs ./ }}}
to your /etc/apt/sources.list

- Run the command
{{{ sudo apt-get update }}}
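
To confirm that apt is now resolving packages from the local repository, inspect the policy for one of the bundled packages (a sketch; mysql-server is just an example):

{{{
# The file:/home/gram/gram_pkgs source should appear among the candidates
apt-cache policy mysql-server
}}}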

2. '''Install mysql on the control node'''

{{{
sudo apt-get install mysql-server python-mysqldb
}}}

- You will be prompted for the password of the mysql admin. Type it in (twice) and remember it: it will be needed in the config.json file as the value of mysql_password.
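
You can verify the installation before proceeding (a sketch; enter the admin password you just set when prompted):

{{{
# Should connect and list the default databases
mysql -u root -p -e "SHOW DATABASES;"
}}}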


3. '''Install !OpenStack and GRAM on the control and compute nodes'''

- Get the Debian package files gram_control.deb and gram_compute.deb. These are not available on an apt server currently but can be obtained by request from '''gram-dev@bbn.com'''.

{{{
sudo apt-get install -y ubuntu-cloud-keyring
sudo apt-get install gdebi-core
}}}

- Install the gram package (where <type> is ''control'' or ''compute'' depending on which machine type is being installed):
{{{
sudo gdebi gram_<type>.deb
}}}

- '''Edit /etc/gram/config.json'''. ''NOTE: This is the most critical step of the process. It specifies your passwords and network configuration so that !OpenStack will be configured properly.'' [See the section "Configuring config.json" below for details on the variables in that file.]
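
For orientation, an excerpt of the kinds of edits involved is sketched below (illustrative only; the field subset, value formats and all addresses/passwords are placeholders - see the section "Configuring config.json" for the authoritative list of variables):

{{{
{
  "external_interface": "eth2",
  "external_address": "128.89.72.112",
  "external_netmask": "255.255.255.0",
  "control_interface": "eth0",
  "control_address": "10.10.8.100",
  "data_interface": "eth1",
  "management_interface": "eth3",
  "mysql_user": "gram",
  "mysql_password": "<password chosen in step 2>",
  "service_token": "boscontroller.gram.gpolab.bbn.com"
}
}}}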

- Run the GRAM installation script (where <type> is ''control'' or ''compute'' depending on which machine type is being installed):
{{{
sudo /etc/gram/install_gram.sh <type>
}}}

- Configure the OS and network. You will lose network connectivity during this step, so it is recommended that the following command be run directly on the machine or inside the Linux 'screen' program:
{{{
sudo /tmp/install/install_operating_system_<type>.sh
}}}
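
If you use 'screen', the install session survives the loss of connectivity and can be reattached afterwards (a sketch):

{{{
screen -S gram-install            # start a named session, then run the installer in it
sudo /tmp/install/install_operating_system_<type>.sh
# if the connection drops, log back in and reattach:
screen -r gram-install
}}}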


- Configure everything else. Use a '''root shell''':
{{{
/tmp/install/install_<type>.sh
}}}

This last command will do a number of things:
- Install all required apt dependencies
- Configure the !OpenStack configuration files based on values set in config.json
- Start all !OpenStack services
- Start all GRAM services

If something goes wrong (you'll see errors in the output stream), note that the scripts being run are in /tmp/install/install*.sh (install_compute.sh or install_control.sh). You can usually run the commands by hand and get things to work, or at least see where things went wrong (often a problem in the configuration file).

4. '''Set up the namespace (only on the control node)'''. Use a '''root shell'''.
1. Check that sudo ip netns has two entries - the qrouter-* entry is the important one (see the sample output after this list).
1. If the qdhcp-* namespace is not there, type sudo service quantum-dhcp-agent restart
1. If you still cannot get 2 entries, try restarting all the quantum services:
* sudo service quantum-server restart
* sudo service quantum-plugin-openvswitch-agent restart
* sudo service quantum-dhcp-agent restart
* sudo service quantum-l3-agent restart
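
The expected output looks something like this (the qdhcp UUID is illustrative):

{{{
$ sudo ip netns
qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125
qdhcp-1f3c4de7-6c32-4b55-8d21-9a0d7b3a2e11
}}}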

And then in the root shell, type:

{{{
export PYTHONPATH=$PYTHONPATH:/opt/gcf/src:/home/gram/gram/src:/home/gram/gram/src/gram/am/gram
python /home/gram/gram/src/gram/am/gram/set_namespace.py
}}}

5. '''Edit /etc/hosts''' - It is not clear that this is still necessary.
Each control/compute node must be associated with its external IP address.
The file should look similar to:
{{{
127.0.0.1       localhost
128.89.72.112   bbn-cam-ctrl-1
128.89.72.113   bbn-cam-cmpe-1
128.89.72.114   bbn-cam-cmpe-2
}}}


6. '''Installing OS Images: Only on the Control Node'''

At this point, OS images must be placed in !OpenStack Glance (the image repository service) to support creation of virtual machines.

The choice of images is installation-specific, but these commands are provided as a reasonable example of a first image, a 64-bit Ubuntu 12.04 server in qcow2 format (http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img):

{{{
wget http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img
glance image-create --name "ubuntu-12.04" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
ubuntu-12.04-server-cloudimg-amd64-disk1.img
# Make sure your default_OS_image in /etc/gram/config.json is set to
# the name of an existing image
}}}
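
After loading an image, you can confirm that Glance registered it (a sketch; assumes the !OpenStack admin credentials are set in your environment):

{{{
# Each loaded image should be listed with status 'active'
glance image-list
}}}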

Another image, a 64-bit Fedora 19 in qcow2 format (http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2):

{{{
wget http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2
glance image-create --name "fedora-19" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
Fedora-x86_64-19-20130627-sda.qcow2
}}}

Another image, a 64-bit CentOS 6.5 in qcow2 format (http://repos.fedorapeople.org/repos/openstack/guest-images/centos-6.5-20140117.0.x86_64.qcow2):

{{{
wget http://repos.fedorapeople.org/repos/openstack/guest-images/centos-6.5-20140117.0.x86_64.qcow2
glance image-create --name "centos-6.5" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
centos-6.5-20140117.0.x86_64.qcow2
}}}

In the event these links no longer work, copies of the images have been put on an internal projects directory in the GPO infrastructure.

7. '''Edit gcf_config'''
If using the GENI Portal as the clearinghouse:
- Copy the cert:
{{{
cp nye-ca-cert.pem /etc/gram/certs/trusted_roots/
sudo service gram-am restart
}}}
- Install user certs and configure omni (instructions: http://trac.gpolab.bbn.com/gcf/wiki/OmniConfigure/Automatic )


If using the local gcf clearinghouse, set up gcf_config:
In ~/.gcf/gcf_config, change hostname to be the fully qualified domain name of the control host in both the clearinghouse portion and the aggregate manager portion (two places), e.g.,
{{{
host=boscontroller.gram.gpolab.bbn.com
}}}
Change the base_name to reflect the service token (the same service token used in config.json). Use the FQDN of the control node for the token.
{{{
base_name=geni//boscontroller.gram.gpolab.bbn.com//gcf
}}}
Generate new credentials:
{{{
cd /opt/gcf/src
./gen-certs.py --exp -u <username>
./gen-certs.py --exp -u <username> --notAll
}}}
This has to be done twice: the first run creates certificates for the aggregate manager and the clearinghouse; the second creates the user certificates based on those certificates.

Generate a public key pair:
{{{
ssh-keygen -t rsa -C "gram@bbn.com"
}}}

Modify ~/.gcf/omni_config to reflect the service token used in config.json (currently using the FQDN as the token):
{{{
authority=geni:boscontroller.gram.gpolab.bbn.com:gcf
}}}
Set the IP addresses of the ch and sa to the external IP address of the controller:
{{{
ch = https://128.89.91.170:8000
sa = https://128.89.91.170:8000
}}}
or
{{{
ch = https://boscontroller.gram.gpolab.bbn.com:8000
sa = https://boscontroller.gram.gpolab.bbn.com:8000
}}}

=== Configuring ''config.json'' ===

The config.json file (in /etc/gram) is a JSON file that is parsed
by GRAM code at configure/install time as well as at run time.

JSON is a format for expressing dictionaries of name/value
pairs where the values can be constants, lists or dictionaries. There are
no comments, per se, in JSON, but the file as provided has some 'dummy'
variables (e.g. "000001") against which comments can be added.

The following is a list of all the configuration variables that can
be set in the config.json file. For some, defaults are provided in the
code, but it is advised that the values of these parameters be explicitly set.

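Since a stray comma or quote in config.json will break the install scripts, it is worth validating the file after editing (a sketch using Python's standard json module):

{{{
# Prints the parsed file on success; reports the offending line/column on failure
python -m json.tool /etc/gram/config.json
}}}
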
|| '''parameter''' || '''definition''' ||
|| default_VM_flavor || Name of the default VM flavor (if not provided in request RSpec), e.g. 'm1.small' ||
|| default_OS_image || Name of default VM image (if not provided in request RSpec), e.g. 'ubuntu-12.04' ||
|| default_OS_type || Name of OS of default VM image, e.g. 'Linux' ||
|| default_OS_version || Version of OS of default VM image, e.g. '12' ||
|| external_interface || Name of the NIC connected to the external network (internet), e.g. eth0. GRAM configures this interface with a static IP address to be specified by the user ||
|| external_address || IP address of the interface connected to the external network ||
|| external_netmask || Netmask associated with the above IP address ||
|| control_interface || Name of the NIC that is to be on the control plane ||
|| control_address || IP address of the control interface. This should be a private address ||
|| data_interface || Name of the NIC that is to be on the data plane ||
|| data_address || IP address of the data interface ||
|| internal_vlans || Set of VLAN tags for internal links and networks (not for stitching); this must match the !OpenFlow switch configuration ||
|| management_interface || Name of the NIC that is to be on the management plane ||
|| management_address || IP address of the management interface ||
|| management_network_name || Quantum will create a network with this name to provide an interface to the VMs through the controller ||
|| management_network_cidr || The CIDR of the quantum management network. It is recommended that this address space be different from the addresses used on the physical interfaces (control, management, data interfaces) of the control and compute nodes ||
|| management_network_vlan || The VLAN used on the management switch to connect the management interfaces of the compute/control nodes ||
|| mysql_user || The name of the mysql user for !OpenStack operations ||
|| mysql_password || The password of the mysql user for !OpenStack operations (set when installing mysql in step 2 above) ||
|| rabbit_password || The password for the RabbitMQ interface for !OpenStack operations ||
|| nova_password || The password of the nova mysql database for the nova user ||
|| glance_password || The password of the glance mysql database for the glance user ||
|| keystone_password || The password of the keystone mysql database for the keystone user ||
|| quantum_password || The password of the quantum mysql database for the quantum user ||
|| os_tenant_name || The name of the !OpenStack admin tenant (e.g. admin) ||
|| os_username || The name of the !OpenStack admin user (e.g. admin) ||
|| os_password || The password of the !OpenStack admin user ||
|| os_auth_url || The URL for accessing !OpenStack authorization services ||
|| os_region_name || The name of the !OpenStack region namespace (default = RegionOne) ||
|| os_no_cache || Whether to enable/disable caching (default = 1) ||
|| service_token || The unique token identifying this rack, shared by all control and compute nodes of the rack in the same !OpenStack instance (i.e. the ''name'' of the rack; the FQDN of the control host is suggested) ||
|| service_endpoint || The URL by which !OpenStack services are identified within keystone ||
|| public_gateway_ip || The address of the default gateway on the external network interface ||
|| public_subnet_cidr || The range of addresses from which quantum may assign addresses on the external network ||
|| public_subnet_start_ip || The first address of the public addresses available on the external network ||
|| public_subnet_end_ip || The last address of the public addresses available on the external network ||
|| metadata_port || The port on which !OpenStack shares meta-data (default 8775) ||
|| backup_directory || The directory in which the GRAM install process places original versions of config files in case there is a need to roll back to a previous state ||
|| allocation_expiration_minutes || Time at which allocations expire (in minutes), default = 10 ||
|| lease_expiration_minutes || Time at which provisioned resources expire (in minutes), default = 7 days ||
|| gram_snapshot_directory || Directory of GRAM snapshots, default = '/etc/gram/snapshots' ||
|| recover_from_snapshot || Whether GRAM should, on initialization, reinitialize from a particular snapshot (default = None or "", meaning no file provided) ||
|| recover_from_most_recent_snapshot || Whether GRAM should, on initialization, reinitialize from the most recent snapshot (default = True) ||
|| snapshot_maintain_limit || Number of most recent snapshots maintained by GRAM (default = 10) ||
|| subnet_numfile || File where GRAM stores the subnet number of the last allocated subnet, default = '/etc/gram/GRAM-next-subnet.txt'. ''Note: This is temporary until we have namespaces working.'' ||
|| port_table_file || File where GRAM stores the SSH proxy port state table, default = '/etc/gram/gram-ssh-port-table.txt' ||
|| port_table_lock_file || File where SSH port table lock state is stored, default = '/etc/gram/gram-ssh-port-table.lock' ||
|| ssh_proxy_exe || Location of the GRAM SSH proxy utility, which enables GRAM to create and delete proxies for each user requested, default = '/usr/local/bin/gram_ssh_proxy' ||
|| ssh_proxy_start_port || Start of SSH proxy ports, default = 3000 ||
|| ssh_proxy_end_port || End of SSH proxy ports, default = 3999 ||
|| vmoc_interface_port || Port on which to communicate with the VMOC interface manager, default = 7001 ||
|| vmoc_slice_autoregister || Should GRAM automatically register slices with VMOC? (default = True) ||
|| vmoc_set_vlan_on_untagged_packet_out || Should VMOC set the VLAN on an untagged outgoing packet (default = False) ||
|| vmoc_set_vlan_on_untagged_flow_mod || Should VMOC set the VLAN on an untagged outgoing flowmod (default = True) ||
|| vmoc_accept_clear_all_flows_on_startup || Should VMOC clear all flows on startup (default = True) ||
|| control_host_address || The IP address of the controller node's control interface (used to set /etc/hosts on the compute nodes) ||
|| mgmt_ns || DO NOT set this field; it is set during installation and is the name of the namespace containing the Quantum management network. This namespace can be used to access the VMs using their management addresses ||
|| disk_image_metadata || A dictionary mapping names of images (as registered in Glance) to tags for 'os' (operating system of image), 'version' (version of OS of image) and 'description' (human-readable description of image), e.g. ||

{{{
   {
   "ubuntu-2nic":
       {
        "os": "Linux",
        "version": "12.0",
        "description": "Ubuntu image with 2 NICs configured"
        },
   "cirros-2nic-x86_64":
       {
        "os": "Linux",
        "version": "12.0",
        "description": "Cirros image with 2 NICs configured"
        }
   }
}}}
|| control_host || The name or IP address of the control node host ||
|| compute_hosts || The names/addresses of the compute node hosts, e.g. ||
{{{
{
   "boscompute1": "10.10.8.101",
   "boscompute2": "10.10.8.102",
   "boscompute4": "10.10.8.104"
}
}}}
|| host_file_entries || The names/addresses of machines to be included in /etc/hosts, e.g. ||
{{{
{
   "boscontrol": "128.89.72.112",
   "boscompute1": "128.89.72.113",
   "boscompute2": "128.89.72.114"
}
}}}
||= stitching_info =|| Information necessary for the Stitching Infrastructure ||
   || aggregate_id || The URN of this AM ||
   || aggregate_url || The URL of this AM ||
   || edge_points || A list of dictionaries, each with the following fields: ||
      || local_switch || URN of local switch || mandatory ||
      || port || URN of port on local switch leading to remote switch || mandatory ||
      || remote_switch || URN of remote switch || mandatory ||
      || vlans || VLAN tags configured on this port || mandatory ||
      || traffic_engineering_metric || Configurable metric for traffic engineering || optional, default value = 10 (no units) ||
      || capacity || Capacity of the link between endpoints || optional, default value = 1000000000 (bytes/sec) ||
      || interface_mtu || MTU of interface || optional, default value = 900 (bytes) ||
      || maximum_reservable_capacity || Maximum reservable capacity between endpoints || optional, default value = 1000000000 (bytes/sec) ||
      || minimum_reservable_capacity || Minimum reservable capacity between endpoints || optional, default value = 1000000 (bytes/sec) ||
      || granularity || Increments for reservations || optional, default value = 1000000 (bytes/sec) ||
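
An illustrative sketch of what a stitching_info entry might look like in config.json (all URNs, URLs and the VLAN range are placeholders; the field names come from the table above):

{{{
"stitching_info": {
    "aggregate_id": "urn:publicid:IDN+gram+authority+am",
    "aggregate_url": "https://boscontroller.gram.gpolab.bbn.com:5001",
    "edge_points": [
        {
         "local_switch": "urn:publicid:IDN+gram+switch+force10",
         "port": "urn:publicid:IDN+gram+switch+force10+port+0/25",
         "remote_switch": "urn:publicid:IDN+transit+switch+core",
         "vlans": "1000-1050"
        }
    ]
}
}}}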

== Installing Operations Monitoring ==
Monitoring can be installed after testing the initial installation of GRAM. Most supporting infrastructure was installed by the steps above. Some steps, however, still need to be done by hand; the instructions can be found here: [wiki:InstallOpsMon Installing Monitoring on GRAM]


== Testing GRAM installation ==
This simple rspec can be used to test the GRAM installation: attachment:2n-1l.rspec


{{{
# Restart gram-am and the clearinghouse
sudo service gram-am restart
sudo service gram-ch restart

# Check the omni/gcf config
cd /opt/gcf/src
./omni.py getusercred

# Allocate and provision a slice
# (this example uses an rspec in /home/gram called 2n-1l.rspec)
./omni.py -V 3 -a http://130.127.39.170:5001 allocate a1 ~/2n-1l.rspec
./omni.py -V 3 -a http://130.127.39.170:5001 provision a1 ~/2n-1l.rspec

# Check that the VMs were created
nova list --all-tenants

# Check that the VMs booted, using the VM IDs from the above command:
nova console-log <ID>

# Look at the 192.x.x.x IP in the console log

# Find the namespace for the management plane:
sudo ip netns list
# Look at each qrouter-... for the one that has the external (130) and management (192) addresses
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ifconfig

# Using this namespace, ssh into the VM:
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ssh -i ~/.ssh/id_rsa gramuser@192.168.10.4

# Verify that the data plane is working by pinging across VMs on the 10.x.x.x addresses
# (the VM above has 10.0.21.4 and the other VM in this example has 10.0.21.3)
ping 10.0.21.3
}}}

== Turn off Password Authentication on the Control and Compute Nodes ==

1.  Generate an RSA ssh key pair on the control node for the gram user, or use the one previously generated if it exists (i.e. ~gram/.ssh/id_rsa and ~gram/.ssh/id_rsa.pub):
    ssh-keygen -t rsa -C "gram@address"
2.  Generate a DSA ssh key pair on the control node for the gram user, or use the one previously generated if it exists (i.e. ~gram/.ssh/id_dsa and ~gram/.ssh/id_dsa.pub). Some components can only deal well with DSA keys, so access from the control node to other resources on the rack should use the DSA key:
    ssh-keygen -t dsa -C "gram@address"
3.  Copy the public key (id_dsa.pub) to the compute nodes (see the sketch after this list).
4.  On the control and compute nodes: cat id_dsa.pub >> ~/.ssh/authorized_keys
5.  As sudo, edit the sshd configuration (/etc/ssh/sshd_config) and ensure that these entries are set this way:
     RSAAuthentication yes[[BR]]
     !PubkeyAuthentication yes[[BR]]
     !PasswordAuthentication no[[BR]]
6.  Restart the ssh service: sudo service ssh restart
7.  Verify by logging in using the key: ssh -i ~/.ssh/id_dsa gram@address

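A sketch of steps 3 and 4 from the control node (the compute node name is a placeholder; repeat for each compute node):

{{{
# Copy the DSA public key to a compute node and append it to authorized_keys
scp ~/.ssh/id_dsa.pub gram@boscompute1:/tmp/
ssh gram@boscompute1 'cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys'
}}}
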
== TODO ==
* Need to make a link to /opt/gcf on the compute nodes

* Make sure that your RabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node (broken sed in OpenVSwitch.py)

* Service token not set in keystone.conf

* Add a step in the installation process that checks the status of the services before we start our installation scripts - check dependencies

* Fix installation so that gcf_config has the proper entry for host in the aggregate and clearinghouse portions - also need to check where the port number for the AM is actually read from, as it is not gcf_config


== DEBUGGING NOTES ==
* If it gets stuck at provisioning, you may have lost connectivity with one or more compute nodes. Check that network-manager is removed.

* If IP addresses are not being assigned and the VMs stall on boot: quantum port-delete 192.168.10.2 (the dhcp agent) and restart the quantum-* services.

* To create the deb package, check the Software Release Page for instructions: [wiki:SoftwareReleaseNotes Software Release Procedure]


[https://superior.bbn.com/trac/bbn-rack/wiki/DebuggingNotes More Debugging Notes][[BR]]