

OpenGENI GRAM Installation Guide

GENI Introduction

This page provides general instructions for building an OpenGENI rack. Although anyone can buy their own hardware and build the software available from GitHub, there are additional requirements before a rack can be connected to the GENI infrastructure. All new racks must undergo two types of testing.

Additionally, there are various pages that will give background for new racks and provide an outline for the various processes to get a GENI rack to production:

OpenGENI Introduction

This document describes the procedures and context for installing the GRAM software. The following aspects are covered individually:

  • Configuration Overview
  • Hardware Requirements
  • Software Requirements
  • Network configuration
  • OpenStack Installation and Configuration
  • GRAM Installation and Configuration

Hardware Requirements

The minimum requirements are:

  • 1 Control Server
  • 1 Network Server
  • 1 Compute Server (Can be more)
  • 1 Switch with at least (number of servers)*3 ports [For non-dataplane traffic]
  • 1 OpenFlow Switch with at least (number of servers) ports [For dataplane traffic]
  • Each server should have at least 4 Ethernet ports
  • Each server should have internet connectivity for downloading packages

Software Requirements


The following Debian packages are required on the controller node:

  • git


The following ports will be used by GRAM components. Verify that these ports are not already in use; if one is, change the configuration of the corresponding GRAM component below to use a different port.

  • Controller node
    • 8000: GRAM Clearinghouse (Unless you are using a different clearinghouse). See this section to change this port.
    • 5001: GRAM Aggregate Manager Version 3. See this section to change this port.
    • 5000: GRAM Aggregate Manager Version 2. See this section to change this port.
    • 9000: VMOC Default Controller
    • 7001: VMOC Management. See this section to change this port.
    • 6633: VMOC
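A quick way to confirm that none of the default ports above is already bound is a loop over the port list (a sketch; it assumes the iproute2 `ss` utility is available on the node):

```shell
# Check each default GRAM/VMOC port; report whether it is already bound.
for port in 8000 5001 5000 9000 7001 6633; do
    if ss -ltn 2>/dev/null | grep -q ":$port "; then
        echo "port $port is IN USE - reconfigure that component"
    else
        echo "port $port is free"
    fi
done
```

Any port reported as in use should be changed in the corresponding component's configuration before installation.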

OpenStack Requirements

  • This guide was written for Ubuntu 14.04
  • All dependencies will be downloaded from the Ubuntu repository.

Image requirements

  • Currently, nova images must meet the following requirements for GRAM:
    1. Must have the following packages installed:
      • cloud-utils
      • openssh-server
      • bash
      • apt

Configuration Overview

OpenStack and GRAM present software layers on top of rack hardware. Rack nodes fall into three categories:

  • Controller Node: The central management and coordination point for OpenStack and GRAM operations and services. There is one of these per rack.
  • Network Node: The management point for the networking (Neutron) aspects of OpenStack.
  • Compute Node: The resource from which VMs and network connections are sliced and allocated on request. There are many of these per rack.

OpenStack and GRAM require establishing four distinct networks among the different nodes of the rack:

  • Control Network: The network over which OpenStack and GRAM commands flow between control and compute nodes. This network is NOT OpenFlow controlled and has internal IP addresses for all nodes.
  • Data Network: The allocated network and associated interfaces between created VMs, representing the requested compute/network resource topology. This network IS OpenFlow controlled.
  • External Network: The network connecting the controller node to the external internet. The compute nodes may or may not also have externally visible addresses on this network, for convenience.
  • Management Network: Enables SSH entry into and between created VMs. This network is NOT OpenFlow controlled and has internal IP addresses for all nodes.

The mapping of the networks to interfaces is arbitrary and can be changed by the installer. For this document we assume the following convention:

  • eth0: Control network
  • eth1: Data network
  • eth2: External network
  • eth3: Management network

The Controller node will have four interfaces, one for each of the above networks. The Compute nodes will have three (Control, Data and Management); the External interface is optional.
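Under this convention, a controller-node /etc/network/interfaces might be shaped like the sketch below. Every address is a placeholder, written here as the name of the matching config.json parameter; substitute the values planned for your rack (netmasks are shown as /24 purely for illustration):

```text
# /etc/network/interfaces (controller node) -- illustrative sketch only

auto eth0                      # Control network (internal addresses)
iface eth0 inet static
    address <control_address>
    netmask 255.255.255.0

auto eth1                      # Data network (OpenFlow controlled)
iface eth1 inet manual
    up ip link set eth1 up

auto eth2                      # External network (internet facing)
iface eth2 inet static
    address <external_address>
    netmask <external_netmask>
    gateway <public_gateway_ip>

auto eth3                      # Management network (internal addresses)
iface eth3 inet static
    address <management_address>
    netmask 255.255.255.0
```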

More details on the network configuration are provided in wiki:"GENIRacksHome/OpenGENIRacks/ArchitectureDescription".

Network Configuration

The first step of OpenStack/GRAM configuration is establishing the networks described above.

We need to define a range of VLANs for the data network (say, 1000-2000) and separate VLANs for the external, control, and management networks (say, 5, 6, and 7) on the management switch. The external and control network ports should be configured untagged, and the management port should be configured tagged.

The Control, External and Management networks are connected between the rack management switch and ethernet interfaces on the Controller or Compute nodes.

The Data network is connected between the rack OpenFlow switch and an ethernet interface on the Control and Compute nodes.

(Figure: Juno rack topology, showing the added network node.)

OpenFlow Switch for the Data Network

The ports on the OpenFlow switch to which data network interfaces have been connected need to be configured to trunk the VLANs of the data network. How this is done varies from switch to switch but typical commands look something like

conf t
  vlan <vlanid>
    tagged <ports>
write memory

On the OpenFlow switch, for each VLAN used in the data network (1000-2000), set the controller to point to the VMOC running on the control node. The command will vary from switch to switch but this is typical:

conf t
  vlan <vlanid>
    openflow controller tcp:<controller_addr>:6633 enable
    openflow fail-secure on
write memory

For the Dell Force10 switch, the following lines set up the VLAN trunking on the data network and point the default OpenFlow controller at the VMOC:

interface Vlan 1001 of-instance 1
 no ip address
 tagged TenGigabitEthernet 0/0-2
 no shutdown
openflow of-instance 1
 controller 1  tcp
 flow-map l2 enable
 flow-map l3 enable
 interface-type vlan
 multiple-fwd-table enable
 no shutdown

The above snippet assumes that the controller node, running VMOC, is at

For a sample configuration file for the Dell Force10, see attachment:force10-running:wiki:GENIRacksHome/OpenGENIRacks/InstallationGuideGrizzly

The ports on the management switch to which management network interfaces have been connected need to be configured to trunk the VLAN of the management network. How this is done varies from switch to switch, but typical commands look something like:

conf t
  int gi0/1/3
  switchport mode trunk
  switchport trunk native vlan 1
  switchport trunk allowed vlan add 7
  no shutdown
write memory

Here is a config file for a Dell Powerconnect 7048: attachment:powerconnect-running:wiki:GENIRacksHome/OpenGENIRacks/InstallationGuideGrizzly. We use VLAN 200, 300 and 2500 for the control plane, management plane and external network respectively.

GRAM and OpenStack Installation and Configuration

GRAM provides a custom installation script that installs and configures OpenStack Juno to GRAM's requirements, as well as GRAM itself.

  1. Install fresh Ubuntu 14.04 image on control, network and N compute nodes
  • From among the rack nodes, select one to be the control node, one to be the network node, and the others to be compute nodes. The control node should have at least 4 NICs; the network and compute nodes should have at least 3 NICs.
  • Install the Ubuntu 14.04 image on each selected node. The server version is preferred, with biosdevname=0 passed to the kernel at boot.
    • When installing the operating system, before installing the server, hit F6 (Other Options), type "biosdevname=0", and continue with the installation.
  • Create 'gram' user with sudo/admin privileges
  • If there are additional admin accounts, you must manually install omni for each of these accounts.
  • Set up SSH keys to allow access to the network node from the control node.

On the control node:

  • Copy the gram SSH public key and create an authorized_keys file on the network node so that gram commands can be executed without typing in a password. You might also want to do this on the compute nodes.
ssh gram@networkNode mkdir .ssh
scp .ssh/ gram@networkNode:~/.ssh
ssh gram@networkNode
cat .ssh/ >> .ssh/authorized_keys

It might be good to verify that gram can indeed issue commands without being prompted for a password:

ssh gram@networkNode ls /etc

On all compute, network and control servers:

Make sure the 10.10.8 and 10.10.5 interfaces are up before proceeding. Check that /etc/network/interfaces has the correct entries, then run:

sudo ifup eth1
sudo ifup eth2
sudo ifup eth3

You may need to repeat ifdown/ifup a few times before the interfaces start working. Or you may need to 'sudo service networking restart'. Or you may need to reboot one of the nodes. Do not proceed until all these interfaces plus the external interface on all nodes can ping one another.

  1. Install mysql on the control node
sudo apt-get install -y mariadb-server python-mysqldb
  • You will be prompted for the password of the mysql admin. Type it in (twice) and remember it: it will be needed in the config.json file for the value of mysql_password.
  1. Install OpenStack and GRAM on the control, network and compute nodes
  • Generate the DEBIAN files gram_control.deb, gram_network.deb and gram_compute.deb.
  • These packages can be generated from the repository, following instructions from here:
  • Alternatively, they can be obtained by request from
  • Set up the APT repository to read the correct version of juno packages:
    sudo apt-get install -y ubuntu-cloud-keyring
    echo deb trusty-updates/juno main >> /tmp/cloudarchive-juno.list
    sudo mv /tmp/cloudarchive-juno.list /etc/apt/sources.list.d/
  • Update the current repository:
    sudo apt-get -y update && sudo apt-get -y dist-upgrade
  • Get the gdebi package for direct installation of deb files
    sudo apt-get install -y gdebi-core
  • Install the gram package (where <type> is control, network or compute depending on what machine type is being installed):
    sudo gdebi gram_<control/network/compute>.deb
  • Edit /etc/gram/config.json. NOTE: This is the most critical step of the process: it specifies your passwords and network configuration so that OpenStack will be configured properly. [See the section "Configuring config.json" below for details on the variables in that file.]
  • Run the GRAM installation script (where <type> is control, network or compute depending on what machine type is being installed):
    sudo /etc/gram/ <control/network/compute>
  • Configure the OS and network. You will lose network connectivity during this step, so it is recommended that the following command be run directly on the machine or inside the Linux 'screen' program:
    sudo /tmp/install/install_operating_system_[control/network/compute].sh
  • Configure everything else. NOTE: Do these in order: Control, then Network, then all Compute.
    sudo /tmp/install/install_[control/network/compute].sh

This last command will do a number of things:

  • Read in all apt dependencies required
  • Configure the OpenStack configuration files based on values set in config.json
  • Start all OpenStack services
  • Start all GRAM services
  • Make sure all machines can be reached by SSH. If not, you may need to reboot (typically the network node).
  1. Setting up Namespace on Control Node

If something goes wrong (you'll see errors in the output stream), the scripts being run are in /tmp/install/install*.sh. You can usually run the commands by hand and get things to work, or at least see where things went wrong (often a problem in the configuration file).

  • Set up the namespace only on the control node; with Neutron, however, the namespaces will be found on the network node.
    1. On the network node, check for namespaces. Check that sudo ip netns shows two entries; the qrouter-* entry is the important one. Output will look like this:
      # ip netns
    2. If the qdhcp-* namespace is not there, type (on the network node) sudo service neutron-dhcp-agent restart
    3. If you still do not see both entries, try restarting all the Neutron services on the network node:
      • sudo service openvswitch-switch restart
      • sudo service neutron-plugin-openvswitch-agent restart
      • sudo service neutron-l3-agent restart
      • sudo service neutron-dhcp-agent restart
      • sudo service neutron-metadata-agent restart

Once the namespaces are visible on the network node, type the following in a root shell on the control node ONLY:

export PYTHONPATH=$PYTHONPATH:/opt/gcf/src:/home/gram/gram/src:/home/gram/gram/src/gram/am/gram
python /home/gram/gram/src/gram/am/gram/
sudo service gram-am restart
  1. Installing OS Images : Only on the Control Node

At this point, OS images must be placed in OpenStack Glance (the image repository service) to support creation of virtual machines.

The choice of images is installation-specific, but these commands are provided as a reasonable example of a first image, a 64-bit Ubuntu 12.04 server in qcow2 format (

glance image-create --name "ubuntu-12.04" --is-public=true \
--disk-format=qcow2 --container-format=bare < \
#Make sure your default_OS_image in /etc/gram/config.json is set to 
# the name of an existing image

Another image - a 64-bit Ubuntu 14.04 server in qcow2 format (

glance image-create --name "ubuntu-14.04" --is-public=true \
--disk-format=qcow2 --container-format=bare < \

Another image, a 64-bit Fedora 19 in qcow2 format (

glance image-create --name "fedora-19" --is-public=true \
--disk-format=qcow2 --container-format=bare < \

Another image, a 64-bit CentOS 6.5 in qcow2 format (

glance image-create --name "centos-6.5" --is-public=true \
--disk-format=qcow2 --container-format=bare < \

In the event these links no longer work, copies of the images have been put on an internal projects directory in the GPO infrastructure.

  1. Adding Another OpenStack OS Flavor

We also wanted to add another OpenStack OS flavor beyond those created by default; in this case, a 'super' flavor. As sudo, type:

nova flavor-create m1.super 7 32768 160 16   # <name> <id> <ram MB> <disk GB> <vcpus>
nova flavor-list

The 7 is the ID of the flavor. Generally, only 5 flavors are installed by default, so using 7 should be safe. Otherwise pick a number one larger than the number of flavors you have; check using nova flavor-list.

  1. Edit gcf_config

If using the GENI Portal as the clearinghouse:

If using the local gcf clearinghouse, set up gcf_config: in ~/.gcf/gcf_config, change hostname to the fully qualified domain name of the control host in both the clearinghouse portion and the aggregate manager portion (two places), e.g.

Change the base_name to reflect the service token (the same service token used in config.json). Use the FQDN of the control for the token.


Generate new credentials:

cd /opt/gcf/src
./ --exp -u gramuser
./ --exp -u gramuser --notAll
sudo service gram-ch restart

This has to be done twice: the first run creates certificates for the aggregate manager and the clearinghouse; the second creates the user certificates based on the previous certificates.

Modify ~/.gcf/omni_config to reflect the service token used in config.json: (Currently using FQDN as token)

Set the IP addresses of the ch and sa to the external IP address of the controller:

ch =
sa =
ch =
sa =

  1. Enable Flash for Flack if necessary. This is not currently necessary, but is kept here for reference in case Flash needs to be enabled.

Install xinetd:

apt-get install xinetd

Add this line to /etc/services:

flashpolicy     843/tcp    # ProtoGENI flashpolicy service

Add this file to /etc/xinetd.d as flashpolicy:

# The flashpolicy service allows connections to ports 443 (HTTPS) and 8443
# (geni-pgch), as well as ports 8001-8002 which may be used by gcf-am
# or related local services.  It is harmless to allow these ports via
# flashpolicy if they are closed in the firewall.
service flashpolicy
{
       disable         = no
       id              = flashpolicy
       socket_type     = stream
       protocol        = tcp
       user            = root
       wait            = no
       server          = /bin/echo
       server_args     = <cross-domain-policy> <site-control permitted-cross-domain-policies="master-only"/> <allow-access-from domain="*" to-ports="80,443,5001,5002"/> </cross-domain-policy>
}

Restart xinetd

sudo service xinetd restart

Configuring config.json

The config.json file (in /etc/gram) is a JSON file that is parsed by GRAM code at configure/install time as well as at run time.

JSON is a format for expressing dictionaries of name/value pairs where the values can be constants, lists or dictionaries. There are no comments, per se, in JSON, but the file as provided has some 'dummy' variables (e.g. "000001") whose values serve as comments.

The following is a list of all the configuration variables that can be set in the config.json JSON file. For some, defaults are provided in the code but it is advised that the values of these parameters be explicitly set.
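For orientation, a heavily abridged config.json might be shaped as follows. Every value is a placeholder rather than a working configuration, and the "000001"-style entries illustrate the dummy-variable comment idiom described above:

```json
{
    "000001": "Comment: interface assignment for this node",
    "control_interface": "eth0",
    "data_interface": "eth1",
    "external_interface": "eth2",
    "management_interface": "eth3",
    "000002": "Comment: passwords chosen during installation",
    "mysql_user": "gram",
    "mysql_password": "<mysql admin password from the mariadb install>",
    "000003": "Comment: defaults used when a request RSpec omits them",
    "default_VM_flavor": "m1.small",
    "default_OS_image": "ubuntu-12.04",
    "internal_vlans": "1000-2000",
    "service_token": "<FQDN of the control host>"
}
```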

parameter definition
default_VM_flavor Name of the default VM flavor (if not provided in request RSpec), e.g. 'm1.small'
default_OS_image Name of default VM image (if not provided in request RSpec), e.g. 'ubuntu-12.04'
default_OS_type Name of OS of default VM image, e.g. 'Linux'
default_OS_version Version of OS of default VM image, e.g. '12'
external_interface name of the nic connected to the external network (internet) e.g. eth0. GRAM configures this interface with a static IP address to be specified by the user
external_address IP address of the interface connected to the external network
external_netmask netmask associated with the above IP address
control_host Hostname of the control host of this rack, i.e. gram1-control
control_host_addr Address of the control host on the control plane, i.e. (whatever is on the control network)
control_host_external_addr Address of the control host on the external plane, internet reachable
control_interface name of the nic that is to be on the control plane
control_address IP address of control address. This should be a private address
data_interface name of the nic that is to be on the data plane
data_address IP address of the data interface
host_fqdn Fully Qualified name of this host
internal_vlans Set of VLAN tags for internal links and networks, not for stitching, this must match the OpenFlow switch configuration
management_interface name of the nic that is to be on the management plane
management_address IP address of the management interface
management_network_name Neutron will create a network with this name to provide an interface to the VMs through the controller
management_network_cidr The cidr of the neutron management network. It is recommended that this address space be different from the addresses used on the physical interfaces (control, management, data interfaces) of the control and compute nodes
management_network_vlan The vlan used on the management switch to connect the management interfaces of the compute/control nodes.
mysql_user The name of the mysql_user for OpenStack operations
mysql_password The password of the mysql_user for OpenStack operations (see [1] above)
rabbit_password The password for RabbitMQ interface OpenStack operations
nova_password The password of the nova mysql database for the nova user
glance_password The password of the glance mysql database for the glance user
keystone_password The password of the keystone mysql database for the keystone user
network_database The OpenStack networking system in use (current release is neutron)
network_password The password of the network mysql database for the network user
network_type The OpenStack networking system in use (current release is neutron)
openstack_type The name of the OpenStack release (e.g. grizzly, juno - current release is juno)
os_tenant_name The name of the OpenStack admin tenant (e.g. admin)
os_username The name of the OpenStack admin user (e.g. admin)
os_password The password of the OpenStack admin user
os_auth_url The URL for accessing OpenStack authorization services
os_region_name The name of the OpenStack region namespace (default = RegionOne)
os_no_cache Whether to enable/disable caching (default = 1)
service_token The unique token for identifying this rack, shared by all control and compute nodes of the rack in the same OpenStack instance (ie. the name of the rack, suggest FQDN of host)
service_endpoint The URL by which OpenStack services are identified within keystone
public_gateway_ip The address of the default gateway on the external network interface
public_subnet_cidr the range of address from which neutron may assign addresses on the external network
public_subnet_start_ip the first address of the public addresses available on the external network
public_subnet_end_ip the last address of the public addresses available on the external network
metadata_port The port on which OpenStack shares meta-data (default 8775)
backup_directory The directory in which the GRAM install process places original versions of config files in case of the need to roll-back to a previous state.
allocation_expiration_minutes Time at which allocations expire (in minutes), default=10
lease_expiration_minutes Time at which provisioned resources expire (in minutes), default = 7 days
gram_snapshot_directory Directory of GRAM snapshots, default '/etc/gram/snapshots'
recover_from_snapshot Whether GRAM should, on initialization, reinitialize from a particular snapshot (default = None or "" meaning no file provided)
recover_from_most_recent_snapshot Whether GRAM should, on initialization, reinitialize from the most recent snapshot (default = True)
snapshot_maintain_limit Number of most recent snapshots maintained by GRAM (default = 10)
subnet_numfile File where gram stores the subnet number for last allocated subnet, default = '/etc/gram/GRAM-next-subnet.txt'. Note: This is temporary until we have namespaces working.
port_table_file File where GRAM stores the SSH proxy port state table, default = '/etc/gram/gram-ssh-port-table.txt'
port_table_lock_file File where SSH port table lock state is stored, default = '/etc/gram/gram-ssh-port-table.lock'
ssh_proxy_exe Location of GRAM SSH proxy utility, which enables GRAM to create and delete proxies for each user requested, default = '/usr/local/bin/gram_ssh_proxy'
ssh_proxy_start_port Start of SSH proxy ports, default = 3000
ssh_proxy_end_port End of SSH proxy ports, default = 3999
switch_type Currently support either HP or Dell OpenFlow Switch - but best to check with GENI as to the specific models
vmoc_interface_port Port on which to communicate to VMOC interface manager, default = 7001
vmoc_slice_autoregister Should GRAM automatically register slices to VMOC? Default = True
vmoc_set_vlan_on_untagged_packet_out Should VMOC set VLAN on untagged outgoing packet, default = False
vmoc_set_vlan_on_untagged_flow_mod Should VMOC set VLAN on untagged outgoing flowmod, default = True
vmoc_accept_clear_all_flows_on_startup Should VMOC clear all flows on startup, default = True
control_host_address The IP address of the controller node's control interface (used to set /etc/hosts on the compute nodes)
mgmt_ns DO NOT set this field; it is set during installation and is the name of the namespace containing the Neutron management network. This namespace can be used to access the VMs using their management addresses
disk_image_metadata This provides a dictionary mapping names of images (as registered in Glance) to tags for 'os' (operating system of image), 'version' (version of OS of image) and 'description' (human readable description of image), e.g.
        "os": "Linux",
        "version": "12.0",
        "description": "Ubuntu image with 2 NICs configured"

        "os": "Linux",
        "version": "12.0",
        "description": "Cirros image with 2 NICs configured"
control_host The name or IP address of the control node host
compute_hosts The names/addresses of the compute node hosts, e.g.
   "boscompute1": "",
   "boscompute2": "",
   "boscompute4": ""
host_file_entries The names/addresses of machines to be included in /etc/hosts, e.g.
   "boscontrol": "",
   "boscompute1": "",
   "boscompute2": ""
location Longitude and latitude of rack for graphic display purposes
{"longitude": "-70", "latitude":"42"},
stitching_info Information necessary for the Stitching Infrastructure
aggregate_id The URN of this AM
aggregate_url The URL of this AM
edge_points A list of dictionaries for which:
local_switch URN of local switch mandatory
port URN port on local switch leading to remote switch mandatory
remote_switch URN of remote switch mandatory
vlans VLAN tags configured on this port mandatory
traffic_engineering_metric configurable metric for traffic engineering optional, default value = 10 (no units)
capacity Capacity of the link between endpoints optional, default value = 1000000000 (bytes/sec)
interface_mtu MTU of interface optional, default value = 900 (bytes)
maximum_reservable_capacity Maximum reservable capacity between endpoints optional, default value = 1000000000 (bytes/sec)
minimum_reservable_capacity Minimum reservable capacity between endpoints optional, default value = 1000000 (bytes/sec)
granularity Increments for reservations optional, default value = 1000000 (bytes/sec)
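Assembled from the fields above, a stitching_info entry might be shaped like this sketch. All URNs and VLAN ranges are placeholders; the numeric values simply repeat the documented defaults:

```json
"stitching_info": {
    "aggregate_id": "<URN of this AM>",
    "aggregate_url": "<URL of this AM>",
    "edge_points": [
        {
            "local_switch": "<URN of local switch>",
            "port": "<URN of port on local switch>",
            "remote_switch": "<URN of remote switch>",
            "vlans": "1000-2000",
            "traffic_engineering_metric": 10,
            "capacity": 1000000000,
            "interface_mtu": 900,
            "maximum_reservable_capacity": 1000000000,
            "minimum_reservable_capacity": 1000000,
            "granularity": 1000000
        }
    ]
}
```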

Installing Operations Monitoring

Monitoring can be installed after testing the initial installation of GRAM. Most supporting infrastructure was installed by the steps above. Some steps, however, still need to be done by hand and the instructions can be found here: Installing Monitoring on GRAM

Testing GRAM installation

This simple rspec can be used to test the gram installation - attachment:2n-1l.rspec

# Restart gram-am and clearinghouse
sudo service gram-am restart
sudo service gram-ch restart

# check omni/gcf config
cd /opt/gcf/src
./ getusercred

# allocate and provision a slice
# I created an rspec in /home/gram called 2n-1l.rspec
./ -V 3 -a allocate a1 ~/2n-1l.rspec
./ -V 3 -a provision a1 ~/2n-1l.rspec

# check that the VMs were created
nova list --all-tenants

# check that the VMs booted, using the VM IDs from the above command:
nova console-log <ID>

# look at the 192.x.x.x IP in the console log and check that the ssh key has been loaded

# find the namespace for the management plane:
sudo ip netns list
     # look at each qrouter-... for one that has both the external (130) and management (192) addresses
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ifconfig

# using this namespace, ssh into the VM:
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ssh -i ~/.ssh/id_rsa gramuser@

# verify that the data plane is working by pinging across VMs on the 10.x.x.x addresses
# The above VM has and the other VM I created has

Turn off Password Authentication on the Control and Compute Nodes

  1. Generate an RSA SSH key pair on the control node for the gram user, or use the one previously generated if it exists: i.e. ~gram/.ssh/id_rsa and ~gram/.ssh/

ssh-keygen -t rsa -C "gram@address"

  2. Generate a DSA SSH key pair on the control node for the gram user, or use the one previously generated if it exists: i.e. ~gram/.ssh/id_dsa and ~gram/.ssh/ Some components could only deal well with DSA keys, so access from the control node to other resources on the rack should use the DSA key.

ssh-keygen -t dsa -C "gram@address"

  3. Copy the public key to the compute nodes, i.e.
  4. On the control and compute nodes, cat >> ~/.ssh/authorized_keys
  5. As sudo, edit /etc/ssh/sshd_config and ensure that these entries are set this way:

RSAAuthentication yes
PubkeyAuthentication yes
PasswordAuthentication no

  6. Restart the SSH service: sudo service ssh restart.
  7. Verify by logging in using the key: ssh -i ~/.ssh/id_dsa gram@address



  • If provisioning gets stuck, you may have lost connectivity with one or more compute nodes. Check that network-manager has been removed.


More Debugging Notes
