Version 15 (modified 11 years ago)


GENI Racks Administration

This page describes the administrative tasks and duties associated with each GENI rack. For each rack type, a site contact coordinates delivery, installation, configuration, and maintenance of the rack. Site contacts can rely on GPO support in this role; please contact us at with any questions. The GPO also provides a real-time public IRC chat room, channel #geni, where engineers are often available to help debug any issues you encounter. See HowTo/ConnectToGENIChatRoom for details.

Site Requirements and Rack Installation

The site contact works with the organization deploying the rack (RENCI, HP, or the GPO for ExoGENI, InstaGENI, or Starter racks, respectively) to procure the rack and to define the site requirements for their specific site networks. The site requirements include:

  • Network Setup - Define how the rack will connect to the Internet and to the GENI backbones, e.g. regional connections, connection speed, VLANs to be used, etc.
  • Site Security Requirements - Determine the engineering and procedures needed for rack connectivity, such as FlowVisor rules, IP filters, etc.
  • Address assignment for rack components - Define which addresses, subnet masks, and routes need to be configured for the rack components.
  • Power requirements - Define which PDU and related power equipment matches the on-site power availability.
  • Administrative accounts - Set up site administrator accounts and any other accounts needed to manage access to rack resources. Sites that prefer not to operate administrator accounts can delegate administrative responsibility to a GENI operations group.
  • Delivery logistics - Details of where the rack is to be delivered, who will accept the delivery, and when the delivery will take place. Also covers identifying and planning for any physical restrictions on the rack delivery.
  • GENI Agreements - Sites need to read and agree to basic usage agreements for GENI racks. All sites should read and understand the GENI Recommended Usage Policy and the GENI Aggregate Providers Agreement. There may also be additional specific agreements for the ExoGENI and InstaGENI projects.
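As an illustration of the address-assignment item above, a control-plane interface stanza for a rack node might look like the following. This is a hypothetical sketch using RFC 5737 documentation addresses; the real values come out of the site-requirements discussion with the deploying organization.

```
# Hypothetical /etc/network/interfaces stanza for a rack control node.
# All addresses are documentation values (192.0.2.0/24), not real assignments.
auto eth0
iface eth0 inet static
    address 192.0.2.10        # control-plane address assigned by the site
    netmask 255.255.255.0
    gateway 192.0.2.1         # site's upstream router
    dns-nameservers 192.0.2.53
```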

In addition, the site contact has ongoing administrative responsibilities that include:

  • managing user accounts for experimenters and for other operators.
  • managing updates for software and firmware, depending on the rack type. (See section below for specific rack type)
  • accessing compute and network resource consoles in the rack to support/manage experimenter resources or debug problems.
  • ensuring that security and usage procedures are followed.

ExoGENI Administration

ExoGENI rack administration tasks are to be defined and will be captured here when available.

Get ExoGENI rack Accounts

Access Device Consoles

Monitoring ExoGENI rack Health

Perform an experiment in your ExoGENI rack

Install a VM image on your ExoGENI rack

ExoGENI Racks Software/Firmware upgrades

InstaGENI Administration

InstaGENI rack administration tasks are to be defined and will be captured here when available.

Get InstaGENI rack Accounts

Access Device Consoles

Monitoring InstaGENI rack Health

Perform an experiment in your InstaGENI rack

Install a VM image on your InstaGENI rack

InstaGENI Racks Software/Firmware upgrades

Starter Racks Administration

This section provides a few examples of administrative tasks on a Starter Rack. The corresponding tasks for ExoGENI and InstaGENI racks differ, but accomplish similar functions.

Get Starter rack Accounts

Requesting an account

Site operators should contact to request sudo-capable login accounts on the Starter rack hosts by providing:

  • Preferred username
  • Preferred fullname
  • SSH public key for remote login
  • Hashed password for sudo obtained by running:
    openssl passwd -1
    and typing a password twice. The resulting string should be of the form: $1$xxxxxxxx$xxxxxxxxxxxxxxxxxxxxxx
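The two credentials above can be generated as sketched below. The filenames, key comment, and example password are placeholders; with no -salt argument, openssl prompts for the password twice as described above, and a fixed salt is used here only to keep the example non-interactive.

```shell
# SSH keypair for remote login -- send only the .pub file.
ssh-keygen -t rsa -b 2048 -f ./starter_rack_key -N "" -C "siteadmin@example.edu"

# MD5-crypt password hash for sudo.
HASH=$(openssl passwd -1 -salt abcdefgh "example-password")
echo "$HASH"    # prints a string of the form $1$abcdefgh$...
```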

Policies for Unix account use

  • Remote account access is via public-key SSH only (no password-based login).
  • Do not run interactive sessions as root (don't use sudo bash, but instead run individual commands under sudo for logging).
  • Do not share account credentials. We are happy to create individual accounts, or to give staffers who don't have logins access to our emergency account for outage debugging.
  • GPO staffers actively manage these systems using the puppet configuration management utility. If you need to modify a system, please e-mail us at to ensure that the desired change takes effect.

Accounts on non-Unix rack devices

Please contact if you need login access to:

  • Control router or dataplane switch
  • IP KVM for remote console access
  • PDU for remote power control

Access Device Consoles

Compute Resource Consoles

  • The fold-out console in the rack can be used to view the consoles of any of the hosts in the rack.
  • The KVM hotkey for changing which device is displayed is Ctrl Ctrl.

Network Devices Consoles

The monitor1 node in each rack can be used as a serial console for network devices located in that rack.

  • Log in to monitor1 using the console
  • Use screen to access the desired serial device, e.g.:
    screen /dev/ttyS0
  • When done, kill the screen session by pressing: Ctrl-a k

Monitoring Starter rack Health

Service Health

GPO uses Nagios as a front-end for alerting about rack problems. The following services are monitored in the Starter Racks:

  • Resource problems with CPU, swap, or disk space on each host.
  • IP connectivity failures from the rack server to commodity internet (Google) and to the GPO lab.
  • Excessive CPU usage and excessive uplink broadcast traffic on the experimental switch.
  • Problems with standard experimental use of the Eucalyptus aggregate.
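The resource checks in the first bullet follow the usual Nagios plugin convention of printing a status line and returning a matching exit code. A minimal sketch of a disk-space check in that style is shown below; this is an illustration only, not the GPO's actual plugin, and the threshold is hypothetical.

```shell
# Report WARNING (exit 1) if the filesystem holding $1 is at or above
# $2 percent used, otherwise OK (exit 0), Nagios-plugin style.
check_disk() {
    mount=$1
    warn_pct=$2
    # Column 5 of POSIX df output is the use percentage, e.g. "42%".
    used=$(df -P "$mount" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$used" -ge "$warn_pct" ]; then
        echo "WARNING: $mount at ${used}% used (threshold ${warn_pct}%)"
        return 1
    fi
    echo "OK: $mount at ${used}% used"
    return 0
}

check_disk / 100
```

A real deployment would invoke such a check from the Nagios scheduler (e.g. via NRPE) rather than by hand.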

The current state of monitored hosts and services at a given city can be viewed at:

If you would like to be added to any of these notifications, please contact us at

Compute Resources Health

Unix hosts report system health information via ganglia to the GPO Monitoring Server:

Network Devices Health

Network devices are polled for system health via SNMP, and that data is also available at the GPO Monitoring Server:

If you need read-only SNMP access to the network devices in a Starter rack, please contact

Perform an experiment in your Starter rack

1. In this example we launch 2 VM instances from the same image; it is also possible to launch 2 separate instances from different images:

$ euca-run-instances -k mykey -n 2 emi-05AC15E0
RESERVATION     r-47F80755      agosain agosain-default
INSTANCE        i-45E007BF      emi-05AC15E0 pending mykey   0               m1.small        2011-10-21T02:06:22.451Z   cha-euca        eki-8F5A137E    eri-CB4F1461
INSTANCE        i-335C067F      emi-05AC15E0 pending mykey   1               m1.small        2011-10-21T02:06:22.453Z   cha-euca        eki-8F5A137E    eri-CB4F1461

2. Log in to the VMs. When connecting to your instance you must use the private key from the Eucalyptus keypair you created above; the -i flag specifies the private key. Each image also has a designated username for its instances. For the Ubuntu 10.04 (Lucid) image the username is "ubuntu", so the complete ssh command for this image is:

$ ssh -i mykey.priv ubuntu@
$ ssh -i mykey.priv ubuntu@

3. Now that the VMs are running, you can use an iperf client/server setup to exchange traffic between the two VMs. First, install the iperf application on both VMs:

sudo apt-get install iperf

Then, start the iperf server:

ubuntu@ip-10-153-0-67:~$ iperf -s
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
[  4] local port 5001 connected with port 52930
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-30.0 sec  1.92 GBytes    549 Mbits/sec

4. Then, connect to the private IP address of the other VM by starting the iperf client:

ubuntu@ip-10-153-0-66:~$ iperf -c -t 30
Client connecting to, TCP port 5001
TCP window size: 16.0 KByte (default)
[  3] local port 52930 connected with port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  1.92 GBytes    549 Mbits/sec

5. Terminate your VM instances after you have completed your tests:

euca-terminate-instances i-38E807A1

Install a VM image on your Starter rack

The following procedure outlines an experimenter's view of using the Starter rack Eucalyptus VMs as a resource for an experiment.

To request an account for a GENI Starter Rack send an email request to including the following details:

  • Preferred username and full name.
  • SSH public key for remote login to rack resources.
  • An MD5 hash of the password for sudo use, generated by running openssl passwd -1.
  1. Install Euca2ools (where???), which are command-line tools for interacting with the Eucalyptus open-source cloud-computing infrastructure.
      $ sudo apt-get install euca2ools 
  2. Install Euca credentials. These credentials can be downloaded as a package from your Eucalyptus web site. If you do not have an account, you can request one at ???? Once the account is verified and approved, go to the "Credentials" tab. In the "Credentials ZIP-file" section, click the "Download Credentials" button. Locate the downloaded zip file (the location depends on your OS and web browser) and move it to a working directory.
  3. Unpack the credentials and source the environment:
      $ mkdir ~/euca
      $ mv ~/Downloads/ ~/euca
      $ cd ~/euca
      $ unzip
      $ . eucarc
  4. Add firewall rules to your euca instance; in the example below, ssh and ping are allowed:
      $ euca-authorize -P tcp -p 22 -s default
      $ euca-authorize -P icmp -t -1:-1 -s default
  5. Generate a key pair to connect to the euca instance:
      $ euca-add-keypair mykey > mykey.priv
      $ chmod 600 mykey.priv
  6. Show the available images, then start a euca instance with your newly generated key:
      $ euca-describe-images   # show list of available images
      IMAGE	emi-48AA122D  ubuntu-9.04/ubuntu.9-04.x86-64.img.manifest.xml	chaos	available  public  x86_64	machine	
      IMAGE	emi-62E51726  ubuntu-10.04/lucid-server-cloudimg-amd64.img.manifest.xml	tmitchel  available  public  x86_64 machine		
      $ euca-run-instances -k mykey emi-62E51726
  7. Set a public address for the euca VM created above by requesting that an address be allocated and then assigning it to the specific euca instance:
      $ euca-allocate-address    # will show address that is allocated to you
      $ euca-associate-address -i i-38E807A1  
  8. You may now connect to the Euca VM:
      $ ssh -i mykey.priv ubuntu@ 

Your Euca instance may now be used to run an experiment.

Email for GENI support or email me with feedback on this page!