HelloGENI


THIS PAGE IS OUTDATED, PLEASE GO TO this tutorial AND REFER TO Example 2

Feel free to send corrections and suggestions about this page to help@geni.net

Here's a description of a Hello GENI example that uses resources from the Mesoscale GENI deployment. For this example you will need an account on GPO's ProtoGENI cluster; look here for instructions.

For this example we are going to use Omni, a command-line tool that allows you to reserve resources from any Aggregate Manager that supports the GENI AM API. If you have never used Omni before, you should first download, install, and configure it.

Overview

This tutorial will walk you through an example of running an experiment in the Mesoscale GENI deployment. For the purpose of this example we are going to use a simple setup with two hosts at two different sites:

  • a ProtoGENI host at the GPO lab
  • a myPLC host at Clemson

We will use netcat to show that they can talk to each other.

Reserving necessary resources

The GENI Mesoscale deployment has diverse compute resources that are connected over multiple OpenFlow switches. In order to reserve all the needed resources, we should first draw a detailed topology of our experiment. The Mesoscale GENI deployment has two backbone VLANs (3715 and 3716), each providing a different topology. For this example we are going to use VLAN 3715.

Using the topology diagrams, we can draw this detailed topology for this simple example:

Looking at the topology, we see that we will need to reserve resources from different aggregates. In detail, we need resources from:

  • GPO's ProtoGENI Aggregate (we will let ProtoGENI choose a free node for us; all the nodes in GPO's ProtoGENI cluster are identically configured)
  • GPO's OpenFlow Aggregate (switches habanero and poblano are part of GPO's lab)
  • NLR's OpenFlow Aggregate (for the NLR switches)
  • Clemson's OpenFlow Aggregate (for the Clemson switch)
  • Clemson's myPLC Aggregate (for reserving planetlab5.clemson.edu)

Create a slice

It is always good practice to create a slice per experiment, so before we start reserving resources we need to create a slice for this experiment. If you have already set up your omni_config file, then follow these instructions for creating and renewing a slice. By default, slices created with a ProtoGENI Clearinghouse have an expiration time of 4 hours, which is why you need to renew your slice to a time that makes sense for your experiment.

For the purpose of this example, using this trunk/wikifiles/hello-geni/omni_config file, we created a slice named hellogenislice:

> omni.py createslice hellogenislice
> omni.py renewslice hellogenislice 20120408T18:00:00
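If you want to double-check that the renewal took effect, Omni can print the slice expiration (print_slice_expiration is a standard Omni command; adjust the slice name if you chose a different one):

> omni.py print_slice_expiration hellogenislice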

Reserve an IP subnet

In Mesoscale, a layer 3 experiment needs its own separate IP subnet so that it can control all traffic for that subnet without interfering with traffic from other experiments. GPO has set aside 10.42.0.0/16 as a pool of IP subnets for this. These subnets aren't provided by an aggregate, and thus can't be reserved via the GENI AM API. Please DO NOT use an IP subnet that you have not reserved; just request one to be assigned to you.

Email template for reserving IP subnet

Before continuing with the tutorial, please send an email to gpo-infra@geni.net to reserve an IP subnet. This is an example email:

To: gpo-infra@geni.net
From: Geni User <geniuser@example.com>
Subject: Reservation request for an IP subnet

Hi, 
I am going through the Hello Geni example, and I would like an IP subnet assigned to me, 
so that I can complete the tutorial. 
user urn: urn:publicid:IDN+pgeni.gpolab.bbn.com+user+inki, email: inki@pgeni.gpolab.bbn.com
duration: until April 8th, 2012

Thanks, 

To determine your user urn see here for instructions.

For the purpose of this example we will assume that we got assigned IP subnet 10.42.130.0/24.

Create slivers

For each aggregate we need to create a sliver that will contain the necessary resources. For every aggregate it is important to know the AM URL for requesting the resources. When configuring Omni, you should have included a set of nicknames so that you don't have to memorize long URLs. The example trunk/wikifiles/hello-geni/omni_config file has a list of all the AMs along with assigned nicknames; if you include a similar section in your omni_config file, then you can refer to AMs by their nicknames. For the rest of this example we are going to use the nicknames specified in the example trunk/wikifiles/hello-geni/omni_config file. If there is an aggregate whose URL you can't find, please email help@geni.net.
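For reference, aggregate nicknames live in a small INI-style section of omni_config. This is a minimal sketch of what such a section looks like; the URLs below are illustrative placeholders, so copy the real ones from the example omni_config file:

[aggregate_nicknames]
# format: nickname=urn,url -- the urn part may be left empty
pg-gpo=,https://www.pgeni.gpolab.bbn.com/protogeni/xmlrpc/am
plc-clemson=,http://myplc.clemson.edu:12346/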

Clemson myPLC sliver

From this page we get this aggregate's URL, and we find its nickname in our trunk/wikifiles/hello-geni/omni_config file (in this case it is plc-clemson). Using this rspec file, we can reserve a sliver on planetlab5:

omni.py createsliver -a  plc-clemson hellogenislice myplc-clemson-hellogeni.rspec

The output should look like this.

Use sliverstatus to figure out the login name:

> omni.py sliverstatus -n -a plc-clemson hellogenislice

Look for the attribute 'pl_login'. For this example the login name is 'pgenigpolabbbncom_hellogenislice'.
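Rather than scanning the whole status output by eye, you can filter for the field directly (this assumes the attribute name appears verbatim in Omni's printed output):

> omni.py sliverstatus -n -a plc-clemson hellogenislice 2>&1 | grep pl_login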

For PlanetLab aggregates, the 'geni_status' field will say 'unknown'. The best way to test whether your resources are ready is trial and error:

ssh -i .ssh/gcf_id_rsa pgenigpolabbbncom_hellogenislice@planetlab5.clemson.edu

Your sliver should be ready within a minute. If after 5 minutes you are still not able to log in to the host, delete your sliver and recreate it. If the problem persists, email help@geni.net.

omni.py deletesliver -n -a plc-clemson hellogenislice

After you log in to the node, you can see all the available interfaces. In order to better support multiple experiments, each myPLC host is already configured with multiple subinterfaces. There should be a subinterface in the IP subnet that you have reserved:

[pgenigpolabbbncom_hellogenislice@planetlab5 ~]$ /sbin/ifconfig |grep 42.130 -A 2 -B 2

eth1.1750:42130 Link encap:Ethernet  HWaddr 00:1B:21:43:A1:E4  
          inet addr:10.42.130.105  Bcast:10.42.130.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

If you can't find an interface on the reserved subnet, please email help@geni.net.

GPO ProtoGENI sliver

Using this rspec file, we can reserve one ProtoGENI node:

omni.py createsliver -n -a pg-gpo hellogenislice protogeni-bbn-hellogeni.rspec

The output should look like this.

Then we can run sliverstatus to find out the status of the sliver:

> omni.py sliverstatus -n -a pg-gpo hellogenislice
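If you prefer not to re-run this command by hand, a small shell loop can poll the 'geni_status' attribute until the sliver is ready; this sketch assumes the status string appears verbatim in Omni's output:

# Poll every 30 seconds until the sliver reports ready
while ! omni.py sliverstatus -n -a pg-gpo hellogenislice 2>&1 | grep -q "'geni_status': 'ready'"; do
    sleep 30
done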

Toward the end of the manifest there is an attribute 'geni_status'; when this says 'ready' we can log in to the host:

ssh -i ~/.ssh/gcf_id_rsa inki@pc11.pgeni.gpolab.bbn.com

Configuring OpenFlow interface on PG Host

In Tango GENI, to provide maximum flexibility to the experimenter, the port on the OpenFlow switch to which the host is connected is configured as a trunk port that carries multiple VLANs. This allows the experimenter, using VLAN subinterfaces, to configure the host to be part of any of the configured VLANs. For this example we are going to use VLAN 1750, which is cross-connected to 3715 (for more details about this setup see XXX). For a list of all available VLANs, see GPO's OpenFlow Aggregate Page. Before continuing we need to configure eth3. The IP subnet assigned to us is 10.42.130.0/24. Although you can assign to your PG host any IP address that is not used in your experiment, the convention for the PG hosts of the GPO lab is that the last octet of the IP address is 200 plus the pc number. Since we got pc11, the IP address that should be assigned is 10.42.130.211.

Follow these steps to configure the host:

  1. Add these lines at the end of /etc/network/interfaces (edit this file with sudo):
       auto eth3.1750
       iface eth3.1750 inet static
           address 10.42.130.211
           netmask 255.255.255.0
           mtu 1500

  2. Install the vlan package:
        sudo apt-get install vlan

  3. Reboot the machine:
       sudo init 6 && exit


It is important to configure the host before reserving the OpenFlow resources that are directly associated with this host.
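Once the machine is back up, it is worth verifying that the subinterface came up with the address we assigned; the interface name and address below are the ones used in this example:

# The subinterface should be UP with the address configured above
/sbin/ifconfig eth3.1750
# The local address should answer even before the OpenFlow slivers exist
ping -c 3 10.42.130.211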

OpenFlow Slivers

After we have reserved and configured all the compute resources, we are ready to reserve the OpenFlow resources. This example spans three different OpenFlow aggregates: GPO, Clemson, and NLR. The first thing we need to figure out is where to run the OpenFlow controller. The OpenFlow controller is the software that decides how packets that belong to our sliver should be handled. The host running the OpenFlow controller listens on a TCP port and should be reachable by the OpenFlow resources at the sites (i.e. any firewalls in front of your controller must permit your TCP port). For the purpose of this example the controller will run at example.geni.net on TCP port 9933.
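Once your controller is running (see the "Run the experiment" section below), a quick way to verify that it is reachable through any firewalls is a zero-I/O netcat probe from one of your hosts; example.geni.net and port 9933 are the illustrative values used in this example:

nc -zv example.geni.net 9933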

Note that for some switches, the OpenFlow port number is not the same as the physical port name. The best way to figure out which port number you need is through listresources: the advertisement rspecs contain both the physical port name and the OpenFlow port number. For example, running listresources against the OpenFlow AM at GPO will give this output:

omni.py listresources -a of-gpo -V 1 -o
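The -o flag saves the advertisement to a file whose name Omni prints when it finishes. One way to scan the saved file for port entries (a rough sketch; the exact element names depend on the rspec version the aggregate returns):

# Replace the filename with the one Omni reports when it saves the rspec
grep -i port rspec-of-gpo-*.xml | less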

GPO OpenFlow Sliver

Based on the topology diagram above, at the GPO aggregate we would like to reserve resources related to IP subnet 10.42.130.0/24 on the two GPO switches: ports 1 and 47 of the first switch and ports 20 and 3 of the second. This is an rspec that requests these resources. From GPO's aggregate page we can also get the AM URL.
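For orientation, an OpenFlow request rspec pairs a controller with the datapaths, ports, and flowspace you want. The sketch below is hypothetical, loosely following the GENI OpenFlow rspec extension; the attached openflow-bbn-hellogeni.rspec file is the authoritative version, and element names vary across rspec versions:

<!-- Hypothetical sketch only; use the attached rspec file -->
<openflow:sliver email="geniuser@example.com" description="Hello GENI">
  <openflow:controller url="tcp:example.geni.net:9933" type="primary"/>
  <openflow:group name="gpo-switches">
    <!-- datapath component ids come from listresources -->
    <openflow:datapath component_id="urn:...habanero...">
      <openflow:port num="1"/>
      <openflow:port num="47"/>
    </openflow:datapath>
  </openflow:group>
  <openflow:match>
    <openflow:use-group name="gpo-switches"/>
    <openflow:packet>
      <openflow:nw_src value="10.42.130.0/24"/>
      <openflow:nw_dst value="10.42.130.0/24"/>
    </openflow:packet>
  </openflow:match>
</openflow:sliver>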

omni.py createsliver -a of-gpo -n hellogenislice openflow-bbn-hellogeni.rspec

The output of this command should look like this.

Clemson OpenFlow Sliver

Based on the topology diagram above, at the Clemson aggregate we would like to reserve resources related to IP subnet 10.42.130.0/24 on the Clemson switch for ports 43 and 36. This is an rspec that requests these resources. From Clemson's aggregate page we can also get the AM URL.

omni.py createsliver  -a of-clemson hellogenislice openflow-clemson-hellogeni.rspec

The output of this command should look like this.

NLR OpenFlow Sliver

Based on the topology diagram above, at the NLR aggregate we would like to reserve resources related to IP subnet 10.42.130.0/24 on the two NLR switches for ports 25 and 26 on both. However, since we are just going to be using a learning switch controller, we can simply request all the ports on the switches. This is an rspec that requests these resources. From NLR's aggregate page we can also get the AM URL.

omni.py createsliver -n -a of-nlr hellogenislice openflow-nlr-hellogeni.rspec

The output of this command should look like this.

Once we receive confirmation that our slivers have been approved by all three aggregates, we are ready to run our experiment!

Run the experiment

First of all, we need to start the OpenFlow controller so that our traffic is forwarded in the network.

Run OpenFlow controller

There are multiple choices of OpenFlow controllers that you might want to consider. For this example we are going to use the NOX controller with its simple switch module, which will switch traffic for our slice like a layer 2 switch. Start the controller:

./nox_core -i ptcp:9933 switch
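To confirm the controller is actually listening on the expected port, one quick check on the controller host:

netstat -tln | grep 9933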

Install netcat on myPLC resources

MyPLC hosts have very few installed packages, so we need to install the necessary software. First we log in to our host at Clemson:

ssh -i .ssh/gcf_id_rsa pgenigpolabbbncom_hellogenislice@planetlab5.clemson.edu

Once you've logged in, you can install the 'nc' package, to run netcat, which you'll use later:

sudo yum -y install nc

Say hello

First we log in to our hosts from two different terminals:

ssh -i ~/.ssh/gcf_id_rsa inki@pc11.pgeni.gpolab.bbn.com
ssh -i .ssh/gcf_id_rsa pgenigpolabbbncom_hellogenislice@planetlab5.clemson.edu

Run the listener on the myPLC host:

[pgenigpolabbbncom_hellogenislice@planetlab5 ~]$ nc -lk 10.42.130.105 6256

Connect to the listener you just started from your ProtoGENI host:

node0:~> nc 10.42.130.105 6256

Make sure your OpenFlow controller is running. Type "Hello GENI" on the PG host and watch it appear on the myPLC host. Try it both with your controller running and with it stopped to see the difference.

Note that after stopping our controller, we need to wait five seconds without sending any traffic for the flowtable entries in the switches to time out; if traffic is flowing continuously, it will keep flowing even if the controller is not running. Once the existing flowtable entries time out, though, new ones won't be created (and traffic won't flow) while the controller is down.
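One way to observe this is with ping from the PG host; the target address is the myPLC subinterface from this example:

# With the controller stopped, pause all traffic for at least five seconds
# so the flowtable entries expire; pings should then fail until the
# controller is restarted.
ping -c 3 10.42.130.105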

Release resources

It is important to clean up all the resources after running an experiment, since as long as the resources are reserved, other experimenters cannot use them. Although most reservations come with expiration times, we should be considerate and release our resources if our experiment finishes before then.

Release OpenFlow resources

It's important to release the OpenFlow resources before the slice expires. Currently the OpenFlow slivers are not tied to the slice's expiration time and may have an expiration time that exceeds that of the slice, in which case you won't be able to access your resources through Omni.

For each aggregate, delete the OpenFlow sliver:

omni.py deletesliver -a of-gpo hellogenislice
omni.py deletesliver -a of-clemson hellogenislice
omni.py deletesliver -a of-nlr hellogenislice

The output should look like this.
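If you want to double-check that a sliver is really gone, running sliverstatus against the same aggregate should now report that there are no resources (the exact wording varies by aggregate):

omni.py sliverstatus -a of-gpo hellogenislice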

Release GPO's ProtoGENI and Clemson's myPLC

Invoke the deletesliver command:

omni.py deletesliver -a pg-gpo hellogenislice
omni.py deletesliver -a plc-clemson hellogenislice

If the slice has expired, these resources are released automatically, so there is no need to worry about this case.

Time to run your own experiment! Let us know if you need any help.
