GENI For Everyone

We ran a demo called "GENI For Everyone" at GEC 10, in which attendees reserved MyPLC resources at four GENI meso-scale campuses, live from the demo room floor. Here's some more info about how you can try it for yourself, and some behind-the-scenes information about how we set up the demo.

Do try this at home

You can reproduce this demo yourself! You'll need GENI credentials of your own, and the Omni command-line GENI client, but that's it. (If you don't already have both of those things, see below for more information on how to get them.)

...but note that it won't entirely work yet, because we don't have a way for GENI experimenters to log in to an OpenFlow server and run an OpenFlow controller. See below for more details, but for now, be aware that you can try out most of the steps here, but not actually establish network connectivity between the hosts. Yet! Stay tuned.

GENI experimenter docs

The GENI Experimenter Portal has some good introductory information for experimenters who are new to GENI (and anyone else looking for a high-level overview and links to more information).

The Omni how-to page describes how to get GENI credentials from a clearinghouse, how to use them with the command-line Omni GENI client to reserve resources from multiple aggregates, and how to use those resources in an experiment. To try this particular demo yourself, you'll need to have done the first two steps in particular: (1) get GENI credentials of your own, and (2) install and configure Omni to use your credentials.

Omni configuration

Here's the Omni configuration that we used for this demo (we had two users, 'juan' and 'rico', one for each of the two laptops):

[omni]
default_cf = pgeni_gpolab
users = juan

[pgeni_gpolab]
type = pg
verbose = false
ch = https://pgeni.gpolab.bbn.com:12369/protogeni/xmlrpc/ch
sa = https://boss.pgeni.gpolab.bbn.com:12369/protogeni/xmlrpc/sa
cert = ~/.gcf/juan@pgeni.gpolab.bbn.com.pem
key = ~/.gcf/juan@pgeni.gpolab.bbn.com.pem

[juan]
name = Juan Gecdemo
urn = urn:publicid:IDN+pgeni.gpolab.bbn.com+user+juan
keys = ~/.ssh/id_rsa.pub

We have another Omni configuration example that includes more information about the Omni configuration options.

The Omni overview page in the gcf project wiki has a more detailed technical overview of how Omni works, and the main Omni page there has further details about using it with various clearinghouses and aggregates.

Rspecs

The demo uses four MyPLC rspecs, which are attached to this page. (The behind-the-scenes setup also used another four OpenFlow rspecs for each of the two IP networks; see below for more info about that.)

Demo script

Here's a copy of the docs we used for the demo, which you can go through to try it for yourself.

You'll need to replace <slicename> below with a name for your slice, which can be any unique string you like, ideally something nobody else is likely to pick, perhaps including your username. Note that due to a buggy interaction between Omni and SFA involving slice name length, you should use a short slice name (thirteen characters or fewer should work fine) if you plan to interact with PlanetLab or MyPLC resources.

Create a slice

omni createslice <slicename>

This contacts the pgeni.gpolab.bbn.com clearinghouse (because that's what omni_config is configured to use) and creates your slice, which you'll then use to identify the resources you reserve below.
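
For example, with the 'juan' configuration above, creating a short and distinctive slice might look like this (the name 'juangec10' here is just an illustration; pick your own):

omni createslice juangec10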

Create slivers

omni -n -a https://myplc.gpolab.bbn.com:12346/ createsliver <slicename> ./myplc-bbn-gec10.rspec
omni -n -a https://myplc.cip.gatech.edu:12346/ createsliver <slicename> ./myplc-gt-gec10.rspec
omni -n -a https://myplc.grnoc.iu.edu:12346/ createsliver <slicename> ./myplc-indiana-gec10.rspec
omni -n -a http://nfcm13.stanford.edu:12346 createsliver <slicename> ./myplc-stanford-gec10.rspec

Each of these four commands reserves the resources described in an rspec file; each of the four files describes the resources at one of the four aggregates, and the sliver you create is identified by your slice name. In this demo, all four are MyPLC aggregates, at GPO Lab, Georgia Tech, Indiana, and Stanford. When you run each command, it returns an rspec showing what you've asked for.

Check sliver status

omni -n -a https://myplc.gpolab.bbn.com:12346/ sliverstatus <slicename>
omni -n -a https://myplc.cip.gatech.edu:12346/ sliverstatus <slicename>
omni -n -a https://myplc.grnoc.iu.edu:12346/ sliverstatus <slicename>
omni -n -a http://nfcm13.stanford.edu:12346 sliverstatus <slicename>

This asks each aggregate for the status of your sliver, which gives you more information about the resources you've actually gotten there. For example, you can find out the username to use to log in to the MyPLC VMs you've created. You don't necessarily need to run all four of these, but you can if you're curious.
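
For example, in this walkthrough the login account on each MyPLC VM takes the form you'll see in the rsync and ssh commands below:

pgenigpolabbbncom_<slicename>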

Copy files to your hosts

rsync -av pl-homedir/ pgenigpolabbbncom_<slicename>@ganel.gpolab.bbn.com:
rsync -av pl-homedir/ pgenigpolabbbncom_<slicename>@of-planet2.stanford.edu:

These copy a few files from a local directory named "pl-homedir", into your home directory on each of two of the MyPLC VMs that you've created. In the demo at GEC 10, we installed a custom .bashrc file that doesn't change the title of the 'xterm' windows we'll launch below, and a copy of the 'nc' netcat executable to use to demonstrate connectivity. You don't necessarily need either of these to try the demo yourself, but this would be a good point in the process to copy over any other files that you wanted.

Also, you can use any of the hosts that you've reserved; we just picked two for the demo to make it quick and streamlined, but there's nothing special about ganel and of-planet2.

Log in to your hosts

xterm -bg midnightblue -fg white -geom 77x20-0+0 -T 'GPO Lab' -e ssh pgenigpolabbbncom_<slicename>@ganel.gpolab.bbn.com &
xterm -bg darkred -fg white -geom 77x20-0+294 -T Stanford -e ssh pgenigpolabbbncom_<slicename>@of-planet2.stanford.edu &

These launch a pair of color-coded xterms, each of which logs you in to one of the two hosts you copied files to a moment ago. If you're trying this yourself, you can drop everything but the 'ssh' command; you don't particularly need to launch a new xterm, and of course the color and position are just to look nice on the laptop we were using on the demo room floor.
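
If you skip the xterm wrapper, the two logins are just:

ssh pgenigpolabbbncom_<slicename>@ganel.gpolab.bbn.com
ssh pgenigpolabbbncom_<slicename>@of-planet2.stanford.edu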

Send some traffic

In the GPO Lab window, run

nc -lk 42317

and in the Stanford window, run

nc 10.42.1.51 42317

This runs a netcat listener at GPO Lab, and a netcat client at Stanford connected to it. When you type a line into the client window (and hit return), you should see it show up in the listener window.

...except you won't, because you're not running an OpenFlow controller yet. But you will in just a moment!

netcat is just an example; you can also ping between your hosts, or ssh from one to another, or run traceroute to see that they're on the same VLAN, or whatever else you'd like. Each host has an IP address on the 10.42.1.0/24 subnet; you can find out the relevant address for any of your hosts by doing 'ifconfig eth1:42' on that host. It won't work at this point, but will after the next two steps.
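
For example (a rough sketch; the addresses on your own hosts will differ, so check with ifconfig first), from the Stanford host you could run:

ifconfig eth1:42
ping -c 3 10.42.1.51

to see your own 10.42.1.0/24 address and then ping the GPO Lab host used in the netcat example above.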

Log in to an OpenFlow server

xterm -bg darkgreen -fg white -geom 77x20-0+588 -T OpenFlow -e ssh naxos.gpolab.bbn.com &

This launches another color-coded xterm, which logs you in to a server at GPO Lab.

If you're trying this yourself, you won't be able to do that -- for the demo session, we created accounts on that server for the demo users. We're in the process of setting up a way for GENI experimenters to do this, but haven't yet -- stay tuned!

Run an OpenFlow controller

cd /usr/bin
/usr/bin/nox_core --info=~/nox-4233.info -i ptcp:4233 switch

This runs an instance of the NOX OpenFlow controller, using the 'switch' module to implement a simple learning-switch algorithm. The various OpenFlow switches that your slice uses have been trying to reach this controller; after thirty seconds or so, they'll all have retried and connected, and at that point, traffic will flow.
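
If you're curious whether the switches have connected yet, one generic way to check (not part of the original demo script) is to look for established connections to the controller's listening port on the server:

netstat -tn | grep 4233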

If you're doing this yourself, you can't do this step either, yet.

Send traffic again

Once your controller's running, the failed backlog from the original netcat will come through within a minute or two, if you're patient; or you can kill and restart the netcat client, if you don't want to wait.

If you're doing this yourself, this step won't work either, since you don't actually have a controller running.

Delete slivers

omni -n -a https://myplc.gpolab.bbn.com:12346/ deletesliver <slicename>
omni -n -a https://myplc.cip.gatech.edu:12346/ deletesliver <slicename>
omni -n -a https://myplc.grnoc.iu.edu:12346/ deletesliver <slicename>
omni -n -a http://nfcm13.stanford.edu:12346 deletesliver <slicename>

Once you're done, you should delete your slivers to free up the resources you were using. If you forget, they'll expire eventually, but it's polite to do this when you know you're done.

Behind the scenes

We had to do much less manual setup for this demo than for previous demos, but there were still a number of steps we had to take at the various campuses to get things working -- some preparation so that the demo would run smoothly, and some steps that a GENI experimenter should eventually be able to do themselves, but can't yet. Here's some more info about all of that.

Demo users

We set up two users ahead of time for this demo ('juan' and 'rico', one for each of the two laptops), and got GENI credentials for each of them; we also created an account on our OpenFlow server (naxos) to run the OpenFlow controller.

SFA prep

The SFA aggregate manager on each of the MyPLC instances at the campuses needed to be a known-good version, and needed to be configured in certain ways. The GPO Lab MyPLC Reference Implementation page has all of those steps; particular ones that we needed to double-check on existing sites include:

  • Making sure the site is public. (Without this, a 'listresources' call doesn't list any hosts.)
  • Making sure SFA is configured to trust the pgeni.gpolab.bbn.com Slice Authority. (Without this, an experimenter with a pgeni.gpolab cert can't talk to the SFA MyPLC aggregate manager at all.)
  • Configuring the MyPLC hosts to create slivers quickly. (Without this, there will be a long delay after a createsliver before your resources are available.)
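
As a very rough sketch only (the reference page above is authoritative, and paths and service names vary with the SFA version), the slice-authority trust item amounts to installing the pgeni.gpolab.bbn.com authority certificate where SFA looks for trusted roots and restarting SFA; the filename below is illustrative:

cp pgeni.gpolab.bbn.com.gid /etc/sfa/trusted_roots/
service sfa restart

The other two items are MyPLC-side settings, which the reference page covers.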

VLAN 3715

Each of the MyPLC hosts needed to have a data interface (typically eth1) connected to VLAN 3715, one of the two GENI OpenFlow Network Core VLANs.

IP subinterfaces

The demo uses two IP networks, 10.42.1.0/24 and 10.55.1.0/24, and each MyPLC host needed an address on each of them. This needs to be set up as root on each MyPLC host, so that newly-created slivers will see those interfaces.
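
As a rough sketch (the addresses below are illustrative, and the eth1:55 alias name is our assumption; only eth1:42 appears in the demo script above), bringing up the two subinterfaces as root looks something like:

ifconfig eth1:42 10.42.1.51 netmask 255.255.255.0 up
ifconfig eth1:55 10.55.1.51 netmask 255.255.255.0 up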

Static ARP entries

We didn't want to have to deal with handling broadcast traffic, like ARP requests, so we set up static ARP entries for the hosts. This also needs to be set up as root on each MyPLC host. We put together a config file, and wrote a simple script to use it, to make it easier for MyPLC admins to keep this up to date.
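
The config file and script aren't reproduced here, but the underlying mechanism is just ordinary static ARP entries, one per host per subnet, along the lines of (the MAC address below is a placeholder):

arp -s 10.42.1.51 00:16:3e:00:00:51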

OpenFlow resources

FOAM, the OpenFlow aggregate manager, requires a two-step process to reserve resources:

  1. The experimenter creates their sliver to request the resources.
  2. An OpenFlow admin at each aggregate approves the request.

We didn't want to have to do this live, times six aggregates, at the demo itself, so we pre-provisioned the OpenFlow resources for each network into a separate slice, and just left those in place the whole time. Each OpenFlow sliver points to one of two ports on an OpenFlow server at GPO Lab, so when the demo users log in and start their controllers, the switches re-connect to the appropriate controller, and voila.

Also, NLR and Internet2 didn't actually have an OpenFlow AM set up yet at the time of the demo, so we provisioned those OpenFlow resources by hand in their respective FlowVisors.