
Testing OpenFlow in the GENI backbone

This page describes a procedure for testing that an experimenter can allocate resources in the GENI OpenFlow backbone, and send traffic through them. We might want to do this to test a FOAM or FlowVisor upgrade at Internet2 or NLR, for example.

Pre-requisites/assumptions:

  • You're only testing the OpenFlow backbone in Internet2 and NLR, and not any GENI core resources at regionals.
  • You're using 10.42.17.0/24 as the IP subnet to test with; this is normally reserved for Josh Smift, so either check with him before using it, or use a different subnet. (The GENI Network Core subnet reservations page has a list of available and reserved subnets.)
  • You're using ProtoGENI VMs at BBN (via NLR) and at Utah (via Internet2) to send and receive traffic via VLAN 3716 in the core.
  • You're set up to use Omni in general: you've successfully used Omni to reserve GENI resources before, have a current version of the software (in ~/src/gcf-current in the examples below), have a user credential signed by a CA that's trusted by all of the aggregates where you'll need to create slivers, have a working omni_config file, etc.
  • You have an OpenFlow controller that you can use for this. (FIXME: It'd be better if we supplied instructions to allocate a controller host, and install and run a controller on it, but this is hard to do in a generic way, because the OpenFlow rspecs depend on the controller hostname, which you don't know until you allocate the host. Could be done, but it's more complicated.)
  • You have valid request rspec files to test with. There are example rspecs here, but NOTE that they'll need to be modified to point to your OpenFlow controller.
  • You know what output to expect when common things succeed. (Expected output is described here only in cases where an error is the expected result, or where it otherwise might not be obvious; if you think the expected output isn't obvious for any given step, let us know.)

Procedure

Here's the full procedure for doing this.

Setup

This section sets some variables that later sections use; in particular, ones that you'll need to allocate resources.

The values below are the ones JBS uses when running this test at BBN; you may need to change some of them for your own use.

Set your slicename:

slicename=jbs17

Set the names of your rspec files (these examples are BBN-specific; modify the paths as needed to point to your own rspec files):

foam_bbn_rspec=~/rspecs/request/$slicename/openflow-bbn-$slicename.rspec
foam_bbn_instageni_rspec=~/rspecs/request/$slicename/openflow-bbn-instageni-$slicename.rspec
foam_internet2_rspec=~/rspecs/request/$slicename/openflow-internet2-$slicename.rspec
foam_nlr_rspec=~/rspecs/request/$slicename/openflow-nlr-$slicename.rspec
foam_uen_rspec=~/rspecs/request/$slicename/openflow-uen-$slicename.rspec
protogeni_bbn_rspec=~/rspecs/request/$slicename/protogeni-bbn-instageni-$slicename.rspec
protogeni_utah_rspec=~/rspecs/request/$slicename/protogeni-utah-protogeni-$slicename.rspec
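
Since the FOAM rspecs all need to point at your OpenFlow controller, a quick grep is a handy way to confirm that you've updated each one (this assumes that the controller URL appears on a line containing the word "controller" in each file):

grep -H controller $foam_bbn_rspec $foam_bbn_instageni_rspec $foam_internet2_rspec $foam_nlr_rspec $foam_uen_rspec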

Set the URLs of the aggregate managers:

foam_bbn_am=https://foam.gpolab.bbn.com:3626/foam/gapi/1
foam_bbn_instageni_am=https://foam.instageni.gpolab.bbn.com:3626/foam/gapi/1
foam_internet2_am=https://foam.net.internet2.edu:3626/foam/gapi/1
foam_nlr_am=https://foam.nlr.net:3626/foam/gapi/1
foam_uen_am=https://foamyflow.chpc.utah.edu:3626/foam/gapi/1
protogeni_bbn_am=https://instageni.gpolab.bbn.com:12369/protogeni/xmlrpc/am
protogeni_utah_am=https://www.emulab.net:12369/protogeni/xmlrpc/am
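
To confirm that your Omni setup works and that an aggregate manager is reachable before you create anything, you can run getversion against it; getversion doesn't require a slice. For example, against the BBN FOAM aggregate:

~/src/gcf-current/src/omni.py -a $foam_bbn_am getversion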

Set the target IP address that you'll want to ping. In this example, we use the dataplane IP address of the VM at Utah, which we'll ping from the VM at BBN.

dst_addr=$(grep "ip address" $protogeni_utah_rspec | sed -re 's/.+ip address="([0-9\.]+)".+/\1/')
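
It's worth eyeballing the result to make sure the extraction worked, i.e. that you got a single dotted-quad address rather than nothing or multiple lines:

echo "$dst_addr"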

Get resources

Create your slice, and make sure it won't expire for a few days:

~/src/gcf-current/src/omni.py createslice $slicename
~/src/gcf-current/src/omni.py renewslice $slicename "$(date -d 'now + 3 days')"
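
If you want to double-check the new expiration date, recent versions of Omni include a print_slice_expiration command (the renewslice output also reports the new expiration):

~/src/gcf-current/src/omni.py print_slice_expiration $slicename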

Create FOAM slivers at BBN, BBN InstaGENI, Internet2, NLR, and UEN:

~/src/gcf-current/src/omni.py -a $foam_bbn_am createsliver $slicename $foam_bbn_rspec
~/src/gcf-current/src/omni.py -a $foam_bbn_instageni_am createsliver $slicename $foam_bbn_instageni_rspec
~/src/gcf-current/src/omni.py -a $foam_internet2_am createsliver $slicename $foam_internet2_rspec
~/src/gcf-current/src/omni.py -a $foam_nlr_am createsliver $slicename $foam_nlr_rspec
~/src/gcf-current/src/omni.py -a $foam_uen_am createsliver $slicename $foam_uen_rspec

Create ProtoGENI slivers at BBN and Utah:

~/src/gcf-current/src/omni.py -a $protogeni_bbn_am createsliver $slicename $protogeni_bbn_rspec
~/src/gcf-current/src/omni.py -a $protogeni_utah_am createsliver $slicename $protogeni_utah_rspec

NOTE that your FOAM slivers may need to be approved manually by an admin at each FOAM site; you should get e-mail from each of the five FOAM aggregates when they're approved (automatically or manually). You can also check your controller for connections from the relevant switches, but you won't be able to run tests until all of your slivers are approved.
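
One way to check on a sliver without waiting for e-mail is Omni's sliverstatus command, which should give you some indication of the sliver's state at that aggregate. For example, to check the Internet2 FOAM sliver:

~/src/gcf-current/src/omni.py -a $foam_internet2_am sliverstatus $slicename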

Run tests

Once your FOAM slivers are approved, you can confirm that the backbone OpenFlow switches are connecting to your controller, and test that you can send traffic on your subnet.

OpenFlow controller

We don't provide detailed instructions for running an OpenFlow controller here. In this example (and the example rspecs), the controller is running at naxos.gpolab.bbn.com port 33017, with a LAVI listener on port 11017.

That controller was started like this:

cd ~/nox_build/src
./nox_core --info=$HOME/nox-33017.info -i ptcp:33017 switch lavi_switches jsonmessenger=tcpport=11017,sslport=0

That host has a patched version of the 'nox-console' program; you can get the patch from http://groups.geni.net/geni/export/HEAD/trunk/wikifiles/geni-backbone-test/nox-console.patch, and can then use nox-console to display a sorted list of the DPIDs that are connected to your controller:

nox-console -n localhost -p 11017 getnodes | sort

This should produce output like:

06:d6:00:12:e2:22:63:1d
06:d6:00:12:e2:22:63:38
06:d6:00:12:e2:22:63:6e
06:d6:00:12:e2:22:6f:e5
06:d6:00:12:e2:22:81:42
06:d6:00:12:e2:b8:a5:d0
06:d6:00:24:a8:d2:b8:40
06:d6:84:34:97:c6:c9:00
06:d6:ac:16:2d:f5:2d:00
0e:84:00:12:e2:22:63:1d
0e:84:00:12:e2:22:63:38
0e:84:00:12:e2:22:63:6e
0e:84:00:12:e2:22:6f:e5
0e:84:00:12:e2:22:81:42
0e:84:00:23:47:c8:bc:00
0e:84:00:23:47:ca:bc:40
0e:84:00:24:a8:d2:48:00
0e:84:00:24:a8:d2:b8:40
0e:84:00:26:f1:40:a8:00

In that list, the first five are from Internet2; the next four are from BBN, NLR, BBN InstaGENI, and UEN, respectively; the next five are from Internet2 again; and the last five are from NLR again.
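
If you'd rather count the connected switches than eyeball the list, a quick pipe through wc should report 19 DPIDs for this example:

nox-console -n localhost -p 11017 getnodes | wc -l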

Connectivity

Use Omni's remote-execute functionality to log in to the BBN VM, and ping the dataplane address of the Utah VM:

export PYTHONPATH=~/src/gcf-current/src
~/src/gcf-current/examples/remote-execute.py -a $protogeni_bbn_am $slicename -m "ping -c 10 $dst_addr"

The first few packets may take a second or two while the controller sets up flows in the switches along the path; the rest should have a consistent RTT.

Cleanup

Delete your slivers:

~/src/gcf-current/src/omni.py -a $foam_bbn_am deletesliver $slicename
~/src/gcf-current/src/omni.py -a $foam_bbn_instageni_am deletesliver $slicename
~/src/gcf-current/src/omni.py -a $foam_internet2_am deletesliver $slicename
~/src/gcf-current/src/omni.py -a $foam_nlr_am deletesliver $slicename
~/src/gcf-current/src/omni.py -a $foam_uen_am deletesliver $slicename
~/src/gcf-current/src/omni.py -a $protogeni_bbn_am deletesliver $slicename
~/src/gcf-current/src/omni.py -a $protogeni_utah_am deletesliver $slicename
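
If you want to confirm that the slivers are really gone, you can re-run sliverstatus against each aggregate; each call should now fail, typically with an error indicating that there's no sliver for your slice at that aggregate:

for am in $foam_bbn_am $foam_bbn_instageni_am $foam_internet2_am $foam_nlr_am $foam_uen_am $protogeni_bbn_am $protogeni_utah_am; do
  ~/src/gcf-current/src/omni.py -a $am sliverstatus $slicename
done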