[[PageOutline]]

= Testing OpenFlow in the GENI backbone =

This page describes a procedure for testing that an experimenter can allocate resources in the GENI OpenFlow backbone, and send traffic through them. We might want to do this to test a FOAM or FlowVisor upgrade at Internet2 or NLR, for example.

Pre-requisites/assumptions:

 * You're only testing the OpenFlow backbone in Internet2 and NLR, and not any GENI core resources at regionals.
 * You'll use 10.42.17.0/24 as the IP subnet to test with; this is normally reserved to jbs@bbn.com, so either check with him before using it, or use a different subnet. (The [wiki:NetworkCore/SubnetReservations GENI Network Core subnet reservations page] has a list of available (and reserved) subnets.)
 * You're using mesoscale MyPLC plnodes at BBN (via Internet2) and Wisconsin (via NLR), which have dataplane addresses already configured on this subnet, to send and receive traffic via VLAN 3716 in the core.
 * You're set up to use Omni in general, and have it in your path as 'omni', i.e. you've successfully used Omni to reserve GENI resources (and have the latest version of the software, a user credential signed by a CA that's trusted by the I2 and NLR FOAM aggregates and the GPO Lab MyPLC aggregate, a working omni_config file, etc.). An optional sanity check is sketched after this list.
 * You have valid request rspec files to test with. There are example rspecs [source:trunk/wikifiles/geni-backbone-test here], but NOTE that they do not contain a valid e-mail address, so they MUST be modified to include one (as of FOAM 0.6.3), and MAY need to be modified to point to a different controller, if you use something other than what's below for $ofctrlhost and $ofctrlport.
 * You know what output to expect when common things succeed. (Expected output is described here only in cases where an error is the expected result, or where it otherwise might not be obvious; if you think the expected output isn't obvious for any given step, let us know.)
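Optionally, before reserving anything, you can sanity-check your Omni setup by asking one of the FOAM aggregates for its version information. This doesn't reserve any resources; if it succeeds, Omni can at least reach and talk to that aggregate. (This is a minimal sketch, using the BBN FOAM URL; any of the aggregate manager URLs listed in the Setup section below will do.)

{{{
omni -a https://foam.gpolab.bbn.com:3626/foam/gapi/1 getversion
}}}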
= Procedure =

Here's the full procedure for doing this.

== Setup ==

This section sets some variables that later sections use. The values below are the ones JBS uses when running this test at BBN; you may need to change some of them for your own use.

Set your slicename:

{{{
slicename=jbs17
}}}

Set the names of your rspec files (if needed, modify these examples (which are BBN-specific) to use the path to your rspec files):

{{{
foam_bbn_rspec=~/rspecs/request/$slicename/openflow-bbn-$slicename.rspec
foam_wisconsin_rspec=~/rspecs/request/$slicename/openflow-wisconsin-$slicename.rspec
foam_internet2_rspec=~/rspecs/request/$slicename/openflow-internet2-$slicename.rspec
foam_nlr_rspec=~/rspecs/request/$slicename/openflow-nlr-$slicename.rspec
myplc_bbn_rspec=~/rspecs/request/misc/myplc-bbn-all.rspec
}}}

Set the URLs of the aggregate managers:

{{{
foam_bbn_am=https://foam.gpolab.bbn.com:3626/foam/gapi/1
foam_wisconsin_am=https://foam.wail.wisc.edu:3626/foam/gapi/1
foam_internet2_am=https://foam.net.internet2.edu:3626/foam/gapi/1
foam_nlr_am=https://foam.nlr.net:3626/foam/gapi/1
myplc_bbn_am=https://myplc.gpolab.bbn.com:12346/
}}}

Set the name, address, and port of the OpenFlow control host:

{{{
ofctrlhost=navis.gpolab.bbn.com
ofctrlport=33017
oflaviport=11017
}}}

Set the names of the source and destination test hosts:

{{{
src_host=ganel.gpolab.bbn.com
dst_host=wings-openflow-2-net-10-42-17.wisc.dataplane.geni.net
}}}

Set the MyPLC username you'll need to log in to your plnodes:

{{{
username=pgenigpolabbbncom_$slicename
}}}

== Get resources ==

Create your slice, and make sure it won't expire for a few days:

{{{
omni createslice $slicename
omni renewslice $slicename "$(date -d 'now + 3 days')"
}}}

Create FOAM slivers at BBN, Wisconsin, Internet2, and NLR:

{{{
omni -a $foam_bbn_am createsliver $slicename $foam_bbn_rspec
omni -a $foam_wisconsin_am createsliver $slicename $foam_wisconsin_rspec
omni -a $foam_internet2_am createsliver $slicename $foam_internet2_rspec
omni -a $foam_nlr_am createsliver $slicename $foam_nlr_rspec
}}}

Create a MyPLC sliver at BBN:

{{{
omni -a $myplc_bbn_am createsliver $slicename $myplc_bbn_rspec
}}}

NOTE that you may need to wait until your FOAM slivers are approved; you should get e-mail from each of the four FOAM aggregates when they are. Until you do, you can continue with the next section ("Set up your OpenFlow controller"), but not beyond that.

== Set up your OpenFlow controller ==

Log in to your OpenFlow controller host:

{{{
ssh $username@$ofctrlhost
}}}

All the rest of the commands in this section are run on that host.

Add to your .bash_profile some of the variables that you set earlier, so you'll have them whenever you log in here:

{{{
echo "export ofctrlport=33017" >> .bash_profile
echo "export oflaviport=11017" >> .bash_profile
. .bash_profile
}}}

These next steps to download and install NOX are copied from http://groups.geni.net/geni/wiki/HowTo/RunNoxInFedora8. It may take an hour or more to complete all of these steps.

Install the Development Tools packages:

{{{
sudo yum groupinstall "Development Tools"
}}}

Install other packages that NOX depends on:

{{{
sudo yum install python-twisted libpcap-devel git-core openssl-devel boost-devel python-simplejson
}}}

Download and install GCC 4.2.4:

{{{
cd
wget http://mirrors-us.seosue.com/gcc/releases/gcc-4.2.4/gcc-4.2.4.tar.bz2
tar -xvjpf gcc-4.2.4.tar.bz2
mkdir gcc-obj
cd gcc-obj
../gcc-4.2.4/configure
make
sudo make install
}}}

That may take half an hour or so.

Download and install NOX:

{{{
cd
git clone git://noxrepo.org/nox
cd nox
./boot.sh
mkdir ../nox_build
cd ../nox_build
../nox/configure
make -j 5
}}}

That may take another half hour or so.
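The controller has to keep running while you wait for sliver approval and run the tests below, so you may want to start it inside a detachable terminal session so that it survives logging out of the controller host. This is optional, and assumes screen is installed there (tmux or a backgrounded process would work equally well):

{{{
screen -S nox-$ofctrlport
# run the nox_core command from the next step inside this session;
# detach with ctrl-a d, and reattach later with: screen -r nox-$ofctrlport
}}}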
Start up the controller:

{{{
cd ~/nox_build/src
./nox_core --info=$HOME/nox-${ofctrlport}.info -i ptcp:$ofctrlport switch lavi_switches lavi_swlinks jsonmessenger=tcpport=$oflaviport,sslport=0
}}}

Your controller is now ready and listening; to do anything further, you need to wait until your FOAM slivers are approved.

== Run tests ==

Once your FOAM slivers are approved, you can confirm that the backbone OpenFlow switches are connecting to your controller, and test that you can send traffic on your subnet.

=== Confirm connections ===

Log in to your OpenFlow controller host:

{{{
ssh $username@$ofctrlhost
}}}

On your OpenFlow controller host, create a patched version of the nox-console program:

{{{
cd
wget http://groups.geni.net/geni/export/HEAD/trunk/wikifiles/geni-backbone-test/nox-console.patch -O nox-console.patch
mkdir -p bin
patch -o ~/bin/nox-console ./nox/src/scripts/nox-console.py nox-console.patch
chmod 755 ~/bin/nox-console
rm nox-console.patch
}}}

On your OpenFlow controller host, use nox-console to display a sorted list of DPIDs that are connected to your controller:

{{{
nox-console -n localhost -p $oflaviport getnodes | sort
}}}

This should produce output like:

{{{
00:00:0e:84:40:39:18:1b
00:00:0e:84:40:39:18:58
00:00:0e:84:40:39:19:96
00:00:0e:84:40:39:1a:57
00:00:0e:84:40:39:1b:93
06:d6:00:12:e2:b8:a5:d0
06:d6:00:21:f7:be:8d:00
06:d6:00:23:47:cc:44:00
06:d6:00:24:a8:c4:b9:00
0e:84:00:23:47:c8:bc:00
0e:84:00:23:47:ca:bc:40
0e:84:00:24:a8:d2:48:00
0e:84:00:24:a8:d2:b8:40
0e:84:00:26:f1:40:a8:00
}}}

The first five are from Internet2, the last five are from NLR, and the middle four are from BBN, Wisconsin, Wisconsin, and BBN, respectively.

=== Test connectivity ===

Log in to the source test host:

{{{
ssh $username@$src_host
}}}

Add to your .bash_profile some of the variables that you set earlier, so you'll have them whenever you log in here:

{{{
echo "dst_host=wings-openflow-2-net-10-42-17.wisc.dataplane.geni.net" >> .bash_profile
. .bash_profile
}}}

Ping the destination test host:

{{{
ping -c 10 $dst_host
}}}

The first few packets may take a second or two; the rest should have a consistent RTT.

== Cleanup ==

Stop your OF controller (with ctrl-c), and log out of any MyPLC plnodes that you're still logged in to.

Delete your slivers:

{{{
omni -a $foam_bbn_am deletesliver $slicename
omni -a $foam_wisconsin_am deletesliver $slicename
omni -a $foam_internet2_am deletesliver $slicename
omni -a $foam_nlr_am deletesliver $slicename
omni -a $myplc_bbn_am deletesliver $slicename
}}}
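Optionally, you can confirm that an aggregate no longer holds a sliver for your slice by asking it for sliver status; after a successful deletion this should report an error or no sliver, rather than an active one. (A sketch for the BBN FOAM aggregate; repeat with the other aggregate URLs as desired.)

{{{
omni -a $foam_bbn_am sliverstatus $slicename
}}}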