wiki:PlasticSlices/OFStatus

Version 24 (modified by Josh Smift, 13 years ago) (diff)


The Plastic Slices project plan calls for each campus to "Configure a local OpenFlow network consistent with the recommended campus topology, connected to the GENI network core." and "Provide a GENI AM API compliant Expedient aggregate managing the OpenFlow network connecting the MyPLC hosts to the GENI network core.", and for Internet2 and NLR to "Provide an Expedient aggregate managing the OpenFlow resources in the Internet2 portion of the GENI network core."

Status

This table is tracking the status of those items, divided by deadline.

Campus      Deadline    CT AM CA LR RS SL FV NT XT  Notes
BBN         2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Clemson     2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Stanford    2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Internet2   2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
NLR         2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Rutgers     2011-05-02  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Washington  2011-05-02  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Wisconsin   2011-05-02  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
GT          2011-05-09  -- AM -- -- -- -- -- -- --  Still setting up a new Expedient server.
Indiana     2011-05-09  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
  • CT: the campus topology is set up (or the backbone topology, for NLR and Internet2)
  • AM: omni can connect to the AM at all
  • CA: the AM includes the pgeni.gpolab.bbn.com CA cert
  • LR: listresources returns the expected results
  • RS: we have an rspec that we think is correct
  • SL: we've created a sliver with that rspec
  • FV: the switches in that sliver are visible with 'fvctl listDevices' on naxos
  • NT: internally tested within a site
  • XT: externally tested with other sites

Testing

Assuming the MyPLC plnodes are all up and running, here are the commands that Josh has been using to test inter-campus connections. ('shmux' is a utility for running a command on multiple remote hosts and aggregating the output.)

For any of these tests, set $subnet and $username to control which subnet and slice you want to test:

subnet=101
username=pgenigpolabbbncom_plastic$subnet

These tests rely on ping having a non-zero exit status when it gets a 100% failure rate. (With iputils ping on Linux, the exit status is 0 if at least one reply is received, and 1 if no replies are received at all.)
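As a sketch of that dependence (check_reach is a hypothetical helper, not part of the test setup; shmux reports a child as failed exactly when this exit status is non-zero):

```shell
# Sketch: the tests depend on ping's exit status, not its output.
# iputils ping exits 0 if at least one reply arrives, non-zero otherwise.
check_reach() {
  if ping -c 3 "$1" > /dev/null 2>&1; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}
```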

These tests will produce some failures for slices that don't include every campus, of course. plastic-101 and plastic-102 include every campus; the former uses VLAN 3715 and the latter VLAN 3716, so they're likely candidates for basic connectivity testing.

Test anything

This tests only that each of the BBN MyPLC plnodes can reach the others on a subnet, which is a useful way to tell whether anything at all is working on that subnet.

for i in {51..55} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com ; done

Test everything

Each of these lines tries to ping each of the allocated IP addresses on the whole subnet, from all of the MyPLC plnodes at one of the sites (e.g. "can I reach everything from BBN").

for i in {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@planetlab{4,5}.clemson.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1,2}.cip.gatech.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@pl{4,5}.myplc.grnoc.iu.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@orbitplc{1,2}.orbit-lab.org ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@of-planet{1..4}.stanford.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@pl0{1,2}.cs.washington.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112} ; do ipaddr=10.42.$subnet.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@wings-openflow-{2,3}.wail.wisc.edu ; done

Each of these lines does the converse of that: ping each of the allocated IP addresses for the subnet at one site, from all the MyPLC plnodes at all of the sites (e.g. "can I reach BBN from everywhere").

for ipaddr in 10.42.$subnet.{51..55} ; do echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com $username@planetlab{4,5}.clemson.edu $username@plnode{1,2}.cip.gatech.edu $username@pl{4,5}.myplc.grnoc.iu.edu $username@orbitplc{1,2}.orbit-lab.org $username@of-planet{1..4}.stanford.edu $username@pl0{1,2}.cs.washington.edu $username@wings-openflow-{2,3}.wail.wisc.edu ; done
for ipaddr in 10.42.$subnet.{72..73} ; do echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com $username@planetlab{4,5}.clemson.edu $username@plnode{1,2}.cip.gatech.edu $username@pl{4,5}.myplc.grnoc.iu.edu $username@orbitplc{1,2}.orbit-lab.org $username@of-planet{1..4}.stanford.edu $username@pl0{1,2}.cs.washington.edu $username@wings-openflow-{2,3}.wail.wisc.edu ; done
for ipaddr in 10.42.$subnet.{80..81} ; do echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com $username@planetlab{4,5}.clemson.edu $username@plnode{1,2}.cip.gatech.edu $username@pl{4,5}.myplc.grnoc.iu.edu $username@orbitplc{1,2}.orbit-lab.org $username@of-planet{1..4}.stanford.edu $username@pl0{1,2}.cs.washington.edu $username@wings-openflow-{2,3}.wail.wisc.edu ; done
for ipaddr in 10.42.$subnet.{90..93} ; do echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com $username@planetlab{4,5}.clemson.edu $username@plnode{1,2}.cip.gatech.edu $username@pl{4,5}.myplc.grnoc.iu.edu $username@orbitplc{1,2}.orbit-lab.org $username@of-planet{1..4}.stanford.edu $username@pl0{1,2}.cs.washington.edu $username@wings-openflow-{2,3}.wail.wisc.edu ; done
for ipaddr in 10.42.$subnet.{95..96} ; do echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com $username@planetlab{4,5}.clemson.edu $username@plnode{1,2}.cip.gatech.edu $username@pl{4,5}.myplc.grnoc.iu.edu $username@orbitplc{1,2}.orbit-lab.org $username@of-planet{1..4}.stanford.edu $username@pl0{1,2}.cs.washington.edu $username@wings-openflow-{2,3}.wail.wisc.edu ; done
for ipaddr in 10.42.$subnet.{100..101} ; do echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com $username@planetlab{4,5}.clemson.edu $username@plnode{1,2}.cip.gatech.edu $username@pl{4,5}.myplc.grnoc.iu.edu $username@orbitplc{1,2}.orbit-lab.org $username@of-planet{1..4}.stanford.edu $username@pl0{1,2}.cs.washington.edu $username@wings-openflow-{2,3}.wail.wisc.edu ; done
for ipaddr in 10.42.$subnet.{104..105} ; do echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com $username@planetlab{4,5}.clemson.edu $username@plnode{1,2}.cip.gatech.edu $username@pl{4,5}.myplc.grnoc.iu.edu $username@orbitplc{1,2}.orbit-lab.org $username@of-planet{1..4}.stanford.edu $username@pl0{1,2}.cs.washington.edu $username@wings-openflow-{2,3}.wail.wisc.edu ; done
for ipaddr in 10.42.$subnet.{111..112} ; do echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com $username@planetlab{4,5}.clemson.edu $username@plnode{1,2}.cip.gatech.edu $username@pl{4,5}.myplc.grnoc.iu.edu $username@orbitplc{1,2}.orbit-lab.org $username@of-planet{1..4}.stanford.edu $username@pl0{1,2}.cs.washington.edu $username@wings-openflow-{2,3}.wail.wisc.edu ; done

This second version is nice because it multiplexes across all the plnodes and iterates across all the IP addresses at one site. These days, we have many more total plnodes than we have addresses at any one site.
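Either form can be folded into a small helper to cut the repetition. This is a sketch: test_from is a hypothetical name, and it assumes shmux is installed and the $subnet and $username variables from above are set.

```shell
# Hypothetical helper wrapping the repeated "test everything from one
# site" loop above. Assumes shmux is installed and that $subnet and
# $username are set as described earlier.
octets="$(echo {51..55} {72..73} {80..81} {90..93} {95..96} {100..101} {104..105} {111..112})"
test_from() {
  local i target
  for i in $octets ; do
    target="10.42.${subnet}.${i}"
    echo -e "\n--> $target"
    shmux -c "ping -c 3 $target > /dev/null" "$@"
  done
}
```

For example, `test_from $username@plnode{1..5}-myplc.gpolab.bbn.com` reproduces the first "test everything" line above.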

Results

Failures look like this:

--> 10.42.101.95
  shmux! Child for pgenigpolabbbncom_jbstest@planetlab4.clemson.edu exited with status 1
  shmux! Child for pgenigpolabbbncom_jbstest@planetlab5.clemson.edu exited with status 1

2 targets processed in 4 seconds.
Summary: 2 errors
Error    : pgenigpolabbbncom_jbstest@planetlab4.clemson.edu pgenigpolabbbncom_jbstest@planetlab5.clemson.edu 

This indicates that the Clemson hosts couldn't ping 10.42.101.95. (At the time we generated this error, this was because they didn't yet have the static ARP entries for the Wisconsin hosts.)
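To pull just the failing targets out of saved shmux output (for example, to re-test only those hosts), something like the following works. failed_hosts is a hypothetical helper, and it assumes the "Error    : ..." summary-line format shown above.

```shell
# Hypothetical helper: list the user@host targets from the "Error : ..."
# summary line of saved shmux output, one per line.
failed_hosts() {
  grep '^Error' "$1" | sed 's/^Error[[:space:]]*:[[:space:]]*//' | tr ' ' '\n' | grep -v '^$'
}
```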