PlasticSlices/OFStatus

Version 17 (modified by Josh Smift, 8 years ago): Indiana is close; added them to the test suite.

The Plastic Slices project plan calls for each campus to "Configure a local OpenFlow network consistent with the recommended campus topology, connected to the GENI network core" and "Provide a GENI AM API compliant Expedient aggregate managing the OpenFlow network connecting the MyPLC hosts to the GENI network core", and for Internet2 and NLR to "Provide an Expedient aggregate managing the OpenFlow resources in the Internet2 portion of the GENI network core."

Status

This table tracks the status of those items, grouped by deadline.

Campus      Deadline    CT AM CA LR RS SL FV NT XT  Notes
BBN         2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Clemson     2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Stanford    2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Internet2   2011-04-25  CT AM CA LR RS SL FV -- --  Believed to be ready; need to test (with Rutgers, when they're ready).
NLR         2011-04-25  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Rutgers     2011-05-02  -- -- -- -- -- -- -- -- --  listresources says "500 Internal Server Error".
Washington  2011-05-02  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
Wisconsin   2011-05-02  CT AM CA LR RS SL FV NT XT  Fully armed and operational.
GT          2011-05-09  -- AM CA LR -- -- -- -- --  listresources works, but the topology isn't done yet.
Indiana     2011-05-09  -- AM CA LR RS SL -- -- --  Waiting for an opt-in.
  • CT: the campus topology is set up (or backbone, for NLR and I2)
  • AM: omni can connect to the AM at all
  • CA: the AM includes the pgeni.gpolab.bbn.com CA cert
  • LR: listresources returns the expected results
  • RS: we have an rspec that we think is correct
  • SL: we've created a sliver with that rspec
  • FV: the switches in that sliver are visible with 'fvctl listDevices' on naxos
  • NT: internally tested within a site
  • XT: externally tested with other sites

Testing

Assuming the MyPLC plnodes are all up and running, here are the commands that Josh has been using to test inter-campus connections. (shmux is a utility that runs a command on multiple remote hosts and aggregates the output.)

Set $net (which also determines $username) to control which slice and network you want to test:

net=101
username=pgenigpolabbbncom_plastic$net
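Both the slice username and the 10.42.$net.0/24 test subnet key off $net, so switching slices is a one-variable change. A quick illustration (net=102 here is only an example value, not necessarily a live slice):

```shell
# Both the slice username and the test subnet follow from $net.
# net=102 is only an illustration, not necessarily a live slice.
net=102
username=pgenigpolabbbncom_plastic$net
echo "$username"        # pgenigpolabbbncom_plastic102
echo "10.42.$net.51"    # an address in that slice's test subnet
```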

Test everything:

for i in {51..55} {72..73} {80..81} {90..93} {95..96} {104..105} ; do ipaddr=10.42.$net.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@plnode{1..5}-myplc.gpolab.bbn.com ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {104..105} ; do ipaddr=10.42.$net.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@planetlab{4,5}.clemson.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {104..105} ; do ipaddr=10.42.$net.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@pl{4,5}.myplc.grnoc.iu.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {104..105} ; do ipaddr=10.42.$net.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@of-planet{1..4}.stanford.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {104..105} ; do ipaddr=10.42.$net.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@wings-openflow-{2,3}.wail.wisc.edu ; done
for i in {51..55} {72..73} {80..81} {90..93} {95..96} {104..105} ; do ipaddr=10.42.$net.$i ; echo -e "\n--> $ipaddr" ; shmux -c "ping -c 3 $ipaddr > /dev/null" $username@pl0{1,2}.cs.washington.edu ; done
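The six one-liners differ only in their host list, so they can be folded into one nested loop. Here is a sketch under that assumption (requires bash, for the array and brace expansion); it prints each shmux invocation rather than running it, so it can be sanity-checked without touching the network:

```shell
#!/bin/bash
# Sketch: the per-site one-liners above, factored into one nested loop.
# This version prints each shmux command instead of executing it.
net=101
username=pgenigpolabbbncom_plastic$net

# Brace patterns are kept as strings and expanded with eval below,
# since brace expansion doesn't apply to variables directly.
host_patterns=(
  'plnode{1..5}-myplc.gpolab.bbn.com'
  'planetlab{4,5}.clemson.edu'
  'pl{4,5}.myplc.grnoc.iu.edu'
  'of-planet{1..4}.stanford.edu'
  'wings-openflow-{2,3}.wail.wisc.edu'
  'pl0{1,2}.cs.washington.edu'
)

plastic_ping_cmds () {
  local pattern targets i ipaddr
  for pattern in "${host_patterns[@]}"; do
    targets=$(eval echo "$username@$pattern")
    for i in {51..55} {72..73} {80..81} {90..93} {95..96} {104..105}; do
      ipaddr=10.42.$net.$i
      echo "shmux -c \"ping -c 3 $ipaddr > /dev/null\" $targets"
    done
  done
}

plastic_ping_cmds | head -n 3   # show a sample of the 102 commands
```

To actually run the tests, swap the echo line for the shmux invocation itself.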

This relies on ping exiting non-zero when it gets 100% packet loss. (At least for Linux iputils ping, the man page says it exits with code 1 only if it receives no reply packets at all, so partial packet loss still counts as success here.)
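The loops above depend only on that exit status; here is a self-contained sketch of the branching, with true and false standing in for reachable and unreachable pings (no network involved):

```shell
# probe is a stand-in for one shmux child: it runs its arguments
# (here true/false instead of a real ping) and reports based on the
# exit status, the same signal shmux uses to flag a failed host.
probe() {
  if "$@" > /dev/null 2>&1; then
    echo "reachable"
  else
    echo "UNREACHABLE"
  fi
}

probe true    # stands in for: ping -c 3 <reachable-ip>   -> prints "reachable"
probe false   # stands in for: ping -c 3 <unreachable-ip> -> prints "UNREACHABLE"
```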

Failures look like this:

--> 10.42.101.95
  shmux! Child for pgenigpolabbbncom_jbstest@planetlab4.clemson.edu exited with status 1
  shmux! Child for pgenigpolabbbncom_jbstest@planetlab5.clemson.edu exited with status 1

2 targets processed in 4 seconds.
Summary: 2 errors
Error    : pgenigpolabbbncom_jbstest@planetlab4.clemson.edu pgenigpolabbbncom_jbstest@planetlab5.clemson.edu 

This indicates that the Clemson hosts couldn't ping 10.42.101.95. (In this case, they didn't yet have the static ARP entries for the Wisconsin hosts.)
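When a run produces many failures, the "Error :" summary line is the quickest thing to parse. A sketch that pulls the failing targets out of saved shmux output, using the Clemson sample above as input:

```shell
# Sample shmux output, as shown above, captured into a variable.
shmux_output='
--> 10.42.101.95
  shmux! Child for pgenigpolabbbncom_jbstest@planetlab4.clemson.edu exited with status 1
  shmux! Child for pgenigpolabbbncom_jbstest@planetlab5.clemson.edu exited with status 1

2 targets processed in 4 seconds.
Summary: 2 errors
Error    : pgenigpolabbbncom_jbstest@planetlab4.clemson.edu pgenigpolabbbncom_jbstest@planetlab5.clemson.edu
'

# The "Error    :" line lists every failed target; print one per line.
# Fields: $1 = "Error", $2 = ":", $3.. = the user@host targets.
failed_hosts=$(printf '%s\n' "$shmux_output" |
  awk '/^Error/ { for (i = 3; i <= NF; i++) print $i }')
echo "$failed_hosts"
```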