wiki:GENIRacksHome/ExogeniRacks/AcceptanceTestStatus/EG-EXP-6

Version 53 (modified by lnevers@bbn.com, 12 years ago)


EG-EXP-6: ExoGENI and Meso-scale Multi-site OpenFlow Acceptance Test

This page captures status for the test case EG-EXP-6, which verifies ExoGENI rack interoperability with other meso-scale GENI sites. For overall status see the ExoGENI Acceptance Test Status page. Last update: 09/17/12

Test Status

This section captures the status for each step in the acceptance test plan.

Step State Ticket Comments
Step 1 Complete
Step 2 Complete
Step 3 Complete
Step 4 Complete
Step 5 Complete
Step 6 Complete
Step 7 Complete
Step 8 Complete
Step 9 Complete
Step 10 Complete  Replaced bare metal with VM
Step 11 Complete
Step 12 Complete
Step 13 Complete
Step 14 Complete
Step 15 Complete
Step 16 Complete
Step 17 Complete
Step 18 Complete
Step 19 Complete
Step 20 Complete
Step 21 Complete
Step 22 Complete
Step 23 Complete
Step 24 Complete
Step 25 Complete
Step 26 Complete
Step 27 Complete
Step 28 Complete
Step 29 Complete
Step 30 Complete
Step 31
Step 32 Complete
Step 33 Complete
Step 34
Step 35 Complete
Step 36 Complete
Step 37 Complete


State Legend Description
Pass - Test completed and met all criteria
Pass: most criteria - Test completed and met most criteria; exceptions documented
Fail - Test completed and failed to meet criteria
Complete - Test completed but will require re-execution due to expected changes
Blocked - Blocked by ticketed issue(s)
In Progress - Currently under test


Test Plan Steps

The pgeni.gpolab.bbn.com slice authority was used for the credentials, and the following aggregate manager nicknames were defined in the omni_config file:

#ExoGENI Compute and OF Aggregate Managers
exobbn=,https://bbn-hn.exogeni.net:11443/orca/xmlrpc
exorci=,https://rci-hn.exogeni.net:11443/orca/xmlrpc
exosm=,https://geni.renci.org:11443/orca/xmlrpc
of-exobbn=,https://bbn-hn.exogeni.net:3626/foam/gapi/1
of-exorci=,https://rci-hn.exogeni.net:3626/foam/gapi/1

#Meso-scale Compute and OF Aggregate Managers
of-bbn=,https://foam.gpolab.bbn.com:3626/foam/gapi/1
of-clemson=,https://foam.clemson.edu:3626/foam/gapi/1
of-i2=,https://foam.net.internet2.edu:3626/foam/gapi/1
of-rutgers=,https://nox.orbit-lab.org:3626/foam/gapi/1
plc-bbn=,http://myplc.gpolab.bbn.com:12346/
plc-clemson=,http://myplc.clemson.edu:12346/
pgeni=,https://pgeni.gpolab.bbn.com/protogeni/xmlrpc/am
pg2=,https://www.emulab.net/protogeni/xmlrpc/am/2.0

1. As Experimenter1, request ListResources from BBN ExoGENI, RENCI ExoGENI, and FOAM at NLR Site.

GPO ProtoGENI user credentials for lnevers@bbn.com were used for Experimenter1.

Requested listresources from each of the FOAM aggregates:

$ omni.py listresources -a of-exobbn -o
$ omni.py listresources -a of-bbn -o
$ omni.py listresources -a of-nlr -o
$ omni.py listresources -a of-i2 -o 
$ omni.py listresources -a of-exorci -o

2. Review ListResources output from all AMs

Requested listresources from each of the GENI AM aggregates:

$ omni.py listresources -a plc-bbn -o
$ omni.py listresources -a plc-clemson -o
$ omni.py listresources -a pg2 --api-version 2 -t GENI 3 -o
$ omni.py listresources -a exobbn -o 
$ omni.py listresources -a exorci -o 

3. Define a request RSpec for a VM at the BBN ExoGENI.

Defined an RSpec for one VM on the shared VLAN 1750 in the BBN ExoGENI rack: EG-EXP-6-exp1-exobbn.rspec

4. Define a request RSpec for a VM at the RENCI ExoGENI.

Define an RSpec for one VM on the shared VLAN 1750 in the RENCI ExoGENI rack: EG-EXP-6-exp1-exorci.rspec

5. Define request RSpecs for OpenFlow resources from BBN FOAM to access GENI OpenFlow core resources.

Defined an OpenFlow RSpec for the FOAM controllers in each rack:

The BBN ExoGENI rack connects to the GENI backbone via the BBN GPO Lab OpenFlow switch named poblano. For this scenario to work, OpenFlow must also be configured on the poblano switch to allow BBN ExoGENI rack OF traffic to reach the OpenFlow GENI core. The following OpenFlow RSpec was defined for the BBN site switch poblano: EG-EXP-6-exp1-openflow-bbn.rspec

6. Define request RSpecs for OpenFlow core resources at NLR FOAM.

Defined an OpenFlow RSpec for the NLR FOAM controller: EG-EXP-6-exp1-openflow-nlr.rspec

7. Create the first slice

Created the first slice:

$ omni.py createslice EG-EXP-6-exp1

8. Create a sliver in the first slice at each AM, using the RSpecs defined above.

 $ omni.py -a exobbn createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-exobbn.rspec
 $ omni.py -a of-exobbn createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-openflow-exobbn.rspec
 $ omni.py -a of-bbn createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-openflow-bbn.rspec
 $ omni.py -a of-nlr createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-openflow-nlr.rspec
 $ omni.py -a of-exorci createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-openflow-exorci.rspec
 $ omni.py -a exorci createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-exorci.rspec

Here are the RSpecs used: EG-EXP-6-exp1-exobbn.rspec, EG-EXP-6-exp1-openflow-bbn.rspec, EG-EXP-6-exp1-openflow-nlr.rspec, and EG-EXP-6-exp1-exorci.rspec.

9. Log in to each of the systems, verify IP address assignment. Send traffic to the other system, leave traffic running.

Determine the status of the OpenFlow slivers by checking sliverstatus for each, to make sure that they have been approved. Note that 'geni_status' is 'ready' when the sliver is approved; if the OpenFlow sliver is still waiting for approval, 'geni_status' will be 'configuring':

 $ omni.py -a of-bbn sliverstatus EG-EXP-6-exp1
 $ omni.py -a of-nlr sliverstatus EG-EXP-6-exp1
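The approval check above can be scripted. The following is a minimal sketch, assuming the sliverstatus output has been captured as JSON; the function name is hypothetical and not part of the omni tool:

```python
import json

def sliver_approved(status):
    """Return True when a sliver's geni_status indicates approval.

    'ready' means the sliver is approved; 'configuring' means it is
    still waiting for administrator approval.
    """
    if isinstance(status, str):
        status = json.loads(status)
    return status.get("geni_status") == "ready"

# Example fragments modeled on the sliverstatus fields described above
print(sliver_approved({"geni_status": "ready"}))           # approved
print(sliver_approved({"geni_status": "configuring"}))     # still pending
```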

Determine compute resources allocated to each sliver in the ExoGENI racks. First make sure the sliverstatus is ready:

 $ omni.py -a exobbn sliverstatus EG-EXP-6-exp1
 $ omni.py -a exorci sliverstatus EG-EXP-6-exp1

Once the slivers are ready, get the list of hosts allocated with Omni:

 $ omni.py -a exobbn listresources EG-EXP-6-exp1 -o
 $ omni.py -a exorci listresources EG-EXP-6-exp1 -o 
 $ egrep hostname EG-EXP-6-exp1-rspec-bbn-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="192.1.242.6" port="22" username="root"/> 
 $ egrep hostname EG-EXP-6-exp1-rspec-rci-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="152.54.14.13" port="22" username="root"/>   
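Rather than grepping the saved manifest, the login endpoints can be pulled out with an XML parser. A minimal sketch, using a fragment modeled on the <login> elements shown above (real manifests carry RSpec namespaces, so tags are matched by suffix):

```python
import xml.etree.ElementTree as ET

# Fragment modeled on the manifest RSpec login elements above
manifest = """<rspec>
  <node>
    <login authentication="ssh-keys" hostname="192.1.242.6" port="22" username="root"/>
  </node>
</rspec>"""

def login_hosts(xml_text):
    """Return (username, hostname, port) for every login element."""
    root = ET.fromstring(xml_text)
    return [(e.get("username"), e.get("hostname"), e.get("port"))
            for e in root.iter() if e.tag.endswith("login")]

for user, host, port in login_hosts(manifest):
    print(f"ssh -p {port} {user}@{host}")
```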

Login to BBN VM and send traffic to remote:

 $ ssh root@192.1.242.6
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:5a:92:fe  
          inet addr:10.42.19.27  Bcast:10.42.19.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe5a:92fe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:864 (864.0 B)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.19.25
 PING 10.42.19.25 (10.42.19.25) 56(84) bytes of data.
 64 bytes from 10.42.19.25: icmp_req=1 ttl=64 time=2353 ms
 64 bytes from 10.42.19.25: icmp_req=2 ttl=64 time=1356 ms
 64 bytes from 10.42.19.25: icmp_req=3 ttl=64 time=357 ms
 64 bytes from 10.42.19.25: icmp_req=4 ttl=64 time=172 ms

Login to RENCI VM and send traffic to remote:

 $ ssh root@152.54.14.13
  root@debian:~# ifconfig eth1
  eth1      Link encap:Ethernet  HWaddr 52:54:00:74:38:fd  
          inet addr:10.42.19.25  Bcast:10.42.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fe74:38fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1056 (1.0 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.19.27
 PING 10.42.19.27 (10.42.19.27) 56(84) bytes of data.
 64 bytes from 10.42.19.27: icmp_req=1 ttl=64 time=2408 ms
 64 bytes from 10.42.19.27: icmp_req=2 ttl=64 time=1410 ms
 64 bytes from 10.42.19.27: icmp_req=3 ttl=64 time=410 ms
 64 bytes from 10.42.19.27: icmp_req=4 ttl=64 time=172 ms

10. As Experimenter2, define a request RSpec for one VM and one bare metal node at BBN ExoGENI.

GPO ProtoGENI user credentials for lnevers1@bbn.com were used for Experimenter2.

Requested listresources from each of the FOAM aggregates:

$ omni.py listresources -a of-exobbn -o
$ omni.py listresources -a of-bbn -o
$ omni.py listresources -a of-nlr -o
$ omni.py listresources -a of-exorci -o

Requested listresources from each of the ExoGENI aggregates:

$ omni.py listresources -a exobbn -o 
$ omni.py listresources -a exorci -o 

Defined an RSpec for one VM and one bare metal node on the shared VLAN 1750 in the BBN ExoGENI rack: EG-EXP-6-exp2-exobbn.rspec. It is attached as a reference, even though it could not be used for this test case as described.

The bare metal node is only available via the ExoSM, so in order to preserve the requests to the individual AMs, the RSpec was modified to use two VMs; the new RSpec used is EG-EXP-6-exp2-exobbn-mod.rspec.

11. Define a request RSpec for two VMs on the same worker node at RENCI ExoGENI.

Defined an RSpec for two VMs on the shared VLAN 1750 in the RENCI rack: EG-EXP-6-exp2-exorci.rspec

12. Define request RSpecs for OpenFlow resources from GPO FOAM to access GENI OpenFlow core resources.

The experiment uses the flows defined by the RSpec EG-EXP-6-exp2-openflow-bbn.rspec.

13. Define request RSpecs for OpenFlow core resources at NLR FOAM

The experiment uses the flows defined by the previous experiment, as defined in EG-EXP-6-exp2-openflow-nlr.rspec

14. Create a second slice

$ omni.py createslice EG-EXP-6-exp2

15. Create a sliver in the second slice at each AM, using the RSpecs defined above

Because of the issues described in step 10, two VMs had to be requested on the shared VLAN 1750 at the BBN SM. The request:

$ omni.py -a exobbn createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-exobbn-mod.rspec 

Created a sliver at the RENCI ExoGENI rack aggregate:

$ omni.py -a exorci createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-exorci.rspec 

Created slivers for the network resources required by experiment 2:

$ omni.py -a of-exobbn createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-openflow-exobbn.rspec
$ omni.py -a of-bbn createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-openflow-bbn.rspec
$ omni.py -a of-nlr createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-openflow-nlr.rspec
$ omni.py -a of-exorci createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-openflow-exorci.rspec

16. Log in to each of the systems in the slice

Login to each system and send traffic to each of the other systems; leave traffic running.

Determine compute resources allocated to each sliver in the ExoGENI racks. First make sure the sliverstatus is ready:

 $ omni.py -a exobbn sliverstatus EG-EXP-6-exp2
 $ omni.py -a exorci sliverstatus EG-EXP-6-exp2

Once the slivers are ready, get the list of hosts allocated with Omni:

 $ omni.py -a exobbn listresources EG-EXP-6-exp2 -o
 $ omni.py -a exorci listresources EG-EXP-6-exp2 -o 
 $ egrep hostname EG-EXP-6-exp2-rspec-bbn-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="192.1.242.8" port="22" username="root"/>      
   <login authentication="ssh-keys" hostname="192.1.242.7" port="22" username="root"/>           
 $ egrep hostname EG-EXP-6-exp2-rspec-rci-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="152.54.14.16" port="22" username="root"/>      
   <login authentication="ssh-keys" hostname="152.54.14.14" port="22" username="root"/>     

Login to first BBN VM and send traffic to remote:

 $ ssh root@192.1.242.7
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:0e:77:26  
          inet addr:10.42.18.98  Bcast:10.42.18.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe0e:7726/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1248 (1.2 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.18.15
 PING 10.42.18.15 (10.42.18.15) 56(84) bytes of data.
 64 bytes from 10.42.18.15: icmp_req=1 ttl=64 time=2277 ms
 64 bytes from 10.42.18.15: icmp_req=2 ttl=64 time=1362 ms
 64 bytes from 10.42.18.15: icmp_req=3 ttl=64 time=362 ms
 64 bytes from 10.42.18.15: icmp_req=4 ttl=64 time=172 ms

Login to second BBN VM and send traffic to remote:

 $ ssh root@192.1.242.8
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:9f:cf:e6  
          inet addr:10.42.18.77  Bcast:10.42.18.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe9f:cfe6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1440 (1.4 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.18.15
 PING 10.42.18.15 (10.42.18.15) 56(84) bytes of data.
 64 bytes from 10.42.18.15: icmp_req=1 ttl=64 time=2561 ms
 64 bytes from 10.42.18.15: icmp_req=2 ttl=64 time=1562 ms
 64 bytes from 10.42.18.15: icmp_req=3 ttl=64 time=654 ms
 64 bytes from 10.42.18.15: icmp_req=4 ttl=64 time=171 ms

Login to first RENCI VM and send traffic to remote:

 $ ssh root@152.54.14.14
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:92:84:59  
          inet addr:10.42.18.16  Bcast:10.42.18.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe92:8459/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1440 (1.4 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.18.98
 PING 10.42.18.98 (10.42.18.98) 56(84) bytes of data.
 64 bytes from 10.42.18.98: icmp_req=1 ttl=64 time=4554 ms
 64 bytes from 10.42.18.98: icmp_req=4 ttl=64 time=1610 ms
 64 bytes from 10.42.18.98: icmp_req=2 ttl=64 time=3610 ms
 64 bytes from 10.42.18.98: icmp_req=5 ttl=64 time=710 ms
 64 bytes from 10.42.18.98: icmp_req=3 ttl=64 time=2711 ms
 64 bytes from 10.42.18.98: icmp_req=6 ttl=64 time=172 ms

Login to second RENCI VM and send traffic to remote:

 $ ssh root@152.54.14.16
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:4c:59:50  
          inet addr:10.42.18.15  Bcast:10.42.18.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe4c:5950/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1116 (1.0 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.18.77
 PING 10.42.18.77 (10.42.18.77) 56(84) bytes of data.
 64 bytes from 10.42.18.77: icmp_req=1 ttl=64 time=2263 ms
 64 bytes from 10.42.18.77: icmp_req=2 ttl=64 time=1339 ms
 64 bytes from 10.42.18.77: icmp_req=3 ttl=64 time=394 ms
 64 bytes from 10.42.18.77: icmp_req=4 ttl=64 time=171 ms

17. As Experimenter3, request ListResources from BBN ExoGENI, GPO FOAM, and FOAM at Meso-scale Site

GPO ProtoGENI user credentials for lnevers2@bbn.com were used for Experimenter3.

Requested listresources from each aggregate in the experiment. Note: the WAPG resource at Rutgers is available via the Emulab ProtoGENI aggregate. Executed the following to get resources for each aggregate:

$ omni.py listresources -a exobbn -o
$ omni.py listresources -a of-bbn -o
$ omni.py listresources -a of-i2 -o
$ omni.py listresources -a of-rutgers -o
$ omni.py listresources -a pgeni -o   
$ omni.py listresources -a pg2 --api-version 2 -t GENI 3 -o   

18. Review ListResources output from all AMs

Reviewed content of advertisement RSpecs for each of the aggregates polled in the previous step.

19. Define a request RSpec for a VM at the BBN ExoGENI

Defined RSpec for one VM on the shared VLAN 1750 in the BBN ExoGENI rack: EG-EXP-6-exp3-exobbn.rspec

20. Define a request RSpec for a compute resource at the GPO Meso-scale site

Defined RSpec for one PG Resource at the GPO PG site: EG-EXP-6-exp3-bbn-pgeni.rspec

21. Define a request RSpec for a compute resource at a Meso-scale site

Defined RSpec for one WAPG Resource at the Rutgers site EG-EXP-6-exp3-rutgers-wapg.rspec

22. Define request RSpecs for OpenFlow resources to allow connection from OF BBN ExoGENI to Meso-scale OF sites GPO (NLR) and Rutgers Sites (I2)

Defined the OpenFlow RSpecs for these sites. Note that the BBN OF RSpec overlaps with the one in experiment 1 and is reused here.

23. Create a third slice

$ omni.py createslice EG-EXP-6-exp3

24. Create sliver that connects the Internet2 Meso-scale OpenFlow site to the BBN ExoGENI Site, and the GPO Meso-scale site

Requests are as follows:

 $ omni.py -a exobbn createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-exobbn.rspec  
 $ omni.py -a of-exobbn createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-openflow-exobbn.rspec 
 $ omni.py -a pgeni createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-bbn-pgeni.rspec  ## See Note1
 $ omni.py -a of-bbn createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-openflow-bbn.rspec
 $ omni.py -a of-i2 createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-openflow-i2.rspec
 $ omni.py -a pg2 createsliver EG-EXP-6-exp3 --api-version 2 -t GENI 3 EG-EXP-6-exp3-rutgers-wapg.rspec ## Both WAPG not available
 $ omni.py -a of-rutgers createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-openflow-rutgers.rspec 

Note1: The GPO ProtoGENI reservation must be requested before the BBN OpenFlow FOAM request, in order to determine the switch port assigned to the GPO PG node and then configure the BBN FOAM RSpec accordingly.

25. Log in to each of the compute resources in the slice

Determine which compute resources are allocated to the sliver in the BBN ExoGENI rack. First make sure the sliverstatus is ready; once it is, get the host allocated with Omni:

 $ omni.py -a exobbn sliverstatus EG-EXP-6-exp3
 $ omni.py -a exobbn listresources EG-EXP-6-exp3 -o
 $ egrep hostname EG-EXP-6-exp3-rspec-bbn-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="192.1.242.9" port="22" username="root"/>

Determine the resources allocated to the WAPG Rutgers node:

 $ omni.py -a pg2 sliverstatus EG-EXP-6-exp3 --api-version 2 -t GENI 3 -o

Determine the resources allocated to the GPO PG node:

 $ omni.py -a pgeni sliverstatus EG-EXP-6-exp3
 $ omni.py -a pgeni listresources EG-EXP-6-exp3 -o  
 $ egrep "hostname" EG-EXP-6-exp3-rspec-pgeni-gpolab-bbn-com.xml
    <login authentication="ssh-keys" hostname="pc9.pgeni.gpolab.bbn.com" port="22" username="lnevers"/>  

Login to each node in the slice, configure data plane network interfaces on any non-ExoGENI resources as necessary, and send traffic to each of the other systems; leave traffic running.

Login to ExoGENI Node and send traffic to GPO PG node:

$ ssh 192.1.242.9 -l root
root@debian:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 52:54:00:e6:5e:1a  
          inet addr:10.42.11.198  Bcast:10.42.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fee6:5e1a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:38 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2496 (2.4 KiB)  TX bytes:398 (398.0 B)
root@debian:~# ping 10.42.11.209 
PING 10.42.11.209 (10.42.11.209) 56(84) bytes of data.
64 bytes from 10.42.11.209: icmp_req=1 ttl=64 time=40.0 ms
64 bytes from 10.42.11.209: icmp_req=2 ttl=64 time=2.74 ms
64 bytes from 10.42.11.209: icmp_req=3 ttl=64 time=0.501 ms
^C
--- 10.42.11.209 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.501/14.442/40.080/18.152 ms

Login to GPO PG node and send traffic to ExoGENI node:

 $ ssh pc9.pgeni.gpolab.bbn.com

 vm:~> ping 10.42.11.198
 PING 10.42.11.198 (10.42.11.198) 56(84) bytes of data.
 64 bytes from 10.42.11.198: icmp_seq=1 ttl=64 time=0.541 ms
 64 bytes from 10.42.11.198: icmp_seq=2 ttl=64 time=0.415 ms
 64 bytes from 10.42.11.198: icmp_seq=3 ttl=64 time=0.468 ms
 ^C
 --- 10.42.11.198 ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 1998ms
 rtt min/avg/max/mdev = 0.415/0.474/0.541/0.057 ms
 vm:~> 

26. Verify that all three experiments continue to run

Verify that each experiment is running without impacting each other's traffic and verify that data is exchanged over the path along which data is supposed to flow.

Attempted to exchange traffic from Experimenter1 host to Experimenter2 host without success:

$ ssh root@192.1.242.6
Linux debian 2.6.32-5-amd64 #1 SMP Mon Jan 16 16:22:28 UTC 2012 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Aug 15 19:38:46 2012 from arendia.gpolab.bbn.com
root@debian:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 52:54:00:5a:92:fe  
          inet addr:10.42.19.27  Bcast:10.42.19.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe5a:92fe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13662 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12215 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1217282 (1.1 MiB)  TX bytes:1087918 (1.0 MiB)

root@debian:~# ping 10.42.18.15
PING 10.42.18.15 (10.42.18.15) 56(84) bytes of data.
From 64.119.128.41 icmp_seq=87 Destination Host Unreachable

Attempted several other combinations which also failed.

27. Review baseline, GMOC, and monitoring statistics

The initial plan for this step was to review the slice monitoring data in the GMOC database, but after discussion with the ExoGENI team, this step was modified to use Iperf to determine statistics for the experimenter.

Iperf Run-1

As Experimenter1 (lnevers), ran Iperf from the BBN host to the RENCI host in Experiment 1, with the other two experiments (2 and 3) idle, with these results:

root@debian:~# iperf -c 10.42.19.25
------------------------------------------------------------
Client connecting to 10.42.19.25, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.19.27 port 36977 connected with 10.42.19.25 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  5.77 MBytes  4.81 Mbits/sec

Iperf Run-2

As Experimenter2 (lnevers1), ran Iperf concurrently from each of the two BBN hosts to the two RENCI hosts in Experiment 2, with the other two experiments (1 and 3) idle, with these results on the first BBN host:

root@debian:~# iperf -c 10.42.18.15
------------------------------------------------------------
Client connecting to 10.42.18.15, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.18.98 port 42409 connected with 10.42.18.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  5.83 MBytes  4.87 Mbits/sec

and on the second BBN host:

root@debian:~# iperf -c 10.42.18.16
------------------------------------------------------------
Client connecting to 10.42.18.16, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.18.77 port 59549 connected with 10.42.18.16 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  5.83 MBytes  4.86 Mbits/sec

Iperf Run-3

Combined all Iperf traffic from Run-1 and Run-2, and ran all three client/server combinations concurrently.

Results for Experiment1 BBN Host1:

# iperf -c 10.42.19.25
------------------------------------------------------------
Client connecting to 10.42.19.25, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.19.27 port 36979 connected with 10.42.19.25 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  5.70 MBytes  4.72 Mbits/sec

Results from Experiment2 Host1:

# iperf -c 10.42.18.15
------------------------------------------------------------
Client connecting to 10.42.18.15, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.18.98 port 42410 connected with 10.42.18.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  5.82 MBytes  4.88 Mbits/sec

Results from Experiment2 Host2:

root@debian:~# iperf -c 10.42.18.16
------------------------------------------------------------
Client connecting to 10.42.18.16, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.18.77 port 59550 connected with 10.42.18.16 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  5.90 MBytes  4.94 Mbits/sec
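Comparing the concurrent Run-3 flows against the idle-network runs can be automated by scraping the iperf summary lines. A small sketch using the Run-3 figures above (the helper name is illustrative):

```python
import re

# Summary lines as printed by iperf in the Run-3 results above
lines = [
    "[  3]  0.0-10.1 sec  5.70 MBytes  4.72 Mbits/sec",
    "[  3]  0.0-10.0 sec  5.82 MBytes  4.88 Mbits/sec",
    "[  3]  0.0-10.0 sec  5.90 MBytes  4.94 Mbits/sec",
]

def bandwidth_mbps(line):
    """Extract the Mbits/sec figure from an iperf summary line."""
    m = re.search(r"([\d.]+)\s+Mbits/sec", line)
    return float(m.group(1)) if m else None

rates = [bandwidth_mbps(l) for l in lines]
print(rates)                      # per-flow rates
print(round(sum(rates), 2))      # aggregate across the concurrent flows
```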

28. As site administrator, identify all controllers that the BBN ExoGENI OpenFlow switch is connected to

An experimenter does not have the site administrator privileges needed to identify all OpenFlow controllers that have access to the BBN ExoGENI OpenFlow switch. As the administrator, got a list of active slivers from the FOAM controller, which runs on the BBN head node in the ExoGENI rack:

{{{
$ foamctl geni:list-slivers --passwd-file=/opt/foam/etc/foampasswd
{
 "slivers": [
  {
   "status": "Approved", 
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp1:21095873-0df9-4254-ba15-9af4e817d3a2", 
   "creation": "2012-08-14 22:00:31.138738+00:00", 
   "pend_reason": "Request has underspecified VLAN requests", 
   "expiration": "2012-08-30 00:00:00+00:00", 
   "deleted": "False", 
   "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers", 
   "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp1", 
   "enabled": true, 
   "email": "lnevers@bbn.com", 
   "flowvisor_slice": "21095873-0df9-4254-ba15-9af4e817d3a2", 
   "desc": "EG-EXP-5-scenario1 resources at BBN ExoGENI.", 
   "ref": null, 
   "id": 34, 
   "uuid": "21095873-0df9-4254-ba15-9af4e817d3a2"
  }, 
  {
   "status": "Approved", 
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp2:4a1d909c-b0c7-45a7-b100-01e0b1d8bd3a", 
   "creation": "2012-08-14 22:05:36.797393+00:00", 
   "pend_reason": "Request has underspecified VLAN requests", 
   "expiration": "2012-08-22 00:00:00+00:00", 
   "deleted": "False", 
   "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers1", 
   "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp2", 
   "enabled": true, 
   "email": "lnevers@bbn.com", 
   "flowvisor_slice": "4a1d909c-b0c7-45a7-b100-01e0b1d8bd3a", 
   "desc": "EG-EXP-5-scenario1 resources at BBN ExoGENI.", 
   "ref": null, 
   "id": 35, 
   "uuid": "4a1d909c-b0c7-45a7-b100-01e0b1d8bd3a"
  }, 
  {
   "status": "Approved", 
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp3:f08d3095-1c80-4d13-bb55-77fe60cc2743", 
   "creation": "2012-09-12 15:51:56.969258+00:00", 
   "pend_reason": "Request has underspecified VLAN requests", 
   "expiration": "2012-09-26 00:00:00+00:00", 
   "deleted": "False", 
   "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers2", 
   "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp3", 
   "enabled": true, 
   "email": "lnevers@bbn.com", 
   "flowvisor_slice": "f08d3095-1c80-4d13-bb55-77fe60cc2743", 
   "desc": "EG-EXP-3 experiment 3 resources at BBN ExoGENI.", 
   "ref": null, 
   "id": 41, 
   "uuid": "f08d3095-1c80-4d13-bb55-77fe60cc2743"
  }
 ]
}
}}}
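The foamctl output is JSON, so the slivers of interest can be filtered programmatically. A minimal sketch over a fragment modeled on the list-slivers output above:

```python
import json

# Fragment modeled on the foamctl geni:list-slivers output above
output = """{
 "slivers": [
  {"status": "Approved",
   "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp1",
   "expiration": "2012-08-30 00:00:00+00:00"}
 ]
}"""

def approved_slices(text):
    """Return slice URNs of all approved slivers in list-slivers output."""
    data = json.loads(text)
    return [s["slice_urn"] for s in data["slivers"] if s["status"] == "Approved"]

print(approved_slices(output))
```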
To identify which controllers are in use, show the details for each of the slivers in this test case.
The controller for EG-EXP-6-exp1 can be seen in the ''url'' below:
{{{
$ foamctl geni:show-sliver -u "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp1:21095873-0df9-4254-ba15-9af4e817d3a2" --passwd-file=/opt/foam/etc/foampasswd
{
 "sliver": {
  "flowspace rules": 2, 
  "status": "Approved", 
  "creation": "2012-08-14 22:00:31.138738+00:00", 
  "uuid": "21095873-0df9-4254-ba15-9af4e817d3a2", 
  "deleted": "False", 
  "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers", 
  "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp1", 
  "enabled": true, 
  "pend_reason": "Request has underspecified VLAN requests", 
  "email": "lnevers@bbn.com", 
  "controllers": [
   {
    "url": "tcp:mallorea.gpolab.bbn.com:33017", 
    "type": "primary"
   }
  ], 
  "expiration": "2012-08-30 00:00:00+00:00", 
  "desc": "EG-EXP-5-scenario1 resources at BBN ExoGENI.", 
  "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp1:21095873-0df9-4254-ba15-9af4e817d3a2", 
  "ref": null, 
  "id": 34, 
  "flowvisor_slice": "21095873-0df9-4254-ba15-9af4e817d3a2"
 }
}

}}}
The controller for EG-EXP-6-exp2 can be seen in the ''url'' below:
{{{
$ foamctl geni:show-sliver -u "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp2:4a1d909c-b0c7-45a7-b100-01e0b1d8bd3a" --passwd-file=/opt/foam/etc/foampasswd
{
 "sliver": {
  "flowspace rules": 2, 
  "status": "Approved", 
  "creation": "2012-08-14 22:05:36.797393+00:00", 
  "uuid": "4a1d909c-b0c7-45a7-b100-01e0b1d8bd3a", 
  "deleted": "False", 
  "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers1", 
  "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp2", 
  "enabled": true, 
  "pend_reason": "Request has underspecified VLAN requests", 
  "email": "lnevers@bbn.com", 
  "controllers": [
   {
    "url": "tcp:mallorea.gpolab.bbn.com:33018", 
    "type": "primary"
   }
  ], 
  "expiration": "2012-08-22 00:00:00+00:00", 
  "desc": "EG-EXP-5-scenario1 resources at BBN ExoGENI.", 
  "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp2:4a1d909c-b0c7-45a7-b100-01e0b1d8bd3a", 
  "ref": null, 
  "id": 35, 
  "flowvisor_slice": "4a1d909c-b0c7-45a7-b100-01e0b1d8bd3a"
 }
}

}}}

The controller for EG-EXP-6-exp3 can be seen in the ''url'' below:

{{{
$ foamctl geni:show-sliver -u "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp3:f08d3095-1c80-4d13-bb55-77fe60cc2743" --passwd-file=/opt/foam/etc/foampasswd
{
 "sliver": {
  "flowspace rules": 2, 
  "status": "Approved", 
  "creation": "2012-09-12 15:51:56.969258+00:00", 
  "uuid": "f08d3095-1c80-4d13-bb55-77fe60cc2743", 
  "deleted": "False", 
  "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers2", 
  "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp3", 
  "enabled": true, 
  "pend_reason": "Request has underspecified VLAN requests", 
  "email": "lnevers@bbn.com", 
  "controllers": [
   {
    "url": "tcp:mallorea.gpolab.bbn.com:33020", 
    "type": "primary"
   }
  ], 
  "expiration": "2012-09-26 00:00:00+00:00", 
  "desc": "EG-EXP-3 experiment 3 resources at BBN ExoGENI.", 
  "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-6-exp3:f08d3095-1c80-4d13-bb55-77fe60cc2743", 
  "ref": null, 
  "id": 41, 
  "flowvisor_slice": "f08d3095-1c80-4d13-bb55-77fe60cc2743"
 }
}

}}}

As an experimenter, it is possible to verify that the switches appropriate to each experiment on this page are connected to the !OpenFlow controllers.

__EG-EXP-6-exp1__

Based on the RSpec, the following NOX controller and port are requested for EG-EXP-6-exp1:
{{{
 <openflow:controller url="tcp:mallorea.gpolab.bbn.com:33017" type="primary" />
}}}
On the system mallorea.gpolab.bbn.com, the NOX controller is running, listening for switch connections on port 33017 and for console connections on port 11017:
{{{
 $ ./nox_core --info=$HOME/nox-33017.info -i ptcp:33017 switch lavi_switches lavi_swlinks jsonmessenger=tcpport=11017,sslport=0
}}}
The switches connected to the controller for EG-EXP-6-exp1 were verified by using the nox console to list the connected switches:
{{{
 $ ./nox-console -n localhost -p 11017 getnodes | sort
  00:00:00:10:10:17:50:01
  00:64:08:17:f4:b3:5b:00
  00:64:08:17:f4:b5:2a:00  
  06:d6:00:12:e2:b8:a5:d0
  06:d6:00:24:a8:5d:0b:00
  06:d6:00:24:a8:d2:b8:40
  0e:84:00:23:47:c8:bc:00
  0e:84:00:23:47:ca:bc:40
  0e:84:00:24:a8:d2:48:00
  0e:84:00:24:a8:d2:b8:40
  0e:84:00:26:f1:40:a8:00
}}}

In the above list of connections, ''00:64:08:17:f4:b5:2a:00'' is the BBN ExoGENI OpenFlow Switch 
and ''00:64:08:17:f4:b3:5b:00'' is the RENCI ExoGENI OpenFlow Switch. 

''Note'': In later updates, the BBN ExoGENI switch DPID was modified to "00:01:08:17:f4:b5:2a:00"
and the RENCI ExoGENI switch DPID was modified to "00:01:08:17:f4:b3:5b:00"
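
The ''getnodes'' check above can be scripted. A minimal sketch, with the node list embedded as sample data (on mallorea it would instead be piped from ''./nox-console -n localhost -p 11017 getnodes''):
{{{
# Sketch: confirm that both ExoGENI DPIDs appear in the getnodes output.
# The list below is sample data copied from the capture above.
nodes='00:64:08:17:f4:b3:5b:00
00:64:08:17:f4:b5:2a:00
06:d6:00:12:e2:b8:a5:d0'
for dpid in 00:64:08:17:f4:b5:2a:00 00:64:08:17:f4:b3:5b:00; do
  printf '%s\n' "$nodes" | grep -qx "$dpid" && echo "$dpid connected"
done
}}}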

__EG-EXP-6-exp2__

Based on the RSpec, the following controller and port are requested for EG-EXP-6-exp2:
{{{
 <openflow:controller url="tcp:mallorea.gpolab.bbn.com:33018" type="primary" />
}}}
On the system mallorea.gpolab.bbn.com, the controller is running, listening for switch connections on port 33018 and for console connections on port 11018:
{{{
 $ ./nox_core --info=$HOME/nox-33018.info -i ptcp:33018 switch lavi_switches lavi_swlinks jsonmessenger=tcpport=11018,sslport=0
}}}
The switches connected to the controller for EG-EXP-6-exp2 were verified by using the nox console to list the connected switches:
{{{
 $ ./nox-console -n localhost -p 11018 getnodes
 00:64:08:17:f4:b3:5b:00
 00:64:08:17:f4:b5:2a:00
 06:d6:00:12:e2:b8:a5:d0
 06:d6:00:24:a8:d2:b8:40
 0e:84:00:23:47:c8:bc:00
 0e:84:00:23:47:ca:bc:40
 0e:84:00:24:a8:d2:48:00
 0e:84:00:24:a8:d2:b8:40
 0e:84:00:26:f1:40:a8:00
}}}
In the above list of connections, ''00:64:08:17:f4:b5:2a:00'' is the BBN ExoGENI OpenFlow Switch 
and ''00:64:08:17:f4:b3:5b:00'' is the RENCI ExoGENI OpenFlow Switch.

''Note'': In later updates, the BBN ExoGENI switch DPID was modified to "00:01:08:17:f4:b5:2a:00"
and the RENCI ExoGENI switch DPID was modified to "00:01:08:17:f4:b3:5b:00"

__EG-EXP-6-exp3__

Based on the RSpec, the following controller and port are requested for EG-EXP-6-exp3:
{{{
 <openflow:controller url="tcp:mallorea.gpolab.bbn.com:33020" type="primary" />
}}}
On the system mallorea.gpolab.bbn.com, the controller is running, listening for switch connections on port 33020 and for console connections on port 11020:
{{{
 $ ./nox_core --info=$HOME/nox-33020.info -i ptcp:33020 switch lavi_switches lavi_swlinks jsonmessenger=tcpport=11020,sslport=0
}}}
The switches connected to the controller for EG-EXP-6-exp3 were verified by using the nox console to list the connected switches:
{{{
 $ ./nox-console -n localhost -p 11020 getnodes
 06:d6:00:24:a8:c4:b9:00
 06:d6:00:12:e2:b8:a5:d0
 00:00:0e:84:40:39:19:96
 00:00:0e:84:40:39:18:58
 00:64:08:17:f4:b5:2a:00
 00:00:0e:84:40:39:1b:93
 00:00:0e:84:40:39:1a:57
 00:00:0e:84:40:39:18:1b
}}}
In the above list of connections, ''00:64:08:17:f4:b5:2a:00'' is the BBN ExoGENI OpenFlow Switch. 
(The RENCI ExoGENI OpenFlow Switch, ''00:64:08:17:f4:b3:5b:00'', does not appear in this list.)

''Note'': In later updates, the BBN ExoGENI switch DPID was modified to "00:01:08:17:f4:b5:2a:00"
and the RENCI ExoGENI switch DPID was modified to "00:01:08:17:f4:b3:5b:00"

== 29. As Experimenter3, verify that traffic only flows on the network resources assigned to slivers as specified by the controller ==

From the Experimenter 3 host, an attempt was made to reach the Experimenter 2 host; the ping fails with ''Destination Host Unreachable'', confirming that traffic flows only on the network resources assigned to each sliver:
{{{
$ ssh 192.1.242.9 -l root
root@debian:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 52:54:00:e6:5e:1a  
          inet addr:10.42.11.198  Bcast:10.42.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fee6:5e1a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2668 errors:0 dropped:0 overruns:0 frame:0
          TX packets:731 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:167664 (163.7 KiB)  TX bytes:30918 (30.1 KiB)

root@debian:~# ping 10.42.18.15
PING 10.42.18.15 (10.42.18.15) 56(84) bytes of data.
From 10.42.11.198 icmp_seq=2 Destination Host Unreachable
From 10.42.11.198 icmp_seq=3 Destination Host Unreachable
From 10.42.11.198 icmp_seq=4 Destination Host Unreachable
}}}

== 30.  Verify control access to OF experiment ==
Verify that no default controller, switch fail-open behavior, or other resource other than the experimenters' controllers can control how traffic flows on the network resources assigned to the experimenters' slivers.

The following scenarios were verified:
 * Traffic is not exchanged when the controller is not running. 
 * Starting controller allows existing traffic flows to be delivered.  
 * Stopping the controller does not stop an existing continuous traffic exchange. 
 * Stopping the controller and starting a flow within the "DEFAULT_FLOW_TIMEOUT" (5 sec) allows traffic delivery.
 * Stopping the controller and starting a flow after the "DEFAULT_FLOW_TIMEOUT" (5 sec) does not allow traffic delivery.


== 31. Set the hard and soft timeout of flowtable entries ==

The following scenarios were verified after raising the soft timeout by changing "DEFAULT_FLOW_TIMEOUT" in the file src/include/openflow-default.hh:
 * Stopping the controller and starting a flow within the "DEFAULT_FLOW_TIMEOUT" (25 sec) allows traffic delivery.
 * Stopping the controller and starting a flow after the "DEFAULT_FLOW_TIMEOUT" (25 sec) does not allow traffic delivery.

Flow-removed handling is defined in src/include/flow-removed.hh.
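
The timeout change used in this step can be sketched as follows. This is a stand-in illustration: the header file is recreated locally with just the one define, whereas in a real NOX source tree the existing src/include/openflow-default.hh would be edited and NOX rebuilt:
{{{
# Sketch: raise the soft (idle) flow timeout from 5 s to 25 s.
# The nox/ tree here is a local stand-in, not a real NOX checkout.
mkdir -p nox/src/include
echo '#define DEFAULT_FLOW_TIMEOUT 5' > nox/src/include/openflow-default.hh
sed -i 's/DEFAULT_FLOW_TIMEOUT 5/DEFAULT_FLOW_TIMEOUT 25/' \
    nox/src/include/openflow-default.hh
grep DEFAULT_FLOW_TIMEOUT nox/src/include/openflow-default.hh
}}}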

== 32. Get switch statistics and flowtable entries for slivers from the OpenFlow switch. ==
Statistics can be collected both by experimenters and by administrators. This step verifies that both are possible.

__Statistics available to Experimenters__ 

In this scenario the !FloodLight Controller is used to capture statistics. The Floodlight controller was configured to run on port 33020, in place of NOX.  
{{{
 $ curl http://localhost:9090/wm/core/controller/switches/json
  [{"dpid":"00:00:0e:84:40:39:18:1b"},{"dpid":"06:d6:00:24:a8:c4:b9:00"},
  {"dpid":"00:00:0e:84:40:39:19:96"},{"dpid":"00:00:0e:84:40:39:1b:93"},
  {"dpid":"00:00:0e:84:40:39:18:58"},{"dpid":"00:00:0e:84:40:39:1a:57"},
  {"dpid":"00:01:08:17:f4:b5:2a:00"},{"dpid":"06:d6:00:12:e2:b8:a5:d0"},
  {"dpid":"00:00:00:10:10:17:50:01"}]
}}}

Note: The ExoGENI DPID is "00:01:08:17:f4:b5:2a:00"
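
The switches response can be reduced to bare DPIDs with standard tools. A sketch with a trimmed sample response embedded (live, ''$resp'' would come from the curl command above):
{{{
# Sketch: extract DPIDs from the Floodlight switches JSON (trimmed sample).
resp='[{"dpid":"00:00:0e:84:40:39:18:1b"},{"dpid":"00:01:08:17:f4:b5:2a:00"}]'
printf '%s\n' "$resp" | grep -o '"dpid":"[^"]*"' | cut -d'"' -f4
}}}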

Following are the flow-table entries on the ExoGENI OF switch for the flow between the two nodes 
10.42.11.209 (pc9.pgeni.gpolab.bbn.com) and 10.42.11.198 (ExoGENI VM on shared VLAN 1750):

{{{
lnevers@mallorea:~$ curl http://localhost:9090/wm/core/switch/00:01:08:17:f4:b5:2a:00/flow/json
 {"00:01:08:17:f4:b5:2a:00":[{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0, 
 "match":{"dataLayerDestination":"00:15:17:f4:31:aa","dataLayerSource":"52:54:00:3c:c2:a2",    
 "dataLayerType":"0x0800","dataLayerVirtualLan":1750,"dataLayerVirtualLanPriorityCodePoint":0,
 "inputPort":42,"networkDestination":"10.42.11.209","networkDestinationMaskLen":32,"networkProtocol":0,
 "networkSource":"10.42.11.198","networkSourceMaskLen":32,"networkTypeOfService":0,"transportDestination":0,
 "transportSource":0,"wildcards":3145952},"durationSeconds":1081,"durationNanoseconds":0,"packetCount":2154,
 "byteCount":228324,"tableId":0,"actions":[{"maxLength":0,"port":64,"lengthU":8,"length":8,"type":"OUTPUT"}],
 "priority":0},{"cookie":9007199254740992,"idleTimeout":5,"hardTimeout":0,"match":{"dataLayerDestination":"52:54:00:3c:c2:a2",
 "dataLayerSource":"00:15:17:f4:31:aa","dataLayerType":"0x0800",
 "dataLayerVirtualLan":1750,"dataLayerVirtualLanPriorityCodePoint":0,"inputPort":64,"networkDestination":"10.42.11.198",
 "networkDestinationMaskLen":32,"networkProtocol":0,"networkSource":"10.42.11.209","networkSourceMaskLen":32,
 "networkTypeOfService":0,"transportDestination":0,"transportSource":0,"wildcards":3145952},"durationSeconds":1081,"durationNanoseconds":0,
 "packetCount":2152,"byteCount":228112,"tableId":0,"actions":[{"maxLength":0,"port":42,"lengthU":8,"length":8,"type":"OUTPUT"}],"priority":0}]}
}}}
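
The per-flow counters in that JSON can be totaled without a JSON parser. A sketch using two trimmed entries embedded from the capture above:
{{{
# Sketch: pull and total the per-flow byte counts (trimmed sample data).
flows='{"packetCount":2154,"byteCount":228324},{"packetCount":2152,"byteCount":228112}'
printf '%s\n' "$flows" | grep -o '"byteCount":[0-9]*' | cut -d: -f2 \
  | awk '{ total += $1 } END { printf "total bytes: %d\n", total }'
}}}
228324 + 228112 = 456436 bytes across the two directions of the flow.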

Also verified memory usage:
{{{
$ curl http://localhost:9090/wm/core/memory/json      
{"total":34148352,"free":12270464}
}}}



__Statistics available to Administrators__

The head node runs the !FlowVisor for the !OpenFlow switch in the ExoGENI rack. To access switch and slice statistics, login access to the head node in the ExoGENI rack is required; this is not normally available to experimenters. This scenario used NOX as the controller for the slice EG-EXP-6-exp3.

Determined how many slices were running according to !FlowVisor:
{{{
[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd listSlices
Slice 0: f08d3095-1c80-4d13-bb55-77fe60cc2743
Slice 1: 21095873-0df9-4254-ba15-9af4e817d3a2
Slice 2: fvadmin
Slice 3: 4a1d909c-b0c7-45a7-b100-01e0b1d8bd3a
Slice 4: 8aad0aae-ae92-4a3c-bd5e-43f7456f628e
Slice 5: 013f6aa7-e600-4be5-9e31-5c0436223dfd
Slice 6: e10d67f9-4680-4774-9968-aae42c8fdccb
}}}
In this example, we are looking for the !FlowVisor slice associated with EG-EXP-6-exp3, so we show the slice information for each of the above to determine which one maps to EG-EXP-6-exp3.
{{{
[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd getSliceInfo f08d3095-1c80-4d13-bb55-77fe60cc2743
Got reply:
connection_1=00:01:08:17:f4:b5:2a:00-->/192.1.242.3:47453-->mallorea.gpolab.bbn.com/192.1.249.185:33020
contact_email=lnevers@bbn.com
controller_hostname=mallorea.gpolab.bbn.com
controller_port=33020
creator=fvadmin
}}}
The experiment EG-EXP-6-exp3 uses the controller mallorea.gpolab.bbn.com on port 33020, as specified in the OpenFlow RSpec 
 [http://groups.geni.net/geni/browser/trunk/GENIRacks/ExoGENI/Spiral4/Rspecs/AcceptanceTests/EG-EXP-6/EG-EXP-6-exp3-openflow-exobbn.rspec EG-EXP-6-exp3-openflow-exobbn.rspec] submitted to the ExoGENI FOAM. 
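
Matching a slice to its experiment can be scripted by extracting the controller endpoint from the ''getSliceInfo'' output and comparing it against the RSpec. A sketch with sample output embedded from the capture above:
{{{
# Sketch: extract the controller endpoint from fvctl getSliceInfo output.
info='contact_email=lnevers@bbn.com
controller_hostname=mallorea.gpolab.bbn.com
controller_port=33020
creator=fvadmin'
host=$(printf '%s\n' "$info" | sed -n 's/^controller_hostname=//p')
port=$(printf '%s\n' "$info" | sed -n 's/^controller_port=//p')
echo "$host:$port"
}}}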

Now that we have identified the slice, we can get statistics for it:
{{{
[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd getSliceStats f08d3095-1c80-4d13-bb55-77fe60cc2743
Got reply:
---Sent---
slicer_f08d3095-1c80-4d13-bb55-77fe60cc2743_dpid=00:01:08:17:f4:b5:2a:00 :: ECHO_REPLY=791,FEATURES_REQUEST=10,PACKET_IN=312398,PACKET_OUT=3403704,ECHO_REQUEST=791,FLOW_MOD=4292,ERROR=20,FEATURES_REPLY=10,HELLO=20,SET_CONFIG=10,FLOW_REMOVED=474,VENDOR=10
Total :: ECHO_REPLY=791,FEATURES_REQUEST=10,PACKET_IN=312398,PACKET_OUT=3403704,ECHO_REQUEST=791,FLOW_MOD=4292,ERROR=20,FEATURES_REPLY=10,HELLO=20,SET_CONFIG=10,FLOW_REMOVED=474,VENDOR=10
---Recv---
slicer_f08d3095-1c80-4d13-bb55-77fe60cc2743_dpid=00:01:08:17:f4:b5:2a:00 :: ECHO_REPLY=791,FLOW_MOD=4292,FEATURES_REQUEST=10,HELLO=10,SET_CONFIG=10,PACKET_OUT=3403704,VENDOR=10
Total :: ECHO_REPLY=791,FLOW_MOD=4292,FEATURES_REQUEST=10,HELLO=10,SET_CONFIG=10,PACKET_OUT=3403704,VENDOR=10
---Drop---
classifier-dpid=00:01:08:17:f4:b5:2a:00 :: FLOW_REMOVED=25
slicer_f08d3095-1c80-4d13-bb55-77fe60cc2743_dpid=00:01:08:17:f4:b5:2a:00 :: ECHO_REQUEST=2
Total :: ECHO_REQUEST=2,FLOW_REMOVED=25
}}}

Determine switch-specific information by first listing the devices and then requesting detailed information about each:
{{{
[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd listDevices
Device 0: 00:01:08:17:f4:b5:2a:00
[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd getDeviceInfo 00:01:08:17:f4:b5:2a:00
nPorts=51
portList=1,5,9,13,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,64
dpid=00:01:08:17:f4:b5:2a:00
remote=/192.168.103.10:6633-->/192.168.103.4:64431
portNames=1(1),5(5),9(9),13(13),17(17),18(18),19(19),20(20),21(21),22(22),23(23),24(24),25(25),26(26),27(27),28(28),29(29),30(30),31(31),32(32),33(33),34(34),35(35),36(36),37(37),38(38),39(39),40(40),41(41),42(42),43(43),44(44),45(45),46(46),47(47),48(48),49(49),50(50),51(51),52(52),53(53),54(54),55(55),56(56),57(57),58(58),59(59),60(60),61(61),62(62),64(64)
}}}
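
The ''nPorts'' value can be cross-checked against the ''portList''. A sketch in which the port list is reconstructed from the capture above (ports 1, 5, 9, 13, 17 through 62, and 64):
{{{
# Sketch: verify that portList contains exactly nPorts entries.
nPorts=51
portList="1,5,9,13,$(seq -s, 17 62),64"
count=$(printf '%s\n' "$portList" | tr ',' '\n' | wc -l)
[ "$count" -eq "$nPorts" ] && echo "portList matches nPorts ($count ports)"
}}}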

To show statistics for the one !OpenFlow device in the ExoGENI Rack:
{{{
[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd getSwitchStats 00:01:08:17:f4:b5:2a:00
Got reply:
---Sent---
classifier-dpid=00:01:08:17:f4:b5:2a:00 :: ECHO_REQUEST=146870,ECHO_REPLY=24472,FLOW_MOD=27784,FEATURES_REQUEST=25,STATS_REQUEST.DESC=11,GET_CONFIG_REQUEST=10,HELLO=1,SET_CONFIG=23,PACKET_OUT=7537686,VENDOR=13
Total :: ECHO_REQUEST=146870,ECHO_REPLY=24472,FLOW_MOD=27784,FEATURES_REQUEST=25,STATS_REQUEST.DESC=11,GET_CONFIG_REQUEST=10,HELLO=1,SET_CONFIG=23,PACKET_OUT=7537686,VENDOR=13
---Recv---
classifier-dpid=00:01:08:17:f4:b5:2a:00 :: ECHO_REQUEST=24472,ECHO_REPLY=146870,STATS_REPLY.DESC=11,ERROR=1709,PACKET_IN=994158,FEATURES_REPLY=25,GET_CONFIG_REPLY=10,HELLO=1,FLOW_REMOVED=23862
Total :: ECHO_REQUEST=24472,ECHO_REPLY=146870,STATS_REPLY.DESC=11,ERROR=1709,PACKET_IN=994158,FEATURES_REPLY=25,GET_CONFIG_REPLY=10,HELLO=1,FLOW_REMOVED=23862
---Drop---
Total ::
}}}

To gather further topology information, head-node access was used to query the !FlowVisor for the !OpenFlow switch in the ExoGENI rack. Note that ExoGENI head node access is not normally available to experimenters. These are the FlowSpaces related to EG-EXP-6-exp3, which uses network address 10.42.11.0:

{{{
[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd listFlowSpace |egrep 10.42.11.0
rule 10: FlowEntry[dpid=[00:01:08:17:f4:b5:2a:00],ruleMatch=[OFMatch[dl_type=0x800,dl_vlan=0x6d6,nw_dst=10.42.11.0/24,nw_src=10.42.11.0/24]],actionsList=[Slice:f08d3095-1c80-4d13-bb55-77fe60cc2743=4],id=[51458425],priority=[100],]
rule 11: FlowEntry[dpid=[00:01:08:17:f4:b5:2a:00],ruleMatch=[OFMatch[dl_type=0x806,dl_vlan=0x6d6,nw_dst=10.42.11.0/24,nw_src=10.42.11.0/24]],actionsList=[Slice:f08d3095-1c80-4d13-bb55-77fe60cc2743=4],id=[51458427],priority=[100],]
}}}

Some additional information about version and configuration:
{{{
[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd getConfig 'flowvisor'
flowvisor 0 = flowvisor!api_webserver_port::INT : 8080
flowvisor 1 = flowvisor!api_jetty_webserver_port::INT : -1
flowvisor 2 = flowvisor!log_ident::STR : flowvisor
flowvisor 3 = flowvisor!checkpointing::BOOL : true
flowvisor 4 = flowvisor!listen_port::INT : 6633
flowvisor 5 = flowvisor!track_flows::BOOL : false
flowvisor 6 = flowvisor!logging::STR : NOTE
flowvisor 7 = flowvisor!stats_desc_hack::BOOL : true
flowvisor 8 = flowvisor!run_topology_server::BOOL : true
flowvisor 9 = flowvisor!log_facility::STR : LOG_LOCAL7

[lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd ping hello
Got reply:
PONG(fvadmin): FV version=flowvisor-0.8.1::hello

}}}

== 33. Get layer 2 topology information about slivers in each slice. ==

__Topology information available to Experimenters__ 

In this scenario the !FloodLight Controller is used to capture topology information. The Floodlight controller was configured to run on port 33020, in place of NOX. First we list all devices tracked by the !FloodLight controller:
{{{
 $ curl http://localhost:9090/wm/device/ 
 [{"mac":["00:0c:29:b0:74:08"],"ipv4":[],"vlan":[1750],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":60}],"lastSeen":1347912254393},
  {"mac": ["00:07:43:12:6e:30"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":48}],"lastSeen":1347912293697},
  {"mac":["00:0c:29:b0:74:08"],"ipv4":["10.42.11.23"],"vlan":[],"attachmentPoint":[],"lastSeen":1347912254389},
  {"mac":["00:07:43:12:6f:69"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":22}],"lastSeen":1347912293450},
  {"mac":["00:07:43:12:6e:39"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":24}],"lastSeen":1347912293697},
  {"mac":["00:15:17:f4:31:aa"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"06:d6:00:24:a8:c4:b9:00","errorStatus":null,"port":33}],"lastSeen":1347912221496},
  {"mac":["00:07:43:12:5c:f0"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":47}],"lastSeen":1347912293811},
  {"mac":["00:07:43:12:5c:f9"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":23}],"lastSeen":1347912293811},
  {"mac":["52:54:00:3c:c2:a2"],"ipv4":["10.42.11.198"],"vlan":[1750],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":42}],"lastSeen":1347912254388},
  {"mac":["00:07:43:12:5b:20"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":45}],"lastSeen":1347912294368},
  {"mac":["00:15:17:f4:31:aa"],"ipv4":["10.42.11.209"],"vlan":[1750],"attachmentPoint":[],"lastSeen":1347912221515},
  {"mac":["52:54:00:3c:c2:a2"],"ipv4":[],"vlan":[],"attachmentPoint":[],"lastSeen":1347912221488},
  {"mac":["00:07:43:12:5b:29"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":21}],"lastSeen":1347912294368},
  {"mac":["00:07:43:12:6f:60"],"ipv4":[],"vlan":[],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":46}],"lastSeen":1347912293449}]
}}}

In the capture above the address 10.42.11.209 belongs to the node pc9.pgeni.gpolab.bbn.com and the address 10.42.11.198 belongs to the VM  node reserved in the ExoGENI Rack on the shared VLAN 1750.
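
Locating a host's attachment point in that device listing can also be scripted. A sketch using the single record for the ExoGENI VM (10.42.11.198), embedded from the capture above:
{{{
# Sketch: pull the switch DPID and port for a known host record.
dev='{"mac":["52:54:00:3c:c2:a2"],"ipv4":["10.42.11.198"],"vlan":[1750],"attachmentPoint":[{"switchDPID":"00:01:08:17:f4:b5:2a:00","errorStatus":null,"port":42}]}'
printf '%s\n' "$dev" | grep -o '"switchDPID":"[^"]*"' | cut -d'"' -f4
printf '%s\n' "$dev" | grep -o '"port":[0-9]*' | cut -d: -f2
}}}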

Now some topology information about the inter-switch links:
{{{
 $ curl http://localhost:9090/wm/topology/links/json 
[{"dst-port":1,"dst-switch":"00:00:0e:84:40:39:1b:93","src-port":2,"src-switch":"00:00:0e:84:40:39:1a:57","type":"DIRECT_LINK"},
 {"dst-port":1,"dst-switch":"00:00:0e:84:40:39:19:96","src-port":3,"src-switch":"00:00:0e:84:40:39:18:58","type":"DIRECT_LINK"},
 {"dst-port":71,"dst-switch":"06:d6:00:24:a8:c4:b9:00","src-port":20,"src-switch":"06:d6:00:12:e2:b8:a5:d0","type":"DIRECT_LINK"},
 {"dst-port":20,"dst-switch":"06:d6:00:12:e2:b8:a5:d0","src-port":71,"src-switch":"06:d6:00:24:a8:c4:b9:00","type":"DIRECT_LINK"},
 {"dst-port":2,"dst-switch":"00:00:0e:84:40:39:18:1b","src-port":2,"src-switch":"00:00:0e:84:40:39:1b:93","type":"DIRECT_LINK"},
 {"dst-port":2,"dst-switch":"00:00:0e:84:40:39:1b:93","src-port":2,"src-switch":"00:00:0e:84:40:39:18:1b","type":"DIRECT_LINK"},
 {"dst-port":2,"dst-switch":"00:00:0e:84:40:39:1a:57","src-port":1,"src-switch":"00:00:0e:84:40:39:1b:93","type":"DIRECT_LINK"},
 {"dst-port":3,"dst-switch":"00:00:0e:84:40:39:18:58","src-port":1,"src-switch":"00:00:0e:84:40:39:19:96","type":"DIRECT_LINK"},
 {"dst-port":3,"dst-switch":"00:00:0e:84:40:39:18:1b","src-port":2,"src-switch":"00:00:0e:84:40:39:18:58","type":"DIRECT_LINK"},
 {"dst-port":2,"dst-switch":"00:00:0e:84:40:39:18:58","src-port":3,"src-switch":"00:00:0e:84:40:39:18:1b","type":"DIRECT_LINK"},
 {"dst-port":64,"dst-switch":"00:01:08:17:f4:b5:2a:00","src-port":15,"src-switch":"06:d6:00:12:e2:b8:a5:d0","type":"DIRECT_LINK"}]
}}}

Note: The ExoGENI DPID is "00:01:08:17:f4:b5:2a:00" 




__Topology information available to Administrators__

Switch-connection information was already captured in step 28 above, where the nox-console command was used to show which switches were connected to the NOX controller.
Following is an example where experiment 3 is partially set up, with slivers at the BBN ExoGENI rack and the BBN meso-scale site only. The topology is checked to verify that it shows only the switches for the slivers created:
{{{
$  ./nox-console -n localhost -p 11020 getnodes | sort
00:01:08:17:f4:b5:2a:00   ## ExoGENI BBN Switch 
06:d6:00:12:e2:b8:a5:d0   ## OF switch at BBN (jalapeno) to NLR 
06:d6:00:24:a8:c4:b9:00   ## OF switch at BBN (poblano) to GPO PG node pc9  
}}}


== 34. Install flows that match on layer 2 fields and/or layer 3 fields. ==

== 35. Run test for at least 4 hours ==

== 36. Review monitoring statistics and checks as above ==

The initial plan for this step was to review the slice monitoring data at the [https://gmoc-db.grnoc.iu.edu/protected-openid/index.pl?method=slices GMOC DB] location, but after discussion with the ExoGENI team, this step was modified to use iperf to gather statistics for the experimenter. 

First checked performance for the nodes in EG-EXP-6-exp3; the endpoints are one ExoGENI VM and one GPO ProtoGENI meso-scale node (pc9). Following are the results from the iperf client running on the ExoGENI VM:
{{{
root@debian:~# iperf -c 10.42.11.209
------------------------------------------------------------
Client connecting to 10.42.11.209, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.11.198 port 49713 connected with 10.42.11.209 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes    930 Mbits/sec
}}}

These are the results from the iperf server running on the GPO ProtoGENI node pc9:
{{{
vm:~> iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.42.11.209 port 5001 connected with 10.42.11.198 port 49713
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.08 GBytes    930 Mbits/sec
}}}
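
As a sanity check on the numbers, 1.08 GBytes (binary gigabytes, as iperf reports) moved in 10.0 seconds works out to roughly the reported 930 Mbits/sec (decimal megabits):
{{{
# Sanity check: 1.08 GiB * 8 bits/byte / 10 s, expressed in Mbits/sec.
awk 'BEGIN { printf "%.0f Mbits/sec\n", 1.08 * 1024^3 * 8 / 10.0 / 1e6 }'
# prints 928 Mbits/sec (iperf rounds its displayed transfer total)
}}}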



== 37. Stop traffic and delete slivers ==