
EG-EXP-6: ExoGENI and Meso-scale Multi-site OpenFlow Acceptance Test

This page captures status for the test case EG-EXP-6, which verifies ExoGENI rack interoperability with other meso-scale GENI sites. For overall status see the ExoGENI Acceptance Test Status page. Last update: 08/14/12

Test Status

This section captures the status for each step in the acceptance test plan.

Step      State      Ticket   Comments
Step 1    Complete
Step 2    Complete
Step 3    Complete
Step 4    Complete
Step 5    Complete
Step 6    Complete
Step 7    Complete
Step 8    Complete
Step 9    Complete
Step 10   Complete            Replaced bare metal with VM
Step 11   Complete
Step 12   Complete
Step 13   Complete
Step 14   Complete
Step 15   Complete
Step 16   Complete
Step 17   Complete
Step 18   Complete
Step 19   Complete
Step 20   Complete
Step 21   Complete
Step 22   Complete
Step 23   Complete
Step 24   Complete
Step 25   Complete
Step 26   Complete
Step 27   Complete
Step 28   Complete
Step 29   Complete
Step 30
Step 31
Step 32
Step 33
Step 34
Step 35
Step 36
Step 37


State Legend          Description
Pass                  Test completed and met all criteria
Pass: most criteria   Test completed and met most criteria. Exceptions documented
Fail                  Test completed and failed to meet criteria.
Complete              Test completed but will require re-execution due to expected changes
Blocked               Blocked by ticketed issue(s).
In Progress           Currently under test.


Test Plan Steps

This test was executed on August 7, 2012.

The pgeni.gpolab.bbn.com slice authority was used for the credentials, and the following aggregate manager nicknames were defined in the omni_config file:

#ExoGENI Compute and OF Aggregates Managers 
exobbn=,https://bbn-hn.exogeni.net:11443/orca/xmlrpc
exorci=,https://rci-hn.exogeni.net:11443/orca/xmlrpc
exosm=,https://geni.renci.org:11443/orca/xmlrpc
of-exobbn=,https://bbn-hn.exogeni.net:3626/foam/gapi/1
of-exorci=,https://rci-hn.exogeni.net:3626/foam/gapi/1

#Meso-scale Compute and OF Aggregates Managers
of-bbn=,https://foam.gpolab.bbn.com:3626/foam/gapi/1
of-clemson=,https://foam.clemson.edu:3626/foam/gapi/1
of-i2=,https://foam.net.internet2.edu:3626/foam/gapi/1
of-rutgers=,https://nox.orbit-lab.org:3626/foam/gapi/1
plc-bbn=,http://myplc.gpolab.bbn.com:12346/
plc-clemson=,http://myplc.clemson.edu:12346/
pgeni=,https://pgeni.gpolab.bbn.com/protogeni/xmlrpc/am
pg2=,https://www.emulab.net/protogeni/xmlrpc/am/2.0
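
Before running the test steps, it can be useful to confirm that each nickname resolves and that the aggregate answers; a minimal reachability check (sketch only, not part of the recorded test run):

 # Ask every aggregate defined above for its version; a failure indicates an
 # unreachable AM or a bad nickname/URL in omni_config.
 for am in exobbn exorci exosm of-exobbn of-exorci of-bbn of-clemson of-i2 of-rutgers plc-bbn plc-clemson pgeni pg2; do
   echo "=== $am ==="
   omni.py getversion -a $am
 done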

1. As Experimenter1, request ListResources from BBN ExoGENI, RENCI ExoGENI, and FOAM at NLR Site.

GPO ProtoGENI user credentials for lnevers@bbn.com were used for Experimenter1.

Requested listresources from each of the FOAM aggregates:

$ omni.py listresources -a of-exobbn -o
$ omni.py listresources -a of-bbn -o
$ omni.py listresources -a of-nlr -o
$ omni.py listresources -a of-i2 -o 
$ omni.py listresources -a of-exorci -o

2. Review ListResources output from all AMs

Requested listresources from each of the GENI AM aggregates:

$ omni.py listresources -a plc-bbn -o
$ omni.py listresources -a plc-clemson -o
$ omni.py listresources -a pg2 --api-version 2 -t GENI 3 -o
$ omni.py listresources -a exobbn -o 
$ omni.py listresources -a exorci -o 

3. Define a request RSpec for a VM at the BBN ExoGENI.

Defined an RSpec for one VM on the shared VLAN 1750 in the BBN ExoGENI rack: EG-EXP-6-exp1-exobbn.rspec

4. Define a request RSpec for a VM at the RENCI ExoGENI.

Defined an RSpec for one VM on the shared VLAN 1750 in the RENCI ExoGENI rack: EG-EXP-6-exp1-exorci.rspec

5. Define request RSpecs for OpenFlow resources from BBN FOAM to access GENI OpenFlow core resources.

Defined an OpenFlow RSpec for the FOAM controllers in each rack:

The BBN ExoGENI rack connects to the GENI backbone via the BBN GPO Lab OpenFlow switch named poblano. For this scenario to work, OpenFlow must also be configured on the poblano switch so that BBN ExoGENI rack OF traffic can reach the OF GENI core. The following OpenFlow RSpec was defined for the BBN site switch poblano: EG-EXP-6-exp1-openflow-bbn.rspec

6. Define request RSpecs for OpenFlow core resources at NLR FOAM.

Defined an OpenFlow RSpec for the NLR FOAM controller: EG-EXP-6-exp1-openflow-nlr.rspec

7. Create the first slice

Created the first slice:

$ omni.py createslice EG-EXP-6-exp1
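
The slice created this way expires after a default interval; since the experiments are left running across many steps, the slice can be renewed with Omni. A minimal sketch (the renewal date is illustrative, not from the recorded run):

 # Extend the slice expiration so the slivers outlive the multi-hour test.
 $ omni.py renewslice EG-EXP-6-exp1 20120821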

8. Create a sliver in the first slice at each AM, using the RSpecs defined above.

 $ omni.py -a exobbn createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-exobbn.rspec
 $ omni.py -a of-exobbn createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-openflow-exobbn.rspec
 $ omni.py -a of-bbn createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-openflow-bbn.rspec
 $ omni.py -a of-nlr createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-openflow-nlr.rspec
 $ omni.py -a of-exorci createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-openflow-exorci.rspec
 $ omni.py -a exorci createsliver EG-EXP-6-exp1 EG-EXP-6-exp1-exorci.rspec

Here are the RSpecs used: EG-EXP-6-exp1-exobbn.rspec, EG-EXP-6-exp1-openflow-bbn.rspec, EG-EXP-6-exp1-openflow-nlr.rspec, and EG-EXP-6-exp1-exorci.rspec.

9. Log in to each of the systems, verify IP address assignment. Send traffic to the other system, leave traffic running.

Determine the status of the OpenFlow slivers by checking sliverstatus for each to make sure it has been approved. Note that 'geni_status' is 'ready' when the sliver is approved; if the OpenFlow sliver is still waiting for approval, 'geni_status' will be 'configuring':

 $ omni.py -a of-bbn sliverstatus EG-EXP-6-exp1
 $ omni.py -a of-nlr sliverstatus EG-EXP-6-exp1
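
FOAM slivers remain in 'configuring' until a site administrator approves them, so the sliverstatus calls above may need to be repeated. A minimal polling sketch (convenience loop, not part of the recorded run):

 # Re-run sliverstatus until the aggregate reports geni_status 'ready';
 # omni logs to stderr, so both output streams are checked.
 while ! omni.py -a of-bbn sliverstatus EG-EXP-6-exp1 2>&1 | grep -q "geni_status.*ready"; do
   echo "OpenFlow sliver not yet approved; retrying in 60s"
   sleep 60
 done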

Determine compute resources allocated to each sliver in the ExoGENI racks. First make sure the sliverstatus is ready:

 $ omni.py -a exobbn sliverstatus EG-EXP-6-exp1
 $ omni.py -a exorci sliverstatus EG-EXP-6-exp1

Once the slivers are ready, get the list of allocated hosts with Omni:

 $ omni.py -a exobbn listresources EG-EXP-6-exp1 -o
 $ omni.py -a exorci listresources EG-EXP-6-exp1 -o 
 $ egrep hostname EG-EXP-6-exp1-rspec-bbn-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="192.1.242.6" port="22" username="root"/> 
 $ egrep hostname EG-EXP-6-exp1-rspec-rci-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="152.54.14.13" port="22" username="root"/>   

Login to BBN VM and send traffic to remote:

 $ ssh root@192.1.242.6
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:5a:92:fe  
          inet addr:10.42.19.27  Bcast:10.42.19.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe5a:92fe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:864 (864.0 B)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.19.25
 PING 10.42.19.25 (10.42.19.25) 56(84) bytes of data.
 64 bytes from 10.42.19.25: icmp_req=1 ttl=64 time=2353 ms
 64 bytes from 10.42.19.25: icmp_req=2 ttl=64 time=1356 ms
 64 bytes from 10.42.19.25: icmp_req=3 ttl=64 time=357 ms
 64 bytes from 10.42.19.25: icmp_req=4 ttl=64 time=172 ms

Login to RENCI VM and send traffic to remote:

 $ ssh root@152.54.14.13
  root@debian:~# ifconfig eth1
  eth1      Link encap:Ethernet  HWaddr 52:54:00:74:38:fd  
          inet addr:10.42.19.25  Bcast:10.42.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fe74:38fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1056 (1.0 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.19.27
 PING 10.42.19.27 (10.42.19.27) 56(84) bytes of data.
 64 bytes from 10.42.19.27: icmp_req=1 ttl=64 time=2408 ms
 64 bytes from 10.42.19.27: icmp_req=2 ttl=64 time=1410 ms
 64 bytes from 10.42.19.27: icmp_req=3 ttl=64 time=410 ms
 64 bytes from 10.42.19.27: icmp_req=4 ttl=64 time=172 ms
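
To leave the traffic running for the rest of the test, as the step requires, the pings can be backgrounded and logged on each VM; a minimal sketch (the log file name is an assumption, not captured in the transcript):

 # Keep dataplane traffic flowing between the two VMs while later steps run.
 root@debian:~# nohup ping 10.42.19.25 > /root/ping-exp1.log 2>&1 &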

10. As Experimenter2, define a request RSpec for one VM and one bare metal node at BBN ExoGENI.

GPO ProtoGENI user credentials for lnevers1@bbn.com were used for Experimenter2.

Requested listresources from each of the FOAM aggregates:

$ omni.py listresources -a of-exobbn -o
$ omni.py listresources -a of-bbn -o
$ omni.py listresources -a of-nlr -o
$ omni.py listresources -a of-exorci -o

Requested listresources from each of the ExoGENI aggregates:

$ omni.py listresources -a exobbn -o 
$ omni.py listresources -a exorci -o 

Defined an RSpec for one VM and one bare metal node on the shared VLAN 1750 in the BBN ExoGENI rack: EG-EXP-6-exp2-exobbn.rspec. It is attached for reference, even though it could not be used for this test case as described.

The bare metal node is only available via the ExoSM, so to keep the requests going to the individual rack AMs, the RSpec was modified to use two VMs. The new RSpec is EG-EXP-6-exp2-exobbn-mod.rspec.

11. Define a request RSpec for two VMs on the same worker node at RENCI ExoGENI.

Defined an RSpec for two VMs on the shared VLAN 1750 in the RENCI rack: EG-EXP-6-exp2-exorci.rspec

12. Define request RSpecs for OpenFlow resources from GPO FOAM to access GENI OpenFlow core resources.

The experiment uses the flows defined by the RSpec EG-EXP-6-exp2-openflow-bbn.rspec.

13. Define request RSpecs for OpenFlow core resources at NLR FOAM

The experiment uses the flows defined by the previous experiment, as defined in EG-EXP-6-exp2-openflow-nlr.rspec

14. Create a second slice

$ omni.py createslice EG-EXP-6-exp2

15. Create a sliver in the second slice at each AM, using the RSpecs defined above

Because of the issue described in step 10, two VMs were requested on the shared VLAN 1750 at the BBN SM. The request:

$ omni.py -a exobbn createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-exobbn-mod.rspec 

Created a sliver at the RENCI ExoGENI rack aggregate:

$ omni.py -a exorci createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-exorci.rspec 

Created slivers for the network resources required by experiment 2:

$ omni.py -a of-exobbn createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-openflow-exobbn.rspec
$ omni.py -a of-bbn createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-openflow-bbn.rspec
$ omni.py -a of-nlr createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-openflow-nlr.rspec
$ omni.py -a of-exorci createsliver EG-EXP-6-exp2 EG-EXP-6-exp2-openflow-exorci.rspec
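
With all six slivers requested, their status can be checked in one pass before proceeding to step 16; a minimal convenience loop (sketch only, not from the recorded run):

 # Check sliver status at every aggregate used by experiment 2.
 for am in exobbn exorci of-exobbn of-bbn of-nlr of-exorci; do
   echo "=== $am ==="
   omni.py -a $am sliverstatus EG-EXP-6-exp2
 done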

16. Log in to each of the systems in the slice

Log in to each system and send traffic to each of the other systems; leave traffic running.

Determine compute resources allocated to each sliver in the ExoGENI racks. First make sure the sliverstatus is ready:

 $ omni.py -a exobbn sliverstatus EG-EXP-6-exp2
 $ omni.py -a exorci sliverstatus EG-EXP-6-exp2

Once the slivers are ready, get the list of allocated hosts with Omni:

 $ omni.py -a exobbn listresources EG-EXP-6-exp2 -o
 $ omni.py -a exorci listresources EG-EXP-6-exp2 -o 
 $ egrep hostname EG-EXP-6-exp2-rspec-bbn-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="192.1.242.8" port="22" username="root"/>      
   <login authentication="ssh-keys" hostname="192.1.242.7" port="22" username="root"/>           
 $ egrep hostname EG-EXP-6-exp2-rspec-rci-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="152.54.14.16" port="22" username="root"/>      
   <login authentication="ssh-keys" hostname="152.54.14.14" port="22" username="root"/>     

Login to first BBN VM and send traffic to remote:

 $ ssh root@192.1.242.7
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:0e:77:26  
          inet addr:10.42.18.98  Bcast:10.42.18.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe0e:7726/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1248 (1.2 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.18.15
 PING 10.42.18.15 (10.42.18.15) 56(84) bytes of data.
 64 bytes from 10.42.18.15: icmp_req=1 ttl=64 time=2277 ms
 64 bytes from 10.42.18.15: icmp_req=2 ttl=64 time=1362 ms
 64 bytes from 10.42.18.15: icmp_req=3 ttl=64 time=362 ms
 64 bytes from 10.42.18.15: icmp_req=4 ttl=64 time=172 ms

Login to second BBN VM and send traffic to remote:

 $ ssh root@192.1.242.8
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:9f:cf:e6  
          inet addr:10.42.18.77  Bcast:10.42.18.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe9f:cfe6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1440 (1.4 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.18.15
 PING 10.42.18.15 (10.42.18.15) 56(84) bytes of data.
 64 bytes from 10.42.18.15: icmp_req=1 ttl=64 time=2561 ms
 64 bytes from 10.42.18.15: icmp_req=2 ttl=64 time=1562 ms
 64 bytes from 10.42.18.15: icmp_req=3 ttl=64 time=654 ms
 64 bytes from 10.42.18.15: icmp_req=4 ttl=64 time=171 ms

Login to first RENCI VM and send traffic to remote:

 $ ssh root@152.54.14.14
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:92:84:59  
          inet addr:10.42.18.16  Bcast:10.42.18.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe92:8459/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1440 (1.4 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.18.98
 PING 10.42.18.98 (10.42.18.98) 56(84) bytes of data.
 64 bytes from 10.42.18.98: icmp_req=1 ttl=64 time=4554 ms
 64 bytes from 10.42.18.98: icmp_req=4 ttl=64 time=1610 ms
 64 bytes from 10.42.18.98: icmp_req=2 ttl=64 time=3610 ms
 64 bytes from 10.42.18.98: icmp_req=5 ttl=64 time=710 ms
 64 bytes from 10.42.18.98: icmp_req=3 ttl=64 time=2711 ms
 64 bytes from 10.42.18.98: icmp_req=6 ttl=64 time=172 ms

Login to second RENCI VM and send traffic to remote:

 $ ssh root@152.54.14.16
 root@debian:~# ifconfig eth1
 eth1      Link encap:Ethernet  HWaddr 52:54:00:4c:59:50  
          inet addr:10.42.18.15  Bcast:10.42.18.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe4c:5950/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1116 (1.0 KiB)  TX bytes:398 (398.0 B)
 root@debian:~# ping 10.42.18.77
 PING 10.42.18.77 (10.42.18.77) 56(84) bytes of data.
 64 bytes from 10.42.18.77: icmp_req=1 ttl=64 time=2263 ms
 64 bytes from 10.42.18.77: icmp_req=2 ttl=64 time=1339 ms
 64 bytes from 10.42.18.77: icmp_req=3 ttl=64 time=394 ms
 64 bytes from 10.42.18.77: icmp_req=4 ttl=64 time=171 ms

17. As Experimenter3, request ListResources from BBN ExoGENI, GPO FOAM, and FOAM at Meso-scale Site

GPO ProtoGENI user credentials for lnevers2@bbn.com were used for Experimenter3.

Requested ListResources from each aggregate in the experiment. Note that the WAPG resource at Rutgers is available via the Emulab ProtoGENI aggregate. Executed the following to get resources for each aggregate:

$ omni.py listresources -a exobbn -o
$ omni.py listresources -a of-bbn -o
$ omni.py listresources -a of-i2 -o
$ omni.py listresources -a of-rutgers -o
$ omni.py listresources -a pgeni -o   
$ omni.py listresources -a pg2 --api-version 2 -t GENI 3 -o   

18. Review ListResources output from all AMs

Reviewed content of advertisement RSpecs for each of the aggregates polled in the previous step.

19. Define a request RSpec for a VM at the BBN ExoGENI

Defined RSpec for one VM on the shared VLAN 1750 in the BBN ExoGENI rack: EG-EXP-6-exp3-exobbn.rspec

20. Define a request RSpec for a compute resource at the GPO Meso-scale site

Defined RSpec for one PG Resource at the GPO PG site: EG-EXP-6-exp3-bbn-pgeni.rspec

21. Define a request RSpec for a compute resource at a Meso-scale site

Defined RSpec for one WAPG Resource at the Rutgers site EG-EXP-6-exp3-rutgers-wapg.rspec

22. Define request RSpecs for OpenFlow resources to allow connection from OF BBN ExoGENI to Meso-scale OF sites GPO (NLR) and Rutgers Sites (I2)

Defined the OpenFlow RSpecs for the sites in this experiment; note that the BBN OF RSpec overlaps with the one in experiment 1 and is reused here.

23. Create a third slice

$ omni.py createslice EG-EXP-6-exp3

24. Create sliver that connects the Internet2 Meso-scale OpenFlow site to the BBN ExoGENI Site, and the GPO Meso-scale site

Requests are as follows:

 $ omni.py -a exobbn createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-exobbn.rspec  
 $ omni.py -a of-exobbn createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-openflow-exobbn.rspec 
 $ omni.py -a pgeni createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-bbn-pgeni.rspec  ## See Note1
 $ omni.py -a of-bbn createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-openflow-bbn.rspec
 $ omni.py -a of-i2 createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-openflow-i2.rspec
 $ omni.py -a pg2 createsliver EG-EXP-6-exp3 --api-version 2 -t GENI 3 EG-EXP-6-exp3-rutgers-wapg.rspec ## Both WAPG not available
 $ omni.py -a of-rutgers createsliver EG-EXP-6-exp3 EG-EXP-6-exp3-openflow-rutgers.rspec 

Note 1: The GPO ProtoGENI reservation must be requested before the BBN OpenFlow FOAM request, in order to determine the switch port assigned to the GPO PG node and then configure the BBN FOAM RSpec accordingly.
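
One way to see which interface the GPO PG node was assigned, before building the BBN FOAM RSpec, is to inspect the ProtoGENI manifest saved by listresources; a hedged sketch, assuming the manifest file name shown in step 25 (mapping the interface to a physical switch port still requires site documentation):

 # List the interface assignments for the reserved PG node.
 $ egrep "interface" EG-EXP-6-exp3-rspec-pgeni-gpolab-bbn-com.xml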

25. Log in to each of the compute resources in the slice

Determine which compute resources are allocated to the sliver in the BBN ExoGENI rack. First make sure sliverstatus reports ready, then get the allocated host with Omni:

 $ omni.py -a exobbn sliverstatus EG-EXP-6-exp3
 $ omni.py -a exobbn listresources EG-EXP-6-exp3 -o
 $ egrep hostname EG-EXP-6-exp3-rspec-bbn-hn-exogeni-net-11443-orca.xml 
   <login authentication="ssh-keys" hostname="192.1.242.9" port="22" username="root"/>

Determine the resources allocated to the WAPG Rutgers node:

 $ omni.py -a pg2 sliverstatus EG-EXP-6-exp3 --api-version 2 -t GENI 3 -o

Determine the resources allocated to the GPO PG node:

 $ omni.py -a pgeni sliverstatus EG-EXP-6-exp3
 $ omni.py -a pgeni listresources EG-EXP-6-exp3 -o  
 $ egrep "hostname" EG-EXP-6-exp3-rspec-pgeni-gpolab-bbn-com.xml
    <login authentication="ssh-keys" hostname="pc9.pgeni.gpolab.bbn.com" port="22" username="lnevers"/>  

Login to each node in the slice, configure data plane network interfaces on any non-ExoGENI resources as necessary, and send traffic to each of the other systems; leave traffic running.
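
For the non-ExoGENI GPO PG node, that dataplane configuration might look like the following minimal sketch (the interface name eth1 and the /16 netmask are assumptions; the 10.42.11.209 address is the one used below):

 # On pc9.pgeni.gpolab.bbn.com: bring up the dataplane interface by hand.
 $ sudo /sbin/ifconfig eth1 10.42.11.209 netmask 255.255.0.0 up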

Login to ExoGENI Node and send traffic to GPO PG node:

$ ssh 192.1.242.9 -l root
root@debian:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 52:54:00:e6:5e:1a  
          inet addr:10.42.11.198  Bcast:10.42.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fee6:5e1a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:38 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2496 (2.4 KiB)  TX bytes:398 (398.0 B)
$ ping 10.42.11.209

Login to GPO PG node and send traffic to ExoGENI node:

 $ ssh pc9.pgeni.gpolab.bbn.com

 $ ping 10.42.11.198

26. Verify that all three experiments continue to run

Verify that each experiment is running without impacting each other's traffic and verify that data is exchanged over the path along which data is supposed to flow.

Attempted to exchange traffic from Experimenter1 host to Experimenter2 host without success:

$ ssh root@192.1.242.6
Linux debian 2.6.32-5-amd64 #1 SMP Mon Jan 16 16:22:28 UTC 2012 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Aug 15 19:38:46 2012 from arendia.gpolab.bbn.com
root@debian:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 52:54:00:5a:92:fe  
          inet addr:10.42.19.27  Bcast:10.42.19.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe5a:92fe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13662 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12215 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1217282 (1.1 MiB)  TX bytes:1087918 (1.0 MiB)

root@debian:~# ping 10.42.18.15
PING 10.42.18.15 (10.42.18.15) 56(84) bytes of data.
From 64.119.128.41 icmp_seq=87 Destination Host Unreachable

Attempted several other combinations which also failed.

27. Review baseline, GMOC, and monitoring statistics

The initial plan for this step was to review the slice monitoring data at the GMOC database, but after discussion with the ExoGENI team, this step was modified to use Iperf to gather statistics for the experimenter.

Iperf Run-1

As Experimenter1 (lnevers), ran Iperf from the BBN host to the RENCI host in Experiment 1, with the other two experiments (2 and 3) idle, with these results:

root@debian:~# iperf -c 10.42.19.25
------------------------------------------------------------
Client connecting to 10.42.19.25, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.19.27 port 36977 connected with 10.42.19.25 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  5.77 MBytes  4.81 Mbits/sec
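
The server side of these runs is not shown in the transcripts; presumably an Iperf server was started on the target host beforehand, e.g. (assumed):

 # On the Experiment 1 RENCI VM (10.42.19.25), before the client run above:
 root@debian:~# iperf -s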

Iperf Run-2

As Experimenter2 (lnevers), ran Iperf concurrently from each of the two BBN hosts to the two RENCI hosts in Experiment 2, with the other two experiments (1 and 3) idle, with these results on the first BBN host:

root@debian:~# iperf -c 10.42.18.15
------------------------------------------------------------
Client connecting to 10.42.18.15, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.18.98 port 42409 connected with 10.42.18.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  5.83 MBytes  4.87 Mbits/sec

and on the second BBN host:

root@debian:~# iperf -c 10.42.18.16
------------------------------------------------------------
Client connecting to 10.42.18.16, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.18.77 port 59549 connected with 10.42.18.16 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  5.83 MBytes  4.86 Mbits/sec

Iperf Run-3

Combined all Iperf traffic from Run-1 and Run-2, and ran all three client/server combinations concurrently.

Results for Experiment1 BBN Host1:

# iperf -c 10.42.19.25
------------------------------------------------------------
Client connecting to 10.42.19.25, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.19.27 port 36979 connected with 10.42.19.25 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  5.70 MBytes  4.72 Mbits/sec

Results from Experiment2 Host1:

# iperf -c 10.42.18.15
------------------------------------------------------------
Client connecting to 10.42.18.15, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.18.98 port 42410 connected with 10.42.18.15 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  5.82 MBytes  4.88 Mbits/sec

Results from Experiment2 Host2:

root@debian:~# iperf -c 10.42.18.16
------------------------------------------------------------
Client connecting to 10.42.18.16, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.42.18.77 port 59550 connected with 10.42.18.16 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  5.90 MBytes  4.94 Mbits/sec

28. As site administrator, identify all controllers that the BBN ExoGENI OpenFlow switch is connected to

An experimenter does not have site administrator privileges to identify all OpenFlow controllers that have access to the BBN ExoGENI OpenFlow switch. It was, however, possible to verify that the switches used by the experiments on this page are connected to the expected OpenFlow controllers.

EG-EXP-6-exp1

Based on the RSpecs, the following NOX controller and port are requested for EG-EXP-6-exp1:

 <openflow:controller url="tcp:mallorea.gpolab.bbn.com:33017" type="primary" />

On the system mallorea.gpolab.bbn.com the nox controller is running and listening on port 33017 and allowing console connections on port 11017:

 $ ./nox_core --info=$HOME/nox-33017.info -i ptcp:33017 switch lavi_switches lavi_swlinks jsonmessenger=tcpport=11017,sslport=0

Verified the switches connected to the controller for EG-EXP-6-exp1 by using the nox console to get a list of switches connected:

 $ ./nox-console -n localhost -p 11017 getnodes | sort
  00:00:00:10:10:17:50:01
  00:64:08:17:f4:b3:5b:00
  00:64:08:17:f4:b5:2a:00  
  06:d6:00:12:e2:b8:a5:d0
  06:d6:00:24:a8:5d:0b:00
  06:d6:00:24:a8:d2:b8:40
  0e:84:00:23:47:c8:bc:00
  0e:84:00:23:47:ca:bc:40
  0e:84:00:24:a8:d2:48:00
  0e:84:00:24:a8:d2:b8:40
  0e:84:00:26:f1:40:a8:00

In the above list of connections, 00:64:08:17:f4:b5:2a:00 is the BBN ExoGENI OpenFlow Switch and 00:64:08:17:f4:b3:5b:00 is the RENCI ExoGENI OpenFlow Switch.
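
Rather than scanning the full getnodes output, the presence of the two rack datapaths can be checked directly; a minimal convenience filter (not from the recorded run):

 # Confirm the BBN and RENCI ExoGENI switch DPIDs are attached to this controller.
 $ ./nox-console -n localhost -p 11017 getnodes | egrep "00:64:08:17:f4:b5:2a:00|00:64:08:17:f4:b3:5b:00"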

EG-EXP-6-exp2

Based on the RSpecs, the following controller and port are requested for EG-EXP-6-exp2:

 <openflow:controller url="tcp:mallorea.gpolab.bbn.com:33018" type="primary" />

On the system mallorea.gpolab.bbn.com the controller is running and listening on port 33018 and allowing console connections on port 11018:

 $ ./nox_core --info=$HOME/nox-33018.info -i ptcp:33018 switch lavi_switches lavi_swlinks jsonmessenger=tcpport=11018,sslport=0

Verified the switches connected to the controller for EG-EXP-6-exp2 by using the nox console to get a list of switches connected:

 $ ./nox-console -n localhost -p 11018 getnodes
 00:64:08:17:f4:b3:5b:00
 00:64:08:17:f4:b5:2a:00
 06:d6:00:12:e2:b8:a5:d0
 06:d6:00:24:a8:d2:b8:40
 0e:84:00:23:47:c8:bc:00
 0e:84:00:23:47:ca:bc:40
 0e:84:00:24:a8:d2:48:00
 0e:84:00:24:a8:d2:b8:40
 0e:84:00:26:f1:40:a8:00

In the above list of connections, 00:64:08:17:f4:b5:2a:00 is the BBN ExoGENI OpenFlow Switch and 00:64:08:17:f4:b3:5b:00 is the RENCI ExoGENI OpenFlow Switch.

EG-EXP-6-exp3

Based on the RSpecs, the following controller and port are requested for EG-EXP-6-exp3:

 <openflow:controller url="tcp:mallorea.gpolab.bbn.com:33020" type="primary" />

On the system mallorea.gpolab.bbn.com the controller is running and listening on port 33020 and allowing console connections on port 11020:

 $ ./nox_core --info=$HOME/nox-33020.info -i ptcp:33020 switch lavi_switches lavi_swlinks jsonmessenger=tcpport=11020,sslport=0

Verified the switches connected to the controller for EG-EXP-6-exp3 by using the nox console to get a list of switches connected:

 $ ./nox-console -n localhost -p 11020 getnodes
 06:d6:00:24:a8:c4:b9:00
 06:d6:00:12:e2:b8:a5:d0
 00:00:0e:84:40:39:19:96
 00:00:0e:84:40:39:18:58
 00:64:08:17:f4:b5:2a:00
 00:00:0e:84:40:39:1b:93
 00:00:0e:84:40:39:1a:57
 00:00:0e:84:40:39:18:1b

In the above list of connections, 00:64:08:17:f4:b5:2a:00 is the BBN ExoGENI OpenFlow Switch.

29. As Experimenter3, verify that traffic only flows on the network resources assigned to slivers as specified by the controller

From the Experimenter3 host, attempted to connect to an Experimenter2 host:

$ ssh 192.1.242.9 -l root
root@debian:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 52:54:00:e6:5e:1a  
          inet addr:10.42.11.198  Bcast:10.42.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fee6:5e1a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2668 errors:0 dropped:0 overruns:0 frame:0
          TX packets:731 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:167664 (163.7 KiB)  TX bytes:30918 (30.1 KiB)

root@debian:~# ping 10.42.18.15
PING 10.42.18.15 (10.42.18.15) 56(84) bytes of data.
From 10.42.11.198 icmp_seq=2 Destination Host Unreachable
From 10.42.11.198 icmp_seq=3 Destination Host Unreachable
From 10.42.11.198 icmp_seq=4 Destination Host Unreachable

30. Verify control access to OF experiment

Verify that no default controller, switch fail-open behavior, or resource other than the experimenters' controllers can control how traffic flows on network resources assigned to the experimenters' slivers.

31. Set the hard and soft timeout of flowtable entries

32. Get switch statistics and flowtable entries for slivers from the OpenFlow switch.

33. Get layer 2 topology information about slivers in each slice.

34. Install flows that match on layer 2 fields and/or layer 3 fields.

35. Run test for at least 4 hours

36. Review monitoring statistics and checks as above

37. Stop traffic and delete slivers