EG-EXP-4: ExoGENI Multi-site Acceptance Test

This page captures the status of test case EG-EXP-4, which verifies the ability to support basic VM operations and data flows between two racks. For overall status, see the ExoGENI Acceptance Test Status page. Last update: 07/27/12

Test Status

This section captures the status for each step in the acceptance test plan.

Step     | State    | Date completed | Ticket       | Comments
---------|----------|----------------|--------------|----------------------------------------------
Step 1   | Complete |                | exoticket:46 | Run with ExoSM
Step 2   | Complete |                |              |
Step 3   | Complete |                |              |
Step 4   | Complete |                |              |
Step 5   | Complete |                |              |
Step 6   | Complete |                |              |
Step 7   | Complete |                |              | Run with ExoSM
Step 8   | Complete |                |              |
Step 9   | Complete |                |              |
Step 10  | Complete |                |              |
Step 11  | Complete |                |              |
Step 12  | Complete |                |              |
Step 13  |          |                |              | To be completed later
Step 14  | Blocked  |                |              | Monitoring does not support bare metal nodes
Step 15  |          |                |              |
Step 16  | Blocked  |                |              | Monitoring does not support bare metal nodes
Step 17  |          |                |              |


State Legend                      | Description
----------------------------------|-----------------------------------------------------------------------
Pass (green)                      | Test completed and met all criteria.
Pass: most criteria (light green) | Test completed and met most criteria; exceptions documented.
Fail (red)                        | Test completed and failed to meet criteria.
Complete (yellow)                 | Test completed but will require re-execution due to expected changes.
Blocked (orange)                  | Blocked by ticketed issue(s).
In Progress (light blue)          | Currently under test.


Test Plan Steps

This test case was modified to use the ExoSM to request resources across sites. The nickname:

exosm=,https://geni.renci.org:11443/orca/xmlrpc

is used in place of each individual site's SM nickname for each experiment in this test case:

exo-bbn=,https://bbn-hn.exogeni.net:11443/orca/xmlrpc
exo-rci=,https://rci-hn.exogeni.net:11443/orca/xmlrpc

Also, only VMs were used for the initial test run.
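
These nicknames are assumed to be defined using omni's standard aggregate_nicknames mechanism in the experimenter's omni configuration file (default location ~/.gcf/omni_config); a minimal sketch of that section:

[aggregate_nicknames]
# nickname=urn,url (URN left empty); these are the aggregates used in this test case
exosm=,https://geni.renci.org:11443/orca/xmlrpc
exo-bbn=,https://bbn-hn.exogeni.net:11443/orca/xmlrpc
exo-rci=,https://rci-hn.exogeni.net:11443/orca/xmlrpc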

Step 1. As Experimenter1, Request ListResources from RENCI ExoGENI

Using the credentials lnevers1@bbn.com, request ListResources from the ExoSM to determine which resources can be requested for the first experiment:

$ omni.py -a exosm listresources -o

The above command generates a file named rspec-geni-renci-org-11443-orca.xml.

Step 2. Review ListResources output

Reviewed the content of the output file rspec-geni-renci-org-11443-orca.xml and determined the site information needed for the VM request.
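
One way to skim the advertisement for the available nodes is to grep the output file; the element and attribute names below are typical of a GENI advertisement RSpec and may differ slightly depending on the RSpec version returned:

$ egrep '<node |component_manager_id' rspec-geni-renci-org-11443-orca.xml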

Step 3. Define a request RSpec

Define a request RSpec for a VM at BBN ExoGENI, a VM at RENCI ExoGENI, and an unbound exclusive non-OpenFlow VLAN to connect the two endpoints. The RSpec created for this experiment is EG-EXP-4-exp1.rspec.

Step 4. Create the first slice.

Using the following command, create a slice for the first experiment:

 $ omni.py createslice EG-EXP-4-exp1

Step 5. Create a sliver

Using the ExoSM and the RSpec defined above, create a sliver with one VM at BBN and one VM at RENCI:

$ omni.py createsliver -a exosm EG-EXP-4-exp1 EG-EXP-4-exp1.rspec

Verify that sliver status is ready:

$ omni.py sliverstatus -a exosm EG-EXP-4-exp1  
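
The sliver can take several minutes to become ready; a simple polling sketch is shown below (the exact string that sliverstatus prints for the ready state is assumed here):

$ until omni.py sliverstatus -a exosm EG-EXP-4-exp1 2>&1 | grep -q "'geni_status': 'ready'"; do sleep 30; done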

When sliverstatus reports geni_status as ready, you can collect ListResources output for the sliver to determine which VMs are allocated to it:

$  omni.py listresources -a exosm EG-EXP-4-exp1  -o
$  egrep hostname EG-EXP-4-exp1-rspec-geni-renci-org-11443-orca.xml 
   <login authentication="ssh-keys" hostname="152.54.14.34" port="22" username="root"/>      
   <login authentication="ssh-keys" hostname="192.1.242.12" port="22" username="root"/>

Step 6. Log in to each of the systems, send traffic to the other system, and leave traffic running

Connect to the RENCI VM and send traffic to the BBN VM:

$ ssh root@152.54.14.34
root@debian:~# ifconfig|egrep 172.16
          inet addr:172.16.7.2  Bcast:172.16.7.255  Mask:255.255.255.0
root@debian:~# ping 172.16.7.1
PING 172.16.7.1 (172.16.7.1) 56(84) bytes of data.
64 bytes from 172.16.7.1: icmp_req=1 ttl=64 time=48.2 ms
64 bytes from 172.16.7.1: icmp_req=2 ttl=64 time=18.6 ms
64 bytes from 172.16.7.1: icmp_req=3 ttl=64 time=17.9 ms

Connect to the BBN VM and send traffic to the RENCI VM:

$ ssh root@192.1.242.12
root@debian:~# ifconfig |egrep 172.16
          inet addr:172.16.7.1  Bcast:172.16.7.255  Mask:255.255.255.0
root@debian:~# ping 172.16.7.2 
PING 172.16.7.2 (172.16.7.2) 56(84) bytes of data.
64 bytes from 172.16.7.2: icmp_req=1 ttl=64 time=24.4 ms
64 bytes from 172.16.7.2: icmp_req=2 ttl=64 time=17.9 ms
64 bytes from 172.16.7.2: icmp_req=3 ttl=64 time=17.8 ms
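
To leave traffic running for the remainder of the test, as this step requires, the ping can be left in the background with its output logged; a sketch run on the RENCI VM (the log file name is arbitrary):

root@debian:~# nohup ping 172.16.7.1 > /root/ping-exp1.log 2>&1 &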

Step 7. As Experimenter2, Request ListResources from RENCI ExoGENI

Using the credentials lnevers@bbn.com, request ListResources from the ExoSM to determine which resources can be requested:

$ omni.py -a exosm listresources -o

Step 8. Define a request RSpec

Define a request RSpec for one VM and one bare metal node, each with two interfaces, in the BBN ExoGENI rack; two VMs at RENCI; and two VLANs to connect the BBN ExoGENI rack to the RENCI ExoGENI rack.

The RSpec created for this topology is EG-EXP-4-exp2.rspec.
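
As a quick sanity check of the request before submitting it, the node and link counts can be grepped out of the file (element names assume a GENI request RSpec):

$ egrep -c '<node ' EG-EXP-4-exp2.rspec     # expect 4 nodes (1 VM + 1 bare metal at BBN, 2 VMs at RENCI)
$ egrep -c '<link ' EG-EXP-4-exp2.rspec     # expect 2 links (the two VLANs)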

Step 9. Create a second slice

Using the following command, create a slice for the second experiment:

$ omni.py createslice EG-EXP-4-exp2

Step 10. In the second slice, create a sliver at the RENCI ExoGENI aggregate using the RSpecs defined above

Using the ExoSM and the RSpec defined in step 8, create a sliver:

$ omni.py createsliver -a exosm EG-EXP-4-exp2 EG-EXP-4-exp2.rspec

Verify that sliver status is ready:

$ omni.py sliverstatus -a exosm EG-EXP-4-exp2   #xxxxxxx

Determined which nodes (VMs and bare metal) are allocated to this sliver:

$ omni.py listresources -a exosm EG-EXP-4-exp2 -o
$ egrep ssh EG-EXP-4-exp2-rspec-geni-renci-org-11443-orca.xml 
                  <login authentication="ssh-keys" hostname="192.1.242.110" port="22" username="root"/>      
                  <login authentication="ssh-keys" hostname="192.1.242.12" port="22" username="root"/>      
                  <login authentication="ssh-keys" hostname="152.54.14.18" port="22" username="root"/>      
                  <login authentication="ssh-keys" hostname="152.54.14.19" port="22" username="root"/>         

Step 11. Log in to each of the end-point systems, and send traffic to the other end-point system which shares the same VLAN

Logged in to each of the hosts and pinged the remote end-point that shares the same VLAN. For example, from one of the BBN ExoGENI nodes:

$ ssh root@192.1.242.24
Linux debian 2.6.32-5-amd64 #1 SMP Mon Jan 16 16:22:28 UTC 2012 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Jul  6 15:12:22 2012 from arendia.gpolab.bbn.com
root@debian:~# ping 172.16.3.2
PING 172.16.3.2 (172.16.3.2) 56(84) bytes of data.
64 bytes from 172.16.3.2: icmp_req=1 ttl=64 time=27.6 ms
64 bytes from 172.16.3.2: icmp_req=2 ttl=64 time=17.7 ms
64 bytes from 172.16.3.2: icmp_req=3 ttl=64 time=17.8 ms
64 bytes from 172.16.3.2: icmp_req=4 ttl=64 time=17.8 ms
^C
--- 172.16.3.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 17.741/20.262/27.631/4.256 ms
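
The second VLAN was exercised the same way; based on the interface addresses shown in step 12, the peer on that VLAN is assumed to be 172.16.2.1:

root@debian:~# ping -c 3 172.16.2.1     # assumed peer address on the second data-plane VLAN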

Step 12. Verify traffic handling per experiment, VM isolation, and MAC address assignment

Logged in to each host and tried to ping remote nodes in other experiments; no traffic was exchanged.
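
For example, from a node in the second experiment, pinging a data-plane address that exists only in the first experiment (addresses from steps 5 and 6) should show 100% packet loss:

root@debian:~# ping -c 3 -W 2 172.16.7.1     # belongs to the other experiment; expect no replies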

Verified MAC address assignment:

root@debian:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:16:3e:6e:22:3c  
          inet addr:10.103.0.23  Bcast:10.103.0.255  Mask:255.255.255.0
          inet6 addr: fe80::16:3eff:fe6e:223c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:582 errors:0 dropped:0 overruns:0 frame:0
          TX packets:431 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:62154 (60.6 KiB)  TX bytes:59306 (57.9 KiB)

eth1      Link encap:Ethernet  HWaddr 52:54:00:12:58:a7  
          inet addr:172.16.2.2  Bcast:172.16.2.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe12:58a7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:996 (996.0 B)  TX bytes:678 (678.0 B)

eth2      Link encap:Ethernet  HWaddr 52:54:00:d1:2d:24  
          inet addr:172.16.3.1  Bcast:172.16.3.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fed1:2d24/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1862 (1.8 KiB)  TX bytes:1238 (1.2 KiB)

Step 13. Construct and send a non-IP ethernet packet over the data plane interface.

To be executed later.
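
One possible way to run this step later is to send ARP requests over the data-plane interface, since ARP frames (EtherType 0x0806) are not IP packets; this sketch assumes the arping utility is installed on the node and that 172.16.2.1 is the peer's data-plane address:

root@debian:~# arping -c 3 -I eth1 172.16.2.1     # sends ARP (non-IP) frames out the data-plane interface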

Step 14. Review baseline monitoring statistics

Current monitoring does not support bare metal nodes, so no data was collected.

Step 15. Run test for at least 4 hours
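
A time-bounded variant of the step 6 traffic can be used here; the Linux ping -w flag sets a deadline in seconds (4 hours = 14400 s), and the log file name is arbitrary:

root@debian:~# nohup ping -w 14400 172.16.7.1 > /root/ping-4h.log 2>&1 &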

Step 16. Review baseline monitoring statistics

Current monitoring does not support bare metal nodes, so no data was collected.

Step 17. Stop traffic and delete slivers
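
A sketch of how this step would be carried out: stop any background pings left running on the nodes, then delete both slivers through the ExoSM:

root@debian:~# pkill ping                        # stop background traffic on each node
$ omni.py deletesliver -a exosm EG-EXP-4-exp1
$ omni.py deletesliver -a exosm EG-EXP-4-exp2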