Detailed test plan for EG-MON-3: GENI Active Experiment Inspection Test
- Page format
- Status of test
- High-level description from test plan
- Step 1 (prep): start a local experiment and terminate it
- Step 2 (prep): start an ExoSM experiment and terminate it
- Step 3 (prep): start an experiment and leave it running
- Step 4: view running VMs
- Step 5: get information about terminated VMs
- Step 6: get OpenFlow state information
- Step 7: verify MAC addresses on the rack dataplane switch
- Step 8: verify active dataplane traffic
Detailed test plan for EG-MON-3: GENI Active Experiment Inspection Test
This page is GPO's working page for performing EG-MON-3. It is public for informational purposes, but it is not an official status report. See GENIRacksHome/ExogeniRacks/AcceptanceTestStatus for the current status of ExoGENI acceptance tests.
Page format
- The status chart summarizes the state of this test
- The high-level description from test plan contains text copied exactly from the public test plan and acceptance criteria pages.
- The steps contain things I will actually do or verify:
- Steps may be composed of related substeps where I find this useful for clarity
- Each step is either a preparatory step (identified by "(prep)") or a verification step (the default):
- Preparatory steps are just things we have to do. They're not tests of the rack, but are prerequisites for subsequent verification steps
- Verification steps are steps in which we will actually look at rack output and make sure it is as expected. They contain a Using: block, which lists the steps to run the verification, and a Verify: block which lists what outcome is expected for the test to pass.
Status of test
See GENIRacksHome/ExogeniRacks/AcceptanceTestStatus for the meanings of test states.
Step | State | Date completed | Tickets | Closed tickets / Comments
1 | Pass | 2012-08-14 | |
2 | Pass | 2012-08-14 | |
3 | Pass | 2013-03-12 | |
4 | Pass | 2013-03-14 | |
5 | Pass: most criteria | 2013-05-03 | | The ability to get information about terminated VMs was asserted but not tested.
6 | Pass | 2013-03-12 | |
7 | Complete | 2013-03-14 | | We should re-test this if the switch gains the ability to track MAC addresses
8 | Pass | 2013-03-15 | |
High-level description from test plan
This test inspects the state of the rack data plane and control networks when experiments are running, and verifies that a site administrator can find information about running experiments.
Procedure
- An experimenter from the GPO starts up experiments to ensure there is data to look at:
- An experimenter runs an experiment containing at least one rack VM, and terminates it.
- An experimenter runs an experiment containing at least one rack VM, and leaves it running.
- A site administrator uses available system and experiment data sources to determine current experimental state, including:
- How many VMs are running and which experimenters own them
- How many VMs were terminated within the past day, and which experimenters owned them
- What OpenFlow controllers the data plane switch, the rack FlowVisor, and the rack FOAM are communicating with
- A site administrator examines the switches and other rack data sources, and determines:
- What MAC addresses are currently visible on the data plane switch and what experiments do they belong to?
- For some experiment which was terminated within the past day, what data plane and control MAC and IP addresses did the experiment use?
- For some experimental data path which is actively sending traffic on the data plane switch, do changes in interface counters show approximately the expected amount of traffic into and out of the switch?
Criteria to verify as part of this test
- VII.09. A site administrator can determine the MAC addresses of all physical host interfaces, all network device interfaces, all active experimental VMs, and all recently-terminated experimental VMs. (C.3.f)
- VII.11. A site administrator can locate current configuration of flowvisor, FOAM, and any other OpenFlow services, and find logs of recent activity and changes. (D.6.a)
- VII.18. Given a public IP address and port, an exclusive VLAN, a sliver name, or a piece of user-identifying information such as e-mail address or username, a site administrator or GMOC operator can identify the email address, username, and affiliation of the experimenter who controlled that resource at a particular time. (D.7)
Step 1 (prep): start a local experiment and terminate it
Overview of Step 1
- An experimenter requests an experiment from the local SM containing two rack VMs and a dataplane VLAN
- The experimenter logs into a VM, and sends dataplane traffic
- The experimenter terminates the experiment
Results of Step 1 from 2012-08-14
I used this rspec:
<?xml version="1.0" encoding="UTF-8"?> <!-- This rspec will reserve the resources needed for EG-MON-3 Step 1. It requests two VMs from BBN ExoGENI ORCA , running Debian 5, named "rowan" and "martin", connected by an unbound dataplane VM. AM: https://bbn-hn.exogeni.gpolab.bbn.com:11443/orca/xmlrpc --> <rspec xmlns="http://www.geni.net/resources/rspec/3" xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xmlns:sharedvlan="http://www.protogeni.net/resources/rspec/ext/shared-vlan/1" xs:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd http://www.protogeni.net/resources/rspec/ext/shared-vlan/1 http://www.protogeni.net/resources/rspec/ext/shared-vlan/1/request.xsd" type="request"> <node client_id="rowan"> <sliver_type name="m1.small"> <disk_image name="http://geni-images.renci.org/images/standard/debian/debian-squeeze-amd64-neuca-2g.zfilesystem.sparse.v0.2.xml" version="397c431cb9249e1f361484b08674bc3381455bb9" /> </sliver_type> <services> <execute shell="sh" command="hostname rowan"/> </services> <interface client_id="rowan:1"> <ip address="172.16.1.11" netmask="255.255.255.0" /> </interface> </node> <node client_id="martin"> <sliver_type name="m1.small"> <disk_image name="http://geni-images.renci.org/images/standard/debian/debian-squeeze-amd64-neuca-2g.zfilesystem.sparse.v0.2.xml" version="397c431cb9249e1f361484b08674bc3381455bb9" /> </sliver_type> <services> <execute shell="sh" command="hostname martin"/> </services> <interface client_id="martin:1"> <ip address="172.16.1.12" netmask="255.255.255.0" /> </interface> </node> <link client_id="dataplane"> <component_manager name="urn:publicid:IDN+bbnvmsite+authority+cm"/> <interface_ref client_id="rowan:1"/> <interface_ref client_id="martin:1"/> <property source_id="rowan:1" dest_id="martin:1" /> <property source_id="martin:1" dest_id="rowan:1" /> <link_type name="dataplane"/> </link> </rspec>
I created the sliver successfully:
slicename=jbstmp
rspec=/home/jbs/subversion/geni/GENIRacks/ExoGENI/Spiral4/Rspecs/AcceptanceTests/EG-MON-3/step-1.rspec
am=https://bbn-hn.exogeni.gpolab.bbn.com:11443/orca/xmlrpc
omni createslice $slicename
omni -a $am createsliver $slicename $rspec
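Before moving on, it's worth confirming that the sliver actually came up; omni's sliverstatus command works for this (a quick check, using the same $am and $slicename as above):

omni -a $am sliverstatus $slicename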
I got two VMs:
[14:32:04] jbs@jericho:/home/jbs +$ omni -a $am listresources $slicename |& grep hostname=
<login authentication="ssh-keys" hostname="192.1.242.5" port="22" username="root"/>
<login authentication="ssh-keys" hostname="192.1.242.6" port="22" username="root"/>
I created an account for myself on them and installed my preferred config files:
logins='192.1.242.5 192.1.242.6'
rootlogins=$(echo $logins | sed -re 's/([^ ]+)/root@\1/g')
for login in $rootlogins ; do ssh $login date ; done
shmux -c "apt-get install sudo" $rootlogins
shmux -c 'sed -i -e "s/^%sudo ALL=(ALL) ALL$/%sudo ALL=(ALL) NOPASSWD:ALL/" /etc/sudoers' $rootlogins
shmux -c 'sed -i -re "s/^(127.0.0.1.+localhost)$/\1 $(hostname)/" /etc/hosts' $rootlogins
shmux -c 'useradd -c "Josh Smift" -G sudo -m -s /bin/bash jbs' $rootlogins
shmux -c "sudo -u jbs mkdir ~jbs/.ssh" $rootlogins
shmux -c "grep jbs /root/.ssh/authorized_keys > ~jbs/.ssh/authorized_keys" $rootlogins
shmux -c "sudo chown jbs:jbs ~/.ssh/authorized_keys" $logins
shmux -c "rm ~jbs/.profile" $logins
for login in $logins ; do rsync -a ~/.cfhome/ $login: && echo $login & done
shmux -c 'sudo apt-get install iperf' $logins
I then logged in to the two nodes, and ran a five-minute 10 Mbit iperf UDP between the two. On martin (receiver):
[15:27:11] jbs@martin:/home/jbs +$ nice -n 19 iperf -u -B 172.16.1.12 -s -i 10
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.1.12
Receiving 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.1.12 port 5001 connected with 172.16.1.11 port 57075
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[ 3] 0.0-10.0 sec 11.9 MBytes 10.0 Mbits/sec 0.017 ms 0/ 8504 (0%)
[ 3] 10.0-20.0 sec 11.9 MBytes 10.0 Mbits/sec 0.023 ms 0/ 8503 (0%)
[ 3] 20.0-30.0 sec 11.9 MBytes 10.0 Mbits/sec 0.021 ms 0/ 8504 (0%)
[ 3] 30.0-40.0 sec 11.9 MBytes 10.0 Mbits/sec 0.024 ms 0/ 8504 (0%)
[ 3] 40.0-50.0 sec 11.9 MBytes 10.0 Mbits/sec 0.021 ms 0/ 8504 (0%)
[ 3] 50.0-60.0 sec 11.9 MBytes 10.0 Mbits/sec 0.015 ms 0/ 8504 (0%)
[ 3] 60.0-70.0 sec 11.9 MBytes 10.0 Mbits/sec 0.018 ms 0/ 8503 (0%)
[ 3] 70.0-80.0 sec 11.9 MBytes 10.0 Mbits/sec 0.022 ms 0/ 8504 (0%)
[ 3] 80.0-90.0 sec 11.9 MBytes 10.0 Mbits/sec 0.025 ms 0/ 8504 (0%)
[ 3] 90.0-100.0 sec 11.9 MBytes 10.0 Mbits/sec 0.025 ms 0/ 8504 (0%)
[ 3] 100.0-110.0 sec 11.9 MBytes 10.0 Mbits/sec 0.020 ms 0/ 8504 (0%)
[ 3] 110.0-120.0 sec 11.9 MBytes 10.0 Mbits/sec 0.026 ms 0/ 8503 (0%)
[ 3] 120.0-130.0 sec 11.9 MBytes 10.0 Mbits/sec 0.032 ms 0/ 8504 (0%)
[ 3] 130.0-140.0 sec 11.9 MBytes 10.0 Mbits/sec 0.027 ms 0/ 8504 (0%)
[ 3] 140.0-150.0 sec 11.9 MBytes 10.0 Mbits/sec 0.019 ms 0/ 8504 (0%)
[ 3] 150.0-160.0 sec 11.9 MBytes 10.0 Mbits/sec 0.022 ms 0/ 8504 (0%)
[ 3] 160.0-170.0 sec 11.9 MBytes 10.0 Mbits/sec 0.026 ms 0/ 8504 (0%)
[ 3] 170.0-180.0 sec 11.9 MBytes 10.0 Mbits/sec 0.019 ms 0/ 8503 (0%)
[ 3] 180.0-190.0 sec 11.9 MBytes 10.0 Mbits/sec 0.020 ms 0/ 8504 (0%)
[ 3] 190.0-200.0 sec 11.9 MBytes 10.0 Mbits/sec 0.027 ms 0/ 8504 (0%)
[ 3] 200.0-210.0 sec 11.9 MBytes 10.0 Mbits/sec 0.020 ms 0/ 8504 (0%)
[ 3] 210.0-220.0 sec 11.9 MBytes 10.0 Mbits/sec 0.022 ms 0/ 8503 (0%)
[ 3] 220.0-230.0 sec 11.9 MBytes 10.0 Mbits/sec 0.018 ms 0/ 8504 (0%)
[ 3] 230.0-240.0 sec 11.9 MBytes 10.0 Mbits/sec 0.020 ms 0/ 8504 (0%)
[ 3] 240.0-250.0 sec 11.9 MBytes 10.0 Mbits/sec 0.033 ms 0/ 8504 (0%)
[ 3] 250.0-260.0 sec 11.9 MBytes 10.0 Mbits/sec 0.031 ms 0/ 8503 (0%)
[ 3] 260.0-270.0 sec 11.9 MBytes 10.0 Mbits/sec 0.018 ms 0/ 8504 (0%)
[ 3] 270.0-280.0 sec 11.9 MBytes 10.0 Mbits/sec 0.022 ms 0/ 8504 (0%)
[ 3] 280.0-290.0 sec 11.9 MBytes 10.0 Mbits/sec 0.021 ms 0/ 8504 (0%)
[ 3] 0.0-300.0 sec 358 MBytes 10.0 Mbits/sec 0.016 ms 0/255103 (0%)
[ 3] 0.0-300.0 sec 1 datagrams received out-of-order
On rowan (sender):
[15:27:10] jbs@rowan:/home/jbs +$ nice -n 19 iperf -u -c 172.16.1.12 -t 300 -b 10M
------------------------------------------------------------
Client connecting to 172.16.1.12, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.1.11 port 57075 connected with 172.16.1.12 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3] 0.0-300.0 sec 358 MBytes 10.0 Mbits/sec
[ 3] Sent 255104 datagrams
[ 3] Server Report:
[ 3] 0.0-300.0 sec 358 MBytes 10.0 Mbits/sec 0.016 ms 0/255103 (0%)
[ 3] 0.0-300.0 sec 1 datagrams received out-of-order
(I also had some earlier runs where there was a lot of packet loss, but things smoothed out over time, so I kept this one.)
I then deleted the sliver:
omni -a $am deletesliver $slicename $rspec
Step 2 (prep): start an ExoSM experiment and terminate it
Overview of Step 2
- An experimenter requests an experiment from the ExoSM containing two rack VMs and a dataplane VLAN
- The experimenter logs into a VM, and sends dataplane traffic
- The experimenter terminates the experiment
Results of Step 2 from 2012-08-14
I used this rspec:
<?xml version="1.0" encoding="UTF-8"?> <!-- This rspec will reserve the resources needed for EG-MON-3 Step 2. It requests two VMs from ExoSM, running Debian 5, named "rowan" and "martin", connected by an unbound dataplane VM. AM: https://geni.renci.org:11443/orca/xmlrpc --> <rspec xmlns="http://www.geni.net/resources/rspec/3" xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xmlns:sharedvlan="http://www.protogeni.net/resources/rspec/ext/shared-vlan/1" xs:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd http://www.protogeni.net/resources/rspec/ext/shared-vlan/1 http://www.protogeni.net/resources/rspec/ext/shared-vlan/1/request.xsd" type="request"> <node client_id="rowan" component_manager_id="urn:publicid:IDN+bbnvmsite+authority+cm" > <sliver_type name="m1.small"> <disk_image name="http://geni-images.renci.org/images/standard/debian/debian-squeeze-amd64-neuca-2g.zfilesystem.sparse.v0.2.xml" version="397c431cb9249e1f361484b08674bc3381455bb9" /> </sliver_type> <services> <execute shell="sh" command="hostname rowan"/> </services> <interface client_id="rowan:1"> <ip address="172.16.1.11" netmask="255.255.255.0" /> </interface> </node> <node client_id="martin" component_manager_id="urn:publicid:IDN+bbnvmsite+authority+cm" > <sliver_type name="m1.small"> <disk_image name="http://geni-images.renci.org/images/standard/debian/debian-squeeze-amd64-neuca-2g.zfilesystem.sparse.v0.2.xml" version="397c431cb9249e1f361484b08674bc3381455bb9" /> </sliver_type> <services> <execute shell="sh" command="hostname martin"/> </services> <interface client_id="martin:1"> <ip address="172.16.1.12" netmask="255.255.255.0" /> </interface> </node> <link client_id="dataplane"> <component_manager name="urn:publicid:IDN+bbnvmsite+authority+cm"/> <interface_ref client_id="rowan:1"/> <interface_ref client_id="martin:1"/> <property source_id="rowan:1" dest_id="martin:1" /> <property source_id="martin:1" dest_id="rowan:1" /> <link_type name="dataplane"/> </link> </rspec>
I created the sliver successfully:
slicename=jbstmp
rspec=/home/jbs/subversion/geni/GENIRacks/ExoGENI/Spiral4/Rspecs/AcceptanceTests/EG-MON-3/step-2.rspec
am=https://geni.renci.org:11443/orca/xmlrpc
omni createslice $slicename
omni -a $am createsliver $slicename $rspec
I got two VMs:
[16:26:59] jbs@jericho:/home/jbs +$ omni -a $am listresources $slicename |& grep hostname=
<login authentication="ssh-keys" hostname="192.1.242.8" port="22" username="root"/>
<login authentication="ssh-keys" hostname="192.1.242.7" port="22" username="root"/>
I created an account for myself on them and installed my preferred config files:
logins='192.1.242.7 192.1.242.8'
rootlogins=$(echo $logins | sed -re 's/([^ ]+)/root@\1/g')
for login in $rootlogins ; do ssh $login date ; done
shmux -c "apt-get install sudo" $rootlogins
shmux -c 'sed -i -e "s/^%sudo ALL=(ALL) ALL$/%sudo ALL=(ALL) NOPASSWD:ALL/" /etc/sudoers' $rootlogins
shmux -c 'sed -i -re "s/^(127.0.0.1.+localhost)$/\1 $(hostname)/" /etc/hosts' $rootlogins
shmux -c 'useradd -c "Josh Smift" -G sudo -m -s /bin/bash jbs' $rootlogins
shmux -c "sudo -u jbs mkdir ~jbs/.ssh" $rootlogins
shmux -c "grep jbs /root/.ssh/authorized_keys > ~jbs/.ssh/authorized_keys" $rootlogins
shmux -c "sudo chown jbs:jbs ~/.ssh/authorized_keys" $logins
shmux -c "rm ~jbs/.profile" $logins
for login in $logins ; do rsync -a ~/.cfhome/ $login: && echo $login & done
shmux -c 'sudo apt-get install iperf' $logins
I then logged in to the two nodes, and ran a five-minute 10 Mbit iperf UDP between the two. On martin (receiver):
[16:29:51] jbs@martin:/home/jbs +$ nice -n 19 iperf -u -B 172.16.1.12 -s -i 10
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.1.12
Receiving 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.1.12 port 5001 connected with 172.16.1.11 port 35124
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[ 3] 0.0-10.0 sec 11.9 MBytes 10.0 Mbits/sec 0.026 ms 4/ 8509 (0.047%)
[ 3] 0.0-10.0 sec 4 datagrams received out-of-order
[ 3] 10.0-20.0 sec 11.9 MBytes 10.0 Mbits/sec 0.018 ms 0/ 8503 (0%)
[ 3] 20.0-30.0 sec 11.9 MBytes 10.0 Mbits/sec 0.015 ms 0/ 8504 (0%)
[ 3] 30.0-40.0 sec 11.9 MBytes 10.0 Mbits/sec 0.015 ms 0/ 8503 (0%)
[ 3] 40.0-50.0 sec 11.9 MBytes 10.0 Mbits/sec 0.019 ms 0/ 8503 (0%)
[ 3] 50.0-60.0 sec 11.9 MBytes 10.0 Mbits/sec 0.027 ms 0/ 8504 (0%)
[ 3] 60.0-70.0 sec 11.9 MBytes 10.0 Mbits/sec 0.017 ms 0/ 8504 (0%)
[ 3] 70.0-80.0 sec 11.9 MBytes 10.0 Mbits/sec 0.022 ms 0/ 8503 (0%)
[ 3] 80.0-90.0 sec 11.9 MBytes 10.0 Mbits/sec 0.028 ms 0/ 8504 (0%)
[ 3] 90.0-100.0 sec 11.9 MBytes 10.0 Mbits/sec 0.015 ms 0/ 8503 (0%)
[ 3] 100.0-110.0 sec 11.9 MBytes 10.0 Mbits/sec 0.013 ms 0/ 8503 (0%)
[ 3] 110.0-120.0 sec 11.9 MBytes 10.0 Mbits/sec 0.018 ms 0/ 8503 (0%)
[ 3] 120.0-130.0 sec 11.9 MBytes 10.0 Mbits/sec 0.022 ms 0/ 8504 (0%)
[ 3] 130.0-140.0 sec 11.9 MBytes 10.0 Mbits/sec 0.023 ms 0/ 8503 (0%)
[ 3] 140.0-150.0 sec 11.9 MBytes 10.0 Mbits/sec 0.019 ms 0/ 8504 (0%)
[ 3] 150.0-160.0 sec 11.9 MBytes 10.0 Mbits/sec 0.022 ms 0/ 8503 (0%)
[ 3] 160.0-170.0 sec 11.9 MBytes 10.0 Mbits/sec 0.020 ms 0/ 8504 (0%)
[ 3] 170.0-180.0 sec 11.9 MBytes 10.0 Mbits/sec 0.017 ms 0/ 8503 (0%)
[ 3] 180.0-190.0 sec 11.9 MBytes 10.0 Mbits/sec 0.018 ms 0/ 8503 (0%)
[ 3] 190.0-200.0 sec 11.9 MBytes 10.0 Mbits/sec 0.012 ms 0/ 8504 (0%)
[ 3] 200.0-210.0 sec 11.9 MBytes 10.0 Mbits/sec 0.017 ms 0/ 8504 (0%)
[ 3] 210.0-220.0 sec 11.9 MBytes 10.0 Mbits/sec 0.016 ms 0/ 8503 (0%)
[ 3] 220.0-230.0 sec 11.9 MBytes 10.0 Mbits/sec 0.021 ms 0/ 8503 (0%)
[ 3] 230.0-240.0 sec 11.9 MBytes 10.0 Mbits/sec 0.024 ms 0/ 8504 (0%)
[ 3] 240.0-250.0 sec 11.9 MBytes 10.0 Mbits/sec 0.025 ms 0/ 8504 (0%)
[ 3] 250.0-260.0 sec 11.9 MBytes 10.0 Mbits/sec 0.015 ms 0/ 8503 (0%)
[ 3] 260.0-270.0 sec 11.9 MBytes 10.0 Mbits/sec 0.017 ms 0/ 8503 (0%)
[ 3] 270.0-280.0 sec 11.9 MBytes 10.0 Mbits/sec 0.016 ms 0/ 8504 (0%)
[ 3] 280.0-290.0 sec 11.9 MBytes 10.0 Mbits/sec 0.014 ms 0/ 8503 (0%)
[ 3] 0.0-300.0 sec 358 MBytes 10.0 Mbits/sec 0.018 ms 3/255103 (0.0012%)
[ 3] 0.0-300.0 sec 5 datagrams received out-of-order
On rowan (sender):
[16:29:58] jbs@rowan:/home/jbs +$ nice -n 19 iperf -u -c 172.16.1.12 -t 300 -b 10M
------------------------------------------------------------
Client connecting to 172.16.1.12, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.1.11 port 35124 connected with 172.16.1.12 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3] 0.0-300.0 sec 358 MBytes 10.0 Mbits/sec
[ 3] Sent 255104 datagrams
[ 3] Server Report:
[ 3] 0.0-300.0 sec 358 MBytes 10.0 Mbits/sec 0.017 ms 3/255103 (0.0012%)
[ 3] 0.0-300.0 sec 5 datagrams received out-of-order
I then deleted the sliver:
omni -a $am deletesliver $slicename $rspec
Step 3 (prep): start an experiment and leave it running
Overview of Step 3
- An experimenter requests an experiment from the local SM containing two rack VMs connected by an OpenFlow-controlled dataplane VLAN
- The experimenter configures a simple OpenFlow controller to pass dataplane traffic between the VMs
- The experimenter logs into one VM, and begins sending a continuous stream of dataplane traffic
Results of Step 3 from 2013-03-12
I created an ExoGENI sliver with this rspec:
<?xml version="1.0" encoding="UTF-8"?> <!-- This rspec will reserve the BBN ExoGENI resources used by the jbs17 slice. It requests two VMs, running Debian 5, named "bbn-eg-jbs17-a" and "bbn-eg-jbs17-b", each with a dataplane interface on VLAN 1750 with a jbs17 IP address. AM: https://bbn-hn.exogeni.gpolab.bbn.com:11443/orca/xmlrpc --> <rspec xmlns="http://www.geni.net/resources/rspec/3" xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xmlns:sharedvlan="http://www.protogeni.net/resources/rspec/ext/shared-vlan/1" xs:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd http://www.protogeni.net/resources/rspec/ext/shared-vlan/1 http://www.protogeni.net/resources/rspec/ext/shared-vlan/1/request.xsd" type="request"> <node client_id="bbn-eg-jbs17-a"> <sliver_type name="m1.small"> <disk_image name="http://geni-images.renci.org/images/standard/debian/deb6-neuca-v1.0.5.xml" version="104ea3d824906f0e13cebb89b14df232290553b1" /> </sliver_type> <services> <execute shell="sh" command="hostname bbn-eg-jbs17-a"/> </services> <interface client_id="bbn-eg-jbs17-a:1"> <ip address="10.42.17.58" netmask="255.255.255.0" /> </interface> </node> <node client_id="bbn-eg-jbs17-b"> <sliver_type name="m1.small"> <disk_image name="http://geni-images.renci.org/images/standard/debian/deb6-neuca-v1.0.5.xml" version="104ea3d824906f0e13cebb89b14df232290553b1" /> </sliver_type> <services> <execute shell="sh" command="hostname bbn-eg-jbs17-b"/> </services> <interface client_id="bbn-eg-jbs17-b:1"> <ip address="10.42.17.59" netmask="255.255.255.0" /> </interface> </node> <link client_id="mesoscale"> <interface_ref client_id="bbn-eg-jbs17-a:1" /> <interface_ref client_id="bbn-eg-jbs17-b:1" /> <sharedvlan:link_shared_vlan name="1750" /> </link> </rspec>
I created a FOAM sliver with this rspec:
<?xml version="1.0" encoding="UTF-8"?> <!-- This rspec will reserve the OpenFlow resources in the ExoGENI rack at BBN used by the jbs17 slice. AM: https://bbn-hn.exogeni.gpolab.bbn.com:3626/foam/gapi/1 --> <rspec xmlns="http://www.geni.net/resources/rspec/3" xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xmlns:openflow="http://www.geni.net/resources/rspec/ext/openflow/3" xs:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd http://www.geni.net/resources/rspec/ext/openflow/3 http://www.geni.net/resources/rspec/ext/openflow/3/of-resv.xsd" type="request"> <openflow:sliver description="JBS 17 OpenFlow resources at BBN ExoGENI."> <openflow:controller url="tcp:naxos.gpolab.bbn.com:33017" type="primary" /> <openflow:group name="bbn-exogeni"> <openflow:datapath component_id="urn:publicid:IDN+openflow:foam:bbn-hn.exogeni.gpolab.bbn.com+datapath+00:01:08:17:f4:b5:2a:00" component_manager_id="urn:publicid:IDN+openflow:foam:bbn-hn.exogeni.gpolab.bbn.com+authority+am" /> </openflow:group> <openflow:match> <openflow:use-group name="bbn-exogeni" /> <openflow:packet> <openflow:dl_vlan value="1750"/> <openflow:dl_type value="0x800,0x806"/> <openflow:nw_dst value="10.42.17.0/24"/> <openflow:nw_src value="10.42.17.0/24"/> </openflow:packet> </openflow:match> </openflow:sliver> </rspec>
On naxos.gpolab.bbn.com, I ran a NOX 'switch' controller listening on port 33017:
subnet=017
port=33$subnet
(cd /usr/bin && /usr/bin/nox_core --info=/home/jbs/nox/nox-${port}.info -i ptcp:$port switch lavi_switches lavi_swlinks jsonmessenger=tcpport=11$subnet,sslport=0)
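A quick way to confirm the controller is actually up and listening before wiring FOAM to it (assuming netstat is available on naxos):

netstat -lnt | grep 33017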
I was then able to ping from bbn-eg-jbs17-a to bbn-eg-jbs17-b's dataplane address (10.42.17.59):
[13:23:06] jbs@bbn-eg-jbs17-a:/home/jbs +$ ping -q -c 10 10.42.17.59
PING 10.42.17.59 (10.42.17.59) 56(84) bytes of data.

--- 10.42.17.59 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.382/2.170/17.490/5.107 ms
I fired up an iperf UDP sink on bbn-eg-jbs17-a:
[13:32:02] jbs@bbn-eg-jbs17-a:/home/jbs +$ nice -n 19 iperf -u -B 10.42.17.58 -s -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 10.42.17.58
Receiving 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
I fired up a 5 Mb/sec iperf UDP source on bbn-eg-jbs17-b, for sixty seconds:
[13:32:18] jbs@bbn-eg-jbs17-b:/home/jbs +$ nice -n 19 iperf -u -c 10.42.17.58 -t 60 -b 5M
------------------------------------------------------------
Client connecting to 10.42.17.58, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
[ 3] local 10.42.17.59 port 41466 connected with 10.42.17.58 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3] 0.0-60.0 sec 35.8 MBytes 5.00 Mbits/sec
[ 3] Sent 25512 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 35.8 MBytes 5.00 Mbits/sec 0.025 ms 5/25511 (0.02%)
[ 3] 0.0-60.0 sec 5 datagrams received out-of-order
So, that all seems to be working as expected.
I then fired up a 5 Mb/sec iperf UDP source on bbn-eg-jbs17-b, for 86400 seconds:
nice -n 19 iperf -u -c 10.42.17.58 -t 86400 -b 5M
I see the connection in the server window:
[ 4] local 10.42.17.58 port 5001 connected with 10.42.17.59 port 37491
[ 4] 0.0- 1.0 sec 614 KBytes 5.03 Mbits/sec 0.011 ms 2/ 428 (0.47%)
[ 4] 0.0- 1.0 sec 2 datagrams received out-of-order
[ 4] 1.0- 2.0 sec 610 KBytes 5.00 Mbits/sec 0.009 ms 0/ 425 (0%)
[ 4] 2.0- 3.0 sec 610 KBytes 5.00 Mbits/sec 0.022 ms 0/ 425 (0%)
etc., so I've left that running.
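One note: to keep a long-running source like this alive across logout, it can be detached, e.g. with nohup (a sketch; the log filename is illustrative, and a screen or tmux session works equally well):

nohup nice -n 19 iperf -u -c 10.42.17.58 -t 86400 -b 5M > iperf-jbs17.log 2>&1 &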
Step 4: view running VMs
Overview of Step 4
Using:
- On bbn-hn, use SM state, logs, or administrator interfaces to determine:
- What experiments are running right now
- How many VMs are allocated for those experiments
- Which worker node is each VM running on
- On bbn worker nodes, use system state, logs, or administrative interfaces to determine what VMs are running right now, and look at any available configuration or logs of each.
Verify:
- A site administrator can determine what experiments are running on the local SM
- A site administrator can determine the mapping of VMs to active experiments
- A site administrator can view some state of running VMs on the VM server
Results of Step 4 from 2013-03-12
On bbn-hn, I used Pequod to show all active reservations for the bbn-sm actor:
pequod>show reservations for all actor bbn-sm state active
b2d51078-b6a1-4480-b7ef-2c27c7792aa3 bbn-sm 1 bbnvmsite.vm [ active, nascent]
  Notices: Reservation b2d51078-b6a1-4480-b7ef-2c27c7792aa3 (Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+tuptymon) is in state [Active,None]
  Start: Fri Mar 08 17:20:36 EST 2013 End: Fri Mar 22 17:20:37 EDT 2013
4979104a-60fc-4cf9-946b-06b99cf4f339 bbn-sm 1 bbnvmsite.vm [ active, activeticketed]
  Notices: Reservation 4979104a-60fc-4cf9-946b-06b99cf4f339 (Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs15) is in state [Active,ExtendingTicket]
  Start: Fri Mar 08 17:43:08 EST 2013 End: Fri Mar 22 17:43:09 EDT 2013
3c0440f6-833c-45da-9544-ef3b9b7a0ecf bbn-sm 1 bbnvmsite.vm [ active, nascent]
  Notices: Reservation 3c0440f6-833c-45da-9544-ef3b9b7a0ecf (Slice urn:publicid:IDN+panther:SampleClass+slice+hellotest) is in state [Active,None]
  Start: Sun Mar 10 21:37:57 EDT 2013 End: Sun Mar 24 20:37:58 EDT 2013
08dd9475-70c0-4a19-af57-97ea34529603 bbn-sm 1 bbnvmsite.vm [ active, nascent]
  Notices: Reservation 08dd9475-70c0-4a19-af57-97ea34529603 (Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-2-exp1) is in state [Active,None]
  Start: Mon Mar 11 08:39:58 EDT 2013 End: Mon Mar 25 07:39:59 EDT 2013
9d1b0514-f79e-4efb-a34e-b1fe9309a159 bbn-sm 1 bbnvmsite.vlan [ active, nascent]
  Notices: Reservation 9d1b0514-f79e-4efb-a34e-b1fe9309a159 (Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-2-exp1) is in state [Active,None]
  Start: Mon Mar 11 08:39:58 EDT 2013 End: Mon Mar 25 07:39:59 EDT 2013
dae2bf09-50fc-41b0-bfd1-8fb15c6dc955 bbn-sm 1 bbnvmsite.vm [ active, nascent]
  Notices: Reservation dae2bf09-50fc-41b0-bfd1-8fb15c6dc955 (Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+EG-EXP-2-exp1) is in state [Active,None]
  Start: Mon Mar 11 08:39:58 EDT 2013 End: Mon Mar 25 07:39:59 EDT 2013
d46fc601-9029-49cd-9df2-30fa481d85d3 bbn-sm 1 bbnvmsite.vm [ active, activeticketed]
  Notices: Reservation d46fc601-9029-49cd-9df2-30fa481d85d3 (Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs16) is in state [Active,ExtendingTicket]
  Start: Mon Mar 11 12:55:20 EDT 2013 End: Mon Mar 25 11:55:21 EDT 2013
0ab22272-49c2-4905-99ec-daf8fda62cf9 bbn-sm 1 bbnvmsite.vm [ active, nascent]
  Notices: Reservation 0ab22272-49c2-4905-99ec-daf8fda62cf9 (Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs17) is in state [Active,None]
  Start: Tue Mar 12 12:51:20 EDT 2013 End: Tue Mar 26 11:51:21 EDT 2013
564ef48a-46e2-4bde-abb4-51ab6954f4ee bbn-sm 1 bbnvmsite.vm [ active, nascent]
  Notices: Reservation 564ef48a-46e2-4bde-abb4-51ab6954f4ee (Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs17) is in state [Active,None]
  Start: Tue Mar 12 12:51:20 EDT 2013 End: Tue Mar 26 11:51:21 EDT 2013
Total: 9 reservations
That shows what experiments are running (tuptymon, jbs15, hellotest, EG-EXP-2-exp1, jbs16, and jbs17), and how many reservations each holds (1, 1, 1, 3, 1, and 2, respectively); note that one of EG-EXP-2-exp1's three reservations is a dataplane VLAN (bbnvmsite.vlan), not a VM. A tally can also be scripted, as shown below.
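A sketch of that tally, assuming the Pequod output above was saved to pequod-active.txt (an illustrative filename):

grep -o 'slice+[^)]*' pequod-active.txt | sort | uniq -c

This counts reservations per slice, so VLAN reservations are included along with VMs (which is why EG-EXP-2-exp1 shows 3).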
On bbn-hn, I used Pequod to get details about one of the jbs17 VMs:
pequod>show reservationProperties for 0ab22272-49c2-4905-99ec-daf8fda62cf9 actor bbn-sm
0ab22272-49c2-4905-99ec-daf8fda62cf9
CONFIG:
config.ssh.key = ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvxWF39ISW4XwXbQ480ZumhPY/t+3hnhS91GxvBU2szCNZa7xvRO56sr7gbZLCD9TvQ+gD3X4W1Iy9pvVNX31EUmT+IKwIu8rwxbGF2Qt1VICw9Y0fbBnhkYSBnNwfwlsQUiGojv7IAyD3fi6gmBNdeKBtmiaJQSJf4iARPwfQE6QaXG8Q3+h9jH5GHW9LnWMpZ8VuStLaNLR0DQ8l/xU+i/1NX0vZHqaxxzbR5OSfQcDOlz+NxVjXa1uz7h3W8E0zVL6ZLn650OhoFAfPWEf+pdjyixHx2bUMMCzjBCTMhxQ2u792f/WD0Nq1bwwbZ93tifta8KiMc7UPJQUm4dw8Q== jbs@gpolab.bbn.com
unit.eth1.ip = 10.42.17.58/24
unit.ec2.instance.type = m1.small
unit.eth1.hosteth = data
unit.hostname.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-a
unit.eth1.mac = fe:16:3e:00:38:ea
config.image.url = http://geni-images.renci.org/images/standard/debian/deb6-neuca-v1.0.5.xml
config.image.guid = 104ea3d824906f0e13cebb89b14df232290553b1
config.duration = 1206000000
unit.eth1.vlan.tag = 1750
unit.number.interface = 1
unit.instance.config = #!/bin/bash # Automatically generated boot script execString="/bin/sh -c \"hostname bbn-eg-jbs17-a\"" eval $execString
xmlrpc.user.dn = [urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs, jbs@pgeni.gpolab.bbn.com]
unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-a
LOCAL:
unit.eth1.vlan.tag = 1750
unit.number.interface = 1
unit.eth1.ip = 10.42.17.58/24
xmlrpc.user.dn = [urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs, jbs@pgeni.gpolab.bbn.com]
unit.eth1.hosteth = data
unit.eth1.mac = fe:16:3e:00:38:ea
unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-a
unit.instance.config = #!/bin/bash # Automatically generated boot script execString="/bin/sh -c \"hostname bbn-eg-jbs17-a\"" eval $execString
REQUEST:
unit.eth1.vlan.tag = 1750
unit.number.interface = 1
unit.eth1.ip = 10.42.17.58/24
xmlrpc.user.dn = [urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs, jbs@pgeni.gpolab.bbn.com]
unit.eth1.hosteth = data
unit.eth1.mac = fe:16:3e:00:38:ea
unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-a
unit.instance.config = #!/bin/bash # Automatically generated boot script execString="/bin/sh -c \"hostname bbn-eg-jbs17-a\"" eval $execString
RESOURCE:
unit.eth1.vlan.tag = 1750
resource.domain.value = bbnvmsite/vm
unit.instance.config = #!/bin/bash # Automatically generated boot script execString="/bin/sh -c \"hostname bbn-eg-jbs17-a\"" eval $execString
resource.ndl.adomain.label = Abstract Domain NDL
type = bbnvmsite.vm
pool.name = OpenStack Virtual Machine (BBN)
unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-a
label = OpenStack Virtual Machine (BBN)
xmlrpc.user.dn = [urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs, jbs@pgeni.gpolab.bbn.com]
unit.eth1.hosteth = data
resource.domain.type = 3
attribute.1.key = resource.domain
unit.eth1.mac = fe:16:3e:00:38:ea
resource.ndl.adomain.value =
eAHFWVtz6jYQfj+/IkOeg2wSpo0nYYZAmmEm5GSAnumrsBVQYkvUEhD66yvJtpBvgGLS8gSYXe3l2/12xV0cvHmT4R8/LsTrMwoJ88Q3960l5ysPgO12295et2m8AO7t7S1wOqDTuRK/uGI7wuHnFWGXLUP2ve1o2QUi+IrGPmzHiPhYKaHbEPg0Wq05aov3BdnOUVnElygmiFcIu0eFQ7hDcUlSmKElDXc7jvDV+Q0UrfxkQc3PXfDX+HnqL1EEC37daIn6mIQh8jmmhJUMfG9fH5UPaAQxKYmKRDEtm3fOAY4LVCKrLO5qqTqLOV3RkC52yZkXvQuBgjuhzxsi5sd4JX25kJ/hnK75UXXzOdlEDAtYCJnLhzkBj5/0SSAIzBB5wgs4x/wxTT5wQYaDVk+B7+697XpvCPJ1jNShMWJ0Hfvo6Ll7TLjOUwuk6qTdfLey1JULyYASkmT0lWLCtWppaYR4jH1laAA5lCdpQ/NZMjEVUrJo9VxHv+7AXlkjw32aR9+Ioyhn8BySYIsDvjyTzVpfI7Nz8X5BfEvjj1kMCVvRmD+GKEJG3M+Q0hHhKH6DPtKxOYPS1O6iue/trhdi8jGjdngWdSQ0ZkUk3oLRwxg8/e50O9WVZNRSJfoJDdBoeN/qOyd6nQdwsc1M0sKsVqYPc/VzGYglZVwH/wVGqCfLRsK/6lnWEbqeqLskYT/frKNY3Y20WSLzzJsi1A8Z7aVkWdcqc73tTjbdvWhi7Blw9CzJ7QCYzhQL7QsYKsoBG7NTdD0/aXso+N+BW6KAPDA1uc+WWPTVw9jv6Oey4y4he8B8ArklP5yTazICVPSbEWNN+hXgQjhHYe/Gveo6KQSTbyRtS0iavC278tm4vISY6RZzXwZ9DAUNfpoUzrJHA7gSjB9ivrMrXMuwNCX5oisKRd8bzgdjNIIbiEUWQ/Rr/CzTO0V8H01H4vRPgvkUxRsU28VRje1y9P05GfT3SowyuJbqs24+s56VzCXg19jQe+Mx/E9SWBazkehuYjRyO5ITEg0Kw/8tsEUrTOcZRUyQJfBIYG4Xf100isn1p6zpVuIurVk9t2oyvc6FFyWDURN7TAy+0ldDvethpqD4GtNg7aews0jknNIQQdLq8XiNkgG3oPF8I+MQbfDJw1yePezGmhIV1U0KVRNtwrM6xvtmLtd1L4Rkcd9CImA2Q8hlCVGbqMgKRTg1cqI43uYGHg1Uc+pT/UW2LpGiJlgteZqSZKoa7M/M2VTyNp//U6aHkoq6rBfXL9nGK+eRL2sc6AVPK05blB6rm8TY7AeH1/VvIcgClzRqEMZFSoqQg9C1yMj+0msQQsbwG0bBILkKS48YEcYh+WJHqkck89h6rs60XYMKgW2GyareZhfbmhJUrmlcq/54zOEaTQdWga8lOpde08QDg2eNbXLtDUYkwBscrGGolYkhzUN/r/EGhuK6Q8VClbJuqjf6pyeFpq5JNU9fneZyydlZnA/YKcxcqggdrFPZ4ARfJulNh5yN7RyqU26WY10PMU86GWRHYyauoL2sAqQ/E0g+FMosJrtkRBfLZ5Wyqp5dnZRS7jK79A3qSeFrXpnFoBlt6Hs3wOMTjd5CXE+viNmCeC6e14qN3TMjCHlJArm6kbI7zmZ9lzASK+gs/R/C7iCj57yIC05dNanSvu8jxsbCHBp8WbFcnfs+p3GivQIUyVfi37fej38BUANc2A== unit.number.interface = 1 attributescount = 2 resource.ndl.adomain.type = 4 resource.domain.label = NdlResourcePoolFactory: Name of the domain of this resource attribute.0.key = resource.ndl.adomain unit.eth1.ip = 10.42.17.58/24 0 UNIT: unit.id = 36361ffb-c59f-4cfb-98a1-0758827e2ded shirako.save.unit.manage.port = 22 unit.manage.port = 22 shirako.save.unit.quantum.ifaces = 9e175351-cd3f-457b-8fdb-d24191ddd7a9:${iface.net.uuid} unit.manage.ip = 192.1.242.19 unit.state = 2 unit.domain = bbnvmsite/vm unit.quantum.ifaces = 9e175351-cd3f-457b-8fdb-d24191ddd7a9:${iface.net.uuid} shirako.save.unit.manage.ip = 192.1.242.19 unit.actorid = 6f72d5ac-5190-46c8-bc10-cc4af5dcab6e shirako.save.unit.hostname.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-a shirako.save.unit.ec2.instance = a7f246a4-6801-429e-b7f5-18c245540b99 unit.sliceid = 9c8b737c-e771-4b3f-934b-6d2b63368143 shirako.save.unit.ec2.host = bbn-w6 unit.resourceType = bbnvmsite.vm unit.ec2.host = bbn-w6 unit.sequence = 2 unit.rid = 0ab22272-49c2-4905-99ec-daf8fda62cf9 unit.ec2.instance = a7f246a4-6801-429e-b7f5-18c245540b99 unit.hostname.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-a unit.ndl.adomain = 
eAHFWVtz6jYQfj+/IkOeg2wSpo0nYYZAmmEm5GSAnumrsBVQYkvUEhD66yvJtpBvgGLS8gSYXe3l2/12xV0cvHmT4R8/LsTrMwoJ88Q3960l5ysPgO12295et2m8AO7t7S1wOqDTuRK/uGI7wuHnFWGXLUP2ve1o2QUi+IrGPmzHiPhYKaHbEPg0Wq05aov3BdnOUVnElygmiFcIu0eFQ7hDcUlSmKElDXc7jvDV+Q0UrfxkQc3PXfDX+HnqL1EEC37daIn6mIQh8jmmhJUMfG9fH5UPaAQxKYmKRDEtm3fOAY4LVCKrLO5qqTqLOV3RkC52yZkXvQuBgjuhzxsi5sd4JX25kJ/hnK75UXXzOdlEDAtYCJnLhzkBj5/0SSAIzBB5wgs4x/wxTT5wQYaDVk+B7+697XpvCPJ1jNShMWJ0Hfvo6Ll7TLjOUwuk6qTdfLey1JULyYASkmT0lWLCtWppaYR4jH1laAA5lCdpQ/NZMjEVUrJo9VxHv+7AXlkjw32aR9+Ioyhn8BySYIsDvjyTzVpfI7Nz8X5BfEvjj1kMCVvRmD+GKEJG3M+Q0hHhKH6DPtKxOYPS1O6iue/trhdi8jGjdngWdSQ0ZkUk3oLRwxg8/e50O9WVZNRSJfoJDdBoeN/qOyd6nQdwsc1M0sKsVqYPc/VzGYglZVwH/wVGqCfLRsK/6lnWEbqeqLskYT/frKNY3Y20WSLzzJsi1A8Z7aVkWdcqc73tTjbdvWhi7Blw9CzJ7QCYzhQL7QsYKsoBG7NTdD0/aXso+N+BW6KAPDA1uc+WWPTVw9jv6Oey4y4he8B8ArklP5yTazICVPSbEWNN+hXgQjhHYe/Gveo6KQSTbyRtS0iavC278tm4vISY6RZzXwZ9DAUNfpoUzrJHA7gSjB9ivrMrXMuwNCX5oisKRd8bzgdjNIIbiEUWQ/Rr/CzTO0V8H01H4vRPgvkUxRsU28VRje1y9P05GfT3SowyuJbqs24+s56VzCXg19jQe+Mx/E9SWBazkehuYjRyO5ITEg0Kw/8tsEUrTOcZRUyQJfBIYG4Xf100isn1p6zpVuIurVk9t2oyvc6FFyWDURN7TAy+0ldDvethpqD4GtNg7aews0jknNIQQdLq8XiNkgG3oPF8I+MQbfDJw1yePezGmhIV1U0KVRNtwrM6xvtmLtd1L4Rkcd9CImA2Q8hlCVGbqMgKRTg1cqI43uYGHg1Uc+pT/UW2LpGiJlgteZqSZKoa7M/M2VTyNp//U6aHkoq6rBfXL9nGK+eRL2sc6AVPK05blB6rm8TY7AeH1/VvIcgClzRqEMZFSoqQg9C1yMj+0msQQsbwG0bBILkKS48YEcYh+WJHqkck89h6rs60XYMKgW2GyareZhfbmhJUrmlcq/54zOEaTQdWga8lOpde08QDg2eNbXLtDUYkwBscrGGolYkhzUN/r/EGhuK6Q8VClbJuqjf6pyeFpq5JNU9fneZyydlZnA/YKcxcqggdrFPZ4ARfJulNh5yN7RyqU26WY10PMU86GWRHYyauoL2sAqQ/E0g+FMosJrtkRBfLZ5Wyqp5dnZRS7jK79A3qSeFrXpnFoBlt6Hs3wOMTjd5CXE+viNmCeC6e14qN3TMjCHlJArm6kbI7zmZ9lzASK+gs/R/C7iCj57yIC05dNanSvu8jxsbCHBp8WbFcnfs+p3GivQIUyVfi37fej38BUANc2A== Total: 0 reservations
That ("unit.ec2.host = bbn-w6" in particular) shows what worker node the VM is running on, and also shows ("unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-a") that this is bbn-eg-jbs17-a.
I used Pequod to get details about the other jbs17 VM:
pequod>show reservationProperties for 564ef48a-46e2-4bde-abb4-51ab6954f4ee actor bbn-sm
564ef48a-46e2-4bde-abb4-51ab6954f4ee
CONFIG:
config.ssh.key = ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvxWF39ISW4XwXbQ480ZumhPY/t+3hnhS91GxvBU2szCNZa7xvRO56sr7gbZLCD9TvQ+gD3X4W1Iy9pvVNX31EUmT+IKwIu8rwxbGF2Qt1VICw9Y0fbBnhkYSBnNwfwlsQUiGojv7IAyD3fi6gmBNdeKBtmiaJQSJf4iARPwfQE6QaXG8Q3+h9jH5GHW9LnWMpZ8VuStLaNLR0DQ8l/xU+i/1NX0vZHqaxxzbR5OSfQcDOlz+NxVjXa1uz7h3W8E0zVL6ZLn650OhoFAfPWEf+pdjyixHx2bUMMCzjBCTMhxQ2u792f/WD0Nq1bwwbZ93tifta8KiMc7UPJQUm4dw8Q== jbs@gpolab.bbn.com
unit.eth1.ip = 10.42.17.59/24
unit.ec2.instance.type = m1.small
unit.eth1.hosteth = data
unit.hostname.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-b
unit.eth1.mac = fe:16:3e:00:5b:c1
config.image.url = http://geni-images.renci.org/images/standard/debian/deb6-neuca-v1.0.5.xml
config.image.guid = 104ea3d824906f0e13cebb89b14df232290553b1
config.duration = 1206000000
unit.eth1.vlan.tag = 1750
unit.number.interface = 1
unit.instance.config = #!/bin/bash # Automatically generated boot script execString="/bin/sh -c \"hostname bbn-eg-jbs17-b\"" eval $execString
xmlrpc.user.dn = [urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs, jbs@pgeni.gpolab.bbn.com]
unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-b
LOCAL:
unit.eth1.vlan.tag = 1750
unit.number.interface = 1
unit.eth1.ip = 10.42.17.59/24
xmlrpc.user.dn = [urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs, jbs@pgeni.gpolab.bbn.com]
unit.eth1.hosteth = data
unit.eth1.mac = fe:16:3e:00:5b:c1
unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-b
unit.instance.config = #!/bin/bash # Automatically generated boot script execString="/bin/sh -c \"hostname bbn-eg-jbs17-b\"" eval $execString
REQUEST:
unit.eth1.vlan.tag = 1750
unit.number.interface = 1
unit.eth1.ip = 10.42.17.59/24
xmlrpc.user.dn = [urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs, jbs@pgeni.gpolab.bbn.com]
unit.eth1.hosteth = data
unit.eth1.mac = fe:16:3e:00:5b:c1
unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-b
unit.instance.config = #!/bin/bash # Automatically generated boot script execString="/bin/sh -c \"hostname bbn-eg-jbs17-b\"" eval $execString
RESOURCE:
unit.eth1.vlan.tag = 1750
resource.domain.value = bbnvmsite/vm
unit.instance.config = #!/bin/bash # Automatically generated boot script execString="/bin/sh -c \"hostname bbn-eg-jbs17-b\"" eval $execString
resource.ndl.adomain.label = Abstract Domain NDL
type = bbnvmsite.vm
pool.name = OpenStack Virtual Machine (BBN)
unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-b
label = OpenStack Virtual Machine (BBN)
xmlrpc.user.dn = [urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs, jbs@pgeni.gpolab.bbn.com]
unit.eth1.hosteth = data
resource.domain.type = 3
attribute.1.key = resource.domain
unit.eth1.mac = fe:16:3e:00:5b:c1
resource.ndl.adomain.value =
eAHFWVtz6jYQfj+/IkOeg2wSpo0nYYZAmmEm5GSAnumrsBVQYkvUEhD66yvJtpBvgGLS8gSYXe3l2/12xV0cvHmT4R8/LsTrMwoJ88Q3960l5ysPgO12295et2m8AO7t7S1wOqDTuRK/uGI7wuHnFWGXLUP2ve1o2QUi+IrGPmzHiPhYKaHbEPg0Wq05aov3BdnOUVnElygmiFcIu0eFQ7hDcUlSmKElDXc7jvDV+Q0UrfxkQc3PXfDX+HnqL1EEC37daIn6mIQh8jmmhJUMfG9fH5UPaAQxKYmKRDEtm3fOAY4LVCKrLO5qqTqLOV3RkC52yZkXvQuBgjuhzxsi5sd4JX25kJ/hnK75UXXzOdlEDAtYCJnLhzkBj5/0SSAIzBB5wgs4x/wxTT5wQYaDVk+B7+697XpvCPJ1jNShMWJ0Hfvo6Ll7TLjOUwuk6qTdfLey1JULyYASkmT0lWLCtWppaYR4jH1laAA5lCdpQ/NZMjEVUrJo9VxHv+7AXlkjw32aR9+Ioyhn8BySYIsDvjyTzVpfI7Nz8X5BfEvjj1kMCVvRmD+GKEJG3M+Q0hHhKH6DPtKxOYPS1O6iue/trhdi8jGjdngWdSQ0ZkUk3oLRwxg8/e50O9WVZNRSJfoJDdBoeN/qOyd6nQdwsc1M0sKsVqYPc/VzGYglZVwH/wVGqCfLRsK/6lnWEbqeqLskYT/frKNY3Y20WSLzzJsi1A8Z7aVkWdcqc73tTjbdvWhi7Blw9CzJ7QCYzhQL7QsYKsoBG7NTdD0/aXso+N+BW6KAPDA1uc+WWPTVw9jv6Oey4y4he8B8ArklP5yTazICVPSbEWNN+hXgQjhHYe/Gveo6KQSTbyRtS0iavC278tm4vISY6RZzXwZ9DAUNfpoUzrJHA7gSjB9ivrMrXMuwNCX5oisKRd8bzgdjNIIbiEUWQ/Rr/CzTO0V8H01H4vRPgvkUxRsU28VRje1y9P05GfT3SowyuJbqs24+s56VzCXg19jQe+Mx/E9SWBazkehuYjRyO5ITEg0Kw/8tsEUrTOcZRUyQJfBIYG4Xf100isn1p6zpVuIurVk9t2oyvc6FFyWDURN7TAy+0ldDvethpqD4GtNg7aews0jknNIQQdLq8XiNkgG3oPF8I+MQbfDJw1yePezGmhIV1U0KVRNtwrM6xvtmLtd1L4Rkcd9CImA2Q8hlCVGbqMgKRTg1cqI43uYGHg1Uc+pT/UW2LpGiJlgteZqSZKoa7M/M2VTyNp//U6aHkoq6rBfXL9nGK+eRL2sc6AVPK05blB6rm8TY7AeH1/VvIcgClzRqEMZFSoqQg9C1yMj+0msQQsbwG0bBILkKS48YEcYh+WJHqkck89h6rs60XYMKgW2GyareZhfbmhJUrmlcq/54zOEaTQdWga8lOpde08QDg2eNbXLtDUYkwBscrGGolYkhzUN/r/EGhuK6Q8VClbJuqjf6pyeFpq5JNU9fneZyydlZnA/YKcxcqggdrFPZ4ARfJulNh5yN7RyqU26WY10PMU86GWRHYyauoL2sAqQ/E0g+FMosJrtkRBfLZ5Wyqp5dnZRS7jK79A3qSeFrXpnFoBlt6Hs3wOMTjd5CXE+viNmCeC6e14qN3TMjCHlJArm6kbI7zmZ9lzASK+gs/R/C7iCj57yIC05dNanSvu8jxsbCHBp8WbFcnfs+p3GivQIUyVfi37fej38BUANc2A== unit.number.interface = 1 attributescount = 2 resource.ndl.adomain.type = 4 resource.domain.label = NdlResourcePoolFactory: Name of the domain of this resource attribute.0.key = resource.ndl.adomain unit.eth1.ip = 10.42.17.59/24 0 UNIT: unit.id = 052aaba4-9bd5-4a23-aafc-94964f06f8aa shirako.save.unit.manage.port = 22 unit.manage.port = 22 shirako.save.unit.quantum.ifaces = bbd82a61-473a-475d-9323-cadfd60c1105:${iface.net.uuid} unit.manage.ip = 192.1.242.13 unit.state = 2 unit.domain = bbnvmsite/vm unit.quantum.ifaces = bbd82a61-473a-475d-9323-cadfd60c1105:${iface.net.uuid} shirako.save.unit.manage.ip = 192.1.242.13 unit.actorid = 6f72d5ac-5190-46c8-bc10-cc4af5dcab6e shirako.save.unit.hostname.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-b shirako.save.unit.ec2.instance = 588a81ac-e6ac-41d5-91bd-1d0aa2e61015 unit.sliceid = 9c8b737c-e771-4b3f-934b-6d2b63368143 shirako.save.unit.ec2.host = bbn-w8 unit.resourceType = bbnvmsite.vm unit.ec2.host = bbn-w8 unit.sequence = 2 unit.rid = 564ef48a-46e2-4bde-abb4-51ab6954f4ee unit.ec2.instance = 588a81ac-e6ac-41d5-91bd-1d0aa2e61015 unit.hostname.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-b unit.ndl.adomain = 
eAHFWVtz6jYQfj+/IkOeg2wSpo0nYYZAmmEm5GSAnumrsBVQYkvUEhD66yvJtpBvgGLS8gSYXe3l2/12xV0cvHmT4R8/LsTrMwoJ88Q3960l5ysPgO12295et2m8AO7t7S1wOqDTuRK/uGI7wuHnFWGXLUP2ve1o2QUi+IrGPmzHiPhYKaHbEPg0Wq05aov3BdnOUVnElygmiFcIu0eFQ7hDcUlSmKElDXc7jvDV+Q0UrfxkQc3PXfDX+HnqL1EEC37daIn6mIQh8jmmhJUMfG9fH5UPaAQxKYmKRDEtm3fOAY4LVCKrLO5qqTqLOV3RkC52yZkXvQuBgjuhzxsi5sd4JX25kJ/hnK75UXXzOdlEDAtYCJnLhzkBj5/0SSAIzBB5wgs4x/wxTT5wQYaDVk+B7+697XpvCPJ1jNShMWJ0Hfvo6Ll7TLjOUwuk6qTdfLey1JULyYASkmT0lWLCtWppaYR4jH1laAA5lCdpQ/NZMjEVUrJo9VxHv+7AXlkjw32aR9+Ioyhn8BySYIsDvjyTzVpfI7Nz8X5BfEvjj1kMCVvRmD+GKEJG3M+Q0hHhKH6DPtKxOYPS1O6iue/trhdi8jGjdngWdSQ0ZkUk3oLRwxg8/e50O9WVZNRSJfoJDdBoeN/qOyd6nQdwsc1M0sKsVqYPc/VzGYglZVwH/wVGqCfLRsK/6lnWEbqeqLskYT/frKNY3Y20WSLzzJsi1A8Z7aVkWdcqc73tTjbdvWhi7Blw9CzJ7QCYzhQL7QsYKsoBG7NTdD0/aXso+N+BW6KAPDA1uc+WWPTVw9jv6Oey4y4he8B8ArklP5yTazICVPSbEWNN+hXgQjhHYe/Gveo6KQSTbyRtS0iavC278tm4vISY6RZzXwZ9DAUNfpoUzrJHA7gSjB9ivrMrXMuwNCX5oisKRd8bzgdjNIIbiEUWQ/Rr/CzTO0V8H01H4vRPgvkUxRsU28VRje1y9P05GfT3SowyuJbqs24+s56VzCXg19jQe+Mx/E9SWBazkehuYjRyO5ITEg0Kw/8tsEUrTOcZRUyQJfBIYG4Xf100isn1p6zpVuIurVk9t2oyvc6FFyWDURN7TAy+0ldDvethpqD4GtNg7aews0jknNIQQdLq8XiNkgG3oPF8I+MQbfDJw1yePezGmhIV1U0KVRNtwrM6xvtmLtd1L4Rkcd9CImA2Q8hlCVGbqMgKRTg1cqI43uYGHg1Uc+pT/UW2LpGiJlgteZqSZKoa7M/M2VTyNp//U6aHkoq6rBfXL9nGK+eRL2sc6AVPK05blB6rm8TY7AeH1/VvIcgClzRqEMZFSoqQg9C1yMj+0msQQsbwG0bBILkKS48YEcYh+WJHqkck89h6rs60XYMKgW2GyareZhfbmhJUrmlcq/54zOEaTQdWga8lOpde08QDg2eNbXLtDUYkwBscrGGolYkhzUN/r/EGhuK6Q8VClbJuqjf6pyeFpq5JNU9fneZyydlZnA/YKcxcqggdrFPZ4ARfJulNh5yN7RyqU26WY10PMU86GWRHYyauoL2sAqQ/E0g+FMosJrtkRBfLZ5Wyqp5dnZRS7jK79A3qSeFrXpnFoBlt6Hs3wOMTjd5CXE+viNmCeC6e14qN3TMjCHlJArm6kbI7zmZ9lzASK+gs/R/C7iCj57yIC05dNanSvu8jxsbCHBp8WbFcnfs+p3GivQIUyVfi37fej38BUANc2A== Total: 0 reservations
That ("unit.ec2.host = bbn-w8" in particular) shows what worker node the VM is running on, and also shows ("unit.url = http://geni-orca.renci.org/owl/af818e14-9779-4cc7-9592-5c973f903b31#bbn-eg-jbs17-b") that this is bbn-eg-jbs17-b.
On bbn-hn, I got a list of all the running VMs:
[geni-orca@bbn-hn ~]$ nova list
+--------------------------------------+--------------+--------+-----------------------------------+
| ID                                   | Name         | Status | Networks                          |
+--------------------------------------+--------------+--------+-----------------------------------+
| 2260268c-5577-40f3-a415-5c687ee8f885 | Server 11127 | ACTIVE | private=10.103.0.4, 192.1.242.12  |
| 4995b710-1d34-437e-b87f-7a7375542249 | Server 11128 | ACTIVE | private=10.103.0.13, 192.1.242.20 |
| 588a81ac-e6ac-41d5-91bd-1d0aa2e61015 | Server 11124 | ACTIVE | private=10.103.0.14, 192.1.242.13 |
| 680a72fd-fe23-4c82-8aa7-12d7bbf6daea | Server 11069 | ACTIVE | private=10.103.0.15, 192.1.242.18 |
| 688c2685-00da-4b17-ae10-e4f605c3aed3 | Server 10952 | ACTIVE | private=10.103.0.6, 192.1.242.9   |
| 717ea2e5-f39d-44b7-9ccd-98b3a20eefb5 | Server 10922 | ACTIVE | private=10.103.0.2, 192.1.242.5   |
| 82eb46eb-777c-42a1-9c97-2cc01dc821d3 | Server 10944 | ACTIVE | private=10.103.0.11, 192.1.242.14 |
| 83c16791-cde9-418f-87c5-8546b7ae488e | Server 11120 | ACTIVE | private=10.103.0.12, 192.1.242.15 |
| 86db64d6-421e-4c21-825c-c3eab728ce87 | Server 10951 | ACTIVE | private=10.103.0.5, 192.1.242.8   |
| 8ccf0580-84a2-4cb4-bde7-8511c851de28 | Server 11061 | ACTIVE | private=10.103.0.9, 192.1.242.7   |
| 9be31148-997a-4b6e-aa4b-cf9c54756305 | Server 11130 | ACTIVE | private=10.103.0.17, 192.1.242.22 |
| 9cb6bbe6-2a2d-4272-90da-97b664372928 | Server 11131 | ACTIVE | private=10.103.0.19, 192.1.242.16 |
| 9f24268b-dc8f-4bfb-8736-f857b63e91d3 | Server 10923 | ACTIVE | private=10.103.0.3, 192.1.242.6   |
| a3a1b566-548f-436e-8815-a357e797b351 | Server 11119 | ACTIVE | private=10.103.0.10, 192.1.242.17 |
| a7f246a4-6801-429e-b7f5-18c245540b99 | Server 11123 | ACTIVE | private=10.103.0.16, 192.1.242.19 |
| bf3f2e60-d4d4-4d78-9de5-6092c55f9155 | Server 11129 | ACTIVE | private=10.103.0.18, 192.1.242.21 |
| c01cc39f-b27f-4446-bb1a-beefa0c8bacc | Server 11022 | ACTIVE | private=10.103.0.8, 192.1.242.11  |
| eaaa52ed-5cf8-455b-9924-6db35b5c01c2 | Server 11081 | ACTIVE | private=10.103.0.7, 192.1.242.10  |
| f638f094-65d0-424c-85ed-8e2f687c996b | Server 11133 | ACTIVE | private=10.103.0.20, 192.1.242.23 |
+--------------------------------------+--------------+--------+-----------------------------------+
From the earlier Pequod output, "unit.ec2.instance = a7f246a4-6801-429e-b7f5-18c245540b99" shows the instance ID of bbn-eg-jbs17-a; on bbn-hn, I got more info about it:
[geni-orca@bbn-hn ~]$ nova show a7f246a4-6801-429e-b7f5-18c245540b99
+-------------------------------------+-------------------------------------------------------------------------+
| Property                            | Value                                                                   |
+-------------------------------------+-------------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                                  |
| OS-EXT-SRV-ATTR:host                | bbn-w6                                                                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                    |
| OS-EXT-SRV-ATTR:instance_name       | instance-00002b73                                                       |
| OS-EXT-STS:power_state              | 1                                                                       |
| OS-EXT-STS:task_state               | None                                                                    |
| OS-EXT-STS:vm_state                 | active                                                                  |
| accessIPv4                          |                                                                         |
| accessIPv6                          |                                                                         |
| config_drive                        |                                                                         |
| created                             | 2013-03-12T16:51:31Z                                                    |
| flavor                              | m1.tiny                                                                 |
| hostId                              | d9f82599acff02b0573944ef2d10c4b24d25c43d507f93f8cac215b6                |
| id                                  | a7f246a4-6801-429e-b7f5-18c245540b99                                    |
| image                               | imageproxy.bukkit/3eadea96919db13bac398588beac6c4e9667772e.uncompressed |
| key_name                            | geni-orca                                                               |
| metadata                            | {}                                                                      |
| name                                | Server 11123                                                            |
| private network                     | 10.103.0.16, 192.1.242.19                                               |
| progress                            | 0                                                                       |
| status                              | ACTIVE                                                                  |
| tenant_id                           | d498c1f1028c47eb835eb6786e5acbab                                        |
| updated                             | 2013-03-12T16:52:43Z                                                    |
| user_id                             | 9bf23f7eed714f279f99cbf56e789087                                        |
+-------------------------------------+-------------------------------------------------------------------------+
That ("OS-EXT-SRV-ATTR:host in particular") confirms that this instance is running on bbn-w6. But I don't see an obvious way to get a list of all the VMs on bbn-w6.
2013-03-14 update: Jonathan showed me how to use virsh as root to get a list of the running VMs:
[root@bbn-w6 ~]# virsh list
 Id Name                 State
----------------------------------------------------
679 instance-00002aaa    running
681 instance-00002ac7    running
696 instance-00002b73    running
698 instance-00002b77    running
700 instance-00002b86    running
701 instance-00002b87    running
702 instance-00002b96    running
704 instance-00002ba0    running
705 instance-00002bc3    running
So, that's how to get a list.
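To tie one of those virsh instance names back to a nova server (and from there, via Pequod's unit.ec2.instance, to an ORCA reservation), the nova commands shown earlier can be looped over; a sketch run as geni-orca on bbn-hn, using instance-00002b73 from above as the example:

for id in $(nova list | awk '/ACTIVE/ {print $2}') ; do
  # OS-EXT-SRV-ATTR:instance_name in 'nova show' carries the virsh name
  nova show $id | grep -q instance-00002b73 && echo "$id is instance-00002b73"
done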
Step 5: get information about terminated VMs
Overview of Step 5
Using:
- On bbn-hn, use SM state, logs, or administrator interfaces to find evidence of the two terminated experiments.
- Determine how many other experiments were run in the past day.
- Determine which GENI user created each of the terminated experiments.
- Determine the mapping of experiments to VM servers for each of the terminated experiments.
- Determine the control and dataplane MAC addresses assigned to each VM in each terminated experiment.
- Determine any IP addresses assigned by ExoGENI to each VM in each terminated experiment.
Verify:
- A site administrator can get ownership and resource allocation information for recently-terminated experiments which were created on the local SM.
- A site administrator can get ownership and resource allocation information for recently-terminated experiments which were created using ExoSM.
- A site administrator can get information about MAC addresses and IP addresses used by recently-terminated experiments.
Results of Step 5 from 2013-03-12
This step has a lot of parts, and I wanted to get through as many steps as possible today, so I skipped this for now.
2013-05-03 update: We didn't get back to this before winding down our work on acceptance tests. We remain confident that this information can be obtained; we just never learned the exact procedure.
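For the record, the natural starting point would be the same Pequod queries used in Step 4, with a non-active state filter; a sketch (we never verified that "closed" is an accepted state filter here, so treat that as an assumption, and <reservation-id> is a placeholder):

pequod>show reservations for all actor bbn-sm state closed
pequod>show reservationProperties for <reservation-id> actor bbn-sm

As the Step 4 output shows, reservationProperties carries the dataplane MAC and IP (unit.eth1.mac, unit.eth1.ip), the control address (unit.manage.ip), and the owning user (xmlrpc.user.dn) that this step asks for.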
Step 6: get OpenFlow state information
Overview of Step 6
Using:
- On the 8264 (dataplane) switch, get a list of controllers, and see if any additional controllers are serving experiments.
- On bbn-hn, get a list of active FV slices from the FlowVisor
- On bbn-hn, get a list of active slivers from FOAM
- On bbn-hn, use FV or FOAM to get a list of the flowspace of a running OpenFlow experiment.
Verify:
- A site administrator can get information about the OpenFlow resources used by running experiments.
- No new controllers are added directly to the switch when an OpenFlow experiment is running.
- A new slice has been added to the FlowVisor which points to the experimenter's controller.
- No new sliver has been added to FOAM.
Results of Step 6 from 2013-03-12
On bbn-hn, I logged in to the 8264:
ssh 8264.bbn.xo
I showed OpenFlow information:
bbn-8264.bbn.xo>show openflow
Protocol Version: 1
Openflow State: Enabled
FDB Table Priority: 1000
Openflow Instance ID: 1
state: enabled , buffering: disabled
retry 4, emergency time-out 30
echo req interval 30, echo reply time-out 15
min-flow-timeout : 0, use controller provided values.
max flows acl : Maximum Available
max flows unicast fdb : Maximum Available
max flows multicast fdb : Maximum Available
emergency feature: disabled
ports : 1,5,9,13,17-62,64
Controller Id: 1
Active Controller IP Address: 192.168.103.10, port: 6633, Data-Port
Openflow instance 2 is currently disabled
Openflow instance 3 is currently disabled
Openflow instance 4 is currently disabled
Openflow Edge ports : None
Openflow Management ports : 63
This shows that there's only one OF instance, and its controller is tcp:192.168.103.10:6633. No new controllers were added to the switch by jbs17.
On bbn-hn, I got a list of active slivers from FOAM:
[15:34:07] jbs@bbn-hn:/home/jbs +$ foamctl geni:list-slivers --passwd-file=/etc/foam.passwd
{
  "slivers": [
    {
      "status": "Approved",
      "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs15:013f6aa7-e600-4be5-9e31-5c0436223dfd",
      "creation": "2012-09-03 21:32:30.838618+00:00",
      "pend_reason": "Request has underspecified VLAN requests",
      "expiration": "2013-06-15 23:00:00+00:00",
      "deleted": "False",
      "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs",
      "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs15",
      "enabled": true,
      "email": "jbs@bbn.com",
      "flowvisor_slice": "013f6aa7-e600-4be5-9e31-5c0436223dfd",
      "desc": "JBS 15 OpenFlow resources at BBN ExoGENI.",
      "ref": null,
      "id": 38,
      "uuid": "013f6aa7-e600-4be5-9e31-5c0436223dfd"
    },
    {
      "status": "Approved",
      "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs16:8aad0aae-ae92-4a3c-bd5e-43f7456f628e",
      "creation": "2012-09-03 21:34:38.493069+00:00",
      "pend_reason": "Request has underspecified VLAN requests",
      "expiration": "2013-06-15 23:00:00+00:00",
      "deleted": "False",
      "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs",
      "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs16",
      "enabled": true,
      "email": "jbs@bbn.com",
      "flowvisor_slice": "8aad0aae-ae92-4a3c-bd5e-43f7456f628e",
      "desc": "JBS 16 OpenFlow resources at BBN ExoGENI.",
      "ref": null,
      "id": 39,
      "uuid": "8aad0aae-ae92-4a3c-bd5e-43f7456f628e"
    },
    {
      "status": "Approved",
      "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+tuptymon:062d476f-2433-49bf-b55a-d902b3314e13",
      "creation": "2013-03-08 19:58:08.787536+00:00",
      "pend_reason": "Request has underspecified VLAN requests",
      "expiration": "2013-05-30 00:00:00+00:00",
      "deleted": "False",
      "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+tupty",
      "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+tuptymon",
      "enabled": true,
      "email": "tupty@bbn.com",
      "flowvisor_slice": "062d476f-2433-49bf-b55a-d902b3314e13",
      "desc": "tuptymon OpenFlow resources at BBN.",
      "ref": null,
      "id": 93,
      "uuid": "062d476f-2433-49bf-b55a-d902b3314e13"
    },
    {
      "status": "Approved",
      "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs17:697158f8-3b67-4ada-9386-da77337b086a",
      "creation": "2013-03-12 17:18:23.544339+00:00",
      "pend_reason": "Request has underspecified VLAN requests",
      "expiration": "2013-03-15 23:00:00+00:00",
      "deleted": "False",
      "user": "urn:publicid:IDN+pgeni.gpolab.bbn.com+user+jbs",
      "slice_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs17",
      "enabled": true,
      "email": "jbs@pgeni.gpolab.bbn.com",
      "flowvisor_slice": "697158f8-3b67-4ada-9386-da77337b086a",
      "desc": "JBS 17 OpenFlow resources at BBN ExoGENI.",
      "ref": null,
      "id": 94,
      "uuid": "697158f8-3b67-4ada-9386-da77337b086a"
    }
  ]
}
That showed the FOAM sliver URN and FlowVisor slice name for the jbs17 slice, which I put into variables for later use:
sliver_urn="urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs17:697158f8-3b67-4ada-9386-da77337b086a" flowvisor_slice="697158f8-3b67-4ada-9386-da77337b086a"
Note that, contrary to the expectation above, a new sliver was added to FOAM: since ORCA doesn't yet support creating slice-private OpenFlow-controlled VLANs, I created this sliver on the shared mesoscale VLAN rather than a slice-private VLAN. That doesn't affect the substance of this test.
On bbn-hn, I got a list of active FV slices from FlowVisor:
[15:33:44] jbs@bbn-hn:/home/jbs +$ fvctl --passwd-file=/etc/flowvisor.passwd listSlices
Slice 0: orca-5
Slice 1: orca-4
Slice 2: fvadmin
Slice 3: orca-6
Slice 4: 697158f8-3b67-4ada-9386-da77337b086a
Slice 5: orca-3
Slice 6: orca-2
Slice 7: 8aad0aae-ae92-4a3c-bd5e-43f7456f628e
Slice 8: 062d476f-2433-49bf-b55a-d902b3314e13
Slice 9: orca-2601
Slice 10: 013f6aa7-e600-4be5-9e31-5c0436223dfd
Slice 11: orca-2602
Slice 12: orca-2603
Slice 13: orca-2604
Slice 14: orca-2605
The jbs17 FV slice is in that list, slice 4.
I got more information about that FV slice, and confirmed that it points to my controller:
[15:37:24] jbs@bbn-hn:/home/jbs +$ fvctl --passwd-file=/etc/flowvisor.passwd getSliceInfo $flowvisor_slice
Got reply:
connection_1=00:01:08:17:f4:b5:2a:00-->/192.1.242.3:43618-->naxos.gpolab.bbn.com/192.1.249.133:33017
contact_email=jbs@pgeni.gpolab.bbn.com
controller_hostname=naxos.gpolab.bbn.com
controller_port=33017
creator=fvadmin
On bbn-hn, I used FlowVisor to get a list of the flowspace rules that correspond to that FV slice:
[15:38:18] jbs@bbn-hn:/home/jbs +$ fvctl --passwd-file=/etc/flowvisor.passwd listFlowSpace | grep $flowvisor_slice
rule 2: FlowEntry[dpid=[00:01:08:17:f4:b5:2a:00],ruleMatch=[OFMatch[dl_type=0x800,dl_vlan=0x6d6,nw_dst=10.42.17.0/24,nw_src=10.42.17.0/24]],actionsList=[Slice:697158f8-3b67-4ada-9386-da77337b086a=4],id=[-892161202],priority=[100],]
rule 3: FlowEntry[dpid=[00:01:08:17:f4:b5:2a:00],ruleMatch=[OFMatch[dl_type=0x806,dl_vlan=0x6d6,nw_dst=10.42.17.0/24,nw_src=10.42.17.0/24]],actionsList=[Slice:697158f8-3b67-4ada-9386-da77337b086a=4],id=[-892161200],priority=[100],]
Those are the expected flowspace rules: IP and ARP traffic matching my subnet on VLAN 1750 (0x6d6 in hex) is sent to my FV slice.
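(A one-liner confirms the VLAN tag encoding:

printf '%x\n' 1750
6d6

so dl_vlan=0x6d6 is indeed VLAN 1750.)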
So, this is all set.
Step 7: verify MAC addresses on the rack dataplane switch
Overview of Step 7
Using:
- Establish a privileged login to the 8264 (dataplane) switch
- Obtain a list of the full MAC address table of the switch
- On bbn-hn and the worker nodes, use available data sources to determine which host or VM owns each MAC address.
Verify:
- It is possible to identify and classify every MAC address visible on the switch
Results of Step 7 from 2013-03-12
On bbn-hn, I logged in to the 8264:
ssh 8264.bbn.xo
I showed the MAC address table:
bbn-8264.bbn.xo>show mac
Mac address Aging Time: 300
Total number of FDB entries : 16
MAC address        VLAN  Port  Trnk  State  Permanent
-----------------  ----  ----  ----  -----  ---------
00:80:e5:2d:01:64  1006  63          FWD
00:80:e5:2d:03:ac  1006  63          FWD
08:17:f4:e8:05:00  1006  63          FWD
5c:f3:fc:6b:10:a8  1006  63          FWD
5c:f3:fc:c0:14:c5  1006  63          FWD
5c:f3:fc:c0:50:09  1006  63          FWD
5c:f3:fc:c0:53:65  1006  63          FWD
5c:f3:fc:c0:57:f9  1006  63          FWD
5c:f3:fc:c0:67:51  1006  63          FWD
6c:ae:8b:1b:e7:8e  1006  63          FWD
6c:ae:8b:1b:e7:f6  1006  63          FWD
6c:ae:8b:1b:e9:d6  1006  63          FWD
6c:ae:8b:1b:ee:fe  1006  63          FWD
6c:ae:8b:1b:ef:56  1006  63          FWD
6c:ae:8b:1b:f0:ae  1006  63          FWD
f8:c0:01:bb:f8:cb  1006  63          FWD
Those are all on the management VLAN (1006), on the port (63) that connects the switch to the control switch; I asked xo-bbn@renci.org whether that was expected.
2013-03-14 update: Chris Heerman confirms on xo-bbn that "you shouldn't see MAC addresses from dataplane interfaces because they're OpenFlow enabled with 'no learning'". (https://mm.renci.org/mailman/private/xo-bbn/2013-March/000041.html, visible only to xo-bbn list members.) We've asked if this is the only way it can be done, or if a different configuration would allow the switch to keep track of (but not act on) which MAC addresses are on which ports; if there turns out to be a way, we should re-run this test.
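In the meantime, the VM-side dataplane MACs are still recoverable from the SM: the Step 4 reservationProperties output carries each VM's dataplane MAC as unit.eth1.mac. A sketch, assuming those Pequod dumps were saved to files matching pequod-jbs17-*.txt (illustrative filenames):

grep -o 'unit.eth1.mac = [0-9a-f:]*' pequod-jbs17-*.txt | sort -u

For jbs17, that yields fe:16:3e:00:38:ea and fe:16:3e:00:5b:c1.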
Step 8: verify active dataplane traffic
Overview of Step 8
Using:
- Establish a privileged login to the 8264 (dataplane) switch
- Based on the information from Step 7, determine which interfaces are carrying traffic between the experimental VMs
- Collect interface counters for those interfaces over a period of 10 minutes
- Estimate the rate at which the experiment is sending traffic
Verify:
- The switch reports interface counters, and an administrator can obtain plausible estimates of dataplane traffic quantities by looking at them.
Results of Step 8 from 2013-03-12
We know from Step 4 that bbn-eg-jbs17-a is on bbn-w6, and bbn-eg-jbs17-b is on bbn-w8; we didn't find their MAC addresses in Step 7, but we know from our connection inventory that the eth3 interfaces on bbn-w6 and bbn-w8 are on ports 46 and 48 (respectively) of the 8264.
On bbn-hn, I logged in to the 8264:
ssh 8264.bbn.xo
I could show "bitrate usage" information about the ports that eth2 and eth3 of bbn-w6 and bbn-w8 are on:
bbn-8264.bbn.xo>show interface port 22,24,46,48 bitrate-usage
Utilization statistics for rate. Press CTRL+C to stop:
          In      Out
Port 22:  0Kbps   0Kbps  -
Port 24:  0Kbps   0Kbps  \
Port 46:  6Kbps   6Kbps  |
Port 48:  6Kbps   5Kbps  /
It cycles through those ports, updating as it goes. That shows some traffic, but less than I would have expected, and more symmetric.
I asked xo-bbn@renci.org about this.
2013-03-14 update: Chris Heerman confirms on xo-bbn that "show interface port <range> bitrate-usage" is a good way to see usage, and recommends "41-50,64" as the range, to get all the workers' dataplane interfaces plus the uplink towards the GENI core. I tried that, and saw:
bbn-8264.bbn.xo>show interface port 41-50,64 bitrate-usage
Utilization statistics for rate. Press CTRL+C to stop:
          In         Out
Port 41:  41Kbps     14Kbps    /
Port 42:  3Kbps      3Kbps     -
Port 43:  52Kbps     18Kbps    \
Port 44:  24Kbps     4Kbps     |
Port 45:  2Kbps      5403Kbps  /
Port 46:  45Kbps     144Kbps   -
Port 47:  5411Kbps   2Kbps     \
Port 48:  66Kbps     19Kbps    |
Port 49:  0Kbps      0Kbps     /
Port 50:  0Kbps      0Kbps     \
Port 64:  0Kbps      0Kbps     |
So that's kind of interesting: there's 5 Mb/sec on ports 45 and 47, but those aren't the ports we recorded for my workers. Unless we're just wrong about what's connected to what.
I tried killing my iperf source, and after a minute or two:
bbn-8264.bbn.xo>show interface port 41-50,64 bitrate-usage
Utilization statistics for rate. Press CTRL+C to stop:
          In       Out
Port 41:  72Kbps   30Kbps   /
Port 42:  38Kbps   5Kbps    -
Port 43:  18Kbps   11Kbps   \
Port 44:  26Kbps   5Kbps    |
Port 45:  2Kbps    0Kbps    /
Port 46:  47Kbps   197Kbps  -
Port 47:  17Kbps   4Kbps    \
Port 48:  14Kbps   10Kbps   /
Port 49:  0Kbps    0Kbps    -
Port 50:  0Kbps    0Kbps    \
Port 64:  0Kbps    0Kbps    |
So, that was clearly my traffic. What's it doing on those ports?
Ah: Physical inspection shows that the hosts are connected to the wrong ports -- the cable from bbn-w6 is in port 45, not 46, and bbn-w8 is in port 47, not 48.
So, given that, we can see the rate of the traffic flowing between those ports, and I think this step is all set.
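As a final sanity check on the counter values: iperf was pacing 5 Mb/sec of UDP payload in 1470-byte datagrams, and each datagram also carries framing overhead, so the wire rate should be a bit above 5 Mb/sec; the ~5.4 Mbps the switch showed on ports 45 and 47 is plausibly in that range. A sketch of the arithmetic, with assumed per-datagram overheads (UDP 8 bytes, IP 20, Ethernet header + 802.1Q tag + FCS 22):

awk 'BEGIN {
  pps = 5000000 / (1470 * 8)             # datagrams/sec at 5 Mb/sec of payload
  wire = pps * (1470 + 8 + 20 + 22) * 8  # add UDP + IP + Ethernet/802.1Q/FCS overhead
  printf "%.1f datagrams/sec, %.2f Mb/sec on the wire\n", pps, wire / 1e6
}'

That prints about 425.2 datagrams/sec and 5.17 Mb/sec, consistent with what the switch reported.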