Detailed test plan for IG-MON-3: GENI Active Experiment Inspection Test
- Page format
- Status of test
- High-level description from test plan
- Step 1 (prep): start a VM experiment and terminate it
- Step 2 (prep): start a bare metal node experiment and terminate it
- Step 3 (prep): start an experiment and leave it running
- Step 4: view running VMs
- Step 5: get information about terminated experiments
- Step 6: get OpenFlow state information
- Step 7: verify MAC addresses on the rack dataplane switch
- Step 8: verify active dataplane traffic
Detailed test plan for IG-MON-3: GENI Active Experiment Inspection Test
This page is GPO's working page for performing IG-MON-3. It is public for informational purposes, but it is not an official status report. See GENIRacksHome/InstageniRacks/AcceptanceTestStatus for the current status of InstaGENI acceptance tests.
Last substantive edit of this page: 2012-07-26
Page format
- The status chart summarizes the state of this test
- The high-level description from test plan contains text copied exactly from the public test plan and acceptance criteria pages.
- The steps contain things I will actually do/verify:
- Steps may be composed of related substeps where I find this useful for clarity
- Each step is either a preparatory step (identified by "(prep)") or a verification step (the default):
- Preparatory steps are just things we have to do. They're not tests of the rack, but are prerequisites for subsequent verification steps
- Verification steps are steps in which we will actually look at rack output and make sure it is as expected. They contain a Using: block, which lists the steps to run the verification, and an Expect: block which lists what outcome is expected for the test to pass.
Status of test
Step | State | Date completed | Open Tickets | Closed Tickets/Comments
1 | Completed | | | needs retesting when 3 is retested
2 | | | | needs retesting when 3 is retested
3 | Completed | | | needs retesting once OpenFlow resources are available from InstaGENI AM
4 | Blocked | | 26 | (35) blocked on resolution of MAC reporting issue
5 | Blocked | | 26, 31 | blocked on availability of real MACs for VMs; blocked on determination of how to get control IP/MAC information for terminated VMs
6 | Blocked | | | blocked on availability of OpenFlow functionality
7 | | | | ready to test non-OpenFlow functionality
8 | | | | ready to test non-OpenFlow functionality
High-level description from test plan
This test inspects the state of the rack data plane and control networks when experiments are running, and verifies that a site administrator can find information about running experiments.
Procedure
- An experimenter from the GPO starts up experiments to ensure there is data to look at:
- An experimenter runs an experiment containing at least one rack OpenVZ VM, and terminates it.
- An experimenter runs an experiment containing at least one rack OpenVZ VM, and leaves it running.
- A site administrator uses available system and experiment data sources to determine current experimental state, including:
- How many VMs are running and which experimenters own them
- How many physical hosts are in use by experiments, and which experimenters own them
- How many VMs were terminated within the past day, and which experimenters owned them
- What OpenFlow controllers the data plane switch, the rack FlowVisor, and the rack FOAM are communicating with
- A site administrator examines the switches and other rack data sources, and determines:
- What MAC addresses are currently visible on the data plane switch and what experiments do they belong to?
- For some experiment which was terminated within the past day, what data plane and control MAC and IP addresses did the experiment use?
- For some experimental data path which is actively sending traffic on the data plane switch, do changes in interface counters show approximately the expected amount of traffic into and out of the switch?
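The last check above (comparing interface counter deltas against known traffic) can be sketched programmatically. This is a minimal Python illustration, assuming we can sample the dataplane switch's per-interface byte counters before and after a known transfer; the helper name and tolerance value are invented for illustration, not part of the test plan:

```python
# Hypothetical check for the interface-counter step: given switch byte
# counters sampled before and after a transfer of known size, verify the
# counter delta is roughly the transfer size. The tolerance allows for
# protocol overhead and unrelated background traffic.
def counters_consistent(before, after, transfer_bytes, tolerance=0.2):
    delta = after - before
    return abs(delta - transfer_bytes) <= tolerance * transfer_bytes
```

For example, after scp'ing the ~95 MB locale-archive file used in step 1, the counters on the relevant switch port should have advanced by roughly that many bytes.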
Criteria to verify as part of this test
- VII.09. A site administrator can determine the MAC addresses of all physical host interfaces, all network device interfaces, all active experimental VMs, and all recently-terminated experimental VMs. (C.3.f)
- VII.11. A site administrator can locate current configuration of flowvisor, FOAM, and any other OpenFlow services, and find logs of recent activity and changes. (D.6.a)
- VII.18. Given a public IP address and port, an exclusive VLAN, a sliver name, or a piece of user-identifying information such as e-mail address or username, a site administrator or GMOC operator can identify the email address, username, and affiliation of the experimenter who controlled that resource at a particular time. (D.7)
Step 1 (prep): start a VM experiment and terminate it
- An experimenter requests an experiment from the InstaGENI AM containing two rack VMs and a dataplane VLAN
- The experimenter logs into a VM, and sends dataplane traffic
- The experimenter terminates the experiment
Results of testing step 1: 2012-05-18
- I'll use the following rspec to get two VMs:
jericho,[~],05:29(0)$ cat IG-MON-nodes-C.rspec
<?xml version="1.0" encoding="UTF-8"?>
<!-- This rspec will reserve two openvz nodes, each with no OS specified,
     and create a single dataplane link between them. It should work on any
     Emulab which has nodes available and supports OpenVZ. -->
<rspec xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd"
       type="request">
  <node client_id="virt1" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt1:if0" />
  </node>
  <node client_id="virt2" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt2:if0" />
  </node>
  <link client_id="virt1-virt2-0">
    <interface_ref client_id="virt1:if0"/>
    <interface_ref client_id="virt2:if0"/>
    <property source_id="virt1:if0" dest_id="virt2:if0"/>
    <property source_id="virt2:if0" dest_id="virt1:if0"/>
  </link>
</rspec>
- Then create a slice:
omni createslice ecgtest2
- Then create a sliver using that rspec:
jericho,[~],05:31(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest2 ~/IG-MON-nodes-C.rspec
INFO:omni:Loading config file /home/chaos/omni/omni_pgeni
INFO:omni:Using control framework pg
ERROR:omni.protogeni:Call for Get Slice Cred for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 failed.: Exception: PG Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 does not exist.
ERROR:omni.protogeni: ..... Run with --debug for more information
ERROR:omni:Cannot create sliver urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2: Could not get slice credential: Exception: PG Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 does not exist.
- It looks like the slice just wasn't ready yet. Trying again after a minute, the same thing worked:
jericho,[~],05:31(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest2 ~/IG-MON-nodes-C.rspec INFO:omni:Loading config file /home/chaos/omni/omni_pgeni INFO:omni:Using control framework pg INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-05-19 10:30:51 UTC INFO:omni:Creating sliver(s) from rspec file /home/chaos/IG-MON-nodes-C.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result: INFO:omni:<?xml version="1.0" ?> INFO:omni:<!-- Reserved resources for: Slice: ecgtest2 At AM: URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am --> INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="virt1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc2" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="true" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+372"> <sliver_type name="emulab-openvz"/> <interface client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc2:eth1" mac_address="00000a0a0101" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+375"> <ip address="10.10.1.1" type="ipv4"/> </interface> <rs:vnode name="pcvm2-1" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt1.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc2.utah.geniracks.net" port="30266" username="chaos"/> </services> </node> <node client_id="virt2" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc5" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+373"> 
<sliver_type name="emulab-openvz"/> <interface client_id="virt2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc5:eth1" mac_address="00000a0a0102" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+376"> <ip address="10.10.1.2" type="ipv4"/> </interface> <rs:vnode name="pcvm5-2" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt2.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc5.utah.geniracks.net" port="30266" username="chaos"/> </services> </node> <link client_id="virt1-virt2-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+374" vlantag="260"> <interface_ref client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc2:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+375"/> <interface_ref client_id="virt2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc5:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+376"/> <property dest_id="virt2:if0" source_id="virt1:if0"/> <property dest_id="virt1:if0" source_id="virt2:if0"/> </link> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am configfile: /home/chaos/omni/omni_pgeni framework: pg native: True Args: createsliver ecgtest2 /home/chaos/IG-MON-nodes-C.rspec Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-05-19 10:30:51 UTC Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am. INFO:omni: ============================================================
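The manifest returned by createsliver can be mined programmatically for the fields this test cares about: per-node login host/port and any mac_address attributes on interfaces. A minimal Python sketch (the helper name is invented; it assumes a GENI v3 manifest rspec shaped like the one above):

```python
import xml.etree.ElementTree as ET

# Namespace used by GENI v3 rspecs, as seen in the manifest above.
NS = {"r": "http://www.geni.net/resources/rspec/3"}

def summarize_manifest(xml_text):
    """Collect (client_id, login host/port, interface MACs) per node."""
    root = ET.fromstring(xml_text)
    summary = []
    for node in root.findall("r:node", NS):
        login = node.find("r:services/r:login", NS)
        macs = [i.get("mac_address")
                for i in node.findall("r:interface", NS)
                if i.get("mac_address")]
        summary.append({
            "client_id": node.get("client_id"),
            "login": (login.get("hostname"), login.get("port"))
                     if login is not None else None,
            "macs": macs,
        })
    return summary
```

Run against the manifest above, this would report virt1 on pc2.utah.geniracks.net port 30266 with MAC 00000a0a0101, matching what sliverstatus shows below.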
- According to sliverstatus, my nodes are:
pc2.utah.geniracks.net port 30266
pc5.utah.geniracks.net port 30266
- However, pc2 needs to run frisbee before this is ready. Wait awhile.
- Login to pc2.utah.geniracks.net on port 30266 with agent forwarding
- Find that it is virt1 and has eth1=10.10.1.1
- Find a big file:
[chaos@virt1 ~]$ ls -l /usr/lib/locale/locale-archive-rpm
-rw-r--r-- 1 root root 99154656 May 20  2011 /usr/lib/locale/locale-archive-rpm
- Copy the big file over the dataplane:
[chaos@virt1 ~]$ scp /usr/lib/locale/locale-archive 10.10.1.2:/tmp/
The authenticity of host '10.10.1.2 (10.10.1.2)' can't be established.
RSA key fingerprint is 6d:1d:76:53:a5:25:99:39:e2:89:ea:b0:99:e3:d3:b9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.1.2' (RSA) to the list of known hosts.
locale-archive                                100%   95MB  11.8MB/s   00:08
- Look at the ARP tables on virt1 and virt2:
[chaos@virt1 ~]$ /sbin/arp -a
virt2-virt1-virt2-0 (10.10.1.2) at 82:02:0a:0a:01:02 [ether] on mv1.1
pc2.utah.geniracks.net (155.98.34.12) at 00:01:ac:11:02:01 [ether] on eth999
boss.utah.geniracks.net (155.98.34.4) at 00:01:ac:11:02:01 [ether] on eth999
[chaos@virt1 ~]$ ssh 10.10.1.2
Last login: Fri May 18 13:35:41 2012 from capybara.bbn.com
[chaos@virt2 ~]$ /sbin/arp -a
virt1-virt1-virt2-0 (10.10.1.1) at 82:01:0a:0a:01:01 [ether] on mv2.2
boss.utah.geniracks.net (155.98.34.4) at 00:01:ac:11:05:02 [ether] on eth999
pc5.utah.geniracks.net (155.98.34.15) at 00:01:ac:11:05:02 [ether] on eth999
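When correlating these entries with what the rack dataplane switch reports (step 7), it helps to turn `arp -a` output into structured data. A minimal Python sketch, assuming the Linux `arp -a` format shown above; the helper name is invented for illustration:

```python
import re

# Matches one line of Linux `arp -a` output, e.g.:
#   virt2-virt1-virt2-0 (10.10.1.2) at 82:02:0a:0a:01:02 [ether] on mv1.1
ARP_RE = re.compile(
    r"^(?P<host>\S+) \((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-f:]+) "
    r"\[ether\] on (?P<dev>\S+)$"
)

def parse_arp(output):
    """Return (hostname, ip, mac, interface) tuples from `arp -a` output."""
    entries = []
    for line in output.splitlines():
        m = ARP_RE.match(line.strip())
        if m:
            entries.append(
                (m.group("host"), m.group("ip"), m.group("mac"), m.group("dev"))
            )
    return entries
```

The dataplane MACs (the ones on mvN.N interfaces) are the ones to look for in the switch's MAC table; the eth999 entries are control-plane.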
- Delete the sliver:
jericho,[~],05:53(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am deletesliver ecgtest2
Results of testing step 1: 2012-05-21
Note: repeating this test for continuation of IG-MON-3 testing on 2012-05-21.
- I'll use the following rspec to get two VMs:
jericho,[~],11:33(0)$ cat omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec
<?xml version="1.0" encoding="UTF-8"?>
<!-- This rspec will reserve two openvz nodes, each with no OS specified,
     and create a single dataplane link between them. It should work on any
     Emulab which has nodes available and supports OpenVZ. -->
<rspec xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd"
       type="request">
  <node client_id="virt1" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt1:if0" />
  </node>
  <node client_id="virt2" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt2:if0" />
  </node>
  <link client_id="virt1-virt2-0">
    <interface_ref client_id="virt1:if0"/>
    <interface_ref client_id="virt2:if0"/>
    <property source_id="virt1:if0" dest_id="virt2:if0"/>
    <property source_id="virt2:if0" dest_id="virt1:if0"/>
  </link>
</rspec>
- Then create a slice:
omni createslice ecgtest2
- Then create a sliver using that rspec:
jericho,[~],14:16(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest2 ~/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec INFO:omni:Loading config file /home/chaos/omni/omni_pgeni INFO:omni:Using control framework pg INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-05-22 19:16:43 UTC INFO:omni:Creating sliver(s) from rspec file /home/chaos/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result: INFO:omni:<?xml version="1.0" ?> INFO:omni:<!-- Reserved resources for: Slice: ecgtest2 At AM: URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am --> INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="virt1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc3" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+773"> <sliver_type name="emulab-openvz"/> <interface client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+776"> <ip address="10.10.1.1" type="ipv4"/> </interface> <rs:vnode name="pcvm3-1" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt1.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30010" username="chaos"/> </services> </node> <node client_id="virt2" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc3" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" 
exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+774"> <sliver_type name="emulab-openvz"/> <interface client_id="virt2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+777"> <ip address="10.10.1.2" type="ipv4"/> </interface> <rs:vnode name="pcvm3-2" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt2.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30011" username="chaos"/> </services> </node> <link client_id="virt1-virt2-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+775"> <interface_ref client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+776"/> <interface_ref client_id="virt2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+777"/> <property dest_id="virt2:if0" source_id="virt1:if0"/> <property dest_id="virt1:if0" source_id="virt2:if0"/> </link> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am configfile: /home/chaos/omni/omni_pgeni framework: pg native: True Args: createsliver ecgtest2 /home/chaos/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-05-22 19:16:43 UTC Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am. INFO:omni: ============================================================
- Hmm, I got a busy failure on sliverstatus:
jericho,[~],14:17(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am sliverstatus ecgtest2
INFO:omni:Loading config file /home/chaos/omni/omni_pgeni
INFO:omni:Using control framework pg
INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-05-22 19:16:43 UTC
INFO:omni:Status of Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2:
ERROR:omni.protogeni:Call for Sliver status of urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 at http://www.utah.geniracks.net/protogeni/xmlrpc/am failed. Server says: <Fault 14: 'resource is busy; try again later'>
INFO:omni.protogeni: ... pausing 10 seconds and retrying ....
ERROR:omni.protogeni:Call for Sliver status of urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 at http://www.utah.geniracks.net/protogeni/xmlrpc/am failed. Server says: <Fault 14: 'resource is busy; try again later'>
INFO:omni.protogeni: ... pausing 10 seconds and retrying ....
ERROR:omni.protogeni:Call for Sliver status of urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 at http://www.utah.geniracks.net/protogeni/xmlrpc/am failed. Server says: <Fault 14: 'resource is busy; try again later'>
INFO:omni: ------------------------------------------------------------
INFO:omni: Completed sliverstatus:
  Options as run:
    aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am
    configfile: /home/chaos/omni/omni_pgeni
    framework: pg
    native: True
  Args: sliverstatus ecgtest2
  Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-05-22 19:16:43 UTC
  Failed to get SliverStatus on ecgtest2 at AM http://www.utah.geniracks.net/protogeni/xmlrpc/am: <Fault 14: 'resource is busy; try again later'>
  Returned status of slivers on 0 of 1 possible aggregates.
INFO:omni: ============================================================
- Getversion succeeded, though, and after a while, sliverstatus succeeded again.
- According to sliverstatus, my nodes are:
pc3.utah.geniracks.net port 30010
pc3.utah.geniracks.net port 30011
- Login to pc3.utah.geniracks.net on port 30010 with agent forwarding
- Find that it is virt1 and has eth1=10.10.1.1
- Find a big file:
[chaos@virt1 ~]$ ls -l /usr/lib/locale/locale-archive-rpm
-rw-r--r-- 1 root root 99154656 May 20  2011 /usr/lib/locale/locale-archive-rpm
- Copy the big file over the dataplane:
[chaos@virt1 ~]$ scp /usr/lib/locale/locale-archive 10.10.1.2:/tmp/
locale-archive                                100%   95MB  10.5MB/s   00:09
- Look at the ARP tables on virt1 and virt2:
[chaos@virt1 ~]$ /sbin/arp -a
virt2-virt1-virt2-0 (10.10.1.2) at 82:02:0a:0a:01:02 [ether] on mv1.1
pc3.utah.geniracks.net (155.98.34.13) at 00:01:ac:11:03:01 [ether] on eth999
boss.utah.geniracks.net (155.98.34.4) at 00:01:ac:11:03:01 [ether] on eth999
[chaos@virt1 ~]$ ssh 10.10.1.2
[chaos@virt2 ~]$ /sbin/arp -a
virt1-virt1-virt2-0 (10.10.1.1) at 82:01:0a:0a:01:01 [ether] on mv2.2
pc3.utah.geniracks.net (155.98.34.13) at 00:01:ac:11:03:02 [ether] on eth999
boss.utah.geniracks.net (155.98.34.4) at 00:01:ac:11:03:02 [ether] on eth999
- Hmm, incidentally, the sliverstatus doesn't contain MAC addresses at all:
jericho,[~],14:38(0)$ grep mac_address ecgtest2-sliverstatus-www-utah-geniracks-net-protogeni.json
jericho,[~],14:39(1)$
- Delete the sliver:
omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am deletesliver ecgtest2
Results of testing step 1: 2012-06-07
Note: repeating this test for continuation of IG-MON-3 testing on 2012-06-07.
- I'll use the following rspec to get two VMs:
jericho,[~],10:54(0)$ cat omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec
<?xml version="1.0" encoding="UTF-8"?>
<!-- This rspec will reserve two openvz nodes, each with no OS specified,
     and create a single dataplane link between them. It should work on any
     Emulab which has nodes available and supports OpenVZ. -->
<rspec xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd"
       type="request">
  <node client_id="virt1" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt1:if0" />
  </node>
  <node client_id="virt2" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt2:if0" />
  </node>
  <link client_id="virt1-virt2-0">
    <interface_ref client_id="virt1:if0"/>
    <interface_ref client_id="virt2:if0"/>
    <property source_id="virt1:if0" dest_id="virt2:if0"/>
    <property source_id="virt2:if0" dest_id="virt1:if0"/>
  </link>
</rspec>
- Then create a slice:
omni createslice ecgtest2
- Then create a sliver using that rspec:
jericho,[~],11:22(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest2 ~/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec INFO:omni:Loading config file /home/chaos/omni/omni_pgeni INFO:omni:Using control framework pg INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-06-08 16:22:42 UTC INFO:omni:Creating sliver(s) from rspec file /home/chaos/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result: INFO:omni:<?xml version="1.0" ?> INFO:omni:<!-- Reserved resources for: Slice: ecgtest2 At AM: URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am --> INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="virt1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc5" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+4237"> <sliver_type name="emulab-openvz"/> <interface client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc5:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+4240"> <ip address="10.10.1.1" type="ipv4"/> </interface> <rs:vnode name="pcvm5-3" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt1.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc5.utah.geniracks.net" port="30266" username="chaos"/> </services> </node> <node client_id="virt2" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc5" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" 
exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+4238"> <sliver_type name="emulab-openvz"/> <interface client_id="virt2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc5:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+4241"> <ip address="10.10.1.2" type="ipv4"/> </interface> <rs:vnode name="pcvm5-4" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt2.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc5.utah.geniracks.net" port="30267" username="chaos"/> </services> </node> <link client_id="virt1-virt2-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+4239"> <interface_ref client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc5:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+4240"/> <interface_ref client_id="virt2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc5:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+4241"/> <property dest_id="virt2:if0" source_id="virt1:if0"/> <property dest_id="virt1:if0" source_id="virt2:if0"/> </link> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am configfile: /home/chaos/omni/omni_pgeni framework: pg native: True Args: createsliver ecgtest2 /home/chaos/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-06-08 16:22:42 UTC Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am. INFO:omni: ============================================================
- According to sliverstatus, my nodes are:
pc5.utah.geniracks.net port 30266
pc5.utah.geniracks.net port 30267
- Login to pc5.utah.geniracks.net on port 30266 with agent forwarding
- Find that it is virt1 and has mv3.3=10.10.1.1 (huh, my notes say this used to be eth1. Did something change? Ah, no, looking at the ARP tables, it was always mvN.N: I was just being sloppy before.)
- Find a big file:
[chaos@virt1 ~]$ ls -l /usr/lib/locale/locale-archive-rpm
-rw-r--r-- 1 root root 99154656 May 20  2011 /usr/lib/locale/locale-archive-rpm
- Copy the big file over the dataplane:
[chaos@virt1 ~]$ scp /usr/lib/locale/locale-archive 10.10.1.2:/tmp/
The authenticity of host '10.10.1.2 (10.10.1.2)' can't be established.
RSA key fingerprint is 6d:1d:76:53:a5:25:99:39:e2:89:ea:b0:99:e3:d3:b9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.1.2' (RSA) to the list of known hosts.
locale-archive                                100%   95MB  10.5MB/s   00:09
- Look at the ARP tables on virt1 and virt2:
[chaos@virt1 ~]$ /sbin/arp -a
pc5.utah.geniracks.net (155.98.34.15) at 00:01:ac:11:05:03 [ether] on eth999
boss.utah.geniracks.net (155.98.34.4) at 00:01:ac:11:05:03 [ether] on eth999
virt2-virt1-virt2-0 (10.10.1.2) at 02:00:83:cf:e6:09 [ether] on mv3.3
[chaos@virt1 ~]$ ssh 10.10.1.2
[chaos@virt2 ~]$ /sbin/arp -a
pc5.utah.geniracks.net (155.98.34.15) at 00:01:ac:11:05:04 [ether] on eth999
boss.utah.geniracks.net (155.98.34.4) at 00:01:ac:11:05:04 [ether] on eth999
virt1-virt1-virt2-0 (10.10.1.1) at 02:00:70:b7:95:54 [ether] on mv4.4
- Sliverstatus still doesn't contain MAC addresses at all:
jericho,[~],11:33(0)$ grep mac ecgtest2-sliverstatus-www-utah-geniracks-net-protogeni.json
jericho,[~],11:33(1)$
- The original spec for this step said to delete the sliver, but let's leave it running to demonstrate the lack of MAC addresses in sliverstatus.
Results of testing step 1: 2012-07-26
Note: repeating this test for continuation of IG-MON-3 testing on 2012-07-26.
- I'll use the following rspec to get two VMs:
jericho,[~],10:51(0)$ cat omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec
<?xml version="1.0" encoding="UTF-8"?>
<!-- This rspec will reserve two openvz nodes, each with no OS specified,
     and create a single dataplane link between them. It should work on any
     Emulab which has nodes available and supports OpenVZ. -->
<rspec xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd"
       type="request">
  <node client_id="virt1" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt1:if0" />
  </node>
  <node client_id="virt2" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt2:if0" />
  </node>
  <link client_id="virt1-virt2-0">
    <interface_ref client_id="virt1:if0"/>
    <interface_ref client_id="virt2:if0"/>
    <property source_id="virt1:if0" dest_id="virt2:if0"/>
    <property source_id="virt2:if0" dest_id="virt1:if0"/>
  </link>
</rspec>
- Then create a slice:
omni createslice ecgtest2
- Then create a sliver using that rspec:
jericho,[~],10:51(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest2 ~/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec INFO:omni:Loading config file /home/chaos/omni/omni_pgeni INFO:omni:Using control framework pg INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-07-27 15:51:50 UTC INFO:omni:Creating sliver(s) from rspec file /home/chaos/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result: INFO:omni:<?xml version="1.0" ?> INFO:omni:<!-- Reserved resources for: Slice: ecgtest2 At AM: URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am --> INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="virt1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc3" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+7162"> <sliver_type name="emulab-openvz"/> <interface client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:lo0" mac_address="023f9c5ff6fc" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+7165"> <ip address="10.10.1.1" type="ipv4"/> </interface> <rs:vnode name="pcvm3-2" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt1.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30778" username="chaos"/> </services> </node> <node client_id="virt2" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc3" 
component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+7163"> <sliver_type name="emulab-openvz"/> <interface client_id="virt2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:lo0" mac_address="02b1730e88f9" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+7166"> <ip address="10.10.1.2" type="ipv4"/> </interface> <rs:vnode name="pcvm3-3" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt2.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30779" username="chaos"/> </services> </node> <link client_id="virt1-virt2-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+7164"> <interface_ref client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+7165"/> <interface_ref client_id="virt2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:lo0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+7166"/> <property dest_id="virt2:if0" source_id="virt1:if0"/> <property dest_id="virt1:if0" source_id="virt2:if0"/> </link> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am configfile: /home/chaos/omni/omni_pgeni framework: pg native: True Args: createsliver ecgtest2 /home/chaos/omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-C.rspec Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires on 2012-07-27 15:51:50 UTC Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am. INFO:omni: ============================================================
- According to sliverstatus, my nodes are:
pc3.utah.geniracks.net port 30778 pc3.utah.geniracks.net port 30779
- Login to pc3.utah.geniracks.net on port 30778 with agent forwarding
- Find that it is virt1 and has mv2.2=10.10.1.1
- Find a big file:
[chaos@virt1 ~]$ ls -l /usr/lib/locale/locale-archive-rpm -rw-r--r-- 1 root root 99154656 May 20 2011 /usr/lib/locale/locale-archive-rpm
- Copy the big file over the dataplane:
[chaos@virt1 ~]$ scp /usr/lib/locale/locale-archive 10.10.1.2:/tmp/ The authenticity of host '10.10.1.2 (10.10.1.2)' can't be established. RSA key fingerprint is 6d:1d:76:53:a5:25:99:39:e2:89:ea:b0:99:e3:d3:b9. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '10.10.1.2' (RSA) to the list of known hosts. locale-archive 100% 95MB 10.5MB/s 00:09
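It can be worth verifying the copy itself, not just its completion, by comparing checksums on both ends. A minimal sketch, demonstrated on a local temp file since it can run anywhere; on the rack the second sum would be taken over ssh (e.g. `ssh 10.10.1.2 md5sum /tmp/locale-archive`):

```shell
# Sketch: compare checksums of source and copied file.
f=$(mktemp)
echo 'payload' > "$f"
cp "$f" "$f.copy"                     # stands in for the scp above
a=$(md5sum "$f" | cut -d' ' -f1)
b=$(md5sum "$f.copy" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo 'checksums match'
rm -f "$f" "$f.copy"
```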
- Look at the ARP tables on virt1 and virt2:
[chaos@virt1 ~]$ /sbin/arp -a virt2-virt1-virt2-0 (10.10.1.2) at 02:b1:73:0e:88:f9 [ether] on mv2.2 pc3.utah.geniracks.net (155.98.34.13) at e8:39:35:b1:4e:88 [ether] on eth999 boss.utah.geniracks.net (155.98.34.4) at 00:00:9b:62:24:df [ether] on eth999 [chaos@virt1 ~]$ ssh 10.10.1.2 [chaos@virt2 ~]$ /sbin/arp -a virt1-virt1-virt2-0 (10.10.1.1) at 02:3f:9c:5f:f6:fc [ether] on mv3.3 pc3.utah.geniracks.net (155.98.34.13) at e8:39:35:b1:4e:88 [ether] on eth999 boss.utah.geniracks.net (155.98.34.4) at 00:00:9b:62:24:df [ether] on eth999
- Sliverstatus still doesn't contain MAC addresses at all:
jericho,[~],11:33(0)$ grep -i mac ecgtest2-sliverstatus-www-utah-geniracks-net-protogeni.json jericho,[~],11:33(1)$
- The original spec for this step said to delete the sliver, but let's leave it running to demonstrate the lack of MAC addresses in sliverstatus.
Step 2 (prep): start a bare metal node experiment and terminate it
- An experimenter requests an experiment from the InstaGENI AM containing two rack hosts and a dataplane VLAN
- The experimenter logs into a host, and sends dataplane traffic
- The experimenter terminates the experiment
Results of testing step 2: 2012-05-18
- Here is an rspec for two physical nodes with no OS specified:
jericho,[~],05:39(0)$ cat IG-MON-nodes-D.rspec <?xml version="1.0" encoding="UTF-8"?> <!-- This rspec will reserve two physical node, each with no OS specified, and create a single dataplane link between them. It should work on any Emulab which has nodes available and supports OpenVZ. --> <rspec xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd" type="request"> <node client_id="phys1" exclusive="true"> <sliver_type name="raw" /> <interface client_id="phys1:if0" /> </node> <node client_id="phys2" exclusive="true"> <sliver_type name="raw" /> <interface client_id="phys2:if0" /> </node> <link client_id="phys1-phys2-0"> <interface_ref client_id="phys1:if0"/> <interface_ref client_id="phys2:if0"/> <property source_id="phys1:if0" dest_id="phys2:if0"/> <property source_id="phys2:if0" dest_id="phys1:if0"/> </link> </rspec>
- Create a slice for this experiment:
omni createslice ecgtest3
- Create a sliver using this rspec:
jericho,[~],05:40(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest3 ~/IG-MON-nodes-D.rspec INFO:omni:Loading config file /home/chaos/omni/omni_pgeni INFO:omni:Using control framework pg INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3 expires on 2012-05-19 10:40:34 UTC INFO:omni:Creating sliver(s) from rspec file /home/chaos/IG-MON-nodes-D.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3 INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result: INFO:omni:<?xml version="1.0" ?> INFO:omni:<!-- Reserved resources for: Slice: ecgtest3 At AM: URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am --> INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="phys1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc4" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="true" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+378"> <sliver_type name="raw-pc"/> <interface client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc4:eth1" mac_address="e83935b1ec9e" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+381"> <ip address="10.10.1.1" type="ipv4"/> </interface> <rs:vnode name="pc4" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="phys1.ecgtest3.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc4.utah.geniracks.net" port="22" username="chaos"/> </services> </node> <node client_id="phys2" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc1" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="true" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+379"> <sliver_type 
name="raw-pc"/> <interface client_id="phys2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc1:eth1" mac_address="e83935b10f96" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+382"> <ip address="10.10.1.2" type="ipv4"/> </interface> <rs:vnode name="pc1" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="phys2.ecgtest3.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc1.utah.geniracks.net" port="22" username="chaos"/> </services> </node> <link client_id="phys1-phys2-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+380" vlantag="261"> <interface_ref client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc4:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+381"/> <interface_ref client_id="phys2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc1:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+382"/> <property dest_id="phys2:if0" source_id="phys1:if0"/> <property dest_id="phys1:if0" source_id="phys2:if0"/> </link> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am configfile: /home/chaos/omni/omni_pgeni framework: pg native: True Args: createsliver ecgtest3 /home/chaos/IG-MON-nodes-D.rspec Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3 expires on 2012-05-19 10:40:34 UTC Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am. INFO:omni: ============================================================
- According to sliverstatus, my nodes are pc1 and pc4.
- Login to pc1.utah.geniracks.net with agent forwarding
- Find that it is phys2 and has eth1=10.10.1.2
- Find a big file:
[chaos@phys2 ~]$ ls -l /usr/lib/locale/locale-archive -rw-r--r-- 1 root root 104997424 Aug 10 2011 /usr/lib/locale/locale-archive
- Copy the big file over the dataplane in a loop:
[chaos@phys2 ~]$ while [ 1 ]; do scp /usr/lib/locale/locale-archive 10.10.1.1:/tmp/; done locale-archive 100% 100MB 50.1MB/s 00:02 locale-archive 100% 100MB 50.1MB/s 00:02 locale-archive 100% 100MB 50.1MB/s 00:02 ...
- After a bit of that, delete the sliver:
jericho,[~],05:53(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am deletesliver ecgtest3
Results of testing step 2: 2012-05-21
- Here is an rspec for two physical nodes with no OS specified:
jericho,[~],14:40(0)$ cat omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-D.rspec <?xml version="1.0" encoding="UTF-8"?> <!-- This rspec will reserve two physical node, each with no OS specified, and create a single dataplane link between them. It should work on any Emulab which has nodes available and supports OpenVZ. --> <rspec xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd" type="request"> <node client_id="phys1" exclusive="true"> <sliver_type name="raw" /> <interface client_id="phys1:if0" /> </node> <node client_id="phys2" exclusive="true"> <sliver_type name="raw" /> <interface client_id="phys2:if0" /> </node> <link client_id="phys1-phys2-0"> <interface_ref client_id="phys1:if0"/> <interface_ref client_id="phys2:if0"/> <property source_id="phys1:if0" dest_id="phys2:if0"/> <property source_id="phys2:if0" dest_id="phys1:if0"/> </link> </rspec>
- Create a slice for this experiment:
omni createslice ecgtest3
- Create a sliver using this rspec:
jericho,[~],14:50(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest3 omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-D.rspec INFO:omni:Loading config file /home/chaos/omni/omni_pgeni INFO:omni:Using control framework pg INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3 expires on 2012-05-22 19:50:36 UTC INFO:omni:Creating sliver(s) from rspec file omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-D.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3 INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result: INFO:omni:<?xml version="1.0" ?> INFO:omni:<!-- Reserved resources for: Slice: ecgtest3 At AM: URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am --> INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="phys1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc5" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="true" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+779"> <sliver_type name="raw-pc"/> <interface client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc5:eth1" mac_address="e4115bed1cb6" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+782"> <ip address="10.10.1.1" type="ipv4"/> </interface> <rs:vnode name="pc5" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="phys1.ecgtest3.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc5.utah.geniracks.net" port="22" username="chaos"/> </services> </node> <node client_id="phys2" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc4" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" 
exclusive="true" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+780"> <sliver_type name="raw-pc"/> <interface client_id="phys2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc4:eth1" mac_address="e83935b1ec9e" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+783"> <ip address="10.10.1.2" type="ipv4"/> </interface> <rs:vnode name="pc4" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="phys2.ecgtest3.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc4.utah.geniracks.net" port="22" username="chaos"/> </services> </node> <link client_id="phys1-phys2-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+781" vlantag="259"> <interface_ref client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc5:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+782"/> <interface_ref client_id="phys2:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc4:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+783"/> <property dest_id="phys2:if0" source_id="phys1:if0"/> <property dest_id="phys1:if0" source_id="phys2:if0"/> </link> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am configfile: /home/chaos/omni/omni_pgeni framework: pg native: True Args: createsliver ecgtest3 omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-D.rspec Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3 expires on 2012-05-22 19:50:36 UTC Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am. INFO:omni: ============================================================
- According to sliverstatus, my nodes are pc4 and pc5.
- Incidentally, sliverstatus here does include MAC addresses.
- Login to pc4.utah.geniracks.net with agent forwarding
- Find that it is phys2 and has eth1=10.10.1.2
- Find a big file:
[chaos@phys2 ~]$ ls -l /usr/lib/locale/locale-archive -rw-r--r-- 1 root root 104997424 Aug 10 2011 /usr/lib/locale/locale-archive
- Copy the big file over the dataplane in a loop:
[chaos@phys2 ~]$ bash [chaos@phys2 ~]$ while [ 1 ]; do scp /usr/lib/locale/locale-archive 10.10.1.1:/tmp/; done The authenticity of host '10.10.1.1 (10.10.1.1)' can't be established. RSA key fingerprint is 46:63:92:67:c8:75:20:4e:52:9f:2d:f6:cb:58:16:77. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '10.10.1.1' (RSA) to the list of known hosts. locale-archive 100% 100MB 33.4MB/s 00:03 locale-archive 100% 100MB 50.1MB/s 00:02 locale-archive 100% 100MB 50.1MB/s 00:02 locale-archive 100% 100MB 50.1MB/s 00:02 ...
- After a bit of that, delete the sliver:
jericho,[~],14:56(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am deletesliver ecgtest3
Step 3 (prep): start an experiment and leave it running
- An experimenter requests an experiment from the InstaGENI AM containing two rack VMs connected by an OpenFlow-controlled dataplane VLAN
- The experimenter configures a simple OpenFlow controller to pass dataplane traffic between the VMs
- The experimenter logs into one VM, and begins sending a continuous stream of dataplane traffic
Results of testing step 3: 2012-05-18
Note: per discussion on instageni-design on 2012-05-17, request of an OpenFlow-controlled dataplane is not yet possible. So this will need to be retested once OpenFlow control is available.
- Not creating a new experiment here, but instead reusing my experiment, ecgtest, created yesterday for IG-MON-1.
- Login to pc3, whose eth1 is 10.10.1.1
- Make a bigger dataplane file by catting the other a few times, then start copying it around again:
[chaos@phys1 ~]$ ls -l /tmp/locale-archive -rw-r--r-- 1 chaos pgeni-gpolab-bbn 3149922720 May 18 04:14 /tmp/locale-archive while [ 1 ]; do scp /tmp/locale-archive 10.10.1.2:/tmp/; done
- This lets me see that the first instance of the file copy takes about a minute, at about 55 MB/s:
[chaos@phys1 ~]$ while [ 1 ]; do scp /tmp/locale-archive 10.10.1.2:/tmp/; done locale-archive 100% 3004MB 55.6MB/s 00:54
- Leave this running.
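As a sanity check on the rate scp reported above, the arithmetic can be done directly (integer division; the numbers are the ones from the copy above):

```shell
# Sketch: 3004 MB copied in 54 s is roughly 55 MB/s.
size_mb=3004
seconds=54
rate=$(( size_mb / seconds ))
echo "${rate} MB/s"
```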
Results of testing step 3: 2012-05-21
Note: per discussion on instageni-design on 2012-05-17, request of an OpenFlow-controlled dataplane is not yet possible. So this will need to be retested once OpenFlow control is available.
- Recreating the experiment, ecgtest, which was initially used for IG-MON-1.
- Here is the rspec:
jericho,[~],15:05(0)$ cat omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec <?xml version="1.0" encoding="UTF-8"?> <!-- This rspec will reserve one physical node and one openvz node, each with no OS specified, and create a single dataplane link between them. It should work on any Emulab which has nodes available and supports OpenVZ. --> <rspec xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd" type="request"> <node client_id="phys1" exclusive="true"> <sliver_type name="raw" /> <interface client_id="phys1:if0" /> </node> <node client_id="virt1" exclusive="false"> <sliver_type name="emulab-openvz" /> <interface client_id="virt1:if0" /> </node> <link client_id="phys1-virt1-0"> <interface_ref client_id="phys1:if0"/> <interface_ref client_id="virt1:if0"/> <property source_id="phys1:if0" dest_id="virt1:if0"/> <property source_id="virt1:if0" dest_id="phys1:if0"/> </link> </rspec>
- Create the sliver:
jericho,[~],15:18(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec INFO:omni:Loading config file /home/chaos/omni/omni_pgeni INFO:omni:Using control framework pg INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest expires within 1 day on 2012-05-22 16:02:36 UTC INFO:omni:Creating sliver(s) from rspec file omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result: INFO:omni:<?xml version="1.0" ?> INFO:omni:<!-- Reserved resources for: Slice: ecgtest At AM: URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am --> INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="phys1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc4" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="true" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+785"> <sliver_type name="raw-pc"/> <interface client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc4:eth1" mac_address="e83935b1ec9e" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+788"> <ip address="10.10.1.1" type="ipv4"/> </interface> <rs:vnode name="pc4" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="phys1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc4.utah.geniracks.net" port="22" username="chaos"/> </services> </node> <node client_id="virt1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc3" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" 
exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+786"> <sliver_type name="emulab-openvz"/> <interface client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:eth1" mac_address="00000a0a0102" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+789"> <ip address="10.10.1.2" type="ipv4"/> </interface> <rs:vnode name="pcvm3-1" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30010" username="chaos"/> </services> </node> <link client_id="phys1-virt1-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+787" vlantag="259"> <interface_ref client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc4:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+788"/> <interface_ref client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+789"/> <property dest_id="virt1:if0" source_id="phys1:if0"/> <property dest_id="phys1:if0" source_id="virt1:if0"/> </link> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am configfile: /home/chaos/omni/omni_pgeni framework: pg native: True Args: createsliver ecgtest omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest expires within 1 day(s) on 2012-05-22 16:02:36 UTC Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am. INFO:omni: ============================================================
- My physical node is pc4
- My virtual node is pc3 port 30010
- Huh, and now sliverstatus does contain the MACs for both hosts, though the virtual one is still wrong. Updated ticket 26, which Jon is looking at. That is not a blocker for this test.
- Login to pc4, whose eth1 is 10.10.1.1
- Make a bigger dataplane file by catting the other a few times, then start copying it around again:
bash touch /tmp/locale-archive for i in {1..40}; do cat /usr/lib/locale/locale-archive >> /tmp/locale-archive; done [chaos@phys1 ~]$ ls -l /tmp/locale-archive -rw-r--r-- 1 chaos pgeni-gpolab-bbn 4199896960 May 21 13:32 /tmp/locale-archive [chaos@phys1 ~]$ while [ 1 ]; do scp /tmp/locale-archive 10.10.1.2:/tmp/; done locale-archive 100% 4005MB 51.4MB/s 01:18 ...
- The first instance of the file copy takes somewhat over a minute, at about 51 MB/s
- Leave this running.
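The file-building step above (catting the archive into /tmp 40 times) can be sketched in a form that runs anywhere, scaled down to 1 MiB pieces; on the rack the source was /usr/lib/locale/locale-archive and the loop count was 40:

```shell
# Sketch: build a source file, concatenate it N times, verify the size.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1024 count=1024 2>/dev/null   # 1 MiB stand-in
for i in 1 2 3 4; do cat "$src" >> "$dst"; done
size=$(wc -c < "$dst")
echo "built $size bytes"                                   # 4 x 1 MiB
rm -f "$src" "$dst"
```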
Results of testing step 3: 2012-05-26
Note: per discussion on instageni-design on 2012-05-17, request of an OpenFlow-controlled dataplane is not yet possible. So this will need to be retested once OpenFlow control is available.
- Recreating the experiment, ecgtest, which was initially used for IG-MON-1.
- Here is the rspec again:
jericho,[~],07:29(0)$ cat omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec <?xml version="1.0" encoding="UTF-8"?> <!-- This rspec will reserve one physical node and one openvz node, each with no OS specified, and create a single dataplane link between them. It should work on any Emulab which has nodes available and supports OpenVZ. --> <rspec xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/request.xsd" type="request"> <node client_id="phys1" exclusive="true"> <sliver_type name="raw" /> <interface client_id="phys1:if0" /> </node> <node client_id="virt1" exclusive="false"> <sliver_type name="emulab-openvz" /> <interface client_id="virt1:if0" /> </node> <link client_id="phys1-virt1-0"> <interface_ref client_id="phys1:if0"/> <interface_ref client_id="virt1:if0"/> <property source_id="phys1:if0" dest_id="virt1:if0"/> <property source_id="virt1:if0" dest_id="phys1:if0"/> </link> </rspec>
- Create the sliver:
jericho,[~],07:29(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec INFO:omni:Loading config file /home/chaos/omni/omni_pgeni INFO:omni:Using control framework pg INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest expires on 2012-06-30 00:00:00 UTC INFO:omni:Creating sliver(s) from rspec file omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result: INFO:omni:<?xml version="1.0" ?> INFO:omni:<!-- Reserved resources for: Slice: ecgtest At AM: URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am --> INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="phys1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc2" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="true" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+2993"> <sliver_type name="raw-pc"/> <interface client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc2:eth1" mac_address="e83935b10c7e" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+2996"> <ip address="10.10.1.1" type="ipv4"/> </interface> <rs:vnode name="pc2" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="phys1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc2.utah.geniracks.net" port="22" username="chaos"/> </services> </node> <node client_id="virt1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc3" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" 
exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+2994"> <sliver_type name="emulab-openvz"/> <interface client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:eth1" mac_address="00000a0a0102" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+2997"> <ip address="10.10.1.2" type="ipv4"/> </interface> <rs:vnode name="pcvm3-1" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/> <host name="virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"/> <services> <login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30010" username="chaos"/> </services> </node> <link client_id="phys1-virt1-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+2995" vlantag="260"> <interface_ref client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc2:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+2996"/> <interface_ref client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+2997"/> <property dest_id="virt1:if0" source_id="phys1:if0"/> <property dest_id="phys1:if0" source_id="virt1:if0"/> </link> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am configfile: /home/chaos/omni/omni_pgeni framework: pg native: True Args: createsliver ecgtest omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest expires on 2012-06-30 00:00:00 UTC Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am. INFO:omni: ============================================================
- My physical node is pc2
- My virtual node is pc3 port 30010
- Login with agent forwarding (ssh -A) to pc2, whose eth1 is 10.10.1.1
- Make a bigger dataplane file by catting the other a few times, then start copying it around again:
bash touch /tmp/locale-archive for i in {1..40}; do cat /usr/lib/locale/locale-archive >> /tmp/locale-archive; done [chaos@phys1 ~]$ ls -l /tmp/locale-archive -rw-r--r-- 1 chaos pgeni-gpolab-bbn 4199896960 May 26 05:38 /tmp/locale-archive [chaos@phys1 ~]$ while [ 1 ]; do scp /tmp/locale-archive 10.10.1.2:/tmp/; done ...
- Leave this running.
Step 4: view running VMs
Using:
- On boss, use AM state, logs, or administrator interfaces to determine:
- What experiments are running right now
- How many VMs are allocated for those experiments
- Which OpenVZ node is each VM running on
- On OpenVZ nodes, use system state, logs, or administrative interfaces to determine what VMs are running right now, and look at any available configuration or logs of each.
Verify:
- A site administrator can determine what experiments are running on the InstaGENI AM
- A site administrator can determine the mapping of VMs to active experiments
- A site administrator can view some state of running VMs on the VM server
Results of testing step 4: 2012-05-18
- Per-host view of current state:
- From https://boss.utah.geniracks.net/nodecontrol_list.php3?showtype=dl360 in red dot mode, i can once again see that pc3 is allocated as phys1 to pgeni-gpolab-bbn-com/ecgtest.
- I can see that pc5 is configured as an OpenVZ shared host, but i can't see how many experiments it is running.
- Per-experiment view of current state:
- Browse to https://boss.utah.geniracks.net/genislices.php and find one slice running on the Component Manager:
ID HRN Created Expires 362 bbn-pgeni.ecgtest (ecgtest) 2012-05-17 08:12:37 2012-05-18 18:00:00
- Click (ecgtest) to view the details of that experiment at https://boss.utah.geniracks.net/showexp.php3?experiment=363#details.
- This shows what nodes it's using, including that its VM has been put on pc5:
Physical Node Mapping: ID Type OS Physical --------------- ------------ --------------- ------------ phys1 dl360 FEDORA15-STD pc3 virt1 pcvm OPENVZ-STD pcvm5-1 (pc5)
- Here are some other interesting things:
IP Port allocation: Low High --------------- ------------ 30000 30255 SSHD Port allocation ('ssh -p portnum'): ID Port SSH command --------------- ---------- ---------------------- Physical Lan/Link Mapping: ID Member IP MAC NodeID --------------- --------------- --------------- -------------------- --------- phys1-virt1-0 phys1:0 10.10.1.1 e8:39:35:b1:4e:8a pc3 1/1 <-> 1/34 procurve2 phys1-virt1-0 virt1:0 10.10.1.2 pcvm5-1
- That last one is mysterious, because the experimenter's sliverstatus command contains:
{ 'attributes': { 'client_id': 'phys1:if0', 'component_id': 'urn:publicid:IDN+utah.geniracks.net+interface+pc3:eth1', 'mac_address': 'e83935b14e8a', ... { 'attributes': { 'client_id': 'virt1:if0', 'component_id': 'urn:publicid:IDN+utah.geniracks.net+interface+pc5:eth1', 'mac_address': '00000a0a0102',
- So i think it should be possible for the admin interface to know that virtual MAC address too.
- Huh, but also, that MAC address reported in sliverstatus is in fact wrong. Let me summarize:
MAC addrs reported for phys1:0 == 10.10.1.1 E8:39:35:B1:4E:8A: from /sbin/ifconfig eth1 run on phys1 (authoritative) e83935b14e8a: from sliverstatus as experimenter (correct) e8:39:35:b1:4e:8a: from: https://boss.utah.geniracks.net/showexp.php3?experiment=363#details (correct) MAC addrs reported for virt1:0 == 10.10.1.2 82:01:0A:0A:01:02: from /sbin/ifconfig mv1.1 run on virt1 (authoritative) 00000a0a0102: from sliverstatus as experimenter (incorrect: first four digits are wrong) - : from https://boss.utah.geniracks.net/showexp.php3?experiment=363#details (not reported)
I opened ticket 26 for this issue.
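Since the three sources report MACs in different formats (colon-delimited upper case, colon-delimited lower case, bare hex), a mechanical comparison needs a normalization step. A sketch, using the phys1:0 values from the summary above:

```shell
# Sketch: normalize MACs to lowercase bare hex before comparing.
normalize_mac() {
  printf '%s' "$1" | tr -d ':' | tr '[:upper:]' '[:lower:]'
}

ifconfig_mac='E8:39:35:B1:4E:8A'   # from /sbin/ifconfig eth1 (authoritative)
sliver_mac='e83935b14e8a'          # from sliverstatus
if [ "$(normalize_mac "$ifconfig_mac")" = "$(normalize_mac "$sliver_mac")" ]; then
  echo 'phys1 MACs agree'
fi
```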
- Now, use the OpenVZ host itself to view activity:
- As an admin, login to pc5.utah.geniracks.net
- Poking around, i was led to a couple of prospective data sources:
- Logs in /var/emulab
- The vzctl RPM, containing a number of OpenVZ control commands
- The latter seems to give a list of running VMs easily:
vhost1,[/var/emulab],05:00(1)$ sudo vzlist -a CTID NPROC STATUS IP_ADDR HOSTNAME 1 15 running - virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net
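For monitoring scripts, the vzlist output above can be parsed with awk. Since vzlist needs root on the OpenVZ host, this sketch parses a captured sample; on the rack you would pipe `sudo vzlist -a` in directly:

```shell
# Sketch: count running containers from vzlist-style output.
sample='      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
         1         15 running   -               virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net'
count=$(printf '%s\n' "$sample" | awk '$3 == "running" { n++ } END { print n+0 }')
echo "$count running container(s)"
```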
- I also see a command to figure out which container is running a given PID. Suppose i run top and am concerned about an sshd process chewing up all system CPU:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 51817 20001 20 0 116m 3780 872 R 94.4 0.0 0:05.74 sshd
- Since the user is numeric, i can assume this process is probably running in a container, so find out which one:
vhost1,[/var/emulab],05:05(0)$ sudo vzpid 51766 Pid CTID Name 51766 1 sshd
- and then look up the container info as above.
- The files in /var/emulab give details about how each experiment was created. In particular: Information about experiment startup attributes: /var/emulab/boot/tmcc.pcvm5-1/ /var/emulab/boot/tmcc.pcvm5-2/ Logs of experiment progress: /var/emulab/logs/tbvnode-pcvm5-1.log /var/emulab/logs/tbvnode-pcvm5-2.log /var/emulab/logs/tmccproxy.pcvm5-1.log /var/emulab/logs/tmccproxy.pcvm5-2.log
- These may be useful for running and terminated experiments if the container IDs are unique.
Results of testing step 4: 2012-05-21
- Per-host view of current state:
- From https://boss.utah.geniracks.net/nodecontrol_list.php3?showtype=dl360 in red dot mode, i can once again see that pc4 is allocated as phys1 to pgeni-gpolab-bbn-com/ecgtest.
- I can see that pc1 and pc3 are configured as OpenVZ shared hosts, but i can't see what experiments they are running.
- Per-experiment view of current state:
- Browse to https://boss.utah.geniracks.net/genislices.php and find one slice running on the Component Manager:
  ID   HRN                          Created              Expires
  535  bbn-pgeni.ecgtest (ecgtest)  2012-05-21 13:19:28  2012-05-22 10:02:36
- Click (ecgtest) to view the details of that experiment at https://boss.utah.geniracks.net/showexp.php3?experiment=536#details.
- This shows what nodes it's using, including that its VM has been put on pc3:
  Physical Node Mapping:
  ID              Type         OS              Physical
  --------------- ------------ --------------- ------------
  phys1           dl360        FEDORA15-STD    pc4
  virt1           pcvm         OPENVZ-STD      pcvm3-1 (pc3)
- Here are some other interesting things, all of which are similar to Friday's test:
  IP Port allocation:
        Low        High
  --------------- ------------
      30000       30255

  SSHD Port allocation ('ssh -p portnum'):
  ID              Port       SSH command
  --------------- ---------- ----------------------

  Physical Lan/Link Mapping:
  ID              Member          IP              MAC                  NodeID
  --------------- --------------- --------------- -------------------- ---------
  phys1-virt1-0   phys1:0         10.10.1.1       e8:39:35:b1:ec:9e    pc4
                                                  1/1 <-> 1/37         procurve2
  phys1-virt1-0   virt1:0         10.10.1.2                            pcvm3-1
- Now, use the OpenVZ host itself to view activity:
- As an admin, login to pc3.utah.geniracks.net
- Everything seems similar to when i looked Friday:
  vhost2,[~],13:57(0)$ sudo vzlist -a
        CTID      NPROC STATUS    IP_ADDR         HOSTNAME
           1         19 running   -               virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net
Results of testing step 4: 2012-05-26
- Per-host view of current state:
- From https://boss.utah.geniracks.net/nodecontrol_list.php3?showtype=dl360 in red dot mode, i can once again see that pc2 is allocated as phys1 to pgeni-gpolab-bbn-com/ecgtest.
- I can see that pc3 and pc5 are configured as OpenVZ shared hosts, but i can't see what experiments they are running.
- Using https://boss.utah.geniracks.net/showpool.php, i can see that pc3 is running one VM and pc5 is running zero, but not what experiments each is running. I opened ticket #35 to ask whether a node-to-experiment mapping would be an easy modification to showpool.php.
- Per-experiment view of current state:
- Browse to https://boss.utah.geniracks.net/genislices.php and find two slices running on the Component Manager:
  ID   HRN                                  Created              Expires
  949  bbn-pgeni.lnubuntu12b (lnubuntu12b)  2012-05-25 09:54:08  2012-05-29 18:00:00
  951  bbn-pgeni.ecgtest (ecgtest)          2012-05-26 05:30:20  2012-06-29 18:00:00
- Click (ecgtest) to view the details of that experiment at https://boss.utah.geniracks.net/showexp.php3?experiment=952#details.
- This shows what nodes it's using, including that its VM has been put on pc3:
  Physical Node Mapping:
  ID              Type         OS              Physical
  --------------- ------------ --------------- ------------
  phys1           dl360        FEDORA15-STD    pc2
  virt1           pcvm         OPENVZ-STD      pcvm3-1 (pc3)
- Here are some other interesting things:
  IP Port allocation:
        Low        High
  --------------- ------------
      30000       30255

  SSHD Port allocation ('ssh -p portnum'):
  ID              Port       SSH command
  --------------- ---------- ----------------------

  Physical Lan/Link Mapping:
  ID              Member          IP              MAC                  NodeID
  --------------- --------------- --------------- -------------------- ---------
  phys1-virt1-0   phys1:0         10.10.1.1       e8:39:35:b1:0c:7e    pc2
                                                  1/1 <-> 1/28         procurve2
  phys1-virt1-0   virt1:0         10.10.1.2       00:00:0a:0a:01:02    pcvm3-1
- So, indeed, a MAC address is reported for the virtual node. However, virt1 itself still says:
  mv1.1     Link encap:Ethernet  HWaddr 82:01:0A:0A:01:02
            inet addr:10.10.1.2  Bcast:10.10.1.255  Mask:255.255.255.0
- Now, use the OpenVZ host itself to view activity:
- As an admin, login to pc3.utah.geniracks.net
- Everything seems similar to when i looked Friday:
  vhost2,[~],06:00(0)$ sudo vzlist -a
        CTID      NPROC STATUS    IP_ADDR         HOSTNAME
           1         19 running   -               virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net
Earlier, i said:
- Using https://boss.utah.geniracks.net/showpool.php, i can see that pc3 is running one VM and pc5 is running zero, but not what experiments each is running. I opened ticket #35 to ask whether a node-to-experiment mapping would be an easy modification to showpool.php.
- Leigh responded that this is already at the bottom of the node UI; see e.g. https://boss.utah.geniracks.net/shownode.php3?node_id=pc3 (which you can get to by clicking through from showpool.php). So this is good for our purposes.
Step 5: get information about terminated experiments
Using:
- On boss, use AM state, logs, or administrator interfaces to find evidence of the two terminated experiments.
- Determine how many other experiments were run in the past day.
- Determine which GENI user created each of the terminated experiments.
- Determine the mapping of experiments to OpenVZ or exclusive hosts for each of the terminated experiments.
- Determine the control and dataplane MAC addresses assigned to each VM in each terminated experiment.
- Determine any IP addresses assigned by InstaGENI to each VM in each terminated experiment.
- Given a control IP address which InstaGENI had assigned to a now-terminated VM, determine which experiment was given that control IP.
- Given a data plane IP address which an experimenter had requested for a now-terminated VM, determine which experiment was given that IP.
Verify:
- A site administrator can get ownership and resource allocation information for recently-terminated experiments which used OpenVZ VMs.
- A site administrator can get ownership and resource allocation information for recently-terminated experiments which used physical hosts.
- A site administrator can get information about MAC addresses and IP addresses used by recently-terminated experiments.
Results of testing step 5: 2012-05-21
- In red dot mode, at https://boss.utah.geniracks.net/genihistory.php, i can view lots of previous slivers, of which ecgtest3 and ecgtest2 are among the most recent.
- I can type urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 into the search box, and bring up all previous instances of slivers in that slice.
- Note that this is an exact match, not a regexp: urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest only pulls up ecgtest slivers, not ecgtest2 or ecgtest3. And just searching for ecgtest reports nothing.
- As promised by the default text in the search box, searching for urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos does appear to get all of my slivers.
- That UI shows that the following slivers were created in the past 24 hours:
  ID   Slice HRN/URN                                          Creator HRN/URN                                     Created              Destroyed            Manifest
  784  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest    urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos    2012-05-21 13:19:41                       manifest
  778  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3   urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos    2012-05-21 12:51:18  2012-05-21 12:56:36  manifest
  772  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2   urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos    2012-05-21 12:17:44  2012-05-21 12:40:30  manifest
  760  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest    urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos    2012-05-21 09:05:11  2012-05-21 09:27:04  manifest
  718  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+20vm       urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers  2012-05-21 08:03:37  2012-05-21 10:34:19  manifest
  686  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+15vm       urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers  2012-05-21 07:47:56  2012-05-21 10:52:30  manifest
  654  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+15vm       urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers  2012-05-21 07:32:17  2012-05-21 07:39:03  manifest
  622  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+2vmubuntu  urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers  2012-05-21 07:24:50  2012-05-21 07:29:53  manifest
  616  urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+2vmubuntu  urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers  2012-05-21 07:10:14  2012-05-21 07:23:27  manifest
- That display shows which GENI user created each experiment.
- The clickable manifests can be used to get the sliver-to-resource mappings. Within each manifest, <rs:vnode /> elements can be used to find the resources used by the experiment. These look like:
  <rs:vnode xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="pc4"><host name="phys1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"><services><login authentication="ssh-keys" hostname="pc4.utah.geniracks.net" port="22" username="chaos"></login></services></host></rs:vnode></sliver_type></node>
  <rs:vnode xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="pcvm3-1"><host name="virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"><services><login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30010" username="chaos"></login></services></host></rs:vnode></sliver_type></node>
- In addition, the manifests contain dataplane IP addresses and MAC addresses for each experiment (though these are wrong or missing for VMs, per ticket #26).
- Here is all the information i can get this way:
Emulab ID | Sliver URN | Physical nodes | OpenVZ containers | Dataplane IPs and MACs |
784 | urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest | pc4(phys1) | pc3:pcvm3-1(virt1) | 10.10.1.1(phys1:e83935b1ec9e) 10.10.1.2(virt1:00000a0a0102) |
778 | urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3 | pc5(phys1) pc4(phys2) | 10.10.1.1(phys1:e4115bed1cb6) 10.10.1.2(phys2:e83935b1ec9e) | |
772 | urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 | pc3:pcvm3-1(virt1) pc3:pcvm3-2(virt2) | 10.10.1.1(virt1:UNKNOWN) 10.10.1.2(virt2:UNKNOWN) | |
760 | urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest | pc5:pcvm5-21(virt01) pc5:pcvm5-22(virt02) pc5:pcvm5-23(virt03) pc5:pcvm5-24(virt04) pc5:pcvm5-25(virt05) pc5:pcvm5-26(virt06) pc5:pcvm5-27(virt07) pc5:pcvm5-28(virt08) pc5:pcvm5-29(virt09) pc5:pcvm5-30(virt10) pc1:pcvm1-1(virt11) | ||
718 | urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+20vm | pc5:pcvm5-11(VM-1) pc2:pcvm2-8(VM-2) pc5:pcvm5-16(VM-3) pc5:pcvm5-17(VM-4) pc2:pcvm2-9(VM-5) pc2:pcvm2-10(VM-6) pc5:pcvm5-18(VM-7) pc5:pcvm5-19(VM-8) pc5:pcvm5-20(VM-9) pc5:pcvm5-12(VM-10) pc5:pcvm5-13(VM-11) pc2:pcvm2-1(VM-12) pc2:pcvm2-2(VM-13) pc2:pcvm2-3(VM-14) pc2:pcvm2-4(VM-15) pc5:pcvm5-14(VM-16) pc2:pcvm2-5(VM-17) pc2:pcvm2-6(VM-18) pc2:pcvm2-7(VM-19) pc5:pcvm5-15(VM-20) | 10.10.1.1(VM-1:00000a0a0101) 10.10.1.2(VM-2:00000a0a0102) 10.10.1.3(VM-3:00000a0a0103) 10.10.1.4(VM-4:00000a0a0104) 10.10.1.5(VM-5:00000a0a0105) 10.10.1.6(VM-6:00000a0a0106) 10.10.1.7(VM-7:00000a0a0107) 10.10.1.8(VM-8:00000a0a0108) 10.10.1.9(VM-9:00000a0a0109) 10.10.1.10(VM-10:00000a0a010a) 10.10.1.20(VM-11:00000a0a0114) 10.10.1.19(VM-12:00000a0a0113) 10.10.1.11(VM-13:00000a0a010b) 10.10.1.12(VM-14:00000a0a010c) 10.10.1.13(VM-15:00000a0a010d) 10.10.1.14(VM-16:00000a0a010e) 10.10.1.15(VM-17:00000a0a010f) 10.10.1.16(VM-18:00000a0a0110) 10.10.1.17(VM-19:00000a0a0111) 10.10.1.18(VM-20:00000a0a0112) | |
686 | urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+15vm | pc5:pcvm5-1(VM-1) pc5:pcvm5-6(VM-2) pc5:pcvm5-7(VM-3) pc4:pcvm4-3(VM-4) pc5:pcvm5-8(VM-5) pc4:pcvm4-4(VM-6) pc5:pcvm5-9(VM-7) pc4:pcvm4-5(VM-8) pc5:pcvm5-10(VM-9) pc5:pcvm5-2(VM-10) pc5:pcvm5-3(VM-11) pc5:pcvm5-4(VM-12) pc4:pcvm4-1(VM-13) pc4:pcvm4-2(VM-14) pc5:pcvm5-5(VM-15) | 10.10.1.1(VM-1:00000a0a0101) 10.10.1.2(VM-2:00000a0a0102) 10.10.1.3(VM-3:00000a0a0103) 10.10.1.4(VM-4:UNKNOWN) 10.10.1.5(VM-5:00000a0a0105) 10.10.1.6(VM-6:UNKNOWN) 10.10.1.7(VM-7:00000a0a0107) 10.10.1.8(VM-8:UNKNOWN) 10.10.1.9(VM-9:00000a0a0109) 10.10.1.10(VM-10:00000a0a010a) 10.10.1.15(VM-11:00000a0a010f) 10.10.1.14(VM-12:00000a0a010e) 10.10.1.11(VM-13:UNKNOWN) 10.10.1.12(VM-14:UNKNOWN) 10.10.1.13(VM-15:00000a0a010d) | |
654 | urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+15vm | pc2:pcvm2-1(VM-1) pc5:pcvm5-5(VM-2) pc5:pcvm5-6(VM-3) pc5:pcvm5-7(VM-4) pc5:pcvm5-8(VM-5) pc2:pcvm2-4(VM-6) pc2:pcvm2-5(VM-7) pc5:pcvm5-9(VM-8) pc5:pcvm5-10(VM-9) pc2:pcvm2-2(VM-10) pc5:pcvm5-1(VM-11) pc5:pcvm5-2(VM-12) pc5:pcvm5-3(VM-13) pc2:pcvm2-3(VM-14) pc5:pcvm5-4(VM-15) | 10.10.1.2(VM-2:00000a0a0102) 10.10.1.3(VM-3:00000a0a0103) 10.10.1.4(VM-4:00000a0a0104) 10.10.1.5(VM-5:00000a0a0105) 10.10.1.8(VM-8:00000a0a0108) 10.10.1.9(VM-9:00000a0a0109) 10.10.1.15(VM-11:00000a0a010f) 10.10.1.14(VM-12:00000a0a010e) 10.10.1.11(VM-13:00000a0a010b) 10.10.1.13(VM-15:00000a0a010d) 10.10.1.1(VM-1:UNKNOWN) 10.10.1.6(VM-6:UNKNOWN) 10.10.1.7(VM-7:UNKNOWN) 10.10.1.10(VM-10:UNKNOWN) 10.10.1.12(VM-14:UNKNOWN) | |
622 | urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+2vmubuntu | pc5:pcvm5-1(VM-1) pc1:pcvm1-3(VM-2) pc5:pcvm5-6(VM-3) pc1:pcvm1-4(VM-4) pc5:pcvm5-7(VM-5) pc1:pcvm1-5(VM-6) pc5:pcvm5-8(VM-7) pc5:pcvm5-9(VM-8) pc5:pcvm5-10(VM-9) pc1:pcvm1-1(VM-10) pc5:pcvm5-2(VM-11) pc5:pcvm5-3(VM-12) pc5:pcvm5-4(VM-13) pc5:pcvm5-5(VM-14) pc1:pcvm1-2(VM-15) | 10.10.1.1(VM-1:00000a0a0101) 10.10.1.3(VM-3:00000a0a0103) 10.10.1.5(VM-5:00000a0a0105) 10.10.1.7(VM-7:00000a0a0107) 10.10.1.8(VM-8:00000a0a0108) 10.10.1.9(VM-9:00000a0a0109) 10.10.1.15(VM-11:00000a0a010f) 10.10.1.14(VM-12:00000a0a010e) 10.10.1.11(VM-13:00000a0a010b) 10.10.1.12(VM-14:00000a0a010c) 10.10.1.2(VM-2:UNKNOWN) 10.10.1.4(VM-4:UNKNOWN) 10.10.1.6(VM-6:UNKNOWN) 10.10.1.10(VM-10:UNKNOWN) 10.10.1.13(VM-15:UNKNOWN) | |
- Note, i semi-automated getting that information from the manifest using awk, as follows:
- Download the XML data from the page (copy/paste)
- Find every line that starts with <interface, and concatenate the next line (which contains </interface>) to it
- Find the node assignments:
grep "<rs:vnode" tmpfile | awk '{print $6 " " $3 " " $4}' | awk -F= '{print $2 " " $3 " " $4}' | awk -F\" '{print $2 " " $4 " " $6}' | awk -F\. '{print $1 " " $4}' | awk '{print $1 ":" $3 "(" $4 ")"}'
- Find the interface data for interfaces which have mac addresses defined:
grep "<interface " tmpfile | grep mac_address | awk '{print $7 " " $2 " " $5}' | awk -F\" '{print $2 " " $4 " " $6}' | awk -F: '{print $1 " " $2}' | awk '{print $1 "(" $2 ":" $4 ")"}'
- Find the interface data for interfaces which don't have mac addresses defined:
grep "<interface " tmpfile | grep -v mac_address | awk '{print $6 " " $2}' | awk -F\" '{print $2 " " $4}' | awk -F: '{print $1}' | awk '{print $1 "(" $2 ":UNKNOWN)"}'
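The pipelines above key on whitespace-separated field positions, so they break if ProtoGENI ever reorders manifest attributes. A sketch of a variant (my addition, not something run during the test) that keys on the name= attribute instead, exercised here against a trimmed copy of the rs:vnode fragment shown earlier:

```shell
#!/bin/sh
# Extract vnode names from a manifest by matching the name= attribute
# rather than counting whitespace-separated fields.
cat > /tmp/manifest-fragment.xml <<'EOF'
<rs:vnode xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="pc4"><host name="phys1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"></host></rs:vnode>
<rs:vnode xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="pcvm3-1"><host name="virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"></host></rs:vnode>
EOF

# First grep isolates the opening rs:vnode tag, second pulls its name attribute.
nodes=$(grep -o '<rs:vnode[^>]*' /tmp/manifest-fragment.xml \
        | grep -o 'name="[^"]*"' | cut -d'"' -f2)
echo "$nodes"
```

This prints one vnode name per line (pc4, then pcvm3-1 for the sample above), which can feed the same per-experiment table as the awk version.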
- Incidentally, i came across this UI (https://boss.utah.geniracks.net/showpool.php), which shows the utilization of the nodes in the shared pool. That's not bad.
- I poked around regarding how to do these:
- Determine the control and dataplane MAC addresses assigned to each VM in each terminated experiment.
- Determine any IP addresses assigned by InstaGENI to each VM in each terminated experiment.
Since i couldn't figure out anything really bulletproof, i created ticket #31 to see whether Utah has a preferred solution to this. It's possible that some of this information can be obtained from the OpenVZ hosts. However, i can't get the information for e.g. Luisa's 20-VM experiment from earlier today, because the hosts have been swapped out and back in since then. This is an unusual situation in general, but not an unheard-of one. It would be better to cache information which might be forensically relevant on boss.
This test is also blocked by ticket #26 from being fully completed, though i expect that the relevant parts of this will succeed too, and a cursory check should be sufficient.
Step 6: get OpenFlow state information
Using:
- On the dataplane switch, get a list of controllers, and see if any additional controllers are serving experiments.
- On the flowvisor VM, get a list of active FV slices from the FlowVisor
- On the FOAM VM, get a list of active slivers from FOAM
- Use FV, FOAM, or the switch to list the flowspace of a running OpenFlow experiment.
Verify:
- A site administrator can get information about the OpenFlow resources used by running experiments.
- When an OpenFlow experiment is started by InstaGENI, a new controller is added directly to the switch.
- No new FlowVisor slices are added for new OpenFlow experiments started by InstaGENI.
- No new FOAM slivers are added for new OpenFlow experiments started by InstaGENI.
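The checks above map onto a handful of CLI invocations. The command names below are my assumptions based on the stock FlowVisor and FOAM CLIs of this era (fvctl, foamctl); exact flags and the fv-admin slice name are hypothetical, so treat this as a memo, not a recipe. The helper just prints any command whose binary isn't present instead of failing:

```shell
#!/bin/sh
# Hedged sketch of where each Step 6 item might be checked.
# Run a command if its binary exists; otherwise just show what would run.
check() {
    command -v "${1%% *}" >/dev/null 2>&1 && eval "$1" || echo "would run: $1"
}
check "fvctl listSlices"              # FlowVisor VM: list active FV slices
check "fvctl getSliceInfo fv-admin"   # controller + flowspace for one slice (slice name hypothetical)
check "foamctl geni:list-slivers"     # FOAM VM: list active FOAM slivers
```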
Step 7: verify MAC addresses on the rack dataplane switch
Using:
- Establish a privileged login to the dataplane switch
- Obtain a list of the full MAC address table of the switch
- On boss and the experimental hosts, use available data sources to determine which host or VM owns each MAC address.
Verify:
- It is possible to identify and classify every MAC address visible on the switch
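One way to dump the switch's MAC table without screen-scraping the CLI is an SNMP walk of BRIDGE-MIB dot1dTpFdbPort (1.3.6.1.2.1.17.4.3.1.2), where each returned OID ends in the MAC address encoded as six decimal octets. This is my assumption about a workable approach, not something run during the test; the sketch converts such a decimal suffix back to hex, using phys1's dataplane MAC from the Step 4 results as the sample:

```shell
#!/bin/sh
# Convert the decimal OID suffix from a BRIDGE-MIB FDB walk into a MAC address.
# A real walk might look like (community string hypothetical):
#   snmpwalk -v2c -c public procurve2 1.3.6.1.2.1.17.4.3.1.2
oid_suffix="232.57.53.177.78.138"   # decimal form of e8:39:35:b1:4e:8a (phys1 eth1)

# Print each dot-separated decimal octet as two hex digits, colon-joined.
mac=$(printf '%s\n' "$oid_suffix" \
      | awk -F. '{ for (i = 1; i <= NF; i++) printf "%02x%s", $i, (i < NF ? ":" : "\n") }')
echo "$mac"
```

The resulting MAC can then be matched against the ifconfig/sliverstatus data collected on boss and the experimental hosts.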
Step 8: verify active dataplane traffic
Using:
- Establish a privileged login to the dataplane switch
- Based on the information from Step 7, determine which interfaces are carrying traffic between the experimental VMs
- Collect interface counters for those interfaces over a period of 10 minutes
- Estimate the rate at which the experiment is sending traffic
Verify:
- The switch reports interface counters, and an administrator can obtain plausible estimates of dataplane traffic quantities by looking at them.
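The rate estimate in the last two "Using" items is plain counter arithmetic. A sketch with invented counter values (real ones would come from the switch's per-interface counters, e.g. the CLI interface display or an IF-MIB ifInOctets query):

```shell
#!/bin/sh
# Estimate dataplane traffic rate from two octet-counter samples 10 minutes apart.
# Counter values below are invented for illustration.
t0_octets=1200000
t1_octets=76200000
interval=600   # seconds between samples

# Note: 32-bit counters wrap at 2^32 octets; a real script should detect
# t1 < t0 and compensate (or prefer 64-bit ifHCInOctets where available).
rate_bps=$(( (t1_octets - t0_octets) * 8 / interval ))
echo "estimated rate: ${rate_bps} bits/sec"
```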
Attachments (8)
- IG-MON-3-2pc.jpg (829.2 KB) - added 12 years ago.
- IG-MON-3-2vm-of.jpg (884.7 KB) - added 12 years ago.
- IG-MON-3-2vm.jpg (874.5 KB) - added 12 years ago.
- IG-MON-3-experiments.jpg (811.4 KB) - added 12 years ago.
- IG-MON-3-2pc-detail.jpg (537.2 KB) - added 12 years ago.
- IG-MON-3-2vm-of-detail.jpg (675.3 KB) - added 12 years ago.
- IG-MON-3-AggHistory.jpg (1.3 MB) - added 12 years ago.
- IG-MON-3-SliceHistory.jpg (291.3 KB) - added 12 years ago.