Changes between Version 12 and Version 13 of GENIRacksHome/InstageniRacks/AcceptanceTestStatus/IG-MON-3

Timestamp: 05/21/12 17:03:42
Author: chaos@bbn.com
 * Leave this running.

=== Results of testing: 2012-05-21 ===

''Note: per discussion on instageni-design on 2012-05-17, request of an OpenFlow-controlled dataplane is not yet possible.  So this will need to be retested once OpenFlow control is available.''

 * Recreating the experiment, ecgtest, which was initially used for IG-MON-1.
 * Here is the rspec:
{{{
jericho,[~],15:05(0)$ cat omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec
<?xml version="1.0" encoding="UTF-8"?>
<!-- This rspec will reserve one physical node and one openvz node, each
     with no OS specified, and create a single dataplane link between
     them.  It should work on any Emulab which has nodes available and
     supports OpenVZ.  -->
<rspec xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.geni.net/resources/rspec/3
                           http://www.geni.net/resources/rspec/3/request.xsd"
       type="request">

  <node client_id="phys1" exclusive="true">
    <sliver_type name="raw" />
    <interface client_id="phys1:if0" />
  </node>
  <node client_id="virt1" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt1:if0" />
  </node>

  <link client_id="phys1-virt1-0">
    <interface_ref client_id="phys1:if0"/>
    <interface_ref client_id="virt1:if0"/>
    <property source_id="phys1:if0" dest_id="virt1:if0"/>
    <property source_id="virt1:if0" dest_id="phys1:if0"/>
  </link>
</rspec>
}}}
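The structure of the request can be sanity-checked with a short Python script. This is a standalone sketch: an abbreviated copy of the rspec above is inlined as a byte string (the `xsi` attributes are omitted for brevity), rather than read from the file shown in the transcript.

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the IG-MON-nodes-B request rspec shown above.
RSPEC = b"""<?xml version="1.0" encoding="UTF-8"?>
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="request">
  <node client_id="phys1" exclusive="true">
    <sliver_type name="raw" />
    <interface client_id="phys1:if0" />
  </node>
  <node client_id="virt1" exclusive="false">
    <sliver_type name="emulab-openvz" />
    <interface client_id="virt1:if0" />
  </node>
  <link client_id="phys1-virt1-0">
    <interface_ref client_id="phys1:if0"/>
    <interface_ref client_id="virt1:if0"/>
  </link>
</rspec>"""

NS = {"r": "http://www.geni.net/resources/rspec/3"}
root = ET.fromstring(RSPEC)

# Map each node to its sliver type, and each link to its endpoints.
nodes = {n.get("client_id"): n.find("r:sliver_type", NS).get("name")
         for n in root.findall("r:node", NS)}
links = {l.get("client_id"): [i.get("client_id")
                              for i in l.findall("r:interface_ref", NS)]
         for l in root.findall("r:link", NS)}

print(nodes)  # {'phys1': 'raw', 'virt1': 'emulab-openvz'}
print(links)  # {'phys1-virt1-0': ['phys1:if0', 'virt1:if0']}
```

This confirms the request asks for one raw PC and one OpenVZ container joined by a single dataplane link.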
 * Create the sliver:
{{{
jericho,[~],15:18(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec
INFO:omni:Loading config file /home/chaos/omni/omni_pgeni
INFO:omni:Using control framework pg
INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest expires within 1 day on 2012-05-22 16:02:36 UTC
INFO:omni:Creating sliver(s) from rspec file omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest
INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result:
INFO:omni:<?xml version="1.0" ?>
INFO:omni:<!-- Reserved resources for:
        Slice: ecgtest
        At AM:
        URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am
 -->
INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd">

    <node client_id="phys1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc4" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="true" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+785">
        <sliver_type name="raw-pc"/>
        <interface client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc4:eth1" mac_address="e83935b1ec9e" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+788">      <ip address="10.10.1.1" type="ipv4"/>    </interface>
        <rs:vnode name="pc4" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/>    <host name="phys1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"/>    <services>      <login authentication="ssh-keys" hostname="pc4.utah.geniracks.net" port="22" username="chaos"/>    </services>  </node>
    <node client_id="virt1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc3" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+786">
        <sliver_type name="emulab-openvz"/>
        <interface client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:eth1" mac_address="00000a0a0102" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+789">      <ip address="10.10.1.2" type="ipv4"/>    </interface>
        <rs:vnode name="pcvm3-1" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/>    <host name="virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"/>    <services>      <login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30010" username="chaos"/>    </services>  </node>

    <link client_id="phys1-virt1-0" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+787" vlantag="259">
        <interface_ref client_id="phys1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc4:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+788"/>
        <interface_ref client_id="virt1:if0" component_id="urn:publicid:IDN+utah.geniracks.net+interface+pc3:eth1" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+789"/>
        <property dest_id="virt1:if0" source_id="phys1:if0"/>
        <property dest_id="phys1:if0" source_id="virt1:if0"/>
    </link>
</rspec>
INFO:omni: ------------------------------------------------------------
INFO:omni: Completed createsliver:

  Options as run:
                aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am
                configfile: /home/chaos/omni/omni_pgeni
                framework: pg
                native: True

  Args: createsliver ecgtest omni/rspecs/request/rack-testing/acceptance-tests/IG-MON-nodes-B.rspec

  Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest expires within 1 day(s) on 2012-05-22 16:02:36 UTC
Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am.
INFO:omni: ============================================================
}}}
 * My physical node is pc4.
 * My virtual node is on pc3, port 30010.
 * Huh, and now sliverstatus ''does'' contain the MACs for both hosts, though the virtual one is still wrong.  Updated [instaticket:26], which Jon is looking at.  That is not a blocker for this test.
 * Login to pc4, whose eth1 is 10.10.1.1.
 * Make a bigger dataplane file by catting the other one a few times, then start copying it around again:
{{{
bash
touch /tmp/locale-archive
for i in {1..40}; do cat /usr/lib/locale/locale-archive >> /tmp/locale-archive; done

[chaos@phys1 ~]$ ls -l /tmp/locale-archive
-rw-r--r-- 1 chaos pgeni-gpolab-bbn 4199896960 May 21 13:32 /tmp/locale-archive

[chaos@phys1 ~]$ while [ 1 ]; do scp /tmp/locale-archive 10.10.1.2:/tmp/; done
locale-archive                                100% 4005MB  51.4MB/s   01:18
...
}}}
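The numbers in the transcript are self-consistent; a quick arithmetic check (a sketch, using only the sizes reported above, and assuming scp's "MB" means MiB):

```python
# ls -l reported 4199896960 bytes; scp reported 4005 MB at 51.4 MB/s in 01:18.
size_bytes = 4_199_896_960
size_mib = size_bytes / 2**20      # scp's progress meter counts in MiB
print(round(size_mib))             # 4005, matching scp's "4005MB"

secs = 1 * 60 + 18                 # the 01:18 transfer time
rate = round(size_mib / secs, 1)
print(rate)                        # 51.4, matching the reported "51.4MB/s"
```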
 * The first instance of the file copy takes somewhat over a minute, at about 51 MB/s.
 * Leave this running.

== Step 4: view running VMs ==

 * These may be useful for running and terminated experiments ''if'' the context IDs are unique.

==== Side test: are experiment context IDs unique over time on an OpenVZ server? ====

 * rspec to create a single OpenVZ container:
{{{
jericho,[~],07:12(0)$ cat IG-MON-nodes-E.rspec
<?xml version="1.0" encoding="UTF-8"?>
<!-- This rspec will reserve one openvz node.  It should work on any
     Emulab which has nodes available and supports OpenVZ.  -->
<rspec xmlns="http://www.geni.net/resources/rspec/3"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.geni.net/resources/rspec/3
                           http://www.geni.net/resources/rspec/3/request.xsd"
       type="request">

  <node client_id="virt1" exclusive="false">
    <sliver_type name="emulab-openvz" />
  </node>
</rspec>
}}}
 * use existing slice `ecgtest2` to create a sliver:
{{{
jericho,[~],07:13(0)$ omni -a http://www.utah.geniracks.net/protogeni/xmlrpc/am createsliver ecgtest2 IG-MON-nodes-E.rspec
INFO:omni:Loading config file /home/chaos/omni/omni_pgeni
INFO:omni:Using control framework pg
INFO:omni:Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires within 1 day on 2012-05-19 10:30:51 UTC
INFO:omni:Creating sliver(s) from rspec file IG-MON-nodes-E.rspec for slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2
INFO:omni:Asked http://www.utah.geniracks.net/protogeni/xmlrpc/am to reserve resources. Result:
INFO:omni:<?xml version="1.0" ?>
INFO:omni:<!-- Reserved resources for:
        Slice: ecgtest2
        At AM:
        URL: http://www.utah.geniracks.net/protogeni/xmlrpc/am
 -->
INFO:omni:<rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/manifest.xsd">

    <node client_id="virt1" component_id="urn:publicid:IDN+utah.geniracks.net+node+pc5" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+utah.geniracks.net+sliver+384">
        <sliver_type name="emulab-openvz"/>
        <rs:vnode name="pcvm5-2" xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"/>    <host name="virt1.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net"/>    <services>      <login authentication="ssh-keys" hostname="pc5.utah.geniracks.net" port="30266" username="chaos"/>    </services>  </node>
</rspec>
INFO:omni: ------------------------------------------------------------
INFO:omni: Completed createsliver:

  Options as run:
                aggregate: http://www.utah.geniracks.net/protogeni/xmlrpc/am
                configfile: /home/chaos/omni/omni_pgeni
                framework: pg
                native: True

  Args: createsliver ecgtest2 IG-MON-nodes-E.rspec

  Result Summary: Slice urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2 expires within 1 day(s) on 2012-05-19 10:30:51 UTC
Reserved resources on http://www.utah.geniracks.net/protogeni/xmlrpc/am.
INFO:omni: ============================================================
}}}

Summary: the new sliver got container `pcvm5-2`, which means that VM IDs are reused.

At this point, I was going to gather more information about logs, when the Utah rack became totally unavailable: I was no longer able to use my shell sessions to any machines in the rack, and got ping timeouts to boss.

After about 8 minutes, things became available again.  I went looking for logs of my dataplane file copy activity to see whether the dataplane had been interrupted, at which point I found out that sshd on the dataplane does not appear to be logged anywhere, either in `/var/log` within the container or on pc5 itself.  That's not a rack requirement, but it seems non-ideal for experimenters.  I opened [instaticket:27] to report it.

==== Side test: repeat of previous testing while 50 VMs are running on pc5 ====

I briefly revisited this test at 13:00, because Luisa had started 45 experiments on pc5, consuming all available resources, so I wanted to look at pc5 again briefly.
 * For the record, the machine's load average is high, but the machine is responsive, and the CPU is still somewhat idle.  Here's the header of top output:
{{{
top - 11:06:22 up 1 day, 20:06,  1 user,  load average: 10.13, 9.29, 7.58
Tasks: 1283 total,   2 running, 1281 sleeping,   0 stopped,   0 zombie
Cpu(s):  7.7%us,  1.2%sy,  0.0%ni, 34.2%id, 56.3%wa,  0.0%hi,  0.6%si,  0.0%st
Mem:  49311612k total,  7634748k used, 41676864k free,   255532k buffers
Swap:  1050168k total,        0k used,  1050168k free,  4821552k cached
}}}
 * Here is the output of `vzlist`:
{{{
vhost1,[/var/emulab],11:07(0)$ sudo vzlist
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
         1         19 running   -               virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net
         2         11 running   -               virt1.ecgtest2.pgeni-gpolab-bbn-com.utah.geniracks.net
         3         11 running   -               host2.what-image.pgeni-gpolab-bbn-com.utah.geniracks.net
         4         11 running   -               host1.what-image2.pgeni-gpolab-bbn-com.utah.geniracks.net
         5         11 running   -               host2.what-image3.pgeni-gpolab-bbn-com.utah.geniracks.net
         6         15 running   -               host1.singlevm-1.pgeni-gpolab-bbn-com.utah.geniracks.net
         7         15 running   -               host1.singlevm-2.pgeni-gpolab-bbn-com.utah.geniracks.net
         8         15 running   -               host1.singlevm-3.pgeni-gpolab-bbn-com.utah.geniracks.net
         9         15 running   -               host1.singlevm-4.pgeni-gpolab-bbn-com.utah.geniracks.net
        10         15 running   -               host1.singlevm-5.pgeni-gpolab-bbn-com.utah.geniracks.net
        11         15 running   -               host1.singlevm-6.pgeni-gpolab-bbn-com.utah.geniracks.net
        12         15 running   -               host1.singlevm-7.pgeni-gpolab-bbn-com.utah.geniracks.net
        13         15 running   -               host1.singlevm-8.pgeni-gpolab-bbn-com.utah.geniracks.net
        14         15 running   -               host1.singlevm-9.pgeni-gpolab-bbn-com.utah.geniracks.net
        15         15 running   -               host1.singlevm-10.pgeni-gpolab-bbn-com.utah.geniracks.net
        16         15 running   -               host1.singlevm-11.pgeni-gpolab-bbn-com.utah.geniracks.net
        17         15 running   -               host1.singlevm-12.pgeni-gpolab-bbn-com.utah.geniracks.net
        18         15 running   -               host1.singlevm-13.pgeni-gpolab-bbn-com.utah.geniracks.net
        19         11 running   -               host1.singlevm-14.pgeni-gpolab-bbn-com.utah.geniracks.net
        20         15 running   -               host1.singlevm-15.pgeni-gpolab-bbn-com.utah.geniracks.net
        21         15 running   -               host1.singlevm-16.pgeni-gpolab-bbn-com.utah.geniracks.net
        22         15 running   -               host1.singlevm-17.pgeni-gpolab-bbn-com.utah.geniracks.net
        23         15 running   -               host1.singlevm-18.pgeni-gpolab-bbn-com.utah.geniracks.net
        24         15 running   -               host1.singlevm-19.pgeni-gpolab-bbn-com.utah.geniracks.net
        25         15 running   -               host1.singlevm-20.pgeni-gpolab-bbn-com.utah.geniracks.net
        26         16 running   -               host1.singlevm-21.pgeni-gpolab-bbn-com.utah.geniracks.net
        27         16 running   -               host1.singlevm-22.pgeni-gpolab-bbn-com.utah.geniracks.net
        28         15 running   -               host1.singlevm-23.pgeni-gpolab-bbn-com.utah.geniracks.net
        29         15 running   -               host1.singlevm-24.pgeni-gpolab-bbn-com.utah.geniracks.net
        30         15 running   -               host1.singlevm-25.pgeni-gpolab-bbn-com.utah.geniracks.net
        31         15 running   -               host1.singlevm-26.pgeni-gpolab-bbn-com.utah.geniracks.net
        32         15 running   -               host1.singlevm-27.pgeni-gpolab-bbn-com.utah.geniracks.net
        33         15 running   -               host1.singlevm-28.pgeni-gpolab-bbn-com.utah.geniracks.net
        34         15 running   -               host1.singlevm-29.pgeni-gpolab-bbn-com.utah.geniracks.net
        35         15 running   -               host1.singlevm-30.pgeni-gpolab-bbn-com.utah.geniracks.net
        36         15 running   -               host1.singlevm-31.pgeni-gpolab-bbn-com.utah.geniracks.net
        37         15 running   -               host1.singlevm-32.pgeni-gpolab-bbn-com.utah.geniracks.net
        38         15 running   -               host1.singlevm-33.pgeni-gpolab-bbn-com.utah.geniracks.net
        39         15 running   -               host1.singlevm-34.pgeni-gpolab-bbn-com.utah.geniracks.net
        40         15 running   -               host1.singlevm-35.pgeni-gpolab-bbn-com.utah.geniracks.net
        41         15 running   -               host1.singlevm-36.pgeni-gpolab-bbn-com.utah.geniracks.net
        42         15 running   -               host1.singlevm-37.pgeni-gpolab-bbn-com.utah.geniracks.net
        43         15 running   -               host1.singlevm-38.pgeni-gpolab-bbn-com.utah.geniracks.net
        44         15 running   -               host1.singlevm-39.pgeni-gpolab-bbn-com.utah.geniracks.net
        45         15 running   -               host1.singlevm-40.pgeni-gpolab-bbn-com.utah.geniracks.net
        46         15 running   -               host1.singlevm-41.pgeni-gpolab-bbn-com.utah.geniracks.net
        47         15 running   -               host1.singlevm-42.pgeni-gpolab-bbn-com.utah.geniracks.net
        48         15 running   -               host1.singlevm-43.pgeni-gpolab-bbn-com.utah.geniracks.net
        49         15 running   -               host1.singlevm-44.pgeni-gpolab-bbn-com.utah.geniracks.net
        50         11 running   -               host1.singlevm-45.pgeni-gpolab-bbn-com.utah.geniracks.net
}}}
 * Here is the iptables nat table, which is being used to forward sshd to each VM:
{{{
vhost1,[/var/emulab],11:08(0)$ sudo iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:30010 to:172.17.5.1:30010
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:30266 to:172.17.5.2:30266
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:30522 to:172.17.5.3:30522
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:30778 to:172.17.5.4:30778
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:31034 to:172.17.5.5:31034
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:31290 to:172.17.5.6:31290
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:31546 to:172.17.5.7:31546
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:31802 to:172.17.5.8:31802
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:32058 to:172.17.5.9:32058
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:32314 to:172.17.5.10:32314
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:32570 to:172.17.5.11:32570
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:32826 to:172.17.5.12:32826
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:33082 to:172.17.5.13:33082
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:33338 to:172.17.5.14:33338
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:33594 to:172.17.5.15:33594
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:33850 to:172.17.5.16:33850
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:34106 to:172.17.5.17:34106
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:34362 to:172.17.5.18:34362
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:34618 to:172.17.5.19:34618
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:34874 to:172.17.5.20:34874
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:35130 to:172.17.5.21:35130
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:35386 to:172.17.5.22:35386
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:35642 to:172.17.5.23:35642
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:35898 to:172.17.5.24:35898
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:36154 to:172.17.5.25:36154
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:36410 to:172.17.5.26:36410
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:36666 to:172.17.5.27:36666
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:36922 to:172.17.5.28:36922
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:37178 to:172.17.5.29:37178
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:37434 to:172.17.5.30:37434
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:37690 to:172.17.5.31:37690
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:37946 to:172.17.5.32:37946
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:38202 to:172.17.5.33:38202
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:38458 to:172.17.5.34:38458
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:38714 to:172.17.5.35:38714
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:38970 to:172.17.5.36:38970
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:39226 to:172.17.5.37:39226
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:39482 to:172.17.5.38:39482
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:39738 to:172.17.5.39:39738
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:39994 to:172.17.5.40:39994
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:40250 to:172.17.5.41:40250
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:40506 to:172.17.5.42:40506
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:40762 to:172.17.5.43:40762
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:41018 to:172.17.5.44:41018
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:41274 to:172.17.5.45:41274
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:41530 to:172.17.5.46:41530
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:41786 to:172.17.5.47:41786
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:42042 to:172.17.5.48:42042
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:42298 to:172.17.5.49:42298
DNAT       tcp  --  0.0.0.0/0            155.98.34.15        tcp dpt:42554 to:172.17.5.50:42554

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  172.16.0.0/12        155.98.34.0/24
ACCEPT     all  --  172.16.0.0/12        172.16.0.0/12
SNAT       all  --  172.16.0.0/12        0.0.0.0/0           to:155.98.34.15

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
}}}
 * As expected, `/var/emulab/{boot,logs,vms}` contain subdirectories or files related to each of the 50 running VMs.
 * As expected, a randomly-sampled experiment with dataplane interfaces does not list any MAC addresses in the admin UI [https://boss.utah.geniracks.net/showexp.php3?experiment=379#details].  This is consistent with instaticket:26.

=== Results of testing: 2012-05-21 ===

 * Per-host view of current state:
   * From [https://boss.utah.geniracks.net/nodecontrol_list.php3?showtype=dl360] in red dot mode, I can once again see that pc4 is allocated as phys1 to `pgeni-gpolab-bbn-com/ecgtest`.
   * I can see that pc1 and pc3 are configured as OpenVZ shared hosts, but I can't see what experiments they are running.
 * Per-experiment view of current state:
   * Browse to [https://boss.utah.geniracks.net/genislices.php] and find one slice running on the Component Manager:
{{{
ID   HRN                         Created             Expires
535  bbn-pgeni.ecgtest (ecgtest) 2012-05-21 13:19:28 2012-05-22 10:02:36
}}}
   * Click `(ecgtest)` to view the details of that experiment at [https://boss.utah.geniracks.net/showexp.php3?experiment=536#details].
   * This shows what nodes it's using, including that its VM has been put on pc3:
{{{
Physical Node Mapping:
ID              Type         OS              Physical
--------------- ------------ --------------- ------------
phys1           dl360        FEDORA15-STD    pc4
virt1           pcvm         OPENVZ-STD      pcvm3-1 (pc3)
}}}
   * Here are some other interesting things, all of which are similar to Friday's test:
{{{
IP Port allocation:
Low             High
--------------- ------------
30000           30255

SSHD Port allocation ('ssh -p portnum'):
ID              Port       SSH command
--------------- ---------- ----------------------

Physical Lan/Link Mapping:
ID              Member          IP              MAC                  NodeID
--------------- --------------- --------------- -------------------- ---------
phys1-virt1-0   phys1:0         10.10.1.1       e8:39:35:b1:ec:9e    pc4
                                                1/1 <-> 1/37         procurve2
phys1-virt1-0   virt1:0         10.10.1.2                            pcvm3-1
}}}
 * Now, use the OpenVZ host itself to view activity:
   * As an admin, login to pc3.utah.geniracks.net
   * Everything seems similar to when I looked Friday:
{{{
vhost2,[~],13:57(0)$ sudo vzlist -a
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
         1         19 running   -               virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net
}}}
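The DNAT rules in the iptables nat table shown above follow a regular pattern: container N (CTID N, internal address 172.17.5.N) has its sshd forwarded on public port 30010 + 256*(N-1). A sketch of that observed mapping (the rule text format here is an illustration modeled on the output above, not generated by iptables):

```python
# Pattern observed in the PREROUTING DNAT rules above: container N
# (CTID N, 172.17.5.N) is reachable at public port 30010 + 256*(N-1).
BASE_PORT, STEP, PUBLIC_IP = 30010, 256, "155.98.34.15"

def dnat_rule(n: int) -> str:
    """Render the expected DNAT rule for container N (illustrative format)."""
    port = BASE_PORT + STEP * (n - 1)
    return (f"DNAT tcp -- 0.0.0.0/0 {PUBLIC_IP} "
            f"tcp dpt:{port} to:172.17.5.{n}:{port}")

print(dnat_rule(1))   # matches the first rule:  dpt:30010 to:172.17.5.1:30010
print(dnat_rule(50))  # matches the last rule:   dpt:42554 to:172.17.5.50:42554
```

The 256-port stride also matches the "IP Port allocation: 30000-30255" block shown in the experiment details, i.e. each container gets a 256-port range.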

== Step 5: get information about terminated experiments ==

 * A site administrator can get information about MAC addresses and IP addresses used by recently-terminated experiments.

=== Results of testing: 2012-05-21 ===

 * In red dot mode, [https://boss.utah.geniracks.net/genihistory.php], I can view lots of previous slivers, of which `ecgtest3` and `ecgtest2` are among the most recent.
 * I can type:
{{{
urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2
}}}
 into the search box, and bring up all previous instances of slivers in that slice.
 * Note that this is an exact match, ''not'' a regexp:
{{{
urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest
}}}
 only pulls up `ecgtest` slivers, not `ecgtest2` or `ecgtest3`.  And just searching for `ecgtest` reports nothing.
 * As promised by the default text in the search box, searching for:
{{{
urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos
}}}
 does appear to get all of my slivers.
 * That UI shows that the following slivers were created in the past 24 hours:
{{{
ID  Slice HRN/URN                                         Creator HRN/URN                                    Created             Destroyed           Manifest
784 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest   urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos   2012-05-21 13:19:41                     manifest
778 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3  urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos   2012-05-21 12:51:18 2012-05-21 12:56:36 manifest
772 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2  urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos   2012-05-21 12:17:44 2012-05-21 12:40:30 manifest
760 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest   urn:publicid:IDN+pgeni.gpolab.bbn.com+user+chaos   2012-05-21 09:05:11 2012-05-21 09:27:04 manifest
718 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+20vm      urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers 2012-05-21 08:03:37 2012-05-21 10:34:19 manifest
686 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+15vm      urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers 2012-05-21 07:47:56 2012-05-21 10:52:30 manifest
654 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+15vm      urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers 2012-05-21 07:32:17 2012-05-21 07:39:03 manifest
622 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+2vmubuntu urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers 2012-05-21 07:24:50 2012-05-21 07:29:53 manifest
616 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+2vmubuntu urn:publicid:IDN+pgeni.gpolab.bbn.com+user+lnevers 2012-05-21 07:10:14 2012-05-21 07:23:27 manifest
}}}
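The Created/Destroyed timestamps in the listing above are enough to compute each sliver's lifetime; a small sketch using two rows from the table (the helper name is mine, not part of the genihistory UI):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def lifetime_minutes(created: str, destroyed: str) -> float:
    """Minutes between two 'YYYY-MM-DD HH:MM:SS' timestamps."""
    delta = datetime.strptime(destroyed, FMT) - datetime.strptime(created, FMT)
    return delta.total_seconds() / 60

# Sliver 778 (ecgtest3) and sliver 718 (20vm) from the listing above:
print(round(lifetime_minutes("2012-05-21 12:51:18", "2012-05-21 12:56:36"), 1))  # 5.3
print(round(lifetime_minutes("2012-05-21 08:03:37", "2012-05-21 10:34:19"), 1))  # 150.7
```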
 * That display shows which GENI user created each experiment.
 * The clickable manifests can be used to get the sliver-to-resource mappings.  Within each manifest, `<rs:vnode />` elements can be used to find the resources used by the experiment.  These look like:
{{{
<rs:vnode xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="pc4"><host name="phys1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"><services><login authentication="ssh-keys" hostname="pc4.utah.geniracks.net" port="22" username="chaos"></login></services></host></rs:vnode></sliver_type></node>
<rs:vnode xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1" name="pcvm3-1"><host name="virt1.ecgtest.pgeni-gpolab-bbn-com.utah.geniracks.net"><services><login authentication="ssh-keys" hostname="pc3.utah.geniracks.net" port="30010" username="chaos"></login></services></host></rs:vnode></sliver_type></node>
}}}
 * In addition, the manifests contain ''dataplane'' IP addresses and MAC addresses for each experiment (though these are wrong or missing for VMs, per [instaticket:26]).
 * Here is all the information I can get this way:
|| '''Emulab ID''' || '''Sliver URN''' || '''Physical nodes''' || '''OpenVZ containers''' || '''Dataplane IPs and MACs''' ||
|| 784 || urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest || pc4(phys1) || pc3:pcvm3-1(virt1) || 10.10.1.1(phys1:e83935b1ec9e) 10.10.1.2(virt1:00000a0a0102) ||
{{{
778 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest3
772 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest2
760 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+ecgtest
718 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+20vm
686 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+15vm
654 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+15vm
622 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+2vmubuntu
616 urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+2vmubuntu
}}}

 * Determine the mapping of experiments to OpenVZ or exclusive hosts for each of the terminated experiments.
 * Determine the control and dataplane MAC addresses assigned to each VM in each terminated experiment.
 * Determine any IP addresses assigned by InstaGENI to each VM in each terminated experiment.

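Pulling these mappings out of saved manifests can be scripted. The sketch below parses a trimmed node fragment modeled on the ecgtest manifest above, extracts the `rs:vnode` name and the login service, and converts the bare-hex MAC form the manifest uses (`e83935b1ec9e`) into the colon-separated form the admin UI shows (`e8:39:35:b1:ec:9e`). The fragment itself is an illustration, not a verbatim manifest.

```python
import xml.etree.ElementTree as ET

# Trimmed manifest fragment modeled on the ecgtest output above.
FRAGMENT = b"""<node xmlns="http://www.geni.net/resources/rspec/3"
      xmlns:rs="http://www.protogeni.net/resources/rspec/ext/emulab/1"
      client_id="phys1">
  <rs:vnode name="pc4"/>
  <services>
    <login authentication="ssh-keys" hostname="pc4.utah.geniracks.net"
           port="22" username="chaos"/>
  </services>
</node>"""

NS = {"r": "http://www.geni.net/resources/rspec/3",
      "rs": "http://www.protogeni.net/resources/rspec/ext/emulab/1"}

node = ET.fromstring(FRAGMENT)
vnode = node.find("rs:vnode", NS).get("name")          # physical resource name
login = node.find("r:services/r:login", NS)            # ssh login service
print(vnode, login.get("hostname"), login.get("port"))  # pc4 pc4.utah.geniracks.net 22

def format_mac(raw: str) -> str:
    """Turn the manifest's bare-hex MAC into the admin UI's colon form."""
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

print(format_mac("e83935b1ec9e"))  # e8:39:35:b1:ec:9e
```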
== Step 6: get !OpenFlow state information ==
