
EG-MON-4: Infrastructure Device Performance Test

This page captures the status of test case EG-MON-4, which verifies that the rack head node performs well enough to run all of its required services while OpenFlow and non-OpenFlow experiments are running. For overall status, see the ExoGENI Acceptance Test Status page.

Last update: 2013/01/30

Test Status

This section captures the status for each step in the acceptance test plan.

Step     State    Ticket   Notes
Step 1
Step 2
Step 3


State Legend           Description
Pass                   Test completed and met all criteria.
Pass: most criteria    Test completed and met most criteria; exceptions documented.
Fail                   Test completed and failed to meet criteria.
Complete               Test completed but will require re-execution due to expected changes.
Blocked                Blocked by ticketed issue(s).
In Progress            Currently under test.


Test Plan Steps

This test case sets up several experiments to generate resource usage for both compute and network resources in an ExoGENI rack. The rack used is the GPO rack, and the following experiments are set up before head node device performance is reviewed:

  • EG-MON-4-exp1: EG GPO non-OpenFlow experiment with 10 VM nodes, all nodes exchanging traffic.
  • EG-MON-4-exp2: EG GPO non-OpenFlow experiment with 1 VM and 1 bare metal node, both exchanging traffic.
  • EG-MON-4-exp3: EG GPO OpenFlow experiment with 2 nodes in the rack exchanging traffic with 2 GPO OpenFlow campus resources at the site.
  • EG-MON-4-exp4: EG GPO OpenFlow experiment with 2 nodes exchanging traffic with 2 remote EG RENCI OpenFlow nodes.

The setup of the experiments above is not captured in this test case, but the RSpecs are available [insert_link_here]. Traffic levels and types will also be captured when this test is run.

1. View OpenFlow control monitoring at GMOC and verify that no monitoring data is missing

Before starting any experiments that request compute or network resources, baseline performance information was collected for the GPO rack head node at https://bbn-hn.exogeni.net/rack_bbn/check_mk/. Here are the baseline performance measurements for the node:

Checked Round Trip Averages:

Checked CPU Utilization:

Checked CPU Load:

Checked Memory Used:
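Where the check_mk graphs are not reproduced inline, equivalent point-in-time numbers can be pulled directly from a head node shell. This is a minimal sketch using standard tools (the same ones used under Misc scenarios below), not the check_mk collection method itself:

   # load averages over 1, 5, and 15 minutes
   uptime
   # one-shot CPU utilization and memory summary in batch mode
   top -bn1 | head -5
   # memory used/free in megabytes
   free -m
   # aggregate CPU and per-device I/O statistics since boot
   iostat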

At the time of these measurements, three FOAM slivers and 57 FlowVisor rule entries were in place:

   [lnevers@bbn-hn ~]$ foamctl geni:list-slivers --passwd-file=/opt/foam/etc/foampasswd|grep sliver_urn
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs15:013f6aa7-e600-4be5-9e31-5c0436223dfd", 
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs16:8aad0aae-ae92-4a3c-bd5e-43f7456f628e", 
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+tuptymon:a512fb29-d04f-4223-9a34-ae38158be609", 
   [lnevers@bbn-hn ~]$ /opt/flowvisor/bin/fvctl --passwd-file=/opt/flowvisor/etc/flowvisor/fvpasswd listFlowSpace |grep rule |wc -l 
     57    

Also, according to list resources, there are 43 VMs available via the GPO SM:

      <node component_id="urn:publicid:IDN+exogeni.net:bbnvmsite+node+orca-vm-cloud" 
       component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" 
       component_name="orca-vm-cloud" exclusive="false">
            <hardware_type name="orca-vm-cloud">
                  <ns3:node_type type_slots="43"/>
            </hardware_type>

and 25 VMs and 1 bare metal node available via the ExoSM:

      <node component_id="urn:publicid:IDN+exogeni.net:bbnvmsite+node+orca-vm-cloud" 
      component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" 
      component_name="orca-vm-cloud" exclusive="false">
            <hardware_type name="orca-vm-cloud">
                  <ns3:node_type type_slots="25"/>
            </hardware_type>
            <available now="true"/>

      <node component_id="urn:publicid:IDN+exogeni.net:bbnvmsite+node+orca-xcat-cloud" component_manager_id="urn:publicid:IDN+exogeni.net:bbnvmsite+authority+am" component_name="orca-xcat-cloud" exclusive="false">
            <hardware_type name="orca-xcat-cloud">
                  <ns3:node_type type_slots="1"/>
            </hardware_type>
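For reference, a sketch of how an advertisement RSpec like the ones above can be fetched with the GENI omni client; the aggregate URL shown is an assumption and should be replaced with the actual GPO SM or ExoSM endpoint, and omni must already be configured with valid credentials:

   # fetch the advertisement RSpec from an aggregate manager;
   # the URL is an assumption, and -o writes the RSpec to a file
   omni.py -a https://bbn-hn.exogeni.net:11443/orca/xmlrpc listresources -o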

Set up the first experiment, EG-MON-4-exp1, at the GPO SM. The experiment included 10 VM nodes exchanging traffic, without OpenFlow. Verified all nodes in the sliver were ready, then started a ping on each of the ten nodes to another node in the sliver, using 64-byte packets at 3 packets/second (a minimal sketch of the per-node command is shown below). After 20 minutes, checked the following statistics:
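A sketch of the per-node traffic generation just described; the peer hostname is hypothetical, since node names come from the RSpecs, which are not captured here:

   # run on each of the ten VMs against another node in the sliver;
   # ping's default 56-byte payload yields 64-byte ICMP packets,
   # and -i 0.33 approximates 3 packets per second
   ping -i 0.33 vm-peer   # "vm-peer" is a hypothetical node name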

Checked Round Trip Averages:

Checked CPU Utilization:

Checked CPU Load:

Checked Memory:

Left the previous experiment EG-MON-4-exp1 running and set up the second experiment, EG-MON-4-exp2, at the ExoSM for the GPO site. The experiment included 1 bare metal node and 1 VM, without OpenFlow. Verified all nodes in the sliver were ready, then started continuous iperf traffic between the two nodes (a sketch is shown below). Traffic ran for 20 minutes, then checked the following statistics:
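A minimal sketch of the iperf traffic just described, using classic iperf; both hostnames are hypothetical:

   # on the bare metal node: start an iperf server
   iperf -s
   # on the VM: send TCP traffic to the server for 20 minutes (1200 s);
   # "bare-metal-node" is a hypothetical hostname
   iperf -c bare-metal-node -t 1200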

Checked Round Trip Averages:

Checked CPU Utilization:

Checked CPU Load:

Checked Memory:

2. View VLAN 1750 data plane monitoring

View the VLAN 1750 data plane monitoring, which pings the rack's interface on VLAN 1750, and verify that packets are not being dropped. A sketch of an equivalent manual check follows.
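This sketch assumes a monitoring host with an interface on VLAN 1750; the target address is a placeholder, not the rack's actual VLAN 1750 address:

   # send 100 pings to the rack's VLAN 1750 interface and show the
   # loss summary; 10.42.11.1 is a placeholder address
   ping -c 100 10.42.11.1 | tail -2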

3. Verify that the CPU idle percentage on the head node is nonzero
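A quick way to check this from the head node shell; the %id field of the Cpu(s) line should be greater than zero:

   # one-shot batch-mode sample; the "%id" column is idle CPU
   top -bn1 | grep 'Cpu(s)'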

Misc scenarios

The following statistics were collected while NUM_of_EXP(*) experiments were in place:

(*) NUM_of_EXP is the experiment count taken from GMOC Monitoring.

[lnevers@bbn-hn ~]$ top

top - 16:38:32 up 54 days, 23:58,  1 user,  load average: 0.25, 0.26, 0.27
Tasks: 670 total,   1 running, 667 sleeping,   0 stopped,   2 zombie
Cpu(s):  1.0%us,  0.2%sy,  0.0%ni, 98.7%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  12152028k total, 10045784k used,  2106244k free,    19064k buffers
Swap:  8388600k total,   509084k used,  7879516k free,   155448k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND               
 8212 nova      20   0  414m  77m 2872 S  6.6  0.7   1208:26 nova-api               
 8210 nova      20   0  414m  78m 2872 S  4.3  0.7   1199:03 nova-api               
 4614 rabbitmq  20   0 1662m 166m 1916 S  4.0  1.4   2601:37 beam.smp               
 4433 mysql     20   0 4545m 271m 3896 S  1.3  2.3   1288:09 mysqld                 
 8064 nova      20   0  389m  83m 3860 S  1.3  0.7   1356:08 nova-network           
  997 lnevers   20   0 17524 1744  968 R  0.7  0.0   0:00.29 top                    
 3628 ldap      20   0 1904m 592m 5696 S  0.7  5.0   1623:22 slapd                  
 7709 nova      20   0  352m  46m 3832 S  0.7  0.4 174:07.01 nova-consoleaut        
 3596 root      39  19     0    0    0 S  0.3  0.0 544:05.41 kipmi0                 
19894 geni-orc  20   0 4349m 715m 5028 S  0.3  6.0   5:59.35 java                   
19995 geni-orc  20   0 4349m 745m 5040 S  0.3  6.3   5:58.09 java                   
28520 geni-orc  20   0 4349m 737m 5160 S  0.3  6.2   3:37.46 java                   
31265 geni-orc  20   0 9556m 294m 4728 S  0.3  2.5 119:50.72 java                   
    1 root      20   0 21448 1204  980 S  0.0  0.0   0:35.95 init                   
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.07 kthreadd               
    3 root      RT   0     0    0    0 S  0.0  0.0   0:03.42 migration/0            
    4 root      20   0     0    0    0 S  0.0  0.0   4:06.67 ksoftirqd/0            
    5 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/0            
    7 root      RT   0     0    0    0 S  0.0  0.0   0:02.34 migration/1            
    8 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/1            
    9 root      20   0     0    0    0 S  0.0  0.0   3:28.83 ksoftirqd/1            
   11 root      RT   0     0    0    0 S  0.0  0.0   0:01.91 migration/2            
   12 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/2            
   13 root      20   0     0    0    0 S  0.0  0.0   3:08.81 ksoftirqd/2            
   15 root      RT   0     0    0    0 S  0.0  0.0   0:01.65 migration/3            
   16 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/3            
   17 root      20   0     0    0    0 S  0.0  0.0   2:24.44 ksoftirqd/3            
   19 root      RT   0     0    0    0 S  0.0  0.0   0:02.76 migration/4            
   20 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/4            
   21 root      20   0     0    0    0 S  0.0  0.0   1:55.21 ksoftirqd/4            
   23 root      RT   0     0    0    0 S  0.0  0.0   0:01.25 migration/5            
   24 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/5            
   25 root      20   0     0    0    0 S  0.0  0.0   1:51.98 ksoftirqd/5            
   27 root      RT   0     0    0    0 S  0.0  0.0   0:00.93 migration/6            
   28 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/6            
   29 root      20   0     0    0    0 S  0.0  0.0   1:29.01 ksoftirqd/6            
   31 root      RT   0     0    0    0 S  0.0  0.0   0:00.91 migration/7            
   32 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/7            
   33 root      20   0     0    0    0 S  0.0  0.0   1:28.20 ksoftirqd/7            
   35 root      RT   0     0    0    0 S  0.0  0.0   0:02.08 migration/8            
   36 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/8            
   37 root      20   0     0    0    0 S  0.0  0.0   3:30.95 ksoftirqd/8            
   39 root      RT   0     0    0    0 S  0.0  0.0   0:43.11 migration/9            
   40 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/9            
   41 root      20   0     0    0    0 S  0.0  0.0   4:00.11 ksoftirqd/9            
   43 root      RT   0     0    0    0 S  0.0  0.0   0:10.64 migration/10           
   44 root      RT   0     0    0    0 S  0.0  0.0   0:00.01 migration/10           
   45 root      20   0     0    0    0 S  0.0  0.0   2:43.38 ksoftirqd/10           
   47 root      RT   0     0    0    0 S  0.0  0.0   0:03.92 migration/11           
[lnevers@bbn-hn ~]$ uptime 
 16:38:50 up 54 days, 23:58,  1 user,  load average: 0.27, 0.26, 0.27
[lnevers@bbn-hn ~]$ iostat
Linux 2.6.32-279.11.1.el6.x86_64 (bbn-hn.exogeni.net) 	01/30/2013 	_x86_64_	(16 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.58    0.01    0.41    0.36    0.00   97.64

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              26.08        85.61       362.92  406793702 1724581812
dm-0              3.90        16.30        25.45   77450962  120936712
dm-1              0.08         0.22         0.40    1046016    1895768
dm-2              0.18         0.06         1.40     294738    6675168
dm-3             13.19        18.00        90.04   85552522  427866832
dm-4             16.97        48.44       126.64  230192994  601772624
dm-5             15.01         2.57       118.99   12227530  565434408
dm-6              0.00         0.00         0.00        824          0
dm-7              0.00         0.00         0.00        824          0
sdb               1.96       394.07        87.75 1872576050  416967720
sdc               0.00         0.00         0.00       2104          0
dm-8              1.96       394.07        87.75 1872575898  416967720
dm-9              6.87        49.51        51.90  235253650  246633856
dm-10             5.94       343.74        35.84 1633410714  170318240
dm-11             0.01         0.82         0.00    3911074      15624
