wiki:GENIRacksHome/InstageniRacks/AcceptanceTestStatus/IG-MON-4

Version 23 (modified by lnevers@bbn.com)

IG-MON-4: Infrastructure Device Performance Test

This page captures status for the test case IG-MON-4, which verifies that the rack head node performs well enough to run all the services it needs to run while OpenFlow and non-OpenFlow experiments are running. For overall status see the InstaGENI Acceptance Test Status page.

Last update: 2013/01/29

Test Status

This section captures the status for each step in the acceptance test plan.

Step    State   Ticket  Notes
Step 1
Step 2
Step 3


State                Description
Pass                 Test completed and met all criteria.
Pass: most criteria  Test completed and met most criteria. Exceptions documented.
Fail                 Test completed and failed to meet criteria.
Complete             Test completed but will require re-execution due to expected changes.
Blocked              Blocked by ticketed issue(s).
In Progress          Currently under test.


Test Plan Steps

This test case sets up several experiments to generate resource usage on both the compute and network resources of an InstaGENI rack. The rack used is the GPO rack, and the following experiments are set up before head node device performance is reviewed:

  • IG-MON-4-exp1: IG GPO non-OpenFlow experiment with 10 VM nodes, all nodes exchanging traffic.
  • IG-MON-4-exp2: IG GPO non-OpenFlow experiment with 1 VM and one dedicated raw-pc, both exchanging traffic.
  • IG-MON-4-exp3: IG GPO OpenFlow experiment with 2 nodes in rack exchanging traffic with 2 site GPO OpenFlow campus resources.
  • IG-MON-4-exp4: IG GPO OpenFlow experiment with 4 nodes exchanging traffic within the GPO rack.

The setup of the experiments above is not captured in this test case, but the RSpecs are available [insert_link_here]. Traffic levels and types are also captured when this test is run.

In addition to the 4 experiments listed above, the following experiments were running on the GPO IG rack: [Insert_capture for https://boss.instageni.gpolab.bbn.com/showexp_list.php3]

While the 4 experiments involving OpenFlow and compute slivers are running, execute the steps below.

1. View OpenFlow control monitoring at GMOC

Before starting any experiments that request compute or network resources, collected baseline performance information for the boss, FlowVisor, and FOAM nodes in the GPO rack.
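
The baseline numbers below come from top and iostat on each node. A minimal sketch of the load-average capture, run here on the local host; in the actual test the same command would be run (via ssh) on each of boss, ops, foam, and flowvisor:

```shell
# Capture the 1/5/15-minute load averages used as the baseline metric.
# The sed handles both the Linux ("load average:") and FreeBSD
# ("load averages:") spellings seen in the captures below.
loads=$(uptime | sed -e 's/.*load average[s]*: *//' -e 's/,//g')
set -- $loads
echo "1min=$1 5min=$2 15min=$3"
```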

Checked existing experiments within the rack, which also shows node allocation by node type:

Baseline statistics for Boss Node:

Top statistics:

last pid:  9865;  load averages:  0.07,  0.03,  0.00              up 51+16:42:42  09:45:38
145 processes: 3 running, 141 sleeping, 1 zombie
CPU:  0.8% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.2% idle
Mem: 488M Active, 1189M Inact, 189M Wired, 31M Cache, 94M Buf, 103M Free
Swap: 2047M Total, 2628K Used, 2045M Free

The above shows the boss node CPU Load averages:

  • 1 minute: 0.07
  • 5 minutes: 0.03
  • 15 minutes: 0.00

I/O Statistics for boss node:

       tty             da4              da1              da2             cpu
 tin  tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0     3 25.98   0  0.01  20.73   0  0.00   7.90   1  0.01   1  0  1  0 98

Baseline statistics for FOAM Node:

Top statistics:

top - 09:45:35 up 49 days, 21:49,  2 users,  load average: 0.00, 0.01, 0.05
Tasks:  67 total,   1 running,  64 sleeping,   0 stopped,   2 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    756268k total,   607344k used,   148924k free,   149308k buffers
Swap:   794620k total,     4444k used,   790176k free,   334760k cached

The above shows the FOAM node CPU Load averages:

  • 1 minute: 0.00
  • 5 minutes: 0.01
  • 15 minutes: 0.05

I/O Statistics for FOAM node:

Linux 3.2.0-34-generic (foam) 	02/01/2013 	_x86_64_	(1 CPU)
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.64    0.00    0.35    0.00    0.00   99.00
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              1.80         0.09        13.93     371249   60046465
dm-0              2.52         0.09        13.93     366921   60041316
dm-1              0.00         0.00         0.00       2324       5104

Checked for existing FOAM slivers (also shown in the EID column of the "Active Nodes" listing above):

lnevers@foam:~$ foamctl geni:list-slivers --passwd-file=/etc/foam.passwd |grep sliver_urn
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs15:8a0abd6f-0f5a-469f-91d2-c7f990b8494e", 
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs16:a92990b6-1ede-4dd7-b6f6-7b4a4bd36fd7", 
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+tuptymon:b7850c93-110f-4e63-a121-26f3449dac44", 
   "sliver_urn": "urn:publicid:IDN+pgeni.gpolab.bbn.com+slice+jbs17:fd82bb82-eef3-407d-b092-c1393773791c", 

Baseline statistics for FlowVisor Node:

Top statistics for FlowVisor node:

top - 09:45:37 up 49 days, 16:53,  2 users,  load average: 0.00, 0.01, 0.05
Tasks:  68 total,   1 running,  67 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4031252k total,   967100k used,  3064152k free,    92444k buffers
Swap:  4202492k total,        0k used,  4202492k free,   366332k cached

The above shows the FlowVisor node CPU Load averages:

  • 1 minute: 0.00
  • 5 minutes: 0.01
  • 15 minutes: 0.05

I/O Statistics for FlowVisor node:

Linux 3.2.0-34-generic (flowvisor) 	02/01/2013 	_x86_64_	(2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.09    0.00    0.07    0.00    0.00   99.84
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
xvda              0.31         0.06         2.01     268011    8634017
dm-0              0.44         0.06         2.01     264869    8633968
dm-1              0.00         0.00         0.00       1148          0

Checked for existing FlowSpace rules:

lnevers@flowvisor:~$ fvctl --passwd-file=/etc/flowvisor.passwd listFlowSpace
Got reply:
rule 0: FlowEntry[dpid=[06:d6:84:34:97:c6:c9:00],ruleMatch=[OFMatch[dl_type=0x800,nw_dst=10.42.15.0/24,nw_src=10.42.15.0/24]],actionsList=[Slice:8a0abd6f-0f5a-469f-91d2-c7f990b8494e=4],id=[13],priority=[2000],]
rule 1: FlowEntry[dpid=[06:d6:84:34:97:c6:c9:00],ruleMatch=[OFMatch[dl_type=0x806,nw_dst=10.42.15.0/24,nw_src=10.42.15.0/24]],actionsList=[Slice:8a0abd6f-0f5a-469f-91d2-c7f990b8494e=4],id=[14],priority=[2000],]
rule 2: FlowEntry[dpid=[06:d6:84:34:97:c6:c9:00],ruleMatch=[OFMatch[dl_type=0x800,nw_dst=10.42.16.0/24,nw_src=10.42.16.0/24]],actionsList=[Slice:a92990b6-1ede-4dd7-b6f6-7b4a4bd36fd7=4],id=[15],priority=[2000],]
rule 3: FlowEntry[dpid=[06:d6:84:34:97:c6:c9:00],ruleMatch=[OFMatch[dl_type=0x806,nw_dst=10.42.16.0/24,nw_src=10.42.16.0/24]],actionsList=[Slice:a92990b6-1ede-4dd7-b6f6-7b4a4bd36fd7=4],id=[16],priority=[2000],]
rule 4: FlowEntry[dpid=[06:d6:84:34:97:c6:c9:00],ruleMatch=[OFMatch[dl_type=0x800,nw_dst=10.50.0.0/16,nw_src=10.50.0.0/16]],actionsList=[Slice:b7850c93-110f-4e63-a121-26f3449dac44=4],id=[21],priority=[2000],]
rule 5: FlowEntry[dpid=[06:d6:84:34:97:c6:c9:00],ruleMatch=[OFMatch[dl_type=0x806,nw_dst=10.50.0.0/16,nw_src=10.50.0.0/16]],actionsList=[Slice:b7850c93-110f-4e63-a121-26f3449dac44=4],id=[22],priority=[2000],]
rule 6: FlowEntry[dpid=[06:d6:84:34:97:c6:c9:00],ruleMatch=[OFMatch[dl_type=0x800,nw_dst=10.42.17.0/24,nw_src=10.42.17.0/24]],actionsList=[Slice:fd82bb82-eef3-407d-b092-c1393773791c=4],id=[50],priority=[2000],]
rule 7: FlowEntry[dpid=[06:d6:84:34:97:c6:c9:00],ruleMatch=[OFMatch[dl_type=0x806,nw_dst=10.42.17.0/24,nw_src=10.42.17.0/24]],actionsList=[Slice:fd82bb82-eef3-407d-b092-c1393773791c=4],id=[51],priority=[2000],]
lnevers@flowvisor:~$ 
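
Each pair of rules maps one slice's subnet (IP and ARP) to its controller, so the dump can be reduced to the set of slice UUIDs with a one-liner. A sketch, with one canned rule fragment standing in for the `fvctl --passwd-file=/etc/flowvisor.passwd listFlowSpace` output, which only runs on the FlowVisor VM:

```shell
# Extract the slice UUID from a FlowSpace rule line. The printf line is
# canned sample data matching the rule format shown above.
printf 'actionsList=[Slice:8a0abd6f-0f5a-469f-91d2-c7f990b8494e=4]\n' \
  | sed -n 's/.*Slice:\([0-9a-f-]*\)=.*/\1/p'
# prints: 8a0abd6f-0f5a-469f-91d2-c7f990b8494e
```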

Baseline statistics for OPS/users Node:

Top statistics:

last pid: 66647;  load averages:  0.63,  0.16,  0.05   up 51+16:42:51  09:45:37
99 processes:  1 running, 98 sleeping
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 158M Active, 851M Inact, 163M Wired, 7660K Cache, 95M Buf, 820M Free
Swap: 2047M Total, 852K Used, 2046M Free

The above shows the OPS node CPU Load averages:

  • 1 minute: 0.63
  • 5 minutes: 0.16
  • 15 minutes: 0.05

I/O Statistics for OPS node:

       tty             da3              da0              da4             cpu
 tin  tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0     1 20.01   0  0.00  11.03   0  0.01  11.19   0  0.00   0  0  0  0 100

Set up the first experiment, IG-MON-4-exp1, on the GPO InstaGENI rack. The experiment included 10 VM nodes exchanging traffic, without OpenFlow. Verified that all nodes in the sliver were ready, and checked statistics for the boss and ops nodes:

Boss after IG-MON-4-exp1 sliver creation:

load averages:  0.05,  0.11,  0.08              
145 processes: 1 running, 143 sleeping, 1 zombie
CPU:  0.0% user,  0.0% nice,  0.8% system,  0.0% interrupt, 99.2% idle
Mem: 432M Active, 1171M Inact, 188M Wired, 27M Cache, 93M Buf, 183M Free
Swap: 2047M Total, 2628K Used, 2045M Free

Ops after IG-MON-4-exp1 sliver creation:

load averages:  0.00,  0.02,  0.00  
101 processes: 2 running, 99 sleeping
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 160M Active, 852M Inact, 164M Wired, 7660K Cache, 95M Buf, 817M Free
Swap: 2047M Total, 852K Used, 2046M Free

Started a ping on each of the ten nodes in IG-MON-4-exp1 to another node in the sliver, using 64-byte packets at 3 packets per second. All allocated nodes were on pc2. After 10 minutes of traffic, checked statistics:

IG-MON-4-exp1 statistics for PC2 Node:

top - 10:52:49 up 50 days, 15:24,  1 user,  load average: 0.00, 0.45, 1.87
Tasks: 650 total,   1 running, 649 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  49311300k total, 15063252k used, 34248048k free,   904700k buffers
Swap:  1050168k total,        0k used,  1050168k free, 12100764k cached

The above shows CPU Load averages:

  • 1 minute: 0.00
  • 5 minutes: 0.45
  • 15 minutes: 1.87

IG-MON-4-exp1 statistics for Boss Node:

load averages:  0.01,  0.05,  0.03              
146 processes: 1 running, 144 sleeping, 1 zombie
CPU:  0.8% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.2% idle
Mem: 396M Active, 1211M Inact, 189M Wired, 27M Cache, 94M Buf, 177M Free
Swap: 2047M Total, 2628K Used, 2045M Free

The above shows CPU Load averages:

  • 1 minute: 0.01
  • 5 minutes: 0.05
  • 15 minutes: 0.03

IG-MON-4-exp1 statistics for FOAM Node:

load average: 0.08, 0.06, 0.05
Tasks:  67 total,   1 running,  64 sleeping,   0 stopped,   2 zombie
Cpu(s):  0.3%us,  0.3%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    756268k total,   608972k used,   147296k free,   149320k buffers
Swap:   794620k total,     4444k used,   790176k free,   336216k cached

The above shows CPU Load averages:

  • 1 minute: 0.08
  • 5 minutes: 0.06
  • 15 minutes: 0.05

IG-MON-4-exp1 statistics for FlowVisor Node:

load average: 0.00, 0.01, 0.05
Tasks:  68 total,   1 running,  67 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4031252k total,   966976k used,  3064276k free,    92444k buffers
Swap:  4202492k total,        0k used,  4202492k free,   366360k cached

The above shows CPU Load averages:

  • 1 minute: 0.00
  • 5 minutes: 0.01
  • 15 minutes: 0.05

IG-MON-4-exp1 statistics for OPS Node:

load averages:  0.05,  0.04,  0.00   
101 processes: 1 running, 100 sleeping
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 160M Active, 852M Inact, 164M Wired, 7660K Cache, 96M Buf, 817M Free
Swap: 2047M Total, 852K Used, 2046M Free

The above shows CPU Load averages:

  • 1 minute: 0.05
  • 5 minutes: 0.04
  • 15 minutes: 0.00

Leave the first experiment, IG-MON-4-exp1, running and add experiment IG-MON-4-exp2. The experiment includes XX nodes running iperf traffic, without OpenFlow.

IG-MON-4-exp2 statistics for Boss Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp2 statistics for FOAM Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp2 statistics for FlowVisor Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp2 statistics for OPS Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

2. View VLAN 1750 data plane monitoring

Verify the VLAN 1750 data plane monitoring, which pings the rack's interface on VLAN 1750, and verify that packets are not being dropped.
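
A sketch of the drop check, assuming standard iputils ping summary output. The summary line here is canned; a live run would pipe the output of pinging the rack's VLAN 1750 interface (address omitted here) through the same filter:

```shell
# Parse the packet-loss percentage out of a ping summary line and flag
# any drops. The echo line is canned sample data standing in for a real
# `ping -c 10 <vlan-1750-address>` run.
loss=$(echo '10 packets transmitted, 10 received, 0% packet loss, time 9012ms' \
  | grep -o '[0-9]*% packet loss' | cut -d% -f1)
[ "$loss" -eq 0 ] && echo "no drops" || echo "dropped ${loss}%"
# prints: no drops
```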

Leave experiment IG-MON-4-exp2 running and add experiment IG-MON-4-exp3. The experiment includes XX nodes running ?? traffic, with OpenFlow.

IG-MON-4-exp3 statistics for Boss Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp3 statistics for FOAM Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp3 statistics for FlowVisor Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp3 statistics for OPS Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

Leave experiment IG-MON-4-exp3 running and add experiment IG-MON-4-exp4. The experiment includes XX nodes running ?? traffic, with OpenFlow.

IG-MON-4-exp4 statistics for Boss Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp4 statistics for FOAM Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp4 statistics for FlowVisor Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

IG-MON-4-exp4 statistics for OPS Node:

The above shows CPU Load averages:

  • 1 minute: XX
  • 5 minutes: XX
  • 15 minutes: XX

3. Verify that the CPU idle percentages on the server host and the FOAM VM are both nonzero
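
A sketch of this check for the Linux-based FOAM VM, reading the cumulative idle tick counter from /proc/stat; on the FreeBSD boss/ops hosts the idle column of `top` or `iostat` serves the same purpose:

```shell
# Step 3 sketch: confirm the CPU has nonzero idle time (Linux only).
# The first line of /proc/stat holds cumulative tick counters in the
# order: user nice system idle ...
read -r cpu user nice system idle rest < /proc/stat
if [ "$idle" -gt 0 ]; then
  echo "idle ticks: $idle (nonzero: PASS)"
else
  echo "idle ticks: $idle (FAIL)"
fi
```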
