IG-EXP-2: InstaGENI Single Site Acceptance Test

This page captures status for the test case IG-EXP-2, which verifies basic operations of VMs and data flows within one rack. For overall status see the InstaGENI Acceptance Test Status page.

Last Update: 08/20/12

Test Status

This section captures the status for each step in the acceptance test plan.

Step State Date completed Ticket Comments
Step 1 Color(green,Pass)? instaticket:14 Minor, multiple default images listed
Step 2 Color(green,Pass)? Customized Ubuntu image available
Step 3 Color(green,Pass)?
Step 4 Color(#98FB98,Pass: most criteria)? OpenVZ=Fedora15, cannot request other VM OS, modified test to use Fedora for VM.
Step 5 Color(green,Pass)?
Step 6 Color(green,Pass)?
Step 7 Color(green,Pass)?
Step 8 Color(green,Pass)?
Step 9 Color(green,Pass)?
Step 10 Color(green,Pass)?
Step 11 Color(green,Pass)? 2 raw-pc with custom Ubuntu 12.04
Step 12 Color(green,Pass)?
Step 13 Color(green,Pass)?
Step 14 Color(green,Pass)?
Step 15 Color(green,Pass)?
Step 16 Color(green,Pass)?
Step 17 Color(green,Pass)?
Step 18 Color(green,Pass)?
Step 19 Color(green,Pass)?
Step 20 Color(green,Pass)?


State Legend Description
Color(green,Pass)? Test completed and met all criteria
Color(#98FB98,Pass: most criteria)? Test completed and met most criteria. Exceptions documented
Color(red,Fail)? Test completed and failed to meet criteria.
Color(yellow,Complete)? Test completed but will require re-execution due to expected changes
Color(orange,Blocked)? Blocked by ticketed issue(s).
Color(#63B8FF,In Progress)? Currently under test.


Test Plan Steps

The test case verified that InstaGENI makes available at least two Linux distributions and a FreeBSD image, as stated in the design document.

Two GPO-customized Ubuntu image snapshots were created, and were manually uploaded by the rack administrator using the available InstaGENI documentation. One Ubuntu image is for the VM and one is for the physical node in this test.

A nickname alias is used for the Utah InstaGENI aggregate manager in the omni_config:

ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am
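
For reference, the entry above corresponds to an aggregate nickname; a sketch of the surrounding configuration is shown below (the section name follows omni's nickname convention, and the empty field before the comma is where the aggregate URN would go):

```ini
[aggregate_nicknames]
# nickname = <aggregate URN, may be left empty>,<aggregate AM URL>
ig-utah=,http://utah.geniracks.net/protogeni/xmlrpc/am
```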

GPO ProtoGENI user credentials were used: lnevers@bbn.com for Experimenter1 and lnevers1@bbn.com for Experimenter2.

Step 1. As Experimenter1, request ListResources from Utah InstaGENI

As Experimenter1 (lnevers) requested the list of available resources as follows:

 $ omni.py -a ig-utah listresources --available --api-version 2 -t GENI 3 -o

Step 2. Review advertisement RSpec for a list of OS images which can be loaded, and identify available resources

Used the output file from the previous step to determine the list of available OS images and compute resources:

 $ egrep "node component|disk_image|available" rspec-boss-utah-geniracks-net-protogeni-xmlrpc-am-2-0.xml 

The following disk images were listed and available:

<disk_image description="FreeBSD 8.2 32-bit version" name="urn:publicid:IDN+utah.geniracks.net+image+emulab-ops:FBSD82-STD" os="FreeBSD" version="8.2"/>      
<disk_image default="true" description="Standard 32-bit Fedora 15 image" name="urn:publicid:IDN+utah.geniracks.net+image+emulab-ops:FEDORA15-STD" os="Fedora" version="15"/>      
<disk_image description="Standard 64-bit Ubuntu 11 image" name="urn:publicid:IDN+utah.geniracks.net+image+emulab-ops:UBUNTU11-64-STD" os="Linux" version="11.04"/>      
<disk_image description="Ubuntu 12.04 image " name="urn:publicid:IDN+utah.geniracks.net+image+gpo:LNUBUNTU1204" os="Linux" version="2.6.38.7-1.0"/>      
<disk_image default="true" description="Standard 32-bit Fedora 15 image" name="urn:publicid:IDN+utah.geniracks.net+image+emulab-ops:FEDORA15-STD" os="Fedora" version="15"/>      

Note 1: Ticket instaticket:14 has been written for duplicate default image in the Advertisement RSpec.
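
A quick way to pull just the image URNs out of a saved advertisement is sketched below (the sample file stands in for the rspec-boss-utah-geniracks-net-protogeni-xmlrpc-am-2-0.xml output above and reproduces two of its lines):

```shell
# Extract disk_image name URNs from a saved advertisement RSpec fragment.
cat > /tmp/ad-sample.xml <<'EOF'
<disk_image description="FreeBSD 8.2 32-bit version" name="urn:publicid:IDN+utah.geniracks.net+image+emulab-ops:FBSD82-STD" os="FreeBSD" version="8.2"/>
<disk_image default="true" description="Standard 32-bit Fedora 15 image" name="urn:publicid:IDN+utah.geniracks.net+image+emulab-ops:FEDORA15-STD" os="Fedora" version="15"/>
EOF
# Match the name="urn:..." attribute, then strip the attribute syntax.
grep -o 'name="urn:[^"]*"' /tmp/ad-sample.xml | sed 's/^name="//; s/"$//'
```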

Step 3. Verify that the GPO Ubuntu customized image is available in the advertisement RSpec

The customized OS image was defined as described on the Custom OS InstaGENI notes page and in instaticket:20.

The available images are part of the listresources output, which was collected as follows:

$ omni.py -a ig-utah listresources --api-version 2 -t GENI 3 --available -o

Found available physical nodes (raw-pc) as well as the custom image that had been uploaded:

<disk_image description="Ubuntu 12.04 image " name="urn:publicid:IDN+utah.geniracks.net+image+gpo:LNUBUNTU1204" 
  os="Linux" version="2.6.38.7-1.0"/>      

Step 4. Define a request RSpec for two VMs, each with a GPO Ubuntu image

This test case could not be executed as originally planned; modifications were required. Using the advertised Ubuntu or FreeBSD images is not supported for sliver_type emulab-openvz; the containers support only Fedora. This prevents executing step 4 of the IG-EXP-2 InstaGENI Single Site test case as described in the Acceptance Test Plan section for the single-site scenario.

Two options were available:

(a) Modify step 4 to assume default image is used (Fedora15).

(b) Modify step 4 to assume a raw-pc is used to load the custom image, which duplicates later step 12 in the procedure.

Option (a) was chosen for the execution of step 4, which means the RSpec was modified to use the default OpenVZ image.

The request RSpec used for this sliver also requests a publicly routable IP address and a public TCP/UDP port mapping for the control interface on each node.
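
For illustration, a minimal two-VM request RSpec along these lines might look like the following sketch (the client_id values, link name, and IP addresses are illustrative; the default Fedora 15 image is used because no disk_image element is specified):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">
  <node client_id="host1" exclusive="false">
    <sliver_type name="emulab-openvz"/>
    <interface client_id="host1:if0">
      <ip address="10.10.1.1" netmask="255.255.255.0" type="ipv4"/>
    </interface>
  </node>
  <node client_id="host2" exclusive="false">
    <sliver_type name="emulab-openvz"/>
    <interface client_id="host2:if0">
      <ip address="10.10.1.2" netmask="255.255.255.0" type="ipv4"/>
    </interface>
  </node>
  <link client_id="lan0">
    <interface_ref client_id="host1:if0"/>
    <interface_ref client_id="host2:if0"/>
  </link>
</rspec>
```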

Step 5. Create the first slice

Created the slice as follows:

$ omni.py createslice IG-EXP-2-exp1 

Step 6. Create a sliver in the first slice, using the "modified" RSpec as defined in step 4

Created a sliver with two VMs using the RSpec file IG-EXP-2-exp1.rspec:

$ omni.py createsliver -a ig-utah IG-EXP-2-exp1 --api-version 2 -t GENI 3 IG-EXP-2-exp1.rspec

Checked sliver status to verify that node allocation and provisioning had completed, by confirming that 'pg_status' was 'ready':

 $ omni.py sliverstatus -a ig-utah IG-EXP-2-exp1 --api-version 2 -t GENI 3
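
The status check can be repeated until the sliver is ready; a runnable sketch of such a polling loop is below (a stub function stands in for the omni.py sliverstatus call above, so the loop itself can be exercised anywhere):

```shell
# Poll sliver status until pg_status reports ready, with bounded attempts.
# status_cmd is a stub; in practice, substitute the omni.py sliverstatus call.
status_cmd() { echo "pg_status: ready"; }

ready=no
for _ in 1 2 3 4 5; do
  if status_cmd | grep -q ready; then
    ready=yes
    break
  fi
  sleep 5   # give the aggregate time to provision between polls
done
echo "sliver ready: $ready"
```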

Once the nodes were allocated and provisioned, determined the hosts assigned to the experiment:

 $ omni.py listresources -a ig-utah IG-EXP-2-exp1 --api-version 2 -t GENI 3 -o
 $ egrep "hostname|port" IG-EXP-2-exp1-rspec-utah-geniracks-net-protogeni.xml    
   <login authentication="ssh-keys" hostname="pcvm5-2.utah.geniracks.net" port="22" username="lnevers"/>    </services>  </node>  
   <login authentication="ssh-keys" hostname="pcvm5-3.utah.geniracks.net" port="22" username="lnevers"/>    </services>  </node>  

Step 7. Log in to each of the systems, and send traffic to the other system sharing a VLAN

Logged in to the first host using the hostname allocated on the control network (155.98.34.0), verified the address assignment, and sent traffic to the remote host:

 $ ssh pcvm5-2.utah.geniracks.net
  Warning: Permanently added 'pcvm5-2.utah.geniracks.net,155.98.34.130' (RSA) to the list of known hosts.
  [lnevers@host1 ~]$ /sbin/ifconfig | egrep "inet addr"
          inet addr:155.98.34.130  Bcast:155.98.34.255  Mask:255.255.255.0
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet addr:10.10.1.1  Bcast:10.10.1.255  Mask:255.255.255.0
          inet addr:10.10.2.1  Bcast:10.10.2.255  Mask:255.255.255.0
 [lnevers@host1 ~]$ ping 10.10.2.2 -c 5
 PING 10.10.2.2 (10.10.2.2) 56(84) bytes of data.
 64 bytes from 10.10.2.2: icmp_req=1 ttl=64 time=0.033 ms
 64 bytes from 10.10.2.2: icmp_req=2 ttl=64 time=0.026 ms
 64 bytes from 10.10.2.2: icmp_req=3 ttl=64 time=0.026 ms
 64 bytes from 10.10.2.2: icmp_req=4 ttl=64 time=0.026 ms
 64 bytes from 10.10.2.2: icmp_req=5 ttl=64 time=0.026 ms
--- 10.10.2.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.026/0.027/0.033/0.005 ms
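
When scripting this check, the loss figure can be extracted from the ping summary line; a small sketch follows (the sample line is taken from the output above):

```shell
# Parse the packet-loss percentage from a ping summary line and
# check the zero-loss criterion.
summary='5 packets transmitted, 5 received, 0% packet loss, time 3999ms'
loss=$(echo "$summary" | grep -o '[0-9]*% packet loss' | grep -o '^[0-9]*')
echo "packet loss: ${loss}%"            # prints: packet loss: 0%
if [ "$loss" -eq 0 ]; then echo "criterion met"; fi
```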

Logged in to the second host, verified the address assignment, and exchanged traffic:

$ ssh pcvm5-3.utah.geniracks.net
Warning: Permanently added 'pcvm5-3.utah.geniracks.net,155.98.34.131' (RSA) to the list of known hosts.
[lnevers@host2 ~]$  /sbin/ifconfig | egrep "inet addr"
          inet addr:155.98.34.131  Bcast:155.98.34.255  Mask:255.255.255.0
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet addr:10.10.1.2  Bcast:10.10.1.255  Mask:255.255.255.0
          inet addr:10.10.2.2  Bcast:10.10.2.255  Mask:255.255.255.0
[lnevers@host2 ~]$ ping 10.10.2.1 -c 5
PING 10.10.2.1 (10.10.2.1) 56(84) bytes of data.
64 bytes from 10.10.2.1: icmp_req=1 ttl=64 time=0.808 ms
64 bytes from 10.10.2.1: icmp_req=2 ttl=64 time=0.026 ms
64 bytes from 10.10.2.1: icmp_req=3 ttl=64 time=0.027 ms
64 bytes from 10.10.2.1: icmp_req=4 ttl=64 time=0.027 ms
64 bytes from 10.10.2.1: icmp_req=5 ttl=64 time=0.026 ms
--- 10.10.2.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.026/0.182/0.808/0.313 ms

Step 8. Using root privileges on one of the VMs, load a kernel module

Loading a kernel module is expected not to work on shared OpenVZ nodes, so testing proceeded past this step.

Step 9. Run a netcat listener and bind to port XYZ on each of the VMs in the Utah rack

Modified the test to use iperf. Started an iperf server on host2:

[lnevers@host2 ~]$ /usr/bin/iperf -s

Step 10. Send traffic to port XYZ on each of the VMs in the Utah rack over the control network from any commodity Internet host

Started iperf on host1 to send to host2, with the following results:

On host1:

[lnevers@host1 ~]$ /usr/bin/iperf -c 10.10.1.2
------------------------------------------------------------
Client connecting to 10.10.1.2, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.1.1 port 55845 connected with 10.10.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   116 MBytes  97.5 Mbits/sec

On host2:

[lnevers@host2 ~]$ /usr/bin/iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.10.1.2 port 5001 connected with 10.10.1.1 port 55845
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.2 sec   116 MBytes  95.7 Mbits/sec
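
If this throughput check is scripted, the bandwidth figure can be pulled from an iperf summary line; a sketch using the client summary line above as sample data:

```shell
# Extract the Mbits/sec figure from an iperf summary line.
line='[  3]  0.0-10.0 sec   116 MBytes  97.5 Mbits/sec'
bw=$(echo "$line" | grep -o '[0-9.]* Mbits/sec' | awk '{print $1}')
echo "measured bandwidth: $bw Mbits/sec"   # prints: measured bandwidth: 97.5 Mbits/sec
```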

Step 11. As Experimenter2, request ListResources from Utah InstaGENI

As experimenter lnevers1@bbn.com requested the list of available resources as follows:

 $ omni.py -a ig-utah listresources --available --api-version 2 -t GENI 3 -o

Step 12. Define a request RSpec for two physical nodes, both using the uploaded GPO Ubuntu images

Created an RSpec for two raw-pc nodes using the customized OS image; the RSpec file is named IG-EXP-2-exp2.rspec.
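
A sketch of what a raw-pc node definition with the custom image might look like in that RSpec (the client_id is illustrative; the disk_image URN is the one advertised in step 3):

```xml
<node client_id="hostx" exclusive="true">
  <sliver_type name="raw-pc">
    <disk_image name="urn:publicid:IDN+utah.geniracks.net+image+gpo:LNUBUNTU1204"/>
  </sliver_type>
</node>
```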

Step 13. Create the second slice

The following command was used to create the second slice:

 $ ./src/omni.py createslice IG-EXP-2-exp2

Step 14. Create a sliver in the second slice, using the RSpec defined in step 12

The following commands were used to create the sliver:

 $ omni.py createsliver IG-EXP-2-exp2 -a ig-utah --api-version 2 -t GENI 3 IG-EXP-2-exp2.rspec 

Verified that the sliver was ready:

 $ omni.py sliverstatus IG-EXP-2-exp2 -a ig-utah --api-version 2 -t GENI 3 

Determined the host assignment for the sliver:

 $ omni.py listresources IG-EXP-2-exp2 -a ig-utah --api-version 2 -t GENI 3 -o  
 $  egrep "hostname|port" IG-EXP-2-exp2-sliverstatus-boss-utah-geniracks-net-protogeni-xmlrpc-am-2-0.json
  'hostname': 'pc2.utah.geniracks.net',
  'port': 22,
  'hostname': 'pc4.utah.geniracks.net',
  'port': 22,
                                                          

Step 15. Log in to each of the systems, and send traffic to the other system

Logged in to the first assigned host:

lnevers1@sendaria:~$ ssh pc2.utah.geniracks.net
Welcome to Ubuntu 12.04 LTS (GNU/Linux 2.6.38.7-1.0emulab x86_64)

 * Documentation:  https://help.ubuntu.com/
Last login: Mon Aug 20 13:00:48 2012 from sendaria.gpolab.bbn.com
hostx:~% cat /etc/issue
Ubuntu 12.04 LTS \n \l
hostx:~% 

Logged in to the second host:

lnevers1@sendaria:~$ ssh pc4.utah.geniracks.net
Welcome to Ubuntu 12.04 LTS (GNU/Linux 2.6.38.7-1.0emulab x86_64)

 * Documentation:  https://help.ubuntu.com/
Last login: Mon Aug 20 13:00:47 2012 from sendaria.gpolab.bbn.com
hosty:~% cat /etc/issue
Ubuntu 12.04 LTS \n \l

hosty:~% 

Exchanged iperf traffic from hosty to hostx. Server statistics:

hostx:~% /usr/bin/iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59124
[  5] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59125
[  6] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59123
[  7] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59121
[  8] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59119
[  9] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59118
[ 10] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59120
[ 11] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59116
[ 12] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59117
[ 13] local 10.10.1.1 port 5001 connected with 10.10.1.2 port 59122
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  94.4 MBytes  78.9 Mbits/sec
[  6]  0.0-10.1 sec  80.0 MBytes  66.6 Mbits/sec
[  5]  0.0-10.1 sec  66.8 MBytes  55.4 Mbits/sec
[ 13]  0.0-13.1 sec   163 MBytes   105 Mbits/sec
[  8]  0.0-13.1 sec   139 MBytes  89.0 Mbits/sec
[  9]  0.0-13.1 sec   153 MBytes  98.0 Mbits/sec
[ 11]  0.0-13.1 sec   114 MBytes  73.3 Mbits/sec
[ 12]  0.0-13.1 sec   115 MBytes  73.5 Mbits/sec
[  7]  0.0-13.1 sec   102 MBytes  65.6 Mbits/sec
[ 10]  0.0-13.1 sec   106 MBytes  68.1 Mbits/sec
[SUM]  0.0-13.1 sec  1.11 GBytes   726 Mbits/sec

Client statistics:

hosty:~% /usr/bin/iperf -c 10.10.1.1 -P 10
------------------------------------------------------------
Client connecting to 10.10.1.1, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.1.2 port 59116 connected with 10.10.1.1 port 5001
[ 11] local 10.10.1.2 port 59124 connected with 10.10.1.1 port 5001
[ 12] local 10.10.1.2 port 59125 connected with 10.10.1.1 port 5001
[ 10] local 10.10.1.2 port 59123 connected with 10.10.1.1 port 5001
[  8] local 10.10.1.2 port 59120 connected with 10.10.1.1 port 5001
[  4] local 10.10.1.2 port 59117 connected with 10.10.1.1 port 5001
[  6] local 10.10.1.2 port 59121 connected with 10.10.1.1 port 5001
[  5] local 10.10.1.2 port 59118 connected with 10.10.1.1 port 5001
[  7] local 10.10.1.2 port 59119 connected with 10.10.1.1 port 5001
[  9] local 10.10.1.2 port 59122 connected with 10.10.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 11]  0.0- 7.0 sec  94.4 MBytes   113 Mbits/sec
[ 10]  0.0- 7.0 sec  80.0 MBytes  95.6 Mbits/sec
[ 12]  0.0- 7.0 sec  66.8 MBytes  79.6 Mbits/sec
[  9]  0.0-10.0 sec   163 MBytes   136 Mbits/sec
[  5]  0.0-10.0 sec   153 MBytes   128 Mbits/sec
[  7]  0.0-10.0 sec   139 MBytes   116 Mbits/sec
[  4]  0.0-10.0 sec   115 MBytes  96.0 Mbits/sec
[  3]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  6]  0.0-10.0 sec   102 MBytes  85.7 Mbits/sec
[  8]  0.0-10.0 sec   106 MBytes  88.9 Mbits/sec
[SUM]  0.0-10.0 sec  1.11 GBytes   947 Mbits/sec

Step 16. Verify that experimenters 1 and 2 cannot use the control plane to access each other's resources

Verified that Experimenter2 cannot use the control plane to reach Experimenter1's resources: as user lnevers1 on hostx, attempted to ssh to the control interface of the lnevers experiment (host2), and the unauthorized login was denied:

hostx:~% ssh pcvm5-3.utah.geniracks.net
lnevers1@pcvm5-3.utah.geniracks.net's password: 
Permission denied, please try again.
lnevers1@pcvm5-3.utah.geniracks.net's password: 
Permission denied, please try again.
lnevers1@pcvm5-3.utah.geniracks.net's password: 
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
hostx:~% 

Verified the shared writable filesystem mount for each user. As user lnevers, accessed the shared area:

[lnevers@host2 ~]$ id
uid=20001(lnevers) gid=504(pgeni-gpolab-bbn) groups=504(pgeni-gpolab-bbn),0(root)
[lnevers@host2 ~]$ ls -l /proj/pgeni-gpolab-bbn-com/exp/
total 10
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Aug 20 12:02 IG-EXP-2-exp1
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Aug 20 12:49 IG-EXP-2-exp2
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Jul 25 22:29 jbs15
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Aug  2 08:44 jbs16
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Jul 23 07:02 tuptymon

As user lnevers1 accessed shared area:

hostx:~% id
uid=20001(lnevers1) gid=504(pgeni-gpolab-bbn) groups=504(pgeni-gpolab-bbn),0(root)
hostx:~% ls -l /proj/pgeni-gpolab-bbn-com/exp/
total 10
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Aug 20 12:02 IG-EXP-2-exp1
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Aug 20 12:49 IG-EXP-2-exp2
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Jul 25 22:29 jbs15
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Aug  2 08:44 jbs16
drwxrwx--- 10 geniuser pgeni-gpolab-bbn 512 Jul 23 07:02 tuptymon
hostx:~% 

Step 17. Review system statistics and VM isolation and network isolation on data plane

Reviewed statistics on one of the VM nodes:

[lnevers@host1 ~]$ vmstat -s
     49311612 K total memory
      8852736 K used memory
      2708328 K active memory
      5266024 K inactive memory
     40458876 K free memory
      1030752 K buffer memory
      6747276 K swap cache
      1050168 K total swap
            0 K used swap
      1050168 K free swap
          891 non-nice user cpu ticks
            0 nice user cpu ticks
          217 system cpu ticks
      4940569 idle cpu ticks
          548 IO-wait cpu ticks
            0 IRQ cpu ticks
            0 softirq cpu ticks
            3 stolen cpu ticks
            0 pages paged in
       117884 pages paged out
            0 pages swapped in
            0 pages swapped out
            0 interrupts
    650412925 CPU context switches
   1345485834 boot time
      2616207 forks
[lnevers@host1 ~]$ top

top - 13:16:24 up  1:12,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  16 total,   1 running,  15 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  49311612k total,  8852612k used, 40459000k free,  1030776k buffers
Swap:  1050168k total,        0k used,  1050168k free,  6747344k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                 
    1 root      20   0 19236 1488 1248 S  0.0  0.0   0:00.01 init                     
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd/2               
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.00 khelper/2                
   32 root      20   0 15668 1216  644 S  0.0  0.0   0:00.00 crond                    
  315 root      20   0  179m 1436 1048 S  0.0  0.0   0:00.00 rsyslogd                 
  328 root      20   0 53216 1088  548 S  0.0  0.0   0:00.12 watchquagga              
  349 root      20   0 75096 1232  544 S  0.0  0.0   0:00.00 sshd                     
  358 root      20   0  4152  468  344 S  0.0  0.0   0:00.00 pubsubd                  
  408 root      20   0  135m 7620 1232 S  0.0  0.0   0:00.01 watchdog                 
  545 root      20   0  135m 6952  960 S  0.0  0.0   0:00.00 rc.progagent             
  548 geniuser  20   0 20440 1464 1140 S  0.0  0.0   0:00.00 program-agent            
  564 root      20   0 43428  768  300 S  0.0  0.0   0:00.00 linktest                 
  583 root      20   0  116m 4016 3120 S  0.0  0.0   0:00.00 sshd                     
  585 lnevers   20   0  116m 1844  940 S  0.0  0.0   0:00.03 sshd                     
  586 lnevers   20   0 16236 2080 1204 S  0.0  0.0   0:00.02 csh                      
  632 lnevers   20   0 14940 1136  904 R  0.0  0.0   0:00.00 top        

On the dedicated raw-pc node:

hosty:~% vmstat -s
     49419336 K total memory
      1148440 K used memory
        83936 K active memory
        93180 K inactive memory
     48270896 K free memory
        20436 K buffer memory
       134396 K swap cache
      1050172 K total swap
            0 K used swap
      1050172 K free swap
          518 non-nice user cpu ticks
            0 nice user cpu ticks
         1041 system cpu ticks
      1346555 idle cpu ticks
         2774 IO-wait cpu ticks
            0 IRQ cpu ticks
           16 softirq cpu ticks
            0 stolen cpu ticks
       130005 pages paged in
        91492 pages paged out
            0 pages swapped in
            0 pages swapped out
       194941 interrupts
       392442 CPU context switches
   1345489123 boot time
         2122 forks
hosty:~% top

top - 13:17:50 up 19 min,  1 user,  load average: 0.00, 0.01, 0.05
Tasks: 121 total,   1 running, 120 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.1%sy,  0.0%ni, 99.7%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  49419336k total,  1148604k used, 48270732k free,    20448k buffers
Swap:  1050172k total,        0k used,  1050172k free,   134492k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
    1 root      20   0 24444 2384 1344 S    0  0.0   0:04.39 init               
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd           
    3 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/0        
    5 root      20   0     0    0    0 S    0  0.0   0:00.27 kworker/u:0        
    6 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/0        
    7 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/1        
    8 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/1:0        
    9 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/1        
   10 root      20   0     0    0    0 S    0  0.0   0:00.02 kworker/0:1        
   11 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/2        
   12 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/2:0        
   13 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/2        
   14 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/3        
   15 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/3:0        
   16 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/3        
   17 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/4        
   18 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/4:0        
   19 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/4        
   20 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/5        
   21 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/5:0        
   22 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/5        
   23 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/6        
   24 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/6:0        
   25 root      20   0     0    0    0 S    0  0.0   0:00.01 ksoftirqd/6        
   26 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/7        
   27 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/7:0        
   28 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/7        
   29 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/8        
   31 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/8        
   32 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/9        
   34 root      20   0     0    0    0 S    0  0.0   0:00.00 ksoftirqd/9        
   35 root      RT   0     0    0    0 S    0  0.0   0:00.00 migration/10       
                                  
hosty:~% procinfo
Memory:        Total        Used        Free     Buffers                       
RAM:        49419336     1151452    48267884       21444                       
Swap:        1050172           0     1050172                                   

Bootup: Mon Aug 20 12:58:43 2012   Load average: 0.00 0.01 0.05 1/130 2297     

user  :   00:00:06.26   0.0%  page in :           130977                       
nice  :   00:00:00.00   0.0%  page out:           136992                       
system:   00:00:10.74   0.1%  page act:            36024                       
IOwait:   00:00:32.33   0.2%  page dea:                0                       
hw irq:   00:00:00.00   0.0%  page flt:           741102                       
sw irq:   00:00:00.18   0.0%  swap in :                0                       
idle  :   04:56:40.84  99.7%  swap out:                0                       
uptime:   00:24:49.15         context :           401497                       

irq   0:         52  timer               irq  73:      16079  eth1-2           
irq   1:          2  i8042               irq  74:      22494  eth1-3           
irq   4:        261  serial              irq  75:       7869  eth1-4           
irq   8:          1  rtc0                irq  76:      13730  eth1-5           
irq   9:          0  acpi                irq  77:       5064  eth1-6           
irq  12:          4  i8042               irq  78:      20112  eth1-7           
irq  17:         31  uhci_hcd:usb6       irq  98:        397  eth0-0           
irq  20:          0  ehci_hcd:usb1, uh   irq  99:        102  eth0-1           
irq  22:          0  uhci_hcd:usb4       irq 100:         19  eth0-2           
irq  23:          0  uhci_hcd:usb3, uh   irq 101:        362  eth0-3           
irq  64:          0  dmar0               irq 102:        464  eth0-4           
irq  67:      11360  hpsa0               irq 103:        811  eth0-5           
irq  71:       7628  eth1-0              irq 104:        127  eth0-6           
irq  72:      29610  eth1-1              irq 105:        986  eth0-7           

sda             6169r            5657w                                         

eth0        TX 1.97MiB       RX 287.01KiB     eth3        TX 0.00B         RX 0.00B        
eth1        TX 1.16GiB       RX 5.81MiB       lo          TX 7.77KiB       RX 7.77KiB      
eth2        TX 0.00B         RX 0.00B                                          

Step 18. Verify that each VM has a distinct MAC address for that interface

Verified that each interface has a unique MAC address. On the first host:

hosty:~% ifconfig -a|grep HW
eth0      Link encap:Ethernet  HWaddr e8:39:35:b1:ec:9c  
eth1      Link encap:Ethernet  HWaddr e8:39:35:b1:ec:9e  
eth2      Link encap:Ethernet  HWaddr e8:39:35:b1:ec:d0  
eth3      Link encap:Ethernet  HWaddr e8:39:35:b1:ec:d2  

On the second host:

hostx:~% ifconfig -a|grep HW
eth0      Link encap:Ethernet  HWaddr e8:39:35:b1:0c:7c  
eth1      Link encap:Ethernet  HWaddr e8:39:35:b1:0c:7e  
eth2      Link encap:Ethernet  HWaddr e8:39:35:b1:0c:54  
eth3      Link encap:Ethernet  HWaddr e8:39:35:b1:0c:56  
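
Uniqueness across the listed interfaces can also be checked mechanically; a sketch using captured ifconfig lines as sample data (the MACs below mirror the listings above):

```shell
# Flag any MAC address that appears more than once in saved ifconfig output.
cat > /tmp/macs.txt <<'EOF'
eth0      Link encap:Ethernet  HWaddr e8:39:35:b1:ec:9c
eth1      Link encap:Ethernet  HWaddr e8:39:35:b1:ec:9e
eth0      Link encap:Ethernet  HWaddr e8:39:35:b1:0c:7c
eth1      Link encap:Ethernet  HWaddr e8:39:35:b1:0c:7e
EOF
dups=$(awk '{print $NF}' /tmp/macs.txt | sort | uniq -d)
if [ -z "$dups" ]; then
  echo "all MAC addresses unique"
else
  echo "duplicate MACs: $dups"
fi
```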

Step 19. Verify that VMs' MAC addresses are learned on the data plane switch

Successfully exchanged traffic over the data plane, as shown by the ARP tables.

ARP table on hosty:

hosty:~% arp -a | egrep 10.10
hostX-lan0 (10.10.1.1) at e8:39:35:b1:0c:7e [ether] on eth1
hosty:~% 

ARP table on hostx:

hostx:~% arp -a | egrep 10.10
hostY-lan0 (10.10.1.2) at e8:39:35:b1:ec:9e [ether] on eth1
hostx:~% 

Step 20. Stop traffic and delete slivers

As user lnevers@bbn.com issued the following:

 $ omni.py deletesliver -a ig-utah IG-EXP-2-exp1 --api-version 2 

As user lnevers1@bbn.com issued the following:

 $ omni.py deletesliver -a ig-utah IG-EXP-2-exp2 --api-version 2 

Verified that resources were released by checking the listresources details, which show:

  • The dedicated nodes pc2 and pc4 are available again.
  • The VM slot counts (emulab:node_type type_slots) are restored for the pcshared nodes pc3 and pc5:
     $ ./src/omni.py -a ig-utah listresources --available --api-version 2 -t GENI 3 -o
     $ egrep "node comp|avail|type_slots" rspec-boss-utah-geniracks-net-protogeni-xmlrpc-am-2-0.xml
      
          <available name="starlight-express"/>    
          <available name="mesoscale-openflow"/>    
          <available name="jgn-1"/>    
          <available name="jgn-2"/>    
          <available name="jgn-3"/>    
          <available name="jgn-4"/>    
      <node component_id="urn:publicid:IDN+utah.geniracks.net+node+procurve2" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_name="procurve2" exclusive="true">    
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type static="true" type_slots="unlimited"/>      
          <available now="true"/>    
      <node component_id="urn:publicid:IDN+utah.geniracks.net+node+pc3" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_name="pc3" exclusive="false">    
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="98"/>      
              <emulab:node_type type_slots="98"/>      
              <emulab:node_type static="true" type_slots="unlimited"/>      
          <available now="true"/>    
      <node component_id="urn:publicid:IDN+utah.geniracks.net+node+pc5" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_name="pc5" exclusive="false">    
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="99"/>      
              <emulab:node_type type_slots="99"/>      
              <emulab:node_type static="true" type_slots="unlimited"/>      
          <available now="true"/>    
    <node component_id="urn:publicid:IDN+utah.geniracks.net+node+pc4" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_name="pc4" exclusive="true">    
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="100"/>      
              <emulab:node_type type_slots="100"/>      
              <emulab:node_type static="true" type_slots="unlimited"/>      
          <available now="true"/>    
      <node component_id="urn:publicid:IDN+utah.geniracks.net+node+pc1" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_name="pc1" exclusive="false">    
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="100"/>      
              <emulab:node_type type_slots="100"/>      
              <emulab:node_type static="true" type_slots="unlimited"/>      
          <available now="true"/>    
      <node component_id="urn:publicid:IDN+utah.geniracks.net+node+pc2" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_name="pc2" exclusive="true">    
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="1"/>      
              <emulab:node_type type_slots="100"/>      
              <emulab:node_type type_slots="100"/>      
              <emulab:node_type static="true" type_slots="unlimited"/>      
          <available now="true"/>    
    
      <node component_id="urn:publicid:IDN+utah.geniracks.net+node+internet" component_manager_id="urn:publicid:IDN+utah.geniracks.net+authority+cm" component_name="internet" exclusive="true">    
              <emulab:node_type static="true" type_slots="unlimited"/>      
          <available now="true"/>    
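
The release check can be partly automated by counting the nodes advertised as available; a sketch over a saved advertisement fragment (the sample file stands in for the rspec-boss-utah-geniracks-net-protogeni-xmlrpc-am-2-0.xml output above):

```shell
# Count <available now="true"/> entries in a saved advertisement RSpec.
cat > /tmp/ad-avail.xml <<'EOF'
<node component_name="pc2" exclusive="true"><available now="true"/></node>
<node component_name="pc4" exclusive="true"><available now="true"/></node>
EOF
grep -c '<available now="true"/>' /tmp/ad-avail.xml   # prints: 2
```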
    
    