GPO US Ignite InstaGENI Confirmation Tests
For details about the tests in this page, see the US Ignite InstaGENI Confirmation Tests page.
For site test status, see the US Ignite InstaGENI Confirmation Tests Status page.
Note: The omni nicknames for the site aggregates used in these tests are:
usignite-ig=urn:publicid:IDN+research.umich.edu+authority+cm,https://boss.mcv.sdn.uky.edu:12369/protogeni/xmlrpc/am
usignite-ig-of=urn:publicid:IDN+openflow:foam:foam.research.umich.edu+authority+am,https://foam.mcv.sdn.uky.edu:3626/foam/gapi/2
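These nicknames can be stored in the experimenter's omni_config so that the -a shorthand works; a sketch of the relevant section, assuming the standard [aggregate_nicknames] section name (the URN/URL pairs are the ones listed above):

```
[aggregate_nicknames]
usignite-ig=urn:publicid:IDN+research.umich.edu+authority+cm,https://boss.mcv.sdn.uky.edu:12369/protogeni/xmlrpc/am
usignite-ig-of=urn:publicid:IDN+openflow:foam:foam.research.umich.edu+authority+am,https://foam.mcv.sdn.uky.edu:3626/foam/gapi/2
```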
IG-CT-1 - Access to New Site VM resources
Got the aggregate version, which showed that AM API V1, V2, and V3 are supported and V2 is the default:
$ omni.py getversion -a usignite-ig
The US Ignite InstaGENI 'code_tag' value ('99a2b1f03656cb665918eebd2b95434a6d3e50f9') is the same as at the other two InstaGENI sites checked (GPO and Utah):
IG GPO:      { 'code_tag': '99a2b1f03656cb665918eebd2b95434a6d3e50f9',
IG SITENAME: { 'code_tag': '99a2b1f03656cb665918eebd2b95434a6d3e50f9',
IG Utah:     { 'code_tag': '99a2b1f03656cb665918eebd2b95434a6d3e50f9',
Get list of "available" compute resources:
$ omni.py -a usignite-ig listresources --available -o
Verified that Advertisement RSpec only includes available resources, as requested:
$ egrep "node comp|available now" rspec-xxx
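This availability check can also be scripted; a minimal sketch against a fabricated two-node advertisement fragment (the file name, node URNs, and the fragment itself are illustrative, not output from this site):

```shell
# Count the nodes marked available in an advertisement RSpec.
# Sample fragment fabricated for illustration.
cat > /tmp/sample-ad.rspec <<'EOF'
<node component_id="urn:publicid:IDN+site+node+pc1">
  <available now="true"/>
</node>
<node component_id="urn:publicid:IDN+site+node+pc2">
  <available now="false"/>
</node>
EOF

available=$(grep -c 'available now="true"' /tmp/sample-ad.rspec)
echo "Available nodes: $available"
```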
Created a slice:
$ omni.py createslice IG-CT-1
Created a sliver with 4 VMs using the RSpec IG-CT-1.rspec:
$ omni.py createsliver -a usignite-ig IG-CT-1 IG-CT-1.rspec
The following is login information for the sliver:
$ readyToLogin.py -a usignite-ig IG-CT-1 <...>
Measurements
Log into the specified host and collect iperf and ping statistics. All measurements are collected over 60 seconds, using default images and default link bandwidth:
Iperf US Ignite InstaGENI SITENAME VM-2 to VM-1 (TCP) - TCP window size: 16.0 KB
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI SITENAME VM-2 to VM-1 (UDP) - 1470 byte datagrams & UDP buffer size: 136 KByte
Ping from US Ignite InstaGENI SITENAME VM-2 to VM-1
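The measurements above follow one pattern: start an iperf server on VM-1, then run 60-second TCP tests from VM-2 with 1, 5, and 10 parallel clients, plus a UDP test and a ping. A minimal sketch that prints the client-side commands (10.10.1.1 is a hypothetical dataplane address for VM-1; the sketch prints rather than runs the commands, since the VMs are remote):

```shell
SERVER=10.10.1.1   # hypothetical dataplane address of VM-1 (run "iperf -s" there)
DURATION=60        # all measurements are collected over 60 seconds

# TCP tests from VM-2 with 1, 5, and 10 parallel clients:
for n in 1 5 10; do
  echo "iperf -c $SERVER -t $DURATION -P $n"
done
# UDP test (server side runs "iperf -s -u"):
echo "iperf -c $SERVER -u -t $DURATION"
# Ping statistics:
echo "ping $SERVER -c $DURATION -q"
```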
IG-CT-2 - Access to New Site bare metal and VM resources
Create a slice:
$ omni.py createslice IG-CT-2
Created a sliver with one VM and one raw PC using the RSpec IG-CT-2.rspec:
$ omni.py createsliver -a usignite-ig IG-CT-2 IG-CT-2.rspec
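The mixed request comes down to two node elements that differ in exclusivity and sliver type; a sketch in GENI v3 request RSpec form (the client_id values are illustrative and IG-CT-2.rspec itself is not reproduced here; emulab-xen and raw-pc are the usual InstaGENI sliver types):

```xml
<node client_id="vm" exclusive="false">
  <sliver_type name="emulab-xen"/>
</node>
<node client_id="pc" exclusive="true">
  <sliver_type name="raw-pc"/>
</node>
```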
Determined login information:
$ readyToLogin.py -a usignite-ig IG-CT-2 <...>
Measurements
Log into the specified host and collect iperf and ping statistics. All measurements are collected over 60 seconds, using default images and default link bandwidth:
Iperf US Ignite InstaGENI SITENAME PC to VM (TCP) - TCP window size: 16.0 KB
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI SITENAME PC to VM (UDP) - 1470 byte datagrams & UDP buffer size: 136 KByte
Ping from US Ignite InstaGENI SITENAME PC to VM
Iperf US Ignite InstaGENI SITENAME VM to PC (TCP) - TCP window size: 16.0 KB
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI SITENAME VM to PC (UDP) - 1470 byte datagrams & UDP buffer size: 136 KByte
Ping from US Ignite InstaGENI SITENAME VM to PC
Note: The following measurements are not part of the test plan and may be ignored. Although not part of this test, an experiment was run with two raw PCs to capture performance between dedicated devices.
Iperf US Ignite InstaGENI SITENAME PC to PC (TCP) - TCP window size: 16.0 KB
Collected: 2013-02-20
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI SITENAME PC to PC (UDP) - 1470 byte datagrams & UDP buffer size: 136 KByte
Ping from US Ignite InstaGENI SITENAME PC to PC
IG-CT-3 - Multiple sites experiment
Create a slice:
$ omni.py createslice IG-CT-3
Create a sliver with one VM at SITENAME and one VM at GPO connected via a GRE tunnel using RSpec IG-CT-3.rspec:
$ stitcher.py createsliver IG-CT-3 -a usignite-ig IG-CT-3.rspec
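The cross-site connection in this kind of request is a link element whose link_type names a GRE tunnel; a sketch (the interface ids are illustrative and IG-CT-3.rspec itself is not reproduced here):

```xml
<link client_id="gre0">
  <link_type name="gre-tunnel"/>
  <interface_ref client_id="vm-site:if0"/>
  <interface_ref client_id="vm-gpo:if0"/>
</link>
```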
Determined login information at the SITENAME and GPO aggregates:
$ readyToLogin.py IG-CT-3 --useSliceAggregates ....
Measurements
Iperf US Ignite InstaGENI GPO VM-2 to SITENAME VM-1 (TCP) - TCP window size: 16.0 KB
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI GPO VM-2 to SITENAME VM-1 (UDP) - 1470 byte datagrams & UDP buffer size: 136 KByte
Ping from US Ignite InstaGENI GPO VM-2 to SITENAME VM-1
Iperf US Ignite InstaGENI SITENAME VM-1 to GPO VM-2 (TCP) - TCP window size: 16.0 KB
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI SITENAME VM-1 to GPO VM-2 (UDP) - 1470 byte datagrams & UDP buffer size: 136 KByte
Ping from US Ignite InstaGENI SITENAME VM-1 to GPO VM-2
IG-CT-5 - Experiment Monitoring
GMOC Monitoring
Reviewed the content of the GMOC Monitoring page for aggregates and found the FOAM aggregate listed.
Active OpenFlow Slivers:
List of OpenFlow Resources in use:
Monitoring shows Aggregate measurement for CPU utilization, Disk Utilization, Network Statistics and OF Datapath and Sliver Statistics:
GENI Monitoring
Checked for site's compute and foam aggregates:
Compute aggregate availability:
FOAM Aggregate availability:
External Checks for site aggregates:
No experiments for external check stores; this site has no meso-scale connection.
IG-CT-6 - Administrative Tests
Sent a request for an administrative account to the site contact listed on the SITENAME InstaGENI aggregate page, following the instructions on the Admin Accounts on InstaGeni Racks page. A local admin account was created, and it was also necessary to join the emulab-ops group at https://www.research.umich.edu/joinproject.php3?target_pid=emulab-ops. Once the account was created and membership in emulab-ops was approved, the administrative tests were executed.
LNM:~$ ssh lnevers@control.research.umich.edu
Also access the node via the PG Boss alias:
LNM:~$ ssh boss.research.umich.edu
Further verified access by ssh from the ops node to the boss node, which is usually restricted for non-admin users:
LNM:~$ ssh ops.research.umich.edu
From boss node accessed each of the experiment nodes that support VMs:
[lnevers@boss ~]$ for i in pc1 pc2; do ssh $i "echo -n '===> Host: ';hostname;sudo whoami;uname -a;echo"; done
In order to access Dedicated Nodes some experiment must be running on the raw-pc device. At the time of this capture two raw-pc nodes were in use (pcX and pcY):
[lnevers@boss ~]$ sudo ssh pcX
[root@pcX ~]# sudo whoami
root
[root@pcX ~]# exit
logout
Connection to pcX.research.umich.edu
[lnevers@boss ~]$ sudo ssh pcY
[root@pc ~]# sudo whoami
root
[root@pc ~]#
Access infrastructure switches using the documented password. First connect to the switch named procurve1, the control network switch:
[lnevers@boss ~]$ sudo more /usr/testbed/etc/switch.pswd
XXXXXXXXX
[lnevers@boss ~]$ telnet procurve1
Connect to the switch named procurve2, the dataplane network switch, via ssh using the documented password:
[lnevers@boss ~]$ sudo more /usr/testbed/etc/switch.pswd
xxxxxxx
[lnevers@boss ~]$ ssh manager@procurve2
Access the FOAM VM and gather version information:
LNM:~$ ssh lnevers@foam.research.umich.edu sudo foamctl admin:get-version --passwd-file=/etc/foam.passwd
Check FOAM configuration for site.admin.email, geni.site-tag, email.from settings:
foamctl config:get-value --key="site.admin.email" --passwd-file=/etc/foam.passwd
foamctl config:get-value --key="geni.site-tag" --passwd-file=/etc/foam.passwd
foamctl config:get-value --key="email.from" --passwd-file=/etc/foam.passwd
# Check if FOAM auto-approve is on. Value 2 = auto-approve is on.
foamctl config:get-value --key="geni.approval.approve-on-creation" --passwd-file=/etc/foam.passwd
Show FOAM slivers and details for one sliver:
foamctl geni:list-slivers --passwd-file=/etc/foam.passwd
Access the FlowVisor VM and gather version information:
ssh lnevers@flowvisor.research.umich.edu
Check the FlowVisor version, list of devices, get details for a device, list of active slices, and details for one of the slices:
fvctl --passwd-file=/etc/flowvisor.passwd ping hello
# Devices
fvctl --passwd-file=/etc/flowvisor.passwd listDevices
fvctl --passwd-file=/etc/flowvisor.passwd getDeviceInfo 06:d6:6c:3b:e5:68:00:00
# Slices
fvctl --passwd-file=/etc/flowvisor.passwd listSlices
fvctl --passwd-file=/etc/flowvisor.passwd getSliceInfo 5c956f94-5e05-40b5-948f-34d0149d9182
Check the FlowVisor settings:
fvctl --passwd-file=/etc/flowvisor.passwd dumpConfig /tmp/flowvisor-config
more /tmp/flowvisor-config
GPO Stitching Confirmation Tests
This page captures the detailed test logs for each test defined in the New Site Stitching Confirmation Tests page. For site status, see the New Site Stitching Confirmation Tests Status page.
IG-ST-1 New Site to GPO IG topology
The SITENAME site advertises the following stitching details:
Experimenters may not need any of this data, but it is a helpful reference when determining how many VLANs are delegated for stitching at the site or how much bandwidth can be requested.
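The delegated VLAN range can be pulled from the aggregate's stitching advertisement; a sketch against a fabricated fragment of the stitching extension (the element names follow the GENI stitching schema; the file name and range value are illustrative):

```shell
# Extract advertised VLAN ranges from a stitching advertisement fragment.
# Sample fragment fabricated for illustration.
cat > /tmp/stitch-ad.xml <<'EOF'
<switchingCapabilitySpecificInfo_l2sc>
  <vlanRangeAvailability>990-999</vlanRangeAvailability>
</switchingCapabilitySpecificInfo_l2sc>
EOF

vlans=$(grep -o '<vlanRangeAvailability>[^<]*' /tmp/stitch-ad.xml | sed 's/<vlanRangeAvailability>//')
echo "Delegated VLANs: $vlans"
```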
Create a slice and then create the stitched slivers with the RSpec IG-ST-1.rspec:
omni.py createslice IG-ST-1
stitcher.py createsliver IG-ST-1 IG-ST-1.rspec -o
Determined login information at the SITENAME and GPO aggregates:
$ readyToLogin.py IG-ST-1 --useSliceAggregates ....
Measurements
Iperf US Ignite InstaGENI GPO VM to InstaGENI SITENAME VM (TCP) - TCP window size: 85.0 KByte (default)
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI GPO VM to InstaGENI SITENAME VM (UDP) - UDP buffer size: 208 kByte (default)
Ping from US Ignite InstaGENI GPO VM to InstaGENI SITENAME VM
Iperf US Ignite InstaGENI SITENAME VM to GPO InstaGENI VM (TCP) - TCP window size: 85.0 KByte (default)
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI SITENAME VM to GPO InstaGENI VM (UDP) - UDP buffer size: 208 kByte (default)
Ping from US Ignite InstaGENI SITENAME VM to GPO InstaGENI VM
IG-ST-2 New Site to GPO IG Loop topology
Create a slice and then create the stitched slivers with the RSpec IG-ST-2.rspec:
omni.py createslice IG-ST-2
stitcher.py createsliver IG-ST-2 IG-ST-2.rspec -o
Determined login information at the SITENAME and GPO aggregates:
$ readyToLogin.py IG-ST-2 --useSliceAggregates ....
Login to GPO host and ping the remote on each of the two interfaces. Below is the ping output for the GPO site:
ping 10.10.4.2 -c 60 -q ping 192.168.4.2 -c 60 -q
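A scripted pass/fail on the ping results helps when checking both interfaces; a minimal sketch that parses the loss figure from a ping summary line (the sample summary is fabricated):

```shell
# Parse the packet-loss percentage from a ping summary line.
# Sample summary fabricated for illustration.
summary="60 packets transmitted, 60 received, 0% packet loss, time 59012ms"
loss=$(echo "$summary" | grep -o '[0-9]*% packet loss' | cut -d% -f1)
if [ "$loss" -eq 0 ]; then
  echo "PASS: no packet loss"
else
  echo "FAIL: ${loss}% packet loss"
fi
```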
IG-ST-3 New Site 3 node linear topology
Create a slice and then create the stitched slivers with the RSpec IG-ST-3.rspec:
omni.py createslice IG-ST-3
stitcher.py createsliver IG-ST-3 IG-ST-3.rspec -o
Determined login information for the SITENAME host:
$ readyToLogin.py IG-ST-3 --useSliceAggregates ....
Login to the SITENAME host and ping each remote:
# ping GPO IG
ping 192.168.2.1 -c 60 -q
# Utah IG
ping 192.168.4.1 -c 60 -q
IG-ST-4 New Site to GPO EG interoperability
Create a slice and then create the stitched slivers with the RSpec IG-ST-4.rspec, which specifies a 100 Mbps link capacity to work around the mismatch in link capacity units between IG and EG:
omni.py createslice IG-ST-4
stitcher.py createsliver IG-ST-4 IG-ST-4.rspec -o
Determined login information at the SITENAME and GPO aggregates:
$ readyToLogin.py IG-ST-4 --useSliceAggregates ....
Measurements
Iperf ExoGENI GPO VM to US Ignite InstaGENI SITENAME VM (TCP) - TCP window size: 85.0 KByte (default)
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf ExoGENI GPO VM to US Ignite InstaGENI SITENAME VM (UDP) - UDP buffer size: 208 kByte (default)
Ping from ExoGENI GPO VM to US Ignite InstaGENI SITENAME VM
Iperf US Ignite InstaGENI SITENAME VM to ExoGENI GPO VM (TCP) - TCP window size: 85.0 KByte (default)
Collected: 2017-02-XX
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI SITENAME VM to ExoGENI GPO VM (UDP) - UDP buffer size: 208 kByte (default)
Ping from US Ignite InstaGENI SITENAME VM to ExoGENI GPO VM
IG-ST-5 Site Information
Various pages include stitching information for a site. Each of the following were verified for this site:
- Verified that stitching VLANs and Device URN information exists in the SITENAME Aggregate page.
- Verified that Delegated GENI Stitching VLANs for site are documented at the Delegated GENI Stitching VLANs.
- Stitching Computation Service logs were reviewed while testing stitching to this site, no issues found.
- Added site to the list of GENI Network Stitching Sites.
IG-ST-6 New Site OpenFlow topology
Create a slice and then create the stitched slivers using OpenFlow with the RSpec IG-ST-6.rspec:
omni.py createslice IG-ST-6
stitcher.py createsliver IG-ST-6 IG-ST-6.rspec -o
Determined login information at the SITENAME and GPO aggregates:
$ readyToLogin.py IG-ST-6 --useSliceAggregates ....
Measurements
Iperf US Ignite InstaGENI GPO VM to InstaGENI SITENAME VM (TCP) - TCP window size: 85.0 KByte (default)
Collected: 2017-02-xx
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI GPO VM to InstaGENI SITENAME VM (UDP) - UDP buffer size: 208 kByte (default)
Ping from US Ignite InstaGENI GPO VM to InstaGENI SITENAME VM
Iperf US Ignite InstaGENI SITENAME VM to InstaGENI GPO VM (TCP) - TCP window size: 85.0 KByte (default)
Collected: 2017-02-xx
One Client
Five Clients
Ten Clients
Iperf US Ignite InstaGENI SITENAME VM to InstaGENI GPO VM (UDP) - UDP buffer size: 208 kByte (default)
Ping from US Ignite InstaGENI SITENAME VM to InstaGENI GPO VM
---
Email help@geni.net for GENI support or email me with feedback on this page!