OG-EXP-6: OpenGENI and Meso-scale Multi-site OpenFlow Acceptance Test
This page captures status for the test case OG-EXP-6. For additional information, see the Acceptance Test Status - May 2013 page for overall status, or the OpenGENI Acceptance Test Plan for details about the planned evaluation.
Last Update: 2013/05/17
Step | State | Notes | Tickets |
Step 1 | Pass | ||
Step 2 | Pass | ||
Step 3 | Pass: most criteria | Aggregate retains information for failed sliver | # 71 |
Step 4 | Pass | ||
Step 5 | Pass | ||
Step 6 | Pass: most criteria | Unable to request IP Address | #56 |
Step 7 | Pass | ||
Step 8 | Pass | ||
Step 9 | Pass | ||
Step 10 | Pass | ||
Step 11 | Pass | ||
Step 12 | Fail | Cannot install Iperf for baseline measurements | #57 |
Step 13 | Pass | ||
Step 14 | Pass | ||
Step 15 | Pass | ||
Step 16 | Pass | ||
Step 17 | Fail | No tools available to install static flows into controller | |
Step 18 | Fail | No tools available to install static flows into controller | |
Step 19 | Pass | ||
State Legend | Description |
Pass | Test completed and met all criteria |
Pass: most criteria | Test completed and met most criteria. Exceptions documented |
Fail | Test completed and failed to meet criteria. |
Complete | Test completed but will require re-execution due to expected changes |
Blocked | Blocked by ticketed issue(s). |
In Progress | Currently under test. |
Test Plan Steps
This test case uses the following aggregate nicknames:
gram=,https://128.89.91.170:5001
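These nicknames typically live in the omni_config used by omni.py; a minimal sketch of the relevant section (the gram2 entry, which appears in the transcripts below with port 5002, is an assumption inferred from the logged URLs):

```ini
[aggregate_nicknames]
# nickname = URN,URL  (URN left empty here)
gram=,https://128.89.91.170:5001
gram2=,https://128.89.91.170:5002
```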
Evaluation Note: This test case runs within one rack; there are no remote OpenFlow aggregates.
Evaluation Note: Failed sliver information is preserved in the aggregate. Ticket #71
Evaluation Note: Experimenter is not able to request IP addresses. Ticket #56
Evaluation Note: Cannot install iperf for baseline measurements; no sudo access. Ticket #57
Evaluation Note: No tools available to install static flows into the controller. This is not an OpenGENI issue, but it prevents steps 17 and 18 from being executed.
Step 1. As Experimenter1, request ListResources from BBN OpenGENI
Experimenter lnevers requested a list of resources:
$ omni.py listresources -a gram2 -V2 INFO:omni:Loading config file /home/lnevers/.gcf/omni_config INFO:omni:Using control framework gram INFO:omni:Substituting AM nickname gram2 with URL https://128.89.91.170:5002, URN unspecified_AM_URN INFO:omni:Substituting AM nickname gram2 with URL https://128.89.91.170:5002, URN unspecified_AM_URN INFO:omni:Listed advertised resources at 1 out of 1 possible aggregates. INFO:omni:Substituting AM nickname gram2 with URL https://128.89.91.170:5002, URN unspecified_AM_URN INFO:omni:<?xml version="1.0" ?> INFO:omni: <!-- Resources at AM: URN: unspecified_AM_URN URL: https://128.89.91.170:5002 --> INFO:omni: <rspec type="advertisement" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/3/ad.xsd http://www.geni.net/resources/rspec/ext/opstate/1 http://www.geni.net/resources/rspec/ext/opstate/1/ad.xsd"> <node client_id="VM" component_id="urn:public:geni:gpo:vm+ca262be4-0bd5-48a9-a2ba-e0a7d94ca9a4" component_manager_id="urn:publicid:geni:bos:gcf+authority+am" component_name="ca262be4-0bd5-48a9-a2ba-e0a7d94ca9a4" exclusive="False"> <node_type type_name="m1.tiny"/> <node_type type_name="m1.small"/> <node_type type_name="m1.medium"/> <node_type type_name="m1.large"/> <node_type type_name="m1.xlarge"/> <disk_image description="" name="ubuntu-12.04" os="Linux" version="12"/> <sliver_type name="m1.small"/> <available now="True"/> </node></rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed listresources: Options as run: aggregate: ['gram2'] framework: gram Args: listresources Result Summary: Queried resources from 1 of 1 aggregate(s). INFO:omni: ============================================================
Step 2. Review list resources and determine OpenFlow resources
There is no way to review available OpenFlow resources in the advertisement. It is possible to get from VMOC a list of the resource allocations in the switch: a registry of active slices and the VLANs configured.
lnevers@boscontroller:~$ echo "dump" |nc localhost 7001
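The dump can also be captured programmatically. A minimal sketch, assuming (as the nc pipeline above suggests) that VMOC on the rack head node answers a newline-terminated command on TCP port 7001 and then closes the connection:

```python
import socket

def query_vmoc(host="localhost", port=7001, command="dump", timeout=5.0):
    """Send a command to VMOC's management port and return the full reply.

    Equivalent to: echo "dump" | nc localhost 7001
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall((command + "\n").encode())
        s.shutdown(socket.SHUT_WR)      # signal end of command, as nc does
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:                # server closed: reply is complete
                break
            chunks.append(data)
    return b"".join(chunks).decode()

# registry = query_vmoc()  # requires VMOC running on the rack head node
```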
Step 3. Define a request RSpec for a VM at the BBN OpenGENI.
The following RSpec was defined:
<?xml version="1.0" encoding="UTF-8"?> <rspec type="request" xmlns="http://www.geni.net/resources/rspec/3" xmlns:openflow="http://www.geni.net/resources/rspec/ext/openflow/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/ext/openflow/3 http://www.geni.net/resources/rspec/3/request.xsd"> <node client_id="of-vm1" component_manager_id="urn:publicid:geni:bos:gcf+authority+am" > <sliver_type name="m1.small"> <disk_image description="" name="ubuntu-12.04" os="Linux" version="12"/> </sliver_type> <interface client_id="of-vm1:if0" > </interface> </node> <node client_id="of-vm2" component_manager_id="urn:publicid:geni:bos:gcf+authority+am" > <sliver_type name="m1.small"> <disk_image description="" name="ubuntu-12.04" os="Linux" version="12"/> </sliver_type> <interface client_id="of-vm2:if0" > </interface> </node> <link client_id="link-vm1-vm2"> <interface_ref client_id="of-vm1:if0"/> <interface_ref client_id="of-vm2:if0"/> </link> <openflow:controller url="tcp:10.10.8.71:6633" type="primary"/> </rspec>
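A request RSpec like this can be sanity-checked locally before createsliver. A sketch (operating on a cut-down copy of the request above, with the sliver_type and openflow:controller elements omitted) that verifies every interface_ref in a link points at an interface declared on some node:

```python
import xml.etree.ElementTree as ET

# Cut-down copy of the request RSpec above (illustrative only)
RSPEC = """<rspec xmlns="http://www.geni.net/resources/rspec/3" type="request">
  <node client_id="of-vm1"><interface client_id="of-vm1:if0"/></node>
  <node client_id="of-vm2"><interface client_id="of-vm2:if0"/></node>
  <link client_id="link-vm1-vm2">
    <interface_ref client_id="of-vm1:if0"/>
    <interface_ref client_id="of-vm2:if0"/>
  </link>
</rspec>"""

NS = "{http://www.geni.net/resources/rspec/3}"
root = ET.fromstring(RSPEC)                      # also catches malformed XML
declared = {i.get("client_id") for i in root.iter(NS + "interface")}
referenced = {r.get("client_id") for r in root.iter(NS + "interface_ref")}
dangling = referenced - declared
assert not dangling, "undeclared interfaces: %s" % dangling
```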
Step 4. Create the first slice.
Create the slice:
$ omni.py createslice ln6633 INFO:omni:Loading config file /home/lnevers/.gcf/omni_config INFO:omni:Using control framework gram INFO:omni:Created slice with Name ln6633, URN urn:publicid:IDN+geni:bos:gcf+slice+ln6633, Expiration 2013-05-17 15:25:34 INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createslice: Options as run: framework: gram Args: createslice ln6633 Result Summary: Created slice with Name ln6633, URN urn:publicid:IDN+geni:bos:gcf+slice+ln6633, Expiration 2013-05-17 15:25:34 INFO:omni: ============================================================
Step 5. Create a sliver using the RSpecs defined above.
Note: Failed sliver information is preserved in the aggregate. Ticket #71
$ omni.py -a gram2 -V2 createsliver ln6633 ./OG-EXP-6.rspec INFO:omni:Loading config file /home/lnevers/.gcf/omni_config INFO:omni:Using control framework gram INFO:omni:Substituting AM nickname gram2 with URL https://128.89.91.170:5002, URN unspecified_AM_URN WARNING:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+ln6633 expires in <= 3 hours INFO:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+ln6633 expires on 2013-05-17 15:25:34 UTC INFO:omni:Substituting AM nickname gram2 with URL https://128.89.91.170:5002, URN unspecified_AM_URN INFO:omni:Substituting AM nickname gram2 with URL https://128.89.91.170:5002, URN unspecified_AM_URN INFO:omni:Creating sliver(s) from rspec file ./OG-EXP-6.rspec for slice urn:publicid:IDN+geni:bos:gcf+slice+ln6633 INFO:omni:Got return from CreateSliver for slice ln6633 at https://128.89.91.170:5002: INFO:omni:<?xml version="1.0" ?> INFO:omni: <!-- Reserved resources for: Slice: ln6633 at AM: URN: unspecified_AM_URN URL: https://128.89.91.170:5002 --> INFO:omni: <rspec type="manifest" xmlns="http://www.geni.net/resources/rspec/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/ext/openflow/3 http://www.geni.net/resources/rspec/3/manifest.xsd"> <node client_id="of-vm1" component_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+node+boscompute2" component_manager_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm0c7363a9-3bbc-4d6f-ab60-af67667a0232"> <interface client_id="of-vm1:if0" mac_address="fa:16:3e:27:43:6b" sliver_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface35638f90-4383-440b-897e-f76f35a24c7d"> <ip address="10.0.70.100" type="ip"/> </interface> <sliver_type name="m1.small"> <disk_image name="urn:publicid:IDN+boscontroller.gpolab.bbn.com+imageubuntu-12.04" os="Linux" version="12"/> </sliver_type> <services> 
<login authentication="ssh-keys" hostname="boscontroller" port="3000" username="lnevers"/> </services> <host name="of-vm1"/> </node> <link client_id="link-vm1-vm2" sliver_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+link8f732665-cf1f-4d25-8dd8-70ca48fa62d0"> <interface_ref client_id="of-vm1:if0"/> <interface_ref client_id="of-vm2:if0"/> </link> <node client_id="of-vm2" component_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+node+boscompute1" component_manager_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm" exclusive="false" sliver_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vmca506e79-be07-4c71-8bcf-c04325ac967b"> <interface client_id="of-vm2:if0" mac_address="fa:16:3e:13:66:fa" sliver_id="urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface1be51546-51fc-46fb-9411-04c4966d5d25"> <ip address="10.0.70.101" type="ip"/> </interface> <sliver_type name="m1.small"> <disk_image name="urn:publicid:IDN+boscontroller.gpolab.bbn.com+imageubuntu-12.04" os="Linux" version="12"/> </sliver_type> <services> <login authentication="ssh-keys" hostname="boscontroller" port="3001" username="lnevers"/> </services> <host name="of-vm2"/> </node> </rspec> INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createsliver: Options as run: aggregate: ['gram2'] framework: gram Args: createsliver ln6633 ./OG-EXP-6.rspec Result Summary: Got Reserved resources RSpec from 128-89-91-170-5002 INFO:omni: ============================================================
Step 6. Log in to each of the systems, verify IP address assignment. Send traffic to the other system, leave traffic running.
Note: Unable to verify IP addresses; the experimenter cannot request IP addresses from OpenGENI. Ticket #56
Get login information:
$ egrep login ln6633-manifest-rspec-128-89-91-170-5002.xml <login authentication="ssh-keys" hostname="boscontroller" port="3000" username="lnevers"/> <login authentication="ssh-keys" hostname="boscontroller" port="3001" username="lnevers"/>
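The same login details can be pulled out of the manifest with a few lines of Python instead of egrep. A sketch over a cut-down copy of the manifest's services sections above:

```python
import xml.etree.ElementTree as ET

# Cut-down copy of the manifest login entries above (illustrative)
MANIFEST = """<rspec xmlns="http://www.geni.net/resources/rspec/3" type="manifest">
  <node client_id="of-vm1"><services>
    <login authentication="ssh-keys" hostname="boscontroller" port="3000" username="lnevers"/>
  </services></node>
  <node client_id="of-vm2"><services>
    <login authentication="ssh-keys" hostname="boscontroller" port="3001" username="lnevers"/>
  </services></node>
</rspec>"""

NS = "{http://www.geni.net/resources/rspec/3}"
logins = [(l.get("hostname"), int(l.get("port")), l.get("username"))
          for l in ET.fromstring(MANIFEST).iter(NS + "login")]
# logins -> [('boscontroller', 3000, 'lnevers'), ('boscontroller', 3001, 'lnevers')]
```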
Log in to first host:
$ ssh 128.89.91.170 -p 3000 lnevers@of-vm1:~$ ifconfig eth1 eth1 Link encap:Ethernet HWaddr fa:16:3e:27:43:6b inet addr:10.0.70.100 Bcast:10.0.70.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fe27:436b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:128 errors:0 dropped:0 overruns:0 frame:0 TX packets:22 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:20707 (20.7 KB) TX bytes:3821 (3.8 KB) lnevers@of-vm2:~$ ifconfig eth1 eth1 Link encap:Ethernet HWaddr fa:16:3e:13:66:fa inet addr:10.0.70.101 Bcast:10.0.70.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fe13:66fa/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:99 errors:0 dropped:0 overruns:0 frame:0 TX packets:19 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:15620 (15.6 KB) TX bytes:3630 (3.6 KB)
Log in to second host:
$ ssh 128.89.91.170 -p 3001 lnevers@of-vm2:~$ ifconfig eth1 eth1 Link encap:Ethernet HWaddr fa:16:3e:13:66:fa inet addr:10.0.70.101 Bcast:10.0.70.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fe13:66fa/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:99 errors:0 dropped:0 overruns:0 frame:0 TX packets:19 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:15620 (15.6 KB) TX bytes:3630 (3.6 KB) lnevers@of-vm2:~$ ping 10.0.70.100 PING 10.0.70.100 (10.0.70.100) 56(84) bytes of data. 64 bytes from 10.0.70.100: icmp_req=1 ttl=64 time=1.57 ms 64 bytes from 10.0.70.100: icmp_req=2 ttl=64 time=0.978 ms 64 bytes from 10.0.70.100: icmp_req=3 ttl=64 time=0.873 ms 64 bytes from 10.0.70.100: icmp_req=4 ttl=64 time=0.947 ms 64 bytes from 10.0.70.100: icmp_req=5 ttl=64 time=0.864 ms
Verified VMOC information for the two-node experiment:
lnevers@boscontroller:~$ echo "dump" |nc localhost 7001 >>OG-EXP-6-vmoc lnevers@boscontroller:~$ tail -10 OG-EXP-6-vmoc VMOC Slice Registry: urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-13: {'vlan_configurations': [], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-13'} urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-1: {'vlan_configurations': [], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-1'} urn:publicid:IDN+geni:bos:gcf+slice+lngram: {'vlan_configurations': [], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+lngram'} urn:publicid:IDN+geni:bos:gcf+slice+ln6633: {'vlan_configurations': [{'vlan': 1001, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+ln6633'} https://localhost:9000 tcp:10.10.8.71:6633 {'vlan_configurations': [{'vlan': 1001, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+ln6633'} 1001: {'vlan_configurations': [{'vlan': 1001, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+ln6633'}
Step 7. As Experimenter2, define a request RSpec for two VMs at BBN OpenGENI.
As lnevers2, defined the following RSpec:
<?xml version="1.0" encoding="UTF-8"?> <rspec type="request" xmlns="http://www.geni.net/resources/rspec/3" xmlns:openflow="http://www.geni.net/resources/rspec/ext/openflow/3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/ext/openflow/3 http://www.geni.net/resources/rspec/3/request.xsd"> <node client_id="exp2-of-vm1" component_manager_id="urn:publicid:geni:bos:gcf+authority+am" > <interface client_id="exp2-of-vm1:if0" > </interface> </node> <node client_id="exp2-of-vm2" component_manager_id="urn:publicid:geni:bos:gcf+authority+am" > <interface client_id="exp2-of-vm2:if0" > </interface> </node> <link client_id="exp2-vm1-vm2"> <interface_ref client_id="exp2-of-vm1:if0"/> <interface_ref client_id="exp2-of-vm2:if0"/> </link> <openflow:controller url="tcp:10.10.8.71:6633" type="primary"/> </rspec>
Step 8. Create a second slice
As lnevers2, created slice:
$ omni.py createslice OG-EXP-6-2 INFO:omni:Loading config file /home/lnevers2/.gcf/omni_config INFO:omni:Using control framework gram INFO:omni:Created slice with Name OG-EXP-6-2, URN urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2, Expiration 2013-05-17 15:54:30 INFO:omni: ------------------------------------------------------------ INFO:omni: Completed createslice: Options as run: framework: gram Args: createslice OG-EXP-6-2 Result Summary: Created slice with Name OG-EXP-6-2, URN urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2, Expiration 2013-05-17 15:54:30 INFO:omni: ============================================================
Step 9. Create a second sliver via AM API V3
Created a sliver via AM API V3. First, allocated resources:
$ omni.py allocate OG-EXP-6-2 -a gram -V3 ./OG-EXP-6-2.rspec INFO:omni:Loading config file /home/lnevers2/.gcf/omni_config INFO:omni:Using control framework gram INFO:omni:Substituting AM nickname gram with URL https://128.89.91.170:5001, URN unspecified_AM_URN WARNING:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 expires in <= 3 hours INFO:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 expires on 2013-05-17 15:54:30 UTC INFO:omni:Substituting AM nickname gram with URL https://128.89.91.170:5001, URN unspecified_AM_URN INFO:omni:Allocate slivers in slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 at https://128.89.91.170:5001: INFO:omni:{ "geni_rspec": "<?xml version=\"1.0\" ?> INFO:omni: <!-- Reserved resources for: Slice: OG-EXP-6-2 at AM: URN: unspecified_AM_URN URL: https://128.89.91.170:5001 --> INFO:omni: <rspec type=\"manifest\" xmlns=\"http://www.geni.net/resources/rspec/3\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/ext/openflow/3 http://www.geni.net/resources/rspec/3/manifest.xsd\"> \n <node client_id=\"exp2-of-vm1\" component_manager_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm\" exclusive=\"false\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm7eb21a0f-a59e-4a83-96b2-53df196057d0\"> \n <interface client_id=\"exp2-of-vm1:if0\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface052ebf9a-083f-4836-abd2-0d8d849623ca\"/> \n <sliver_type name=\"m1.small\"> \n <disk_image name=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+imageubuntu-12.04\" os=\"Linux\" version=\"12\"/> \n </sliver_type> \n <host name=\"exp2-of-vm1\"/> \n </node> \n <node client_id=\"exp2-of-vm2\" component_manager_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm\" exclusive=\"false\" 
sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vmff538923-1154-4b9e-be2d-ad5a010ebaa0\"> \n <interface client_id=\"exp2-of-vm2:if0\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface2a49e96f-ee7a-45c0-984c-c0dbfadb0b27\"/> \n <sliver_type name=\"m1.small\"> \n <disk_image name=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+imageubuntu-12.04\" os=\"Linux\" version=\"12\"/> \n </sliver_type> \n <host name=\"exp2-of-vm2\"/> \n </node> \n <link client_id=\"exp2-vm1-vm2\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+linka21589dd-1f36-4839-a704-01e21b6bfb75\"> \n <interface_ref client_id=\"exp2-of-vm1:if0\"/> \n <interface_ref client_id=\"exp2-of-vm2:if0\"/> \n </link> \n</rspec>", "geni_slivers": [ { "geni_sliver_urn": "urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm7eb21a0f-a59e-4a83-96b2-53df196057d0", "geni_expires": "2013-05-17T14:06:24.886914+00:00", "geni_allocation_status": "geni_allocated", "geni_operational_status": "geni_notready", "geni_error": "" }, { "geni_sliver_urn": "urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vmff538923-1154-4b9e-be2d-ad5a010ebaa0", "geni_expires": "2013-05-17T14:06:24.886914+00:00", "geni_allocation_status": "geni_allocated", "geni_operational_status": "geni_notready", "geni_error": "" }, { "geni_sliver_urn": "urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+linka21589dd-1f36-4839-a704-01e21b6bfb75", "geni_expires": "2013-05-17T14:06:24.886914+00:00", "geni_allocation_status": "geni_allocated", "geni_operational_status": "geni_notready", "geni_error": "" } ] } INFO:omni:All slivers expire on '2013-05-17T14:06:24.886914' INFO:omni: ------------------------------------------------------------ INFO:omni: Completed allocate: Options as run: aggregate: ['gram'] api_version: 3 framework: gram Args: allocate OG-EXP-6-2 ./OG-EXP-6-2.rspec Result Summary: Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 expires in <= 3 hours on 2013-05-17 15:54:30 UTC 
Allocated slivers in slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 at https://128.89.91.170:5001. Next sliver expiration: 2013-05-17T14:06:24.886914 INFO:omni: ============================================================
Then provisioned resources:
$ omni.py provision OG-EXP-6-2 -a gram -V3 INFO:omni:Loading config file /home/lnevers2/.gcf/omni_config INFO:omni:Using control framework gram INFO:omni:Substituting AM nickname gram with URL https://128.89.91.170:5001, URN unspecified_AM_URN WARNING:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 expires in <= 3 hours INFO:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 expires on 2013-05-17 15:54:30 UTC INFO:omni:Substituting AM nickname gram with URL https://128.89.91.170:5001, URN unspecified_AM_URN INFO:omni:Provision slivers in slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 at https://128.89.91.170:5001 INFO:omni:{ "geni_rspec": "<?xml version=\"1.0\" ?> INFO:omni: <!-- Provision slivers in slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 at AM URL https://128.89.91.170:5001 --> INFO:omni: <rspec type=\"manifest\" xmlns=\"http://www.geni.net/resources/rspec/3\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.geni.net/resources/rspec/3 http://www.geni.net/resources/rspec/ext/openflow/3 http://www.geni.net/resources/rspec/3/manifest.xsd\"> \n <link client_id=\"exp2-vm1-vm2\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+linka21589dd-1f36-4839-a704-01e21b6bfb75\"> \n <interface_ref client_id=\"exp2-of-vm1:if0\"/> \n <interface_ref client_id=\"exp2-of-vm2:if0\"/> \n </link> \n <node client_id=\"exp2-of-vm2\" component_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+node+boscompute1\" component_manager_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm\" exclusive=\"false\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vmff538923-1154-4b9e-be2d-ad5a010ebaa0\"> \n <interface client_id=\"exp2-of-vm2:if0\" mac_address=\"fa:16:3e:c9:95:2a\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface2a49e96f-ee7a-45c0-984c-c0dbfadb0b27\"> \n <ip address=\"10.0.71.101\" type=\"ip\"/> \n </interface> \n <sliver_type 
name=\"m1.small\"> \n <disk_image name=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+imageubuntu-12.04\" os=\"Linux\" version=\"12\"/> \n </sliver_type> \n <services> \n <login authentication=\"ssh-keys\" hostname=\"boscontroller\" port=\"3003\" username=\"lnevers2\"/> \n </services> \n <host name=\"exp2-of-vm2\"/> \n </node> \n <node client_id=\"exp2-of-vm1\" component_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+node+boscompute2\" component_manager_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm\" exclusive=\"false\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm7eb21a0f-a59e-4a83-96b2-53df196057d0\"> \n <interface client_id=\"exp2-of-vm1:if0\" mac_address=\"fa:16:3e:a4:df:3f\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface052ebf9a-083f-4836-abd2-0d8d849623ca\"> \n <ip address=\"10.0.71.100\" type=\"ip\"/> \n </interface> \n <sliver_type name=\"m1.small\"> \n <disk_image name=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+imageubuntu-12.04\" os=\"Linux\" version=\"12\"/> \n </sliver_type> \n <services> \n <login authentication=\"ssh-keys\" hostname=\"boscontroller\" port=\"3004\" username=\"lnevers2\"/> \n </services> \n <host name=\"exp2-of-vm1\"/> \n </node> \n</rspec>", "geni_slivers": [ { "geni_sliver_urn": "urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+linka21589dd-1f36-4839-a704-01e21b6bfb75", "geni_expires": "2013-05-17T15:54:30+00:00", "geni_allocation_status": "geni_provisioned", "geni_operational_status": "geni_ready", "geni_error": "" }, { "geni_sliver_urn": "urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vmff538923-1154-4b9e-be2d-ad5a010ebaa0", "geni_expires": "2013-05-17T15:54:30+00:00", "geni_allocation_status": "geni_provisioned", "geni_operational_status": "geni_notready", "geni_error": "" }, { "geni_sliver_urn": "urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm7eb21a0f-a59e-4a83-96b2-53df196057d0", "geni_expires": "2013-05-17T15:54:30+00:00", 
"geni_allocation_status": "geni_provisioned", "geni_operational_status": "geni_notready", "geni_error": "" } ] } INFO:omni:All slivers expire on '2013-05-17T15:54:30' INFO:omni: ------------------------------------------------------------ INFO:omni: Completed provision: Options as run: aggregate: ['gram'] api_version: 3 framework: gram Args: provision OG-EXP-6-2 Result Summary: Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 expires in <= 3 hours on 2013-05-17 15:54:30 UTC Provisioned slivers in slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 at https://128.89.91.170:5001. Next sliver expiration: 2013-05-17T15:54:30 INFO:omni: ============================================================
Step 10. Log in to each of the systems in the slice, and send traffic to the other system; leave traffic running
Determined login information:
lnevers2@arendia:~$ omni.py describe OG-EXP-6-2 -a gram -V3 -o lnevers2@arendia:~$ egrep login OG-EXP-6-2-describe-128-89-91-170-5001.json <login authentication=\"ssh-keys\" hostname=\"boscontroller\" port=\"3003\" username=\"lnevers2\"/> <login authentication=\"ssh-keys\" hostname=\"boscontroller\" port=\"3004\" username=\"lnevers2\"/>
Logged into first host:
lnevers2@arendia:~$ ssh 128.89.91.170 -p 3003 lnevers2@exp2-of-vm1:~$ ifconfig eth1 eth1 Link encap:Ethernet HWaddr fa:16:3e:a4:df:3f inet addr:10.0.71.100 Bcast:10.0.71.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fea4:df3f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:97 errors:0 dropped:0 overruns:0 frame:0 TX packets:19 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:15505 (15.5 KB) TX bytes:3114 (3.1 KB) lnevers2@exp2-of-vm1:~$ ping 10.0.71.101 PING 10.0.71.101 (10.0.71.101) 56(84) bytes of data. 64 bytes from 10.0.71.101: icmp_req=1 ttl=64 time=2.96 ms 64 bytes from 10.0.71.101: icmp_req=2 ttl=64 time=0.734 ms 64 bytes from 10.0.71.101: icmp_req=3 ttl=64 time=0.838 ms 64 bytes from 10.0.71.101: icmp_req=4 ttl=64 time=0.717 ms 64 bytes from 10.0.71.101: icmp_req=5 ttl=64 time=0.802 ms
Logged into second host:
lnevers2@arendia:~$ ssh 128.89.91.170 -p 3004 lnevers2@exp2-of-vm2:~$ ifconfig eth1 eth1 Link encap:Ethernet HWaddr fa:16:3e:c9:95:2a inet addr:10.0.71.101 Bcast:10.0.71.255 Mask:255.255.255.0 inet6 addr: fe80::f816:3eff:fec9:952a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:103 errors:0 dropped:0 overruns:0 frame:0 TX packets:17 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:15297 (15.2 KB) TX bytes:2980 (2.9 KB) lnevers2@exp2-of-vm2:~$ ping 10.0.71.100 PING 10.0.71.100 (10.0.71.100) 56(84) bytes of data. 64 bytes from 10.0.71.100: icmp_req=1 ttl=64 time=1.59 ms 64 bytes from 10.0.71.100: icmp_req=2 ttl=64 time=0.778 ms 64 bytes from 10.0.71.100: icmp_req=3 ttl=64 time=0.753 ms
Step 11. Verify the experiments' traffic exchange
Verify that the two experiments run without impacting each other's traffic, and that data is exchanged over the path along which it is supposed to flow.
Verify that experiment 1 (OG-EXP-6) hosts cannot communicate with experiment 2 (OG-EXP-6-2) hosts:
lnevers@of-vm1:~$ ping 10.0.71.100 PING 10.0.71.100 (10.0.71.100) 56(84) bytes of data. ^C --- 10.0.71.100 ping statistics --- 17 packets transmitted, 0 received, 100% packet loss, time 16000ms lnevers@of-vm1:~$
Verify that experiment 2 (OG-EXP-6-2) hosts cannot communicate with experiment 1 (OG-EXP-6) hosts:
lnevers2@exp2-of-vm1:~$ ping 10.0.70.100 PING 10.0.70.100 (10.0.70.100) 56(84) bytes of data. ^C --- 10.0.70.100 ping statistics --- 31 packets transmitted, 0 received, 100% packet loss, time 30239ms lnevers2@exp2-of-vm1:~$
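The isolation result can be checked mechanically by parsing the ping summary line. A small sketch using the statistics captured above:

```python
import re

def packet_loss(ping_output):
    """Extract the packet-loss percentage from a `ping` summary line."""
    m = re.search(r"(\d+(?:\.\d+)?)% packet loss", ping_output)
    if m is None:
        raise ValueError("no ping statistics found")
    return float(m.group(1))

# The cross-experiment ping above reported total loss, i.e. isolation:
summary = "17 packets transmitted, 0 received, 100% packet loss, time 16000ms"
assert packet_loss(summary) == 100.0
```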
Step 12. Review baseline monitoring statistics and checks.
Note: Cannot install iperf for baseline measurements. Ticket #57
Step 13. As site administrator, identify all controllers that the BBN OpenGENI OpenFlow switch is connected to.
Checked the VMOC controller for the data path of each experiment (OG-EXP-6 and OG-EXP-6-2):
lnevers@boscontroller:~$ echo "dump" |nc localhost 7001 ..... VMOC Slice Registry: urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-13: {'vlan_configurations': [], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-13'} urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-1: {'vlan_configurations': [], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-1'} urn:publicid:IDN+geni:bos:gcf+slice+lngram: {'vlan_configurations': [], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+lngram'} urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2: {'vlan_configurations': [{'vlan': 1002, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+G urn:publicid:IDN+geni:bos:gcf+slice+ln6633: {'vlan_configurations': [{'vlan': 1001, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+ln663 https://localhost:9000 tcp:10.10.8.71:6633 {'vlan_configurations': [{'vlan': 1001, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+ln6633'} {'vlan_configurations': [{'vlan': 1002, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2'} 1001: {'vlan_configurations': [{'vlan': 1001, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+ln6633'} 1002: {'vlan_configurations': [{'vlan': 1002, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2'}
The controller for each VLAN can be shown on the switch, which provides the address and port number of the VMOC controller:
bosswitch# show openflow 1001 Openflow Configuration - VLAN 1001 Openflow state [Disabled] : Disabled Controller pseudo-URL : tcp:128.89.91.170:6633 Listener pseudo-URL : Openflow software rate limit [100] : 100 Openflow connecting max backoff [60] : 60 Openflow hardware acceleration [Enabled] : Enabled Openflow hardware rate limit [0] : 0 Openflow hardware stats max refresh rate [0] : 0 Openflow fail-secure [Disabled] : Enabled Second Controller pseudo-URL : Third Controller pseudo-URL : Openflow Status - VLAN 1001 Switch MAC address : 00:25:61:36:9D:00 Openflow datapath ID : 03E9002561369D00 Instance not running, no controller connection Number of hardware rules: 0 bosswitch# show openflow 1002 Openflow Configuration - VLAN 1002 Openflow state [Disabled] : Disabled Controller pseudo-URL : tcp:128.89.91.170:6633 Listener pseudo-URL : Openflow software rate limit [100] : 100 Openflow connecting max backoff [60] : 60 Openflow hardware acceleration [Enabled] : Enabled Openflow hardware rate limit [0] : 0 Openflow hardware stats max refresh rate [0] : 0 Openflow fail-secure [Disabled] : Enabled Second Controller pseudo-URL : Third Controller pseudo-URL : Openflow Status - VLAN 1002 Switch MAC address : 00:25:61:36:9D:00 Openflow datapath ID : 03EA002561369D00 Instance not running, no controller connection Number of hardware rules: 0
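Note that the per-VLAN datapath IDs shown above appear to be the VLAN ID (16 bits) prepended to the switch MAC (00:25:61:36:9D:00): 0x03E9 = 1001 and 0x03EA = 1002. A sketch of that decoding (the layout is inferred from the two examples above, not from switch documentation):

```python
def decode_dpid(dpid_hex):
    """Split a per-VLAN OpenFlow datapath ID into (vlan, mac).

    Inferred layout: first 16 bits are the VLAN ID, remaining 48 bits
    are the switch MAC address.
    """
    vlan = int(dpid_hex[:4], 16)
    mac = ":".join(dpid_hex[i:i + 2] for i in range(4, 16, 2)).lower()
    return vlan, mac

# Datapath ID for VLAN 1001 from the switch output above:
assert decode_dpid("03E9002561369D00") == (1001, "00:25:61:36:9d:00")
```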
Step 14. Set the hard and soft timeout of flowtable entries
The hard and soft timeout defaults can be modified by editing the VMOC source file VMOCUtils.py, found in the /home/gram/gram/src/vmoc directory. In VMOCUtils.py the default soft timeout (idle_timeout) is 10 seconds, and the default hard timeout is 30 seconds.
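The two timeouts behave differently: the soft (idle) timeout removes a flow after it goes unmatched for 10 seconds, while the hard timeout removes it 30 seconds after installation regardless of traffic. A hypothetical helper (not VMOC code) illustrating the semantics with those defaults:

```python
IDLE_TIMEOUT_S = 10  # soft timeout: reset by every matching packet
HARD_TIMEOUT_S = 30  # hard timeout: absolute lifetime of the entry

def flow_entry_expired(now, installed_at, last_match_at,
                       idle=IDLE_TIMEOUT_S, hard=HARD_TIMEOUT_S):
    """Return True if a flowtable entry with these timeouts has expired."""
    return (now - installed_at) >= hard or (now - last_match_at) >= idle
```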
Step 15. Get switch statistics and flowtable entries for slivers from the OpenFlow switch.
Connected to the switch and captured OpenFlow statistics:
bosswitch# show openflow Openflow Configuration Openflow aggregate VLANs [Disabled] : Openflow aggregate management VlanId [0] : 0 Openflow second aggregate management VlanId [0] : 0 Openflow aggregate configuration VlanId [0] : 0 VID State HW Controller Pseudo-URL Conn ---- ----- --- -------------------------------------------------- ---- 1000 Off On tcp:128.89.91.170:6633 No 1001 Off On tcp:128.89.91.170:6633 No 1002 Off On tcp:128.89.91.170:6633 No 1003 Off On tcp:128.89.91.170:6633 No 1004 Off On tcp:128.89.91.170:6633 No 1005 On On tcp:128.89.91.170:6633 Yes 1006 On On tcp:128.89.91.170:6633 Yes 1007 On On tcp:128.89.91.170:6633 Yes 1008 On On tcp:128.89.91.170:6633 Yes 1009 On On tcp:128.89.91.170:6633 Yes 1010 On On tcp:128.89.91.170:6633 Yes
Show information for VLAN 1001, which is used by the first experiment (ln6633):
bosswitch# show openflow 1001 Openflow Configuration - VLAN 1001 Openflow state [Disabled] : Disabled Controller pseudo-URL : tcp:128.89.91.170:6633 Listener pseudo-URL : Openflow software rate limit [100] : 100 Openflow connecting max backoff [60] : 60 Openflow hardware acceleration [Enabled] : Enabled Openflow hardware rate limit [0] : 0 Openflow hardware stats max refresh rate [0] : 0 Openflow fail-secure [Disabled] : Enabled Second Controller pseudo-URL : Third Controller pseudo-URL : Openflow Status - VLAN 1001 Switch MAC address : 00:25:61:36:9D:00 Openflow datapath ID : 03E9002561369D00 Instance not running, no controller connection Number of hardware rules: 0 bosswitch# show vlans 1001 Status and Counters - VLAN Information - VLAN 1001 VLAN ID : 1001 Name : vlan1001 Status : Port-based Voice : No Jumbo : No Port Information Mode Unknown VLAN Status ---------------- -------- ------------ ---------- 8 Tagged Learn Up 16 Tagged Learn Up 24 Tagged Learn Up 34 Tagged Learn Up
Show information for the second experiment:
{{{
bosswitch# show vlan 1002

 Status and Counters - VLAN Information - VLAN 1002

  VLAN ID : 1002
  Name : vlan1002
  Status : Port-based
  Voice : No
  Jumbo : No

  Port Information Mode     Unknown VLAN Status
  ---------------- -------- ------------ ----------
  8                Tagged   Learn        Up
  16               Tagged   Learn        Up
  24               Tagged   Learn        Up
  34               Tagged   Learn        Up

bosswitch# show mac-address

 Status and Counters - Port Address Table

  MAC Address   Port   VLAN
  ------------- ------ ----
  001b21-bc2710 48     820
  00c0b7-58dca9 48     820
  782bcb-4f626e 48     820
  782bcb-505c02 48     820
  782bcb-506801 48     820
  782bcb-506803 48     820
  782bcb-51a684 48     820
  ccef48-675bd0 48     820
  fa163e-1366fa 8      1001
  fa163e-27436b 34     1001
  fa163e-f60aa6 16     1001
  fa163e-25abbf 16     1002
  fa163e-a4df3f 34     1002
  fa163e-c9952a 8      1002
}}}
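The per-VLAN rows in the MAC address table can be post-processed to see which switch ports each experiment's VMs appear on. A small sketch, using the experiment rows pasted from the output above:

```python
# Group the switch's MAC address table by VLAN to see which ports each
# experiment's VMs appear on. Rows are copied from the 'show mac-address'
# output above (management-VLAN 820 rows omitted).
MAC_TABLE = """\
fa163e-1366fa 8  1001
fa163e-27436b 34 1001
fa163e-f60aa6 16 1001
fa163e-25abbf 16 1002
fa163e-a4df3f 34 1002
fa163e-c9952a 8  1002
"""

def ports_by_vlan(table):
    """Return {vlan: sorted list of ports} from 'MAC Port VLAN' rows."""
    result = {}
    for line in table.strip().splitlines():
        mac, port, vlan = line.split()
        result.setdefault(int(vlan), set()).add(int(port))
    return {vlan: sorted(ports) for vlan, ports in result.items()}

print(ports_by_vlan(MAC_TABLE))
# {1001: [8, 16, 34], 1002: [8, 16, 34]}
```

Both experiment VLANs use the same three ports (8, 16, 34), consistent with the tagged-port listings in the VLAN output above.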
The MAC address on port 8 for VLAN 1002 belongs to "of-vm2" in experiment OG-EXP-6-2. Traffic statistics for the VM "of-vm2" on port 8:
{{{
bosswitch# show interfaces 8

 Status and Counters - Port Counters for port 8

  Name :
  MAC Address : 002561-369de0
  Link Status : Up

  Totals (Since boot or last clear) :
   Bytes Rx        : 49,103,554          Bytes Tx        : 245,278,086
   Unicast Rx      : 200,103             Unicast Tx      : 162,608
   Bcast/Mcast Rx  : 84,953              Bcast/Mcast Tx  : 1,259,131

  Errors (Since boot or last clear) :
   FCS Rx          : 0                   Drops Tx        : 0
   Alignment Rx    : 0                   Collisions Tx   : 0
   Runts Rx        : 0                   Late Colln Tx   : 0
   Giants Rx       : 0                   Excessive Colln : 0
   Total Rx Errors : 0                   Deferred Tx     : 0

  Others (Since boot or last clear) :
   Discard Rx      : 0                   Out Queue Len   : 0
   Unknown Protos  : 0

  Rates (5 minute weighted average) :
   Total Rx (bps)        : 4,199,472     Total Tx (bps)        : 4,037,200
   Unicast Rx (Pkts/sec) : 0             Unicast Tx (Pkts/sec) : 0
   B/Mcast Rx (Pkts/sec) : 0             B/Mcast Tx (Pkts/sec) : 0
   Utilization Rx        : 00.41 %       Utilization Tx        : 00.40 %
}}}
Show information for VLAN 1002, which is used by the second experiment (OG-EXP-6-2):
{{{
bosswitch# show openflow 1002

 Openflow Configuration - VLAN 1002

  Openflow state [Disabled] : Disabled
  Controller pseudo-URL : tcp:128.89.91.170:6633
  Listener pseudo-URL :
  Openflow software rate limit [100] : 100
  Openflow connecting max backoff [60] : 60
  Openflow hardware acceleration [Enabled] : Enabled
  Openflow hardware rate limit [0] : 0
  Openflow hardware stats max refresh rate [0] : 0
  Openflow fail-secure [Disabled] : Enabled
  Second Controller pseudo-URL :
  Third Controller pseudo-URL :

 Openflow Status - VLAN 1002

  Switch MAC address : 00:25:61:36:9D:00
  Openflow datapath ID : 03EA002561369D00
  Instance not running, no controller connection
  Number of hardware rules: 0
}}}
Step 16. Get layer 2 topology information about slivers in each slice.
From the controller:
{{{
$ python /etc/gram/dump_gram_snapshot.py --directory . --snapshot 2013_05_17_09_56_24_0.json
Dumping snapshot 2013_05_17_09_56_24_0.json:
Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-2-exp1
Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-1a
Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2
  Sliver urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+linka21589dd-1f36-4839-a704-01e21b6bfb75 User: urn:publicid:IDN+geni:bos:gcf+user+lnevers2
  Sliver urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vmff538923-1154-4b9e-be2d-ad5a010ebaa0 User: urn:publicid:IDN+geni:bos:gcf+user+lnevers2
  Sliver urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface2a49e96f-ee7a-45c0-984c-c0dbfadb0b27 User: urn:publicid:IDN+geni:bos:gcf+user+lnevers2
  Sliver urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface052ebf9a-083f-4836-abd2-0d8d849623ca User: urn:publicid:IDN+geni:bos:gcf+user+lnevers2
  Sliver urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm7eb21a0f-a59e-4a83-96b2-53df196057d0 User: urn:publicid:IDN+geni:bos:gcf+user+lnevers2
}}}
From the snapshot 2013_05_17_10_01_50_0.json, ARP information is available:
"network_interfaces": ["urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+interface35638f90-4383-440b-897e-f76f35a24c7d"], "__type__": "VirtualMachine", "last_octet": "100", "operational_state": "geni_ready", "os_version": "12", "mgmt_net_addr": "192.168.10.4", "manifest_rspec": "<node client_id=\"of-vm1\" component_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+node+boscompute2\" component_manager_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+authority+cm\" exclusive=\"false\" sliver_id=\"urn:publicid:IDN+boscontroller.gpolab.bbn.com+sliver+vm0c7363a9-3bbc-4d6f-ab60-af67667a0232\"><interface client_id=\"of-vm1:if0\" mac_address=\"fa:16:3e:27:43:6b\
From a full VMOC data dump, we found the following information specific to VLAN 1001 for experiment 1:
{{{
VMOCSwitchControllerMap:
 Switches:
  <vmoc.VMOCSwitchConnection.VMOCSwitchConnection object at 0x19b5950>
   [VMOCControllerConnection DPID 282038087208836352 URL https://localhost:9000 VLAN 1001]
  <vmoc.VMOCSwitchConnection.VMOCSwitchConnection object at 0x19b5910>
   [VMOCControllerConnection DPID 285978736882785536 URL https://localhost:9000 VLAN 1001]
   [VMOCControllerConnection DPID 285978736882785536 URL https://localhost:9000 VLAN 1001]
   [VMOCControllerConnection DPID 285978736882785536 URL https://localhost:9000 VLAN 1001]
  <vmoc.VMOCSwitchConnection.VMOCSwitchConnection object at 0x19b5890>
   [VMOCControllerConnection DPID 285978736882785536 URL https://localhost:9000 VLAN 1001]
  <vmoc.VMOCSwitchConnection.VMOCSwitchConnection object at 0x19b5910>
   [VMOCControllerConnection DPID 285978736882785536 URL https://localhost:9000 VLAN 1001]
  <vmoc.VMOCSwitchConnection.VMOCSwitchConnection object at 0x19b5910>
   [VMOCControllerConnection DPID 282038087208836352 URL https://localhost:9000 VLAN 1001]
  <vmoc.VMOCSwitchConnection.VMOCSwitchConnection object at 0x19b5950>
   [VMOCControllerConnection DPID 285415786929364224 URL https://localhost:9000 VLAN 1001]
  <vmoc.VMOCSwitchConnection.VMOCSwitchConnection object at 0x19b5890>
 Controllers (by VLAN)
  1001
   [VMOCControllerConnection DPID 285978736882785536 URL https://localhost:9000 VLAN 1001]
   [VMOCControllerConnection DPID 285978736882785536 URL https://localhost:9000 VLAN 1001]
   [VMOCControllerConnection DPID 282038087208836352 URL https://localhost:9000 VLAN 1001]
   [VMOCControllerConnection DPID 285978736882785536 URL https://localhost:9000 VLAN 1001]
   [VMOCControllerConnection DPID 285415786929364224 URL https://localhost:9000 VLAN 1001]
urn:publicid:IDN+geni:bos:gcf+slice+ln6633:
 {'vlan_configurations': [{'vlan': 1001, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+ln6633'}
 {'vlan_configurations': [{'vlan': 1001, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+ln6633'}
1002:
 {'vlan_configurations': [{'vlan': 1002, 'controller_url': u'tcp:10.10.8.71:6633'}], 'slice_id': u'urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2'}
}}}
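The per-slice configuration dicts in the VMOC dump associate each slice with a VLAN and a controller URL. A small sketch that condenses them into a VLAN-to-controller map (values copied from the dump above):

```python
# Summarize VMOC slice configurations (as dumped above) into a simple
# {vlan: (slice_urn, controller_url)} map.
slice_configs = [
    {'vlan_configurations': [{'vlan': 1001, 'controller_url': 'tcp:10.10.8.71:6633'}],
     'slice_id': 'urn:publicid:IDN+geni:bos:gcf+slice+ln6633'},
    {'vlan_configurations': [{'vlan': 1002, 'controller_url': 'tcp:10.10.8.71:6633'}],
     'slice_id': 'urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2'},
]

def vlan_map(configs):
    """Map each VLAN to the (slice URN, controller URL) that owns it."""
    mapping = {}
    for cfg in configs:
        for vc in cfg['vlan_configurations']:
            mapping[vc['vlan']] = (cfg['slice_id'], vc['controller_url'])
    return mapping

for vlan, (slice_urn, url) in sorted(vlan_map(slice_configs).items()):
    print(vlan, slice_urn.rsplit('+', 1)[-1], url)
# 1001 ln6633 tcp:10.10.8.71:6633
# 1002 OG-EXP-6-2 tcp:10.10.8.71:6633
```

This makes the Step 16 result easy to state: each experiment's VLAN (1001 and 1002) is bound to the same experimenter controller at tcp:10.10.8.71:6633.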
Step 17. Install layer 2 flows
Install flows that match only on layer 2 fields, and confirm whether the matching is done in hardware.
Note: No tools are available to install static flows.
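For reference, some controllers do expose static flow installation; Floodlight's Static Flow Pusher REST interface is one example (Floodlight is not the controller in this setup, so this is purely illustrative). The sketch below only builds the JSON payload such a layer 2-only match would use; field names follow Floodlight's circa-2013 staticflowentrypusher API and should be treated as assumptions:

```python
import json

# Hypothetical layer 2 static flow entry in the style of Floodlight's
# Static Flow Pusher (NOT available in this OpenGENI setup; illustrative
# only). The DPID and MACs reuse values observed in this test; the flow
# matches only on layer 2 fields (ingress port + source/destination MAC).
l2_flow = {
    "switch": "03:e9:00:25:61:36:9d:00",  # datapath ID from 'show openflow 1001'
    "name": "og-exp-6-l2",
    "priority": "100",
    "ingress-port": "8",
    "src-mac": "fa:16:3e:13:66:fa",
    "dst-mac": "fa:16:3e:27:43:6b",
    "active": "true",
    "actions": "output=34",
}
payload = json.dumps(l2_flow)
print(payload)
# The entry would then be POSTed to the controller's static flow endpoint
# (e.g. /wm/staticflowentrypusher/json in Floodlight of that era).
```

With such an entry installed, hardware matching could be confirmed on the switch by watching "Number of hardware rules" in the `show openflow <vlan>` output from Step 15.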
Step 18. Install layer 3 flows
If supported, install flows that match only on layer 3 fields, and confirm whether the matching is done in hardware.
Note: No tools are available to install static flows.
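A layer 3-only match in the same hypothetical Static Flow Pusher style would set the EtherType and IP fields instead of MAC fields (again illustrative; the 10.10.8.x addresses are placeholders, and the field names are assumed from Floodlight's circa-2013 API):

```python
import json

# Hypothetical layer 3 static flow entry (same illustrative Floodlight-style
# fields as the layer 2 sketch; NOT available in this OpenGENI setup).
l3_flow = {
    "switch": "03:e9:00:25:61:36:9d:00",
    "name": "og-exp-6-l3",
    "priority": "100",
    "ether-type": "0x0800",   # match IPv4 so the IP fields below are valid
    "src-ip": "10.10.8.71",   # placeholder addresses
    "dst-ip": "10.10.8.72",
    "active": "true",
    "actions": "output=16",
}
print(json.dumps(l3_flow))
```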
Step 19. Delete slivers.
As lnevers, delete the first sliver:
{{{
$ omni.py deletesliver -a gram2 -V2 ln6633
INFO:omni:Loading config file /home/lnevers/.gcf/omni_config
INFO:omni:Using control framework gram
INFO:omni:Substituting AM nickname gram2 with URL https://128.89.91.170:5002, URN unspecified_AM_URN
WARNING:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+ln6633 expires in <= 3 hours
INFO:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+ln6633 expires on 2013-05-17 19:23:00 UTC
INFO:omni:Substituting AM nickname gram2 with URL https://128.89.91.170:5002, URN unspecified_AM_URN
INFO:omni:Deleted sliver urn:publicid:IDN+geni:bos:gcf+slice+ln6633 on unspecified_AM_URN at https://128.89.91.170:5002
INFO:omni: ------------------------------------------------------------
INFO:omni: Completed deletesliver:
  Options as run:
    aggregate: ['gram2']
    framework: gram
  Args: deletesliver ln6633
  Result Summary: Deleted sliver urn:publicid:IDN+geni:bos:gcf+slice+ln6633 on unspecified_AM_URN at https://128.89.91.170:5002
INFO:omni: ============================================================
}}}
As lnevers2, delete the second sliver:
{{{
lnevers2@arendia:~$ omni.py delete OG-EXP-6-2 -a gram -V3
INFO:omni:Loading config file /home/lnevers2/.gcf/omni_config
INFO:omni:Using control framework gram
INFO:omni:Substituting AM nickname gram with URL https://128.89.91.170:5001, URN unspecified_AM_URN
WARNING:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 expires in <= 3 hours
INFO:omni:Slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 expires on 2013-05-17 19:24:44 UTC
INFO:omni:Substituting AM nickname gram with URL https://128.89.91.170:5001, URN unspecified_AM_URN
INFO:omni:Deleted slivers in slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 on unspecified_AM_URN at https://128.89.91.170:5001
INFO:omni:Deletion of slivers in slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 at AM URL https://128.89.91.170:5001
INFO:omni:[]
INFO:omni: ------------------------------------------------------------
INFO:omni: Completed delete:
  Options as run:
    aggregate: ['gram']
    api_version: 3
    framework: gram
  Args: delete OG-EXP-6-2
  Result Summary: Deleted slivers in slice urn:publicid:IDN+geni:bos:gcf+slice+OG-EXP-6-2 on unspecified_AM_URN at https://128.89.91.170:5001
INFO:omni: ============================================================
}}}