
Version 8 (modified by Josh Karlin, 14 years ago)


The Omni GENI Client

Omni is an end-user GENI client that communicates with GENI Aggregate Managers and presents their resources with a uniform specification, known as the omnispec. The Omni client can also communicate with control frameworks in order to create slices, delete slices, and enumerate available GENI Aggregate Managers. Note that Omni also supports using control framework native RSpecs.

To configure Omni, please copy src/omni_config or /etc/omni/templates/omni_config to your ~/.gcf directory and fill in the parameters for at least one control framework. The "omni" section should be filled in with the certificate and key that you use in your control framework. Note that keys for the GCF framework are by default stored in ~/.gcf-servers. Embedded comments describe the meaning of each field.
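A minimal illustrative sketch of an omni_config follows; the section layout matches the description above, but the exact field names are assumptions, and the template's embedded comments are authoritative:

```
[omni]
# Certificate and key you use with your control framework
# (field names here are assumptions -- check the template's comments)
certificate = ~/.gcf/your-cert.pem
key = ~/.gcf/your-key.pem

[gcf]
# The 'authority' line enables shorthand slice names with GCF
authority = geni.net:gpo:gcf
```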

The currently supported control frameworks are SFA, PG and GCF. OpenFlow Aggregate Managers are also supported.

Omni performs the following functions:

  • Talks to each clearinghouse in its native API
  • Contacts Aggregate Managers via the GENI API
  • Converts native RSpecs to and from a common 'omnispec' format.

Omnispecs

Each OmniResource has a name, a description, a type, booleans indicating whether the resource is allocated and whether the request wants to allocate it, and then hashes for options and misc fields.
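For instance, a single resource entry in an omnispec (keyed by its URN, as in the OpenFlow example later on this page) looks like this sketch in Python, with illustrative values:

```python
import json

# One omnispec resource entry; the field set mirrors the OpenFlow
# example later on this page, with illustrative values.
resource = {
    "name": "Stanford:openflow:stanford",
    "description": "OpenFlow Switch",
    "type": "switch",
    "allocated": False,  # is the resource already in use?
    "allocate": False,   # flip to True in your edited file to request it
    "options": {"dl_src": "from=*, to=*", "port:0": "switch:1 port:0"},
    "misc": {},
}
print(json.dumps(resource, indent=2))
```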

Extending Omni

Extending Omni to support additional types of Aggregate Managers with different RSpec formats requires adding a new omnispec/rspec conversion file.

Extending Omni to support additional frameworks with their own clearinghouse APIs requires adding a new Framework extension class.
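As a rough sketch of what a conversion file does (the function name and omnispec registration details here are assumptions, not Omni's actual internal API), a converter translates a native advertisement into omnispec resource entries:

```python
import xml.etree.ElementTree as ET

def toy_rspec_to_omnispec(rspec_xml):
    """Hypothetical converter: translate a toy <rspec> of <switch>
    elements into omnispec-style resources (names are illustrative)."""
    resources = {}
    for sw in ET.fromstring(rspec_xml).iter("switch"):
        urn = sw.get("urn")
        resources[urn] = {
            "name": urn.split("+")[-1],
            "description": "OpenFlow Switch",
            "type": "switch",
            "allocated": False,
            "allocate": False,
            "options": {},
            "misc": {},
        }
    return {"type": "rspec_of", "resources": resources}

ad = '<rspec><switch urn="urn:publicid:IDN+openflow:stanford+switch:0"/></rspec>'
spec = toy_rspec_to_omnispec(ad)
print(list(spec["resources"]))
```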

Omni workflow

  1. Pick a Clearinghouse you want to use. That is the control framework you will use.
  1. Be sure the appropriate section of omni_config for your framework (sfa/gcf/pg) has appropriate settings for contacting that Clearinghouse, and user credentials that are valid for that Clearinghouse.
  1. Run omni.py listresources > avail-resources.omnispec
    1. When you do this, Omni will contact your designated Clearinghouse, using your framework-specific user credentials.
    2. The clearinghouse will list the Aggregates it knows about. E.g., for GCF, the am_* entries in gcf_config; for SFA, it returns the contents of /etc/sfa/geni_aggregates.xml.
    3. Omni will then contact each of the Aggregates that the Clearinghouse told it about, and use the GENI AM API to ask each for its resources. Again, it will use your user credentials. So each Aggregate Manager must trust the signer of your user credentials, in order for you to talk to it. This is why you add the CH certificate to /etc/sfa/trusted_roots or to the -r argument of your GCF gcf-am.py.
    4. Omni will then convert the proprietary RSpecs into a single 'omnispec'.
  1. Save this to a file. You can then edit this file to reserve resources, by changing 'allocate: false' to 'allocate: true' on any resource that is not already allocated ('allocated: true').
  1. Create a Slice. Slices are created at your Clearinghouse. Slices are named based on the Clearinghouse authority that signs for them. Using the shorthand (just the name of your slice within PG, for example) allows Omni to ensure your Slice is named correctly. So run: omni.py createslice MyGreatTestSlice
  1. Allocate your Resources. Given a slice and your edited omnispec file, you are ready to allocate resources by creating slivers at each of the Aggregate Managers. Omni will contact your Clearinghouse again, to get the credentials for your slice. It will parse your omnispec file, converting it back into framework-specific RSpec format as necessary. It will then contact each Aggregate Manager in your omnispec where you are reserving resources, calling the GENI AM API CreateSliver call on each. It will supply your Slice Credentials (from the Clearinghouse) plus your own user certificate, and the RSpec.

At this point, you have resources and can do your experiment.

  1. Renew or Delete. After a while, you may want to Renew your expiring Sliver, or Delete it. Omni will contact the Clearinghouse, get a list of all Aggregates, and invoke RenewSliver or DeleteSliver on each, for your slice name.
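Step 4 above (editing the saved omnispec to request resources) can be sketched as follows, with a toy spec standing in for a real avail-resources.omnispec:

```python
def mark_for_allocation(omnispec):
    """Set 'allocate' to True on every resource that is not already
    allocated -- the hand-edit described in the workflow above."""
    for res in omnispec["resources"].values():
        if not res["allocated"]:
            res["allocate"] = True
    return omnispec

spec = {"resources": {
    "urn:free-switch":  {"allocated": False, "allocate": False},
    "urn:taken-switch": {"allocated": True,  "allocate": False},
}}
mark_for_allocation(spec)
print(spec["resources"]["urn:free-switch"]["allocate"])   # True
print(spec["resources"]["urn:taken-switch"]["allocate"])  # False
```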

Running Omni

The following options are supported:

-c FILE -- location of your config file (default ~/.gcf/omni_config)

-f FRAMEWORK -- control framework to use (e.g. my_sfa), overriding the default in the config file. The framework is a section named in the config file.

--debug -- Enable debug output

The following commands are supported:

createslice

  • format: omni.py createslice <slice-name>
  • example: omni.py createslice myslice

Creates the slice in your chosen control framework.

Default GCF certs require a slice named geni.net:gpo:gcf+slice+<name>, based on the base_name section of gcf_config. Shorthand notation is available for SFA and PG; for GCF, shorthand works only if you have configured the 'authority' line in the gcf section of omni_config.
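The GCF naming rule can be read as simple string concatenation (a sketch; base_name is assumed here to be a plain string taken from gcf_config):

```python
# GCF slice naming, per the rule above: <base_name>+slice+<name>
base_name = "geni.net:gpo:gcf"   # from the base_name section of gcf_config
slice_name = "myslice"
print(base_name + "+slice+" + slice_name)  # geni.net:gpo:gcf+slice+myslice
```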

deleteslice

  • format: omni.py deleteslice <slice-name>
  • example: omni.py deleteslice myslice

Deletes the slice in your chosen control framework.

listresources

  • format: omni.py listresources [slice-name] [-a AM-URL] [-n]
  • example: omni.py listresources
  • example: omni.py listresources myslice
  • example: omni.py listresources myslice -a http://localhost:12348
  • example: omni.py listresources myslice -a http://localhost:12348 -n

This command will list the RSpecs of all GENI aggregates available through your chosen framework, and present them in omnispec form. Save the result to a file and edit the allocate value to true to set up a reservation RSpec, suitable for use in a call to createsliver. If a slice name is supplied, then only resources for that slice will be displayed. If an Aggregate Manager URL is supplied, only resources from that AM will be listed. If the "-n" flag is used, the native RSpec is returned instead of an omnispec. The "-n" flag requires that the "-a" flag also be used to specify an aggregate manager.

createsliver

  • format: omni.py createsliver <slice-name> [-a AM-URL] [-n] <spec file>
  • example: omni.py createsliver myslice resources.ospec
  • example: omni.py createsliver myslice -a http://localhost:12348 -n resources.rspec
  • argument: the spec file should have been created by a call to listresources (e.g. omni.py listresources > resources.ospec). Then edit the file and set "allocate": true for each resource that you want to allocate.

This command will allocate the requested resources (those marked with allocate: true in the RSpec). It will send an RSpec to each aggregate manager from which a resource is requested. This command can also operate in native mode ("-n") by sending a native RSpec to the single aggregate specified with the "-a" option.

deletesliver

  • format: omni.py deletesliver <slice-name>
  • example: omni.py deletesliver myslice

This command will free any resources associated with your slice.

renewsliver

  • format: omni.py renewsliver <slice-name> "<time>"
  • example: omni.py renewsliver myslice "12/12/10 4:15pm"
  • example: omni.py renewsliver myslice "12/12/10 16:15"

This command will renew your resources at each aggregate up to the specified time. This time must be less than or equal to the time available to the slice. Times are in UTC.
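The two example time formats can both be parsed like this (a sketch, not Omni's own parser; times are treated as UTC per the note above):

```python
from datetime import datetime

def parse_renew_time(text):
    """Accept the two formats shown in the examples above (UTC)."""
    for fmt in ("%m/%d/%y %I:%M%p", "%m/%d/%y %H:%M"):
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized time: " + text)

print(parse_renew_time("12/12/10 4:15pm"))   # 2010-12-12 16:15:00
print(parse_renew_time("12/12/10 16:15"))    # 2010-12-12 16:15:00
```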

sliverstatus

  • format: omni.py sliverstatus <slice-name>
  • example: omni.py sliverstatus myslice

This command will get information from each aggregate about the status of the specified slice.

shutdown

  • format: omni.py shutdown <slice-name>
  • example: omni.py shutdown myslice

This command will stop the resources from running, but will not delete their state. This command should not be needed by most users.

Omnispecs

OpenFlow

A native OpenFlow advertisement RSpec looks like this:

<rspec>
    <network name="Stanford" location="Stanford, CA, USA">
        <switches>
            <switch urn="urn:publicid:IDN+openflow:stanford+switch:0" />
            <switch urn="urn:publicid:IDN+openflow:stanford+switch:1" />
            <switch urn="urn:publicid:IDN+openflow:stanford+switch:2" />
        </switches>
        <links>
            <link
            src_urn="urn:publicid:IDN+openflow:stanford+switch:0+port:0"
            dst_urn="urn:publicid:IDN+openflow:stanford+switch:1+port:0"
            />
            <link
            src_urn="urn:publicid:IDN+openflow:stanford+switch:1+port:0"
            dst_urn="urn:publicid:IDN+openflow:stanford+switch:0+port:0"
            />
            <link
            src_urn="urn:publicid:IDN+openflow:stanford+switch:0+port:1"
            dst_urn="urn:publicid:IDN+openflow:stanford+switch:2+port:0"
            />
            <link
            src_urn="urn:publicid:IDN+openflow:stanford+switch:2+port:0"
            dst_urn="urn:publicid:IDN+openflow:stanford+switch:0+port:1"
            />
            <link
            src_urn="urn:publicid:IDN+openflow:stanford+switch:1+port:1"
            dst_urn="urn:publicid:IDN+openflow:stanford+switch:2+port:1"
            />
            <link
            src_urn="urn:publicid:IDN+openflow:stanford+switch:2+port:1"
            dst_urn="urn:publicid:IDN+openflow:stanford+switch:1+port:1"
            />
        </links>
    </network>
    <network name="Princeton" location="USA">
        <switches>
            <switch urn="urn:publicid:IDN+openflow:stanford+switch:3" />
            <switch urn="urn:publicid:IDN+openflow:stanford+switch:4" />
        </switches>
        <links>
            <link
            src_urn="urn:publicid:IDN+openflow:stanford+switch:3+port:0"
            dst_urn="urn:publicid:IDN+openflow:stanford+switch:4+port:0"
            />
            <link
            src_urn="urn:publicid:IDN+openflow:stanford+switch:4+port:0"
            dst_urn="urn:publicid:IDN+openflow:stanford+switch:3+port:0"
            />
        </links>
    </network>
</rspec>

This gets converted by Omni (when you call listresources) into an omnispec that looks like this:

{
    "urn": "urn:publicid:IDN+openflow:stanford+authority+am", 
    "type": "rspec_of", 
    "resources": {
        "urn:publicid:IDN+openflow:stanford+switch:2": {
            "name": "Stanford:openflow:stanford", 
            "misc": {}, 
            "allocate": false, 
            "allocated": false, 
            "type": "switch", 
            "options": {
                "dl_type": "from=*, to=*", 
                "port:1": "switch:1 port:1", 
                "port:0": "switch:0 port:1", 
                "nw_dst": "from=*, to=*", 
                "dl_src": "from=*, to=*", 
                "nw_proto": "from=*, to=*", 
                "tp_dst": "from=*, to=*", 
                "tp_src": "from=*, to=*", 
                "dl_dst": "from=*, to=*", 
                "nw_src": "from=*, to=*", 
                "vlan_id": "from=*, to=*"
            }, 
            "description": "OpenFlow Switch"
        }, 
        "urn:publicid:IDN+openflow:stanford+switch:3": {
            "name": "Princeton:openflow:stanford", 
            "misc": {}, 
            "allocate": false, 
            "allocated": false, 
            "type": "switch", 
            "options": {
                "dl_type": "from=*, to=*", 
                "port:0": "switch:4 port:0", 
                "nw_dst": "from=*, to=*", 
                "dl_src": "from=*, to=*", 
                "nw_proto": "from=*, to=*", 
                "tp_dst": "from=*, to=*", 
                "tp_src": "from=*, to=*", 
                "dl_dst": "from=*, to=*", 
                "nw_src": "from=*, to=*", 
                "vlan_id": "from=*, to=*"
            }, 
            "description": "OpenFlow Switch"
        }, 
        "urn:publicid:IDN+openflow:stanford+switch:0": {
            "name": "Stanford:openflow:stanford", 
            "misc": {}, 
            "allocate": false, 
            "allocated": false, 
            "type": "switch", 
            "options": {
                "dl_type": "from=*, to=*", 
                "port:1": "switch:2 port:0", 
                "port:0": "switch:1 port:0", 
                "nw_dst": "from=*, to=*", 
                "dl_src": "from=*, to=*", 
                "nw_proto": "from=*, to=*", 
                "tp_dst": "from=*, to=*", 
                "tp_src": "from=*, to=*", 
                "dl_dst": "from=*, to=*", 
                "nw_src": "from=*, to=*", 
                "vlan_id": "from=*, to=*"
            }, 
            "description": "OpenFlow Switch"
        }, 
        "urn:publicid:IDN+openflow:stanford+switch:1": {
            "name": "Stanford:openflow:stanford", 
            "misc": {}, 
            "allocate": false, 
            "allocated": false, 
            "type": "switch", 
            "options": {
                "dl_type": "from=*, to=*", 
                "port:1": "switch:2 port:1", 
                "port:0": "switch:0 port:0", 
                "nw_dst": "from=*, to=*", 
                "dl_src": "from=*, to=*", 
                "nw_proto": "from=*, to=*", 
                "tp_dst": "from=*, to=*", 
                "tp_src": "from=*, to=*", 
                "dl_dst": "from=*, to=*", 
                "nw_src": "from=*, to=*", 
                "vlan_id": "from=*, to=*"
            }, 
            "description": "OpenFlow Switch"
        }, 
        "urn:publicid:IDN+openflow:stanford+user+sliceinfo": {
            "name": "sliceinfo", 
            "misc": {}, 
            "allocate": false, 
            "allocated": false, 
            "type": "user", 
            "options": {
                "project_description": "Internet performance research to ...", 
                "controller_url": "tcp:localhost:6633", 
                "slice_name": "Crazy Load Balancing Experiment", 
                "firstname": "John", 
                "lastname": "Doe", 
                "project_name": "Stanford Networking Group", 
                "fv_password": "slice_pass", 
                "slice_description": "Does crazy load balancing and plate spinning", 
                "email": "jdoe@geni.net"
            }, 
            "description": "Slice information for FlowVisor Access"
        }, 
        "urn:publicid:IDN+openflow:stanford+switch:4": {
            "name": "Princeton:openflow:stanford", 
            "misc": {}, 
            "allocate": false, 
            "allocated": false, 
            "type": "switch", 
            "options": {
                "dl_type": "from=*, to=*", 
                "port:0": "switch:3 port:0", 
                "nw_dst": "from=*, to=*", 
                "dl_src": "from=*, to=*", 
                "nw_proto": "from=*, to=*", 
                "tp_dst": "from=*, to=*", 
                "tp_src": "from=*, to=*", 
                "dl_dst": "from=*, to=*", 
                "nw_src": "from=*, to=*", 
                "vlan_id": "from=*, to=*"
            }, 
            "description": "OpenFlow Switch"
        }
    }
}

To allocate, set "allocate" to true for each switch that you want, and fill in the flowspace options for the flowspace on that switch. Then fill in the 'sliceinfo' section with your name, password, etc.
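Those edits can be sketched on a trimmed-down version of the omnispec shown above (the narrowed nw_src value and the sliceinfo values are illustrative, not required syntax):

```python
spec = {"resources": {
    "urn:publicid:IDN+openflow:stanford+switch:0": {
        "allocate": False, "allocated": False,
        "options": {"nw_src": "from=*, to=*"},
    },
    "urn:publicid:IDN+openflow:stanford+user+sliceinfo": {
        "allocate": False, "allocated": False,
        "options": {"firstname": "", "lastname": "", "fv_password": ""},
    },
}}

# Request the switch and narrow its flowspace (value is illustrative).
switch = spec["resources"]["urn:publicid:IDN+openflow:stanford+switch:0"]
switch["allocate"] = True
switch["options"]["nw_src"] = "from=10.0.0.0/8, to=*"

# Fill in the sliceinfo entry so FlowVisor can grant access.
info = spec["resources"]["urn:publicid:IDN+openflow:stanford+user+sliceinfo"]
info["options"].update(firstname="John", lastname="Doe", fv_password="slice_pass")
print(switch["allocate"], info["options"]["firstname"])
```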