Changes between Version 71 and Version 72 of ExperimenterPortal


Timestamp:
05/10/11 11:49:12 (9 years ago)
Author:
Vic Thomas
Comment:

--

  • ExperimenterPortal

    v71 v72  
    2020
    2121The following figure illustrates the role of GENI clearinghouses and aggregates:
    22   [[Image(GENIComponentsPicture-2.png, 30%)]]
     22  [[Image(GENIComponentsPicture-2.png, 25%)]]
    2323
    2424You will also need to know about GENI ''slices''.  A slice holds a collection of computing and communications resources capable of running an experiment or a wide area service.  An experiment is a researcher-defined use of resources in a slice; an experiment runs in a slice.  A researcher may run multiple experiments using resources in a slice, concurrently or over time.
     
    2929GENI has a number of aggregates that make different kinds of resources available for use by experimenters.  Examples of such resources include: 
    3030  * ''Backbone networks.''  Geographically distributed GENI resources may be connected to one another using [http://www.internet2.edu/ Internet2], [http://www.nlr.net/ National Lambda Rail (NLR)] or the public Internet.  Many aggregates can be connected using Layer 2 VLANs over Internet2 and NLR.  Most aggregates can be connected using IP.
    31   * ''Programmable nodes.''   GENI provides a wide array of programmable hosts such as entire PCs from the ProtoGENI aggregate that can be booted with an experimenter specified operating system, operating system virtual machines from the PlanetLab aggregate that can host experimenter software, programming language virtual machines from the Million Node GENI aggregate and cloud computing resources from the GENICloud aggregate.
    32   * ''Programmable networks.''  Experimenter programmable switches within the GENI backbone networks (e.g. ProtoGENI backbone nodes and SPP ndoes) and at campuses around the country (e.g. Stanford OpenFlow network).
    33   * ''Wireless testbeds.''  Resources for wireless experiments such as the ORBIT and DOME testbeds.
     31  * ''Programmable nodes.''  GENI provides a wide array of programmable hosts: entire PCs from the [wiki:GeniAggregate/UtahProtoGeni ProtoGENI] aggregate that can be booted with an experimenter-specified operating system; operating system virtual machines that can host experimenter software from the [wiki:GeniAggregate/PlanetLab PlanetLab] and [wiki:GeniAggregate/UtahProtoGeni ProtoGENI] aggregates; programming language virtual machines from the [wiki:GeniAggregate/MillionNodeGeni Million Node GENI] aggregate; and cloud computing resources from the GENICloud aggregate.
     32  * ''Programmable networks.''  Experimenter programmable switches within the GENI backbone networks (e.g. [wiki:GeniAggregate/ProtoGeniBackBoneNodes ProtoGENI backbone nodes] and SPP nodes) and at campuses around the country (e.g. Stanford OpenFlow network).
     33  * ''Wireless testbeds.''  Resources for wireless experiments such as the [wiki:GeniAggregate/OrbitWirelessTestbed ORBIT] and [wiki:GeniAggregate/Dome DOME] testbeds.
    3434
    3535See [wiki:"ExperimenterPortal#a7GENIaggregatescurrentlyavailabletoexperimenters" Section 7] for a listing of GENI aggregates along with a description of the resources they provide.
     
    3939== 4 Picking Resources for Your Experiment ==
    4040As you plan your experiment you will want to consider:
    41   * ''The degree of control you need over your experiment.''  Do you need to tightly control the resources (CPU, bandwidth, etc.) allocated to your experiment or will best-effort suffice.  If you need a tightly controlled environment you might want to consider the U. of Utah ProtoGENI aggregate that allocate entire PCs that can be connected in arbitrary topologies.
     41  * ''The degree of control you need over your experiment.''  Do you need to tightly control the resources (CPU, bandwidth, etc.) allocated to your experiment, or will best-effort suffice?  If you need a tightly controlled environment, you might want to consider one of the ProtoGENI aggregates, which allocate entire PCs that can be connected in arbitrary topologies.
    4242  * ''The desired network topology.''  Does your experiment have to be geographically distributed?  What kinds of connectivity do you need between these geographically distributed locations?  Almost all aggregates can connect using IP connectivity over the Internet.  Many aggregates connect to one of the GENI backbones and allow you to set up IP connections with other resources on the backbone, giving you a bit more control over the network.  Some aggregates provide Layer 2 connectivity over a GENI backbone, i.e. you can set up VLANs between these aggregates and other resources on the backbone network.  This allows you to run non-IP protocols between the aggregate and other resources.
    4343  * ''The desired control over network flows.''  If you need to manage network traffic to/from an aggregate, you might want to use aggregates that connect to a GENI backbone using OpenFlow switches, or set up VLANs to these aggregates through the ProtoGENI Backbone Nodes or the SPP Nodes.
    4444  * ''The number of resources you need from an aggregate.''  Aggregates vary from small installations such as the GPO Lab ProtoGENI aggregate that consists of eleven nodes to the PlanetLab and ProtoGENI aggregates that consist of hundreds of nodes.
    45   * ''Support for the GENI Aggregate Manager API.'' Aggregates that support the GENI Aggregate Manager API generally recognize credentials issued by one of the GENI Clearinghouses.  Aggregates that do not will likely require you to get an account from them.   Additionally, a growing number of GENI experiment control tools support the GENI API i.e. these tools can be used to create slices, add resources from aggregates that support the GENI API, etc.  Examples of such tools include the [http://www.protogeni.net/trac/protogeni/wiki/MapInterface ProtoGENI Tools], [http://trac.gpolab.bbn.com/gcf/wiki/Omni  Omni] and [http://gush.cs.williams.edu/trac/gush Gush].
     45  * ''Whether the aggregate accepts GENI credentials.''  You will likely be able to use resources from these aggregates with a credential issued by a GENI clearinghouse; you do not have to contact the aggregate owner to get an account.  Additionally, aggregates that accept GENI credentials typically implement the GENI Aggregate Manager API.  A growing number of GENI experiment control tools support this API, i.e. they can be used to create slices, add resources from aggregates that support the GENI API, etc.  Examples of such tools include the [http://www.protogeni.net/trac/protogeni/wiki/MapInterface ProtoGENI Tools], [http://trac.gpolab.bbn.com/gcf/wiki/Omni Omni] and [http://gush.cs.williams.edu/trac/gush Gush].
    4646
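The GENI Aggregate Manager API mentioned above is exposed over XML-RPC, which is why one tool can talk to many aggregates. The sketch below only marshals a `GetVersion` call (the simplest AM API method, which takes no arguments) to show the wire format; the endpoint URL in the comment is hypothetical, and real calls additionally require SSL with your GENI certificate.

```python
import xmlrpc.client

# Marshal a GetVersion call (no arguments) to inspect the XML-RPC request
# body that an experiment control tool would send to an aggregate manager.
payload = xmlrpc.client.dumps((), methodname="GetVersion")
print(payload)

# Against a live aggregate manager this would look like (hypothetical URL,
# and the real AM API requires client-certificate SSL):
#   am = xmlrpc.client.ServerProxy("https://am.example.net:12346")
#   print(am.GetVersion())
```

Tools such as Omni wrap exactly this kind of call, adding credential handling on top.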
    4747The GENI Project Office is happy to help find the best match of resources for your experiments.  Please contact [mailto:help@geni.net] for assistance.
     
    7373[[BR]]
    7474
    75 == 8 GENI aggregates currently available to experimenters ==
     75== 8 GENI Aggregates Currently Available to Experimenters ==
    7676 GENI has two backbone networks: [http://www.internet2.edu/ Internet2] and [http://www.nlr.net/ National Lambda Rail (NLR)].  The Internet2 backbone provides 1 Gbps of dedicated bandwidth for GENI experiments and the NLR backbone provides up to 30 Gbps of non-dedicated bandwidth.  Some aggregates that connect to GENI backbone networks may be connected to other resources on the network using Layer 2 VLANs, giving experimenters the option of running non-IP-based Layer 3 and above protocols.  Experimenters wishing to connect Internet2-connected resources to NLR-connected resources may do so using switches in Atlanta.
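To make concrete what Layer 2 connectivity permits, a non-IP experiment can put its own EtherType on the wire. The sketch below builds a raw Ethernet frame by hand; the MAC addresses are illustrative, and EtherType 0x88B5 (an IEEE value reserved for local experimental use) stands in for whatever protocol the experiment defines. Actually transmitting it would require a raw socket on a slice node with a VLAN-attached interface.

```python
import struct

# Build a minimal Ethernet II frame carrying a non-IP payload.
dst = bytes.fromhex("ffffffffffff")    # broadcast MAC (illustrative)
src = bytes.fromhex("020000000001")    # locally administered MAC (illustrative)
ethertype = struct.pack("!H", 0x88B5)  # EtherType reserved for local experiments
payload = b"hello, layer 2"

frame = dst + src + ethertype + payload
# Pad to the 60-byte minimum Ethernet frame size (excluding the 4-byte FCS,
# which the NIC appends).
frame += b"\x00" * max(0, 60 - len(frame))

# Sending would need a raw socket bound to the VLAN interface, e.g. on Linux:
#   socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
```

Because the frame never touches IP, it can only cross the VLAN-connected Layer 2 path, not the public Internet.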
    7777
     
    159159                 <th><b>Description</b></th>
    160160                 <th><b>Compute Resources</b></th>
    161                  <th><b>Programmable Network</b></th>
    162161                 <th><b>Accepts GENI Credentials</b></th>
    163162                 <th><b>Network Connectivity</b></th>
     
    170169                <td> Five high-performance PlanetLab nodes at Internet2 co-location sites.  Nodes incorporate high-performance server and network processor blades to support service delivery over high speed overlay networks.  </td>
    171170                <td>  Experimenters program the General-Purpose Processing Engines (GPEs) and Network Processor Blades (NPE) of the SPP nodes. </td>
    172                 <td>  Yes </td>
    173171                <td>No</td>
    174172                <td> Internet2 </td>
     
    179177                <td> Nodes at 5 Internet2 co-location sites.  The ProtoGENI backbone runs Ethernet on a 1Gbps Internet2 wave, and slices it with VLANs.  Researchers select the topology of VLANs on this infrastructure. </td>
    180178                <td> No  </td>
    181                 <td>  Yes </td>
    182179                <td>Yes</td>
    183180                <td> Internet2: Layer 2 and IP; Internet2 ION service (incl. many ProtoGENI sites); 1 Gbps to GpENI and Wisconsin ProtoGENI site, 10 Gbps to Utah ProtoGENI site and Mid-Atlantic Crossroads; connected to SPP and ShadowNet nodes</td>
     
    188185                <td> BGP-session multiplexer that provides stable, on-demand access to global BGP route feeds. Arbitrary and even transient client BGP connections can be provisioned and torn down on demand without affecting globally visible BGP sessions. </td>
    189186                <td> No  </td>
    190                 <td> Yes  </td>
    191187                <td>No</td>
    192188                <td> Internet2 </td>
     
    197193                <td>  </td>
    198194                <td>   </td>
     195                <td>No</td>
     196                <td> Internet2 </td>
     197                <td>  </td>
     198        </tr>
     199        <tr>
     200                <td> Indiana Openflow Network </td>
     201                <td>  </td>
    199202                <td>   </td>
    200203                <td>No</td>
     
    203206        </tr>
    204207        <tr>
    205                 <td> Indiana Openflow Network </a></td>
    206                 <td>  </td>
    207                 <td>   </td>
    208                 <td>   </td>
    209                 <td>No</td>
    210                 <td> Internet2 </td>
    211                 <td>  </td>
    212         </tr>
    213         <tr>
    214208                <td> Rutgers Openflow Network </td>
    215209                <td>  </td>
    216                 <td>   </td>
    217210                <td>   </td>
    218211                <td>No</td>
     
    224217                <td>  <a href="http://www.openflow.org/"> OpenFlow </a> testbed consisting of three OpenFlow-controlled switches (one each of HP, NEC, and Quanta) and an Expedient AM/OIM/FV stack. </td>
    225218                <td> Computing resources provided by the GPO Lab myPLC and GPO Lab ProtoGENI aggregates  </td>
    226                 <td>  Yes </td>
    227219                <td>Yes</td>
    228220                <td> Internet2: IP and Layer 2, NLR: IP and Layer 2 </td>