[[PageOutline(2-3, Table of Contents)]]

= Experimentation with GENI =

== 1.0 Why GENI? ==

GENI might be right for you if your experiment requires:

 * ''More resources than would ordinarily be found in your lab.'' Since GENI is a suite of infrastructures, it can potentially provide you with more resources than are typically found in any one laboratory. This is especially true for compute resources: GENI provides access to large testbeds with hundreds of PCs and to cloud computing resources.
 * ''Non-IP connectivity across resources.'' Some GENI aggregates allow you to set up Layer 2 connections between resources within the aggregate. Experimenters may install and run their own Layer 3 and above protocols on these resources. It is also possible to set up Layer 2 connections between many GENI aggregates that connect to the GENI backbone networks (Internet2 and NLR). You can even set up your network to route through experimenter-programmable switches in the GENI backbone.
 * ''A deeply programmable network.'' GENI has switches in the backbone and at the edges that you can program to set up the network topologies you need and to control flows in your network.
 * ''Geographically distributed resources.'' Some GENI resources are distributed around the world.
 * ''Reproducibility.'' You can get exclusive access to certain GENI resources, including CPU and network resources. This gives you control over your experiment's environment and hence the ability for you and others to repeat experiments under identical or very similar conditions.

[[BR]]

== 2.0 An Experimenter's View of GENI ==

GENI is a suite of infrastructures for networking and distributed systems experimentation. GENI supports at-scale experimentation on shared, heterogeneous, highly instrumented infrastructure and enables deep programmability throughout the network. As an experimenter you will need to know about GENI ''clearinghouses'' and GENI ''aggregates''.
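To make these concepts concrete, the sketch below shows what a minimal resource reservation might look like using the Omni command-line tool (one of the experimenter tools listed in the table below). The slice name, RSpec file name, and aggregate manager URL are illustrative placeholders, not a prescribed workflow:

{{{
#!sh
# Hypothetical Omni session; the slice name, RSpec file, and aggregate
# manager URL below are placeholders for illustration only.

# Ask an aggregate what resources it offers (returned as an RSpec document):
omni.py listresources -a https://aggregate.example.net/xmlrpc/am

# Register a new slice with the clearinghouse:
omni.py createslice mytestslice

# Reserve the resources described in myrequest.rspec for that slice:
omni.py createsliver -a https://aggregate.example.net/xmlrpc/am mytestslice myrequest.rspec

# Release the resources when the experiment is done:
omni.py deletesliver -a https://aggregate.example.net/xmlrpc/am mytestslice
}}}

A slice created this way can hold reservations from multiple aggregates: the clearinghouse vouches for your identity, and each aggregate independently decides whether to grant the requested resources.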
A GENI ''clearinghouse'' authenticates experimenters and issues them the credentials needed to obtain GENI resources for experimentation. GENI ''aggregates'' provide resources to experimenters with GENI credentials. GENI has a number of different aggregates that provide a variety of resources for experimentation. An important aspect of planning your experiment is deciding what resources you need (type and number) and which aggregates might meet your needs. The following figure illustrates the role of GENI clearinghouses and aggregates:

[[Image(GENIComponentsPicture-2.png, 30%)]]

You will also need to know about GENI ''slices''. A slice holds a collection of computing and communications resources capable of running an experiment or a wide-area service. An experiment is a researcher-defined use of resources in a slice; an experiment runs in a slice. A researcher may run multiple experiments using resources in a slice, concurrently or over time.

[[BR]]

== 3.0 GENI Aggregates ==

The following table lists GENI aggregates that are currently available for use by experimenters and the networks (GENI backbone networks or the Internet) to which they connect. GENI has two backbone networks: [http://www.internet2.edu/ Internet2] and [http://www.nlr.net/ National Lambda Rail (NLR)]. The Internet2 backbone provides 1 Gbps of dedicated bandwidth for GENI experiments, and the NLR backbone provides up to 30 Gbps of non-dedicated bandwidth. Some aggregates that connect to the GENI backbone networks may be connected to other resources on the network using Layer 2 VLANs, giving experimenters the option of running non-IP-based Layer 3 and above protocols.

=== 3.1 GENI aggregates currently available to experimenters ===

{{{
#!html
<p><i>GENI Aggregate Providers: Please report errors and omissions in this list to Vic Thomas.</i></p>
<table border="1" cellspacing="0" cellpadding="4">
<tr>
<th>Aggregate</th>
<th>Description</th>
<th>Compute Resources</th>
<th>Programmable Network</th>
<th>Wireless</th>
<th>Network Connectivity</th>
<th>Experimenter Tools</th>
</tr>
<tr><td>PlanetLab</td><td>Testbed consisting of 1090 nodes at 513 sites around the world</td><td>Virtual machines on PlanetLab nodes</td><td>No</td><td>No</td><td>Internet</td><td>Gush, Omni, Raven, SFI</td></tr>
<tr><td>GPO Lab myPLC</td><td>PlanetLab installation consisting of 5 multi-homed nodes</td><td>Virtual machines on PlanetLab nodes</td><td>No</td><td>No</td><td>Internet2: IP; NLR: IP; Internet</td><td>Gush, Omni, SFI</td></tr>
<tr><td>Utah ProtoGENI</td><td>Over 500 co-located PCs that can be loaded with an experimenter-specified OS image and connected in arbitrary topologies</td><td>Complete PCs or virtual machines on PCs</td><td>PCs can be set up as routers, plus experimenter-controllable switches (HP ProCurves)</td><td>60 nodes with 2 WiFi cards each, plus software-defined radio peripherals (USRP2)</td><td>Internet2: IP and Layer 2; Internet</td><td>ProtoGENI Tools, Gush</td></tr>
<tr><td>Kentucky ProtoGENI</td><td>Over 50 co-located PCs that can be loaded with an experimenter-specified OS image and connected in arbitrary topologies. Strong instrumentation capabilities</td><td>Complete PCs or virtual machines on PCs</td><td>PCs can be set up as routers</td><td>No</td><td>Internet2: IP and Layer 2; Internet</td><td>ProtoGENI Tools, Instrumentation Tools</td></tr>
<tr><td>GPO Lab ProtoGENI</td><td>11 co-located PCs that can be loaded with an experimenter-specified OS image and connected in arbitrary topologies</td><td>Complete PCs</td><td>PCs can be set up as routers</td><td>No</td><td>Internet2: IP and Layer 2; NLR: IP and Layer 2; Internet</td><td>ProtoGENI Tools, Gush</td></tr>
<tr><td>Deter</td><td>Testbed for security experiments consisting of about 200 co-located PCs that can be loaded with an experimenter-specified OS image and connected in arbitrary topologies</td><td>Complete PCs</td><td>PCs can be set up as routers</td><td>No</td><td>I2? NLR? Internet</td><td>ProtoGENI Tools, SEER</td></tr>
<tr><td>ORBIT Wireless Testbed</td><td>400 nodes, each with two 802.11a/b/g interfaces, arranged in a grid. Nodes can be loaded with an experimenter-specified OS and software.</td><td>Full access to nodes in the testbed</td><td>MAC layer and above programmable by experimenter. Topology control by changing transmit power levels and noise floor.</td><td>Yes</td><td>&nbsp;</td><td>OMF Tools</td></tr>
<tr><td>DOME</td><td>35 transit buses equipped with computers and a variety of wireless radios, stationary WiFi access points with buses authenticated for access, and numerous organic access points.</td><td>Virtual machines on an embedded computer running Linux</td><td>No</td><td>Yes: 802.11b/g AP, 802.11g PCI, XTend 900 MHz radio, 3G modem, and GPS</td><td>Internet</td><td>&nbsp;</td></tr>
<tr><td>Million Node GENI</td><td>Compute resources on thousands of platforms donated by individuals and institutions. Platforms may be mobile and/or behind firewalls and NATs.</td><td>Experimenter software, written in a subset of Python, runs in sandboxes on Million Node GENI platforms.</td><td>No</td><td>Million Node GENI includes wireless platforms</td><td>Internet</td><td>ProtoGENI Tools, Million Node GENI Tools</td></tr>
<tr><td>ViSE</td><td>Virtualized access to three sensor nodes located in the Amherst, MA area. Sensor nodes include a Davis Pro Vantage Pro2 Weather Station, a Sony SNC-RZ50N Pan-Tilt-Zoom Camera, and a Raymarine RD424 Radome Radar Scanner.</td><td>Linux virtual machines on sensor nodes</td><td>No</td><td>Testbed nodes use long-distance 802.11b over directional antennas for communication</td><td>&nbsp;</td><td>&nbsp;</td></tr>
<tr><td>Kansei</td><td>Sensor networking testbed consisting of 96 nodes. Each node has one XSM, 4 TelosBs, and one iMote2, all of which are attached to a Stargate. The Stargates are connected using both wired and wireless Ethernet.</td><td>Experimenters program the Stargates running Stargate Release 7.2 from Intel Research.</td><td>No</td><td>Yes: 802.11, 802.15.4, and 900 MHz Chipcon CC1000 radios on the nodes.</td><td>Internet</td><td>EmStar Stargate development environment</td></tr>
<tr><td>Supercharged PlanetLab Platform (SPP) Nodes</td><td>Five high-performance PlanetLab nodes at Internet2 co-location sites. Nodes incorporate high-performance server and network processor blades to support service delivery over high-speed overlay networks.</td><td>Experimenters program the General-Purpose Processing Engines (GPEs) and Network Processor Blades (NPEs) of the SPP nodes.</td><td>Yes</td><td>No</td><td>Internet2</td><td>&nbsp;</td></tr>
<tr><td>ProtoGENI Backbone Nodes</td><td>Nodes at 5 Internet2 co-location sites. The ProtoGENI backbone runs Ethernet on a 1 Gbps Internet2 wave and slices it with VLANs. Researchers select the topology of VLANs on this infrastructure.</td><td>No</td><td>Yes</td><td>No</td><td>Internet2: Layer 2 and IP; Internet2 ION service (incl. many ProtoGENI sites); 1 Gbps to GpENI and the Wisconsin ProtoGENI site, 10 Gbps to the Utah ProtoGENI site and Mid-Atlantic Crossroads; connected to SPP and ShadowNet nodes</td><td>ProtoGENI Tools</td></tr>
<tr><td>BGP Mux</td><td>BGP-session multiplexer that provides stable, on-demand access to global BGP route feeds. Arbitrary and even transient client BGP connections can be provisioned and torn down on demand without affecting globally visible BGP sessions.</td><td>No</td><td>Yes</td><td>No</td><td>Internet2</td><td>&nbsp;</td></tr>
<tr><td>Stanford OpenFlow Network</td><td>&nbsp;</td><td>&nbsp;</td><td>&nbsp;</td><td>&nbsp;</td><td>Internet2</td><td>&nbsp;</td></tr>
<tr><td>Indiana OpenFlow Network</td><td>&nbsp;</td><td>&nbsp;</td><td>&nbsp;</td><td>&nbsp;</td><td>Internet2</td><td>&nbsp;</td></tr>
<tr><td>Rutgers OpenFlow Network</td><td>&nbsp;</td><td>&nbsp;</td><td>&nbsp;</td><td>&nbsp;</td><td>Internet2</td><td>&nbsp;</td></tr>
<tr><td>GPO Lab OpenFlow Network</td><td>OpenFlow testbed consisting of three OpenFlow-controlled switches (one each from HP, NEC, and Quanta) and an Expedient AM/OIM/FV stack.</td><td>Computing resources provided by the GPO Lab myPLC and GPO Lab ProtoGENI aggregates</td><td>Yes</td><td>No</td><td>Internet2: IP and Layer 2; NLR: IP and Layer 2</td><td>OpenFlow tools (NOX and Expedient), Omni</td></tr>
</table>