Version 110 (modified 10 years ago)
Current GENI Rack Projects
GENI racks are being implemented to meet the goals specified by the GPO GENI Rack Requirements. Current rack projects can be characterized as follows:
- ExoGENI - A higher-cost solution with flexible virtual networking topologies, including OpenFlow, that also delivers a powerful platform for multi-site cloud applications. These racks are typically deployed as an integrated part of a campus network.
- InstaGENI - A mid-range cost, expandable GENI Racks solution that can be deployed at a large number of campuses, delivering Internet cloud applications support, along with OpenFlow and VLAN networking. These racks are normally deployed outside a site firewall.
- OpenGENI - Also a mid-range, expandable GENI Racks solution delivering Internet cloud applications support, along with OpenFlow and VLAN networking. These racks are deployed on Dell hardware.
- Cisco GENI - A dual data plane fabric, expandable GENI Rack solution that combines ExoGENI software with Cisco UCS-B and -C series servers, UCS disk and NetApp FAS 2240 storage, and Nexus 3548 dataplane switches. These racks support OpenFlow, virtual topologies, and multi-site cloud applications.
All GENI Racks have layer 3 connections to the Internet and layer 2 connections to the GENI core networks (Internet2 ION and AL2S). The racks use commodity Internet for control access to rack resources, and shared VLANs for the application and experiment data connections. The racks may also use layer 3 Internet connections for some experiments, particularly IP cloud experiments. This diagram illustrates logical connections for all the GENI Rack types:
Note: InstaGENI Racks additionally allow layer 3 connections on the Data Plane connections
Additional information is available for various aspects of the GENI racks life cycle:
- GENI Racks Administration page highlights tasks for each of these rack solutions to provide some insight into the effort required by a participant.
- GENI Racks Security page provides some information about security in the context of the various GENI racks.
- GENI Racks Acceptance Tests page provides insight into how the GPO validates these racks solutions before broad campus deployment.
- Sites Confirmation Tests page provides insight into how the GPO validates each rack as it is deployed for production or as network stitching support is introduced for a site.
- GENI Racks Deployment Sites page captures the sites planned to have an ExoGENI or InstaGENI rack. The GENI Production page captures the sites that are now operational and are available to GENI experimenters.
- Rack Checklist page provides high level steps for installing a GENI rack site.
- Rack Checklist Status page provides the status of each GENI site for completing the rack checklist.
- ExoGENI Rack Project, InstaGENI Rack Project, and OpenGENI Rack Project overview pages provide insight into each project's activity. There is not yet a project page for Cisco GENI Racks.
- GENI Resources page captures experimenter information for ExoGENI, InstaGENI and other GENI aggregates that provide resources for experimenters.
- GENI Racks Interoperability Experiment page captures an interoperability experiment with ExoGENI, InstaGENI and Meso-scale OpenFlow resources that was run in the initial stages of GENI Racks acceptance.
ExoGENI Racks
The ExoGENI project is an effort to implement high-performance GENI Racks via a partnership between the Renaissance Computing Institute (RENCI), Duke University, and IBM. ExoGENI racks are assembled and tested by IBM and shipped directly to sites, where they are managed by the RENCI team. ExoGENI racks support a multi-domain cloud structure with flexible virtual networking topologies that allow combining ExoGENI, Meso-scale OpenFlow, and WiMAX resources. An overview of this project was presented in the GEC12 ExoGENI Presentation. Also available are an ExoGENI Design document and an ExoGENI white paper. For more details, see the ExoGENI Rack Project overview page. Up-to-date technical information about ExoGENI Racks is located on the ExoGENI Wiki.
ExoGENI Components
IMPORTANT: For the most up-to-date information about GENI Racks configurations and features please visit http://wiki.exogeni.net
An ExoGENI Rack delivers the following types of systems:
- Management Switch - An IBM G8052R switch that allows access to/from the local network provider.
- VPN appliance - A Juniper SSG5 provides backup access to manage nodes.
- OpenFlow-enabled switch - An IBM G8264R switch that carries experimental traffic at layer 2 to the GENI backbone (Internet2 AL2S) and to the local OpenFlow campus network.
- Management Node - An IBM x3650 host running OpenStack (with an EC2 interface) to provision VMs and xCAT to provision bare-metal nodes; it also runs monitoring functions.
- Worker Nodes - Ten IBM x3650 M3 worker nodes that provide both OpenStack virtualized instances and xCAT bare-metal nodes.
- Monitoring Host - None; monitoring is performed through Nagios from the GMOC.
The ExoGENI resources have the following connections:
ExoGENI Specifications
IMPORTANT: For the most up-to-date information about GENI Racks configurations and features please visit http://wiki.exogeni.net
An initial inventory of the ExoGENI Rack hardware components is found here; it is superseded by the following:
Compute Resources - A total of 12 hosts are in the rack to provide the Resources, Monitoring, Storage, and Application functions:
- Management node: 1 IBM x3650 M3, 2x146GB 10K SAS hard drives, 12GB RAM, dual-socket 4-core Intel X5650 2.66GHz CPU, quad-port 1Gbps adapter
- Worker/Bare-Metal nodes: 10 IBM x3650 M3, 1x146GB 10K SAS hard drive + 1x500+GB secondary drive, 48GB RAM, dual-socket 6-core Intel X5650 2.66GHz CPU, dual 1Gbps adapter, 10G dual-port Chelsio adapter
- Sliverable Storage: 1 IBM DS3512 storage NAS 6x1TB 7200RPM drives
Network Components
- Management Switch: IBM BNT G8052R 1G client/10G uplink ports - Access to commodity Internet.
- OpenFlow Switch: IBM BNT G8264R 10G client/40G uplink ports - Access to GENI backbone.
- VPN Appliance: Juniper SSG5 - Backup management access.
Misc. Components - General purpose hardware included:
- IBM PDU based on site power requirements (GPO = IBM 5897 PDU; RENCI = DPI 5900 PDU)
- No UPS included
- IBM Local 2x16 Console Manager (LCM16)
- IBM 1U 17-inch Flat Panel Console Kit (PN 172317X)
Up-to-date details about the configuration and setup of ExoGENI racks can be found in the ExoGENI wiki.
ExoGENI Implementation
Any configuration or run-time requirements for the ExoGENI rack systems will be documented here when available.
Each ExoGENI Rack system requires a specific setup, which is captured for each of the rack components: OpenFlow switch, management node, worker nodes, etc. See the ExoGENI Wiki page for details about the required configuration. For help with the configuration settings, please contact help@geni.net. Some examples are available to give insight into ExoGENI Racks Administration tasks.
ExoGENI Monitoring
Monitoring data for the ExoGENI rack is collected on the management node by a Nagios aggregator and then forwarded to the GMOC. The type of data that will be available is currently being defined. ExoGENI Racks meet the GENI monitoring requirements. ExoGENI Monitoring data will be available at the GMOC SNAPP Home.
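As a rough illustration of the Nagios-based collection described above, the following is a minimal sketch of a Nagios-style check such as might run on the management node. It follows the standard Nagios plugin convention (exit code 0 = OK, 1 = WARNING, 2 = CRITICAL, with a one-line status message); the "worker nodes up" metric and the threshold values are hypothetical, not taken from the actual ExoGENI monitoring configuration.

```python
# Hypothetical Nagios-style check: how many of the rack's worker nodes
# are reachable. Thresholds (warn/crit) are illustrative defaults only.
def check_workers_up(up, total, warn=9, crit=7):
    """Return an (exit_code, message) pair per the Nagios plugin convention."""
    msg = "workers up: %d/%d" % (up, total)
    if up <= crit:
        return (2, "CRITICAL - " + msg)   # exit code 2 = CRITICAL
    if up <= warn:
        return (1, "WARNING - " + msg)    # exit code 1 = WARNING
    return (0, "OK - " + msg)             # exit code 0 = OK

code, message = check_workers_up(10, 10)
print(message)  # OK - workers up: 10/10
```

A real plugin would end with `sys.exit(code)` so Nagios can read the status from the process exit code.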
ExoGENI Site Requirements
ExoGENI racks have the following site requirements:
- Network Setup - Define how the rack will connect to the Internet and to the GENI backbones, e.g. regional connections, connection speed, VLANs to be used, etc.
- Site Security Requirements - Determine changes required for rack connectivity, such as firewall rules, IP filters, etc.
- Address assignment for rack components - Define the addresses, subnet masks, and routing to be configured for the rack components.
- Power requirements - Based on site requirements.
- Administrative accounts - Setup of the site administrator account that will be created on the management/head node.
- Delivery logistics - Details for where the rack is to be delivered, who will accept the delivery, and when the delivery will take place. Also covers any physical restrictions for the rack delivery.
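The address-assignment step above can be sketched with Python's standard ipaddress module. The subnet (an RFC 5737 example range) and the component names are hypothetical, purely to illustrate the planning exercise a site goes through:

```python
import ipaddress

# Hypothetical management subnet allocated by the site for rack components.
mgmt_net = ipaddress.ip_network("192.0.2.0/28")  # RFC 5737 example range

# Illustrative list of components that each need one management address.
components = ["management-switch", "vpn-appliance", "openflow-switch",
              "management-node"] + ["worker-%02d" % i for i in range(1, 11)]

hosts = list(mgmt_net.hosts())  # usable addresses; network/broadcast excluded
assert len(components) <= len(hosts), "subnet too small for all components"

# Map each component to an address; record the netmask for configuration.
plan = {name: str(addr) for name, addr in zip(components, hosts)}
print("netmask:", mgmt_net.netmask)                    # 255.255.255.240
print("management-node ->", plan["management-node"])   # 192.0.2.4
```

A /28 provides exactly 14 usable host addresses, which covers the 14 components in this illustrative list with no headroom; a real site would size the subnet with room for expansion.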
InstaGENI Racks
The InstaGENI project is an effort to deploy low-end, expandable GENI Racks at a large number of campuses; these racks are typically deployed outside the site firewall. An overview of the project is available from the GEC12 InstaGENI Racks presentation. An InstaGENI Design document is available. For more details, see the InstaGENI Rack Project overview page.
InstaGENI Components
Each InstaGENI rack delivers a small ProtoGENI cluster with OpenFlow networking and FOAM aggregate management. This rack includes the following types of systems:
- Control Node - Xen server that runs 3 VMs to provide:
- PG boss node, web server, monitoring, and GENI AM API Server
- Local File Server node
- FOAM Controller
- Experiment Nodes - Five nodes managed by ProtoGENI software stack, which provides boot services, account creation, experimental management, etc.
- OpenFlow Switch - Provides internal routing and data plane connectivity to the GENI backbone (layer 2 and layer 3)
- Management Switch - Provides control plane connectivity to the Internet (layer 3)
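The GENI AM API server listed above speaks XML-RPC over HTTPS with client-certificate authentication. As a minimal sketch (no network access; the endpoint URL mentioned in the comments is hypothetical), the request body for the standard GetVersion call can be assembled with Python's standard library:

```python
import xmlrpc.client

# GetVersion is the standard GENI AM API call for discovering an aggregate's
# supported API versions; it takes no required arguments in AM API v2.
# dumps() builds the XML-RPC methodCall body without sending anything.
payload = xmlrpc.client.dumps((), methodname="GetVersion")
print(payload)

# In real use this body would be POSTed over HTTPS using the experimenter's
# GENI certificate and key, e.g. to a hypothetical endpoint such as
# https://rack.example.edu/xmlrpc/am -- normally via a client tool like
# omni rather than hand-built XML-RPC.
```

Experimenters would not usually construct these calls by hand; this only shows the wire format the AM API server accepts.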
Following are the network connections for an InstaGENI rack:
InstaGENI Specifications
The current hardware components specification for the InstaGENI Racks includes:
Compute Resources - 6 HP ProLiant DL360 G7 Server series hosts to provide the VM Server, Monitoring, Storage, and Application functions:
- Control Node - 1 HP ProLiant DL360 G7 Server, quad-core, single-socket, 12 GB Ram, 4 TB Disk (RAID), and dual NIC
- Experiment Nodes - 5 HP ProLiant DL360 G7 Server, six-core, dual-socket,48GB Ram, 1TB Disk, and dual NIC
- Bare Metal Node
Network Components - 2 network components to provide access to GENI core networks and commodity Internet:
- HP ProCurve 2620 Switch (J9623A), 24 10/100 Mb/s ports, 2 1 Gb/s ports
- HP ProCurve 5406zl Switch (J8697A) 48 1 Gb/s ports, 4 10 Gb/s ports
Misc. Components - General purpose hardware also included:
- 1 or more HP Intelligent Modular PDUs
- HP Dedicated iLO Management Port Option
- HP TFT7600 KVM Console US Kit
InstaGENI Implementation
Configuration and run-time requirements for InstaGENI rack systems are documented here. Following is a list of available configuration details for the rack setup:
- Questionnaire used to gather initial site configuration data.
- Rack Deployment page detailing how rack software is deployed.
- Administrative Accounts instructions for creating new accounts.
- FOAM Aggregate setup details are not documented at this time.
- HP Integrated Lights-Out 3 information to configure, update, and operate servers remotely.
For help or questions about configuration settings or run-time requirements, please contact help@geni.net. Some examples are available to give insight into InstaGENI Racks Administration tasks.
InstaGENI Monitoring
InstaGENI Racks meet the GENI monitoring requirements. InstaGENI Monitoring data is currently being defined and will be available at the GMOC SNAPP home.
InstaGENI Site Requirements
InstaGENI racks have the following site requirements:
- Network Setup - Define how the rack will connect to the Internet and to the GENI backbones, e.g. regional connections, connection speed, VLANs to be used, etc.
- Address assignment for rack components - Define the addresses, subnet masks, and routing to be configured for the rack components.
- Power requirements - Based on site requirements.
- Administrative accounts - Setup of the site administrator account that will be created on the management/head node.
- Delivery logistics - Details for where the rack is to be delivered, who will accept the delivery, and when the delivery will take place. Also covers any physical restrictions for the rack delivery.
OpenGENI Racks
The OpenGENI project is a mid-range, expandable GENI Racks solution delivering a GENI-compliant aggregate on a Dell rack. OpenGENI racks are currently designated as a GENI Provisional Resource. An OpenGENI Architecture document is available; for more details, see the OpenGENI Racks page.
OpenGENI Components
An OpenGENI rack includes the following types of systems:
- Control Node - OpenStack Server that runs the GENI AM API Server and the GENI Operational Monitoring function.
- Experiment Nodes - Nodes managed by GRAM software, to provide boot services, account creation, experimental management, etc.
- OpenFlow Switch - Provides internal routing and data plane connectivity to the GENI backbone (layer 2 and layer 3)
- Management Switch - Provides control plane connectivity to the Internet (layer 3)
Following are the network connections for an OpenGENI rack:
OpenGENI Specifications
The current hardware components specification for the OpenGENI Racks includes:
Compute Resources - 3 Dell PowerEdge R620XL rack server hosts that provide the VM Server, Monitoring, Storage, and Application functions:
- Control Node - 1 Dell PowerEdge R620XL server with 32 GB of RAM and a 300 GB hard drive.
- Experiment Nodes - 2 Dell PowerEdge R620XL servers, each with 32 GB of RAM and a 300 GB hard drive. More compute nodes are possible.
- Bare Metal Node - Not supported at this time.
Network Components - 2 network components to provide access to GENI core networks and commodity Internet:
- Dell Force10 S4810P OpenFlow switch supporting 48 dual-speed 1/10 Gb ports and four 40 Gb ports
- Dell PowerConnect Switch with up to 48 ports of GbE and optional 4x 10Gb
Misc. Components - General purpose hardware also included:
- Dell Remote Access Controller - iDRAC
OpenGENI Implementation
Configuration and run-time requirements for OpenGENI rack systems are documented here. Following is a list of available configuration details for the rack setup:
- Site Checklist used to gather initial site configuration data.
- Administrative Accounts instructions for creating new administrative accounts.
For help or questions about configuration settings or run-time requirements, please contact help@geni.net.
OpenGENI Monitoring
OpenGENI Racks are GENI Provisional Resources. Development is taking place to meet monitoring requirements by implementing GENI Operational Monitoring.
OpenGENI Site Requirements
OpenGENI racks have the following site requirements:
- Network Setup - Define how the rack will connect to the Internet and to the GENI backbones, e.g. regional connections, connection speed, VLANs to be used, etc.
- Address assignment for rack components - Define the addresses, subnet masks, and routing to be configured for the rack components.
- Power requirements - Based on site requirements.
- Administrative accounts - Setup of the site administrator account that will be created on all rack devices.
- Delivery logistics - Details for where the rack is to be delivered, who will accept the delivery, and when the delivery will take place. Also covers any physical restrictions for the rack delivery.
Cisco GENI Racks
These dual fabric interconnect expandable GENI Racks combine ExoGENI software with Cisco UCS-B and C series servers, UCS and NetApp FAS 2240 storage, and Nexus 3548 dataplane switches. All equipment is physically located in one cabinet, but the rack operates as two separate but related GENI aggregates. Cisco racks support OpenFlow, virtual topologies, data centers, and multi-site cloud applications.
The Cisco GENI rack has undergone GENI acceptance testing; for more details, see the GENI Racks Acceptance Tests page. Please contact help@geni.net if you are interested in more information.
Cisco GENI Rack Components
A Cisco GENI rack includes the following types of systems:
- Control Node - Server that runs the GENI AM API server
- Experiment Nodes - Nodes managed by ExoGENI software, to provide boot services, account creation, experimental management, etc.
- Experimenter Storage - Sliverable storage for experimenters from the UCS onboard disks and NetApp FAS 2240 storage
- OpenFlow Switch - Provides internal routing and data plane connectivity to the GENI backbone (layer 2 and layer 3)
- Management Switch - Provides control plane connectivity to the Internet (layer 3)
See the attached NCSU GENI RACK Certification v1.0 for high-level information on the components, wiring, and network connections for an example Cisco GENI rack at NCSU.
Cisco GENI Rack Specifications
The main hardware components undergoing acceptance testing are shown in the attached Cisco presentation and Bill of Materials files (see end of page).
Cisco GENI Rack Implementation
Configuration details, run-time requirements, and design documents for the Cisco GENI rack are not yet available.
Cisco GENI Monitoring
Development is taking place to meet monitoring requirements by implementing GENI Operational Monitoring. GENI monitoring is not yet available for the system undergoing acceptance testing.
Cisco GENI Site Requirements
Cisco GENI racks have the following site requirements:
- Network Setup - Define how the rack will connect to the Internet and to the GENI backbones, e.g. regional connections, connection speed, VLANs to be used, etc.
- Address assignment for rack components - Define the addresses, subnet masks, and routing to be configured for the rack components.
- Power requirements - Based on site requirements.
- Administrative accounts - Setup of the site administrator account that will be created on all rack devices.
- Delivery logistics - Details for where the rack is to be delivered, who will accept the delivery, and when the delivery will take place. Also covers any physical restrictions for the rack delivery.
Attachments (8)
- GENI-Starter-Racks-connection.jpg (140.4 KB) - added 13 years ago.
- GENI-Racks-connection.jpg (145.7 KB) - added 13 years ago.
- GENI-ExoGENI-components.jpg (98.3 KB) - added 13 years ago.
- GENI-InstaGENI-components.jpg (57.6 KB) - added 13 years ago.
- InstaGENI Design Document Rick 2-15(2).doc (3.0 MB) - added 13 years ago. Initial InstaGENI Design document.
- GENI-OpenGENI-components.jpg (50.5 KB) - added 11 years ago.
- NCSU GENI RACK Certification v1.0.pdf (853.7 KB) - added 10 years ago. Cisco GENI rack summary presentation.
- GENI Rack Cisco-GPO- BOM v2 5.xlsx (20.4 KB) - added 10 years ago. Cisco GENI Rack Bill of Materials (basic).