Changes between Initial Version and Version 1 of GENIRacksHome


Timestamp: 02/27/12 20:53:42
Author: hdempsey@bbn.com
[[PageOutline]]

= GENI Racks =

GENI racks are being implemented to meet the goals specified by the GPO [http://groups.geni.net/geni/wiki/GeniRacks GENI Rack Requirements].  Current rack projects can be characterized as follows:

 * '''[wiki:GENI-Infrastructure-Portal/GENIRacks#GENIStarterRacks Starter Racks]''' - Deployed by the GPO, a project to deliver an early low-end solution for GENI Racks.
 * '''[wiki:GENI-Infrastructure-Portal/GENIRacks#InstaGENIRacks InstaGENI]''' - A mid-range-cost, expandable GENI Racks solution that will be deployed at a large number of campuses, delivering support for Internet cloud applications along with !OpenFlow and VLAN networking. These racks are normally deployed outside a site firewall.
 * '''[wiki:GENI-Infrastructure-Portal/GENIRacks#ExoGENIRacks ExoGENI]''' - A higher-cost solution with flexible virtual networking topologies, including !OpenFlow, that also delivers a powerful platform for multi-site cloud applications. These racks are typically deployed as an integrated part of a campus network.

All GENI Racks have layer 3 connections to the Internet and layer 2 connections to the GENI core networks (currently NLR and Internet2). The racks use commodity Internet for control access to rack resources, and shared VLANs for the application and experiment data connections.  The racks may also use layer 3 Internet connections for some experiments, particularly IP cloud experiments.  This diagram illustrates logical connections for all the GENI Rack types:

[[Image(GENI-Racks-connection.jpg)]]

''Note: InstaGENI Racks additionally allow layer 3 connections on the data plane''

'''[wiki:GENI-Infrastructure-Portal/GENIRacksAdministration GENI Racks Administration]''' tasks are highlighted for each of these rack solutions to provide some insight into the effort required by a participant.

= GENI Starter Racks =

The Starter Racks project is an effort to get [http://www.nsf.gov/cise/usignite/ US Ignite] cities connected to the GENI network to facilitate experimental network and compute research, as well as city application development. The Starter Racks jump-start the delivery of fully integrated network and compute resources to selected sites. At this time, the Starter Racks do not deliver GENI software features such as an Aggregate Manager. A Starter Rack can be used as a Meso-scale !OpenFlow site, but this requires GPO infrastructure support to implement; if you are interested in running !OpenFlow on a deployed Starter Rack, contact [mailto:help@geni.net]. If you would like to make your own GENI Starter Rack, see the [wiki:GENI-Infrastructure-Portal/GENIRacks#GetYourOwnGENIRack section] below.

== Starter Racks Components ==

A Starter Rack delivers software and hardware components that enable a site to be a GENI site. Each GENI Starter Rack delivers the following types of systems:

 * Router - A Cisco IOS router is delivered to set up standard routed IP to the local network provider.
 * !OpenFlow switch - An HP !ProCurve 6600 switch to carry experimental traffic via layer 2 to the GENI backbone (I2, NLR) and to carry Eucalyptus communication between local VMs.
 * Eucalyptus Head Host - A host running the Eucalyptus service to manage the Eucalyptus hosts; it also provides the public interface to access site VMs via NAT.
 * Eucalyptus Worker Hosts - Two Eucalyptus Worker hosts to provide VMs. The number of VMs allowed is based on the address space available at each site.
 * Application Host (aka bare-metal node) - A high-performance host to provide experimenters real network interfaces to be provisioned manually.
 * Monitoring Host - Monitors both compute and network resources for the GENI site.

The rack resources above are connected as follows:

[[Image(GENI-StarterRacks-components.jpg)]]

''Note: The data plane connection from the Euca Head host to the Cisco router is used to provide public access to Euca Worker VMs via NAT.''

== Starter Racks Specifications ==

''__Compute Resources__''
  * 5 Dell™ !PowerEdge™ R510 - 1 for Eucalyptus Head, 2 for Worker hosts, 1 for Monitoring, and 1 for the bare-metal node.

''__Network Components__''
 * 1 Cisco 2901 Integrated Services Router - Access to commodity Internet.
 * 1 HP !ProCurve 6600-48G-4XG - Access to GENI backbone.

''__Misc. Components__'' General purpose hardware included in the Starter Racks:
  * 1 APC Switched Rack PDU - Load monitoring, remote power cycle.
  * 1 APC Smart-UPS - Network power protection.
  * IOGEAR 8-port KVM switch - Console access.
  * Lantronix !SecureLinx Spider Compact Remote - One port for KVM-over-IP access.

== Starter Racks Implementation ==

Each GENI Starter Rack system requires a specific setup, which is captured for each of the rack components: Router, !OpenFlow Switch, IPKVM, Eucalyptus Head Host, Eucalyptus Worker Hosts, and Monitor Node. See the '''[http://groups.geni.net/syseng/wiki/GENI-Infrastructure-Portal/GENIRacks/StarterCompSetup Starter Component Setup]''' for details about the required configuration. For specific details about the configuration settings, please contact [mailto:help@geni.net help@geni.net]. Some examples are available to give insight into '''[wiki:GENI-Infrastructure-Portal/GENIRacksAdministration#StarterracksAdmin Starter Racks Administration]''' tasks.

== Starter Racks Monitoring ==
Starter Racks meet the following [http://groups.geni.net/geni/wiki/GeniRacks#D.MonitoringRequirements monitoring requirements]. Monitoring data for the 3 deployed Starter Racks (GPO, Chattanooga, and Cleveland) is available at the [http://monitor.gpolab.bbn.com/ganglia/ GPOLab Monitor] portal, where the user can select a location and get monitoring detail for ''System load'', ''CPU load'', ''Memory usage'', and ''Network usage''. In addition, ''Services Health'' is monitored on each of the GENI Starter Racks and alert notifications are available upon request; contact [mailto:help@geni.net] to become a notification recipient. To access the ''Service Health'' detail on demand, go to the [http://monitor.gpolab.bbn.com/nagios/cgi-bin/status.cgi Service Status Details] page for all Starter Rack sites.
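
The ganglia portal above exposes its raw metrics as XML (a ganglia gmond daemon answers on TCP port 8649). As a minimal sketch of working with that data, the snippet below parses per-host metrics out of such a dump; the cluster name, host name, and values are made-up samples, not real Starter Rack output.

```python
# Hedged sketch: parse the XML a ganglia gmond daemon emits.  The cluster,
# host, and metric values below are illustrative, not real GPO rack data.
import xml.etree.ElementTree as ET

SAMPLE = """<GANGLIA_XML VERSION="3.1">
 <CLUSTER NAME="starter-rack-sample" LOCALTIME="1330373622">
  <HOST NAME="euca-head.example.net" REPORTED="1330373610">
   <METRIC NAME="load_one" VAL="0.42" TYPE="float" UNITS=""/>
   <METRIC NAME="mem_free" VAL="914500" TYPE="float" UNITS="KB"/>
  </HOST>
 </CLUSTER>
</GANGLIA_XML>"""

def cluster_metrics(xml_text):
    """Return {host: {metric: value}} for every host in the ganglia dump."""
    root = ET.fromstring(xml_text)
    return {
        host.get("NAME"): {
            m.get("NAME"): float(m.get("VAL")) for m in host.iter("METRIC")
        }
        for host in root.iter("HOST")
    }

metrics = cluster_metrics(SAMPLE)
print(metrics["euca-head.example.net"]["load_one"])  # prints 0.42
```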

== Starter Rack Site Requirements ==
Starter Racks have the following site requirements:
 * Network Setup - Define how the rack will connect to the Internet and to the GENI backbones, e.g. regional connections, connection speed, VLANs to be used, etc.
 * Site Security Requirements - Determine changes required for rack connectivity, such as FV rules, IP filters, etc.
 * Address assignment for rack components - Define which addresses, subnet masks, and routing are to be configured for the rack components.
 * Power requirements - Based on site requirements.
 * Administrative accounts - Setup of the site administrator account that will be created on the management/head node.
 * Delivery logistics - Details for ''where'' the rack is to be delivered, ''who'' will accept the delivery, and ''when'' the delivery will take place.  Also covers any physical restrictions for the rack delivery.

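To illustrate the address-assignment step above, here is a small sketch that carves per-component addresses out of whatever subnet a site allocates to its rack. The subnet and component names are hypothetical examples, not a GPO-prescribed plan.

```python
# Hedged sketch: assign sequential host addresses from a site-provided subnet
# to the rack components.  The subnet and component names are hypothetical;
# a real plan comes out of the site's network setup discussion with the GPO.
import ipaddress

def assign_addresses(subnet_cidr, components):
    """Map each component to the next free host address; .1 goes to the gateway."""
    hosts = ipaddress.ip_network(subnet_cidr).hosts()
    plan = {"gateway": str(next(hosts))}
    for name in components:
        plan[name] = str(next(hosts))
    return plan

plan = assign_addresses("192.0.2.0/28", [
    "openflow-switch", "euca-head", "euca-worker-1",
    "euca-worker-2", "app-host", "monitor",
])
print(plan["euca-head"])  # prints 192.0.2.3
```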
If you are interested in building your own Starter Rack and would like help from the GPO, please contact us at [mailto:help@geni.net].

------

= InstaGENI Racks =
The InstaGENI project is an effort to deploy low-end, expandable GENI Racks at a large number of campuses; these racks are typically deployed outside the site firewall. More details will be added as they are defined. An overview of the project is available from the [http://groups.geni.net/geni/attachment/wiki/GEC12GENIUpdatePlenary/McGeer%20InstaGENI.pdf?format=raw GEC12 InstaGENI Racks] presentation. An [http://groups.geni.net/syseng/attachment/wiki/GENI-Infrastructure-Portal/GENIRacks/InstaGENI%20Design%20Document%20Rick%202-15%282%29.doc?format=raw InstaGENI Design] document is also available.

== InstaGENI Components ==

Each InstaGENI rack delivers a small ProtoGENI cluster with !OpenFlow networking based on FOAM. This rack includes the following types of systems:
   * Control Node - A Xen server that runs 3 VMs to provide:
            * PG boss node, web server, and GENI API server
            * Local file server node
            * FOAM controller
   * Experiment Nodes - Five nodes managed by the ProtoGENI software stack, which provides boot services, account creation, experiment management, etc.
   * Monitoring Node - Should? run on the control node...TBD
   * !OpenFlow Switch - Provides internal routing and data plane connectivity to the GENI backbone (layer 2 and layer 3)
   * Management Switch - Provides control plane connectivity to the Internet (layer 3)

Following are the network connections for an InstaGENI rack:

[[Image(GENI-InstaGENI-components.jpg)]]

== InstaGENI Specifications ==
The current hardware components specification for the InstaGENI Racks includes:

''__Compute Resource Specifications__'' 5 HP !ProLiant DL360 G7 Server series hosts to provide the VM server, monitoring, storage(??), and application functions:
  * Control Node - 1 HP !ProLiant DL360 G7 Server, quad-core, single-socket, 12 GB RAM, 4 TB disk (RAID), and dual NIC
  * Experiment Nodes - 5 HP !ProLiant DL360 G7 Server, six-core, dual-socket, 48 GB RAM, 1 TB disk, and dual NIC
  * Bare Metal Node??
  * Monitoring Node??

''__Network Components__'' X network components to provide access to the GENI backbone and commodity Internet:
 * HP !ProCurve 2610 switch, 24 10/100 Mb/s ports, 2 1 Gb/s ports
 * HP !ProCurve 6600 switch, 48 1 Gb/s ports, 4 10 Gb/s ports

''__Misc. Components__'' General purpose hardware also included:
  * 1 or more HP Intelligent Mod PDU
  * HP Dedicated iLO Management Port Option
  * HP TFT7600 KVM Console US Kit

== InstaGENI Implementation ==
Any configuration or run-time requirements for any of the InstaGENI rack systems will be documented here, when available.

Each InstaGENI Rack system requires a specific setup, which is to be captured for each of the rack components: !OpenFlow Switch, control node, experiment nodes, bare-metal nodes, etc. See the '''[http://groups.geni.net/syseng/wiki/GENI-Infrastructure-Portal/GENIRacks/InstaGENICompSetup InstaGENI Component Setup]''' for details about the required configuration. For help with the configuration settings, please contact [mailto:help@geni.net help@geni.net]. Some examples are available to give insight into '''[wiki:GENI-Infrastructure-Portal/GENIRacksAdministration#InstaGENIAdministration InstaGENI Racks Administration]''' tasks.

== InstaGENI Monitoring ==
InstaGENI Racks meet the following [http://groups.geni.net/geni/wiki/GeniRacks#D.MonitoringRequirements monitoring requirements].  InstaGENI monitoring data is currently being defined and will be available at the [https://gmoc-db.grnoc.iu.edu/api-demo GMOC SNAPP home].

== InstaGENI Site Requirements ==
InstaGENI racks have the following site requirements:
 * Network Setup - Define how the rack will connect to the Internet and to the GENI backbones, e.g. regional connections, connection speed, VLANs to be used, etc.
 * Address assignment for rack components - Define which addresses, subnet masks, and routing are to be configured for the rack components.
 * Power requirements - Based on site requirements.
 * Administrative accounts - Setup of the site administrator account that will be created on the management/head node.
 * Delivery logistics - Details for ''where'' the rack is to be delivered, ''who'' will accept the delivery, and ''when'' the delivery will take place.  Also covers any physical restrictions for the rack delivery.

If you are interested in becoming an InstaGENI deployment site, please contact us at [mailto:rick.mcgeer@hp.com].

--------------------

= ExoGENI Racks =
The [http://www.exogeni.net ExoGENI] project is an effort to implement high-performance GENI Racks via a partnership between the Renaissance Computing Institute (RENCI), Duke, and IBM.  ExoGENI racks are assembled and tested by IBM and shipped directly to sites, where they are managed by the RENCI team.  ExoGENI racks deliver support for a multi-domain cloud structure with flexible virtual networking topologies that allow combining ExoGENI, Meso-scale !OpenFlow, and WiMAX resources.  An overview of this project was presented at the [http://groups.geni.net/geni/attachment/wiki/GEC12GENIUpdatePlenary/GEC12-ExoGENI-Racks.pptx?format=raw GEC12 ExoGENI Presentation]. Also available are an [https://code.renci.org/gf/download/docmanfileversion/13/691/ExoGENIDesignv1.02.pdf ExoGENI Design] document and an [http://www.cs.duke.edu/~chase/exogeni.pdf ExoGENI white paper].

== ExoGENI Components ==

An ExoGENI Rack delivers the following types of systems:
 * Management Switch - An IBM G8052R switch is delivered to allow access to/from the local network provider.
 * VPN Appliance - A Juniper SSG5 provides backup access to manage nodes.
 * !OpenFlow-enabled Switch - An IBM G8264R switch to carry experimental traffic via layer 2 to the GENI backbone (I2, NLR) and to the local campus !OpenFlow network.
 * Management Node - An IBM x3650 host running Elastic Compute Cloud (EC2) with !OpenStack to provision VMs, and running xCAT to provision bare-metal nodes. Also runs monitoring functions.
 * Worker Nodes - Ten IBM x3650 M3 Worker nodes provide both !OpenStack virtualized instances and bare-metal xCAT nodes.
 * Monitoring Host - None; monitoring is through Nagios from the GMOC.

The ExoGENI resources have the following connections:

[[Image(GENI-ExoGENI-components.jpg)]]

== ExoGENI Specifications ==

An initial inventory of the ExoGENI Rack hardware components is found [https://docs.google.com/document/d/1hzleT6TNmiDb0YkkgqjXxPFJ37P4O6qApLmXgKJHBZQ/edit?hl=en_US here], which is superseded by the following:

''__Compute Resources__'' A total of 12 hosts are in the rack to provide the resource, monitoring, storage, and application functions:
  * Management Node: 1 IBM x3650 M3, 2x146 GB 10K SAS hard drives, 12 GB RAM, dual-socket 4-core Intel X5650 2.66 GHz CPU, quad-port 1 Gbps adapter
  * Worker/Bare-Metal Nodes: 10 IBM x3650 M3, 1x146 GB 10K SAS hard drive + 1x500+ GB secondary drive, 48 GB RAM, dual-socket 6-core Intel X5650 2.66 GHz CPU, dual 1 Gbps adapter, 10G dual-port Chelsio adapter
  * Sliverable Storage: 1 IBM DS3512 storage NAS, 6x1 TB 7200 RPM drives

''__Network Components__''
 * Management Switch: IBM BNT G8052R, 1G client/10G uplink ports - Access to commodity Internet.
 * !OpenFlow Switch: IBM BNT G8264R, 10G client/40G uplink ports - Access to GENI backbone.
 * VPN Appliance: Juniper SSG5 - Backup management access.

''__Misc. Components__'' General purpose hardware included:
  * IBM PDU based on site power requirements (GPO=IBM 5897 PDU; RENCI=DPI 5900 PDU)
  * No UPS included
  * IBM Local 2x16 Console Manager (LCM16)
  * IBM 1U 17-inch Flat Panel Console Kit (PN 172317X)

== ExoGENI Implementation ==
Any configuration or run-time requirements for the ExoGENI rack systems will be documented here, when available.

Each ExoGENI Rack system requires a specific setup, which is to be captured for each of the rack components: !OpenFlow Switch, Management Node, Worker Nodes, etc. See the '''[http://groups.geni.net/syseng/wiki/GENI-Infrastructure-Portal/GENIRacks/ExoGENICompSetup ExoGENI Component Setup]''' for details about the required configuration. For help with the configuration settings, please contact [mailto:help@geni.net help@geni.net]. Some examples are available to give insight into '''[wiki:GENI-Infrastructure-Portal/GENIRacksAdministration#ExoGENIAdministration ExoGENI Racks Administration]''' tasks.

== ExoGENI Monitoring ==
Monitoring data for the ExoGENI rack is collected on the management node by a Nagios aggregator and then forwarded to the GMOC. The type of data that will be available is currently being defined.  ExoGENI Racks meet the GENI [http://groups.geni.net/geni/wiki/GeniRacks#D.MonitoringRequirements monitoring requirements]. ExoGENI monitoring data will be available at the [https://gmoc-db.grnoc.iu.edu/api-demo GMOC SNAPP Home].

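As a sketch of the kind of data a Nagios aggregator collects before forwarding it upstream, the snippet below tallies service states from a `status.dat`-style dump. The host and service names are made-up samples, not real ExoGENI output.

```python
# Hedged sketch: summarize service states from a Nagios status.dat-style dump.
# The hosts and services below are illustrative, not real ExoGENI rack data.
SAMPLE = """servicestatus {
host_name=exo-worker-1
service_description=SSH
current_state=0
}
servicestatus {
host_name=exo-worker-2
service_description=SSH
current_state=2
}"""

# Nagios plugin return codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
STATE_NAMES = {0: "OK", 1: "WARNING", 2: "CRITICAL", 3: "UNKNOWN"}

def summarize(status_text):
    """Count services in each state across all servicestatus blocks."""
    counts = {}
    for block in status_text.split("servicestatus {")[1:]:
        for line in block.splitlines():
            line = line.strip()
            if line.startswith("current_state="):
                state = STATE_NAMES[int(line.split("=", 1)[1])]
                counts[state] = counts.get(state, 0) + 1
    return counts

print(summarize(SAMPLE))  # prints {'OK': 1, 'CRITICAL': 1}
```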
== ExoGENI Site Requirements ==
ExoGENI racks have the following site requirements:
 * Network Setup - Define how the rack will connect to the Internet and to the GENI backbones, e.g. regional connections, connection speed, VLANs to be used, etc.
 * Site Security Requirements - Determine changes required for rack connectivity, such as FV rules, IP filters, etc.
 * Address assignment for rack components - Define which addresses, subnet masks, and routing are to be configured for the rack components.
 * Power requirements - Based on site requirements.
 * Administrative accounts - Setup of the site administrator account that will be created on the management/head node.
 * Delivery logistics - Details for ''where'' the rack is to be delivered, ''who'' will accept the delivery, and ''when'' the delivery will take place.  Also covers any physical restrictions for the rack delivery.

If you are interested in becoming an ExoGENI deployment site, please contact us at [mailto:ibaldin@renci.org].
-----

= Get Your Own GENI Rack =
If you are interested in making your own GENI Starter Rack, the GPO can help with the following:

    * Develop specifications for GENI racks, defining storage, compute, network resources, etc.
    * Evaluate, integrate, and manage new software and configurations for the rack solution.
    * Test and integrate early rack hardware and software.
    * Define acceptance criteria to demonstrate successful rack deployment.
To get started, please contact [mailto:help@geni.net].

If you would like to make your own InstaGENI or ExoGENI rack, or to be considered as a potential site for the next phase of funded deployments, contact the project PI:
   * InstaGENI PI contact: [mailto:rick.mcgeer@hp.com]
   * ExoGENI PI contact: [mailto:ibaldin@renci.org]


----
{{{
#!html
Email <a href="mailto:help@geni.net"> help@geni.net </a> for GENI support or email <a href="mailto:luisa.nevers@bbn.com">me</a> with feedback on this page!
}}}