Changes between Initial Version and Version 1 of Internet Scale Overlay Hosting

03/16/09 15:16:26
Jon Turner



  • Internet Scale Overlay Hosting

'''Project Number'''

'''Project Title'''

Internet Scale Overlay Hosting [[BR]]

'''Technical Contacts'''

Principal Investigator: Jon Turner [[BR]]
Principal Investigator: Patrick Crowley

'''Participating Organizations'''

Washington University, St. Louis, MO
The objective of the project is to acquire, assemble, deploy, and operate five high-performance overlay
hosting platforms, and make them available for use by the research community as part of the emerging
GENI infrastructure. These systems will be hosted in the Internet2 backbone at locations to be
determined. We will provide a control interface compatible with the emerging GENI control framework
that will allow the network-scale control software provided by Princeton to configure the systems in
response to requests from research users. The project will leverage and extend our Supercharged
PlanetLab Platform (SPP) to provide an environment in which researchers can experiment with the kinds
of capabilities that will ultimately be integrated into GENI. We also plan to incorporate the !NetFPGA to
enable experimentation with hardware packet processing in the overlay context.
'''Milestones'''

[milestone:"SPP: Develop initial component interface"][[BR]]
[milestone:"SPP: Deploy two SPP nodes"][[BR]]
[milestone:"SPP: Architecture and Design Documentation (Draft)"][[BR]]
[milestone:"SPP: Limited research available"][[BR]]
[milestone:"SPP: Component Manager ICD"][[BR]]
[milestone:"SPP: User Web Site"][[BR]]
'''Project Technical Documents'''

  ''Links to wiki pages for the project's technical documents go here.  List should include any document in the working groups, as well as other useful documents.  Projects may have a full tree of wiki pages here.''

''Link to WUSTL wiki'' [ Internet Scale Overlay Hosting][[BR]]
'''Quarterly Status Reports''' [[BR]]
[ 4Q08 Status Report]
'''Recommended reading'''[[BR]]

[] [[BR]]
[] [[BR]]
[[BR]]'''Spiral 1 Connectivity'''
SPP nodes will be located at Internet2 backbone sites.  Five sites are currently planned for GENI (see [ the plan briefed at GEC3]).  SPP nodes will tag outgoing packets with VLAN tags and will
expect VLAN tags on input; however, they will not perform VLAN switching. To view an SPP node configuration with connection requirements, [ click here.]
''Size & Power Specifications''[[BR]]
The current status for the SPP node is as follows:

ATCA Chassis: 6U rack space, 1200W -48V power, max (via two 1/4" - 20 UNC studs). It can use two redundant power sources, if available; each would need to be rated for at least 25A @ -48V. [[BR]]
Power Supply: 1U rack space, one NEMA 5-15 receptacle (if -48V is not available, this power supply will provide it)[[BR]]
Control Processor: 1U rack space, one NEMA 5-15 receptacle (650W max)[[BR]]
24-port Switch: 1U rack space, one NEMA 5-15 receptacle (240W max)[[BR]]
The total rack space is thus 8U or 9U, depending on whether the external power supply is required. Two or three power receptacles are needed, again depending on the external power supply. The power budget covers any expansion we do inside the ATCA chassis; if we need to add an external piece, it will require additional power receptacles as well as rack space.
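The rack-space and receptacle totals above can be checked with a small sketch. This is purely illustrative (not SPP software); the component names and per-component figures are taken from the list above, and the `need_48v_supply` flag is a hypothetical parameter standing in for whether the facility supplies -48V directly.

```python
# Illustrative tally of one SPP node's footprint, from the figures above.
# Each entry is (rack units, NEMA 5-15 receptacles).
def spp_footprint(need_48v_supply: bool):
    components = {
        "ATCA chassis": (6, 0),       # powered at -48V, not via NEMA 5-15
        "control processor": (1, 1),  # 650W max
        "24-port switch": (1, 1),     # 240W max
    }
    if need_48v_supply:               # only if facility -48V is unavailable
        components["power supply"] = (1, 1)
    units = sum(u for u, _ in components.values())
    outlets = sum(r for _, r in components.values())
    return units, outlets

print(spp_footprint(need_48v_supply=False))  # (8, 2)
print(spp_footprint(need_48v_supply=True))   # (9, 3)
```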
''IP addresses''
Each SPP will connect to a local Internet2 router using multiple 1 GbE connections.
The number of such connections per site will vary from 2 to 4.
Each site will need an I2 IP address for each such interface. These addresses should be advertised to I2-connected
universities, but need not be advertised to the public Internet.
Each SPP will also require a separate connection for its control processor. This can be a low-bandwidth (10/100 Mb/s)
connection, and should be accessible from any I2-connected university. Public addresses have been requested from Internet2.
IP connectivity between the PlanetLab clearinghouse and the SPP nodes is required.
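The per-site address count implied above can be sketched as follows. This is an illustrative calculation only, not project tooling; the function name is made up, and the only inputs are the figures in the text (2 to 4 data interfaces per site, plus one control-processor connection).

```python
# Illustrative count of Internet2 addresses needed per SPP site:
# one per 1 GbE data interface, plus one for the control connection.
def addresses_needed(data_interfaces: int) -> int:
    assert 2 <= data_interfaces <= 4, "each site has 2 to 4 GbE links"
    control_links = 1  # low-bandwidth (10/100 Mb/s) control connection
    return data_interfaces + control_links

print(addresses_needed(2))  # 3, minimum per site
print(addresses_needed(4))  # 5, maximum per site
```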
''Layer 2 connections''
Layer 2 virtual Ethernets between the Gigabit Ethernet switches at all deployed SPP nodes in Internet2 are required (see [ SPP interface block diagram]).  The !NetGear GSM7328S with a 10GbE module is a likely candidate for the Gigabit Ethernet switch.
The SPP does terminate VLANs, but does not perform VLAN switching.  Initially, we plan that Internet2 will create multiple static VLAN connections with distinct tags between each pair of SPPs in the I2 core !PoPs (two SPPs are in Spiral 1). Typically, each adjacent pair of SPPs will be connected by 2 parallel links. PoP locations for GENI deployment are under discussion with I2.
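For readers unfamiliar with VLAN tagging, the sketch below shows what "tagging outgoing packets" means at the frame level. This is not SPP code; the MAC addresses and VLAN ID are invented, and only the 0x8100 TPID and the tag layout follow the IEEE 802.1Q format.

```python
# Illustrative sketch of inserting an 802.1Q VLAN tag into an Ethernet frame,
# as an SPP does on frames it sends onto its Internet2 links.
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    assert 0 <= vlan_id < 4096, "VLAN ID is a 12-bit field"
    dst, src, rest = frame[:6], frame[6:12], frame[12:]
    tci = (pcp << 13) | vlan_id              # priority bits + VLAN ID
    tag = struct.pack("!HH", 0x8100, tci)    # TPID 0x8100, then TCI
    return dst + src + tag + rest            # tag sits before the EtherType

# Minimal IPv4 frame with zeroed MAC addresses, tagged with VLAN 100:
frame = bytes(12) + struct.pack("!H", 0x0800) + b"payload"
tagged = add_vlan_tag(frame, vlan_id=100)
assert tagged[12:14] == b"\x81\x00"          # tag is present after the MACs
```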
To support some kinds of end-to-end non-IP experiments, virtual Ethernets through regional networks to experimental data sources and sinks will also be needed.  These will be created incrementally in the spirals, as early experimenters join the integration teams.  The endpoints and regional networks are currently TBD, but some likely early candidates will come from other project sites in the PlanetLab control framework in Spiral 1.
'''GPO Liaison System Engineer'''

John Jacob
'''Related Projects'''

  ''[http://URL Project-Name]  Includes non-GENI projects.''