wiki:OverlayHostingNodes

Version 2 (modified by hdempsey@bbn.com, 13 years ago)

Project Number

1578

Project Title

Internet Scale Overlay Hosting
a.k.a. SPP

Technical Contacts

Principal Investigator: Jon Turner
Principal Investigator: Patrick Crowley pcrowley@wustl.edu

Participating Organizations

Washington University, St. Louis, MO

Scope

The objective of the project is to acquire, assemble, deploy and operate five high performance overlay hosting platforms, and make them available for use by the research community, as part of the emerging GENI infrastructure. These systems will be hosted in the Internet 2 backbone at locations to be determined. We will provide a control interface compatible with the emerging GENI control framework that will allow the network-scale control software provided by Princeton to configure the systems in response to requests from research users. The project will leverage and extend our Supercharged PlanetLab Platform (SPP) to provide an environment in which researchers can experiment with the kind of capabilities that will ultimately be integrated into GENI. We also plan to incorporate the netFPGA to enable experimentation with hardware packet processing, in the overlay context.

Milestones

SPP: Develop initial component interface
SPP: Deploy two SPP nodes
SPP: Architecture and Design Documentation (Draft)
SPP: Limited research available
SPP: Component Manager ICD
SPP: User Web Site

Project Technical Documents

Links to wiki pages for the project's technical documents go here. List should include any document in the working groups, as well as other useful documents. Projects may have a full tree of wiki pages here.

Link to WUSTL wiki Internet Scale Overlay Hosting

Recommended reading

http://www.arl.wustl.edu/~jst/pubs/sigcomm07.pdf
http://www.arl.wustl.edu/~jst/pubs/ancs06-turner.pdf

Spiral 1 Connectivity

SPP nodes will be located at Internet2 backbone sites. SPP nodes are not being developed to terminate VLAN tags, so a VLAN-aware switch is required to connect to the SPP's 1 Gbps I/O. To view an SPP node configuration with connection requirements, click here.

Size & Power Specifications

The current SPP node specifications are:

ATCA Chassis: 6U rack space, 1200W -48V power, max (via two 1/4" - 20 UNC studs). It can use two redundant power sources, if available; each must be rated for at least 25A @ -48V.
Power Supply: 1U rack space, one NEMA 5-15 receptacle (if -48V is not available, this power supply provides it)
Control Processor: 1U rack space, one NEMA 5-15 receptacle (650W max)
24-port Switch: 1U rack space, one NEMA 5-15 receptacle (240W max)

The total rack space is thus 8U or 9U, depending on whether the external power supply is required. The number of power receptacles needed is likewise two or three. The stated power budget covers any expansion we do inside the ATCA chassis; any external additions would need their own power receptacles and rack space.
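The rack-space and receptacle totals above can be tallied in a short sketch. The component figures are taken from the list above; the function name and structure are illustrative, not part of any SPP tooling.

```python
# Sketch: tally rack units and NEMA 5-15 receptacles for one SPP node,
# using the component figures quoted above.

components = {
    # name: (rack units, NEMA 5-15 receptacles)
    "ATCA chassis": (6, 0),       # runs on -48 V DC, not a NEMA receptacle
    "Control processor": (1, 1),  # 650 W max
    "24-port switch": (1, 1),     # 240 W max
}

def totals(need_external_supply):
    """Return (rack units, receptacles); add the 1U AC-to-(-48 V)
    power supply when facility -48 V DC is unavailable."""
    parts = dict(components)
    if need_external_supply:
        parts["External -48V supply"] = (1, 1)
    rack_units = sum(u for u, _ in parts.values())
    receptacles = sum(r for _, r in parts.values())
    return rack_units, receptacles

print(totals(False))  # site provides -48 V DC: (8, 2)
print(totals(True))   # external supply needed: (9, 3)
```

This reproduces the 8U-with-two-receptacles versus 9U-with-three-receptacles cases described above.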

IP addresses

Each SPP needs one public IP address, which is used by experimenters and others who want to access the SPP. The virtual Ethernet interfaces do not use IP addresses. There is a possible need for a non-advertised IP address for each SPP for debugging, console, and other types of maintenance access. Public addresses have been requested from Internet2.

IP connectivity between the PlanetLab clearinghouse and the SPP nodes is required.

Layer 2 connections

Layer 2 virtual Ethernets between the Gigabit Ethernet switches at all deployed SPP nodes in Internet2 are required (see the SPP interface block diagram). The NetGear GSM7328S with a 10GbE module is a likely candidate for the Gigabit Ethernet switch.

Although the SPP does not terminate VLANs, it is VLAN-aware (it can identify tags, for example) and should eventually be able to manipulate VLANs as part of the PlanetLab control framework. Initially, we plan for Internet2 to create static virtual Ethernets with distinct tags between each pair of SPPs in the I2 core PoPs (two SPPs are in Spiral 1). PoP locations for GENI deployment are under discussion with I2.
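The "distinct tag per SPP pair" plan amounts to a full mesh of point-to-point virtual Ethernets. A minimal sketch of the bookkeeping, assuming hypothetical site names and an arbitrary tag base (neither is from the Internet2 plan):

```python
from itertools import combinations

# Sketch: assign a distinct static VLAN tag to the virtual Ethernet
# between each pair of SPP sites. Site names and the tag base (3000)
# are hypothetical placeholders, not actual Internet2 assignments.

sites = ["SITE-A", "SITE-B", "SITE-C", "SITE-D", "SITE-E"]  # five planned nodes
BASE_TAG = 3000

vlan_map = {
    pair: BASE_TAG + i
    for i, pair in enumerate(combinations(sorted(sites), 2))
}

# Five sites give C(5,2) = 10 point-to-point virtual Ethernets,
# each with its own 802.1Q tag.
for (a, b), tag in sorted(vlan_map.items()):
    print(f"{a} <-> {b}: VLAN {tag}")
```

For the two Spiral 1 SPPs this mesh degenerates to a single virtual Ethernet with one tag; the full mesh only matters as more nodes are deployed.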

To support some kinds of end-to-end non-IP experiments, virtual Ethernets through regional networks to experiment data sources and sinks will also be needed. These will be created incrementally across the spirals as early experimenters join the integration teams. The endpoints and regional networks are currently TBD, but some likely early candidates will come from other project sites in the PlanetLab control framework in Spiral 1.

GPO Liaison System Engineer

John Jacob jjacob@geni.net

Related Projects

Project-Name Includes non-GENI projects.

Attachments (2)
