wiki:Internet Scale Overlay Hosting

Version 39 (modified by Christopher Small, 14 years ago)

--

Project Number

1578

Project Title

Internet Scale Overlay Hosting
a.k.a. SPP

Technical Contacts

PI: Jon Turner
Patrick Crowley pcrowley@wustl.edu

Participating Organizations

Washington University, St. Louis, MO

Scope

The objective of this project is to acquire, assemble, deploy, and operate five high-performance overlay hosting platforms and make them available to the research community as part of the emerging GENI infrastructure. These systems will be hosted in the Internet2 backbone at locations to be determined. We will provide a control interface compatible with the emerging GENI control framework, allowing the network-scale control software provided by Princeton to configure the systems in response to requests from research users. The project will leverage and extend our Supercharged PlanetLab Platform (SPP) to provide an environment in which researchers can experiment with the kinds of capabilities that will ultimately be integrated into GENI. We also plan to incorporate the NetFPGA to enable experimentation with hardware packet processing in the overlay context.

Current Capabilities

Three Supercharged PlanetLab Platforms have been deployed, in Washington DC, Kansas City, and Salt Lake City.

Milestones

SPP: Limited research available
SPP: Component Manager ICD
SPP: User Web Site
SPP: S2.a geniwrapper
SPP: S2.b rspec
SPP: S2.c userdoc
SPP: S2.d demo6
SPP: S2.e demo7
SPP: S2.f demo8
SPP: S2.g tutorial7
SPP: S2.h tutorial8
SPP: S2.i ops support
SPP: S2.j secreview
SPP: S2.k opsreview
SPP: S2.l transition plan
SPP: S2.m code
SPP: S2.n openflow
SPP: S2.o interfacedoc

Project Technical Documents

Links to wiki pages for the project's technical documents go here. The list should include any documents from the working groups, as well as other useful documents. Projects may have a full tree of wiki pages here.

Link to WUSTL wiki Internet Scale Overlay Hosting
SPP System Architecture

Quarterly Status Reports

4Q08 Status Report
1Q09 Status Report
2Q09 Status Report
3Q09 Status Report

Recommended reading

Main Project Page
GEC Presentation (10/2008)
SIGCOMM 2007 Paper on SPP Nodes
ANCS 2006 Paper on a GENI Backbone Platform Architecture

Spiral 1 Connectivity

SPP nodes will be located at Internet2 backbone sites. Five sites are currently planned for GENI (see the plan briefed at GEC3). SPP nodes will tag outgoing packets with VLAN tags and will expect VLAN tags on input; however, they will not perform VLAN switching. To view an SPP node configuration with connection requirements, click here.
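Tagging every outgoing packet amounts to inserting a 4-byte IEEE 802.1Q header after the destination and source MAC addresses. A minimal illustrative sketch of that step (this is not the SPP implementation, and the frame contents below are made up):

```python
import struct

TPID = 0x8100  # IEEE 802.1Q Tag Protocol Identifier

def add_vlan_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the 12-byte dst/src MAC header."""
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID must fit in 12 bits")
    # TCI layout: priority (3 bits) | DEI (1 bit, 0 here) | VLAN ID (12 bits)
    tci = (pcp << 13) | vlan_id
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]

# A minimal untagged frame: dst MAC, src MAC, EtherType 0x0800 (IPv4).
frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800)
tagged = add_vlan_tag(frame, vlan_id=100)
```

A receiving SPP would reverse the operation: check that bytes 12-13 equal 0x8100, read the VLAN ID from the low 12 bits of the TCI, and strip the tag before further processing.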

Size & Power Specifications
The current status for the SPP node looks like this:
ATCA Chassis: 6U rack space, 1200W -48V power, max (via two 1/4" - 20 UNC studs). It can use two redundant power sources, if available; each would need to be rated for at least 25A @ -48V.
Power Supply: 1U rack space, one NEMA 5-15 receptacle (if -48V is not available, this power supply will provide it)
Control Processor: 1U rack space, one NEMA 5-15 receptacle (650W max)
24-port Switch: 1U rack space, one NEMA 5-15 receptacle (240W max)

The total rack space is thus 8U or 9U, depending on whether the external power supply is required. Similarly, either two or three power receptacles are needed. The power requirements are sufficient for any expansion we do inside the ATCA chassis; if we need to add an external piece, it will require additional power receptacles as well as rack space.
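The 8U/9U and two/three-receptacle totals follow directly from the component list above. A quick arithmetic check (figures taken from the spec; this is not a configuration tool):

```python
# Components from the spec: (name, rack units, NEMA 5-15 receptacles).
# The ATCA chassis runs on -48V DC, so it uses no NEMA receptacle itself.
components = [
    ("ATCA chassis", 6, 0),
    ("Control processor", 1, 1),
    ("24-port switch", 1, 1),
]
# Needed only at sites where -48V is not available.
external_psu = ("Power supply", 1, 1)

totals = {}
for need_psu in (False, True):
    parts = components + ([external_psu] if need_psu else [])
    units = sum(u for _, u, _ in parts)
    plugs = sum(r for _, _, r in parts)
    totals[need_psu] = (units, plugs)
    print(f"external PSU={need_psu}: {units}U rack space, {plugs} receptacles")
```

Without the external supply this gives 8U and two receptacles; with it, 9U and three, matching the totals stated above.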

IP addresses

Each SPP will connect to a local Internet2 router using multiple 1 GbE connections; the number of such connections per site will vary from 2 to 4. Each SPP will need an I2 IP address for each such interface. These addresses should be advertised to I2-connected universities, but need not be advertised to the public Internet. Each SPP will also require a separate connection for its control processor. This can be a low-bandwidth (10/100 Mb/s) connection, and should be accessible from any I2-connected university. Public addresses have been requested from Internet2.

IP connectivity between the PlanetLab clearinghouse and the SPP nodes is required.

Layer 2 connections

Layer 2 virtual Ethernets between the Gigabit Ethernet switches at all deployed SPP nodes in Internet2 are required (see the SPP interface block diagram). The NetGear GSM7328S with a 10GbE module is a likely candidate for the Gigabit Ethernet switch.

The SPP terminates VLANs, but does not perform VLAN switching. Initially, we plan for Internet2 to create multiple static VLAN connections with distinct tags between each pair of SPPs in the I2 core PoPs (two SPPs are in Spiral 1). Typically, each adjacent pair of SPPs will be connected by two parallel links. PoP locations for GENI deployment are under discussion with I2.
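Because every pair of SPPs gets its own statically configured links with distinct tags, the plan is simply a full mesh with unique VLAN IDs per parallel link. A hypothetical sketch using the three deployed site codes and an arbitrary starting tag of 1000 (the actual tag values are assigned by Internet2, not by this scheme):

```python
from itertools import combinations

def assign_vlans(spps, links_per_pair=2, first_tag=1000):
    """Give each parallel link of each SPP pair its own VLAN tag."""
    tags = {}
    next_tag = first_tag
    for pair in combinations(sorted(spps), 2):  # full mesh of pairs
        tags[pair] = list(range(next_tag, next_tag + links_per_pair))
        next_tag += links_per_pair
    return tags

# Hypothetical site codes for Washington DC, Kansas City, Salt Lake City.
plan = assign_vlans(["WASH", "KANS", "SALT"])
for pair, vlan_ids in sorted(plan.items()):
    print(pair, vlan_ids)
```

With three sites and two parallel links per pair, this consumes six tags; since no tag is reused, the SPPs can demultiplex traffic per link without doing any VLAN switching themselves.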

To support some kinds of end-to-end non-IP experiments, virtual Ethernets through regional networks to experimental data sources and sinks will also be needed. These will be created incrementally across the spirals, as early experimenters join the integration teams. The endpoints and regional networks are currently TBD, but some likely early candidates will come from other project sites in the PlanetLab control framework in Spiral 1.

GPO Liaison System Engineer

Christopher Small

Related Projects

Attachments (6)