
= Internet Scale Overlay Hosting =

The experimental system prototype for implementing the Internet Scale Overlay Hosting services is named the Supercharged !PlanetLab Platform (SPP). Information on SPP and its tutorial is available at:

* [http://wiki.arl.wustl.edu/index.php/Internet_Scale_Overlay_Hosting Internet Scale Overlay Hosting]
* [http://wiki.arl.wustl.edu/index.php/The_SPP_Tutorial The SPP Tutorial]

SPP can be accessed at Washington University's [http://drn06.arl.wustl.edu/ SPP PlanetLab] site.
| 10 | |
= Internet Scale Overlay Hosting Findings =
SPP is a high-performance substitute for a typical !PlanetLab node, which is normally a PC running a customized Linux kernel that hosts multiple virtual machines via the Linux vServer mechanism. The SPP design modifies this definition to achieve these goals:
* Increase performance
* Scale by incorporating multiple servers
* Use Network Processor blades
* Improve control of computing and networking resources
| 17 | |
Both hardware and software components have been used to implement SPP; a high-level outline is given below:
| 19 | |
'''=> SPP Hardware Components'''

The system consists of a number of processing components connected by an Ethernet switching layer. The most significant parts of the system are the two types of Processing Engines:
* General Purpose Processing Engine (GPE) - A dual-processor blade (Radisys ATCA 4310) with two GbE network interfaces.
* Network Processing Engine (NPE) - A Radisys ATCA 7010 blade with two Intel IXP 2850 Network Processor subsystems. The NPE supports fast-path processing and provides up to 10 Gb/s of I/O bandwidth.
* Line Card (LC) - Handles all input and output.
* Switching Substrate - A chassis switch (Radisys ATCA 2210) with two switches (one fabric switch with 10 GbE ports, one base switch with 1 GbE ports).
* ATCA Chassis - A Schroff Zephyr 5U six-slot ATCA chassis that also allows components to be forcibly rebooted.
* Control Processor - A Dell !PowerEdge 860 that provides maintenance access to the base/fabric switches, serial interfaces, chassis switch blade, and the GPEs.
* NetFPGA - A Xilinx Virtex 2 Pro 50 FPGA PCI card with four network connections available for forwarding.
| 30 | |
'''=> Network Processor Datapath Software'''

Various software components use the Line Card to process incoming and outgoing traffic: one Line Card Network Processor handles incoming traffic and the other handles outgoing traffic. Packet processing is structured as a pipeline in which packets flow across stages, each of which uses one or more Micro Engines (MEs). A detailed description is available at [http://wiki.arl.wustl.edu/index.php/SPP_Datapath_Software SPP Data Path Software].
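The stage-pipeline structure described above can be sketched in miniature. This is an illustrative model only: the stage names (parse, lookup, queue) and the routing table are hypothetical stand-ins, not the actual SPP datapath stages, each of which would run on one or more MEs in the real system.

```python
def parse(pkt):
    # First stage: split a raw "header|payload" string into fields.
    header, _, payload = pkt.partition("|")
    return {"header": header, "payload": payload}

def lookup(pkt):
    # Middle stage: tag the packet with an output port from a
    # (hypothetical) routing table keyed on the header.
    routes = {"A": 1, "B": 2}
    pkt["port"] = routes.get(pkt["header"], 0)  # port 0 = default
    return pkt

def queue_out(pkt):
    # Final stage: emit (port, payload) toward the output side.
    return (pkt["port"], pkt["payload"])

# The pipeline: every packet traverses the stages in order, mirroring
# how packets flow across ME stages on the Line Card.
PIPELINE = [parse, lookup, queue_out]

def process(pkt):
    for stage in PIPELINE:
        pkt = stage(pkt)
    return pkt

print(process("A|hello"))  # (1, 'hello')
```

In the real datapath, each stage runs concurrently on its own Micro Engine(s) and packets stream through; the sequential loop here only shows the logical stage ordering.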
| 34 | |
| 35 | |
'''=> Control Software'''

The central component of the system is the System Resource Manager (SRM). The SRM retrieves slice descriptions from !PlanetLab Central (PLC), creates the slices, and allocates slice resources. A detailed description of the control software is available at [http://wiki.arl.wustl.edu/index.php/SPP_Control_Software SPP Control Software].
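The SRM control flow described above (fetch slice descriptions, create slices, allocate resources) can be sketched as follows. All names here are illustrative assumptions, not the real SPP control-software API; the stand-in for the PLC query returns canned data.

```python
class Slice:
    # Minimal slice record: name plus the resources allocated to it.
    def __init__(self, name, cpu_share, bandwidth_mbps):
        self.name = name
        self.cpu_share = cpu_share
        self.bandwidth_mbps = bandwidth_mbps

def fetch_slice_descriptions():
    # Stand-in for the SRM's retrieval of slice descriptions from
    # PlanetLab Central (PLC); real code would query PLC remotely.
    return [{"name": "demo_slice", "cpu_share": 10, "bandwidth_mbps": 100}]

def create_and_allocate(descriptions, cpu_budget=100, bw_budget=1000):
    # Create a slice for each description that fits within the node's
    # remaining CPU and bandwidth budgets, decrementing as we go.
    slices = []
    for d in descriptions:
        if d["cpu_share"] <= cpu_budget and d["bandwidth_mbps"] <= bw_budget:
            cpu_budget -= d["cpu_share"]
            bw_budget -= d["bandwidth_mbps"]
            slices.append(Slice(d["name"], d["cpu_share"], d["bandwidth_mbps"]))
    return slices

slices = create_and_allocate(fetch_slice_descriptions())
print([s.name for s in slices])  # ['demo_slice']
```

The explicit budget tracking reflects the design goal noted earlier of improved control over computing and networking resources.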
| 39 | |
| 40 | |
Note: Milestone SPP.S2.a (geniwrapper), due 03/31/10, is late. It is not clear that requesting access to the web site listed above makes sense before geniwrapper is delivered.