Enterprise GENI Q1 2009 Quarterly Report
Submitted 14 April, 2009 by Guido Appenzeller
(Proposal-related information was deleted from this status report by Dempsey)

Milestones achieved / Deliverables made

Our work is currently focused on designing and implementing our Enterprise GENI aggregate manager and integrating it with other projects.

The OpenFlow Hypervisor (OFH) version 0.3 correctly performs network slicing of OpenFlow switches. We have successfully demonstrated "multi-layer" network slicing, i.e., virtual network slices defined by any combination of physical-layer (L1), link-layer (L2), network-layer (L3), and transport-layer (L4) fields.
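
To make the multi-layer idea concrete, the sketch below shows what a slice definition combining fields from several layers might look like. This is a minimal illustration of ours, not the OFH's actual configuration format; the SliceSpec and FlowspaceRule names, the controller hostname, and the field names are all assumptions.

```python
# Hypothetical sketch of a multi-layer slice definition.
# The structures and field names are illustrative only; they are not the
# OFH's actual configuration format.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowspaceRule:
    """One match rule; unset fields are wildcards."""
    dl_vlan: Optional[int] = None      # link layer (L2): VLAN tag
    nw_src: Optional[str] = None       # network layer (L3): source prefix
    nw_dst: Optional[str] = None       # network layer (L3): destination prefix
    tp_dst: Optional[int] = None       # transport layer (L4): destination port

@dataclass
class SliceSpec:
    name: str
    controller_url: str                # experimenter's own OpenFlow controller
    flowspace: list = field(default_factory=list)

# A slice that owns all HTTP traffic to 10.0.0.0/8 on VLAN 74
# (hypothetical controller hostname and addresses).
http_slice = SliceSpec(
    name="aaron-http-experiment",
    controller_url="tcp:exp-controller.example.edu:6633",
    flowspace=[FlowspaceRule(dl_vlan=74, nw_dst="10.0.0.0/8", tp_dst=80)],
)
```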

In an effort to integrate our Enterprise GENI code with PlanetLab, we met in Denver to discuss the interface between the GENI clearinghouse and our Enterprise GENI aggregate manager. As a result of that meeting, we committed to implementing a draft SOAP-based interface by March 13th, a deadline we met. Details regarding the interface are available at http://www.openflowswitch.org/wk/index.php/GENILight.
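
For readers unfamiliar with SOAP-style aggregate interfaces, the sketch below shows roughly what a request to such an interface could look like, built with Python's standard library. The endpoint URL, XML namespace, and CreateSliver operation name are hypothetical placeholders of ours; the actual draft interface is documented at the GENILight page linked above.

```python
# Illustrative only: the endpoint, namespace, and operation name below are
# assumptions, not the actual draft GENILight interface.
import urllib.request

ENDPOINT = "https://aggregate.example.org/geni-light"   # hypothetical URL

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CreateSliver xmlns="urn:example:geni-light"> <!-- hypothetical operation -->
      <sliceName>aaron-http-experiment</sliceName>
      <rspec>flowspace description elided</rspec>
    </CreateSliver>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example:geni-light#CreateSliver"},
)
with urllib.request.urlopen(req) as resp:   # prints the SOAP response body
    print(resp.read().decode("utf-8"))
```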

We have rolled out an initial deployment of OpenFlow in one building at Stanford. The plan is to use this deployment as a GENI substrate. We currently run live traffic over the substrate to gain experience with its quality of service and stability.

Activities and findings

The main activity has been continued development of the Network Virtualization Software that is the foundation for allowing experimental use of production networking infrastructure. The basic components of this infrastructure are the Hypervisor and the Aggregate Manager.

The OpenFlow Hypervisor (OFH) virtualizes a physical OpenFlow switch into multiple logical OpenFlow switches, which can be owned and operated by different experimenters. To a network of OpenFlow switches, the OFH appears as a single controller running as open-source software on a Linux PC; the switches run the unmodified OpenFlow protocol (currently version 0.8.9). The OFH is critical to allowing multiple experimenters to run independent experiments simultaneously in one physical campus network. The OFH consists of two main parts: (1) a policy engine that defines the logical switches (e.g., "all HTTP traffic", "Aaron's traffic", "the network administrator's experimental traffic between midnight and 3am"), and (2) an OpenFlow mux/demux that implements the policy by directing OpenFlow commands to/from an OpenFlow controller and its network of logical OpenFlow switches.
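
The simplified sketch below illustrates the mux/demux step described above: a message arriving from a switch is matched against each slice's policy and relayed to the controller of the slice that owns the traffic. The Slice class, policy callables, and connection objects are placeholders of ours, not the OFH's actual code.

```python
# Simplified sketch of the OFH mux/demux idea; the Slice abstraction and
# connection objects here are placeholders, not the actual OFH implementation.

class Slice:
    def __init__(self, name, matches, controller_conn):
        self.name = name
        self.matches = matches                    # callable: flow fields -> bool
        self.controller_conn = controller_conn    # channel to this slice's controller

def demux_packet_in(slices, flow_fields, raw_msg):
    """Relay a switch-to-controller message to the controller of the slice
    whose policy covers this flow; return None if no slice claims it."""
    for s in slices:
        if s.matches(flow_fields):
            s.controller_conn.send(raw_msg)
            return s.name
    return None

class FakeConn:
    """Stand-in for a real connection to an experimenter's controller."""
    def send(self, msg):
        pass

# Example policies: one slice owns HTTP traffic (L4), another owns VLAN 74 (L2).
slices = [
    Slice("http-slice", lambda f: f.get("tp_dst") == 80, FakeConn()),
    Slice("vlan74-slice", lambda f: f.get("dl_vlan") == 74, FakeConn()),
]

print(demux_packet_in(slices, {"tp_dst": 80, "nw_dst": "10.0.0.1"}, b"..."))  # http-slice
```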

Current implementation status: Version 0.3 of the OFH has successfully run 5 distinct research projects, each in its own network slice. The underlying physical test network includes switches and routers from multiple manufacturers (HP, NEC, Juniper, Cisco, etc.) and includes nodes at Stanford, on Internet2 (Houston, New York, Los Angeles), and on JGNPlus (Japan). Version 0.3 of the OFH has also been used to slice wireless networks in a project called OpenRoads, where each mobile node has its own slice with its own corresponding mobility manager. Plans are underway to incorporate the OFH (now renamed "FlowVisor") into the production OpenFlow deployment in the Gates building.

The Aggregate Manager will build on top of the OFH -- essentially, the Aggregate Manager is an OpenFlow controller that controls a subset of the network resources, as defined by the local administrator. We are currently drafting the Aggregate Manager architecture and integrating it with the Hypervisor. As mentioned above, we have proposed an API between the Aggregate Manager and the clearinghouse.
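
As a rough sketch of this relationship, the example below shows how a local administrator might delegate a subset of the network's flowspace to the Aggregate Manager, which would then only grant slice requests that fit inside that delegation. The configuration format, switch identifiers, and helper function are ours, purely for illustration.

```python
# Illustrative only: the delegation format and names are ours, not the
# actual Aggregate Manager configuration.

# Resources the local administrator delegates to GENI experiments
# (hypothetical switch identifiers).
DELEGATED_RESOURCES = {
    "switches": ["of-switch-1", "of-switch-2"],
    "flowspace": [{"dl_vlan": 74}, {"tp_dst": 80}],   # slices must fit inside this
}

def request_fits_delegation(requested_flowspace, delegation):
    """Rough check: every requested rule must agree with some delegated rule
    on all fields that the delegated rule pins down."""
    def covered(req, grant):
        return all(req.get(k) == v for k, v in grant.items())
    return all(any(covered(req, grant) for grant in delegation["flowspace"])
               for req in requested_flowspace)

# A request for HTTP traffic fits the delegated flowspace...
print(request_fits_delegation([{"tp_dst": 80, "nw_dst": "10.0.0.0/8"}],
                              DELEGATED_RESOURCES))   # True
# ...while a request for SSH traffic does not.
print(request_fits_delegation([{"tp_dst": 22}], DELEGATED_RESOURCES))   # False
```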

Project participants

Rob Sherwood (Hypervisor Design)
Srini Seetharaman (Aggregate Manager)
David Underhill (User Interfaces)
Kk Yap (Wireless Network Testing)
Glen Gibb (Hypervisor Testing)
Guido Appenzeller (Project Manager)

Publications (individual and organizational)

No publications yet, but work involving the OpenFlow Hypervisor is under submission to both SIGCOMM 2009 and MobiCom.

Collaborations

We are continuing discussions with the PlanetLab and DRAGON project groups regarding both code integration and potential collaboration.
