Changes between Initial Version and Version 1 of EnterpriseGENI/1Q09


Timestamp: 06/29/09 12:58:56
Author: hdempsey@bbn.com

'''Enterprise GENI Q1 2009 Quarterly Report'''[[BR]]
'''Submitted 14 April, 2009 by Guido Appenzeller'''[[BR]]
(Proposal-related information was deleted from this status report by Dempsey)

'''Milestones achieved / Deliverables made'''

Our work is currently focused on designing and implementing our
Enterprise GENI aggregate manager and integrating it with other
projects.

The OpenFlow Hypervisor (OFH) version 0.3 correctly performs network
slicing of OpenFlow switches.  We have successfully demonstrated
"multi-layer" network slicing, i.e., virtual network slices defined by
any combination of the physical layer (L1), link layer (L2), network
layer (L3), and transport layer (L4).
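
To make the idea concrete, the sketch below expresses such multi-layer slices as sets of header-field constraints drawn from any mix of L1-L4. This is an illustration only: the field names follow OpenFlow match conventions, but the dictionary layout, slice names, and example values are hypothetical rather than the OFH's actual configuration syntax.
{{{#!python
# Hypothetical slice definitions: each maps a slice name to the header
# fields (any mix of layers 1-4) that traffic must match to belong to it.
slices = {
    # L1 only: everything arriving on physical port 3
    "port3-experiment": {"in_port": 3},

    # L2 only: a single VLAN
    "vlan100": {"dl_vlan": 100},

    # L3 + L4: HTTP traffic destined to one host
    "http-to-web1": {"nw_dst": "171.64.74.10", "tp_dst": 80},

    # Mixed layers: one user's TCP traffic (by source MAC) on one port
    "aaron": {"in_port": 1, "dl_src": "00:1b:21:3a:5e:10", "nw_proto": 6},
}
}}}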

In an effort to integrate our Enterprise GENI code with PlanetLab, we
met in Denver to discuss the interface between the GENI clearinghouse
and our Enterprise GENI aggregate manager.  As a result of that
meeting, we promised to implement a draft SOAP-based interface by
March 13th, a deadline we met.  Details regarding the interface are
available at http://www.openflowswitch.org/wk/index.php/GENILight.
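
As a rough illustration of what a client-side call against such a SOAP interface could look like, the sketch below uses the zeep SOAP client; the WSDL URL, the ListResources operation name, and its argument are placeholders invented here, not the definitions from the GENILight page.
{{{#!python
# Hypothetical client-side call against a SOAP aggregate-manager interface.
# The endpoint URL and the ListResources operation are placeholders, not
# the actual GENILight definitions.
from zeep import Client

client = Client("https://aggregate.example.stanford.edu/geni?wsdl")
available = client.service.ListResources(credential="...")
print(available)
}}}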

We have rolled out an initial deployment of OpenFlow in one building at
Stanford. The plan is to use this deployment as a GENI substrate. We
currently run live traffic over the substrate to gain experience with
its quality of service and stability.

'''Activities and findings'''

The main activity has been continued development of the Network
Virtualization Software that is the foundation for allowing
experimental use of production networking infrastructure. The basic
components of this infrastructure are the Hypervisor and the Aggregate
Manager.

The OpenFlow Hypervisor (OFH) virtualizes a physical OpenFlow switch
into multiple logical OpenFlow switches, which can be owned and
operated by different experimenters. The OFH appears to a network of
OpenFlow switches as a single controller running as open-source
software on a Linux PC; the switches run the unmodified OpenFlow
protocol (currently version 0.8.9). The OFH is critical to allowing
multiple experimenters to run independent experiments simultaneously
in one physical campus network. The OFH consists of two main parts:
(1) a policy engine that defines the logical switches (e.g. "all HTTP
traffic", "Aaron's traffic", "the network administrator's experimental
traffic between midnight and 3am"), and (2) an OpenFlow mux/demux
that implements the policy by directing OpenFlow commands to/from each
OpenFlow controller and its network of logical OpenFlow switches.
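
The sketch below models those two parts under the simplifying assumption of exact-match policies; the class names, fields, and connection objects are illustrative and are not FlowVisor's implementation.
{{{#!python
# Illustrative-only model of the two OFH parts described above: a policy
# engine that decides which logical switch (slice) owns a piece of traffic,
# and a mux/demux that relays OpenFlow messages accordingly.

class PolicyEngine:
    """Maps traffic to slices using simple exact-match rules."""

    def __init__(self):
        # Each entry: (slice name, header-field match, controller address)
        self.slices = []

    def add_slice(self, name, match, controller):
        self.slices.append((name, match, controller))

    def classify(self, fields):
        """Return the controller owning a packet with these header fields."""
        for _name, match, controller in self.slices:
            if all(fields.get(k) == v for k, v in match.items()):
                return controller
        return None  # no slice claims this traffic


class OpenFlowDemux:
    """Relays messages between per-slice controllers and one physical switch."""

    def __init__(self, policy, switch_conn):
        self.policy = policy
        self.switch = switch_conn   # connection to the physical switch
        self.controllers = {}       # controller address -> connection object

    def on_packet_in(self, fields, raw_message):
        # Switch-to-controller direction: hand the event to the owning slice.
        controller = self.policy.classify(fields)
        if controller in self.controllers:
            self.controllers[controller].send(raw_message)

    def on_flow_mod(self, controller, flow_mod):
        # Controller-to-switch direction: a real hypervisor would first
        # rewrite/intersect the flow_mod with the slice's flowspace; this
        # sketch forwards it unchanged.
        self.switch.send(flow_mod)
}}}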

Current implementation status: Version 0.3 of the OFH has successfully
run five distinct research projects, each in its own network slice.
The underlying physical test network includes switches and routers
from multiple manufacturers (HP, NEC, Juniper, Cisco, etc.), with
nodes at Stanford, on Internet2 (Houston, New York, Los Angeles), and
on JGNPlus (Japan).  Version 0.3 of the OFH has also been used to
slice wireless networks in a project called OpenRoads, where each
mobile node has its own slice with its own corresponding mobility
manager.  Plans are underway to incorporate the OFH (now renamed
"FlowVisor") into the production OpenFlow deployment in the Gates
building.

The Aggregate Manager will build on top of the OFH -- essentially, the
Aggregate Manager is an OpenFlow controller that controls a subset of
the network resources, as defined by the local administrator.  We are
currently drafting the aggregate manager architecture and integrating
it with the Hypervisor.  We proposed an API between the Aggregate
Manager and the clearinghouse, as mentioned above.
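
A minimal sketch of that division of labor is shown below, assuming a hypothetical add_slice() hypervisor call and an administrator-delegated flowspace (none of the names come from the actual code): the manager restricts each request to the delegated flowspace before asking the hypervisor to create the slice.
{{{#!python
# Hypothetical aggregate-manager core: it only hands out flowspace inside
# the administrator-defined subset, then asks the hypervisor (e.g. the
# PolicyEngine sketched earlier) to create the slice.

ADMIN_FLOWSPACE = {"dl_vlan": 3000}   # illustrative admin-delegated subset


class AggregateManager:
    def __init__(self, hypervisor):
        self.hypervisor = hypervisor  # object exposing add_slice(name, match, ctrl)

    def create_sliver(self, name, requested_match, controller):
        # Start from the experimenter's request, then overlay the admin
        # constraints so they always win (a real implementation would
        # compute a proper flowspace intersection instead).
        granted = dict(requested_match)
        granted.update(ADMIN_FLOWSPACE)
        self.hypervisor.add_slice(name, granted, controller)
        return granted
}}}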

'''Project participants'''

 * Rob Sherwood (Hypervisor Design)
 * Srini Seetharaman (Aggregate Manager)
 * David Underhill (User Interfaces)
 * KK Yap (Wireless Network Testing)
 * Glen Gibb (Hypervisor Testing)
 * Guido Appenzeller (Project Manager)

'''Publications (individual and organizational)'''

No publications yet, but work involving the OpenFlow Hypervisor is
under submission to both SIGCOMM 2009 and MobiCom.

'''Collaborations'''

We are continuing discussions with both the PlanetLab and DRAGON
project groups about code integration and potential collaboration.
     89