'''Enterprise GENI Q1 2009 Quarterly Report'''[[BR]]
'''Submitted 14 April, 2009 by Guido Appenzeller'''[[BR]]
(Proposal-related information was deleted from this status report by Dempsey)

Milestones achieved / Deliverables made

Our work is currently focused on designing and implementing our
Enterprise GENI aggregate manager and integrating it with other
projects.

The OpenFlow Hypervisor (OFH) version 0.3 correctly performs network
slicing of OpenFlow switches. We have successfully demonstrated
"multi-layer" network slicing, i.e., virtual network slices that are
defined by any combination of physical-layer (l1), link-layer (l2),
network-layer (l3), and transport-layer (l4) criteria.
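
To make the combination concrete: in this model a slice is simply a
conjunction of match criteria drawn from different layers. Below is a
minimal illustrative sketch; the field names and matching helper are
hypothetical placeholders of ours, not the OFH policy syntax.

{{{
#!python
import ipaddress

# Hypothetical multi-layer slice definition; field names are
# illustrative, not the actual OFH policy syntax.
http_slice = {
    "dl_type": 0x0800,                                # link layer (l2): IPv4 over Ethernet
    "nw_dst": ipaddress.ip_network("171.64.0.0/14"),  # network layer (l3): a campus prefix
    "tp_dst": 80,                                     # transport layer (l4): HTTP
}

def in_slice(pkt, spec):
    """A packet belongs to the slice only if every criterion matches."""
    for field, want in spec.items():
        have = pkt.get(field)
        if have is None:
            return False
        if isinstance(want, ipaddress.IPv4Network):
            if ipaddress.ip_address(have) not in want:
                return False
        elif have != want:
            return False
    return True

# e.g. an HTTP packet headed for a host inside the campus prefix:
in_slice({"dl_type": 0x0800, "nw_dst": "171.64.74.1", "tp_dst": 80},
         http_slice)   # -> True
}}}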

In an effort to integrate our Enterprise GENI code with PlanetLab, we
met in Denver to discuss the interface between the GENI clearing
house and our Enterprise GENI aggregate manager. As a result of that
meeting, we promised to implement a draft SOAP-based interface by
March 13th, a deadline we met. Details regarding the interface are
available at http://www.openflowswitch.org/wk/index.php/GENILight.
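
To give a flavor of how a clearing house might invoke such an
interface, here is a minimal sketch using the suds SOAP library; the
endpoint URL and operation names are placeholders of ours, and the
actual operations are specified on the GENILight page above.

{{{
#!python
from suds.client import Client

# Hypothetical endpoint: the aggregate manager would publish a WSDL
# describing its operations. URL and operation names are placeholders.
client = Client("https://aggregate.example.edu/geni-am?wsdl")

# Ask the aggregate what resources it offers, then request a slice.
resources = client.service.ListResources()
ticket = client.service.CreateSlice("experiment-1", resources)
}}}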

We have rolled out an initial deployment of OpenFlow in one building
at Stanford. The plan is to use this deployment as a GENI substrate.
We currently run live traffic over it to gain experience with its
quality of service and stability.

Activities and findings

The main activity has been continued development of the Network
Virtualization Software that is the foundation for allowing
experimental use of production networking infrastructure. The basic
components of this infrastructure are the Hypervisor and the
Aggregate Manager.

The OpenFlow Hypervisor (OFH) virtualizes a physical OpenFlow switch
into multiple logical OpenFlow switches, which can be owned and
operated by different experimenters. To a network of OpenFlow
switches, the OFH appears as a single controller running as
open-source software on a Linux PC; the switches run the unmodified
OpenFlow Protocol (currently Version 0.8.9). The OFH is critical to
allowing multiple experimenters to run independent experiments
simultaneously in one physical campus network. The OFH consists of
two main parts: (1) a policy engine that defines the logical switches
(e.g. "all http traffic", "Aaron's traffic", "Network administrator's
experimental traffic between midnight and 3am"), and (2) an OpenFlow
mux/demux that implements the policy by directing OpenFlow commands
to/from an OpenFlow controller and its network of logical OpenFlow
switches.
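
The sketch below outlines that two-part structure; the class and
method names are our illustration, not the actual OFH code.

{{{
#!python
# Illustrative sketch of the OFH's two parts; names are hypothetical.

class PolicyEngine:
    """Part (1): defines the logical switches ("slices")."""
    def __init__(self):
        self.slices = {}          # slice name -> match predicate

    def add_slice(self, name, predicate):
        self.slices[name] = predicate

    def classify(self, flow):
        """Names of all slices whose policy covers this flow."""
        return [n for n, pred in self.slices.items() if pred(flow)]

class OpenFlowMux:
    """Part (2): directs OpenFlow messages between each slice's
    controller and its network of logical switches."""
    def __init__(self, policy, controllers):
        self.policy = policy
        self.controllers = controllers   # slice name -> controller

    def on_switch_event(self, flow, event):
        # Deliver a switch event only to the controllers whose
        # slice covers the flow that triggered it.
        for name in self.policy.classify(flow):
            self.controllers[name].handle(event)

# The example policies from the text:
policy = PolicyEngine()
policy.add_slice("all-http", lambda f: f.get("tp_dst") == 80)
policy.add_slice("admin-late-night",
                 lambda f: 0 <= f.get("hour", -1) < 3)
}}}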

Current implementation status: Version 0.3 of the OFH has
successfully run 5 distinct research projects, each in its own
network slice. The underlying physical test network includes switches
and routers from multiple manufacturers (HP, NEC, Juniper, Cisco,
etc.) and includes nodes at Stanford, Internet2 (Houston, New York,
Los Angeles), and JGNPlus (Japan). Version 0.3 of the OFH has also
been used to slice wireless networks in a project called OpenRoads,
where each mobile node has its own slice with its own corresponding
mobility manager. Plans are underway to incorporate the OFH (now
renamed "FlowVisor") into the production OpenFlow deployment in the
Gates building.

The Aggregate Manager will build on top of the OFH -- essentially,
the Aggregate Manager is an OpenFlow controller that controls a
subset of the network resources, as defined by the local
administrator. We are currently drafting the aggregate manager
architecture and integrating it with the Hypervisor. We have proposed
an API between the Aggregate Manager and the clearing house, as
mentioned above.
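
As a minimal sketch of that delegation model (hypothetical names; the
architecture itself is still in draft):

{{{
#!python
# Hypothetical sketch: the aggregate manager hands out resources only
# from the subset the local administrator has delegated to it.

class AggregateManager:
    def __init__(self, delegated):
        self.delegated = set(delegated)   # e.g. VLAN IDs or ports
        self.allocated = {}               # slice name -> resources

    def create_slice(self, name, requested):
        requested = set(requested)
        used = set()
        for r in self.allocated.values():
            used |= r
        if not requested <= self.delegated - used:
            raise ValueError("request exceeds unallocated delegated resources")
        self.allocated[name] = requested
        return requested

# A local admin delegates VLANs 100-109; an experiment takes two.
am = AggregateManager(delegated=range(100, 110))
am.create_slice("experiment-1", [100, 101])
}}}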

Project participants

Rob Sherwood (Hypervisor Design)
Srini Seetharaman (Aggregate Manager)
David Underhill (User Interfaces)
KK Yap (Wireless Network Testing)
Glen Gibb (Hypervisor Testing)
Guido Appenzeller (Project Manager)

Publications (individual and organizational)

No publications as of yet, but work involving the OpenFlow Hypervisor
is under submission to both SIGCOMM 2009 and MobiCom.

Collaborations

We are continuing discussions with both the PlanetLab and DRAGON
project groups regarding code integration and potential collaboration.