Changes between Version 2 and Version 3 of OFIU-GEC12-status
- Timestamp:
- 11/30/11 13:22:42 (12 years ago)
OFIU-GEC12-status
* OpenFlow Training for campus network staff
* Restoration of cross-campus links
* Additional wired and wireless OpenFlow deployments
* ProtoGENI Deployment

= Major Accomplishments =

…

= Description of Work Performed =

__FlowScale deployment and demo at GEC12 and SC11:__ FlowScale, an OpenFlow-based load balancing application deployed at Indiana University, was demonstrated at GEC12 in the plenary and the demo session. FlowScale load balances traffic being fed to the campus IDS system. The goals for the project are not only to provide an inexpensive and customizable load balancer but to:
* Provide a non-mission-critical way to introduce OpenFlow switches into the production campus network
* Introduce OpenFlow technology more widely to the campus networking staff, to assist in wider deployments that can be used by GENI
* Provide an application that GENI and non-GENI campuses can use to connect to GENI
* Provide a production application using OpenFlow technology that is reliable and runs at extremely high capacity (>10 Gb/sec)

At SC11 we demonstrated an additional capability of the FlowScale system: fitting into an elastic cloud application where a remote site dynamically handles a portion of the traffic load. This included deployment of a wide area circuit across the GENI NLR backbone to the SC show floor and the SCinet Research Sandbox.

A poster and slides describing FlowScale are posted on the GENI wiki.
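To make the load-balancing idea concrete, here is a minimal Python sketch of hash-based flow distribution of the kind an OpenFlow IDS load balancer can perform. This is an illustration only, not the actual FlowScale code: the sensor port numbers, the choice of MD5, and the function names are our own assumptions.

```python
import hashlib

# Illustrative sketch (not FlowScale source): split traffic across IDS
# sensor ports by hashing each flow's 5-tuple, so every packet of a given
# flow reaches the same sensor.

SENSOR_PORTS = [1, 2, 3, 4]  # hypothetical switch ports leading to IDS sensors

def sensor_port(src_ip, dst_ip, src_port, dst_port, proto):
    """Pick a sensor port for a flow by hashing its 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % len(SENSOR_PORTS)
    return SENSOR_PORTS[bucket]

# An OpenFlow rule would then match this 5-tuple and output to the chosen
# port; here we only show the bucketing decision.
p1 = sensor_port("10.0.0.1", "10.0.0.2", 40000, 80, "tcp")
p2 = sensor_port("10.0.0.1", "10.0.0.2", 40000, 80, "tcp")
assert p1 == p2  # same flow always maps to the same sensor
```

Hashing the 5-tuple (rather than round-robin per packet) preserves per-flow affinity, so each IDS sensor sees complete flows and can reassemble sessions.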
A web site with a source code repository, documentation, and forums to provide a community for continued development will be available at [http://incntre.iu.edu/flowscale http://incntre.iu.edu/flowscale]

__RouteFlow regional testbed deployment and demo at Open Networking Summit:__ At the Open Networking Summit (ONS) held at Stanford University we demonstrated a hardware deployment of the RouteFlow software, in collaboration with CPqD, Google, and [http://opensourcerouting.org/ Open Source Routing]. The demonstration showed a sample network with BGP and OSPF peerings between OpenFlow switches and connections to commercial routers over wide area circuits between a router at the CIC OmniPoP in Chicago and the Indiana GigaPoP in Indianapolis. RouteFlow uses a virtual routing topology with Quagga (or any other server-based routing software) that is mapped to hardware switches using OpenFlow. More information about the RouteFlow regional hardware testbed is included in the ONS poster and at the RouteFlow [https://sites.google.com/site/routeflow/updates/succesfulldemosatonsnextofeliachangesummerschoolandsupercomputinginseattle site].
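The virtual-to-hardware mapping at the heart of RouteFlow can be sketched in a few lines: routes computed by a server-based routing daemon such as Quagga are translated into OpenFlow-style flow entries on the hardware switch. The field names, interface-to-port table, and function below are illustrative assumptions, not the RouteFlow API.

```python
# Illustrative sketch (not RouteFlow source): translate one RIB entry from
# the virtual routing topology into an OpenFlow-style flow entry.

IF_TO_PORT = {"eth1": 1, "eth2": 2}  # hypothetical virtual interface -> switch port map

def route_to_flow(prefix, next_hop_mac, out_iface, router_mac):
    """Turn a route (prefix via next hop on out_iface) into a flow entry."""
    return {
        "match": {"eth_type": 0x0800, "ipv4_dst": prefix},  # match IPv4 traffic to the prefix
        "actions": [
            ("set_eth_src", router_mac),    # rewrite source MAC to the router's MAC
            ("set_eth_dst", next_hop_mac),  # rewrite destination MAC to the next hop
            ("output", IF_TO_PORT[out_iface]),
        ],
    }

flow = route_to_flow("192.0.2.0/24", "aa:bb:cc:dd:ee:01", "eth1",
                     "aa:bb:cc:dd:ee:ff")
```

The switch then forwards at hardware speed while the routing protocol machinery (BGP/OSPF adjacencies, RIB computation) stays in commodity server software.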
Not only is RouteFlow an experiment in itself, but it also provides an opportunity for developing and deploying Layer 3 infrastructure easily and robustly as part of GENI, where additional experiments can be run on top of the RouteFlow framework. The hardware used for the experiment, 4 Pronto switches, may become part of GENI either as part of the Internet2 NDDI backbone or as part of the Indiana campus or regional networks.

__OpenFlow Training for campus network staff:__ We have conducted a full-day hands-on training for IU campus network staff on OpenFlow technology and GENI. This training was essentially the same as the GENI OpenFlow workshop for network engineers conducted at GEC12. 12 engineers participated in the workshop, and we received a great deal of positive feedback from the participants, who learned the fundamentals of OpenFlow and how it can be applied to problems facing campus networks. This workshop will also be improved and conducted at the next 2 Internet2 Joint Techs conferences. We are also investigating conducting another workshop for campuses in the region at Indianapolis.
The focus of these workshops is to get operators of the network comfortable with the technology and able to suggest areas where it may help them in their daily tasks. Hopefully this will enable a greater deployment of switches and infrastructure that can be utilized by GENI.

__Restoration of Campus Links:__ We have restored the cross-campus link between Indianapolis and Bloomington as a 1 Gig VLAN across the campus networks. We are still exploring ways to possibly get a dedicated 10G circuit between campuses for experimentation, including GENI.

__Deployment to new buildings/IU UITS building moves:__ We are continuing deployments to new switches and new buildings. We have moved OpenFlow connections from additional switches in the Telcom building to the main production switches in the building. We have also added wireless OpenFlow support on the Indianapolis campus. The moves of the IT staff to different offices have been coordinated, and OpenFlow-enabled ports have been moved to the new office locations. Additional OpenFlow capabilities are also available in the Lindley building on the Bloomington campus.
__ProtoGENI deployments:__ We have deployed 2 ProtoGENI switches in the Bloomington Data Center. These will very shortly be attached to GENI as we get the Bloomington switches configured into the GENI OpenFlow slice using the cross-campus link.

== Activities and Findings ==
__SC11:__ Provided assistance in the deployment of the OpenFlow lab at the Interop Las Vegas conference
http://www.interop.com/lasvegas/it-expo/interopnet/openflow-lab/

…

* John Meylor
* Ali Khalfan
* Jason Muller
* Ron Milford

…

== Collaborations ==

* Clemson (FlowScale)
* Stanford (monitoring)
* CPqD (RouteFlow)
* Google (RouteFlow)
* Open Source Routing/ISC (RouteFlow)
* SCinet (Research Sandbox)

== Publications & Documents ==

Small, C., FlowScale Poster, GENI Engineering Conference 12, Kansas City, MO
http://groups.geni.net/geni/attachment/wiki/OFIU-GEC12-status/FlowScale_poster.pdf

Small, C., RouteFlow Regional Deployment Poster
http://groups.geni.net/geni/attachment/wiki/OFIU-GEC12-status/RF_ONS_poster.pdf

Small, C., Davy, M., OpenFlow Overview from the GEC12 Network Engineer Workshop
http://groups.geni.net/geni/attachment/wiki/OFIU-GEC12-status/geni-of-workshop.pdf