
GENI Engineering Conference 10

March 15, 2011 San Juan, Puerto Rico

Table of Contents

  1. Cognitive Radio (COGRADIO)
  2. Data-Intensive Cloud Control for GENI
  3. Digital Object Registry (DIGOBREG)
  4. EXPERIMENTS: Clemson (Brooks)
  5. EXPERIMENTS: Clemson (Shen)
  7. GENI/Eucalyptus Federated Resource Allocation (GENICloud)
  8. GMOC-GENI Meta Operations (GMOC)
  9. Great Plains Environment for Network Innovation (GpENI)
  10. GENI For Everyone (GPO)
  11. iGENI: A Distributed Network Research Infrastructure for the Global …
  12. GENI IMF: Integrated Measurement Framework and Tools for Cross Layer …
  13. Indigo
  14. Instrumentation Tools for a GENI Prototype (INSTOOLS)
  15. GENI-fying and Federating Autonomous Kansei Wireless Sensor Networks …
  16. GENI-fying and Federating Autonomous Kansei Wireless Sensor Networks …
  17. K-GENI and Virtualized Programmable Platform
  18. Leveraging and Abstracting Measurements with perfSONAR (LAMP I&M)
  19. Programmable Measurements over Texas-based Research Network: LEARN
  20. Maestro
  21. Mid-Atlantic Crossroads (MAX)
  22. Instrumentation and Measurement for GENI
  23. A Prototype of a Million Node GENI (MILNGENI)
  24. netKarma: GENI Provenance Registry (NETKARMA)
  25. NOX-at-Home
  26. NUST OpenFlow
  27. OpenFlow Campus Trials at Clemson University (OFCLEM)
  28. OpenFlow Campus Trials at Indiana University (OFIU)
  29. OFRewind (GLAB)
  30. OpenFlow Campus Trials at University of Washington (OFUWA)
  31. OpenFlow Campus Trials at University of Wisconsin (OFUWI: Mobile …
  32. OpenFlow Campus Trials at University of Wisconsin (OFUWI: Network Coding)
  33. OnTimeMeasure: Centralized and Distributed Measurement Orchestration …
  34. Deploying a Vertically Integrated GENI “Island”: A Prototype GENI …
  35. ProtoGENI
  36. Scalable, Extensible, and Safe Monitoring of GENI (S3MONITOR)
  37. A SCAFFOLD for GENI-based Distributed Services (SCAFFOLD)
  38. Exploiting Insecurity to Secure Software Update Systems (SecureUpdates)
  39. SPLIT Architecture MPLS/OpenFlow
  40. Internet Scale Overlay Hosting (SPP)
  41. TIED: Trial Integration Environment in DETER
  42. Programmable Edge Node (UMLPEN)
  43. UQAM OpenFlow
  44. Virtualizable Wireless Substrate
  45. C-VeT --UCLA Campus Vehicular Testbed: An Open Platform for Vehicular …
  46. A Programmable Facility for Experimentation with Wireless …
  48. OFRewind (TUBerlin/Deutsche Telekom Labs)

Live demonstrations, posters, and presentations at GEC10 highlight results from GENI projects. See project descriptions, posters and presentations here.

Cognitive Radio (COGRADIO)

Demo participants: Dirk Grunwald, Ivan Seskar, Peter Wolniansky
Affiliation: University of Colorado Boulder, WINLAB/Rutgers University, Radio Technology Systems, LLC

The GENI CogRadio project will demonstrate over-the-air data on the current SDR (2.4 GHz/5 GHz) radio using a high-performance OFDM transmitter/receiver design. We will also have a demonstration using the next-generation "WDR" radio, which operates over a very wide frequency range.

Data-Intensive Cloud Control for GENI

Demo participants: Michael Zink, Prashant Shenoy, Jim Kurose, David Irwin, Emmanuel Cecchet
Affiliation: UMASS Amherst

Demonstration of DiCloud budgeting software on Amazon EC2 and S3.

Digital Object Registry (DIGOBREG)

Demo participants: Larry Lannom, Giridhar Manepalli, Jim French
Affiliation: Corporation for National Research Initiatives (CNRI)

Corporation for National Research Initiatives (CNRI) will be demonstrating the functionality of the proposed Measurement Data Archive, which is implemented using the Digital Object Architecture.

The Measurement Data Archive prototype system consists of two components: 1) User Workspace and 2) Object Archive. The User Workspace component is an entry point for users (e.g., experimenters, instrumentation researchers, etc.) to store and transfer measurement data, which could be in a variety of forms (e.g., formatted datasets, raw files, etc.). Data and metadata files managed in the user workspace can be archived for long-term storage in an Object Archive. Once data is archived, a persistent and unique identifier is created.

GENI Proposed Services Poster
Demo Screencast
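
As a minimal sketch of the deposit-and-identify flow described above (the class and method names here are hypothetical illustrations, not CNRI's actual API), an archive that mints a persistent identifier at ingest time might look like:

```python
import uuid

class ObjectArchive:
    """Toy archive: stores opaque data and mints a persistent identifier."""

    def __init__(self):
        self._store = {}

    def archive(self, data, metadata):
        # The persistent, unique identifier is created once, at archive time.
        object_id = str(uuid.uuid4())
        self._store[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def retrieve(self, object_id):
        return self._store[object_id]["data"]

# A user workspace would hand measurement data to the archive and keep the ID:
archive = ObjectArchive()
oid = archive.archive(b"raw measurement rows", {"experiment": "demo-1"})
assert archive.retrieve(oid) == b"raw measurement rows"
```

The key property mirrored from the prototype is that the identifier, not the storage location, is what users keep: anything deposited remains resolvable by that ID.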

EXPERIMENTS: Clemson (Brooks)

Demo participants: Juan Deng, Ilker Oczelik, Richard Brooks
Affiliation: Clemson University

WiMAX is an evolving standard. Through simulations, we have shown that WiMAX system throughput is very sensitive to the settings of two bandwidth contention parameters. ANOVA analysis of simulated DDoS attacks finds that the settings of those two parameters account for over 80% of the variance in system throughput. We are using Rutgers' ORBIT testbed to verify whether this correlation holds when physical radios are used. This work also serves to test the fidelity of the ns-2 simulation for wireless systems. We will first show and explain our existing results, and then show our work to date in replicating them with hardware in the loop. We will have posters and a slide presentation, and will attempt to give an interactive demonstration of using the Rutgers WiMAX testbed.

EXPERIMENTS: Clemson (Shen)

Demo participants: Kang Chen, Ke Xu
Affiliation: Clemson University

Demonstration of a P2P data sharing application over PlanetLab and a locality-based distributed data sharing protocol for MANETs over the ORBIT platform. This demonstration will also show the data sharing application/protocol over federated networks.

Clemson EAGER P2P demo poster


Demo participants:
Affiliation: University at Buffalo

Demonstration of the creation of SSI clusters using GENI nodes, along with the performance speedup achieved relative to individual nodes.

GENI/Eucalyptus Federated Resource Allocation (GENICloud)

Demo participants: Rick McGeer, Andy Bavier, Alex Snoeren, Yvonne Coady
Affiliation: HP Labs, PlanetWorks, U.C. San Diego, U. of Victoria, U. of Illinois, Urbana-Champaign

Demonstration of a persistent Cloud infrastructure over multiple sites and continents, tied to the PlanetLab Control Framework.

GMOC-GENI Meta Operations (GMOC)

Demo participants: Jon-Paul Herron, Luke Fowler
Affiliation: Indiana University, Global Research Network Operations Center

GMOC visualization and integration efforts.

Measurement Manager (GMOC OpenFlow integration) GEC10 poster

Great Plains Environment for Network Innovation (GpENI)

Demo participants: James P.G. Sterbenz, Deep Medhi, Byrav Ramamurthy, Caterina Scoglio, Don Gruenbacher, Greg Monaco, Jeff Verrant, Cort Buffington, David Hutchison, Bernhard Plattner, Joseph B. Evans, Rick McMullen, Baek-Young Choi, Jim Archuleta, Andrew Scott
Affiliation: The University of Kansas, Kansas State University, University of Missouri, University of Nebraska, Great Plains Network, Ciena Government Solutions, Qwest Government Services, KanREN, MOREnet, Lancaster University, ETH Zurich

Demonstration of GpENI (Great Plains Environment for Network Innovation), a programmable testbed for future Internet research. GpENI is an international testbed centered on a Midwest US regional optical network that is programmable at all layers of the protocol stack, using PlanetLab, VINI, and DCN, and interconnected to ProtoGENI in the US and to G-Lab and ResumeNet in Europe. We will demonstrate the topology, functionality, and operations of GpENI. We will additionally demonstrate a chat application on the PlanetLab subaggregate, topology control in the VINI subaggregate, and the creation of VLANs in the DCN subaggregate.

GENI For Everyone (GPO)

Demo participants: GPO
Affiliation: GPO

We will provide a hands-on opportunity for attendees to use the GENI command-line tool 'omni' to reserve live production OpenFlow, PlanetLab, and ProtoGENI resources, and briefly interact with them. Attendees will leave the demo with a confident understanding of how they could reserve GENI resources and use them in their research, links to documentation to help them do so, and contact information if they need any assistance.

GENI For Everyone demo poster
GENI For Everyone info page on GENI wiki
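
The hands-on workflow described above can be sketched as a short command-line session. The command names follow omni's general usage of the time; the aggregate URL, slice name, and RSpec file are illustrative placeholders, so treat this as a sketch rather than a verbatim transcript.

```shell
# Illustrative omni session (URL, slice name, and RSpec are placeholders):
omni.py createslice myslice                         # register a GENI slice
omni.py -a https://am.example.net listresources     # see what an aggregate offers
omni.py -a https://am.example.net createsliver myslice myrspec.xml
omni.py -a https://am.example.net deletesliver myslice  # release the resources
```

The same createslice/createsliver pattern applies whether the aggregate serves OpenFlow, PlanetLab, or ProtoGENI resources, which is what makes a single tool usable across control frameworks.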

iGENI: A Distributed Network Research Infrastructure for the Global Environment for Network Innovation

Demo participants: Joe Mambretti, Maxine Brown, Thomas A. DeFanti
Affiliation: Northwestern University, International Center for Advanced Internet Research (iCAIR), University of Illinois at Chicago, Electronic Visualization Laboratory (EVL), California Institute for Telecommunications and Information Technology (Calit2)

The iGENI dynamic network provisioning demonstrations showcase capabilities for large-scale (national and international) multi-domain dynamic provisioning, including L1/L2 paths involving multiple sites, using specialized signaling and implementation techniques. Several iGENI demonstrations will showcase this dynamic provisioning. Highly Scalable Network Research TransCloud: this multi-organization TransCloud demonstration showcases the use of dynamic, large-scale (cross-continent) cloud and network infrastructure for highly distributed specialized capabilities among multiple sites connected by the iGENI network, including specialized search and digital media transcoding and streaming to multiple edge platforms, supported by scalable cloud computing and network provisioning.

GENI IMF: Integrated Measurement Framework and Tools for Cross Layer Experimentation

Demo participants: Rudra Dutta, George Rouskas, Ilia Baldine, Keren Bergman
Affiliation: NCSU, CS Dept, Columbia University, EE Dept, Renaissance Computing Institute (RENCI), Chapel Hill, NC, BEN: Breakable Experimental Network, New Internet Computing Lab (NICL), Open Resource Control Architecture (ORCA)

The IMF project's overall goal is to (a) enable measurements from a physical substrate, such as an optical substrate, to be passed to a measurement consumer inside a slice (a valuable capability, since the optical substrate's characteristics may matter to the experimenter in the slice even though the substrate itself is not directly observable by the experimenter), and (b) enable the automated (in-stack) consumption of measurement data (important for an experimenter who does not merely want to see the optical substrate measurement data after the experiment, but would like to experiment with reactive protocols that run inside the stack and react in real time to measurements).

In its first year (Spiral 2) the project demonstrated closed-loop communication between the substrate and the slice. Following consultation with the GPO, Year 2 (Spiral 3) goals focus on integrating the developed IMF capabilities for measurement, and for transfer of measurements from the substrate to the in-slice stack, with existing GENI instrumentation and measurement capabilities. Specifically, one goal was to integrate and install perfSONAR (pS) MPs and archives to collect performance measurement data from Polatis switches and Infinera DWDM platforms at 4 BEN sites; this capability was demonstrated successfully at GEC10. The goals for the rest of Spiral 3 are to integrate IMF capabilities as resources in ORCA, and possibly to align the slice-to-substrate communication with existing GENI I&M capabilities.

The current deliverable also leveraged the Spiral 3 goals of the ERM project, which included integrating previously developed IMF code into the newly designed universal ERM box. Part of the ERM project team forms the Columbia U part of the IMF project team.

Joint IMF and ERM projects poster presented at GEC10 IMF demo
GENI wiki for Integrated Measurement Framework project

Indigo
Demo participants: Guido Appenzeller, Kyle Forster
Affiliation: Big Switch Networks/Stanford

Indigo is a free, open-source-based OpenFlow implementation that runs on a number of hardware switches. It forwards packets at line rates of up to 10 Gb/s and fully supports the OpenFlow 1.0 standard. Indigo has both a web user interface and a CLI. For researchers who have access to the switch's SDK, the Indigo source code is available under an open-source license.

More information about Indigo and downloadable firmware images can be found on OpenFlow Hub.

Instrumentation Tools for a GENI Prototype (INSTOOLS)

Demo participants: James Griffioen, Zongming Fei, Hussamuddin Nasir
Affiliation: University of Kentucky, Laboratory for Advanced Networking

We will demonstrate the Kentucky Portal Service which enables users to visualize their running experiment and to easily access measurement data across aggregates. We will also demonstrate the ability to interact with archival services, and we will show how the user-interface can be used to configure the particular data to be captured and archived.

Kentucky INSTOOLS Portal and Archival System demo poster

GENI-fying and Federating Autonomous Kansei Wireless Sensor Networks (KanseiGenie)

Demo participants: Anish Arora, Rajiv Ramnath, Hongwei Zhang
Affiliation: Ohio State University

The KanseiGenie team will demonstrate two new features: Kansei Doctor and the layer 2/3 switch. Kansei Doctor is a service that periodically and automatically monitors the health of the testbed and provides both visual and textual outputs for diagnosis. The layer 2/3 switch enables GENI experimenters to choose a layer 2 or layer 3 connection between the two KanseiGenie sites, namely the Kansei testbed and the NetEye testbed.

GENI-fying and Federating Autonomous Kansei Wireless Sensor Networks (Kansei Open Source)

Demo participants: Anish Arora, Rajiv Ramnath, Hongwei Zhang
Affiliation: Ohio State University

The poster presents a study of the sustainability of open-source software, using the GENI Kansei project as a case study.

K-GENI and Virtualized Programmable Platform

Demo participants: James Williams, Myung Ki Shin, Dongkyun Kim
Affiliation: Indiana University, ETRI (Electronics and Telecommunications Research Institute), Supercomputing Center KISTI (Korea Institute of Science and Technology Information)

Our demo will consist of two parts. Part 1: ETRI will show its Future Internet testbed, which is composed of an NP-based programmable virtualization platform and a control framework for it. We will demonstrate a GUI-based slice/sliver control function and a virtualized network service implementing a simple ID/location split concept. We will also demonstrate a new smartphone app that lets mobile users detect and choose the best-suited network resources. Part 2: KISTI will introduce the current version of the K-GENI testbed and will perform an international demonstration between Korea and the USA (GENI) of federated network operations over K-GENI testbeds, focused on automatic metadata acquisition, near-real-time exchange of operational datasets, and local/global federated operation views of dvNOC-KR.

Leveraging and Abstracting Measurements with perfSONAR (LAMP I&M)

Demo participants: Martin Swany, Eric Boyd
Affiliation: Department of Computer and Information Sciences, University of Delaware, Internet2

We will describe and demonstrate the installation and operation of the LAMP I&M project using Periscope. LAMP uses the perfSONAR system to gather performance data from ProtoGENI-based experiments. We will show the Periscope interface for gathering and consuming the data.

Programmable Measurements over Texas-based Research Network: LEARN

Demo participants: Deniz Gurkan, Keren Bergman
Affiliation: University of Houston, Lonestar Education and Research Network (LEARN), Columbia University, EE Dept

Demonstration of the provisioning of VLANs across the members of Cluster D (the NLR backbone and the regional BEN network) to achieve end-to-end connectivity between BEN points and LEARN points. Cisco ME 3400 switches are used for VLAN translation, by which VLANs at different sites can be stitched together to use the same IP address space.

Maestro
Demo participants: Zheng Cai
Affiliation: Rice University

Mid-Atlantic Crossroads (MAX)

Demo participants: Tom Lehman, Xi Yang, Abdella Battou, Balu Pillai
Affiliation: Mid-Atlantic Crossroads GigaPOP, University of Maryland, College Park, University of Southern California - Information Sciences Institute Arlington (USC/ISI-Arl)

In this demonstration we will show how the MAX Aggregate Manager handles stitching between multiple aggregates and across external networks. This will include provisioning of resources across the MAX aggregate, the ProtoGENI aggregate, and the Internet2 ION network. In addition, we will describe a potential architecture for realizing a general GENI-wide multi-aggregate stitching capability. We will also describe how the current MAX Aggregate Manager implementation fits into this more general stitching architecture and discuss plans for next steps.

MAX GEC10 Demo Description
MAX GEC10 Demo Poster
MAX GEC10 Demonstration Web Page

Instrumentation and Measurement for GENI

Demo participants: Paul Barford, Mark Crovella, Joel Sommers
Affiliation: University of Wisconsin, Boston University, Colgate University

GIMS is a high-speed traffic capture system for GENI. Our system integrates capture functionality with ProtoGENI via modifications to the Reference Component Manager (RCM). System functionality has been expanded in a number of ways since our demo at GEC9, including real-time experiment statistics, enhanced capture options, and enhanced testing and debugging features.

GIMS Traffic Capture System GEC10 Poster

A Prototype of a Million Node GENI (MILNGENI)

Demo participants: Justin Cappos, Monzur Muhammad
Affiliation: University of Washington

For the Seattle project we presented three demos to GEC attendees: Seobbingo, the www Repy Interpreter, and HuXiang.

Seobbingo is a peer-to-peer backup system that can back up any local files on the local machine as well as on any peer nodes that are currently online. If a user wishes, they can download the backup file from the local machine or from any of the remote nodes, if available. The files that are backed up are encrypted with the user's public key; therefore, decrypting and opening a backed-up file requires the corresponding private key.

The www Repy Interpreter was implemented by Albert Rafetseder of the University of Vienna along with his colleagues and students. It allows you to deploy a Repy interpreter on any node. The application has a front-end HTTP web server where a user can enter any Repy code and execute it with any arguments.

HuXiang was developed by UW student Alan Loh; it allows a user to deploy a website in a peer-to-peer manner. The website is uploaded to all available nodes, and the IP addresses of these nodes are associated with a hostname. The hostname redirects a user who tries to access the website to one of the nodes on which it is deployed. If one node goes down, the website can still be accessed as long as at least one node hosting it is still up and running.
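
The failover behavior described above can be sketched in a few lines. Everything here is a hypothetical illustration, not HuXiang's actual code: a hostname resolves to several node IPs, and a client tries each in turn until one replica answers.

```python
# Hypothetical sketch of HuXiang-style failover: try each replica in turn.
def first_reachable(hosts, connect):
    """Return the first host for which connect(host) succeeds, else None.

    `connect` is injected so the policy can be exercised without a network.
    """
    for host in hosts:
        try:
            connect(host)
            return host
        except OSError:
            continue
    return None

# Simulation: the first two replicas are down, the third still serves the site.
def fake_connect(host):
    if host != "198.51.100.3":
        raise OSError("node down")

replicas = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]
print(first_reachable(replicas, fake_connect))  # -> 198.51.100.3
```

The site stays reachable as long as any replica in the list is up, which is exactly the availability property the demo highlights.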

netKarma: GENI Provenance Registry (NETKARMA)

Demo participants: Chris Small, Mehmet Aktas
Affiliation: Indiana University

The NetKarma demonstration will focus mainly on visualizing results from the NetKarma provenance store.

Demo Handout.pdf: Visualization of Provenance Captured by NetKarma

NOX-at-Home
Demo participants:
Affiliation:

The NOX-at-Home demo will show how OpenFlow and NOX can be used to create a simple interface for effective management of home networks. We will demonstrate several simple applications, such as implicitly identifying home users by analyzing their social network and email IDs, allowing the user to perform advanced traffic policing through a simple and intuitive web interface, and identifying network security threats, such as malware infections, in real time.

NUST OpenFlow

Demo participants:
Affiliation:

We will demonstrate a new OpenFlow deployment using Marvell's xCAT series of switches at the National University of Sciences & Technology (NUST), Pakistan.

OpenFlow Campus Trials at Clemson University (OFCLEM)

Demo participants: Kuang-Ching Wang, Jim Pepin, Dan Schmiedt
Affiliation: Clemson University

Our demo will consist of two parts: Part I: The OpenFlow campus trial at Clemson has deployed as a pilot a 5-node wireless mesh network with OpenFlow access points on campus light poles. A mobile experiment with OpenFlow-based mobility management will be demonstrated using this network to illustrate the potential uses of the network. Part II: To promote OpenFlow engagement with campus operation as well as teaching, undergraduate and graduate students from our team have worked with IT engineers to identify IT use cases and teaching/training labs. We will present a poster with these activities.

Clemson OpenFlow mesh network mobility demo poster
Clemson OpenFlow REU curriculum development and IT engagement demo poster

OpenFlow Campus Trials at Indiana University (OFIU)

Demo participants: Christopher Small, Matthew Davy, Dave Jent
Affiliation: Indiana University

Demonstration of data collection through the Measurement Manager software to allow for the monitoring and management of OpenFlow networks. We will also demonstrate our permanent VM migration demo and visualization.

Measurement Manager GEC10 poster

OFRewind (GLAB)

Demo participants:
Affiliation: TU Berlin/Deutsche Telekom Labs/Stanford University

We will demonstrate how OFRewind, a system capable of recording and replaying network events, can be used to debug OpenFlow controller and switch issues. OFRewind, which works in most OpenFlow-enabled networks, provides several knobs for debugging issues: 1) control over the topology (choice of devices and their ports), 2) timeline, 3) subset of traffic to be collected and then replayed. We expect OFRewind to play a major role in helping ongoing OpenFlow deployment projects resolve production problems.

OFRewind Poster

OpenFlow Campus Trials at University of Washington (OFUWA)

Demo participants: Arvind Krishnamurthy, Tom Anderson, Clare Donahue, Art Dong, Vjeko Brajkovic
Affiliation: University of Washington

Distributed QoS mechanisms within datacenter environments.

OpenFlow Campus Trials at University of Wisconsin (OFUWI: Mobile Offloading)

Demo participants: Aditya Akella, Perry Brunelli, Hideko Mills, Theo Benson, Mike Blodgett, Dale Carder, Aaron Gember
Affiliation: University of Wisconsin, Madison

Offloading resource-intensive mobile applications to nearby compute resources provides energy savings and latency benefits to mobile devices. A central controller enforces enterprise security policies by assigning specific applications to specific idle resources. The controller configures OpenFlow switches to create paths between mobile devices and the selected offloading destinations. This demonstration builds on demos at previous GECs by showing a full working system integrated with OpenFlow.

OpenFlow Campus Trials at University of Wisconsin (OFUWI: Network Coding)

Demo participants: Aditya Akella, Perry Brunelli, Hideko Mills, Theo Benson, Mike Blodgett, Dale Carder, Aaron Gember
Affiliation: University of Wisconsin, Madison

The GEC10 demo will focus on our progress on the Network Coding project. Our NetFPGA-based hardware/software co-design can now combine eight or more packets using network coding theory and algorithms; throughput optimization has also been addressed and will be discussed in the demo session. This GENI experiment and design is integrated into the NetFPGA card, which is provided to research nodes as a high-performance router. We will present a poster covering all our research activities.

OnTimeMeasure: Centralized and Distributed Measurement Orchestration Software (OnTimeMeasure)

Demo participants: Prasad Calyam, Paul Schopis
Affiliation: Ohio Supercomputer Center/OARnet

We will demonstrate a GENI experiment requesting/managing/querying measurements through OnTimeMeasure to address a network science and engineering research issue (e.g., resource allocation in a virtual desktop cloud). We will also demonstrate our latest integration of OnTimeMeasure measurement service with the Gush experimenter workflow tool on ProtoGENI and PlanetLab.

OnTimeMeasure GEC10 Demo Poster
OnTimeMeasure VDCloud Experiment Demo Videos
OnTimeMeasure-Gush Integration

Deploying a Vertically Integrated GENI “Island”: A Prototype GENI Control Plane (ORCA) for a Metro-Scale Optical Testbed (BEN) (ORCA-BEN)

Demo participants: Ilia Baldine, Yufeng Xin, Anirban Mandal
Affiliations: BEN: Breakable Experimental Network; Renaissance Computing Institute (RENCI), Chapel Hill, NC; Duke University, Durham, NC; Gwangju Institute of Science and Technology (GIST), Korea; Infinera Corporation, Sunnyvale, CA

  • Demonstration of interoperability with ProtoGENI, allowing ProtoGENI tools to create resource reservations on the ORCA substrate. The demonstration showcases a GENI-AM-compatible controller within ORCA as well as a converter from ProtoGENI RSpec v1/v2 to ORCA's NDL-OWL. Using these components, a user can request ORCA resources as if ORCA were a ProtoGENI AM.
  • Demonstration of topology embedding in multiple sites. This demonstration shows ORCA's automated experiment embedding policies, in which the user is unaware which particular cloud sites are selected to embed her experiment. ORCA automatically selects sites based on available resources. QoS-assured links are provisioned between slivers and between sites to support a given experiment topology.

ProtoGENI
Demo participants: Matt Strum, Rob Ricci, Leigh Stoller
Affiliations: University of Utah

We will demonstrate several new features of ProtoGENI and our primary GUI for it:

  1. Stitching VLANs between multiple aggregates (the Utah Emulab site, and the Emulab sites at the University of Kentucky and the University of Wisconsin)
  2. Using the GENI APIs to control components from two different control frameworks: ProtoGENI and PlanetLab
  3. A new authentication system for the GUI

Scalable, Extensible, and Safe Monitoring of GENI (S3MONITOR)

Demo participants: Sonia Fahmy, Puneet Sharma
Affiliation: Purdue University, HP Labs

We will be demonstrating the new version of S3Monitor. The demonstration will include installation and deployment on both ProtoGENI and PlanetLab Clusters.

A SCAFFOLD for GENI-based Distributed Services (SCAFFOLD)

Demo participants: Michael Freedman, Jennifer Rexford, Erik Nordstrom
Affiliation: Princeton University

We will demonstrate Serval, a new architecture and network stack built around a service access layer that sits between the network and transport layers. Serval is an evolution of the SCAFFOLD architecture that allows scalable and flexible service access on top of IP, making it compatible with the existing Internet. The service access layer allows applications to access diverse (and potentially replicated) services based on opaque service names instead of network addresses, enabling flexible service discovery while implementing the necessary signaling to maintain connectivity to services across events such as multi-homing, migration, and instance failures. Our demo of Serval will show a client running on a smartphone, discovering and accessing local services on laptops using ad-hoc network connectivity.

Poster: serval-poster-GEC10-demos.pdf

Exploiting Insecurity to Secure Software Update Systems (SecureUpdates)

Demo participants: Justin Cappos, Geremy Condra
Affiliation: University of Washington

This demo will show two example software update systems that are protected by TUF. We will show how previous systems were vulnerable to attack and then demonstrate how the systems are not vulnerable when TUF is used.

SPLIT Architecture MPLS/OpenFlow

Demo participants:
Affiliation: GPO

This demo shows a split-architecture-based control scheme for an operator's end-to-end MPLS network, developed within the "Split Architecture Carrier Grade Networks" (SPARC) EU 7th Framework project. In our proof-of-concept research work we use open-source components (such as NOX) and extend them to control an access/aggregation network with a centralized controller that also interworks with a legacy distributed MPLS core network (see attached Figure 1). We show (i) unicast video stream setup as a result of interworking with the core MPLS network, including the use of OSPF and LDP; (ii) multicast video streaming with dynamic subscription/unsubscription and optimal transmission tree recalculation; and (iii) LLDP-based, controller-driven restoration versus data-plane-managed protection.

The SPARC project activities also include looking at ways to make the centralized controller carrier grade by running a detailed evaluation of the scaling and resiliency of the Split Architecture.

Internet Scale Overlay Hosting (SPP)

Demo participants: Jon Turner, Patrick Crowley, John DeHart
Affiliation: Washington University, St. Louis

At GEC-10, we demonstrated the operation of the full five-node configuration of Supercharged PlanetLab Platforms (SPPs), deployed in Salt Lake City, Kansas City, Washington DC, Houston, and Atlanta. We demonstrated two slices running on the full five-node SPP configuration. The first is the product of a collaboration with Brighten Godfrey of the University of Illinois on an experimental network concept called Slick Packets. In the Slick Packets framework, packets carry an encoding of a set of alternate routes that can be used if the primary route fails at some intermediate router. We have implemented a code option for Slick Packets on the SPPs' Network Processor Engine, enabling multi-gigabit forwarding rates even for minimum-size packets. More details of the demonstration can be found in the attached presentation slides.

The second application demonstrated at GEC-10 was the Forest overlay network architecture, designed to support high quality virtual worlds. This demonstration used all five SPPs plus twenty-five PlanetLab nodes and included 100 "avatar-bots" that used the Forest network services to exchange periodic status reports. In this demonstration, the avatars used region-based multicasts to transmit their status information and to "tune in" to status reports from other nearby avatars. The demonstration also included a monitoring application and remote visualization that allowed demo participants to observe the avatars moving within the virtual world and to see how the clustering of avatars affected network traffic levels. More details can be found in the attached presentation slides.

TIED: Trial Integration Environment in DETER

Demo participants: John Wroclawski, Terry Benzel, Ted Faber
Affiliation: University of Southern California Information Sciences Institute, University of California, Berkeley

Demonstration of attribute-based access control (ABAC) tools for scalable access control.

Programmable Edge Node (UMLPEN)

Demo participants: Tim Ficarra, Eric Murray, Yan Luo
Affiliation: University of Massachusetts Lowell

At GEC10, we demonstrated the successful integration of our UMLPEN testbed with the ProtoGENI control framework. In our aggregate site, there are two PEN nodes with specialized network processor boards, which support virtual network interfaces and can be used to accelerate packet processing. These PEN nodes are "shareable" nodes running OpenVZ containers for accommodating user requests. Each OpenVZ container has access to virtual NICs instantiated by the network processor card. We demonstrated the creation of network experiments through ProtoGENI user interfaces.

OpenFlow experiments: we showed how to use our PEN testbed to create and conduct OpenFlow experiments. A user can request an OpenFlow switch, which is mapped to an OpenVZ container running on one of the PEN nodes. The controller and end nodes were also created using OpenVZ containers on another PEN node. Network connections were established, and ping/iperf tests showed the operation of the OpenFlow switch and controller.

UQAM OpenFlow

Demo participants:
Affiliation: GPO

In this demo, we show that implementing OpenFlow 1.1-controlled forwarding hardware on network processors (NPs) allows flexible forwarding behavior. Specifically, we propose a software design that automatically pushes different forwarding decisions "on the fly" into an NP-based hardware data plane on which we implemented OpenFlow 1.1. In particular, we show how long it takes for our proposed software OpenFlow Preprocessor Engine (OPE) to update, at the right time, the multiple memory structures used by the NP hardware to cache the forwarding decisions corresponding to different mixes of behaviors and applications.

Virtualizable Wireless Substrate

Demo participants:
Affiliation: WIMAX

We demonstrate a substrate for virtualizing the wireless resources in WiMAX networks. On the virtualized WiMAX prototype, we demonstrate a dynamic admission control framework for video traffic and show the benefits of virtualizing wireless resources in cellular networks.

C-VeT --UCLA Campus Vehicular Testbed: An Open Platform for Vehicular Networking (WIMAX UCLA)

Demo participants: Mario Gerla, Giovanni Pau
Affiliation: UCLA CS Dept

We propose a poster showing our experimental results on the UCLA WiMAX installation for the Campus Vehicular Testbed.

We envision one poster reporting on the UCLA experiments with WiMAX from vehicles and on the integration between the WiMAX and WiFi mesh networks. The poster will be accompanied by optional slides on our laptop.

A Programmable Facility for Experimentation with Wireless Heterogeneity and Wide-area Mobility (WIMAX UWI)

Demo participants: Suman Banerjee, Sateesh Addepalli
Affiliation: Wisconsin Wireless and NetworkinG Systems (WiNGS) laboratory, Cisco Systems

A live WiRover networking demonstration.

WiRover poster


Demo participants: GPO
Affiliation: GPO

This demo will showcase the OMF/OML integration with the WiMAX base station at the GPO. The demo will show client interaction with the WiMAX base station through OMF and provide detailed information for operators of WiMAX sites that are part of the GENI project. It will also showcase experimenter tools useful for researchers who would like to conduct future research on this platform.

Range and throughput measurement results at BBN site

OFRewind (TUBerlin/Deutsche Telekom Labs)

Demo participants: Dan Levin, Srini Seetharaman
Affiliation: TU Berlin, Deutsche Telekom Labs

We demonstrate a tool for debugging production networks enabled by a split forwarding architecture. The tool, OFRewind, implemented using OpenFlow, allows recording and replaying control-plane and data-plane traffic in order to recreate network scenarios in a controlled manner.

OFRewind Poster
