

GENI Engineering Conference 9

November 2, 2010 Washington, DC

Live demonstrations, posters, and presentations at GEC9 highlight results from Spiral 2 projects. See project descriptions, posters, and presentations here.

BGP Multiplexer

Demo Participants: Nick Feamster, Vytautas Valancius
BGP-Mux is a system that enables wide-area route control for networks inside GENI. BGP-Mux allows scarce resources, such as IP prefixes and AS numbers, to be shared among experimental virtual networks. This demo will showcase the automated management of such resources. Affiliation(s): Georgia Tech

Control, Measurement, and Resource Management Framework for Heterogeneous and Mobile Wireless Testbeds

Demo Participants: Marco Gruteser, Max Ott, Ivan Seskar, Joseph Milkjovic, Thierry Rakotoarivelo
Demonstration of a cognitive radio platform with a Matlab/Simulink prototyping framework. This demonstration will show the versatility of an agile cognitive radio platform by using spectrum sensing to identify an empty portion of the spectrum that is subsequently used for communication.
Affiliation(s): WINLAB/Rutgers University, NICTA

Federating CRON (Cyberinfrastructure of Reconfigurable Optical Networks) testbed with GENI testbeds

Demo Participants: Seung‐Jong Park, Cheng Cui
Federating a Cyberinfrastructure of Reconfigurable Optical Networks (CRON) testbed with ProtoGENI: CRON is a virtual high-speed optical networking and computing testbed (funded by NSF) that provides 10 Gbps emulated networking and computing environments. This demo will showcase the automated configuration of 10 Gbps networking environments and the federation between CRON and other ProtoGENI resources.
Affiliation(s): Louisiana State University

Davis Social Links

Demo Participants: S. Felix Wu, Peter Seigel, Chen-Nee Chuah
Davis Social Links will demonstrate a distributed DSL core providing social routing services. Our FAITH social network transformation service will also be demonstrated.
Affiliation(s): University of California

GMOC-GENI Meta Operations

Demo Participants: Jon-Paul Herron
GMOC GENI cluster and federation visualization
Affiliation(s): Indiana University, Global Research Network Operations Center

Great Plains Environment for Network Innovation

Demo Participants: James P.G. Sterbenz, Deep Medhi, Byrav Ramamurthy, Caterina Scoglio, Don Gruenbacher, Greg Monaco, Jeff Verrant, Cort Buffington, David Hutchison, Bernhard Plattner, Joseph B. Evans, Rick McMullen, Baek-Young Choi, Jim Archuleta, Andrew Scott
Demonstration of GpENI (Great Plains Environment for Network Innovation) programmable testbed for future Internet research. GpENI is an international testbed centered on a Midwest US regional optical network that is programmable at all layers of the protocol stack, using PlanetLab, VINI, and DCN, and interconnected to ProtoGENI in the US and G-Lab and ResumeNet in Europe. We will demonstrate the topology, functionality, and operations of GpENI.
Affiliation(s): The University of Kansas, Kansas State University, University of Missouri, University of Nebraska, Great Plains Network, Ciena Government Solutions, Qwest Government Services, KanREN, MOREnet, Lancaster University, ETH Zurich

GENI/Eucalyptus Federated Resource Allocation (GENICloud)

Demo Participants: Alvin Au Young, Andy Bavier, Jessica Blaine, James Kempf, Joe Mambretti, Rick McGeer, Alex Snoeren, Marco Yuen
GENICloud is a flexible, SFA-based cloud environment running at HP OpenCirrus, UCSD, and Northwestern. In this demo we will show the capabilities of the GENICloud environment through a multi-site cloud-based transcoding application, which will run on the show floor and be available to any participant with an Internet connection.
Affiliation(s): HP Labs, PlanetWorks, U.C. San Diego, U. of Victoria, U. of Illinois, Urbana-Champaign


GENI-VIOLIN

Demo Participants: ?
GENI-VIOLIN's goal is to build an in-network suspend and resume infrastructure for GENI experiments. This project is part of the GENI-alpha plenary session for GEC9. Current GENI experiments cannot be suspended and resumed, and often have to be restarted from the beginning if failures occur in the infrastructure. GENI-VIOLIN solves this problem by providing a "live snapshot" capability to GENI experiments running in GENI slices, for fault tolerance, debugging, and slice management. GENI-VIOLIN provides suspend/resume functionality with minimal disruption to application performance, while remaining completely transparent to the application and guest operating systems. During GEC9, we will show suspend/resume functionality implemented entirely in the network by exploiting OpenFlow; GENI-VIOLIN requires minimal hypervisor support from the end hosts. We will demonstrate suspend/resume across two geographically separated sites (the Utah and BBN Emulabs).
Affiliation(s): ?


Multipath TCP

Demo Participants: ?
We demonstrate the feasibility of multipath TCP at the application level with support at the network level. The network routes packets based on the destination address and the last bit of the destination port number: packets destined to odd and even port numbers are guaranteed to take disjoint paths. With this support, applications can interact using two TCP sockets that establish connections to an odd and an even port, thus increasing end-to-end throughput. The application layer is responsible for splitting and merging the traffic among the different TCP streams, while the individual TCP streams are responsible for reliable transport within each stream.
Affiliation(s): ?
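The application-level splitting and merging described above can be sketched as follows. This is an illustrative sketch, not the demo's actual code: the chunking scheme, the index/length framing format, and the use of two ephemeral loopback ports (standing in for a real odd/even port pair) are all assumptions made for the example.

```python
import socket
import threading

def split_chunks(data, n=2, chunk_size=4):
    """Round-robin the payload into n streams of (index, chunk) pairs."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    streams = [[] for _ in range(n)]
    for idx, chunk in enumerate(chunks):
        streams[idx % n].append((idx, chunk))
    return streams

def frame(stream):
    """Prefix each chunk with its index and length so the receiver can reorder."""
    return b"".join(
        idx.to_bytes(2, "big") + len(c).to_bytes(2, "big") + c
        for idx, c in stream
    )

def parse_frames(buf):
    """Recover (index, chunk) pairs from a framed byte stream."""
    pairs, i = [], 0
    while i < len(buf):
        idx = int.from_bytes(buf[i:i + 2], "big")
        ln = int.from_bytes(buf[i + 2:i + 4], "big")
        pairs.append((idx, buf[i + 4:i + 4 + ln]))
        i += 4 + ln
    return pairs

def serve_one(sock, out):
    """Accept a single connection and collect everything it sends."""
    conn, _ = sock.accept()
    buf = b""
    while True:
        part = conn.recv(4096)
        if not part:
            break
        buf += part
    conn.close()
    out.append(buf)

# In the demo, one destination port is odd and one is even so the network
# forwards the two TCP streams over disjoint paths; here we simply bind two
# loopback listeners on ephemeral ports to illustrate the application side.
listeners, results, threads = [], [], []
for _ in range(2):
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    s.listen(1)
    out = []
    t = threading.Thread(target=serve_one, args=(s, out))
    t.start()
    listeners.append(s)
    results.append(out)
    threads.append(t)

payload = b"multipath TCP: two parallel streams, one logical transfer"
for s, stream in zip(listeners, split_chunks(payload)):
    c = socket.create_connection(s.getsockname())
    c.sendall(frame(stream))   # each stream carries every other chunk
    c.close()

for t, s in zip(threads, listeners):
    t.join()
    s.close()

# Merge: sort all (index, chunk) pairs from both streams back into order.
pairs = [p for out in results for p in parse_frames(out[0])]
merged = b"".join(chunk for _, chunk in sorted(pairs))
assert merged == payload
```

The indices in the framing are what let the receiver merge the two streams deterministically; each TCP connection still provides in-order, reliable delivery within its own stream, exactly as the description requires.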

The Hive Mind: Applying a Distributed Security Sensor Network to GENI

Demo Participants: Sean Peisert
This demo addresses Milestone 1 of Year 2.
Affiliation(s): University of California, Davis, Battelle, CA Labs, CA Inc.

iGENI: A Distributed Network Research Infrastructure for the Global Environment for Network Innovation

Demo Participants: Joe Mambretti, Maxine Brown, Thomas A. DeFanti
The iGENI dynamic network provisioning demonstrations showcase capabilities for large-scale (national and international) multi-domain dynamic provisioning, including L1/L2 paths involving multiple sites, using specialized signaling and implementation techniques.

Dynamic Provisioning for the iGENI Cluster D Network: In partnership with RENCI (Renaissance Computing Institute), Duke University, the University of Massachusetts, and other Cluster D participants, the iGENI Consortium has implemented the Open Resource Control Architecture (ORCA) control framework at the StarLight international exchange facility, which supports a demonstration of flexible, programmable, heterogeneous networking among multiple national and international sites, including dynamic path provisioning. Several iGENI demonstrations will showcase this dynamic provisioning.

Highly Scalable Network Research TransCloud Prototype: This multi-organization TransCloud demonstration showcases the use of dynamic, large-scale cloud and network infrastructure for highly distributed, specialized capabilities among multiple sites connected by the iGENI network, including digital media transcoding and streaming to multiple edge platforms, supported by scalable cloud computing and network provisioning.
Affiliation(s): Northwestern University, University of Illinois at Chicago, California Institute for Telecommunications and Information Technology (Calit2)

K-GENI Establishment of Operational Linkage between GENI and ETRI/KISTI-Korea for International Federation

Demo Participants: James Williams
ETRI will demonstrate a virtualized programmable platform that supports CPU virtualization on network processors (NPs). This demo includes dynamic downloading of object code, configuration of CPU resources, sliver modification from ptp to ptmp, etc. KISTI will also perform an international demonstration of federated network operations over the K-GENI testbed between Korea and the USA, in a joint effort with Indiana University. The federated operations will use two Future Internet meta-operations systems, dvNOC in Korea and GMOC in the US.
Affiliation(s): Indiana University, ETRI (Electronics and Telecommunications Research Institute), KISTI (Korea Institute of Science and Technology Information)

Mid-Atlantic Network Facility for Research, Experimentation, and Development (MANFRED)

Demo Participants: Peter O'Neil, Tom Lehman
In this demonstration we will show how the PlanetLab SFI/SFA can be used to submit requests to the Mid-Atlantic Crossroads (MAX) GENI Aggregate Manager for the purpose of dynamically instantiating an experiment topology. The dynamically instantiated topology will be constructed from a diverse set of resources, including PlanetLab slices, dynamically provisioned network paths across the MAX network, and dynamically provisioned network paths across ProtoGENI. This demonstration will also include information on how the current MAX Aggregate Manager interfaces with the PlanetLab MyPLC controller, IDC-based dynamic networks (Internet2 ION, ESnet SDN, GEANT AutoBahn, USLHCNet, and various regional networks), and the ProtoGENI Aggregate Manager.
Affiliation(s): Mid-Atlantic Crossroads GigaPOP, University of Maryland, College Park, University of Southern California, Qwest Eckington Place NE, George Washington University

Instrumentation and Measurement for GENI

Demo Participants: Paul Barford, Mark Crovella, Joel Sommers
Affiliation(s): University of Wisconsin , Boston University , Colgate University

A Prototype of a Million Node GENI

Demo Participants: Thomas Anderson, Justin Cappos
This demo will describe common use cases for the Million Node GENI / Seattle software. We will also demonstrate student-created software, including alpha versions of a clicker program written in Seattle and an automatic deployment system.
Affiliation(s): University of Washington

netKarma: GENI Provenance Registry

Demo Participants: Beth Plale, Chris Small
NetKarma will demonstrate the capture of provenance information from a sample experiment running on GENI PlanetLab nodes, including the workflow of an experiment run in GUSH and the software used and distributed by RAVEN. NetKarma will also demonstrate the deployment of a permanent provenance store in the form of a Karma 3.1 database, which will make data from each experiment available either through direct queries or via a RabbitMQ pub/sub system.
Affiliation(s): Indiana University
