
GENI Evening Demos


The demo/networking event will be held in the College Avenue Commons (CAVC) building, room 351, on Arizona State University campus.


Tuesday, March 8, 5:30pm - 7:30pm

Session Leader

Manu Gosain
GENI Project Office


The evening demo session gives GENI experimenters and developers a chance to share their work in a live network environment. Demonstrations run for the entire length of the session, with teams on hand to answer questions and collaborate. This page lists requested demonstrations categorized in broad interest groups. You can download project posters and supplemental information from attachments listed at the bottom of this page.

Directions and Logistics

Please visit this page for attendee and presenter logistics information.


Note: Demo requests are still being submitted and approved. Content on this page is subject to change.


GENI Cinema

This demo shows how SDN can be used to implement a live video streaming service for streaming and switching between classroom lectures.

Video streaming over the Internet, be it static or live streaming, is rapidly increasing in popularity. Many video streaming services exist to serve a variety of needs, such as video conferencing, entertainment, education, and the broadcast of live events. These services rely heavily on the server application to adapt to increasing and decreasing demand for a particular video resource. Furthermore, they require the reallocation of resources and the restart of the stream when a client stops, starts, and/or switches to a different stream. SDN and specifically OpenFlow can be creatively used to reallocate some of these tasks to the network and link layers.

Our goal is to provide a scalable service for GENI using OpenFlow that supports the broadcast of live video streams from an arbitrary number of video-producers to an arbitrary number of video-consumers, where video-consumers can change “channels” without disrupting their existing stream and without affecting the load on a particular video stream source.

  • Ryan Izard, Clemson University
  • Kuang-Ching Wang, Clemson University
  • Qing Wang, Clemson University
  • Parmesh Ramanathan, University of Wisconsin-Madison
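The channel-switching idea above can be sketched in a few lines. This is a hypothetical toy model, not the GENI Cinema codebase: a controller rewrites OpenFlow-style output rules on a switch so a consumer changes channels without touching the video source.

```python
# Toy sketch (an assumption, not GENI Cinema's implementation): channel
# switching done in the network by rewriting flow rules, so the source
# keeps streaming unchanged while consumers come and go.

class Switch:
    """Minimal OpenFlow-like switch: maps a channel to its output ports."""
    def __init__(self):
        self.flows = {}          # channel_id -> set of consumer ports

    def install_output(self, channel, port):
        self.flows.setdefault(channel, set()).add(port)

    def remove_output(self, channel, port):
        self.flows.get(channel, set()).discard(port)

class Controller:
    """Rewrites switch rules when a consumer changes channels."""
    def __init__(self, switch):
        self.switch = switch
        self.tuned = {}          # consumer port -> current channel

    def tune(self, port, channel):
        old = self.tuned.get(port)
        if old is not None:
            self.switch.remove_output(old, port)   # stop copying old stream
        self.switch.install_output(channel, port)  # start copying new one
        self.tuned[port] = channel

sw = Switch()
ctl = Controller(sw)
ctl.tune(port=1, channel="lecture-A")
ctl.tune(port=2, channel="lecture-A")
ctl.tune(port=1, channel="lecture-B")   # consumer 1 switches channels
```

The key property is that switching consumer 1 only edits that consumer's output rules; the "lecture-A" source still serves its remaining consumer without a restart.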

Reproducing Networking Research on GENI

Students in a network design and analysis course at NYU are giving reproducible research a try (in a similar vein as CS244 at Stanford). As part of this course, students implement and execute an experiment on GENI to attempt to reproduce classic and recent published results. Subjects include: active queue management, buffer sizing in routers, TCP, network traffic models, datacenter networks, queuing and congestion control under DoS attacks, application layer protocols, and wireless. Students write up their work in sufficient detail for others to reproduce their reproduction on GENI, and may choose to post it on a public course blog so that other students and researchers can reuse and extend their work.

This poster and demo will be of interest to educators (who may want to try something similar in their own courses), researchers (who may want to build on some of these reproducible experiments), and GENI developers (who will be interested in how well GENI is performing as a platform for reproducible research).


Virtual Computer Networks Lab

This demo shows assignments that are designed for the use of GENI testbeds in the classroom.

This is a demo for the Virtual Computer Networks Lab project. We will present a tool for automating large-scale experiments using geni-lib.
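The automation idea is generating experiment descriptions programmatically rather than by hand. The sketch below uses only the standard library and illustrative element names rather than geni-lib or the exact RSpec schema: it builds a request for a chain topology of arbitrary size.

```python
# Stdlib-only sketch of programmatic topology generation (illustrative
# element/attribute names, not the exact GENI RSpec schema or geni-lib API).
import xml.etree.ElementTree as ET

def build_request(num_nodes):
    """Generate a request document for a chain of num_nodes machines."""
    rspec = ET.Element("rspec", type="request")
    for i in range(num_nodes):
        node = ET.SubElement(rspec, "node", client_id=f"node-{i}")
        ET.SubElement(node, "sliver_type", name="raw-pc")
    # connect consecutive nodes with point-to-point links
    for i in range(num_nodes - 1):
        link = ET.SubElement(rspec, "link", client_id=f"link-{i}")
        ET.SubElement(link, "interface_ref", client_id=f"node-{i}")
        ET.SubElement(link, "interface_ref", client_id=f"node-{i+1}")
    return rspec

req = build_request(20)
nodes = req.findall("node")
links = req.findall("link")
```

Scaling the experiment to hundreds of nodes is then a one-argument change instead of hand-editing XML.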


Teaching using GENI and iPython notebooks

A demonstration of the full feature set provided by VTS across the GENI testbed.

We will show a self-contained VM environment for interacting with GENI using geni-lib completely within a browser interface, with a local Jupyter (iPython) notebook host, and browser-based terminal.

Notebooks can be saved as teaching references, and replayed by any user in their own environment to reproduce a complete session (reserving resources, inspecting them, etc.). Example notebooks will be shown for basic networking labs using VTS, with topology visualization.


Experimenter Resources

Steroid OpenFlow Service

With the recent rise in cloud computing, applications are routinely accessing and interacting with data on remote resources. As data sizes become increasingly large, often combined with their locations being far from the applications, the well-known impact of lower TCP throughput over large delay-bandwidth product paths becomes more significant to these applications. While myriad solutions exist to alleviate the problem, they require specialized software at both the application host and the remote data server, making it hard to scale up to a large range of applications and execution environments. A software-defined networking based solution called Steroid OpenFlow Service (SOS) is proposed as a network service that transparently increases the throughput of data transfers across large networks. SOS scales up in an OpenFlow-based cloud environment to provide increased network throughput for multiple applications simultaneously. A cloud-based approach is particularly beneficial to applications in environments without access to high performance networks.

This demo shows the scalability of SOS and how it can be deployed within GENI to provide significantly increased throughput for long distance data transfers over TCP. A similar demonstration will also be shown on a high-performance 10Gbps network.
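A quick back-of-the-envelope calculation shows the problem SOS targets (a sketch of the delay-bandwidth bound, not SOS code): with a fixed TCP window, the sender can deliver at most one window per round trip, so throughput falls off linearly as round-trip time grows.

```python
# Illustrative bound only: one TCP window per round trip, ignoring loss,
# slow start, and window scaling.

def max_tcp_throughput_mbps(window_bytes, rtt_s):
    """Upper bound on single-flow TCP throughput in Mbps."""
    return window_bytes * 8 / rtt_s / 1e6

lan = max_tcp_throughput_mbps(64 * 1024, 0.001)   # 64 KB window, 1 ms RTT
wan = max_tcp_throughput_mbps(64 * 1024, 0.100)   # same window, 100 ms RTT
```

With a 64 KB window, the same flow that can exceed 500 Mbps on a 1 ms LAN path is capped near 5 Mbps on a 100 ms wide-area path, which is why splitting the transfer at in-network agents helps.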

GENI Desktop

This demo shows a unified interface for accessing GENI resources and managing GENI experiments.

The GENI Desktop provides a unified interface and environment for experimenters to create, control, manage, interact with and measure the performance of GENI slices. We will demonstrate the newly implemented JACKS view of the slice and the session concept used to control the user's interaction with the GUI of the GENI Desktop.



CloudLab

This demo shows CloudLab, a facility for research on the future of cloud computing.

This demo will showcase several new features of CloudLab, including:

  • Status reports (health and available resources) for clusters
  • Realtime status notifications for startup commands, image creation, and other events
  • Persistent storage
  • New organization for profiles


Configuration Management with Chef inside CloudLab Experiments

Users of CloudLab (and other GENI-derived testbeds) commonly use image snapshots to preserve their working environments and to share them with other users. While snapshots re-create software environments byte-for-byte, they are not conducive to composing multiple environments, nor are they good for experiments that must run across many versions of their environments with subtle differences. This demo will present our design and implementation of an alternative experiment management system. This system leverages instances of the Chef configuration management system, and can be used “on top of” existing testbeds. Chef helps us address customization and composability issues encountered when developing multi-component and multi-node software stacks capable of running on multiple hardware platforms. We will demonstrate how our prototype allows orchestrating components of complex software environments in CloudLab experiments. The experiment that we use as motivation and example in this demo is one that facilitates benchmarking and energy efficiency analysis of the CloudLab hardware.
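The composability argument above can be made concrete with a toy model (names are illustrative; the demo itself uses Chef recipes and cookbooks): environments expressed as layered configuration can be merged and overridden, while byte-for-byte disk snapshots cannot.

```python
# Toy sketch of why configuration layers compose where snapshots do not
# (hypothetical keys/values; the actual demo uses Chef).

def compose(*environments):
    """Merge per-node configuration layers; later layers override earlier."""
    merged = {}
    for env in environments:
        for key, value in env.items():
            merged[key] = value
    return merged

base   = {"os": "ubuntu-14.04", "ntp": "enabled"}
bench  = {"pkg": "linpack", "governor": "performance"}
energy = {"sensors": "rapl", "governor": "powersave"}   # overrides bench

stack = compose(base, bench, energy)
```

Running the benchmarking experiment with or without the energy-analysis layer is then a matter of which layers are composed, rather than maintaining one snapshot per combination.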


Dynamic Sharing of GENI AM Resources

Resource requirements may change over the lifetime of an experiment or job. The experimenter may think, "What would happen if I scaled up the number of compute nodes?", and want to add more temporarily to test a theory --- without recreating other experiment infrastructure. Compute- or I/O-intensive jobs may be able to opportunistically use additional resources to increase throughput. Moreover, cluster or testbed resource requirements change as user workloads come and go. Depending on cluster design, location, and resource requirements, it may be useful for clusters to share resources, seeking temporary "loans" from under-utilized clusters to increase job throughput at times of high load.

We have developed new ProtoGENI API extensions, server-side (AM) management and policy code, and client tools to manage experiments whose resource allocations grow and shrink dynamically over their lifetimes. These features support not only dynamic experiments within an AM, but also allow the AM's resources to be used temporarily by other, external clusters. To arbitrate and facilitate sharing between clusters and experiments with different resources, priorities, guarantees, and users, our dynamic experiment management software employs a mix of flexible policy, soft and hard resource guarantees, and a general, cooperative encoding of resource values among cluster management and dynamic experiment clients to promote eager sharing of unused resources.

Our demo will showcase both dynamic experiments and inter-cluster resource sharing at several CloudLab clusters. OpenStack cloud experiments at multiple CloudLab clusters will add nodes when they are available, and give up nodes when the local cluster is under pressure. One CloudLab cluster will share its resources with a Condor pool, and the CloudLab share of the Condor pool will grow and shrink. We also hope to have another CloudLab cluster integrated with an HPC cluster running Slurm, with the HPC cluster requesting CloudLab nodes based on its workload demands, or releasing them when CloudLab is under resource pressure (and thus requests them back). We will be able to adjust policy knobs to induce dynamic change and show how the clusters and experiments adapt. We plan for demo participants to see this resource dynamism and a snapshot of the management software's decisions in a "dashboard" web page.
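The loan mechanic described above can be sketched minimally (class and method names are assumptions, not the ProtoGENI API extensions): an under-utilized cluster lends free nodes and reclaims them when local demand rises.

```python
# Hypothetical sketch of inter-cluster node loans (not the actual ProtoGENI
# API): grants are bounded by free capacity, and loans are revocable.

class Cluster:
    def __init__(self, name, nodes):
        self.name = name
        self.free = nodes        # nodes available locally
        self.loaned = 0          # nodes currently lent out

    def request_loan(self, n):
        """Grant up to n nodes from free capacity (a soft guarantee)."""
        granted = min(n, self.free)
        self.free -= granted
        self.loaned += granted
        return granted

    def reclaim(self):
        """Local demand rose: call all outstanding loans back."""
        returned, self.loaned = self.loaned, 0
        self.free += returned
        return returned

lender = Cluster("cloudlab-site", nodes=10)
got = lender.request_loan(4)     # e.g., a Condor pool borrows 4 nodes
back = lender.reclaim()          # local pressure: the loan is recalled
```

The real system layers policy, priorities, and resource valuation on top of this basic grant/reclaim cycle.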


Dynamic Slices in ExoGENI: Modifying Slice Topology On Demand

This demo shows new ExoGENI features including dynamic slice modification. Functionality includes adding/removing compute nodes, storage nodes, and network links.


Building an End-to-end Slice through Slice Exchange between Virtualized WiFi, VNode, and ProtoGENI

We introduce an SDX technology for dynamically building an end-to-end slice across multiple virtualized networks, including virtualized wireless access. We demonstrate building a federated slice between virtualized WiFi, VNode, and ProtoGENI based on the enhanced Slice Exchange Point (SEP) framework over the JGN-X and GENI inter-connected testbeds.


Network Troubleshooting with SDN Traceroute Protocol (SDNTrace)

The demo shows a proposed protocol to trace flow paths on a given network composed of SDN network devices.

This demo shows a network protocol for tracing L2 flow paths using a network function. A probe packet is created to trace a flow path with the SDNTrace protocol. Each device on the path forwards the probe to the Network Function (NF). The NF constructs and sends a response packet back to the originator and, at the same time, sends the original probe back to the device to forward to the next hop. The process continues until the probe packet reaches the destination and all the trace information has been collected at the originator.
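The hop-by-hop report-then-forward loop can be simulated in a few lines (a toy model of the behavior described above, not the SDNTrace implementation):

```python
# Toy simulation of the trace loop (illustrative, not the real protocol):
# every switch on the path reports to the originator before the probe
# moves on, so the originator accumulates the full path in order.

def trace(path, dst):
    """path: ordered list of switch names the flow traverses."""
    reports = []                       # responses collected at the originator
    for hop, switch in enumerate(path):
        reports.append((hop, switch))  # switch replies to the originator...
        # ...then forwards the original probe toward the next hop
    reports.append((len(path), dst))   # probe finally reaches the destination
    return reports

collected = trace(["s1", "s2", "s3"], "host-B")
```

A path change (e.g., a failed link rerouting the flow) would show up as a different ordered report list on the next trace.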




Paradrop -- an educational platform to teach network and wireless programming

We will demo the Paradrop Platform, a software platform that allows developers to launch applications onto specialized Access Points in the home. This makes it possible to introduce unique control and high-quality value-added features into services the end user chooses to run in their home, including applications related to the Internet of Things, high-definition media content distribution, and others. For this demo, we will showcase the Platform's ability to dynamically launch and control virtual machines running within the Access Point for a few specific services.


SDX and Federation

GENI Enabled Software Defined Exchange (SDX)

This demonstration will show a very early prototype for a GENI enabled Software Defined Exchange (SDX) which utilizes Network Service Interface (NSI) for network element control, and includes public cloud resources from Amazon Web Services (AWS) as part of GENI Stitched topologies. The work demonstrated here is driven by a vision for future R&E cyberinfrastructure that consists of an ecosystem of ad hoc and dynamically federated Software Defined Exchanges (SDXs) and Software Defined ScienceDMZs services. GENI technologies are leveraged in the form of the MAX Aggregate Manager which utilizes the GENI Rack Aggregate Manager (GRAM) software for GENI Federation functions. This MAX/GRAM AM utilizes the Open Grid Forum (OGF) NSI protocol to provision services across the network elements within the Washington International Exchange (WIX) located in McLean, Virginia and the MAX Regional Network.



Earth Observation Depot Network (EODN)

The Earth Observation Depot Network (EODN) is a distributed storage service that capitalizes on resources from the NSF-funded GENI and Data Logistics Toolkit (DLT) projects. The Intelligent Data Movement Service (IDMS), a deployment of the DLT on the NSF-funded GENI cloud infrastructure, realizes EODN to enable open access, reduced latency, and fast downloads of valuable Earth science information collected from satellites and other sensors. Beyond basic storage capacity, the IDMS-EODN system includes mechanisms for optimizing data distribution throughout the depot network while also taking into account the desired locality of user data. Accelerating access enables better synchronization of disparate imagery sets and facilitates new meteorological and atmospheric research applications.


GpENI, KanREN, US Ignite Future Internet Testbed & Experiments

Our demo is an interactive visualization system that shows how a given SDN-enabled network behaves in the presence of area-based challenges. Our visualization system consists of a Google Map front-end hosted on a server that also enables event-based communication between the front-end and the challenged network. The challenges are determined by the user using a real-time editable polygon. The visualization system shows real-time performance parameters from physical experiments defined by the user and carried out using our KanREN OpenFlow testbed. When the challenge is applied on the map, the nodes in the polygon are removed from the underlying OpenFlow network topology and appropriate measures are taken to ensure minimal disruption. As performance metrics, we present the real-time packet delivery ratio as well as throughput for the TCP and UDP based application traffic used in the experiments.

Furthermore, we have more recently made extensive enhancements to the Google Map interface, adding controls that give the user detailed control over the state of the experiments and their varied configuration. We are also working on adding support for Mininet-based experiments that would allow the user to run OpenFlow-based experiments on various topologies that are currently part of KU-TopView, a database of topology data from real physical and logical networks.


GENI Experiment Engine/Ignite Collaborative Visualizer

The GENI Experiment Engine is a rapid-deployment infrastructure-as-a-service deployed across the GENI infrastructures. In this demo, we will show the allocation of a GEE Slicelet, and the deployment of a full-featured app across the infrastructure. We also intend to show the GENI Experiment Engine spanning multiple infrastructures, including Chameleon and possibly SAVI.


SDX at SoX: Software Defined Exchange in the Regional Network

The SDX provides a promising opportunity to change the way network operators come together to provide new services and richer implementation of policy. This demo provides an update on the GENI SDX project in the SoX regional network.



Workflow Performance experiments for HPC queuing systems over Hybrid Cloud technologies for 'Simulation-as-a-Service'

Advanced manufacturing today requires diverse computation infrastructure for data processing. Our 'Simulation-as-a-Service' app currently runs compute jobs on OSU HPC resources. However, there is a need to access other computation resources as they become available. We provide users access to a variety of clouds, such as Amazon and GENI, as HPC compute clusters through the use of HPC queuing systems. The cloud infrastructure is deployed on demand based on user requirements that are abstracted from a web site and converted to RSpecs integrating customized scientific software; these RSpecs are stored in catalogs for future use whenever similar requirements arise.
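The requirements-to-catalog workflow can be sketched as a simple cache (function and key names here are assumptions for illustration, not the project's actual API): requirements are converted to an RSpec once, stored, and reused on later requests with the same requirements.

```python
# Hypothetical sketch of the catalog reuse described above: identical user
# requirements map to one stored RSpec instead of regenerating it each time.

catalog = {}   # frozen requirements -> generated RSpec string

def requirements_key(reqs):
    """Canonical, hashable form of a requirements dict."""
    return tuple(sorted(reqs.items()))

def get_rspec(reqs):
    key = requirements_key(reqs)
    if key not in catalog:   # generate only on a catalog miss
        catalog[key] = (
            f"<rspec nodes='{reqs['nodes']}' image='{reqs['image']}'/>"
        )
    return catalog[key]

a = get_rspec({"nodes": 4, "image": "hpc-openfoam"})
b = get_rspec({"nodes": 4, "image": "hpc-openfoam"})   # served from catalog
```

The second request returns the stored description without regenerating or re-abstracting anything from the web front-end.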


A Cyber Physical testbed for Advanced Manufacturing

This demonstration will be a milestone in the area of Digital Manufacturing and involves showcasing a GENI-based cyber physical framework for advanced manufacturing. This Next Internet based framework will enable globally distributed software and manufacturing resources to be accessed from different locations to accomplish a complex set of life cycle activities including design analysis, assembly planning, and simulation. The advent of the Next Internet holds the promise of ushering in a new era in Information Centric engineering and digital manufacturing activities. The focus will be on the emerging domain of micro devices assembly, which involves the assembly of micron-sized parts using automated micro assembly work cells.



Getting to know RPKI: A GENI-based Tutorial

The Resource Public Key Infrastructure (RPKI) is an important tool for improving the robustness of the Internet by making BGP more secure. This project provides a full RPKI deployment testbed so that network operators can gain experience configuring and operating RPKI in preparation for deployment in their network.


Attachments (12)