[[PageOutline]]

= [wiki:GEC24Agenda#ConferenceAgenda GENI] Evening Demos =

== Location ==

The demo/networking event will be held in the College Avenue Commons (CAVC) building, room 351, on the Arizona State University campus.

== Schedule ==

Tuesday, March 8, 5:30pm - 7:30pm

== Session Leader ==

Manu Gosain, GENI Project Office

== Details ==

The evening demo session gives GENI experimenters and developers a chance to share their work in a live network environment. Demonstrations run for the entire length of the session, with teams on hand to answer questions and collaborate. This page lists the requested demonstrations, categorized into broad interest groups. You can download project posters and supplemental information from the attachments listed at the bottom of this page.

== Directions and Logistics ==

Please visit [wiki:GEC24Agenda/EveningDemoSession/PresenterInfo this page] for attendee and presenter logistics information.

== Projects ==

=== Education ===

==== GENI Cinema ====

''This demo shows how SDN can be used to implement a live video streaming service for streaming and switching between classroom lectures.''

Video streaming over the Internet, whether static or live, is rapidly increasing in popularity. Many video streaming services exist to serve a variety of needs, such as video conferencing, entertainment, education, and the broadcast of live events. These services rely heavily on the server application to adapt to increasing and decreasing demand for a particular video resource. Furthermore, they require the reallocation of resources and the restart of the stream when a client stops, starts, and/or switches to a different stream. SDN, and specifically !OpenFlow, can be used creatively to reallocate some of these tasks to the network and link layers. Our goal is to provide a scalable service for GENI using !OpenFlow that supports the broadcast of live video streams from an arbitrary number of video producers to an arbitrary number of video consumers, where video consumers can change “channels” without disrupting their existing stream and without affecting the load on a particular video stream source.

Participants:
 * Ryan Izard, rizard@clemson.edu, Clemson University
 * Kuang-Ching Wang, kwang@clemson.edu, Clemson University
 * Qing Wang, qw@g.clemson.edu, Clemson University
 * Parmesh Ramanathan, parmesh@ece.wisc.edu, University of Wisconsin-Madison
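
To make the channel-switch mechanism concrete, here is a minimal sketch (not the GENI Cinema code) of retargeting one consumer's UDP video flow at an Open vSwitch bridge using stock `ovs-ofctl`; the bridge name, addresses, and port numbers are invented:

{{{#!python
# Minimal sketch of the channel-switch idea, NOT the GENI Cinema code:
# an out-of-band script retargets one consumer's UDP video flow at an
# Open vSwitch bridge. Bridge, addresses, and ports are placeholders.
import subprocess

BRIDGE = "br0"        # hypothetical OVS bridge on the video path
VIDEO_PORT = 5004     # assumed UDP port shared by all channel sources

def switch_channel(old_src, new_src, consumer_ip, consumer_of_port):
    """Move one consumer from channel old_src to channel new_src."""
    # Remove the rule feeding the old channel to this consumer.
    subprocess.check_call(["ovs-ofctl", "del-flows", BRIDGE,
        "udp,nw_src=%s,tp_dst=%d" % (old_src, VIDEO_PORT)])
    # Rewrite the new channel's packets toward the consumer; the client
    # keeps receiving on the same socket while the upstream source changes.
    subprocess.check_call(["ovs-ofctl", "add-flow", BRIDGE,
        "udp,nw_src=%s,tp_dst=%d,actions=mod_nw_dst:%s,output:%d"
        % (new_src, VIDEO_PORT, consumer_ip, consumer_of_port)])

# Example: switch consumer 10.0.0.42 (switch port 3) from channel A to B.
switch_channel("10.0.1.1", "10.0.1.2", "10.0.0.42", 3)
}}}

A production service would install such rules from an !OpenFlow controller and match per consumer, but the effect is the same: the channel changes inside the network while the client's socket stays up.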

==== Reproducing Networking Research on GENI ====

Students in a network design and analysis course at NYU are giving reproducible research a try (in a similar vein as CS244 at Stanford: http://web.stanford.edu/class/cs244/pa3.html). As part of this course, students implement and execute an experiment on GENI that attempts to reproduce classic and recent published results. Subjects include: active queue management, buffer sizing in routers, TCP, network traffic models, datacenter networks, queuing and congestion control under DoS attacks, application layer protocols, and wireless. Students write up their work in sufficient detail for others to reproduce their reproduction on GENI, and may choose to post it on a public course blog so that other students and researchers can reuse and extend their work.

This poster and demo will be of interest to educators (who may want to try something similar in their own courses), to researchers (who may want to build on some of these reproducible experiments), and to GENI developers (who will be interested in how well GENI performs as a platform for reproducible research).

Participants:
 * Fraida Fund, ffund@nyu.edu, NYU

==== Virtual Computer Networks Lab ====

''This demo shows assignments that are designed for the use of GENI testbeds in the classroom.''

This is a demo of the Virtual Computer Networks Lab project. We will present a tool for automating large-scale experiments using geni-lib.

Participants:
 * Mike Zink, zink@cs.umass.edu, University of Massachusetts
 * Divyashri Bhat, dbhat@cs.umass.edu, University of Massachusetts
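
For a flavor of what such automation looks like, here is a minimal geni-lib sketch (our reading of typical geni-lib usage; the node count, names, and output file are illustrative) that generates an N-node LAN request:

{{{#!python
# Hedged geni-lib sketch of scripted topology generation: build an
# N-node LAN request and write it out as an RSpec for submission.
import geni.rspec.pg as PG

request = PG.Request()
lan = PG.LAN("lan0")
for i in range(10):                       # scale up for larger experiments
    vm = PG.XenVM("node%d" % i)           # one Xen VM per node
    lan.addInterface(vm.addInterface("if0"))
    request.addResource(vm)
request.addResource(lan)
request.writeXML("lan-10.rspec")          # ready to submit to an aggregate
}}}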

==== Teaching using GENI and iPython notebooks ====

''A demonstration of the features provided by VTS across the GENI testbed.''

We will show a self-contained VM environment for interacting with GENI using geni-lib entirely within a browser interface, with a local Jupyter (iPython) notebook host and a browser-based terminal. Notebooks can be saved as teaching references and replayed by any user in their own environment to reproduce a complete session (reserving resources, inspecting them, etc.). Example notebooks will be shown for basic networking labs using VTS, with topology visualization.

Participants:
 * Nick Bastin, nick.bastin@gmail.com, Barnstormer Softworks

=== Experimenter Resources ===

==== Steroid OpenFlow Service ====

With the recent rise of cloud computing, applications routinely access and interact with data on remote resources. As data sizes grow, often combined with data locations far from the applications, the well-known impact of lower TCP throughput over large delay-bandwidth-product paths becomes more significant to these applications. While myriad solutions exist to alleviate the problem, they require specialized software at both the application host and the remote data server, making it hard to scale up to a large range of applications and execution environments.

Steroid OpenFlow Service (SOS) is a proposed software-defined-networking-based solution: a network service that transparently increases the throughput of data transfers across large networks. SOS scales up in an !OpenFlow-based cloud environment to provide increased network throughput for multiple applications simultaneously. A cloud-based approach is particularly beneficial to applications in environments without access to high-performance networks. This demo shows the scalability of SOS and how it can be deployed within GENI to provide significantly increased throughput for long-distance data transfers over TCP. A similar demonstration will also be shown on a high-performance 10Gbps network.

Participants:
 * Ryan Izard, rizard@clemson.edu, Clemson University
 * cbarrin@clemson.edu, Clemson University
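
The core idea is to terminate the application's TCP connection near each end and stripe the data over many parallel WAN connections between agents. The sketch below illustrates only that striping idea, with an invented chunk framing and a placeholder agent address; it is not the SOS agent itself:

{{{#!python
# Sketch of the parallel-TCP idea behind SOS: one logical byte stream
# is striped over N sockets, each chunk tagged with a sequence number
# so the receiving agent can reassemble it in order.
import socket
import struct
import sys

AGENT = ("203.0.113.10", 9000)   # hypothetical remote agent
N_CONNS = 8                      # parallel TCP connections across the WAN
CHUNK = 64 * 1024

def stripe(infile):
    conns = [socket.create_connection(AGENT) for _ in range(N_CONNS)]
    seq = 0
    while True:
        data = infile.read(CHUNK)
        if not data:
            break
        # 8-byte sequence number + 4-byte length header per chunk.
        conns[seq % N_CONNS].sendall(struct.pack("!QI", seq, len(data)) + data)
        seq += 1
    for c in conns:
        c.close()

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        stripe(f)
}}}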

==== GENI Desktop ====

''This demo shows a unified interface for accessing GENI resources and managing GENI experiments.''

The GENI Desktop provides a unified interface and environment for experimenters to create, control, manage, interact with, and measure the performance of GENI slices. We will demonstrate the newly implemented JACKS view of the slice and the session concept used to control the user's interaction with the GENI Desktop GUI.

Participants:
 * Jim Griffioen, griff@netlab.uky.edu, University of Kentucky
 * Zongming Fei, fei@netlab.uky.edu, University of Kentucky
 * Hussamuddin Nasir, nasir@netlab.uky.edu, University of Kentucky

==== !CloudLab ====

''This demo shows !CloudLab, a facility for research on the future of cloud computing.''

This demo will showcase several new features of !CloudLab, including:
 * Status reports (health and available resources) for clusters
 * Real-time status notifications for startup commands, image creation, and other events
 * Persistent storage
 * A new organization for profiles

Participants:
 * Rob Ricci, ricci@cs.utah.edu, University of Utah

==== Configuration Management with Chef inside !CloudLab Experiments ====

Users of !CloudLab (and other GENI-derived testbeds) commonly use image snapshots to preserve their working environments and to share them with other users. While snapshots re-create software environments byte-for-byte, they are not conducive to composing multiple environments, nor are they well suited to experiments that must run across many versions of their environments with subtle differences.

This demo will present our design and implementation of an alternative experiment management system. The system leverages instances of the Chef configuration management system and can be used “on top of” existing testbeds. Chef helps us address the customization and composability issues encountered when developing multi-component, multi-node software stacks capable of running on multiple hardware platforms. We will demonstrate how our prototype allows orchestrating components of complex software environments in !CloudLab experiments. The experiment we use as motivation and example in this demo facilitates benchmarking and energy-efficiency analysis of !CloudLab hardware.

Participants:
 * Rob Ricci, ricci@cs.utah.edu, University of Utah
 * Dmitry Duplyakin, dmitry.duplyakin@colorado.edu, Univ of Colorado
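
As a rough illustration of running Chef “on top of” a testbed with stock Chef tooling (not the authors' system; hostnames, user, key, and recipes are placeholders):

{{{#!python
# Hedged sketch: bootstrap each experiment node with a role-specific
# Chef run list via the standard `knife bootstrap` CLI.
import subprocess

NODES = {  # hypothetical CloudLab nodes -> Chef run lists
    "node0.myexp.utah.cloudlab.us": "recipe[hadoop::master]",
    "node1.myexp.utah.cloudlab.us": "recipe[hadoop::worker]",
}

for host, run_list in NODES.items():
    subprocess.check_call([
        "knife", "bootstrap", host,
        "-x", "myuser",            # SSH user on the testbed node
        "-i", "~/.ssh/id_rsa",     # SSH identity file
        "--sudo",
        "--run-list", run_list,
    ])
}}}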

==== Dynamic Sharing of GENI AM Resources ====

Resource requirements may change over the lifetime of an experiment or job. The experimenter may think, "What would happen if I scaled up the number of compute nodes?", and want to add more temporarily to test a theory, without recreating the rest of the experiment infrastructure. Compute- or I/O-intensive jobs may be able to opportunistically use additional resources to increase throughput. Moreover, cluster or testbed resource requirements change as user workloads come and go. Depending on cluster design, location, and resource requirements, it may be useful for clusters to share resources, seeking temporary "loans" from under-utilized clusters to increase job throughput at times of high load.

We have developed new ProtoGENI API extensions, server-side (AM) management and policy code, and client tools to manage experiments whose resource allocations grow and shrink dynamically over their lifetimes. These features not only support dynamic experiments within an AM, but also allow the AM's resources to be used temporarily by other, external clusters. To arbitrate and facilitate sharing between clusters and experiments with different resources, priorities, guarantees, and users, our dynamic experiment management software employs a mix of flexible policy, soft and hard resource guarantees, and a general, cooperative encoding of resource values among cluster management and dynamic experiment clients to promote eager sharing of unused resources.

Our demo will showcase both dynamic experiments and inter-cluster resource sharing at several !CloudLab clusters. !OpenStack cloud experiments at multiple !CloudLab clusters will add nodes when they are available, and give up nodes when the local cluster is under pressure. One !CloudLab cluster will share its resources with a Condor pool, and the !CloudLab share of the Condor pool will grow and shrink. We also hope to have another !CloudLab cluster integrated with an HPC cluster running Slurm, with the HPC cluster requesting !CloudLab nodes based on its workload demands, or releasing them when !CloudLab is under resource pressure (and thus requests or demands them back). We will be able to twiddle policy knobs to induce dynamic change and show how the clusters and experiments adapt. We plan for demo participants to see this resource dynamism and a snapshot of the management software's decisions in a "dashboard" web page.

Participants:
 * David Johnson, johnsond@flux.utah.edu, Univ of Utah
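
As a toy illustration of the kind of policy involved (invented numbers and thresholds, not the actual AM policy code), consider a cluster that lends idle nodes only while a hard local reserve holds:

{{{#!python
# Toy loan policy: lend idle nodes while a hard reserve holds, and
# recall soft loans when local demand rises. All numbers are invented.
from dataclasses import dataclass

@dataclass
class Cluster:
    total: int
    used: int
    loaned: int
    hard_reserve: int   # nodes that must remain available locally

    @property
    def free(self):
        return self.total - self.used - self.loaned

    def can_lend(self, n):
        # Honor the hard guarantee: never dip below the local reserve.
        return self.free - n >= self.hard_reserve

    def should_recall(self):
        # Soft loans are recalled when the cluster is under pressure.
        return self.loaned > 0 and self.free < self.hard_reserve

c = Cluster(total=100, used=80, loaned=10, hard_reserve=5)
print(c.can_lend(4))      # True: lending 4 leaves 6 free, above the reserve
print(c.should_recall())  # False: 10 free nodes, reserve is 5
}}}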

==== Dynamic Slices in ExoGENI: Modifying Slice Topology On Demand ====

This demo shows new ExoGENI features including dynamic slice modification. Functionality includes adding/removing compute nodes, storage nodes, and network links.

Participants:
 * Paul Ruth, pruth@renci.org, RENCI

==== (POSTER) SDN-ERS: A timely Software Defined Networking Framework for Emergency Response Systems ====

This poster presents our proposed timely SDN-based framework for Emergency Response Systems (ERS), which includes efficient schemes that identify minimum-delay routing paths and prioritize network traffic according to urgency.

Participants:
 * Mohamed Rahouti, mrahouti@mail.usf.edu, University of South Florida

==== Multi-protocol Network Troubleshooting with Pathtrace protocol ====

We showcase a proposed protocol for tracing flow paths between two end hosts of a given network topology. In this demo deployment, the protocol is implemented using a network function (NF) connected to legacy L2 switches. Trace packets traverse the NF through the use of ACLs on the L2 devices, allowing legacy networks to be retrofitted for accurate path tracing through L2 without affecting normal packet forwarding paths. A custom client (akin to the traceroute tool) is deployed on hosts to originate tracer packets, which are injected into the network toward the desired endpoint; the NF records each hop and uses the records to return path information to the administrator. The demo will show path tracing across an arbitrary L2 topology deployed using VTS, with live inspection of STP state and failure injection to show tracing through path changes between the same endpoints.

Participants:
 * Deniz Gurkan, dgurkan@central.uh.edu, University of Houston
 * Nick Bastin, nick.bastin@gmail.com, University of Houston
 * Kyle Long Tran, kyle.longtran@gmail.com, University of Houston
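
A hypothetical tracer client in the spirit of the traceroute-like tool described above (the real packet format and trace port are not specified here, so everything below is an assumption; requires scapy and root privileges):

{{{#!python
# Inject a tracer packet toward the endpoint; the NF is assumed to
# record each hop and return the accumulated path to the sender.
from scapy.all import IP, UDP, Raw, sr1

TRACE_PORT = 9999   # assumed UDP port that switch ACLs punt to the NF

def trace(dst_ip):
    pkt = IP(dst=dst_ip) / UDP(dport=TRACE_PORT) / Raw(b"PATHTRACE")
    reply = sr1(pkt, timeout=2)
    if reply is not None and reply.haslayer(Raw):
        print(reply[Raw].load.decode(errors="replace"))
    else:
        print("no path report received")

trace("10.0.0.20")
}}}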

=== Wireless ===

==== Paradrop ====

''Paradrop -- a virtualized !WiFi router for edge computing, 3rd-party services, and a tool for teaching networking in the classroom.''

We will demo the Paradrop Platform, a software platform that allows developers to launch applications onto specialized access points in homes and local businesses. This makes it possible to introduce 3rd-party services that the end user chooses to run in their home or business, including applications related to the Internet of Things, high-definition media content distribution, and others. For this demo, we will showcase the platform's ability to dynamically launch and control a virtualized access point. Within a container, an application detects motion in video provided wirelessly by a webcam.

Participants:
 * Derek Meyer, dmeyer@cs.wisc.edu, Wisconsin Wireless and Networking Systems (WiNGS) Laboratory
 * Sejal Chauhan, sejalc@cs.wisc.edu, University of Wisconsin-Madison
 * Suman Banerjee, suman@cs.wisc.edu, Wisconsin Wireless and Networking Systems (WiNGS) Laboratory

==== Building an End-to-end Slice through Slice Exchange between Virtualized !WiFi, VNode, and ProtoGENI ====

(See the listing in the SDX and Federation category.)

=== SDX and Federation ===

==== GENI Enabled Software Defined Exchange (SDX) ====

This demonstration will show a very early prototype of a GENI-enabled Software Defined Exchange (SDX) which utilizes the Network Service Interface (NSI) for network element control and includes public cloud resources from Amazon Web Services (AWS) as part of GENI stitched topologies. The work demonstrated here is driven by a vision for future R&E cyberinfrastructure consisting of an ecosystem of ad hoc and dynamically federated Software Defined Exchanges (SDXs) and Software Defined ScienceDMZ services. GENI technologies are leveraged in the form of the MAX Aggregate Manager, which utilizes the GENI Rack Aggregate Manager (GRAM) software for GENI federation functions. This MAX/GRAM AM utilizes the Open Grid Forum (OGF) NSI protocol to provision services across the network elements within the Washington International Exchange (WIX), located in !McLean, Virginia, and the MAX regional network.

Participants:
 * Tom Lehman, tlehman@umd.edu, Univ of Maryland
 * Xi Yang, maxyang@umd.edu, Univ of Maryland

==== EON-IDMS ====

The Earth Observation Depot Network (EODN) is a distributed storage service that capitalizes on resources from the NSF-funded GENI and Data Logistics Toolkit (DLT) projects. The Intelligent Data Movement Service (IDMS), a deployment of the DLT on the NSF-funded GENI cloud infrastructure, realizes EODN to enable open access, reduced latency, and fast downloads of valuable Earth science information collected from satellites and other sensors. Beyond basic storage capacity, the IDMS-EODN system includes mechanisms for optimizing data distribution throughout the depot network while also taking into account the desired locality of user data. Accelerating access enables better synchronization of disparate imagery sets and facilitates new meteorological and atmospheric research applications.

Participants:
 * Ezra Kissel, ekissel@indiana.edu

==== GpENI, KanREN, US Ignite Future Internet Testbed & Experiments ====

Our demo is an interactive visualization system that shows how a given SDN-enabled network behaves in the presence of area-based challenges. The visualization system consists of a Google Maps front-end hosted on a server that also enables event-based communication between the front-end and the challenged network. Challenges are specified by the user with a real-time editable polygon. The visualization system shows real-time performance parameters from physical experiments defined by the user and carried out on our KanREN OpenFlow testbed. When a challenge is applied on the map, the nodes inside the polygon are removed from the underlying OpenFlow network topology and appropriate measures are taken to ensure minimal disruption. As performance metrics, we present the real-time packet delivery ratio as well as throughput for the TCP- and UDP-based application traffic used in the experiments. More recently, we have focused on extensive enhancements to the Google Maps interface, adding controls that give the user detailed control over the state of the experiments and their configuration. We are also working on adding support for Mininet-based experiments, which would allow the user to run OpenFlow-based experiments on various topologies from !KU-TopView, a database of topology data from real physical and logical networks.

Participants:
 * Yufei Cheng, yfcheng@ittc.ku.edu, The University of Kansas

==== GENI Experiment !Engine/Ignite Collaborative Visualizer ====

The GENI Experiment Engine (GEE) is a rapid-deployment infrastructure-as-a-service platform deployed across the GENI infrastructure. In this demo, we will show the allocation of a GEE slicelet and the deployment of a full-featured app across the infrastructure. We also intend to show the GENI Experiment Engine spanning multiple infrastructures, including Chameleon and possibly SAVI.

Participants:
 * Rick !McGeer, rick@mcgeer.com
 * Andy Bavier, acb@cs.princeton.edu

==== SDX at SoX: Software Defined Exchange in the Regional Network ====

The SDX provides a promising opportunity to change the way network operators come together to provide new services and implement richer policy. This demo provides an update on the GENI SDX project in the SoX regional network.

Participants:
 * Russ Clark, russ.clark@gatech.edu, Georgia Tech

==== Building an End-to-end Slice through Slice Exchange between Virtualized !WiFi, VNode, and ProtoGENI ====

We introduce an SDX technology for dynamically building an end-to-end slice across multiple virtualized networks, including virtualized wireless access. We demonstrate building a federated slice between virtualized !WiFi, VNode, and ProtoGENI based on the enhanced Slice Exchange Point (SEP) framework over the inter-connected JGN-X and GENI testbeds.

Participants:
 * Akihiro Nakao, nakao@iii.u-tokyo.ac.jp, University of Tokyo
 * Michiaki Hayashi, mc-hayashi@kddilabs.jp, KDDI
 * nakauchi@nict.go.jp

=== Manufacturing ===

==== Workflow Performance experiments for HPC queuing systems over Hybrid Cloud technologies for 'Simulation-as-a-Service' ====

Advanced manufacturing today requires diverse computation infrastructure for data processing. Our 'Simulation-as-a-Service' app currently runs compute jobs on OSU HPC resources; however, there is a need to access other available computation resources. We give users access to a variety of clouds, such as Amazon and GENI, as HPC compute clusters through the use of HPC queuing systems. The cloud infrastructure is deployed on demand based on user requirements that are abstracted from a web site and converted to RSpecs that integrate customized scientific software (see the sketch after the next listing); these RSpecs are stored in catalogs for future use whenever similar requirements arise.

Participants:
 * Prasad Calyam, calyamp@missouri.edu, University of Missouri-Columbia
 * rcb553@mail.missouri.edu
 * rleto@totalsim.us

==== (POSTER) Experimental Demonstration of Heterogeneous Cross Stratum Broker for Scientific Applications ====

Participants:
 * Alberto Castro, Roberto Proietti, SJ Ben Yoo; Univ of California, Davis
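
Returning to the 'Simulation-as-a-Service' demo above, the requirements-to-RSpec step might look like the following hedged geni-lib sketch (the form fields and image URN are illustrative, not the project's actual code):

{{{#!python
# Hedged sketch: map web-form requirements to a geni-lib request for a
# compute cluster, then write the RSpec out for the catalog.
import geni.rspec.pg as PG

def build_cluster_rspec(req):
    r = PG.Request()
    for i in range(req["nodes"]):
        node = PG.RawPC("compute%d" % i)
        node.disk_image = req["image"]   # image with simulation software baked in
        r.addResource(node)
    return r

spec = build_cluster_rspec({
    "nodes": 4,                                           # from the web form
    "image": "urn:publicid:IDN+example+image+sim-tools",  # hypothetical URN
})
spec.writeXML("cluster.rspec")   # catalogued for reuse on similar requests
}}}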

==== A Cyber Physical testbed for Advanced Manufacturing ====

This demonstration will be a milestone in the area of digital manufacturing, showcasing a GENI-based cyber-physical framework for advanced manufacturing. This Next Internet-based framework will enable globally distributed software and manufacturing resources to be accessed from different locations to accomplish a complex set of life-cycle activities, including design analysis, assembly planning, and simulation. The advent of the Next Internet holds the promise of ushering in a new era in information-centric engineering and digital manufacturing activities. The focus will be on the emerging domain of micro-device assembly, which involves the assembly of micron-sized parts using automated micro-assembly work cells.

Participants:
 * J. Cecil, j.cecil@okstate.edu, Oklahoma State
 * Yajun Lu, yajun.lu@okstate.edu, Oklahoma State

=== Security ===

==== Getting to know RPKI: A GENI-based Tutorial ====

The Resource Public Key Infrastructure (RPKI) is an important tool for improving the robustness of the Internet by making BGP more secure. This project provides a full RPKI deployment testbed so that network operators can gain experience configuring and operating RPKI in preparation for deployment in their own networks.

Participants:
 * Russ Clark, russ.clark@gatech.edu, Georgia Tech
 * Samuel Norris, samuel.norris@gatech.edu
 * Tito Nieves, tito.nieves@gmail.com
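
For a sense of the operation the testbed lets operators exercise, here is a self-contained sketch of RPKI route-origin validation logic in the style of RFC 6811; the ROA entries below are illustrative, not testbed data:

{{{#!python
# Classify a BGP route (prefix + origin AS) against a set of ROAs as
# valid, invalid, or not-found, per the RFC 6811 decision procedure.
import ipaddress

ROAS = [  # (authorized prefix, maxLength, origin ASN)
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64496),
]

def validate(prefix, origin_asn):
    route = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, asn in ROAS:
        if route.version == roa_net.version and route.subnet_of(roa_net):
            covered = True  # at least one ROA covers this route
            if asn == origin_asn and route.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64496))     # valid
print(validate("192.0.2.0/25", 64496))     # invalid: exceeds maxLength 24
print(validate("198.51.100.0/24", 64496))  # not-found: no covering ROA
}}}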