Changes between Version 9 and Version 10 of GEC25Agenda/Demos


Timestamp: 02/28/17 16:52:39
Author: lnevers@bbn.com
  • GEC25Agenda/Demos

= Demo Night !Agenda/Speakers =

The evening demo session gives GENI experimenters and developers a chance to share their work in a live network environment. Demonstrations run for the entire length of the session, with teams on hand to answer questions and collaborate. This page lists requested demonstrations categorized in broad interest groups. You can download project posters and supplemental information from attachments listed at the bottom of this page.
== Virtual Computer Networks Lab ==

__Authors:__ Bhushan Suresh, Divyashri Bhat, Michael Zink - University of Massachusetts

The project focuses on providing students an interactive classroom experience built around engaging course modules. A wide variety of course modules with varying degrees of complexity have been developed so far. VCNL offers a web-based tool built on Jupyter that allows students to automate their experiment procedure and graph the results. The OpenFlow logic for the assignments is implemented in Python using the Ryu controller.
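As a hedged illustration of the kind of OpenFlow logic such assignments implement, consider the core table logic of a learning switch. This is a plain-Python sketch, not the actual Ryu API or the course's code; the names and the `FLOOD` stand-in are illustrative assumptions.

```python
# Sketch of learning-switch logic: learn source MACs as packets arrive,
# forward known destinations out the learned port, flood unknown ones.
# (Plain Python for illustration; not the Ryu controller API.)

FLOOD = -1  # stand-in for the controller's "flood" output action

class LearningSwitch:
    def __init__(self):
        self.mac_to_port = {}  # MAC address -> switch port

    def packet_in(self, src_mac, dst_mac, in_port):
        """Return the output port to use for a packet-in event."""
        self.mac_to_port[src_mac] = in_port  # learn/refresh the source
        return self.mac_to_port.get(dst_mac, FLOOD)

sw = LearningSwitch()
print(sw.packet_in("aa:aa", "bb:bb", 1))  # dst unknown -> -1 (flood)
print(sw.packet_in("bb:bb", "aa:aa", 2))  # aa:aa was learned on port 1 -> 1
```

In a real Ryu application this decision would run inside a packet-in event handler, with the returned port translated into a flow-mod pushed to the switch.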
== A novel Internet access service with online traffic engineering of elephant flows ==

__Authors:__ Sourav Maji, Malathi Veeraraghavan, Molly Buchanan, Fatma Alali, Jordi Ros-Giralt, Alan Commike - University of Virginia

Elephant flows, which are high-rate, large-sized flows, can cause increased packet delays and packet losses in other flows. Even if the arrival rate of elephant flows is fairly low, large enterprises and lower-tier ISPs often upgrade their access links to provider networks to avoid the service degradation elephant flows cause. This work proposes and demonstrates an alternative to access-link upgrades: an Elephant Flow Traffic Engineering System (EFTES) that monitors access-link traffic, identifies elephant flows in real time, and instructs the router to add a firewall filter that isolates identified elephant flows to a separate queue.

We used a ProtoGENI slice consisting of five bare-metal hosts in the University of Kentucky testbed to illustrate the value offered by EFTES. One host was configured to serve as a router, forwarding packets from three sending hosts to a receiving host. Among the three sending hosts, one generates an elephant flow, another replays real background traffic, and the third sends ping traffic. The background traffic was generated by replaying a trace collected by the Center for Applied Internet Data Analysis (CAIDA). The router host executes an Elephant Flow Identification Engine (EFIE), the part of EFTES that identifies elephant flows in the presence of background traffic. EFTES then uses this elephant-flow identifier to set a filter rule that sends subsequent elephant-flow packets to a different queue. Redirecting the elephant flow prevents the background traffic and the ping traffic from experiencing large queuing delays. Ping delay measurements show that EFTES protects delay-sensitive flows from high latency in the presence of artificial elephant flows added to a background replay of the CAIDA packet traces.
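The identify-then-redirect loop can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' EFIE code: the 10 MB threshold and the rule string are assumptions, and a real EFIE would also age out flow state.

```python
# Sketch of elephant-flow identification: accumulate bytes per flow and,
# the first time a flow crosses a size threshold, emit a redirect rule
# that sends its subsequent packets to a separate queue.

from collections import defaultdict

ELEPHANT_BYTES = 10 * 1024 * 1024  # 10 MB cutoff (an assumed value)

class ElephantFlowIdentifier:
    def __init__(self, threshold=ELEPHANT_BYTES):
        self.threshold = threshold
        self.bytes_seen = defaultdict(int)  # (src, dst) -> cumulative bytes
        self.flagged = set()                # flows already redirected

    def observe(self, flow, nbytes):
        """Count bytes for a flow; return a filter rule when it becomes an elephant."""
        self.bytes_seen[flow] += nbytes
        if flow not in self.flagged and self.bytes_seen[flow] >= self.threshold:
            self.flagged.add(flow)
            src, dst = flow
            return f"redirect {src}->{dst} to queue 1"
        return None

efie = ElephantFlowIdentifier(threshold=1000)
print(efie.observe(("10.0.0.1", "10.0.0.2"), 600))  # 600 B, below threshold -> None
print(efie.observe(("10.0.0.1", "10.0.0.2"), 600))  # 1200 B cumulative -> rule emitted
```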
== Data Flow Prioritization for Scientific Workflows Using A Virtual SDX on ExoGENI ==

__Authors:__ Anirban Mandal, Paul Ruth, Ilya Baldin, Rafael Ferreira da Silva, Ewa Deelman - Renaissance Computing Institute (RENCI)

This demonstration will showcase a novel, dynamically adaptable networked cloud infrastructure driven by the demand of a data-driven scientific workflow. It will use resources from ExoGENI. The demo will run on dynamically provisioned 'slices' spanning multiple ExoGENI racks that are interconnected using dynamically provisioned connections from Internet2 and ESnet. We will show how a virtual Software Defined Exchange (SDX) platform, instantiated on ExoGENI, provides additional functionality for management of scientific workflows. Using the virtual SDX slice, we will demonstrate how tools developed in the DoE Panorama project can enable the Pegasus Workflow Management System to monitor and manipulate network connectivity and performance between sites, pools, and tasks within a workflow. We will use a representative, data-intensive genome science workflow as a driving use case to showcase the above capabilities.

== Steroid OpenFlow Service ==

== GENI Wireless Testbed: A flexible open ecosystem for wireless communications research ==

__Authors:__ Michael Sherman, Ivan Seskar, Abhimanyu Gosain - Rutgers

This demo presents the architecture of the GENI edge cloud computing network, in the form of compute and storage resources, a mobile 4G LTE edge, and a high-speed campus network connecting these components. We demonstrate two use cases for LTE on GENI. First, a mobile, infrastructure-less LTE implementation utilizing two ORBIT radio nodes running OpenAirInterface (OAI), one as the eNB and one as a UE. Second, an implementation with a COTS UE connecting to the OAI eNB, with backhaul to the OAI EPC running on the Rutgers GENI rack.
== Experiences using GENI and the Affinity Research Group (ARG) Model to foster deep learning ==

__Authors:__ Graciela Perera - Northeastern Illinois University

Student engagement, both inside and outside the classroom, is a prerequisite to critical thinking. We have adapted the Affinity Research Group (ARG) Model to foster deep student engagement and learning in computer networking and cybersecurity courses. ARG is a team-based learning strategy for undergraduate research that develops scholarly inquiry skills in students through an intentionally inclusive, cooperative approach. Because the approach is overtly supportive and reassuring, students can be challenged and engaged using GENI. Students are more comfortable engaging freely in experimentation and are not discouraged by failure.
== Denial of Service Detection and Mitigation on GENI ==

__Authors:__ Dr. Xenia Mountrouidou, Mac Knight - College of Charleston

The demo will show how SDN can be leveraged to detect and mitigate a SYN flood denial-of-service attack. An OpenFlow switch duplicates traffic to a monitor running an IDS. The monitor alerts a controller when it believes an attack is taking place. If the controller confirms there is malicious traffic, action is taken and the attack is mitigated. In the demo we will go into more detail on how the scripts running on the monitor and controller detect and mitigate the malicious traffic. In addition, using GENI Desktop graphs, we will show visually when the attack occurred and when it was mitigated.
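The monitor/controller split described above can be sketched as two small functions. This is an illustrative toy, not the authors' scripts: the per-window SYN threshold and the drop-rule format are assumed.

```python
# Sketch of the SYN-flood detection/mitigation split: the monitor counts
# TCP SYNs per source within a window and raises alerts; the controller
# confirms each alert and returns a drop rule for confirmed attackers.

from collections import Counter

SYN_THRESHOLD = 100  # SYNs per window before the monitor alerts (assumed)

def monitor(syn_sources, threshold=SYN_THRESHOLD):
    """Return the source IPs whose SYN count in this window exceeds the threshold."""
    counts = Counter(syn_sources)
    return [src for src, n in counts.items() if n > threshold]

def controller(alerts, confirmed_malicious):
    """Install a drop rule for each alerted source the controller confirms."""
    return [f"drop src={src}" for src in alerts if src in confirmed_malicious]

window = ["10.0.0.9"] * 150 + ["10.0.0.3"] * 5  # one flooding source, one benign
alerts = monitor(window)
print(controller(alerts, confirmed_malicious={"10.0.0.9"}))  # ['drop src=10.0.0.9']
```

In the actual demo the "drop rule" step corresponds to the controller installing an OpenFlow rule on the switch rather than returning a string.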
== Incident-Supporting Visual Cloud Computing utilizing Software-Defined Networking ==

__Authors:__ Dmitrii Chemodanov, Prasad Calyam - University of Missouri

We are developing a new set of cloud/fog protocols to support computer vision applications for real-time visual situational awareness (e.g., tracking objects of interest, 3D scene reconstruction, augmented reality-based communications) that are critical to first responders. These applications require seamless processing of imagery/video at the network edge and on core cloud platforms, with resilient performance that caters to user Quality of Experience (QoE) expectations. Absent or poor wireless communications at the edge networks near incident scenes further complicates the use of these applications. As part of our project activities, we have set up a realistic virtual environment testbed in GENI and developed an SDN controller to evaluate our hybrid cloud-fog architecture along with the proposed algorithms. Specifically, to enable core-cloud computation we used high-performance nodes for large-instance processing (e.g., tracking of objects, data fusion for 3D scene reconstruction). We also used low-performance nodes for small-instance processing (e.g., image tiling, stabilization, geo-projection) through fogs at the SDN network edge. To transfer data between the core cloud and fogs over the SDN network, we used OpenFlow virtual switches. Finally, to compensate for the lack of wireless networking at the edge in GENI, our testbed setup also included a campus enterprise network with connected clients. In addition to speeding up visual data processing, our preliminary experiment results indicate the need for sustained throughput at the wireless edge networks and for novel geographical routing protocols to enhance responders' QoE.

At the workshop, we will share what barriers we are overcoming in GENI to create a realistic cloud-fog testbed, and how we are building upon these research results on performance engineering of public safety applications. In addition, we will share how we are working towards setting up a city-scale testbed with city collaborator sites and first responder agencies.
== Layer-Two Peering across SAVI and GENI Testbeds using !HyperExchange ==

__Authors:__ Saeed Arezoumand, Hadi Bannazadeh, and Alberto Leon-Garcia - University of Toronto

We demonstrate the peering of virtual networks between the SAVI and GENI testbeds using HyperExchange, a software-defined exchange fabric. The exchange is deployed between the physical networks of the two testbeds. Specifically, a layer-two WAN including nodes in the SAVI testbed is peered with a VLAN in the GENI testbed without using encapsulation or overlays. Each testbed has different logic for creating and managing layer-two networks, so this demonstration shows how HyperExchange is protocol-agnostic and allows tenants to create networks across dissimilar infrastructures.
== GENI Webinars for Research and Education ==

__Authors:__ Ben Newton, Jay Aikat, Kevin Jeffay - University of North Carolina

We will have a poster and demo showing our educational modules and webinar sessions.
== Green Energy Aware SDN Platform ==

__Authors:__ Garegin Grigoryan, Keivan Bahmani, Grayson Schermerhorn, Yaoqing Liu - Clarkson University

Data centers are the massive infrastructures that host today's internet and cloud services. A typical data center consumes roughly the energy budget of 25,000 households, almost 200 times the electricity of a standard office space [1]. This massive energy demand has motivated growing interest in using renewable energy at data centers; Google plans to supply 100% of the electricity for its data centers and offices from wind and solar power by the end of 2017 [2].

The amount of renewable energy that can be generated at a data center depends on its location and the time. We introduce a Green Energy Aware SDN platform with an SDN controller that schedules client requests to servers based on delay and the renewable energy currently generated at each data center. In this work, we use the National Solar Radiation Database (NSRDB), maintained by the National Renewable Energy Laboratory (NREL), to estimate the amount of solar energy that can be generated at each data center. Our platform can schedule client requests not solely on green energy but also on other data center parameters (e.g., CPU utilization, delay requirements).

[1] M. Poess and R. O. Nambiar, "Energy Cost, the Key Challenge of Today's Data Centers: A Power Consumption Analysis of TPC-C Results," Proc. VLDB Endow., vol. 1, no. 2, pp. 1229–1240, Aug. 2008.
[2] "We're set to reach 100% renewable energy — and it's just the beginning," Google, 06-Dec-2016. [Online]. Available: http://blog.google:443/topics/environment/100-percent-renewable-energy/. [Accessed: 22-Feb-2017].
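The scheduling policy described above can be sketched as a simple scoring function. The numbers, field names, and linear weighting below are illustrative assumptions, not the authors' platform; a real controller would pull solar estimates from NSRDB and measured delays from the network.

```python
# Sketch of green-energy-aware request scheduling: score each data center
# by available solar power minus a delay penalty and send the request to
# the highest-scoring one. Weights and inputs are assumed for illustration.

def pick_datacenter(datacenters, energy_weight=1.0, delay_weight=2.0):
    """Return the name of the data center with the best energy/delay trade-off."""
    def score(dc):
        return energy_weight * dc["solar_kw"] - delay_weight * dc["delay_ms"]
    return max(datacenters, key=score)["name"]

dcs = [
    {"name": "NY", "solar_kw": 40, "delay_ms": 10},  # close, little sun
    {"name": "AZ", "solar_kw": 90, "delay_ms": 30},  # far, lots of sun
]
print(pick_datacenter(dcs))                    # AZ: solar surplus outweighs delay
print(pick_datacenter(dcs, delay_weight=3.0))  # NY: stricter delay penalty flips it
```

Raising `delay_weight` models delay-sensitive requests; the same hook could incorporate CPU utilization or other data center parameters mentioned above.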

== !PlanetIgnite: A Viral GEE ==

__Authors:__ Rick McGeer, Matt Hemmings, Andy Bavier, Glenn Ricart - US Ignite

PlanetIgnite is a general-purpose, Infrastructure-as-a-Service, self-assembling, lightweight edge cloud on virtualized infrastructure with support for single-pane-of-glass distributed application configuration and deployment. This is an entirely new concept. PlanetLab, GENI, and SAVI are general-purpose IaaS edge clouds, but they require top-down installation and dedicated hardware resources at each site and do not offer single-pane-of-glass application deployment. Seattle is a lightweight, self-assembling edge cloud that offers single-pane-of-glass configuration and control, but developers are restricted to a subset of Python. PlanetIgnite is a Containers-as-a-Service edge cloud that offers Docker containers to each PlanetIgnite user. A PlanetIgnite node is an off-the-shelf Ubuntu 14.04 virtual machine with Docker installed, so it can be installed on any edge node where a VM with a routable IPv4 address is available. Adding a PlanetIgnite node to the infrastructure is simple: a site wishing to host a node downloads the image; on boot, the new node registers with the PlanetIgnite portal, which runs a series of acceptance tests. Once these complete, the image is registered and the node is added to the set of PlanetIgnite sites.