[[PageOutline]]

= GEC 25 Evening Reception with Demonstrations and Posters =

'''Location:''' Kovens Conference Center - First Floor Gallery[[BR]]
3000 Northeast 151st Street, North Miami, FL 33181[[BR]]
'''Time:''' Tuesday March 14: 05:30pm - 07:30pm

= Session Presenters =

The evening demo session gives GENI experimenters and developers a chance to share their work in a live network environment. Demonstrations run for the entire length of the session, with teams on hand to answer questions and collaborate. This page lists the requested demonstrations.

== Moving Target Defense ==

__Authors:__ Minh Nguyen, Priyanka Samanta, Saptarshi Debroy - The City University of New York

A security demo and presentation on Moving Target Defense against cyber attacks.

== Run My Experiment on GENI ==

__Authors:__ Fraida Fund - New York University

GENI is used in a tremendous variety of ways, from cutting-edge research on new Internet architectures, to undergraduate senior projects in distributed systems, to classroom assignments in cognitive radio design. One major advantage of GENI is the ease with which students and researchers can replicate and then extend existing experiments. This demo highlights a wide range of reproducible GENI experiments that others can build on (from the Run My Experiment on GENI blog, https://witestlab.poly.edu/blog/ ).

== SDN-based Traffic Analysis Resistant Network (TARN) Architecture ==

__Authors:__ Qing Wang, Geddings Barrineau, Lu Yu, Jon Oakley, Kuang-ching Wang, Richard Brooks - Clemson University

This demo will show a prototype of an SDN-based traffic analysis resistant network (TARN) architecture. TARN explores an end-to-end network architecture that removes traffic analysis vulnerabilities by using SDN-based solutions to "shake up" the foundation of the Internet architecture - IP. One proposed goal for TARN is to circumvent Internet censorship without being subject to traffic analysis, by reducing the likelihood of being "tracked". Unlike proxy-based solutions (e.g. Tor, Psiphon, Decoy Routing), the TARN architecture avoids forcing Internet users to trust intermediate proxy nodes.

== Demonstration of a GENI based cyber physical test bed for advanced manufacturing ==

__Authors:__ J. Cecil, Sadiq !AlBuhamood - Oklahoma State University

This demonstration marks a milestone in the area of Digital Manufacturing, showcasing a GENI-based cyber physical framework for advanced manufacturing. This Next Internet based framework will enable globally distributed software and manufacturing resources to be accessed from different locations to accomplish a complex set of life cycle activities, including design analysis, assembly planning, and simulation. The advent of the Next Internet holds the promise of ushering in a new era in Information Centric engineering and digital manufacturing activities. The focus will be on the emerging domain of micro devices assembly, which involves the assembly of micron-sized parts using automated micro assembly work cells.

== Distributed Delay Minimization Approach for Networked Control Systems in Wide-Area Power Grids ==

__Authors:__ Mohamed Rahouti, Tommy Chin, Kaiqi Xiong - University of South Florida

The design and deployment of an efficient wide-area communication and computing framework for large power grids remains one of the greatest challenges in harvesting the enormous volume of Phasor Measurement Unit (PMU) data in real time. In this project, we explore and leverage Software Defined Networking (SDN) and cloud computing to address this challenge. This poster/demo presents our efficient approaches along with their GENI experimental results.

== Virtual Computer Networks Lab ==

__Authors:__ Bhushan Suresh, Divyashri Bhat, Michael Zink - University of Massachusetts

The project focuses on providing students an interactive classroom experience through hands-on course modules. A wide variety of course modules with varying degrees of complexity have been developed so far. VCNL offers a web-based tool using Jupyter that allows students to automate their experiment procedures and graph the results. The !OpenFlow logic for the assignments is implemented in Python using the Ryu controller.

== Novel Internet access service with online traffic engineering of elephant flows ==

__Authors:__ Sourav Maji, Malathi Veeraraghavan, Molly Buchanan, Fatma Alali, Jordi Ros-Giralt, Alan Commike - University of Virginia

Elephant flows, which are high-rate, large-sized flows, can cause increased packet delays and packet losses in other flows. Even if the arrival rate of elephant flows is fairly low, large enterprises and lower-tier ISPs often upgrade their access links to provider networks to avoid service degradations caused by elephant flows. This work proposes and demonstrates an alternative to access-link upgrades. The solution deploys an Elephant Flow Traffic Engineering System (EFTES) that monitors access-link traffic, identifies elephant flows in real time, and instructs the router to add a firewall filter that isolates identified elephant flows to a separate queue. We used a ProtoGENI slice consisting of five bare-metal hosts located in the University of Kentucky testbed to illustrate the value offered by EFTES. One of the hosts was configured to serve as a router and forward packets from three hosts sending traffic to a receiving host. Among the three sending hosts, one generates an elephant flow, another replays real background traffic, and the remaining host sends ping traffic. The background traffic was generated by replaying a trace collected by the Center for Applied Internet Data Analysis (CAIDA). The host configured as a router executes an Elephant Flow Identification Engine (EFIE), the part of the EFTES that identifies elephant flows in the presence of background traffic. EFTES then uses this elephant flow identifier to set a filter rule that sends subsequent elephant flow packets to a different queue. Redirecting the elephant flow in this way prevents the background traffic and the ping traffic from experiencing large queuing delays. Ping delay measurements show that EFTES keeps delay-sensitive flows from experiencing high latency, demonstrating the value offered by EFTES to delay-sensitive flows in the presence of artificial elephant flows added to a background replay of the CAIDA packet traces.
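
As a rough, illustrative sketch of the redirection step described above (not the authors' EFIE/EFTES code), the Python fragment below polls per-flow byte counters and installs a filter rule once a flow crosses an assumed elephant threshold; the threshold value, the flow-counter input, and the `install_filter_rule` stand-in are hypothetical names introduced only for illustration.

{{{#!python
# Minimal sketch of elephant-flow identification and redirection.
# Assumptions (not from EFTES): per-flow byte counters are polled from
# the router, and a filter rule maps a flow's packets to a separate queue.

ELEPHANT_BYTES = 100 * 1024 * 1024      # assume flows beyond ~100 MB are elephants
ELEPHANT_QUEUE = 1                      # separate queue reserved for elephant flows

redirected = set()                      # flows already isolated

def install_filter_rule(flow, queue):
    """Stand-in for instructing the router to add a firewall filter."""
    print("filter: send %s:%d -> %s:%d to queue %d" % (*flow, queue))

def identify_and_redirect(flow_byte_counts):
    """flow_byte_counts: {(src, sport, dst, dport): cumulative_bytes}.
    Redirect any flow whose byte count crosses the elephant threshold."""
    for flow, nbytes in flow_byte_counts.items():
        if nbytes >= ELEPHANT_BYTES and flow not in redirected:
            install_filter_rule(flow, ELEPHANT_QUEUE)
            redirected.add(flow)

# Example poll: one large transfer (redirected) and one ping-like flow (untouched).
identify_and_redirect({
    ("10.0.0.1", 5001, "10.0.0.4", 5001): 3 * 1024**3,
    ("10.0.0.2", 0, "10.0.0.4", 0): 4096,
})
}}}
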

== Data Flow Prioritization for Scientific Workflows Using A Virtual SDX on ExoGENI ==

__Authors:__ Anirban Mandal, Paul Ruth, Ilya Baldin, Rafael Ferreira da Silva, Ewa Deelman - Renaissance Computing Institute (RENCI)

This demonstration will showcase a novel, dynamically adaptable networked cloud infrastructure driven by the demands of a data-driven scientific workflow. It will use resources from ExoGENI. The demo will run on dynamically provisioned 'slices' spanning multiple ExoGENI racks that are interconnected using dynamically provisioned connections from Internet2 and ESnet. We will show how a virtual Software Defined Exchange (SDX) platform, instantiated on ExoGENI, provides additional functionality for the management of scientific workflows. Using the virtual SDX slice, we will demonstrate how tools developed in the DoE Panorama project enable the Pegasus Workflow Management System to monitor and manipulate network connectivity and performance between sites, pools, and tasks within a workflow. We will use a representative, data-intensive genome science workflow as the driving use case to showcase these capabilities.

== Steroid !OpenFlow Service ==

__Authors:__ Ryan Izard, Junaid Zulfiqar, Khayam Anjam, Caleb Linduff - Clemson University

With the recent rise in cloud computing, applications routinely access and interact with data on remote resources. As data sizes become increasingly large, often combined with locations far from the applications, the well-known drop in TCP throughput over large bandwidth-delay product paths becomes more significant to these applications. While many solutions exist to alleviate the problem, they require specialized software at both the application host and the remote data server, making it hard to scale to a large range of applications and execution environments. Steroid !OpenFlow Service (SOS) is a scalable, SDN-based network service that can transparently improve the performance of TCP-based data transfers. We will demonstrate the simplicity and scalability of the SOS architecture and how it can be flexibly deployed to improve TCP data transfers across experimental, virtual cloud-based, and even production network environments.

== GENI Wireless Testbed: A flexible open ecosystem for wireless communications research ==

__Authors:__ Michael Sherman, Ivan Seskar, Abhimanyu Gosain - Rutgers

This demo presents the architecture of the GENI edge cloud computing network, in the form of compute and storage resources, a mobile 4G LTE edge, and a high-speed campus network connecting these components. We demonstrate two use cases for LTE on GENI. First, a mobile, infrastructure-less LTE implementation utilizing two ORBIT radio nodes running !OpenAirInterface (OAI), one as the eNB and one as a UE. Second, we show an implementation with a COTS UE connecting to the OAI eNB, with backhaul to the OAI EPC running on the Rutgers GENI rack.

== Experiences using GENI and the Affinity Research Group (ARG) Model to foster deep learning ==

__Authors:__ Graciela Perera - Northeastern Illinois University

Student engagement, both inside and outside of the classroom, is a prerequisite to critical thinking. We have adapted the Affinity Research Group (ARG) Model to foster deep student engagement and learning in computer networking and cybersecurity courses. ARG is a team-based learning strategy for undergraduate research that develops scholarly inquiry skills in students through an intentionally inclusive, cooperative approach.
Because the approach is overtly supportive and reassuring, students can be challenged and engaged using GENI. Students are more comfortable freely engaging in experimentation and are not discouraged by failure.

== Denial of Service Detection and Mitigation on GENI ==

__Authors:__ Dr. Xenia Mountrouidou, Mac Knight - College of Charleston

The demo will show how SDN can be leveraged to detect and mitigate a SYN flood denial of service attack. An !OpenFlow switch is used to duplicate traffic to a monitor that is running an IDS. The monitor alerts a controller when it believes an attack is taking place. If the controller confirms there is malicious traffic, action is taken and the attack is mitigated. In the demo, we will go into more detail on how the scripts running on the monitor and controller detect and mitigate the malicious traffic. In addition, using GENI Desktop graphs, we will show visually when the attack occurred and when it was mitigated.

== Incident-Supporting Visual Cloud Computing utilizing Software-Defined Networking ==

__Authors:__ Dmitrii Chemodanov, Prasad Calyam - University of Missouri

We are developing a new set of cloud/fog protocols to support computer vision applications for real-time visual situational awareness (e.g., tracking objects of interest, 3D scene reconstruction, augmented reality-based communications), which are critical to first responders. These applications require seamless processing of imagery/video at the network edge and on core cloud platforms, with resilient performance that meets user Quality of Experience (QoE) expectations. Absent or poor wireless communications at the edge networks near incident scenes further complicate the use of these applications. As part of our project activities, we have set up a realistic virtual environment testbed in GENI and developed an SDN controller to evaluate our hybrid cloud-fog architecture along with the proposed algorithms. Specifically, to enable core-cloud computation we used high-performance nodes for large instance processing (e.g., tracking of objects, data fusion for 3D scene reconstruction). We also used low-performance nodes for small instance processing (e.g., image tiling, stabilization, geo-projection) in fogs at the SDN network edge. To transfer data between the core cloud and fogs over the SDN network, we used !OpenFlow virtual switches. Finally, to compensate for the lack of wireless networking at the edge in GENI, our testbed setup also included a campus enterprise network with connected clients. In addition to speeding up visual data processing, our preliminary experimental results indicate the need for sustained throughput at the wireless edge networks and for novel geographical routing protocols to enhance responders' QoE. At the workshop, we will share the barriers we are overcoming in GENI to create a realistic cloud-fog testbed, and how we are building upon these research results on performance engineering of public safety applications. In addition, we will share how we are working towards setting up a city-scale testbed with city collaborator sites and first responder agencies.
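
The core-cloud/fog division of labor described above can be pictured with the small, hypothetical Python sketch below: tasks under an assumed size cut-off are placed on fog nodes at the edge, larger ones on core-cloud nodes. The `Node` and `Task` classes, the cut-off, and the least-loaded rule are illustrative assumptions, not the project's controller or algorithms.

{{{#!python
# Minimal sketch of the core-cloud/fog split: small image-processing tasks
# stay at fog nodes near the edge, large ones go to high-performance
# core-cloud nodes. All names and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    load: float                # current utilization, 0.0 to 1.0

@dataclass
class Task:
    name: str
    input_mb: float            # size of the imagery/video to process

SMALL_TASK_LIMIT_MB = 50       # assumed boundary between small and large instances

def place(task, fog_nodes, core_nodes):
    """Pick the least-loaded node in the pool appropriate for the task size."""
    pool = fog_nodes if task.input_mb <= SMALL_TASK_LIMIT_MB else core_nodes
    return min(pool, key=lambda node: node.load)

# Example: tiling stays in the fog, 3D reconstruction goes to the core cloud.
fog = [Node("fog-1", 0.2), Node("fog-2", 0.7)]
core = [Node("core-1", 0.4)]
print(place(Task("image-tiling", 10), fog, core).name)         # fog-1
print(place(Task("3d-reconstruction", 900), fog, core).name)   # core-1
}}}
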

== Layer-Two Peering across SAVI and GENI Testbeds using !HyperExchange ==

__Authors:__ Saeed Arezoumand, Hadi Bannazadeh, and Alberto Leon-Garcia - University of Toronto

We demonstrate the peering of virtual networks between the SAVI and GENI testbeds using !HyperExchange, a software-defined exchange fabric. The exchange is deployed between the physical networks of the two testbeds. Specifically, a layer-two WAN including nodes in the SAVI testbed is peered with a VLAN in the GENI testbed without using encapsulation or overlays. Each testbed has different logic for creating and managing layer-two networks, so this demonstration shows how !HyperExchange is protocol-agnostic and allows tenants to create networks across dissimilar networks.

== GENI Webinars for Research and Education ==

__Authors:__ Ben Newton, Jay Aikat, Kevin Jeffay - University of North Carolina

We will have a poster and demo showing our educational modules and webinar sessions.

== Green Energy Aware SDN Platform ==

__Authors:__ Garegin Grigoryan, Keivan Bahmani, Grayson Schermerhorn, Yaoqing Liu - Clarkson University

Data centers are massive infrastructures that host today's Internet and cloud services. A typical data center consumes the energy budget of around 25,000 households, almost 200 times the electricity of a standard office space [1]. This massive energy demand has motivated growing interest in using green, renewable energy at data centers. Google plans to supply 100% of the electricity for its data centers and offices from wind and solar power by the end of 2017 [2]. The amount of renewable energy that can be generated at a data center depends on its location and the time. We introduce a Green Energy Aware SDN platform with an SDN controller that schedules client requests to servers depending on delay and the renewable energy currently generated at the data centers. In this work, we adopt the National Solar Radiation Database (NSRDB), maintained by the National Renewable Energy Laboratory (NREL), to estimate the amount of solar energy that can be generated at each data center. Our platform can schedule client requests not solely based on green energy, but also on other data center parameters (e.g. CPU utilization, delay requirements).

[1] M. Poess and R. O. Nambiar, "Energy Cost, the Key Challenge of Today's Data Centers: A Power Consumption Analysis of TPC-C Results," Proc. VLDB Endow., vol. 1, no. 2, pp. 1229–1240, Aug. 2008.[[BR]]
[2] "We're set to reach 100% renewable energy — and it's just the beginning," Google, 06-Dec-2016. [Online]. Available: http://blog.google:443/topics/environment/100-percent-renewable-energy/. [Accessed: 22-Feb-2017].

== !PlanetIgnite: A Viral GEE ==

__Authors:__ Rick !McGeer, Matt Hemmings, Andy Bavier, Glenn Ricart - US Ignite

!PlanetIgnite is a general-purpose, Infrastructure-as-a-Service, self-assembling, lightweight edge cloud on virtualized infrastructure with support for single-pane-of-glass distributed application configuration and deployment. This is an entirely new concept. !PlanetLab, GENI, and SAVI are general-purpose IaaS edge clouds, but they require top-down installation and dedicated hardware resources at each site and do not offer single-pane-of-glass application deployment. Seattle is a lightweight, self-assembling edge cloud that offers single-pane-of-glass configuration and control, but developers are restricted to using a subset of Python. !PlanetIgnite is a Containers-as-a-Service edge cloud that offers Docker containers to each !PlanetIgnite user. A !PlanetIgnite node is an off-the-shelf Ubuntu 14.04 virtual machine with Docker installed, meaning it can be installed on any edge node where a VM with a routable IPv4 address is available.
Adding a !PlanetIgnite node to the infrastructure is simple: a site wishing to host a !PlanetIgnite node simply downloads the image; on boot, the new !PlanetIgnite node registers with the !PlanetIgnite portal, which runs a series of acceptance tests. Once these complete, the image is registered and the node is added to the set of !PlanetIgnite sites.

== From GENI SDX to Metro SDX ==

__Authors:__ Sean Donovan, Russ Clark - Georgia Tech

Metro SDX is a network design that combines the benefits of the software-defined exchange (SDX) concept with integrated edge computation. It makes interconnection a priority, opportunistically connecting to many networks for resilient paths, while providing storage and computation resources at network nodes for both general-purpose computation and MetroSDX-specific services.
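
As a purely illustrative sketch of the "opportunistically connecting to many networks for resilient paths" idea, the Python fragment below ranks candidate peer networks by measured health and keeps a primary and a backup uplink; the metrics, data layout, and function names are assumptions rather than the MetroSDX implementation.

{{{#!python
# Illustrative only: choose primary and backup uplinks among the peer
# networks an exchange point could opportunistically connect to.
# The selection metric and all names are assumptions, not the authors' design.

def rank_uplinks(peers):
    """peers: {name: {"up": bool, "loss": fraction, "rtt_ms": float}}.
    Return usable peer names ordered best-first (lowest loss, then lowest RTT)."""
    usable = {name: m for name, m in peers.items() if m["up"]}
    return sorted(usable, key=lambda name: (usable[name]["loss"], usable[name]["rtt_ms"]))

def pick_primary_and_backup(peers):
    """Keep the best uplink as primary and the next best as a resilient backup."""
    ranked = rank_uplinks(peers)
    primary = ranked[0] if ranked else None
    backup = ranked[1] if len(ranked) > 1 else None
    return primary, backup

# Example measurements for three candidate peer networks.
peers = {
    "isp-a":  {"up": True,  "loss": 0.00, "rtt_ms": 12.0},
    "isp-b":  {"up": True,  "loss": 0.02, "rtt_ms": 8.0},
    "campus": {"up": False, "loss": 0.00, "rtt_ms": 30.0},
}
print(pick_primary_and_backup(peers))    # ('isp-a', 'isp-b')
}}}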