Changes between Version 25 and Version 26 of GEC21Agenda/EveningDemoSession


Timestamp:
10/20/14 11:06:00 (10 years ago)
Author:
xuan.liu@mail.umkc.edu
Comment:

--

Legend:

Unmodified
Added
Removed
Modified
  • GEC21Agenda/EveningDemoSession

    v25 v26  
    145145==== Middleware for Hadoop-in-a-Hybrid-Cloud ====
    146146
    147 !MapReduce is a programming model for processing and generating large data sets, and Hadoop, a !MapReduce implementation, is a good tool for handling Big Data. Cloud computing, with its ubiquity and its on-demand, dynamic resource provisioning at low cost, has the potential to be the environment of choice for big data. However, running Hadoop in the cloud takes time and requires technical knowledge from users. A hybrid cloud compounds these requirements, because it is necessary to evaluate the resources in the private cloud and, if needed, obtain and prepare on-demand resources in the public cloud. Moreover, managing private and public domains simultaneously requires an appropriate model that combines performance with minimal cost. We propose an architecture for orchestrating Hadoop applications in hybrid clouds. The core of the model consists of a web portal for submissions, an orchestration engine, and an execution services factory. Through these three components it is possible to automate the preparation of a cross-domain cluster, provisioning the files involved, managing the execution of the application, and making the results available to the user.
     147!MapReduce is a programming model for processing and generating large data sets, and Hadoop, a !MapReduce implementation, is a good tool for handling Big Data. Cloud computing, with its ubiquity and its on-demand, dynamic resource provisioning at low cost, has the potential to be the environment of choice for big data. However, running Hadoop in the cloud takes time and requires technical knowledge from users. A hybrid cloud compounds these requirements, because it is necessary to evaluate the resources in the private cloud and, if needed, obtain and prepare on-demand resources in the public cloud. Moreover, managing private and public domains simultaneously requires an appropriate model that combines performance with minimal cost. We propose an architecture for orchestrating Hadoop applications in hybrid clouds. The core of the model consists of a web portal for submissions, an orchestration engine, and an execution services factory. Through these three components it is possible to automate the preparation of a cross-domain cluster, provisioning the files involved, managing the execution of the application, and making the results available to the user. In this demo, we will show the web portal interface and how to use the portal to create VMs and initialize them as Hadoop workers.
    148148
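As a rough illustration of the orchestration step described above, the sketch below shows how an engine might fill a worker request from the private cloud first and burst the remainder to on-demand public resources. All names here are hypothetical; this is not the project's actual API.

{{{#!python
# Minimal sketch of a hybrid-cloud planning step (hypothetical names, not the
# project's code): satisfy the request from the private domain first, then burst
# the remainder to on-demand public resources and estimate the extra cost.
from dataclasses import dataclass

@dataclass
class CloudDomain:
    name: str
    free_slots: int        # worker VM slots currently available
    cost_per_slot: float   # relative hourly cost of one worker VM

def plan_cluster(workers_needed, private, public):
    from_private = min(workers_needed, private.free_slots)
    from_public = workers_needed - from_private
    plan = {private.name: from_private, public.name: from_public}
    cost = (from_private * private.cost_per_slot
            + from_public * public.cost_per_slot)
    return plan, cost

if __name__ == "__main__":
    private = CloudDomain("private", free_slots=6, cost_per_slot=0.0)
    public = CloudDomain("public", free_slots=100, cost_per_slot=0.08)
    print(plan_cluster(10, private, public))  # ({'private': 6, 'public': 4}, 0.32)
}}}

In the full system, the orchestration engine would also stage the input files in each domain and collect the results through the web portal, as described above.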
    149149Participants:
     150  * Shuai Zhao, shuai.zhao@mail.umkc.edu, Univ. of Missouri - Kansas City
    150151  * Xuan Liu, xuan.liu@mail.umkc.edu, Univ. of Missouri-Kansas City
    151 
    152 ==== Dynamic Virtual Router Failure Recovery ====
    153 
    154 Network virtualization allows the flexibility to configure virtual networks dynamically. In such a setting, to provide resilient services to virtual networks, we consider the situation where the substrate network provider wants to have standby virtual routers ready to serve the virtual networks in the event of a failure. Such a failure can affect one or more virtual routers in multiple virtual networks. The goal of our work is to make the optimal selection of standby virtual routers so that virtual networks can be dynamically reconfigured back to their original topologies after a failure. We present an optimization formulation and a preliminary implementation on the GENI testbed that applies the idea behind the model. The selection metrics considered are geographical location and the VM load on the standby virtual router's host machine.
    155 
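A minimal sketch of the selection idea (illustrative only, with made-up weights and data; not the optimization formulation itself): score each candidate standby virtual router by its distance from the failed router and by the VM load on its host, and pick the lowest-scoring candidate.

{{{#!python
# Illustrative standby selection (hypothetical weights and data, not the paper's
# model): lower score = geographically closer and a less loaded host machine.
import math

def score(candidate, failed_xy, w_dist=1.0, w_load=50.0):
    distance = math.hypot(candidate["x"] - failed_xy[0],
                          candidate["y"] - failed_xy[1])
    return w_dist * distance + w_load * candidate["host_load"]

def pick_standby(candidates, failed_xy):
    return min(candidates, key=lambda c: score(c, failed_xy))

candidates = [
    {"name": "standby-1", "x": 10.0, "y": 4.0, "host_load": 0.8},
    {"name": "standby-2", "x": 30.0, "y": 9.0, "host_load": 0.2},
]
print(pick_standby(candidates, failed_xy=(12.0, 5.0))["name"])  # standby-2
}}}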
    156 Participants:
    157   * Xuan Liu, xuan.liu@mail.umkc.edu, Univ. of Missouri-Kansas City
    158 
    159 Here is the [http://groups.geni.net/geni/raw-attachment/wiki/GEC21Agenda/EveningDemoSession/gec21-demo.pdf poster]
    160 
    161 ==== Virtual Computer Networks Lab ====
    162 
    163 In this demo we will present several assignments we have created as part of the Virtual Computer Networks Lab project. These include assignments on IP routing, learning switch functionality, load balancing, and data center networking. We will also demonstrate new !LabWiki features that will support an instructor in managing these assignments.
    164 
    165 Participants:
    166   * Mike Zink, mzink@cas.umass.edu, Univ. of Massachusetts-Amherst
    167 
    168 ==== A networked Virtual Reality based training environment for orthopedic surgery ====
    169 
    170 This demonstration focuses on a GENI-based collaborative Virtual Reality training environment for orthopedic surgery. A network-based collaborative approach has been implemented which enables distributed training of medical students from different locations. The virtual surgery environment is operated remotely with the aid of a redundancy server.
    171 
    172 Participants:
    173   * Oklahoma State
    174     * bhararm@ostatemail.okstate.edu
    175     * j.cecil@okstate.edu
    176   * Parmesh Ramanathan, parmesh@ece.wisc.edu, Univ. of Wisconsin-Madison
    177 
    178 ==== Jacks ====
    179 
    180 Jacks is a new way to allocate and view your resources. It is embeddable in many contexts and is becoming more capable every day. The new features we wish to showcase are multi-aggregate support, a constraints system that helps users avoid topologies that can't work, and deeper integration of the editor into both Apt and the GENI Portal.
    181 
    182 Participants:
    183   * Jonathon Duerig, duerig@flux.utah.edu, Univ. of Utah
    184   * Rob Ricci, ricci@cs.utah.edu, Univ. of Utah
    185 
    186 ==== GENI Desktop ====
    187 
    188 GENI Desktop provides a unified interface and environment for experimenters to create, control, manage, interact with, and measure the performance of GENI slices. This demo will show the newly implemented functions in the GENI Desktop. We enhanced the GENI Desktop to use Speaks-for credentials for accessing resources from other GENI components on behalf of users. We improved the user interface based on feedback from the last GEC. We will demo the initial version of slice verification testing and the revised archival service implemented in GENI Desktop. In addition, we will demo the module for supporting user-defined routes implemented in the Adopt-A-GENI (AAG) project. This demo is suitable for GENI experimenters (beginners and experienced) who want to learn how to manage/control their experiments and interact with GENI resources. It may also be of interest to GENI tool developers who want to see how GENI Desktop uses the Speaks-for credential to interact with other GENI components.
    189 
    190 Here is the [http://groups.geni.net/geni/attachment/wiki/GEC21Agenda/EveningDemoSession/GENIdesktop_poster_gec21.pdf Demo Poster].
    191 
    192 Participants:
    193   * Jim Griffioen, griff@netlab.uky.edu, Univ. of Kentucky
    194   * Zongming Fei, fei@netlab.uky.edu, Univ. of Kentucky
    195   * Hussamuddin Nasir, nasir@netlab.uky.edu, Univ. of Kentucky
    196 
    197 ==== Simulation-As-A-Service App ====
    198 
    199 We will demonstrate a new configuration of our simulation-as-a-service (SMaaS) App that involves TotalSim using GENI for PaaS experiments, which will enable them to deliver their App (which has data-intensive computation and data movement workflows) in SaaS form to their customers. We will also show ontology integration for a collaboration use case in advanced manufacturing. Gigabit App developers and cloud infrastructure engineers will find our demo particularly interesting.
    200 
    201 Participants:
    202   * Prasad Calyam, calyamp@missouri.edu, Univ. of Missouri
    203   * Ronny Bazan Antequera, rcb553@mail.missouri.edu, Univ. of Missouri
    204   * Dmitrii Chemodanov, dycbt4@mail.missouri.edu, Univ. of Missouri
    205 
    206 ==== Sea-Cloud Innovation Environment ====
    207 
    208 The aim of the Sea-Cloud Innovation Environment, a nationwide testbed, is to build an open, general-purpose, federated, and large-scale shared experimental facility to foster the emergence of next-generation information technology research in China. The demo presents the experiment service system, the resource control system, and the measurement system of the Sea-Cloud Innovation Environment. Updates to the hardware and to the distributed end-to-end OpenFlow testbed are also introduced. In addition, the demo presents several new features of our system: an experiment workflow control program based on Java and Python, experiment visualization, a lightweight VM management tool, and more.
    209 
    210 Participants:
    211   * Xiaodan Zhang, zhangxiaodan@cstnet.cn, Chinese Academy of Sciences
    212 
    213 ==== Network Functions Virtualization using ProtoRINA ====
    214 
    215 Network Functions Virtualization (NFV) aims to implement network functions as software instead of dedicated physical devices (middleboxes), and recently it has attracted a lot of attention. NFV is inherently supported by our RINA architecture, and a Virtual Network Function (VNF) can be easily added onto existing networks. In this demo, we demonstrate how ProtoRINA can be used to support RINA-based NFV.
    216 
    217 Participants:
    218   * Ibrahim Matta, matta@bu.edu, Boston University
    219 
    220 ==== GENI Science Shakedown ====
    221 
    222 This demo will feature recent developments from the GENI Science Shakedown project. Specifically, we will show the ADCIRC Storm Surge model (MPI) running across several GENI racks. In addition, we will show new ExoGENI features.
    223 
    224 Participants:
    225   * Paul Ruth, pruth@renci.org, RENCI
    226 
    227 ==== Labwiki ====
    228 
    229 This demonstration presents the latest features of the Labwiki Workspace. We will demonstrate Labwiki's new support for resource selection and provisioning. We will also present its new automated experiment trial validation plugin; for example, a lecturer can now automatically get information about experiment trials requested by students. Finally, we will demonstrate Labwiki's new integration within an eBook widget.
    230 
    231 Participants:
    232   * Thierry Rakotoarivelo, thierry.rakotoarivelo@nicta.com.au, NICTA
    233   * Max Ott, max.ott@nicta.com.au, NICTA
    234   * Mike Zink, mzink@cas.umass.edu, Univ. of Massachusetts-Amherst
    235 
    236 ==== InstaGENI ====
    237 
    238 InstaGENI is one of the two GENI rack designs. In this demonstration, we will show the creation and deployment of, and reports from, an InstaGENI-wide monitoring slice spanning all currently available InstaGENI racks. We will use the advanced features of the GENI Experiment Engine to deploy the slice.
    239 
    240 Participants:
    241   * Rick McGeer, rick@mcgeer.com
    242 
    243 ==== GENI Experiment Engine ====
    244 
    245 The GENI Experiment Engine is a Platform-as-a-Service programming environment and storage system running on the InstaGENI infrastructure. In this demonstration, we will show single-pane-of-glass control of a distributed application running across the GEE infrastructure, using the GEE Message System for coordination and the GEE Filesystem to deploy data and results.
    246 
    247 Participants:
    248   * Rick McGeer, rick@mcgeer.com
    249 
    250 
    251 ==== GENI Cinema ====
    252 
    253 GENI Cinema is a live video streaming project under development by Clemson University and the University of Wisconsin. The goal is to allow users in the GENI community to host live events and allow other users to tune in. The infrastructure is implemented using GENI resources at various GENI aggregates, and OpenFlow is used extensively within the GENI aggregates to provide a seamless and scalable streaming service to the end user, who can simply access the service via a web browser. The demonstration at GEC21 will show how multiple users can provide live video streams to the GENI Cinema service, and how subscribers viewing a stream can easily switch from one feed to the next without breaking their sockets.
    254 
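As a rough sketch of the feed-switching idea (hypothetical data structures, not the GENI Cinema code or a real controller API): the viewer keeps a single socket, and switching feeds only replaces the flow rule that decides which ingress stream is forwarded and rewritten toward that socket.

{{{#!python
# Hypothetical sketch of per-viewer feed switching: each viewer has exactly one
# rule; changing feeds swaps the match side, and the viewer's socket never changes.
FEEDS = {
    "keynote":    {"ingress_port": 1, "udp_src": 5004},
    "demo_floor": {"ingress_port": 2, "udp_src": 5006},
}

def viewer_rule(viewer_ip, viewer_port, feed):
    return {
        "match":   {"in_port": feed["ingress_port"], "udp_src": feed["udp_src"]},
        "actions": [{"set_field": {"ipv4_dst": viewer_ip, "udp_dst": viewer_port}},
                    {"output": "to_viewer"}],
    }

def switch_feed(rules, viewer, feed_name):
    ip, port = viewer
    rules[viewer] = viewer_rule(ip, port, FEEDS[feed_name])  # replace, not add
    return rules

rules = switch_feed({}, ("10.0.0.42", 9000), "keynote")
rules = switch_feed(rules, ("10.0.0.42", 9000), "demo_floor")  # seamless for the viewer
print(rules[("10.0.0.42", 9000)]["match"])
}}}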
    255 Participants:
    256   * Ryan Izard,  rizard@clemson.edu, Clemson Univ.
    257   * Kuang-Ching Wang, kwang@clemson.edu, Clemson Univ.
    258   * Joseph Porter, jvporte@g.clemson.edu, Clemson Univ.
    259   * Benton Kribbs, bkribbs@g.clemson.edu, Clemson Univ.
    260   * Qing Wang, qw@g.clemson.edu, Clemson Univ.
    261   * Aditya Prakash, aprakash6@wisc.edu, University of Wisconsin-Madison
    262   * Parmesh Ramanathan, parmesh@ece.wisc.edu, Univ. of Wisconsin-Madison
    263 
    264 ==== eXpressive Internet Architecture (XIA) ====
    265 
    266 We present the eXpressive Internet Architecture (XIA) project as a platform for networks research. We demonstrate hands-on how to introduce new functionality to XIA. We introduce load balancing as a network primitive.
    267 
    268 Participants:
    269   * Dan Barrett, barrettd@cs.cmu.edu, Carnegie Mellon Univ
    270 
    271 ==== intelligent SDN based Traffic (de)Aggregation and Measurement Paradigm (iSTAMP) ====
    272 
    273 In our proposed SDN measurement framework, called iSTAMP, the flexibility provided by SDN for real-time reconfiguration of OpenFlow switches is utilized to partition the TCAM entries of switches/routers into two parts in order to: 1) optimally aggregate part of the incoming flows for aggregate measurements, and 2) de-aggregate and directly measure the most informative flows for per-flow measurements. Under the hard resource constraint of TCAM entries in SDN switches, iSTAMP designs the optimal aggregation matrix, which minimizes the flow-size estimation error, using compressive sensing network inference techniques. Moreover, the iSTAMP framework utilizes an intelligent Multi-Armed Bandit based algorithm to adaptively sample the most "rewarding" flows, whose accurate measurements have the highest impact on the overall flow measurement and estimation performance. iSTAMP then processes these aggregate and per-flow measurements to effectively estimate network flows using a variety of optimization techniques.
    274 
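The toy example below (illustrative only, not the authors' code) shows the measurement model in miniature: a few TCAM rules collect aggregate counters y = A x of the unknown flow vector x, one rule is spent on a directly measured flow, and the flows are estimated from the stacked system.

{{{#!python
# Toy version of the iSTAMP measurement model (not the authors' implementation):
# aggregate counters plus one per-flow counter, solved by least squares. In the
# real framework the directly measured flow is chosen adaptively by a
# multi-armed-bandit algorithm; here we simply pick the largest flow.
import numpy as np

rng = np.random.default_rng(0)
n_flows, n_agg_rules = 8, 4

x_true = rng.exponential(scale=10.0, size=n_flows)                  # unknown flow sizes
A = rng.integers(0, 2, size=(n_agg_rules, n_flows)).astype(float)   # 0/1 aggregation matrix
y_agg = A @ x_true                                                  # aggregate TCAM counters

j = int(np.argmax(x_true))                  # "most rewarding" flow (stand-in for the MAB choice)
e_j = np.zeros((1, n_flows)); e_j[0, j] = 1.0

M = np.vstack([A, e_j])                     # 5 TCAM entries observe 8 flows
y = np.concatenate([y_agg, [x_true[j]]])
x_hat, *_ = np.linalg.lstsq(M, y, rcond=None)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
}}}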
    275 Participants:
    276   * Mehdi Malboubi, mmalboubi@ucdavis.edu, University of California, Davis
    277   * Chen-Nee Chuah, chuah@ucdavis.edu, University of California, Davis
    278   * Lei Liu, leiliu@ucdavis.edu, University of California, Davis
    279   * S. J. Ben Yoo, sbyoo@ucdavis.edu, University of California, Davis
    280 
    281 
    282 ==== SDX Poster ====
    283 
    284 A poster describing a prototype SDX deployment.
    285 
    286 Participants:
    287   * Brecht Vermeulen, brecht.vermeulen@iminds.be, iMinds
    288   * Tom Lehman, tlehman@maxgigapop.net, MAX
    289   * Marshall Brinn, mbrinn@bbn.com, GENI Project Office
    290   * Niky Riga, nriga@bbn.com, GENI Project Office
    291 
    292 ==== SDX for GENI ====
    293 
    294 The SDX for GENI work led by the Georgia Tech team will be demonstrated. Updates will be provided on the implementation and deployment. Draft service requirements will be presented.
    295 
    296 Participants:
    297   * Russ Clark, russ.clark@gatech.edu, Georgia Tech
    298 
    299 === Federation / International Projects ===
    300 
    301 ==== SDXs-Software Defined Network Exchanges Inter Domain Prototype ====
    302 
    303 This !Poster/Demo will present the SDXs inter-domain prototype underway at !StarLight. This includes the !StarLight GENI AM, the vNode Slice Exchange Point, OpenFlow for NSI (ofNSI), and other inter-domain control integration at !StarLight. The vNode/Slice Exchange Point (SEP) team will also demonstrate the SDX GK integration between vNode/SEP and SDXs. We will also show the SOX and !StarLight SDXs integration extending to other domains.
    304 
    305 Participants:
    306   * Jim Chen, jim-chen@northwestern.edu, Northwestern Univ
    307   * Joe Mambretti, j-mambretti@northwestern.edu, Northwestern Univ
    308   * Fei Yeh, fyeh@northwestern.edu, Northwestern Univ
    309 
    310 
    311 ==== VNode, FLARE, SDX ====
    312 
    313 We will show our recent progress on the VNode system, focusing especially on applications running over it. For GEC21, we are preparing three demos: one on dynamic software function deployment in a virtual network for a video streaming service (demo1), a second on FLARE and network service deployment (demo2), and a third on federation between different virtualization platforms (demo3).
    314 
    315 demo1: Dynamic software function deployment in a virtual network will be demonstrated. Video streaming via the virtual network will be shown, and the stream will be transcoded automatically when network congestion occurs.
    316 
    317 demo2: In the FLARE demo, the updated application-driven networking will be shown. In the network service deployment demo, it will be shown that a service created with the Click-based network design tool is automatically deployed over a virtualized network slice. The live demo will also show that service deployment, start, and stop can be executed with one command.
    318 
    319 demo3: Federation between SDX and VNode is demonstrated. Developers interested in application-driven networking should see this demo. Developers interested in SDX or in international/heterogeneous virtual networks should also see this demonstration, because it will show VNode-SDX federation.
    320 
    321 Participants:
    322   * Univ. of Tokyo
    323     * Akihiro Nakao, nakao@iii.u-tokyo.ac.jp
    324     * Shu Yamamoto, shu@iii.u-tokyo.ac.jp
    325   * Toshiaki Tarui, toshiaki.tarui.my@hitachi.com, Hitachi
    326 
    327 ==== International Federation ====
    328 
    329 Building on previous GEC demos on international federation, we want to demonstrate the scale we can currently reach in terms of number of resources, and the limits the tools and the federation currently have (target: 1000 resources in an experiment built from multiple slices).
    330 
    331 Participants:
    332   * Brecht Vermeulen, brecht.vermeulen@iminds.be, iMinds
    333 
    334 
    335 === Wireless Projects ===
    336 
    337 ==== Vehicular Sensing and Control ====
    338 
    339 This demo focuses on newly developed mechanisms for enhancing the performance of the Vehicular Sensing and Control (VSC) platform, as well as the latest achievements for VSC platform based applications. Specifically, this demo will demonstrate 1) virtualization of camera sensing and vehicle internal sensing, and 2) VSC application-layer emulation with camera sensing using the GENI WiMAX network and ExoGENI racks. Additionally, the extended communication capability of the VSC platform for both local and remote users will be demonstrated via a WiRover box when the vehicle moves out of the coverage of the GENI WiMAX network. Developers and researchers interested in vehicular sensing and control networks, resource virtualization, and real-time communication should stop by this demo; we would like to share our experiences and learn from your suggestions.
    340 
    341 Participants:
    342   * yuehua.research@gmail.com
    343 
    344 ==== Integrating GENI/Wireless with Emerging Connected Vehicle and Intelligent Transportation Systems ====
    345 
    346 This demo presents an OpenFlow-based handover and mobility solution for connected vehicles. A handover can occur on a device in a vehicle when it changes network interfaces or when an interface attaches to a new point on the edge. Traditionally, device mobility has been made possible with various mobile IP solutions. Clemson University was tasked by GENI to create a testbed to support handover and mobility experiments with IPv4. Our solution is entirely OpenFlow-based and includes components onboard the client device and within the network edge and core. The demonstration at GEC21 shows how the various OpenFlow components interact and are used to allow a client device to switch interfaces without disrupting the application layer.
    347 
    348 Scenario 1: Vehicles run one application that requires a continuous network connection, e.g., video conferencing. However, while moving on the road, the vehicle may move out of the coverage of the currently connected network. Since other networks may be available, the vehicle should be able to switch to another available network transparently. In this demo, we show how the SDN-based scheme can enable a smooth and transparent handoff to another network without disturbing the performance of the running application.
    349 
    350 Scenario 2: Vehicles will operate two applications. One will be uploading or downloading real-time or video-on-demand streamed data. The second will be a safety-related application. Emerging applications such as collision avoidance will require certain types of control messages to be sent periodically from or to the vehicle, with strict network service quality requirements. While these applications are well suited for areas with DSRC vehicle-to-infrastructure deployments, it is clear that DSRC networks are unable to scale (both geographically and by the number of participating nodes). Thus, in this demo, we will show how the two applications can transparently switch between different networks with the support of the SDN-based handoff solution, so that their requirements on network service quality and robustness can be satisfied.
    351 
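A minimal sketch of the client-side handover decision (hypothetical thresholds and function names, not Clemson's implementation): stay on the current interface unless another one is clearly better, then ask the controller to move the client's flows before the old attachment point is torn down.

{{{#!python
# Hypothetical handover decision with hysteresis, to avoid flapping between
# networks at the edge of coverage. Signal strengths are in dBm; None = interface down.
def choose_interface(current, signals, margin=10.0):
    up = {name: s for name, s in signals.items() if s is not None}
    if not up:
        return None
    best = max(up, key=up.get)
    if current in up and up[best] - up[current] < margin:
        return current                      # not better enough; stay put
    return best

def request_handover(client, old_if, new_if):
    # Stand-in for the OpenFlow signalling step: the controller would install
    # rules steering the client's flows to the new attachment point first
    # (make-before-break), so the application layer never notices the switch.
    print(f"{client}: moving flows {old_if} -> {new_if}")

current = "wimax0"
signals = {"wimax0": -88.0, "wlan0": -60.0}   # vehicle leaving WiMAX coverage
target = choose_interface(current, signals)
if target and target != current:
    request_handover("vehicle-17", current, target)
}}}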
    352 Participants:
    353   * Kang Chen, kangc@g.clemson.edu, Clemson Univ.
    354   * Jim Martin, JMARTY@clemson.edu, Clemson Univ.
    355   * Kuang-Ching Wang, kwang@clemson.edu, Clemson Univ.
    356   * Anjan Rayamajhi, arayama@clemson.edu, Clemson Univ.
    357   * Jianwei Liu, ljw725@gmail.com, Clemson Univ.
    358   * Ryan Izard, rizard@g.clemson.edu, Clemson Univ.
    359   * Karthik Ramakrishnan, ramakri@g.clemson.edu, Clemson Univ.   
    360 
    361 ==== GENI Enabling an Ecological Science Community ====
    362 
    363 The University of Wisconsin-Madison's GENI WiMAX installations have been extended to the Kemp Natural Resource Station in northern Wisconsin. We are GENI-enabling the research facility so that ecology students and field classes have research connectivity out in the forest and lake areas. This demo shows the current infrastructure, planned research sites, and video coverage at Kemp.
    364 
    365 Participants:
    366   * Derek Meyer, dmeyer@cs.wisc.edu, Wisconsin Wireless and NetworkinG Systems (WiNGS) Laboratory
     152  * Luis Russi