Changes between Version 27 and Version 28 of GEC21Agenda/EveningDemoSession


Timestamp: 10/20/14 11:30:49 (10 years ago)
Author: xuan.liu@mail.umkc.edu
  • GEC21Agenda/EveningDemoSession (v27 → v28)

==== Hadoop-in-a-Hybrid-Cloud ====

Removed (v27):

!MapReduce is a programming model for processing and generating large data sets, and Hadoop, a !MapReduce implementation, is a good tool for handling Big Data. Cloud computing, with its ubiquity and its on-demand, dynamic, low-cost resource provisioning, is a promising environment for big data. However, using Hadoop in the cloud takes time and requires technical knowledge from users. A hybrid cloud adds to these requirements, because it is necessary to evaluate the resources in the private cloud and, if needed, obtain and prepare on-demand resources in the public cloud. Moreover, the simultaneous management of private and public domains requires an appropriate model that combines performance with minimal cost. We propose an architecture to orchestrate Hadoop applications in hybrid clouds, which here include private clouds at Unicamp and UMKC, with GENI as the public cloud. The core of the model consists of a web portal for submissions, an orchestration engine, and an execution services factory. Through these three components it is possible to automate the preparation of a cross-domain cluster, the provisioning of the files involved, the execution of the application, and the delivery of results to the user. In this demo, we will show the web portal interface and how to use it to create VMs and initialize them as Hadoop workers.

Participants:
 * Shuai Zhao, shuai.zhao@mail.umkc.edu, Univ. of Missouri-Kansas City
 * Xuan Liu, xuan.liu@mail.umkc.edu, Univ. of Missouri-Kansas City
 * Luis Russi

Added (v28):

!MapReduce is a programming model for processing and generating large data sets, and Hadoop, a !MapReduce implementation, is a good tool for handling Big Data. Cloud computing, with its ubiquity and its on-demand, dynamic, low-cost resource provisioning, is a promising environment for big data. However, using Hadoop in the cloud takes time and requires technical knowledge from users. A hybrid cloud adds to these requirements, because it is necessary to evaluate the resources in the private cloud and, if needed, obtain and prepare on-demand resources in the public cloud. Moreover, the simultaneous management of private and public domains requires an appropriate model that combines performance with minimal cost. We propose an architecture to orchestrate Hadoop applications in hybrid clouds. The core of the model consists of a web portal for submissions, an orchestration engine, and an execution services factory. Through these three components it is possible to automate the preparation of a cross-domain cluster, the provisioning of the files involved, the execution of the application, and the delivery of results to the user.
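
As a rough illustration of the kind of step the orchestration engine automates when it turns a freshly provisioned VM into a Hadoop worker, the sketch below runs the worker-side bootstrap over SSH. The host names, paths, the MASTER_PLACEHOLDER token, and the run_ssh helper are all hypothetical; this is not the project's portal or engine code, and it assumes a Hadoop 2.x layout with password-less SSH.

{{{
#!python
# Hypothetical sketch: bootstrap a freshly provisioned VM as a Hadoop worker.
# Assumes a Hadoop 2.x tree already staged under /opt/hadoop, password-less
# SSH to the node, and a MASTER_PLACEHOLDER token in core-site.xml; all names
# and paths here are illustrative, not the project's actual code.
import subprocess

HADOOP_HOME = "/opt/hadoop"

def run_ssh(host, command):
    """Run a shell command on a remote node via ssh and return its output."""
    return subprocess.check_output(["ssh", host, command], text=True)

def init_hadoop_worker(worker_host, master_host):
    # Point the worker's Hadoop configuration at the cluster master.
    run_ssh(worker_host,
            "sed -i 's/MASTER_PLACEHOLDER/%s/' %s/etc/hadoop/core-site.xml"
            % (master_host, HADOOP_HOME))
    # Start the worker-side daemons (HDFS DataNode and YARN NodeManager).
    run_ssh(worker_host, "%s/sbin/hadoop-daemon.sh start datanode" % HADOOP_HOME)
    run_ssh(worker_host, "%s/sbin/yarn-daemon.sh start nodemanager" % HADOOP_HOME)

if __name__ == "__main__":
    init_hadoop_worker("worker1.example.net", "master.example.net")
}}}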

Participants:

 * Xuan Liu,  xuan.liu@mail.umkc.edu, Univ. of Missouri-Kansas City

==== Dynamic Virtual Router Failure Recovery ====

Network virtualization allows the flexibility to configure virtual networks dynamically. In such a setting, to provide resilient services to virtual networks, we consider the situation where the substrate network provider wants to have standby virtual routers ready to serve the virtual networks in the event of a failure. Such a failure can affect one or more virtual routers in multiple virtual networks. The goal of our work is to make the optimal selection of standby virtual routers so that virtual networks can be dynamically reconfigured back to their original topologies after a failure. We present an optimization formulation and a preliminary implementation on the GENI testbed that applies the idea behind the model. The selection metrics considered are geographical location and the VM load on the standby virtual router's host machine.
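
As a rough sketch of the selection step (not the optimization formulation itself), the snippet below scores each candidate standby router by its geographic distance from the failed router and the VM load on its host, and picks the cheapest candidate. The candidate list, coordinates, loads, and weights are invented for illustration.

{{{
#!python
# Illustrative sketch only: choose a standby virtual router using the two
# metrics mentioned above (geographic distance and host VM load).
# The candidates, coordinates, loads, and weights are made up.
import math

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def select_standby(failed_location, candidates, w_dist=1.0, w_load=100.0):
    """Return the candidate with the lowest weighted distance + load cost."""
    def cost(c):
        return (w_dist * distance_km(failed_location, c["location"])
                + w_load * c["host_vm_load"])
    return min(candidates, key=cost)

candidates = [
    {"name": "standby-1", "location": (39.10, -94.58), "host_vm_load": 0.7},
    {"name": "standby-2", "location": (40.44, -79.99), "host_vm_load": 0.2},
]
print(select_standby(failed_location=(41.88, -87.63), candidates=candidates)["name"])
}}}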

Participants:

 * Xuan Liu,  xuan.liu@mail.umkc.edu, Univ. of Missouri-Kansas City
   Here is the [http://groups.geni.net/geni/attachment/wiki/GEC21Agenda/EveningDemoSession/gec21-demo.pdf poster].

==== Virtual Computer Networks Lab ====

In this demo we will present several assignments we have created as part of the Virtual Computer Networks Lab project, including assignments on IP routing, learning switch functionality, load balancing, and data center networking. We will also demonstrate new LabWiki features that support an instructor in managing these assignments.
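
As an example of the kind of logic students implement, the learning-switch assignment reduces to the simplified sketch below (plain Python, independent of any particular controller framework); it is only an illustration, not the course's reference solution.

{{{
#!python
# Simplified learning-switch logic: remember which port each source MAC was
# seen on, forward to the learned port for known destinations, and flood
# otherwise. This illustrates the assignment's core idea only.
FLOOD = "flood"

class LearningSwitch:
    def __init__(self):
        self.mac_to_port = {}  # MAC address -> switch port

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn (or refresh) the port where the source MAC lives.
        self.mac_to_port[src_mac] = in_port
        # Forward out the learned port if the destination is known, else flood.
        return self.mac_to_port.get(dst_mac, FLOOD)

sw = LearningSwitch()
print(sw.handle_frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", in_port=1))  # flood
print(sw.handle_frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", in_port=2))  # 1
}}}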

Participants:

 * Mike Zink,  mzink@cas.umass.edu, Univ. of Massachusetts-Amherst

==== A networked Virtual Reality based training environment for orthopedic surgery ====

This demonstration focuses on a GENI-based collaborative Virtual Reality training environment for orthopedic surgery. A network-based collaborative approach has been implemented that enables distributed training of medical students at different locations. The virtual surgery environment is operated remotely with the aid of a redundancy server.

Participants:

 * Oklahoma State
  * bhararm@ostatemail.okstate.edu
  * j.cecil@okstate.edu
 * Parmesh Ramanathan,  parmesh@ece.wisc.edu, Univ. of Wisconsin-Madison


==== Jacks ====

Jacks is a new way to allocate and view your resources. It is embeddable in many contexts and is becoming more capable every day. The new features we wish to showcase are multi-aggregate support, a constraints system that helps users avoid topologies that can't work, and deeper integration of the editor into both Apt and the GENI Portal.

Participants:

 * Jonathon Duerig,  duerig@flux.utah.edu, Univ. of Utah
 * Rob Ricci,  ricci@cs.utah.edu, Univ. of Utah

==== GENI Desktop ====

GENI Desktop provides a unified interface and environment for experimenters to create, control, manage, interact with, and measure the performance of GENI slices. This demo will show the newly implemented functions in the GENI Desktop. We enhanced the GENI Desktop to use Speaks-for credentials for accessing resources from other GENI components on behalf of users, and we improved the user interface based on feedback from the last GEC. We will demo the initial version of the slice verification testing and the revised archival service implemented in the GENI Desktop. In addition, we will demo the module supporting user-defined routes implemented in the Adopt-A-GENI (AAG) project. This demo is suitable for GENI experimenters (beginners and experienced) who want to learn how to manage and control their experiments and interact with GENI resources. It may also be of interest to GENI tool developers who want to see how the GENI Desktop uses the Speaks-for credential to interact with other GENI components.
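
As a rough sketch of what using a Speaks-for credential looks like at the aggregate manager level, a tool might include the user's ABAC speaks-for credential alongside its own credential and name the spoken-for user in the call options. The snippet below follows the GENI AM API v3 credential struct as we understand it; the aggregate URL, file names, user URN, and option values are placeholders and may not match the GENI Desktop's actual implementation.

{{{
#!python
# Hedged sketch: calling an aggregate's AM API v3 ListResources with a
# speaks-for credential. The aggregate URL, certificate/credential file names,
# and user URN are placeholders; field names follow the AM API v3 credential
# struct as we understand it and may differ from GENI Desktop's own code.
import ssl
import xmlrpc.client

AM_URL = "https://am.example.net:12346"                   # placeholder
USER_URN = "urn:publicid:IDN+ch.geni.net+user+alice"      # placeholder

def load(path):
    with open(path) as f:
        return f.read()

credentials = [
    # The tool's own credential.
    {"geni_type": "geni_sfa", "geni_version": "3", "geni_value": load("tool_cred.xml")},
    # The ABAC credential in which the user authorizes the tool to speak for them.
    {"geni_type": "geni_abac", "geni_version": "1", "geni_value": load("speaks_for.xml")},
]
options = {
    "geni_rspec_version": {"type": "GENI", "version": "3"},
    "geni_speaking_for": USER_URN,   # names the user the tool is speaking for
}

# GENI aggregates authenticate clients by SSL certificate.
context = ssl.create_default_context()
context.load_cert_chain("tool_cert.pem", "tool_key.pem")
am = xmlrpc.client.ServerProxy(AM_URL, context=context)
print(am.ListResources(credentials, options))
}}}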

Here is the [http://groups.geni.net/geni/raw-attachment/wiki/GEC21Agenda/EveningDemoSession/GENIdesktop_poster_gec21.pdf Demo Poster].

Participants:

 * Jim Griffioen,  griff@netlab.uky.edu, Univ. of Kentucky
 * Zongming Fei,  fei@netlab.uky.edu, Univ. of Kentucky
 * Hussamuddin Nasir,  nasir@netlab.uky.edu, Univ. of Kentucky

==== Simulation-As-A-Service App ====

We will demonstrate a new configuration of our simulation-as-a-service (SMaaS) App that involves !TotalSim using GENI for PaaS experiments, which will enable them to deliver their App (which has data-intensive computation and data-movement workflows) in SaaS form to their customers. We will also show ontology integration for a collaboration use case in advanced manufacturing. Gigabit App developers and cloud infrastructure engineers will find our demo particularly interesting.

Participants:

 * Prasad Calyam,  calyamp@missouri.edu, Univ. of Missouri
 * Ronny Bazan Antequera,  rcb553@mail.missouri.edu, Univ. of Missouri
 * Dmitrii Chemodanov,  dycbt4@mail.missouri.edu, Univ. of Missouri

==== Sea-Cloud Innovation Environment ====

The aim of the Sea-Cloud Innovation Environment, a nationwide testbed, is to build an open, general-purpose, federated, and large-scale shared experimental facility to foster the emergence of next-generation information technology research in China. The demo presents the experiment service system, resource control system, and measurement system of the Sea-Cloud Innovation Environment. An update on the hardware and the distributed end-to-end OpenFlow testbed is also introduced. In addition, some new features of our system are presented in this demo, including an experiment workflow control program based on Java and Python, experiment visualization, a lightweight VM management tool, and so on.

Participants:

 * Xiaodan Zhang,  zhangxiaodan@cstnet.cn, Chinese Academy of Sciences

==== Network Functions Virtualization using ProtoRINA ====

Network Functions Virtualization (NFV) aims to implement network functions in software instead of dedicated physical devices (middleboxes), and it has recently attracted a lot of attention. NFV is inherently supported by our RINA architecture, and a Virtual Network Function (VNF) can easily be added to existing networks. In this demo, we demonstrate how ProtoRINA can be used to support RINA-based NFV.

Participants:

 * Ibrahim Matta,  matta@bu.edu, Boston University

==== GENI Science Shakedown ====

This demo will feature recent developments from the GENI Science Shakedown project. Specifically, we will show the ADCIRC Storm Surge model (MPI) running across several GENI racks. In addition, we will show new ExoGENI features.

Participants:

 * Paul Ruth,  pruth@renci.org, RENCI

==== Labwiki ====

This demonstration presents the latest features of the Labwiki Workspace. We will demonstrate Labwiki's new support for resource selection and provisioning. We will also present its new automated experiment trial validation plugin; for example, a lecturer can now automatically get information about experiment trials requested by students. Finally, we will demonstrate Labwiki's new integration within an eBook widget.

Participants:

 * Thierry Rakotoarivelo,  thierry.rakotoarivelo@nicta.com.au, NICTA
 * Max Ott,  max.ott@nicta.com.au, NICTA
 * Mike Zink,  mzink@cas.umass.edu, Univ. of Massachusetts-Amherst

==== InstaGENI ====

InstaGENI is one of the two GENI rack designs. In this demonstration, we will show the creation and deployment of, and reporting from, an InstaGENI-wide monitoring slice spanning all currently available InstaGENI racks. We will use the advanced features of the GENI Experiment Engine to deploy the slice.

Participants:

 * Rick !McGeer,  rick@mcgeer.com


==== GENI Experiment Engine ====

The GENI Experiment Engine is a Platform-as-a-Service programming environment and storage system running on the InstaGENI infrastructure. In this demonstration, we will show single-pane-of-glass control of a distributed application running across the GEE infrastructure, using the GEE Message System for coordination and the GEE Filesystem to deploy data and results.

Participants:

 * Rick !McGeer,  rick@mcgeer.com

=== Federation / International Projects ===

==== GENI Cinema ====

GENI Cinema is a live video streaming project under development by Clemson University and the University of Wisconsin. The goal is to allow users in the GENI community to host live events and allow other users to tune in. The infrastructure is implemented using GENI resources at various GENI aggregates, and OpenFlow is used extensively within the GENI aggregates to provide a seamless and scalable streaming service to the end user, who can simply access the service via a web browser. The demonstration at GEC21 will show how multiple users can provide live video streams to the GENI Cinema service, and how subscribers viewing a stream can easily switch from one feed to the next without breaking their sockets.
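
The socket-preserving feed switch relies on rewriting flow rules at an OpenFlow switch so that the viewer keeps receiving from the same apparent source while the upstream feed behind it changes. The sketch below expresses the idea with the Ryu OpenFlow 1.3 API, chosen here purely for illustration (it is not necessarily the controller GENI Cinema uses); the addresses, ports, and "virtual source" are made up.

{{{
#!python
# Illustrative Ryu-style flow update: steer a different upstream feed to an
# existing viewer while rewriting headers so the viewer's socket still sees
# the same source. Addresses, ports, and the virtual source are made up.

def switch_viewer_to_feed(datapath, viewer_ip, viewer_port_no,
                          new_feed_ip, new_feed_udp_port,
                          virtual_src_ip="10.10.0.1", virtual_udp_port=5004):
    parser = datapath.ofproto_parser
    ofp = datapath.ofproto

    # Match traffic arriving from the newly selected feed.
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=17,
                            ipv4_src=new_feed_ip, udp_src=new_feed_udp_port)

    # Rewrite it to look like the stable virtual source the viewer is already
    # receiving from, then forward it out the viewer's port.
    actions = [
        parser.OFPActionSetField(ipv4_src=virtual_src_ip),
        parser.OFPActionSetField(udp_src=virtual_udp_port),
        parser.OFPActionSetField(ipv4_dst=viewer_ip),
        parser.OFPActionOutput(viewer_port_no),
    ]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    # Higher priority than the rule for the previous feed, so it takes over.
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=200,
                                        match=match, instructions=inst))
}}}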

Participants:

 * Ryan Izard,  rizard@clemson.edu, Clemson Univ.
 * Kuang-Ching Wang,  kwang@clemson.edu, Clemson Univ.
 * Joseph Porter,  jvporte@g.clemson.edu, Clemson Univ.
 * Benton Kribbs,  bkribbs@g.clemson.edu, Clemson Univ.
 * Qing Wang,  qw@g.clemson.edu, Clemson Univ.
 * Aditya Prakash,  aprakash6@wisc.edu, Univ. of Wisconsin-Madison
 * Parmesh Ramanathan,  parmesh@ece.wisc.edu, Univ. of Wisconsin-Madison


==== XIA ====

We present the eXpressive Internet Architecture (XIA) project as a platform for networks research. We demonstrate hands-on how to introduce new functionality to XIA. We introduce load balancing as a network primitive.

Participants:

 * Dan Barrett,  barrettd@cs.cmu.edu, Carnegie Mellon Univ.

==== International Federation ====

Building on previous GEC demos on international federation, we want to demonstrate the scale we can currently reach in terms of number of resources, and the limits the tools and the federation currently have (target: 1000 resources in an experiment built from multiple slices).

Participants:

 * Brecht Vermeulen,  brecht.vermeulen@iminds.be, iMinds

==== SDX Poster ====

A poster describing a prototype SDX deployment.

Participants:

 * Brecht Vermeulen,  brecht.vermeulen@iminds.be, iMinds
 * Tom Lehman,  tlehman@maxgigapop.net, MAX
 * Marshall Brinn,  mbrinn@bbn.com, GENI Project Office
 * Niky Riga,  nriga@bbn.com, GENI Project Office

==== SDX for GENI ====

The SDX for GENI work led by the Georgia Tech team will be demonstrated. Updates will be provided on the implementation and deployment, and draft service requirements will be presented.

Participants:

 * Russ Clark,  russ.clark@gatech.edu, Georgia Tech

==== SDXs - Software Defined Network Exchanges Inter-Domain Prototype ====

This poster/demo will present the SDX inter-domain prototype underway at !StarLight. This includes the !StarLight GENI AM, the vNode Slice Exchange Point, OpenFlow for NSI (ofNSI), and other inter-domain control integration at !StarLight. The vNode/Slice Exchange Point (SEP) team will also demonstrate the SDX GK integration between vNode/SEP and SDXs. We will also show the SOX and !StarLight SDX integration extended to other domains.

Participants:

 * Jim Chen,  jim-chen@northwestern.edu, Northwestern Univ.
 * Joe Mambretti,  j-mambretti@northwestern.edu, Northwestern Univ.
 * Fei Yeh,  fyeh@northwestern.edu, Northwestern Univ.

==== VNode, FLARE, SDX ====

We will show our recent progress on the VNode system, especially focusing on applications running over it. At GEC21, we are preparing three demos: one on dynamic software function deployment in a virtual network for a video streaming service (demo1), a second on FLARE and network service deployment (demo2), and a third on federation between different virtualization platforms (demo3).

demo1: Dynamic software function deployment in a virtual network will be demonstrated; video streaming via the virtual network will be shown, and the stream will be transcoded automatically when network congestion occurs.

demo2: In the FLARE demo, the updated application-driven networking will be shown. In the network service deployment demo, we will show that a service created with the Click-based network design tool is automatically deployed over the virtualized network slice. The live demo will also show that service deployment, start, and stop can each be executed with one command.

demo3: Federation between SDX and VNode is demonstrated. Developers interested in application-driven networking should see this demo. Developers interested in SDX or in international/heterogeneous virtual networks should also see this demonstration, because it shows VNode-SDX federation.

Participants:

 * Univ. of Tokyo
  * Akihiro Nakao,  nakao@iii.u-tokyo.ac.jp
  * Shu Yamamoto,  shu@iii.u-tokyo.ac.jp
 * Toshiaki Tarui,  toshiaki.tarui.my@hitachi.com, Hitachi

=== Wireless Projects ===

==== Vehicular Sensing and Control ====

This demo focuses on newly developed mechanisms for enhancing the performance of the Vehicular Sensing and Control (VSC) platform, as well as the latest achievements in VSC-platform-based applications. Specifically, this demo will show 1) virtualization of camera sensing and vehicle internal sensing, and 2) VSC application-layer emulation with camera sensing using the GENI WiMAX network and ExoGENI racks. Additionally, the extended communication capability of the VSC platform to both local and remote users will be demonstrated via a WiRover box when the vehicle moves out of the coverage of the GENI WiMAX network. Developers and researchers interested in vehicular sensing and control networks, resource virtualization, and real-time communication should stop by this demo, since we would like to share our experiences and learn from your valuable suggestions.

Participants:

 * yuehua.research@gmail.com

==== Integrating GENI/Wireless with Emerging Connected Vehicle and Intelligent Transportation Systems ====

This demo presents an OpenFlow-based handover and mobility solution for connected vehicles. A handover can occur on a device in a vehicle when it changes network interfaces or when an interface attaches to a new point on the edge. Traditionally, device mobility has been made possible with various mobile IP solutions. Clemson University was tasked by GENI to create a testbed to support handover and mobility experiments with IPv4. Our solution is entirely OpenFlow-based and includes components onboard the client device and within the network edge and core. The demonstration at GEC21 shows how the various OpenFlow components interact and are used to allow a client device to switch interfaces without disrupting the application layer.
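
As a toy illustration of the client-side piece, the loop below watches per-interface signal quality and asks a controller to migrate the client's flows when a clearly better interface appears. The monitoring callback, hysteresis value, and the controller's REST endpoint are all invented placeholders; the actual Clemson components differ.

{{{
#!python
# Toy handover decision loop (illustration only). Interface metrics are fed in
# by some external monitor; the controller REST endpoint is hypothetical.
import json
import time
import urllib.request

CONTROLLER_URL = "http://controller.example.net:8080/handover"  # placeholder
HYSTERESIS = 10  # require a clearly better link before switching

def request_migration(client_mac, new_interface):
    """Ask the (hypothetical) OpenFlow controller to move this client's flows."""
    body = json.dumps({"client": client_mac, "interface": new_interface}).encode()
    req = urllib.request.Request(CONTROLLER_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def handover_loop(client_mac, read_signal_strengths, active="wimax"):
    """read_signal_strengths() -> dict mapping interface name to signal quality."""
    while True:
        signals = read_signal_strengths()
        best = max(signals, key=signals.get)
        if best != active and signals[best] > signals.get(active, 0) + HYSTERESIS:
            request_migration(client_mac, best)
            active = best
        time.sleep(1)
}}}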

Scenario 1: Vehicles run one application that requires a continuous network connection, e.g., video conferencing. However, while moving on the road, the vehicle may move out of the coverage of the currently connected network. Since other networks may exist, the vehicle would wish to switch transparently to another available network. In this demo, we show how the SDN-based scheme enables a smooth, transparent handoff to another network without disturbing the performance of the running application.

Scenario 2: Vehicles run two applications. The first uploads or downloads real-time or video-on-demand streamed data. The second is a safety-related application. Emerging applications such as collision avoidance require certain types of control messages to be sent periodically from or to the vehicle under strict network service quality requirements. While these applications are well suited to DSRC vehicle-to-infrastructure deployment areas, it is clear that DSRC networks are unable to scale (both geographically and in the number of participating nodes). Thus, in this demo, we will show how the two applications can transparently switch between different networks with the support of the SDN-based handoff solution, so that their requirements on network service quality and robustness can be satisfied.

Participants:

 * Kang Chen,  kangc@g.clemson.edu, Clemson Univ.
 * Jim Martin,  JMARTY@clemson.edu, Clemson Univ.
 * Kuang-Ching Wang,  kwang@clemson.edu, Clemson Univ.
 * Anjan Rayamajhi,  arayama@clemson.edu, Clemson Univ.
 * Jianwei Liu,  ljw725@gmail.com, Clemson Univ.
 * Ryan Izard,  rizard@g.clemson.edu, Clemson Univ.
 * Karthik Ramakrishnan,  ramakri@g.clemson.edu, Clemson Univ.


==== GENI Enabling an Ecological Science Community ====

The University of Wisconsin-Madison's GENI WiMAX installations have been extended to the Kemp Natural Resource Station in northern Wisconsin. We are GENI-enabling this research facility so that ecology students and field classes have research connectivity out in the forest and lake areas. This demo shows the current infrastructure, planned research sites, and video coverage at Kemp.

Participants:

 * Derek Meyer,  dmeyer@cs.wisc.edu, Wisconsin Wireless and NetworkinG Systems (WiNGS) Laboratory