Changes between Version 6 and Version 7 of FirstGenCalyam


Timestamp: 07/31/13 17:39:15
Author: Prasad Calyam

== Institutions ==
 * University of Missouri-Columbia
 * The Ohio State University
 * Ohio Supercomputer Center
     
 * PI: Prof. Prasad Calyam, calyamp@missouri.edu
 * Sripriya Seetharam
 * Dong Li
 * Christopher Dupoch (REU student)
 * David Welling (REU student)
 * Sudharsan Rajagopalan
 * Rohit Patali
 * Aishwarya Venkataraman
 * Alex Berryman
 * Mukundan Sridharan
 * Prof. Rajiv Ramnath

= Experiments =

To assess the VDC scalability that can be achieved by U-RAM, we conducted experiments guided by realistic utility functions of desktop pools that were obtained from a real-world VDC testbed, i.e., [http://vmlab.oar.net VMLab]. We compared the performance of U-RAM with different resource allocation models:
 * Fixed RAM (F-RAM): each VD is over-provisioned, which is common in today’s cloud platforms due to lack of system and network awareness
 * Network-aware RAM (N-RAM): allocation is aware of the required network resources, but over-provisions system (RAM and CPU) resources due to lack of system awareness information
 * System-aware RAM (S-RAM): allocation is the opposite of N-RAM; it is aware of the required system resources, but over-provisions network resources due to lack of network awareness information
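The trade-off among these allocation models can be illustrated with a small, self-contained sketch. All numbers, the 90% utility target, and the diminishing-returns utility functions below are illustrative assumptions, not the actual U-RAM algorithm or VMLab utility data; the point is only that utility-directed sizing packs more VDs onto a host than fixed over-provisioning.

```python
# Illustrative comparison of F-RAM vs. U-RAM packing on one host.
# The capacity, utility curves, and thresholds are hypothetical.

HOST_RAM_GB = 64.0        # assumed total RAM at one data center host

def f_ram_count(fixed_gb=4.0):
    """F-RAM: every VD gets the same over-provisioned allocation."""
    return int(HOST_RAM_GB // fixed_gb)

def u_ram_count(pools):
    """U-RAM-style sizing: give each VD the smallest allocation whose
    utility reaches 90% of that pool's maximum utility."""
    placed, used = 0, 0.0
    for utility in pools:                # one utility function per desktop pool
        target = 0.9 * utility(8.0)      # 90% of the pool's saturated utility
        alloc = 0.5                      # search in 0.5 GB increments
        while utility(alloc) < target:
            alloc += 0.5
        if used + alloc > HOST_RAM_GB:
            break                        # host capacity exhausted
        used += alloc
        placed += 1
    return placed

def make_utility(saturation_gb):
    """Diminishing-returns utility: light pools saturate early, heavy pools late."""
    return lambda gb: min(gb / saturation_gb, 1.0)

# 40 requested VDs drawn from light (1 GB), medium (2 GB), and heavy (4 GB) pools.
pools = [make_utility(s) for s in [1.0, 1.0, 2.0, 2.0, 4.0] * 8]

print("F-RAM VDs placed:", f_ram_count())      # fixed 4 GB per VD -> 16
print("U-RAM VDs placed:", u_ram_count(pools)) # utility-sized -> 33
```

Under these assumed utility curves, the utility-directed scheme places roughly twice as many VDs as the fixed 4 GB allocation, which mirrors the scalability comparison the experiments were designed to quantify.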
     
= Accomplishments =

We were able to set up our virtual desktop cloud experiment in the GENI facility using ProtoGENI system and network resources, the OnTimeMeasure measurement service, and the Gush experimenter-workflow tool. We successfully overcame several challenges in setting up a data center environment in ProtoGENI, with very helpful assistance from the ProtoGENI team. The challenges primarily related to installing our data center hypervisor image on the ProtoGENI hardware in a manner consistent with the procedures ProtoGENI supports for custom OS image installations. Steps to set up a virtual desktop data center and thin clients in the GENI infrastructure can be found in the paper: [http://groups.geni.net/geni/attachment/wiki/FirstGenCalyam/VDC-GENI-Expt-GEC13.pdf Experiences from Virtual Desktop Cloud Experiments in GENI]
We gave a live demonstration of our virtual desktop cloud experiment in GENI at the GEC10 Networking Reception. We set up two data centers, one at OSU and one at Utah Emulab, and reserved several other nodes to act as thin-client VD connectors. On the reserved nodes, we installed a hypervisor, OnTimeMeasure, and VMware tools. In addition, we installed several measurement automation scripts on the reserved nodes that leveraged the other installed software to control the load generation of thin-client VD connections and their corresponding performance measurements in the network and host resources. OnTimeMeasure was instrumented with the Gush experiment XML files to control measurements within the experiment slice. We further set rate limits of 10 Mbps network bandwidth at the data centers using network emulators in order to set up a realistic data center networking environment for ~15 VD users. The Root Beacon of OnTimeMeasure was installed at the OSU data center, and several Node Beacons of OnTimeMeasure were installed at the thin-client VD nodes to collect end-to-end network path measurements. The OSU data center ran F-RAM scripts and the Utah Emulab data center ran U-RAM scripts. A web portal was developed to demonstrate live the increasing system and network loads at the experiment’s data centers through generation of thin-client connections belonging to different user desktop pools. We used a Matlab-based animation of a horse point cloud as the thin-client application and demonstrated that U-RAM provides improved performance and increased scalability in comparison to F-RAM.
     
[[Image(gec10-vdc-demo.png,50%)]]

We gave another live demonstration of our virtual desktop cloud experiment in GENI during our GEC15 Plenary Talk. We set up thin clients at different sites in the meso-scale backbone network and used an OpenFlow controller application to establish VD connections and re-route them when their performance was affected by cross-traffic. We had the VMLab data center at The Ohio State University (location: Columbus, Ohio), running the VMware ESXi hypervisor for creating VD pools, hosting popular applications (e.g., Excel, Internet Explorer, Media Player) as well as advanced scientific computing applications (e.g., Matlab, Moldflow), connected to the GENI OpenFlow network. A utility-directed provisioning and placement scheme was used to intelligently manage the traffic flows by updating flow rules on OpenFlow switches, and the user QoE was measured and demonstrated with an HD wildlife video clip in the VD session.
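The re-routing behavior described above can be approximated by a short sketch. The path names, latency budget, and measurement values below are illustrative assumptions, not the actual controller application or its OpenFlow flow-rule updates; the sketch only shows the decision logic of keeping a VD connection on its current path while measurements meet a quality budget, and moving it to the best alternative when cross-traffic degrades it.

```python
# Hypothetical re-route decision for one VD connection; in the demo the
# equivalent decision drove flow-rule updates on OpenFlow switches.

LATENCY_BUDGET_MS = 50.0   # assumed QoE threshold for a VD session

def choose_path(measurements):
    """Pick the path with the lowest measured latency (ms)."""
    return min(measurements, key=measurements.get)

def reroute_if_degraded(current, measurements):
    """Keep the current path while it meets the latency budget;
    otherwise switch to the best-measured alternative."""
    if measurements[current] <= LATENCY_BUDGET_MS:
        return current
    return choose_path(measurements)

paths = {"backbone-A": 32.0, "backbone-B": 41.0}
current = "backbone-A"                     # initial placement meets the budget
paths["backbone-A"] = 85.0                 # cross-traffic degrades path A
current = reroute_if_degraded(current, paths)
print(current)                             # backbone-B
```

Keeping the connection pinned until the budget is violated avoids flapping between paths on small measurement differences, which matters when each re-route costs a flow-rule update.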

[[Image(gec15-vdc-demo.png,50%)]]