Thin Client Performance Benchmarking based Resource Adaptation in Virtual Desktop Clouds
Popular applications such as email, image/video galleries, and file storage are increasingly being supported by cloud platforms in residential, academic, and industrial communities. The next frontier for these user communities will be to transition 'traditional desktops', which have dedicated hardware and software configurations, into 'virtual desktop clouds' (VDCs) that are accessible via thin clients.
This project aims to develop optimal resource allocation frameworks and performance benchmarking tools that can enable building and managing thin-client based virtual desktop clouds at Internet-scale. Virtual desktop cloud experiments under realistic user and system loads are being conducted by leveraging multiple kinds of GENI resources such as aggregates, measurement services and experimenter workflow tools.
Project outcomes will help minimize costly cloud resource over-provisioning and avoid thin-client protocol configuration guesswork, while delivering optimal user experience.
- University of Missouri-Columbia
- The Ohio State University
- Ohio Supercomputer Center
- PI: Prof. Prasad Calyam, firstname.lastname@example.org
- Sripriya Seetharam
- Dong Li
- Christopher Dopuch (REU student)
- David Welling (REU student)
- Sudharsan Rajagopalan
- Rohit Patali
- Aishwarya Venkataraman
- Alex Berryman
- Mukundan Sridharan
- Prof. Rajiv Ramnath
Investigation of resource management schemes for thin-client based virtual desktop clouds at Internet-scale
To allocate and manage VDC resources for Internet-scale desktop delivery, existing work focuses mainly on managing server-side resources based on utility functions of CPU and memory loads, and does not consider network health or thin-client user experience. Resource allocation without combined utility-directed information about system loads, network health, and thin-client user experience in VDC platforms inevitably results in costly guesswork and over-provisioning of resources.
To address this issue, we developed a utility-directed resource allocation model (U-RAM) that uses utility functions, derived from offline benchmarking of system, network, and human components, to dynamically (i.e., online) create and place virtual desktops (VDs) in resource pools at distributed data centers, while optimizing resource allocations along the timeliness and coding-efficiency quality dimensions.
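As a rough illustration of the utility-directed idea (this is a sketch, not the project's actual U-RAM implementation), an allocator can repeatedly give the next resource unit to whichever desktop pool currently has the highest marginal utility, where the utility curves stand in for the offline-benchmarked profiles. The pool names, demand values, and diminishing-returns curve below are all hypothetical:

```python
# Illustrative sketch of utility-directed allocation: greedily assign abstract
# resource units to the VD pool with the highest marginal utility, using
# (hypothetical) utility curves that model the saturation seen in benchmarking.
import heapq

def marginal_utility(pool_demand, units_allocated):
    # Diminishing-returns curve: each added unit helps less as a pool
    # approaches its benchmarked demand. Purely illustrative.
    return pool_demand / (1 + units_allocated) ** 2

def u_ram_allocate(pool_demands, capacity):
    """Greedily give each resource unit to the pool with the highest
    marginal utility until data-center capacity is exhausted."""
    alloc = {pool: 0 for pool in pool_demands}
    # Max-heap keyed on negated marginal utility.
    heap = [(-marginal_utility(d, 0), pool) for pool, d in pool_demands.items()]
    heapq.heapify(heap)
    for _ in range(capacity):
        _, pool = heapq.heappop(heap)
        alloc[pool] += 1
        heapq.heappush(
            heap, (-marginal_utility(pool_demands[pool], alloc[pool]), pool)
        )
    return alloc

# Hypothetical benchmarked demands for three desktop pools.
pools = {"office": 4.0, "multimedia": 9.0, "scientific": 16.0}
print(u_ram_allocate(pools, capacity=12))
```

The greedy marginal-utility rule naturally gives more resources to heavier pools while still serving lighter ones, instead of splitting capacity evenly or over-provisioning every pool.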
To assess the VDC scalability that can be achieved by U-RAM, we conducted experiments guided by realistic utility functions of desktop pools that were obtained from a real-world VDC testbed, i.e., VMLab. We compared the performance of U-RAM against several alternative resource allocation models:
- Fixed RAM (F-RAM): each VD is over-provisioned, which is common in today’s cloud platforms due to lack of system and network awareness
- Network-aware RAM (N-RAM): allocation is aware of the required network resources, but over-provisions system (CPU and memory) resources due to lack of system-awareness information
- System-aware RAM (S-RAM): the converse of N-RAM; allocation is aware of the required system resources, but over-provisions network resources due to lack of network-awareness information
- Greedy RAM (G-RAM): allocation is aware of both the system and network resource requirements, but is based purely on conservative rule-of-thumb information rather than on objective profiling as in the case of U-RAM
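The scalability gap between a fixed over-provisioning policy and a profiled one can be sketched with a simple capacity calculation: the binding (most scarce) resource determines how many VDs a data center can host. The per-VD resource figures below are illustrative placeholders, not measured values from the experiments:

```python
# Hypothetical sketch: count how many virtual desktops fit in one data center
# under F-RAM (fixed over-provisioned slices) vs. U-RAM (allocations sized from
# assumed offline benchmarking). All numbers are illustrative.

CAPACITY = {"cpu_cores": 64, "ram_gb": 256, "net_mbps": 1000}

# F-RAM: every VD gets a conservative, fixed over-provisioned slice.
F_RAM_PER_VD = {"cpu_cores": 2.0, "ram_gb": 8.0, "net_mbps": 50.0}

# U-RAM: each VD gets only what its desktop pool's profile actually needs.
U_RAM_PER_VD = {"cpu_cores": 0.5, "ram_gb": 2.0, "net_mbps": 10.0}

def max_vds(per_vd, capacity):
    # The binding constraint (smallest capacity/demand ratio) limits the count.
    return int(min(capacity[r] / per_vd[r] for r in capacity))

print("F-RAM fits:", max_vds(F_RAM_PER_VD, CAPACITY))  # → F-RAM fits: 20
print("U-RAM fits:", max_vds(U_RAM_PER_VD, CAPACITY))  # → U-RAM fits: 100
```

Note that under F-RAM the network is the binding constraint (1000 / 50 = 20 VDs) even though CPU and memory could host 32; profiling all three dimensions, as U-RAM does, avoids leaving capacity stranded on the non-binding resources.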
We set up our virtual desktop cloud experiment in the GENI facility using ProtoGENI system and network resources, the OnTimeMeasure measurement service, and the Gush experimenter-workflow tool. We successfully overcame several challenges in setting up a data center environment in ProtoGENI, with very helpful assistance from the ProtoGENI team. The challenges primarily related to installing our data center hypervisor image on the ProtoGENI hardware in a manner consistent with the procedures ProtoGENI supports for custom OS image installations. Steps to set up a virtual desktop data center and thin clients in the GENI infrastructure can be found in the paper: Experiences from Virtual Desktop Cloud Experiments in GENI
We gave a live demonstration of our virtual desktop cloud experiment in GENI at the GEC10 Networking Reception. We set up two data centers, one at OSU and one at Utah Emulab, and reserved several other nodes to act as thin-client VD connectors. On the reserved nodes, we installed the hypervisor, OnTimeMeasure, and VMware tools. In addition, we installed several measurement-automation scripts on the reserved nodes that leveraged the other installed software to control the load generation of thin-client VD connections and the corresponding performance measurements of network and host resources. OnTimeMeasure was instrumented with the Gush experiment XML files to control measurements within the experiment slice. We further rate-limited the data centers to 10 Mbps of network bandwidth using network emulators, in order to create a realistic data center networking environment for ~15 VD users. The Root Beacon of OnTimeMeasure was installed at the OSU data center, and several Node Beacons were installed at the thin-client VD nodes to collect end-to-end network path measurements. The OSU data center ran F-RAM scripts and the Utah Emulab data center ran U-RAM scripts. A web portal was developed to demonstrate live the increasing system and network loads at the experiment’s data centers through generation of thin-client connections belonging to different user desktop pools. We used a Matlab-based animation of a horse point cloud as the thin-client application and demonstrated that U-RAM provides improved performance and increased scalability in comparison to F-RAM.
We gave another live demonstration of our virtual desktop cloud experiment in GENI during our GEC15 Plenary Talk. We set up thin clients at different sites in the meso-scale backbone network and used an OpenFlow controller application to establish VD connections and re-route them when their performance was degraded by cross-traffic. The VMLab data center at The Ohio State University (Columbus, Ohio), running the VMware ESXi hypervisor to create VD pools, hosted popular applications (e.g., Excel, Internet Explorer, Media Player) as well as advanced scientific computing applications (e.g., Matlab, Moldflow), and was connected to the GENI OpenFlow network. A utility-directed provisioning and placement scheme intelligently managed the traffic flows by updating flow rules on the OpenFlow switches, and user QoE was measured and demonstrated with an HD wildlife video clip in the VD session.
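The controller logic described above can be sketched as a simple measure-and-re-route loop: stay on the current path while its measured QoE is acceptable, otherwise select the best-measured alternate path and push updated flow rules. This is a minimal sketch under assumptions; the path names, the MOS-like QoE score, the threshold value, and the rule-installation stub are all hypothetical stand-ins for the actual controller:

```python
# Minimal sketch of QoE-driven re-routing (all names and values hypothetical):
# when a VD connection's measured QoE drops below a threshold due to
# cross-traffic, pick the best-measured alternate path and push new flow rules.

QOE_THRESHOLD = 3.5  # e.g., a MOS-like score derived from thin-client measurements

def choose_path(qoe_by_path, current_path):
    """Stay on the current path while its QoE is acceptable; otherwise
    re-route to the path with the best measured QoE."""
    if qoe_by_path[current_path] >= QOE_THRESHOLD:
        return current_path
    return max(qoe_by_path, key=qoe_by_path.get)

def install_flow_rules(path):
    # Placeholder for controller-to-switch rule updates (in a real OpenFlow
    # controller this would send FLOW_MOD messages to specific switches).
    print(f"installing flow rules for path: {path}")

# Example: cross-traffic degrades the primary path, triggering a re-route.
measurements = {"primary": 2.1, "backup-a": 4.2, "backup-b": 3.8}
new_path = choose_path(measurements, current_path="primary")
if new_path != "primary":
    install_flow_rules(new_path)  # re-routes to "backup-a"
```

Keeping the re-route decision hysteresis-free, as here, is the simplest possible policy; a production controller would also need to damp oscillation between paths whose QoE scores hover near the threshold.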
- Prasad Calyam, Sudharsan Rajagopalan, Arunprasath Selvadhurai, Saravanan Mohan, Aishwarya Venkataraman, Alex Berryman, Rajiv Ramnath, "Leveraging OpenFlow for Resource Placement of Virtual Desktop Cloud Applications", IFIP/IEEE International Symposium on Integrated Network Management (IM), 2013.
- Prasad Calyam, Alex Berryman, Albert Lai, Matthew Honigford, "VMLab: Infrastructure to Support Desktop Virtualization Experiments for Research and Education", VMware Technical Journal (Invited Paper), 2012.
- Prasad Calyam, Aishwarya Venkataraman, Alex Berryman, Marcio Faerman, "Experiences from Virtual Desktop Cloud Experiments in GENI", First GENI Research and Educational Experiment Workshop (GREE), 2012.
- Prasad Calyam, Rohit Patali, Alex Berryman, Albert Lai, Rajiv Ramnath, "Utility-directed Resource Allocation in Virtual Desktop Clouds", Elsevier Computer Networks Journal (COMNET), 2011. Slides: pdf
- Prasad Calyam, Mukundan Sridharan, Ying Xiao, K. Zhu, A. Berryman, R. Patali, "Enabling Performance Intelligence for Application Adaptation in the Future Internet", Journal of Communications and Networks (JCN), 2011.
- Mukundan Sridharan, Prasad Calyam, Aishwarya Venkataraman, Alex Berryman, "Defragmentation of Resources in Virtual Desktop Clouds for Cost-Aware Utility-Optimal Allocation", IEEE Conference on Utility and Cloud Computing (UCC), 2011. Slides: ppt
- Alex Berryman, Prasad Calyam, Albert Lai, Matthew Honigford, "VDBench: A Benchmarking Toolkit for Thin-client based Virtual Desktop Environments", IEEE Conference on Cloud Computing Technology and Science (CloudCom), 2010. Slides: pdf
- Sudharsan Rajagopalan, MS, 2013, Thesis: 'Leveraging OpenFlow for Resource Placement of Virtual Desktop Cloud Applications'.
- Aishwarya Venkataraman, MS, 2012, Thesis: 'Defragmentation of Resources in Virtual Desktop Clouds for Cost-Aware Utility-Maximal Allocation'.
- Rohit Patali, MS, 2011, Thesis: 'Utility-Directed Resource Allocation in Virtual Desktop Clouds'.
This material is based upon work supported by the National Science Foundation under award number CNS-1050225, VMware, and Dell. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, VMware, or Dell.
- gec10-vdc-demo.png (374.7 kB): GEC10 VDC Experiment Demonstration Setup, added by email@example.com on 02/11/12 12:58:30.
- VDC-GENI-Expt-GEC13.pdf (1.0 MB): GREE12 Paper on Virtual Desktop Cloud Experiments in GENI, added by firstname.lastname@example.org on 03/25/12 22:14:30.
- gec15-vdc-demo.png (496.0 kB): GEC15 VDC Demo Setup, added by email@example.com on 07/31/13 17:41:01.
- vdc-defrag-ucc11.pdf (0.6 MB): IEEE UCC Paper on Defragmentation of VDCs, added by firstname.lastname@example.org on 07/31/13 17:47:49.
- fi-ontimemeasure-vdcloud_jcn11.pdf (1.7 MB): JCN paper on OnTimeMeasure and VDCloud, added by email@example.com on 07/31/13 17:50:14.
- vdcloud_comnet11.pdf (2.3 MB): COMNET U-RAM paper, added by firstname.lastname@example.org on 07/31/13 17:54:39.
- vmlab_vtj12.pdf (0.5 MB): VMware Journal paper on VMLab, added by email@example.com on 07/31/13 18:01:36.
- OpenFlow-VDC-LB-IM13.pdf (0.8 MB): OpenFlow VDCloud OnTimeMeasure I&M Paper, added by firstname.lastname@example.org on 07/31/13 18:04:41.