[[PageOutline]]

= Thin Client Performance Benchmarking based Resource Adaptation in Virtual Desktop Clouds =

Popular applications such as email, image/video galleries, and file storage are increasingly being supported by cloud platforms in residential, academic, and industry communities. The next frontier for these user communities will be to transition 'traditional desktops' that have dedicated hardware and software configurations into 'virtual desktop clouds' that are accessible via thin clients. This project aims to develop optimal resource allocation frameworks and performance benchmarking tools that can enable building and managing thin-client based virtual desktop clouds at Internet-scale. Virtual desktop cloud experiments under realistic user and system loads are being conducted by leveraging multiple kinds of GENI resources such as aggregates, measurement services, and experimenter workflow tools. Project outcomes will help minimize costly cloud resource over-provisioning and avoid thin-client protocol configuration guesswork, while delivering optimum user experience.

= Project Team =

== Institutions ==
* University of Missouri-Columbia
* The Ohio State University
* Ohio Supercomputer Center
* OARnet

== Team Members ==
* PI: Prof. Prasad Calyam, calyamp@missouri.edu
* Sripriya Seetharam
* Dong Li
* Christopher Dupoch (REU student)
* David Welling (REU student)
* Sudharsan Rajagopalan
* Rohit Patali
* Aishwarya Venkataraman
* Alex Berryman
* Mukundan Sridharan
* Prof. Rajiv Ramnath

= Experiments =

== Investigation of resource management schemes for thin-client based virtual desktop clouds at Internet-scale ==

To allocate and manage VDC resources for Internet-scale desktop delivery, existing works focus mainly on managing server-side resources based on utility functions of CPU and memory loads, and do not consider network health and thin-client user experience. Resource allocation without combined utility-directed information of system loads, network health, and thin-client user experience in VDC platforms inevitably results in costly guesswork and over-provisioning of resources. To address this issue, we developed a utility-directed resource allocation model (U-RAM) that uses utility functions of system, network, and human components obtained from offline benchmarking to dynamically (i.e., online) create and place virtual desktops (VDs) in resource pools at distributed data centers, while optimizing resource allocations along timeliness and coding efficiency quality dimensions.

To assess the VDC scalability that can be achieved by U-RAM, we conducted experiments guided by realistic utility functions of desktop pools that were obtained from a real-world VDC testbed, i.e., [http://vmlab.oar.net VMLab]. We compared the performance of U-RAM with the following resource allocation models (a simplified allocation sketch is shown after the list):
* Fixed RAM (F-RAM): each VD is over-provisioned, which is common in today's cloud platforms due to lack of system and network awareness
* Network-aware RAM (N-RAM): allocation is aware of the required network resources, but over-provisions system (RAM and CPU) resources due to lack of system awareness information
* System-aware RAM (S-RAM): allocation is the opposite of N-RAM; it is aware of the required system resources, but over-provisions network resources
* Greedy RAM (G-RAM): allocation is aware of both the system and network resource requirements, but is based purely on conservative rule-of-thumb information rather than on the objective profiling used in U-RAM.
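To make the utility-directed idea concrete, the sketch below is a minimal, hypothetical Python illustration (not the actual U-RAM implementation): each desktop pool is assigned an offline-benchmarked utility curve, and the allocator repeatedly grants the next unit of data center capacity to the pool with the highest marginal utility gain. The pool names, curve shapes, and capacity figure are illustrative assumptions only.

{{{#!python
# Hypothetical sketch in the spirit of U-RAM: utility curves (user QoE vs.
# allocated resource units) come from offline benchmarking, and allocation
# proceeds greedily by marginal utility. All names/parameters are illustrative.
import math

# Offline-benchmarked utility curves with diminishing returns, one per desktop pool.
UTILITY_CURVES = {
    "admin_pool":      lambda units: 1.0 - math.exp(-0.50 * units),
    "developer_pool":  lambda units: 1.0 - math.exp(-0.25 * units),
    "multimedia_pool": lambda units: 1.0 - math.exp(-0.15 * units),
}

def utility_directed_allocation(capacity_units: int) -> dict:
    """Grant each resource unit to the pool with the largest marginal utility gain."""
    allocation = {pool: 0 for pool in UTILITY_CURVES}
    for _ in range(capacity_units):
        gains = {
            pool: curve(allocation[pool] + 1) - curve(allocation[pool])
            for pool, curve in UTILITY_CURVES.items()
        }
        best_pool = max(gains, key=gains.get)
        allocation[best_pool] += 1
    return allocation

if __name__ == "__main__":
    # e.g., 30 units of data center capacity (bundles of CPU/RAM/bandwidth)
    print(utility_directed_allocation(30))
}}}

In contrast, an F-RAM-style allocator would simply give every pool its fixed, conservative maximum regardless of the measured utility gains.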
= Accomplishments =

We were able to set up our virtual desktop cloud experiment in the GENI facility using ProtoGENI system and network resources, the OnTimeMeasure measurement service, and the Gush experimenter-workflow tool. We successfully overcame several challenges in setting up a data center environment in ProtoGENI with very helpful assistance from the ProtoGENI team. The challenges primarily related to installing our data center hypervisor image on the ProtoGENI hardware in a manner that is consistent with the procedures ProtoGENI supports for custom OS image installations. Steps to set up a virtual desktop data center and thin-clients in the GENI infrastructure can be found in the paper: [http://groups.geni.net/geni/attachment/wiki/FirstGenCalyam/VDC-GENI-Expt-GEC13.pdf Experiences from Virtual Desktop Cloud Experiments in GENI].

We gave a live demonstration of our virtual desktop cloud experiment in GENI at the GEC10 Networking Reception. We set up two data centers, one at OSU and one at Utah Emulab, and reserved several other nodes to act as thin-client VD connectors. On the reserved nodes, we installed the hypervisor, OnTimeMeasure, and VMware tools. In addition, we installed several measurement automation scripts on the reserved nodes that leveraged the other installed software to control the load generation of thin-client VD connections and their corresponding performance measurements in the network and host resources. OnTimeMeasure was instrumented with the Gush experiment XML files to control measurements within the experiment slice. We further rate-limited the data center network bandwidth to 10 Mbps using network emulators in order to set up a realistic data center networking environment for ~15 VD users. The Root Beacon of OnTimeMeasure was installed at the OSU data center, and several Node Beacons of OnTimeMeasure were installed at the thin-client VD nodes to collect end-to-end network path measurements. The OSU data center ran F-RAM scripts and the Utah Emulab data center ran U-RAM scripts. A web portal was developed to demonstrate, live, the increasing system and network loads at the experiment's data centers as thin-client connections belonging to different user desktop pools were generated. We used a Matlab-based animation of a horse point cloud as the thin-client application and demonstrated that U-RAM provides "improved performance" and "increased scalability" in comparison to F-RAM.

[http://ontime.oar.net/demo/Demo-1.htm GEC10-VDC-Horse-Demo-1]

[http://ontime.oar.net/demo/Demo-2.htm GEC10-VDC-Horse-Demo-2]

[[Image(gec10-vdc-demo.png,50%)]]

We gave another live demonstration of our virtual desktop cloud experiment in GENI during our GEC15 Plenary Talk. We set up thin-clients at different sites in the meso-scale backbone network and used an OpenFlow controller application to establish VD connections and re-route them when their performance was affected by cross-traffic. The VMLab data center at The Ohio State University (Columbus, Ohio), running the VMware ESXi hypervisor to create VD pools, hosted popular applications (e.g., Excel, Internet Explorer, Media Player) as well as advanced scientific computing applications (e.g., Matlab, Moldflow), and was connected to the GENI OpenFlow network. A utility-directed provisioning and placement scheme was used to intelligently manage the traffic flows by updating flow rules on OpenFlow switches, and the user QoE was measured/demonstrated with an HD wildlife video clip in the VD session.

[[Image(gec15-vdc-demo.png,50%)]]
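The sketch below illustrates, in simplified and hypothetical form, the kind of re-routing decision the GEC15 demo relied on: per-path measurements are polled and, when cross-traffic degrades the path carrying a VD connection, a flow rule is pushed to move the connection to a healthier alternate path. The measurement query and the flow-rule push are stubbed out; the function names, thresholds, and topology below are illustrative assumptions, not the actual controller application.

{{{#!python
# Hypothetical sketch of measurement-driven re-routing of a VD connection.
# Stubs stand in for the measurement source (e.g., OnTimeMeasure results)
# and the OpenFlow FlowMod push; all names and values are illustrative.

LATENCY_THRESHOLD_MS = 100.0   # illustrative QoE threshold for a VD session

def get_path_latency_ms(path_id: str) -> float:
    """Stub for a per-path measurement query; returns dummy data."""
    dummy = {"primary": 150.0, "alternate": 40.0}
    return dummy[path_id]

def push_flow_rule(switch_id: str, vd_connection: dict, out_port: int) -> None:
    """Stub for an OpenFlow flow-rule update steering the VD connection to out_port."""
    print(f"FlowMod on {switch_id}: match {vd_connection} -> output port {out_port}")

def reroute_if_degraded(vd_connection: dict, paths: dict) -> str:
    """Keep the VD connection on its current path unless latency exceeds the threshold."""
    current = vd_connection["path"]
    if get_path_latency_ms(current) <= LATENCY_THRESHOLD_MS:
        return current
    # Pick the path with the lowest measured latency and re-route if it differs.
    best = min(paths, key=get_path_latency_ms)
    if best != current:
        push_flow_rule(paths[best]["switch"], vd_connection, paths[best]["port"])
        vd_connection["path"] = best
    return vd_connection["path"]

if __name__ == "__main__":
    vd_conn = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.20", "path": "primary"}
    topology = {
        "primary":   {"switch": "of-switch-1", "port": 1},
        "alternate": {"switch": "of-switch-1", "port": 2},
    }
    print("VD connection now on path:", reroute_if_degraded(vd_conn, topology))
}}}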
== Publications ==
* Prasad Calyam, Sudharsan Rajagopalan, Arunprasath Selvadhurai, Saravanan Mohan, Aishwarya Venkataraman, Alex Berryman, Rajiv Ramnath, "Leveraging OpenFlow for Resource Placement of Virtual Desktop Cloud Applications", IFIP/IEEE International Symposium on Integrated Network Management (IM), 2013.
* Prasad Calyam, Alex Berryman, Albert Lai, Matthew Honigford, "[http://groups.geni.net/geni/attachment/wiki/FirstGenCalyam/vmlab_vtj12.pdf VMLab: Infrastructure to Support Desktop Virtualization Experiments for Research and Education]", VMware Technical Journal (Invited Paper), 2012.
* Prasad Calyam, Aishwarya Venkataraman, Alex Berryman, Marcio Faerman, "[http://groups.geni.net/geni/attachment/wiki/FirstGenCalyam/VDC-GENI-Expt-GEC13.pdf Experiences from Virtual Desktop Cloud Experiments in GENI]", First GENI Research and Educational Experiment Workshop (GREE), 2012.
* Prasad Calyam, Rohit Patali, Alex Berryman, Albert Lai, Rajiv Ramnath, "[http://groups.geni.net/geni/attachment/wiki/FirstGenCalyam/vdcloud_comnet11.pdf Utility-directed Resource Allocation in Virtual Desktop Clouds]", Elsevier Computer Networks Journal (COMNET), 2011. Slides: [http://groups.geni.net/geni/attachment/wiki/Gec11Agenda/Lightning%201%20Prasad%20Calyam%20Resource%20Allocation%20in%20Virtual%20Desktop%20Clouds%20VMLab-GENI%20Experiment.pdf?format=raw pdf]
* Prasad Calyam, Mukundan Sridharan, Ying Xiao, K. Zhu, A. Berryman, R. Patali, "[http://groups.geni.net/geni/attachment/wiki/FirstGenCalyam/fi-ontimemeasure-vdcloud_jcn11.pdf Enabling Performance Intelligence for Application Adaptation in the Future Internet]", Journal of Communications and Networks (JCN), 2011.
* Mukundan Sridharan, Prasad Calyam, Aishwarya Venkataraman, Alex Berryman, "[http://groups.geni.net/geni/attachment/wiki/FirstGenCalyam/vdc-defrag-ucc11.pdf Defragmentation of Resources in Virtual Desktop Clouds for Cost-Aware Utility-Optimal Allocation]", IEEE Conference on Utility and Cloud Computing (UCC), 2011. Slides: [http://www.osc.edu/~pcalyam/vdc-defrag-slides-ucc11.pptx ppt]
* Alex Berryman, Prasad Calyam, Albert Lai, Matthew Honigford, "VDBench: A Benchmarking Toolkit for Thin-client based Virtual Desktop Environments", IEEE Conference on Cloud Computing Technology and Science (!CloudCom), 2010. Slides: [https://mail.uso.edu/owa/redir.aspx?C=e63be1095eca4f5a837885de240a41e7&URL=http%3a%2f%2fwww.merit.edu%2fevents%2farchive%2fspecialevents%2fdesktopvirtualization%2fpdf%2fBerryman_vdbench.pdf pdf]

== Posters ==
* [http://groups.geni.net/geni/attachment/wiki/FirstConsortiumPosters/PosterPatali.ppt GENI Doctoral Consortium Poster]

== Degrees ==
* Rohit Patali, MS, 2011, Thesis: "Utility-Directed Resource Allocation in Virtual Desktop Clouds".
* Aishwarya Venkataraman, MS, 2012, Thesis: "Defragmentation of Resources in Virtual Desktop Clouds for Cost-Aware Utility-Maximal Allocation".

= Acknowledgements =

This material is based upon work supported by the National Science Foundation under award number CNS-1050225, VMware, and Dell. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, VMware, or Dell.