Changes between Initial Version and Version 1 of DICLOUD-1Q10-status


Timestamp: 05/27/10 09:57:00
Author: jtaylor@bbn.com
[[PageOutline]]

= !DiCloud Project Status Report =

Period: 1Q10

== I. Major accomplishments ==
The first quarter of the !DiCloud project includes the following major accomplishments:
  * Development of Orca handlers and brokers for Amazon EC2 resources.
  * A successful intra-cluster demonstration at GEC7 that showcased ViSE sensor data streaming from a web server running on Amazon EC2 over a dynamically stitched VLAN through NLR and BEN to servers at RENCI using the Orca control framework. The demonstration was presented at a plenary session of GEC7. To complete this demonstration, we worked with the GPO in Boston, Joe Mambretti at the Starlight facility, Ilia Baldine at RENCI, and Jeff Chase at Duke University.
  * Continued our collaboration with the University of Puerto Rico, Mayagüez (UPRM), as part of our outreach plan. In mid-January Michael Zink and David Irwin organized and led a 3-day seminar at UPRM on GENI, virtualization, cloud computing, and sensor networks.
  * Testing and prototyping of various options to implement Orca handlers for EC2 resources.
The rest of this document describes in detail the major accomplishments above.

=== A. Milestones achieved ===
We achieved the following milestones in the 2nd quarter as specified in our original Statement-of-Work.
  * '''January 29, 2010.''' Develop 3 Orca handlers to allocate and revoke resources from Amazon’s Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Elastic Block Store (EBS) cloud services. Explore the feasibility of integrating third-party handlers from either Eucalyptus or !OpenNebula into GENI/Orca. Note that the handlers interact with Amazon’s API to perform allocation/authorization functions, but do not expose the Cloud API to slice controllers.
We have developed three Orca handlers for Amazon’s Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Elastic Block Store (EBS) cloud services. The handlers directly invoke the native EC2 command-line tools (s3cmd in the case of S3), as sketched below. While Eucalyptus and !OpenNebula offer interesting alternatives to the native EC2 APIs, they do not provide the monitoring capabilities that the future proxy requires to estimate the cost of resource usage.
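The actual handler code is not reproduced here; the following minimal sketch only illustrates the invocation pattern described above, assuming the EC2 API tools and s3cmd are installed and configured. The AMI ID, keypair, instance type, bucket name, and function names are hypothetical.
{{{
#!python
# Minimal sketch of the invocation pattern the EC2/S3 handlers wrap.
# The AMI ID, keypair, instance type, and bucket name are hypothetical, and
# exact flags depend on the installed EC2 API tools / s3cmd versions.
import subprocess

def ec2_join(ami="ami-00000000", keypair="dicloud-key", instance_type="m1.small"):
    """Allocate one EC2 instance by shelling out to the native EC2 tools."""
    subprocess.check_call(["ec2-run-instances", ami,
                           "-k", keypair, "-t", instance_type, "-n", "1"])

def ec2_leave(instance_id):
    """Revoke (terminate) a previously allocated EC2 instance."""
    subprocess.check_call(["ec2-terminate-instances", instance_id])

def s3_archive(local_path, bucket="dicloud-demo"):
    """Upload a local file into an S3 bucket using s3cmd."""
    subprocess.check_call(["s3cmd", "put", local_path, "s3://%s/" % bucket])
}}}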
  * '''February 10, 2010.''' Develop a first-come first-served (FCFS) clearinghouse (broker) policy that tracks the amount of resource time incurred by each cloud user. Note that this policy does not track fine-grained usage costs, such as the number of I/Os (for EBS) or the aggregate network traffic (for EC2/S3). The proxy will serve this function.
We have re-used existing Orca brokers, which account for node resources, since the current handlers cannot report resource usage. We have explained how to translate a given budget into EC2 resources that can be managed by the existing brokers in the Orca implementation; a sketch of this translation appears below.
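As a back-of-the-envelope illustration of this budget translation (the hourly rate below is an assumed figure, not actual Amazon pricing), a fixed dollar budget can be converted into instance-hours that a node-counting broker hands out first-come first-served:
{{{
#!python
# Illustrative only: translate a dollar budget into broker-manageable units.
# The hourly rate below is an assumed figure, not actual Amazon pricing.
SMALL_INSTANCE_HOURLY_RATE = 0.085  # assumed $/hour for one EC2 small instance

def budget_to_instance_hours(budget_dollars, rate=SMALL_INSTANCE_HOURLY_RATE):
    """Return the number of whole instance-hours a budget can buy."""
    return int(budget_dollars / rate)

# Example: a $100 budget becomes 1176 instance-hours, which an existing
# Orca broker can then allocate first-come first-served as "node" units.
if __name__ == "__main__":
    print(budget_to_instance_hours(100.0))
}}}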
  * '''March 16, 2010.''' Demo 1: Archiving of sensor data on a storage service. We will demonstrate an early version of our handlers and our simple FCFS Clearinghouse policy, which will allow a user to lease servers from EC2 and storage volumes from EBS (as an aggregate). We will highlight the addition of storage by uploading sensor data (from CASA’s Oklahoma testbed) to EBS through an EC2 server. The functions will be exposed by extending ViSE’s web portal.

We performed a demo at GEC7 in which, after the larger network had been stitched together, RENCI dynamically stood up a web server that queried archived radar data from a web server running on an EC2 node at Amazon. ViSE bridged the EC2 node onto the VLAN using OpenVPN through visetestbed.cs.umass.edu (actually its private VLAN IP); this bridging step is sketched below. The radar data was displayed as a Google Maps animation after Orca finished stitching the links from BEN, NLR, and Starlight to UMass.
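The demo's actual OpenVPN configuration is not reproduced here; the hypothetical sketch below only illustrates the general idea of starting an OpenVPN client in bridged (tap) mode on the EC2 node so that it joins the VLAN through visetestbed.cs.umass.edu. The certificate paths and port number are assumptions.
{{{
#!python
# Hypothetical sketch: start an OpenVPN client in bridged (tap) mode on the
# EC2 node so it appears on the VLAN behind visetestbed.cs.umass.edu.
# Certificate/key paths and the port number are assumptions for illustration.
import subprocess

subprocess.check_call([
    "openvpn",
    "--client",                        # act as an OpenVPN client
    "--dev", "tap0",                   # tap device => layer-2 bridging onto the VLAN
    "--proto", "udp",
    "--remote", "visetestbed.cs.umass.edu", "1194",
    "--ca", "/etc/openvpn/ca.crt",
    "--cert", "/etc/openvpn/ec2-node.crt",
    "--key", "/etc/openvpn/ec2-node.key",
    "--daemon",                        # run in the background
])
}}}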

   '''Milestones in Progress'''
  * '''May 15, 2010.''' (deliverable S2f) Report on the feasibility of using Amazon’s new !CloudWatch service to monitor fine-grained usage costs. Amazon’s !CloudWatch was not available at the time of the proposal; in the initial proposal we planned to not give users root access on servers, and, instead, proposed a per-server root monitoring daemon. !CloudWatch may remove this need.
We have been looking at multiple options to monitor EC2 resource usage and its cost. While a proxy can easily determine the running time of VMs, estimating network and disk usage accurately is much harder. Amazon !CloudWatch appears to be a good solution for real-time monitoring, and the cost can be adjusted at the end of a lease by extracting the actual cost from Amazon's billing service. We are testing different proxy strategies to obtain the most accurate resource utilization information; one monitoring approach under evaluation is sketched below.
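As one illustration of this kind of monitoring (not the project's actual proxy code), the sketch below uses the boto library to pull an instance's outbound network traffic from !CloudWatch; the instance ID and the per-GB rate are assumptions.
{{{
#!python
# Sketch of one monitoring strategy: query CloudWatch (via the boto library)
# for an instance's outbound network traffic over the past lease period.
# The instance ID and the per-GB rate are illustrative assumptions.
import datetime
import boto

INSTANCE_ID = "i-00000000"          # hypothetical instance
ASSUMED_RATE_PER_GB = 0.15          # assumed $/GB for outbound traffic

conn = boto.connect_cloudwatch()    # AWS credentials come from the environment
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=24)

datapoints = conn.get_metric_statistics(
    period=3600,
    start_time=start,
    end_time=end,
    metric_name="NetworkOut",       # bytes sent by the instance
    namespace="AWS/EC2",
    statistics=["Sum"],
    dimensions={"InstanceId": INSTANCE_ID},
)

total_bytes = sum(dp["Sum"] for dp in datapoints)
print("Estimated network cost: $%.2f" % (total_bytes / 1e9 * ASSUMED_RATE_PER_GB))
}}}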
=== B. Deliverables made ===
Deliverable S2.c was produced on January 28, 2010, along with the Orca handler code for Amazon EC2 cloud resources. Deliverable S2.d was provided on February 11, 2010 and explains how EC2 resources can be used with existing brokers. Deliverable S2.e was demonstrated at GEC7.

== II. Description of work performed during last quarter ==

=== A. Activities and findings ===
The primary work during the quarter has been the implementation of the Orca handlers and of the demonstration code for GEC7. We also prepared the materials for the GENI seminar we conducted in Puerto Rico in mid-January.
=== B. Project participants ===
The primary PI is Michael Zink. Co-PIs are Prashant Shenoy and Jim Kurose. Research staff are David Irwin and Emmanuel Cecchet.

=== C. Publications (individual and organizational) ===
Bo An, Victor Lesser, David Irwin, and Michael Zink, ''Automated Negotiation with Decommitment for Dynamic Resource Allocation in Cloud Computing'', in Proceedings of the Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Toronto, Canada, May 2010.

=== D. Outreach activities ===
As part of our involvement in the GENI project, we scheduled a seminar at the University of Puerto Rico, Mayagüez in early January. The primary purpose of the seminar was to teach students at UPRM about emerging technologies in virtualization, cloud computing, wireless communication, networking, and sensing that make it possible to multiplex experimental testbeds, such as those being incorporated into the GENI prototype. Over the workshop’s two days, Michael Zink and David Irwin gave lectures and tutorials on virtualization, the GENI project, wireless communication, research efforts at UMass-Amherst, and cloud computing.

=== E. Collaborations ===
We collaborated significantly with other Cluster D projects during the quarter. First, we provided documentation on the Cluster D mailing list about how to port controllers to the new Bella release. Second, we worked with the GPO, RENCI, Duke, and Northwestern to arrange the VLAN control for the GEC7 demonstration.
=== F. Other Contributions ===