= Raven Quarterly Status Report for Q4 2009 =

Reporting period: Oct 1 - Dec 31, 2009

PI: John H. Hartman (University of Arizona)

== Major accomplishments ==

* Released {{{raven}}} tool for one-step software installation
* Developed {{{ravenbuilder}}} tool for one-step RPM creation
* Improved Owl, a service for monitoring slice health
* Deployed Stork repository as a VM backed up by Amazon S3

== Description of work performed during last quarter ==

=== Activities and findings ===

We developed and released {{{raven}}}, a tool for one-step software installation. {{{Raven}}} greatly simplifies the process of deploying software packages on a slice. {{{Raven}}} creates a template directory tree that the user populates with the proper GENI key and software packages. {{{Raven}}} takes care of the rest: it creates and signs the proper tpfiles and {{{tempest}}} files, and uploads these files to the Stork repository along with the packages. At that point, Stork installs the packages on all nodes in the specified slice.

We prototyped {{{ravenbuilder}}}, a tool in the Raven suite that will automatically create RPM packages. This tool integrates with the {{{raven}}} tool: the user populates the template directory tree with the files that constitute their package, and the combination of {{{ravenbuilder}}} and {{{raven}}} creates an RPM package from those files and deploys the package in their slice.

We deployed the Raven tools on the GpENI testbed, allowing a single Raven experiment to span slices in both the !PlanetLab and GpENI testbeds.

We continued to develop Owl, a service for monitoring slice health. Owl consists of an extensible set of client-side plugins that collect information about software running in the slice. This information is sent to a centralized Owl server that stores it in a database. The Owl server makes this information available via Web pages as well as in XML and JSON formats.
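The Owl client architecture described above can be sketched roughly as follows. This is an illustrative sketch only; the plugin and class names ({{{UptimePlugin}}}, {{{OwlClient}}}, {{{collect}}}) are assumptions for the example, not Owl's actual interfaces.

```python
import json

class UptimePlugin:
    """Example plugin reporting node uptime (values hard-coded for the sketch)."""
    name = "uptime"

    def collect(self):
        return {"seconds": 123456}

class PackagePlugin:
    """Example plugin reporting packages installed in the slice."""
    name = "packages"

    def collect(self):
        return {"installed": ["stork-client", "owl-client"]}

class OwlClient:
    """Gathers data from an extensible set of plugins into one report."""

    def __init__(self, slice_name, plugins):
        self.slice_name = slice_name
        self.plugins = plugins

    def build_report(self):
        # Each plugin contributes one named section of slice-health data.
        return {
            "slice": self.slice_name,
            "data": {p.name: p.collect() for p in self.plugins},
        }

    def to_json(self):
        # The Owl server stores reports and re-exposes them as Web pages,
        # XML, or JSON; here we show only the JSON encoding.
        return json.dumps(self.build_report(), sort_keys=True)

client = OwlClient("arizona_stork", [UptimePlugin(), PackagePlugin()])
print(client.to_json())
```

A new kind of measurement would be added by writing another plugin class with a {{{name}}} and a {{{collect}}} method, without changing the client itself.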
We added a map feature to Owl that displays slices as pins on a Google map; the color of each pin conveys (user-configurable) information about the health of the slice. We demoed Owl at GEC6.

We participated in a meeting (prior to GEC6) and in on-line discussions on unifying the GID representations between the !PlanetLab and ProtoGENI clusters.

We continue to work on {{{iftd}}}, a data transfer daemon that will allow Stork clients to access files via a variety of transport protocols, such as HTTP, FTP, !BitTorrent, and !CoDeploy. Protocol handling and error handling are encapsulated in the {{{iftd}}} daemon, freeing the individual Raven tools from having to perform these functions themselves. We anticipate deploying {{{iftd}}} in the next quarter; it will eventually replace the {{{arizona_transfer}}} module that the Raven tools currently use.

=== Project participants ===

* John H. Hartman (University of Arizona)
* Scott Baker (SB Software)
* Jude Nelson (University of Arizona)

=== Publications (individual and organizational) ===

* None.

=== Outreach activities ===

* None.

=== Collaborations ===

We worked closely with the following Cluster B members:

* !PlanetLab
* GUSH
* GpENI

We are also working with the ProtoGENI cluster to port Raven to their infrastructure. Most of this effort focuses on unifying the GID representations.

=== Other Contributions ===

* None.