
= Raven Quarterly Status Report for Q4 2009 =

Reporting period: October 1 - December 31, 2009

PI: John H. Hartman (University of Arizona)


== Major accomplishments ==

* Released the {{{raven}}} tool for one-step software installation
* Developed the {{{ravenbuilder}}} tool for one-step RPM creation
* Improved Owl, a service for monitoring slice health
* Deployed the Stork repository as a VM backed up to Amazon S3


== Description of work performed during last quarter ==

=== Activities and findings ===

We developed and released {{{raven}}}, a tool for one-step software installation. {{{Raven}}} greatly simplifies the process of deploying software packages on a slice: it creates a template directory tree that the user populates with the proper GENI key and software packages. {{{Raven}}} takes care of the rest, creating and signing the proper tpfiles and {{{tempest}}} files and uploading them to the Stork repository along with the packages. At that point Stork installs the packages on all nodes in the specified slice.
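
The one-step flow above (populate a template tree, generate signed metadata, upload) can be sketched as follows. Everything here is hypothetical: the directory names, the manifest format, and the hash-based "signature" are stand-ins for illustration, not {{{raven}}}'s actual layout or formats; the real tool signs its tpfiles with the user's GENI key.

```python
import hashlib
import json
import os
import tempfile

def make_template(root):
    # Hypothetical layout; the real raven tool defines its own tree.
    for sub in ("keys", "packages"):
        os.makedirs(os.path.join(root, sub))

def build_tpfile(root, key):
    """Stand-in for tpfile generation: record a digest for each package,
    then "sign" the manifest by hashing it together with the key."""
    pkg_dir = os.path.join(root, "packages")
    entries = []
    for name in sorted(os.listdir(pkg_dir)):
        with open(os.path.join(pkg_dir, name), "rb") as f:
            entries.append({"package": name,
                            "sha256": hashlib.sha256(f.read()).hexdigest()})
    body = json.dumps(entries, sort_keys=True)
    signature = hashlib.sha256((key + body).encode()).hexdigest()
    return {"entries": entries, "signature": signature}

# Create the template tree, drop in a (fake) package, build the manifest.
root = tempfile.mkdtemp()
make_template(root)
with open(os.path.join(root, "packages", "demo-1.0.rpm"), "wb") as f:
    f.write(b"demo payload")
tpfile = build_tpfile(root, "my-geni-key")
```

The point of the sketch is the division of labor: the user only populates the tree; manifest generation, signing, and (in the real tool) upload are automated.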
| 22 | |
| 23 | We prototyped {{{ravenbuilder}}}, a tool in the Raven suite that will automatically create RPM packages. This tool integrates with the {{{raven}}} tool so that the user |
| 24 | can populate the template directory tree with the files that constitute their package, and the combination of {{{ravenbuilder}}} and {{{raven}}} will create an RPM package |
| 25 | from those files and deploy the package in their slice. |
| 26 | |

We deployed the Raven tools on the GpENI testbed, allowing a single Raven experiment to span slices in both the !PlanetLab and GpENI testbeds.

We continued to develop Owl, a service for monitoring slice health. Owl consists of an extensible set of client-side plugins that collect information about software running in the slice. This information is sent to a centralized Owl server, which stores it in a database and makes it available via Web pages as well as in XML and JSON formats. We added a map feature to Owl that displays slices as pins on a Google map; the color of each pin conveys (user-configurable) information about the health of the slice. We demoed Owl at GEC6.

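The plugin model described above can be sketched in miniature. The decorator-based registry, plugin names, and report fields below are hypothetical illustrations, not Owl's actual API; a real client would send the resulting JSON to the central Owl server rather than just returning it.

```python
import json
import platform
import time

PLUGINS = {}

def plugin(name):
    # Hypothetical registration decorator: each collector self-registers
    # under a name, so new plugins can be added without touching the core.
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("platform")
def platform_info():
    return {"system": platform.system()}

@plugin("uptime")
def uptime_info():
    # A real plugin would read something like /proc/uptime.
    return {"seconds": 12345}

def collect(slice_name):
    # Build one report per node; the client would POST this JSON to the
    # Owl server, which stores it and serves it back via Web/XML/JSON.
    return json.dumps({
        "slice": slice_name,
        "timestamp": int(time.time()),
        "data": {name: fn() for name, fn in PLUGINS.items()},
    })
```

The extensibility claim in the text corresponds to the registry: adding a new health metric is just another decorated function, with no change to the collection or reporting path.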
We participated in a meeting (prior to GEC6) and on-line discussions on unifying the GID representations between the !PlanetLab and ProtoGENI clusters.

We continue to work on {{{iftd}}}, a data transfer daemon that will allow Stork clients to access files via a variety of transport protocols, such as HTTP, FTP, !BitTorrent, and !CoDeploy. Protocol handling and error handling are encapsulated in the {{{iftd}}} daemon, freeing the individual Raven tools from having to perform these functions. We anticipate deploying {{{iftd}}} in the next quarter; it will eventually replace the {{{arizona_transfer}}} module that the Raven tools currently use.

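A minimal sketch of the scheme-based dispatch and centralized error handling that {{{iftd}}} provides, assuming a hypothetical handler table (the real daemon speaks the actual protocols and has its own interface):

```python
from urllib.parse import urlparse

class TransferError(Exception):
    """Single error type surfaced to callers, whatever protocol failed."""

# Hypothetical handler table; real iftd handlers would implement the
# actual protocols (HTTP, FTP, BitTorrent, CoDeploy), not return stubs.
HANDLERS = {
    "http": lambda url: b"<bytes fetched over http>",
    "ftp": lambda url: b"<bytes fetched over ftp>",
}

def fetch(url):
    # Central dispatch: choose a handler by URL scheme and normalize its
    # failures, so callers never see protocol-specific exceptions.
    scheme = urlparse(url).scheme
    handler = HANDLERS.get(scheme)
    if handler is None:
        raise TransferError("no handler for scheme %r" % scheme)
    try:
        return handler(url)
    except TransferError:
        raise
    except Exception as exc:
        raise TransferError("transfer failed: %s" % exc)
```

This is the sense in which the daemon frees the Raven tools from protocol and error handling: a caller deals only with a URL and one exception type, and new transports are added by extending the handler table.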

=== Project participants ===

* John H. Hartman (University of Arizona)
* Scott Baker (SB Software)
* Jude Nelson (University of Arizona)

=== Publications (individual and organizational) ===

* None.

=== Outreach activities ===

* None.

=== Collaborations ===

We worked closely with the following Cluster B members:
* !PlanetLab
* GUSH
* GpENI

We are also working with the ProtoGENI cluster to port Raven to their infrastructure. Most of this effort focuses on unifying the GID representations.

=== Other Contributions ===

* None.