= Raven Quarterly Status Report for Q1 2010 =

Reporting period: Jan 1 - Mar 31, 2010

PI: John H. Hartman (University of Arizona)

== Major accomplishments ==

* Added support for unified GID representation to Raven tools
* Released {{{ravenbuilder}}} tool for one-step RPM creation
* Improved Owl, a service for monitoring slice health
* Added support for multiple slice and node names to Raven tools
* Added support for automatic installation of SFA via Stork on Cluster-B GENI nodes
* Deployed a new version of the Stork nest proxy that makes use of the IFTD data transfer service


== Description of work performed during last quarter ==

=== Activities and findings ===

We added support for the unified GID representation to the Raven tools. The tools now support both the HRN representation supported by the !PlanetLab cluster and the URN representation supported by the !ProtoGENI cluster.
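
The mapping between the two representations follows a simple structural convention. A minimal sketch in Python, assuming the common SFA layout in which all but the last dotted HRN component name the authority; the real conversion rules live in the SFA library, not in this hypothetical helper:

{{{
#!python
def hrn_to_urn(hrn, obj_type):
    """Convert a dotted HRN (e.g. 'plc.arizona.raven') to a GENI URN.

    Assumes the usual SFA convention: all but the last HRN component
    form the authority (joined by ':'), and the last component is the
    object name.
    """
    parts = hrn.split(".")
    authority = ":".join(parts[:-1])
    name = parts[-1]
    return "urn:publicid:IDN+%s+%s+%s" % (authority, obj_type, name)

print(hrn_to_urn("plc.arizona.raven", "slice"))
# urn:publicid:IDN+plc:arizona+slice+raven
}}}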

We developed and released {{{ravenbuilder}}}, a tool in the Raven suite that automatically creates RPM packages. It integrates with the {{{raven}}} tool: the user populates the experiment build directory tree with the files that make up their package, and together {{{ravenbuilder}}} and {{{raven}}} build an RPM package from those files and deploy it in the user's slice.
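
The kind of packaging work {{{ravenbuilder}}} automates can be sketched as generating an RPM spec file from a build tree. The function below is a hypothetical illustration of that step, not {{{ravenbuilder}}}'s actual interface or output:

{{{
#!python
import os
import tempfile

def make_spec(name, version, build_root):
    """Generate a minimal RPM .spec whose %files section lists every
    file under build_root, mapped to its path relative to the root."""
    files = []
    for dirpath, _, filenames in os.walk(build_root):
        for fn in filenames:
            full = os.path.join(dirpath, fn)
            files.append("/" + os.path.relpath(full, build_root))
    lines = [
        "Name: %s" % name,
        "Version: %s" % version,
        "Release: 1",
        "Summary: Package built from an experiment build tree",
        "License: None",
        "%description",
        "Automatically generated package.",
        "%files",
    ] + sorted(files)
    return "\n".join(lines)

# Populate a throwaway build tree and generate a spec for it.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "usr/bin"), exist_ok=True)
open(os.path.join(root, "usr/bin/hello"), "w").close()
print(make_spec("myexperiment", "1.0", root))
}}}

The generated spec would then be handed to {{{rpmbuild}}}; the package and file names above are invented for the example.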

We continued to develop Owl, a service for monitoring slice health. The beta version of Owl now supports non-string data types, which allows non-lexical sorting, and has improved error handling in both the client and the server. The server now has a built-in data collection module that stores the date at which the Owl data for a slice was collected, and the user interface lets the user select a range of dates, so that, for example, only data newer than a specified date are shown.
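
The date-range query and the benefit of typed (non-string) fields can be sketched as follows; the record fields here are invented for illustration and are not Owl's actual schema:

{{{
#!python
from datetime import datetime

# Typed fields: 'collected' is a datetime and 'load' a float, so sorting
# is chronological/numeric rather than lexical string comparison.
records = [
    {"slice": "arizona_raven", "collected": datetime(2010, 2, 10), "load": 0.7},
    {"slice": "arizona_raven", "collected": datetime(2010, 3, 25), "load": 1.2},
]

def newer_than(records, cutoff):
    """Return records collected after cutoff, newest first."""
    return sorted(
        (r for r in records if r["collected"] > cutoff),
        key=lambda r: r["collected"],
        reverse=True,
    )

for r in newer_than(records, datetime(2010, 3, 1)):
    print(r["slice"], r["collected"].date(), r["load"])
}}}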


We released a new version of Stork with improved XML parsing performance and reduced memory consumption.

We extended Stork and Tempest to allow a sliver to be named by multiple slice and node names. For example, a !PlanetLab sliver can be known by both its SFA name and its !PlanetLab name. This lets Raven serve users who are transitioning from !PlanetLab to GENI.
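
One way to model this is a registry that maps any of a sliver's names to a single underlying record; the class and names below are illustrative, not Stork or Tempest code:

{{{
#!python
class SliverRegistry:
    """Map each of a sliver's names (SFA, !PlanetLab, ...) to one record."""

    def __init__(self):
        self._by_name = {}

    def register(self, record, *names):
        # All names become aliases for the same record object.
        for n in names:
            self._by_name[n] = record

    def lookup(self, name):
        return self._by_name.get(name)

reg = SliverRegistry()
sliver = {"id": 42}
reg.register(sliver, "plc.arizona.raven", "arizona_raven")

# Both names resolve to the same sliver record.
assert reg.lookup("plc.arizona.raven") is reg.lookup("arizona_raven")
}}}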

We modified Stork and Tempest to use SFA GIDs to identify slices and nodes. To decode these GIDs, the SFA library must be installed in each sliver. Tempest detects when it is running in a GENI sliver and automatically adds the sliver to a GENI group, which in turn causes Stork to install the SFA library and its dependencies.
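
The group-to-package chain can be sketched as below; the group names and package lists are assumptions made for the example, not Stork's actual configuration:

{{{
#!python
def groups_for_sliver(slice_name, is_geni):
    """Decide which groups a sliver joins; detecting a GENI sliver
    adds the 'geni' group."""
    groups = ["all"]
    if is_geni:
        groups.append("geni")
    return groups

# Hypothetical group -> package mapping; membership in 'geni' pulls in
# the SFA library and its dependencies.
GROUP_PACKAGES = {
    "all": ["stork-client"],
    "geni": ["sfa", "pyopenssl", "m2crypto"],
}

def packages_to_install(groups):
    pkgs = []
    for g in groups:
        pkgs.extend(GROUP_PACKAGES.get(g, []))
    return pkgs

print(packages_to_install(groups_for_sliver("plc.arizona.raven", is_geni=True)))
}}}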

We continue to work on {{{iftd}}}, a data transfer daemon that allows Stork clients to access files via a variety of transport protocols, such as HTTP, FTP, !BitTorrent, and !CoDeploy. Protocol handling and error handling are encapsulated in the {{{iftd}}} daemon, freeing individual Raven tools from having to perform these functions.
We deployed {{{iftd}}} in the Stork nest proxy slice, so all Stork client slices running on a node download their metadata and packages through the nest proxy slice and therefore through {{{iftd}}}.
We have also deployed {{{iftd}}} in several beta slices and will deploy it across all slices that use Stork in the next quarter.
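
The encapsulation idea amounts to dispatching each request to a protocol handler by URL scheme, with errors surfaced uniformly; a minimal sketch of that pattern, not {{{iftd}}}'s actual code:

{{{
#!python
from urllib.parse import urlparse

class TransferError(Exception):
    """Single error type callers see, regardless of protocol."""

HANDLERS = {}

def handler(scheme):
    """Register a fetch function for one URL scheme."""
    def register(fn):
        HANDLERS[scheme] = fn
        return fn
    return register

@handler("http")
def fetch_http(url):
    return "fetched %s over HTTP" % url

@handler("ftp")
def fetch_ftp(url):
    return "fetched %s over FTP" % url

def fetch(url):
    """Dispatch to the handler for the URL's scheme; callers need no
    per-protocol logic."""
    fn = HANDLERS.get(urlparse(url).scheme)
    if fn is None:
        raise TransferError("no handler for %r" % url)
    return fn(url)

print(fetch("http://example.org/pkg.rpm"))
}}}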

=== Project participants ===

* John H. Hartman (University of Arizona)
* Scott Baker (SB Software)
* Jude Nelson (University of Arizona)

=== Publications (individual and organizational) ===

* None.

=== Outreach activities ===

* None.

=== Collaborations ===
We worked closely with the following Cluster-B members:
* !PlanetLab
* GUSH
* GpENI

We are also working with the !ProtoGENI cluster to port Raven to their infrastructure.
=== Other Contributions ===

* None.