
Raven Quarterly Status Report for Q1 2010

Reporting period: Jan 1 - Mar 31 2010

PI: John H. Hartman (University of Arizona)

Major accomplishments

  • Added support for unified GID representation to Raven tools
  • Released ravenbuilder tool for one-step RPM creation
  • Improved Owl, a service for monitoring slice health
  • Added support for multiple slice and node names to Raven tools
  • Added support for automatic installation of SFA via Stork on Cluster B GENI nodes
  • Deployed new version of the Stork nest proxy that makes use of the IFTD data transfer service

Description of work performed during last quarter

Activities and findings

We added support for the unified GID representation to the Raven tools. The tools now support both the HRN representation supported by the PlanetLab cluster and the URN representation supported by the ProtoGENI cluster.
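
As a rough illustration of what the unified representation has to reconcile, the sketch below normalizes an SFA-style HRN and a GENI-style URN to a common (authority, type, name) tuple. The field layouts and example names are assumptions for illustration; the Raven tools rely on the SFA library's own GID handling rather than this code.

    # Illustrative only: reduce an HRN or URN to a common identity tuple.
    def parse_name(name, record_type="slice"):
        if name.startswith("urn:publicid:IDN+"):
            # URN form, e.g. urn:publicid:IDN+plc:arizona+slice+raven (assumed example)
            _, rest = name.split("IDN+", 1)
            authority, rtype, short = rest.split("+", 2)
            return (authority.replace(":", "."), rtype, short)
        # HRN form, e.g. plc.arizona.raven (assumed example)
        parts = name.split(".")
        return (".".join(parts[:-1]), record_type, parts[-1])

    print(parse_name("urn:publicid:IDN+plc:arizona+slice+raven"))
    print(parse_name("plc.arizona.raven"))

Both calls yield the same ("plc.arizona", "slice", "raven") tuple, which is the property the unified representation provides to the rest of the tools.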

We developed and released ravenbuilder, a tool in the Raven suite that automatically creates RPM packages. It integrates with the raven tool: the user populates the experiment build directory tree with the files that constitute their package, and ravenbuilder and raven together create an RPM package from those files and deploy it in the user's slice.
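
The sketch below is a hypothetical illustration of the one-step idea: walk a populated build tree and emit an RPM spec listing its files. The directory layout, package metadata, and helper name are assumptions, not the actual ravenbuilder implementation.

    import os

    SPEC_TEMPLATE = """\
    Name: {name}
    Version: {version}
    Release: 1
    Summary: Experiment package built from a Raven build tree
    License: BSD
    BuildArch: noarch

    %description
    Files collected from the experiment build directory.

    %files
    {files}
    """

    def make_spec(build_dir, name, version="1.0"):
        # Collect every file under the build tree as an installed path.
        files = []
        for root, _, names in os.walk(build_dir):
            for fname in names:
                path = os.path.join(root, fname)
                files.append("/" + os.path.relpath(path, build_dir))
        return SPEC_TEMPLATE.format(name=name, version=version,
                                    files="\n".join(sorted(files)))

    # The generated spec would then be passed to rpmbuild, and the resulting
    # RPM handed to raven for deployment into the slice.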

We continued to develop Owl, a service for monitoring slice health. The beta version of Owl now supports non-string data types, which allow non-lexical sorting, and has improved error handling in both the client and server. The server now has a built-in data collection module that stores the date at which the Owl data for a slice was collected, and the user interface lets the user select a date range so that only data newer than a specified date are displayed.
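
The sketch below illustrates the kind of typed, date-filtered query this enables; the field names, record layout, and dates are illustrative assumptions rather than Owl's actual schema.

    from datetime import datetime, timedelta

    records = [
        {"node": "planetlab1.example.edu", "load": 0.42,
         "collected": datetime(2010, 3, 28)},
        {"node": "planetlab2.example.edu", "load": 1.70,
         "collected": datetime(2010, 2, 10)},
    ]

    # Non-string types allow numeric (non-lexical) sorting ...
    by_load = sorted(records, key=lambda r: r["load"])

    # ... and the stored collection date lets the user keep only recent data.
    cutoff = datetime(2010, 3, 31) - timedelta(days=7)
    recent = [r for r in records if r["collected"] >= cutoff]

    print([r["node"] for r in by_load])
    print([r["node"] for r in recent])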

We released a new version of Stork with improved XML parsing performance and reduced memory consumption.
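
As general background on how XML parsing memory can be reduced, the sketch below streams a document with iterparse instead of building the full tree; it is illustrative only and is not necessarily the change made in Stork.

    import xml.etree.ElementTree as ET
    from io import BytesIO

    # A tiny stand-in for a Stork metadata document (contents assumed).
    doc = BytesIO(b"<packages>" +
                  b"".join(b'<package name="pkg%d"/>' % i for i in range(3)) +
                  b"</packages>")

    for event, elem in ET.iterparse(doc, events=("end",)):
        if elem.tag == "package":
            print(elem.get("name"))
            elem.clear()  # release the element once processed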

We extended Stork and Tempest to allow a sliver to be named by multiple slice and node names. For example, a PlanetLab sliver can be known by both its SFA name and its PlanetLab name. This allows Raven to accommodate users who are transitioning from PlanetLab to GENI.
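
A minimal sketch of the aliasing idea follows, with hypothetical names and a made-up canonical sliver identifier; these are not Raven's actual data structures.

    # Several user-visible names resolve to the same canonical sliver.
    ALIASES = {
        "plc.arizona.raven": "sliver-0001",   # SFA name (assumed example)
        "arizona_raven": "sliver-0001",       # PlanetLab name (assumed example)
    }

    def resolve(name):
        return ALIASES.get(name)

    assert resolve("plc.arizona.raven") == resolve("arizona_raven")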

We modified Stork and Tempest to use SFA GIDs to identify slices and nodes. To decode these GIDs, the SFA library must be installed in each sliver. Tempest detects when it is running in a GENI sliver and automatically adds the sliver to a GENI group, which in turn causes Stork to install the SFA library and its dependencies.
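
The sketch below illustrates the group-based mechanism described above, assuming a hypothetical sliver-detection heuristic, group name, and package list; it is not the actual Tempest or Stork code.

    import os

    def in_geni_sliver():
        # Assumed heuristic: a GENI GID has been provisioned in the sliver.
        return os.path.exists("/etc/sliver_gid")

    def groups_for_sliver():
        groups = ["default"]
        if in_geni_sliver():
            groups.append("geni")
        return groups

    # Illustrative group-to-package mapping; the real package set is
    # whatever the SFA library and its dependencies require.
    PACKAGES_BY_GROUP = {"geni": ["sfa", "pyOpenSSL"]}

    def packages_to_install():
        pkgs = []
        for group in groups_for_sliver():
            pkgs.extend(PACKAGES_BY_GROUP.get(group, []))
        return pkgs

    print(packages_to_install())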

We continue to work on iftd, a data transfer daemon that allows Stork clients to access files via a variety of transport protocols, such as HTTP, FTP, BitTorrent, and CoDeploy. Protocol handling and error handling are encapsulated in the iftd daemon, freeing individual Raven tools from having to perform these functions. We deployed iftd in the Stork nest proxy slice, so all Stork client slices running on a node download their metadata and packages through the Stork nest proxy slice, and therefore through iftd. We have also deployed iftd in several beta slices and will deploy it across all slices that use Stork in the next quarter.
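
The sketch below shows the general idea of dispatching transfers by URL scheme, which is what encapsulating protocol handling in a daemon buys the client tools; the registry and function names are assumptions, and only an HTTP handler is shown.

    from urllib.parse import urlparse
    from urllib.request import urlopen

    def fetch_http(url, dest):
        # Simple HTTP/HTTPS handler; other protocols would register their own.
        with urlopen(url) as resp, open(dest, "wb") as out:
            out.write(resp.read())

    HANDLERS = {"http": fetch_http, "https": fetch_http}
    # A daemon like iftd would also register FTP, BitTorrent, and CoDeploy
    # handlers here, keeping that logic out of the individual Raven tools.

    def fetch(url, dest):
        scheme = urlparse(url).scheme
        handler = HANDLERS.get(scheme)
        if handler is None:
            raise ValueError("no handler for scheme %r" % scheme)
        handler(url, dest)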

Project participants

  • John H. Hartman (University of Arizona)
  • Scott Baker (SB Software)
  • Jude Nelson (University of Arizona)

Publications (individual and organizational)

  • None.

Outreach activities

  • None.

Collaborations

We worked closely with the following Cluster B members:

  • PlanetLab
  • GUSH
  • GpENI

We are also working with the ProtoGENI cluster to port Raven to their infrastructure.

Other Contributions

  • None.