wiki:ProvisioningService

Project Number

1622

Project Title

A Provisioning Service for Long-Term GENI Experiments
a.k.a. Raven, PROVSERV

Technical Contacts

PI: John Hartman jhh@cs.arizona.edu / +1 520 621 2733
Scott Baker smbaker@sb-software.com / +1 541 689 3128

Participating Organizations

Department of Computer Science
University of Arizona
Tucson, AZ 85721

SB-Software
5996 Kistler Lane
Eugene, OR 97402

Scope

The scope of work on this project is to prototype a provisioning service that provides the infrastructure required to develop, deploy, monitor, and maintain long-term and short-term experiments on GENI.

Priority activities that contribute to this scope include the following items:

  1. Develop a GENI-specific provisioning service that manages software deployment for slices and also interfaces with GENI clearinghouses. Make the service available to projects beginning in Spiral 1 for at least one clearinghouse, and expand availability in subsequent spirals.
  2. Provide configuration management and resource management for longer-term experiments in which software and components can change over the lifetime of the experiment.
  3. Provide other helper tools for researchers, such as monitoring, that make it easier to manage experiments.

Raven Overview Diagram (provided by J. Hartman)

Current Capabilities

  • tempest, an evolution of the pacman tool. Tempest separates group membership determination and package action determination into separate helper commands. For example, this allows the user to specify a group based on a CoMon query, such as the set of nodes that have more than a certain amount of free memory (a minimal sketch of this kind of group selection appears after this list).
  • raven, a tool for one-step software installation. Raven greatly simplifies the process of deploying software packages on a slice.
  • the Stork repository now runs within Apache's mod_python infrastructure. This allows the repository to take advantage of Apache's authentication and load balancing.
  • iftd, a data transfer daemon that will allow Stork clients to access files via a variety of transport protocols such as http, ftp, BitTorrent, and CoDeploy. Protocol handling and error handling are encapsulated in the iftd daemon, freeing individual Raven tools from having to perform these functions (a sketch of this pluggable-transfer idea appears after this list). We anticipate deploying iftd in the next quarter; it will eventually replace the arizona_transfer module that the Raven tools currently use.
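
The following is a minimal, hypothetical sketch of the kind of group determination that tempest delegates to a helper command: filtering a CoMon-style node report by free memory. The CSV fields, threshold, and function name are illustrative assumptions, not the actual tempest or CoMon interfaces.

{{{
#!python
import csv
import io

def nodes_with_free_memory(comon_csv, min_free_mb):
    """Return hostnames whose reported free memory exceeds min_free_mb."""
    reader = csv.DictReader(io.StringIO(comon_csv))
    group = []
    for row in reader:
        try:
            free_mb = float(row["free_mem_mb"])  # illustrative field name
        except (KeyError, ValueError):
            continue  # skip nodes with missing or malformed data
        if free_mb > min_free_mb:
            group.append(row["hostname"])
    return group

if __name__ == "__main__":
    sample = ("hostname,free_mem_mb\n"
              "planetlab1.example.edu,512\n"
              "planetlab2.example.edu,128\n")
    # Only planetlab1 qualifies for the group with a 256 MB threshold.
    print(nodes_with_free_memory(sample, 256))
}}}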

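The sketch below illustrates the pluggable-transfer idea behind iftd: callers hand a URL to a single dispatcher, which selects the appropriate protocol handler, so tools never deal with transport details themselves. The handler registry and function names are assumptions for illustration, not the actual iftd interface.

{{{
#!python
from urllib.parse import urlparse
from urllib.request import urlopen

def fetch_http(url, dest):
    # Plain HTTP/HTTPS handler; other transports would plug in beside it.
    with urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())

# Illustrative handler registry; entries for ftp, BitTorrent, CoDeploy, etc.
# would be registered here in the same way.
HANDLERS = {
    "http": fetch_http,
    "https": fetch_http,
}

def fetch(url, dest):
    # Tools call fetch() and never see protocol or error-handling details.
    scheme = urlparse(url).scheme
    handler = HANDLERS.get(scheme)
    if handler is None:
        raise ValueError("no transfer handler registered for %r" % scheme)
    handler(url, dest)
}}}
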
Please see the quarterly reports (below) for more details.

Milestones

  • RAVEN: S2.a continuous
  • RAVEN: S2.b GUSH integration
  • RAVEN: S2.c two cluster plan
  • RAVEN: S2.d pubsub
  • RAVEN: S2.e rel 3.0
  • RAVEN: S2.f better xfer
  • RAVEN: S2.g spiral 2 id mgmt

Project Technical Documents

Quarterly Status Reports

4Q08 Report
1Q09 Report
2Q09 Report
3Q09 Report

Spiral 1 Connectivity

Raven users download controller software from the Stork web site and install it on their desktops. Experimenters download Raven clients to the PlanetLab GENI nodes they are using in their experiment. Software packages to be installed on these nodes come from a repository outside PlanetLab, so nodes running the Raven software need IP connectivity to this repository in order to download packages. The repository will initially be at the University of Arizona but may eventually migrate into PlanetLab Central.

This project requires only IP connectivity, not layer 2 virtual Ethernets. IP connectivity between Tucson, AZ, Eugene, OR, and the PlanetLab nodes and clearinghouse will be needed for development and integration.

GPO Liaison System Engineer

Christopher Small

Related Projects

PlanetLab

Stork

GushProto

Raven Trac Site
