[[PageOutline]]

= ProvisioningService Project Status Report =

Period: 4Q08

After much negotiation, the contract was finally signed at the end of October. In the roughly two
months since then, we have made significant progress towards our upcoming milestones. We
have been working very closely with the !PlanetLab and Gush projects as part of this effort.

== Port Stork to GENI Environment ==

Much of our effort on this milestone has gone toward working closely with the !PlanetLab project
to define the underlying GENI interfaces to which Stork must be ported. Scott in particular has
done a great deal of work here and developed the 'geniwrapper' software, which provides GENI
interfaces on !PlanetLab. Stork currently uses the native !PlanetLab interfaces, and we are now
beginning to port it to the GENI interfaces; this effort should be significantly simplified by
our participation in the geniwrapper project.

One modification to Stork we have already made is the development of a Stork 'nest' proxy slice.
This slice runs on all GENI components and allows the Stork software embedded in client
slices to download packages and metadata efficiently. The proxy caches these files, so each
file is downloaded only once to each GENI component, regardless of how many slices on that
component use the file. The nest proxy is very similar to an HTTP proxy, except that it allows
files to be identified by the SHA-1 hash of their contents rather than by URL. This avoids naming
conflicts and fits Stork's security model better than URLs do. We modified the embedded
Stork software so that a client slice first attempts to access files via the proxy and, if that fails,
fetches them from the Stork repository directly.

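Conceptually, the client-side logic resembles the following sketch. The proxy address, repository URL, and function names here are illustrative placeholders of ours, not Stork's actual code or configuration; the sketch only shows the try-proxy-then-repository fallback and how content addressing makes integrity checking trivial.

```python
import hashlib
import urllib.request

# Hypothetical endpoints -- these are illustrative placeholders, not the
# project's actual proxy port or repository address.
NEST_PROXY = "http://127.0.0.1:8642"
REPOSITORY = "http://stork-repository.example.org"

def verify(data, sha1_hex):
    """With content addressing, the file's name *is* the SHA-1 hash of
    its contents, so verifying a download means recomputing the hash."""
    return hashlib.sha1(data).hexdigest() == sha1_hex

def fetch_by_hash(sha1_hex):
    """Try the local nest proxy first; on failure, fall back to the
    central repository. Reject any response whose hash does not match."""
    for base in (NEST_PROXY, REPOSITORY):
        try:
            with urllib.request.urlopen(f"{base}/{sha1_hex}", timeout=5) as resp:
                data = resp.read()
        except OSError:
            continue  # this source is unreachable; try the next one
        if verify(data, sha1_hex):
            return data
    raise FileNotFoundError(f"no source could supply {sha1_hex}")
```

Because the identifier is the hash of the contents, a client need not trust the proxy at all: a cached copy from any source is acceptable as long as it verifies.
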
Second, we have begun integrating Stork with Gush. This work is nearing completion and
should be deployed shortly. Specifically, we have integrated Gush into Stork in two ways. First,
we have added Gush support to the Stork developer tools, so that a Stork user can develop a
package on his/her workstation and use Gush to push the package and relevant metadata directly
to his/her slices. Second, we have added Gush support to the Stork repository, so that the
repository periodically uses Gush to push repository metadata to the Stork nest slivers. This
allows the nest slice to provide up-to-date repository metadata to its clients without resorting to
expensive polling of the repository.

== Support for GENI Slice Management ==

We have been working with the !PlanetLab project to define the proper interfaces for resource
management (including slice management) for the geniwrapper. This work is in its early stages,
but, again, our participation both helps us understand what those interfaces are likely to look like
once complete and helps us ensure that the interfaces provide the functionality Raven will need.

Stork currently contains functionality for performing package management operations on groups
of !PlanetLab slices and nodes. Our next task is to change this to support groups of GENI slices
and components, and to interface with the geniwrapper to instantiate slices on the proper
components as specified by the group information. Raven may do this via the geniwrapper
interfaces directly, or may make use of the slice manager tool the !PlanetLab project is currently
developing, depending on which best meets our needs.

We are also planning on integrating with Gush's “experiment” support, and we have an initial
proof-of-concept that generates the XML files Gush uses to define an experiment from
Stork's group information. This, for example, allows a Raven user to use Gush to push out
packages and metadata to the proper GENI components based on the Stork group information.

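The translation can be sketched roughly as follows. The element and attribute names below are illustrative stand-ins, not the actual Gush experiment schema, and the group representation (a mapping of slice name to component hosts) is our simplification of Stork's group information.

```python
import xml.etree.ElementTree as ET

def experiment_xml(experiment_name, group):
    """Build an experiment-definition document from a Stork-style group,
    modeled here as a mapping of slice name -> list of component hosts.
    NOTE: the tag and attribute names are placeholders, not the real
    Gush schema."""
    root = ET.Element("experiment", name=experiment_name)
    for slice_name in sorted(group):
        sl = ET.SubElement(root, "slice", name=slice_name)
        for host in group[slice_name]:
            # One element per GENI component the slice should span.
            ET.SubElement(sl, "component", hostname=host)
    return ET.tostring(root, encoding="unicode")

# Example: one slice spanning two components.
doc = experiment_xml("raven-demo",
                     {"arizona_stork": ["node1.example.org",
                                        "node2.example.org"]})
```

The point of the proof-of-concept is exactly this kind of mechanical mapping: the group information already names the slices and components, so the experiment file can be generated rather than written by hand.
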
== Release v1.0 ==

The above milestones form the core of Raven v1.0. In support of the release, we are in the
process of setting up a Raven Trac site (raven.cs.arizona.edu) that will contain all of the Raven
documentation, source code, tickets, internal milestones, published papers, etc. The site is
currently under construction, but we anticipate moving the Stork code base to the site within the
next month.

As an organizational note, we have decided to split the original Stork project into a suite of tools
under the Raven project. The Stork project contained several tools for deploying packages and
managing slices, one of which is named 'stork'. This causes some amount of confusion,
especially because those of us on the project use the term 'Stork' to refer both to the overall
project and to the particular tool. Going forward, we will refer to the package management tool as
'stork' and the package distribution tool as 'tempest', while the overall project is called 'Raven'.
This will likely cause some initial confusion, but should be clearer in the long run than the
current scheme, in which there is a 'stork' tool that is part of a larger 'Stork' project.

Converted submitted file by Julia Taylor (jtaylor@bbn.com). Original file can be found [http://groups.geni.net/geni/attachment/wiki/ProvisioningService/Raven-QSR-Dec08.pdf here].