Changes between Initial Version and Version 1 of ProtoGENI-1Q09-status

04/21/09 17:16:18
Aaron Falk



= ProtoGENI Project Status Report =

Period: 1Q 2009

== I. Major accomplishments ==

=== A. Milestones achieved ===

'''Year 1(c): Basic clearinghouse and aggregate manager up and running'''

We have released our clearinghouse software under the GENI Public License. The clearinghouse has two main roles:
  * To act as a central trust anchor for federates
  * To provide a set of registries for component managers, slice authorities, slices, and users, so that these entities can find one another
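The registry role above can be sketched as a small lookup service keyed by identifier. This is a toy, in-memory illustration only; the record types mirror the entities named in the report, but the URN layout and field names are assumptions, not the actual ProtoGENI schema.

```python
class Registry:
    """Toy registry: maps identifiers to records so federates can find one another."""

    TYPES = {"authority", "slice", "user", "component_manager"}

    def __init__(self):
        self._records = {}

    def register(self, urn, record_type, record):
        # Only the entity types the clearinghouse tracks are accepted.
        if record_type not in self.TYPES:
            raise ValueError("unknown record type: %s" % record_type)
        self._records[urn] = {"type": record_type, **record}

    def resolve(self, urn):
        """Return the record for a URN, or None if it is unregistered."""
        return self._records.get(urn)


registry = Registry()
# Hypothetical entry; the URL and URN here are illustrative.
registry.register(
    "urn:publicid:IDN+emulab.net+authority+cm",
    "component_manager",
    {"url": "https://www.emulab.net/protogeni/xmlrpc/cm"},
)
cm = registry.resolve("urn:publicid:IDN+emulab.net+authority+cm")
```

In the real system the registry is queried over an authenticated API rather than in-process, but the lookup-by-identifier shape is the same.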
More details of the capabilities, including complete API documentation, can be found here: [[BR]]
The installation documentation can be found here:
An aggregate manager based on the Emulab software is now running at 4 sites: Utah, Kentucky, Wisconsin, and CMU. This AM is capable of:
  * Provisioning "raw" PCs and PlanetLab slivers
  * Creating VLANs within a single aggregate
  * Creating IP-in-IP tunnels between different aggregates
  * Supporting our RSpec format for resource advertisement and sliver descriptions (ticket requests)
  * Federating through a shared clearinghouse

This aggregate manager is part of the Emulab codebase and is implemented as a new API for Emulab resource manipulation. It is available under the AGPLv3 directly from Emulab CVS.

  More details on the capabilities of this AM can be found at: [[BR]]
  API documentation is here: [[BR]]
  Instructions for enabling the AM APIs on an Emulab installation are here:
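The "ticket request" flow mentioned in the capability list follows a request, ticket, sliver sequence. The sketch below simulates that sequence end to end; the class, method names, and data shapes are illustrative assumptions for this report, not the actual ProtoGENI AM API.

```python
import uuid

class ToyAggregateManager:
    """Minimal stand-in for an aggregate manager's ticket/sliver handling."""

    def __init__(self, name):
        self.name = name
        self.slivers = {}

    def get_ticket(self, slice_urn, rspec):
        """Promise the resources described by a request RSpec to a slice."""
        return {"id": str(uuid.uuid4()), "slice": slice_urn, "rspec": rspec}

    def redeem_ticket(self, ticket):
        """Turn a previously issued ticket into a running sliver."""
        sliver_urn = "%s+sliver+%s" % (ticket["slice"], ticket["id"][:8])
        self.slivers[sliver_urn] = ticket["rspec"]
        return sliver_urn


am = ToyAggregateManager("utah")
ticket = am.get_ticket(
    "urn:publicid:IDN+emulab.net+slice+demo",     # hypothetical slice URN
    {"nodes": [{"virtualization_type": "raw"}]},  # toy stand-in for an RSpec
)
sliver = am.redeem_ticket(ticket)
```

Splitting allocation into a ticket step and a redeem step lets a user collect resource promises from several aggregates before committing to any of them.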
'''Year 1(g): Gave demos of progress at GEC #3 and GEC #4'''

We demonstrated the abilities of the clearinghouse and aggregate manager listed above at GEC #3 and GEC #4, plus:
  * A proof-of-concept GUI for describing GENI slices, including topology specification
  * A working federation of 4 sites
  * Emergency shutdown of slices
  * Delegation of credentials

Poster from GEC3, which included material from the demo:
Poster used in the presentation of the GEC4 demo:
We also made significant progress on the "control plane integration of cluster
partners" milestone; more details can be found below.
=== B. Deliverables made ===

Release of clearinghouse software under the GENI Public License. Documentation here:

Significant progress on our RSpec prototype was released publicly at:
Improvements include:
  * Changes to support the needs of the !HomeNet project
  * Support for annotations mapping requested links to specific physical paths
  * Re-working of the identifiers used to identify and bind resources
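As a small illustration of the second improvement above, the fragment below builds a request RSpec whose link carries an annotation pinning it to a specific physical path. The element and attribute names here are illustrative assumptions; the released RSpec prototype defines the actual schema.

```python
import xml.etree.ElementTree as ET

# A two-node request with one link between them.
rspec = ET.Element("rspec", type="request")
for name in ("node-a", "node-b"):
    ET.SubElement(rspec, "node", virtual_id=name)
link = ET.SubElement(rspec, "link", virtual_id="link-ab")

# Hypothetical annotation binding the requested link to a physical path,
# in the spirit of the "specific physical paths" improvement above.
ET.SubElement(link, "component_hop",
              component_urn="urn:publicid:IDN+example+path+p1")

xml_text = ET.tostring(rspec, encoding="unicode")
```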
Ongoing (listed in last report, still going strong):
    Documentation of design decisions and plans, available at: [[BR]]
    Early credential and ticket formats (still in progress), released at:
== II. Description of work performed during last quarter ==

=== A. Activities and findings ===

As before, much of the activity billable to this contract this quarter has been
integration and collaboration (see below).
The most significant progress we have made this quarter is the running of a
federation, which joins together four of the projects in our cluster: Utah,
Kentucky, Wisconsin, and CMU. This is a full control-plane federation, in which
all members act as independent Slice Authorities and Component Managers,
establishing trust through a shared Clearinghouse (run at Utah). The APIs and
data structures used to communicate among the federates, and for users to
access the federates, are our versions of the GENI APIs. All federates at this
time are running versions of the Emulab software.
Users may request slices that include network topologies: links within a
federate are realized as VLANs, and links between federates are currently
realized as IP-in-IP tunnels. When the backbone is built out, it will be a
member of this federation and will enable end-to-end VLAN connectivity between
a few of the federates (pending the assistance of regional and campus networks).
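The link-realization rule above (VLAN inside a federate, IP-in-IP tunnel across federates) can be captured in a few lines. This is a sketch of the policy only, not the actual mapping code; the federate and node names are hypothetical.

```python
def realize_link(endpoint_a, endpoint_b):
    """Pick a link realization from each endpoint's (federate, node) pair.

    Links whose endpoints share a federate can use a native VLAN; links
    that cross federate boundaries fall back to an IP-in-IP tunnel.
    """
    federate_a, _node_a = endpoint_a
    federate_b, _node_b = endpoint_b
    return "vlan" if federate_a == federate_b else "ipip-tunnel"


intra = realize_link(("utah", "pc1"), ("utah", "pc2"))
inter = realize_link(("utah", "pc1"), ("kentucky", "pc7"))
```

Once backbone VLAN connectivity exists between federates, a rule like this would simply gain a third case preferring end-to-end VLANs where the backbone reaches both sites.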
Documentation for joining the federation is here:

API documentation is here:
 [[BR]]
 [[BR]]
 [[BR]]

A set of scripts that use these APIs is described here:
Under other funding, we have made progress on a number of other important
tasks, including:
    Support for delegation of credentials, which includes a further fleshing out of our security model. Described at: [[BR]]
    Progress on a slice embedding service, by adding support for RSpec to Emulab's "assign" (resource mapper) [[BR]]
    Support for simple emergency shutdown of slices [[BR]]
    Very early support for OpenVZ-based slicing inside the Emulab testbed
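The credential delegation mentioned in the first item above rests on chains of credentials that lead back to a trusted authority. The toy model below checks such a chain; it is a sketch of the concept only, and the dict-based "credential" and issuer names are hypothetical, not our actual credential format or security model.

```python
def chain_is_valid(credential, trusted_issuer):
    """Walk parent links; valid iff the root was issued by the trusted authority."""
    seen = set()
    while credential is not None:
        ident = id(credential)
        if ident in seen:        # defend against malformed cyclic chains
            return False
        seen.add(ident)
        if credential["parent"] is None:
            # Root credential: trust hinges on who issued it.
            return credential["issuer"] == trusted_issuer
        credential = credential["parent"]
    return False


# A root credential issued by a slice authority, then delegated onward.
root = {"issuer": "sa.utah", "owner": "alice", "parent": None}
delegated = {"issuer": "alice", "owner": "bob", "parent": root}
```

A real implementation would also verify signatures and check that each delegation step does not grant more privileges than its parent held.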
We were heavily involved in GEC4, giving three talks, a demo, and a poster, and
running 6 hours of cluster meeting time. We have also been involved in the
planning for GEC6, to be held in Salt Lake City.
=== B. Project participants ===

University of Utah

Subcontracts for HP and Internet2 are still under negotiation due to
uncertainty about which Internet2 sites to use in the first year; we are
proceeding now that those sites have been decided.
=== C. Publications (individual and organizational) ===

=== D. Outreach activities ===

As part of Solicitation 2, we had discussions with a large number of
institutions that are not already part of GENI, encouraging more participation.
This included a number of international collaborators.

We addressed a meeting of the QUILT, a group of regional networks (primarily
academic), about how they can get involved in GENI.
=== E. Collaborations ===

We have continued to organize bi-weekly Cluster C conference calls, which
have helped our cluster members to make progress together. In addition to
the members assigned to our cluster, the Security Architecture project has been
a frequent participant, and some calls have featured members of the GPO and
other invited projects.
The following projects have integrated with our clearinghouse by joining our
federation (described above):
    CMU Testbeds [[BR]]
    Instrumentation Tools [[BR]]
    Measurement System
The following project has added support for its substrate to the Emulab
software, bringing it very close to clearinghouse integration:
    Programmable Edge node

We have begun working out RSpec compatibility with the following project:
    DTunnels/BGP Mux
We have also interacted with a number of other GPO-funded projects outside of
our cluster, including:

    SPP Overlay Nodes: Cooperating to share an Internet2-donated wave; we have been working out details of when, where, and how equipment will be placed at shared POPs
    GMOC: Currently evaluating a proposal by the GMOC for using URNs to identify users and resources in GENI. We have also discussed the set of operational data we can collect
    Security Architecture: In continuous contact to refine our security model
    Million-Node GENI: Now available on the Emulab testbed; we plan to soon add support for automatically giving ProtoGENI users "Seattle" accounts
    Mid-Atlantic Network: Have had a discussion about the capabilities of DRAGON and how it could be useful to ProtoGENI in the medium (but probably not short) term
    Great Plains Network: Have given them an account on Emulab so that they can see how our control framework is put together
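The URN proposal discussed with the GMOC above concerns naming users and resources uniformly. The small parser below sketches a publicid-based URN layout of the form `urn:publicid:IDN+<authority>+<type>+<name>`; treat both the layout details and the example values as illustrative rather than a statement of the adopted scheme.

```python
def parse_geni_urn(urn):
    """Split an IDN publicid URN into authority, type, and name fields."""
    prefix = "urn:publicid:IDN+"
    if not urn.startswith(prefix):
        raise ValueError("not an IDN publicid URN: %r" % urn)
    # The name field may itself contain '+', so split at most twice.
    authority, rtype, name = urn[len(prefix):].split("+", 2)
    return {"authority": authority, "type": rtype, "name": name}


parsed = parse_geni_urn("urn:publicid:IDN+emulab.net+user+alice")
```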
We participated in a GPO-sponsored meeting held in Denver in February,
primarily composed of Cluster B members, to work out issues relating to the
role of the clearinghouse in interacting with aggregate managers.

We have continued to be heavily involved in the Control Framework and Services
working groups, through in-person meetings, conference calls, and email.

=== F. Other Contributions ===