Changes between Initial Version and Version 1 of Mid-Atlantic Crossroads-2Q09-status

05/24/10 14:16:17



= Mid-Atlantic Crossroads Project Status Report =
Period: 2Q09
== I. Major accomplishments ==
=== A. Milestones achieved ===
  * Milestone: Extend DRAGON’s open-source GMPLS-based control plane implementation to include edge compute resources and support network virtualization [of !PlanetLab per our participation in Cluster B]
  * Milestone: Integrate DRAGON/GRMS with candidate GENI control framework
  * Milestone: Integrate DRAGON/GENI control framework with DRAGON testbed
  * Milestone: Make a working prototype (tested) GENI (substrate and control framework) available for limited external research '''(early achievement)'''
=== B. Deliverables made ===
  * Continued as co-chair of the Substrate WG
  * Designed and developed a working implementation of a SOAP-based Aggregate Manager
    * Java-based reference implementation that provides a web services API (WSDL) to clients, including the ability to communicate with a Clearing House
    * Code base is public and available for others to use
    * API capable of controlling multiple component resources within the MAX GENI infrastructure, including:
      * !PlanetLab nodes
      * Dynamic e2e bandwidth VLANs
      * Eucalyptus virtualization nodes (example of API capability)
      * PASTA wireless nodes (example of API capability)
      * NetFPGA-based !OpenFlow switches
We believe this to be a significant step toward Cluster B geniwrapper integration and Aggregate Manager interoperability within the Cluster B control framework.
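As a rough illustration of how a single Aggregate Manager API can front several different component resources, the sketch below dispatches a slice request to per-resource-type drivers. All class, method, and driver names here are hypothetical; the actual MAX implementation is the Java-based, SOAP-fronted service described above.

```python
# Illustrative sketch only: names are invented, not the actual MAX code.

class AggregateManager:
    """Dispatches a slice request to per-resource-type drivers."""

    def __init__(self, drivers):
        # drivers maps a resource type (e.g. "planetlab", "vlan",
        # "eucalyptus") to a callable that provisions one resource
        # and returns an identifier for it
        self.drivers = drivers

    def create_sliver(self, slice_name, requests):
        """Provision each (type, spec) pair; an unsupported type raises KeyError."""
        provisioned = []
        for rtype, spec in requests:
            identifier = self.drivers[rtype](slice_name, spec)
            provisioned.append((rtype, identifier))
        return provisioned
```

In this shape, adding a new resource class (e.g. an !OpenFlow switch) is a matter of registering one more driver rather than changing the API surface.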
== II. Description of work performed during last quarter ==
=== A. Activities and findings ===
Primary efforts this quarter focused on the following MAX deliverable:
  * Designed and developed a working implementation of a SOAP-based Aggregate Manager
'''Focus Next Quarter:'''
  * Ticketing and authentication are still a work in progress
    * HTTPS could be used for encryption
    * Signed SOAP messages could provide authentication
  * Back-end hooks to provisioning systems are still under development
    * will interface to OSCARS/DRAGON via a Web Services API
    * will interface to !PlanetLab via XMLRPC directly to the PLCAPI, or via XMLRPC (or SOAP) to the !PlanetLab GENIWrapper AM
    * will be extended to discover !OpenFlow controllers or other technologies
  * Extend our existing SOAP-based Aggregate Manager component to interoperate with other GENI Cluster B participants
  * Reach further consensus within Cluster B on the most effective way to interoperate with !PlanetLab and create an effective aggregate manager
  * End-to-end slices across AMs will require something very similar to the inter-domain interaction used to create inter-domain dynamic circuits in networks like DRAGON, Internet2 DCN, ESnet, etc.
  * For example:
    * calculate the end-to-end slice (multi-AM slice) first and see if it is achievable
    * then go from AM to AM and try to provision all of the resources
  * We believe this will look something like our current Path Computation Element (PCE), but generalized into a Resource Computation Engine (RCE), where path is just one of the constraints
  * Explore interoperability issues with another control framework, i.e., ProtoGENI, once the ProtoGENI switch and SPP switch are installed in !McLean, VA on August 17-19th; MAX will be assisting with the installation.
  * Delivery of preliminary (aggregate manager) design documentation to the GPO
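The two-phase flow above (compute the multi-AM slice first, then provision AM by AM with rollback on failure) might be sketched as follows. The function names and data shapes are illustrative assumptions, not the actual PCE/RCE code.

```python
# Hypothetical sketch of two-phase, multi-AM slice setup; not actual MAX code.

def compute_slice(requests, advertised):
    """Phase 1: check that the end-to-end (multi-AM) slice is achievable.

    requests: {am_name: set of resources wanted from that AM}
    advertised: {am_name: set of resources that AM advertises}
    Returns an ordered provisioning plan, or None if infeasible.
    """
    plan = []
    for am, wanted in requests.items():
        if not wanted <= advertised.get(am, set()):
            return None            # some AM cannot satisfy its share
        plan.append((am, wanted))
    return plan

def provision_slice(plan, reserve, release):
    """Phase 2: go from AM to AM; roll back earlier AMs if one fails."""
    done = []
    for am, wanted in plan:
        if reserve(am, wanted):
            done.append(am)
        else:
            for prev in reversed(done):   # undo the partial slice
                release(prev)
            return False
    return True
```

A fuller RCE would replace the set-membership test in phase 1 with a constraint solver where path is only one of several constraints, as the report suggests.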
=== B. Project participants ===
Chris Tracy, Jarda Flidr, and Peter O’Neil
=== C. Publications (individual and organizational) ===
=== D. Outreach activities ===
=== E. Collaborations ===
  * Attended the RSpec Workshop in Chicago and the Cluster B meeting in Cambridge.
  * Attended and presented at the February 13th Cluster B Integration meeting at DIA.
  * Engaged with USC ISI-East in Arlington, VA to bring up a !PlanetLab node and begin debugging early efforts to reserve Gigabit slices of bandwidth between ISI and MAX !PlanetLab nodes.
  * Regular communication and ongoing support with GpENI (Great Plains Environment for Network Innovation) on installing and running:
    * the DRAGON software suite, with configuration support for the Ciena !CoreDirector platform
    * a private !PlanetLab Central deployment
  * Planning for physically interconnecting to the ProtoGENI switch to be installed in !McLean, VA during August
  * Provided beta testing results and deployment experiences with the Princeton geniwrapper code to the planetlab-devel mailing list. MAX efforts led to the addition of a limited notion of Ethernet VLANs to their RSpec, so that a resource request can specify not only VMs but also links connecting those VMs.
  * There has also been progress on making it possible for users (who are SSH'ing into !PlanetLab slices) to set up tagged VLAN interfaces (and the IP addresses on those interfaces) themselves, without administrator assistance. This information is put into the RSpec; when the slice is set up on !PlanetLab, attributes are added to the nodes which specify that tagged VLANs may be created and also limit the range of IP addresses that user can assign. This is all accomplished by the !PlanetLab 'vsys' mechanism, which is basically a generic way to allow !PlanetLab users to run a limited subset of commands on the primary host that runs all of the VMs.
  * We have also worked with Adva to upgrade the software release (and firmware) on all of the DRAGON optical equipment. In the next quarter, the MEMS cards will be brought up to the latest manufacturing standards and the FPGA/CPLDs upgraded.
  * Participated in coordination calls with Cluster B participants
  * Discussions with Jon Turner continue about locating one of his SPP nodes at the !McLean, VA Level 3 PoP, where Internet2, MAX, and NLR all have suites and tie-fibers.
  * In discussions with IU’s Meta-Operations Center on comparing operations data from MAX with the operations data available from !PlanetLab.
  * Continuing discussions with EnterpriseGENI/Stanford on experimentation with !OpenFlow and on SOAP/WSDL efforts to interoperate effectively with !PlanetLab
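To illustrate the idea, discussed above, of an RSpec request that carries both VMs and the links connecting them, here is a minimal sketch that builds such a document. The element and attribute names are invented for illustration and do not reflect the actual !PlanetLab RSpec schema.

```python
# Illustrative only: element and attribute names are hypothetical,
# not the actual PlanetLab RSpec schema discussed above.
import xml.etree.ElementTree as ET

def build_request(vm_names, vlan_tag):
    """Build a resource request listing VMs plus a VLAN link connecting them."""
    rspec = ET.Element("RSpec", type="request")
    for name in vm_names:
        ET.SubElement(rspec, "node", name=name)   # one element per VM
    link = ET.SubElement(rspec, "link", vlan=str(vlan_tag))
    for name in vm_names:
        ET.SubElement(link, "endpoint", node=name)
    return ET.tostring(rspec, encoding="unicode")
```

The point is simply that the link (with its VLAN tag) is a first-class part of the request, alongside the VMs it connects.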
=== F. Other Contributions ===
Updated and further documented our GENI web page to summarize our efforts installing MyPLC (!PlanetLab Central) v4.3 and presentations to date on the MAX GENI site:
  *
  *
  *