= Mid-Atlantic Crossroads Project Status Report =
Period: 1Q09
== I. Major accomplishments ==
=== A. Milestones achieved ===
  * Milestone: Extend DRAGON’s open-source GMPLS-based control plane implementation to include edge compute resources and support network virtualization [of !PlanetLab per our participation in Cluster B]
  * Milestone: MAX: Purchase virtualization servers and complete end-to-end VLAN connections with backbone
    * 5 virtualization servers are running across the DRAGON infrastructure (planetlab1–…). Each server has a management interface connected to the public Internet and a second high-speed interface connected to the DRAGON backbone. End-to-end dynamic VLANs can be reserved/provisioned between these nodes using the DRAGON API.
    * deployed the private !PlanetLab MyPLC software package to manage VMs (vservers) on these servers
    * these servers were purchased with funds independent of GENI funding
=== B. Deliverables made ===
  * Periodically update and refresh the GENI trac wiki pages for our project
  * Continue as co-chair of the Substrate WG
  * Multiple Aggregate Manager Integration (!PlanetLab AM with DRAGON AM):
    * Provide access to the !PlanetLab Central API (XMLRPC-based interface) to interested members of GENI Cluster B for instantiating !PlanetLab slices across our 5 nodes
      * adjusted the polling interval on the MyPLC node manager to provide rapid setup and teardown of compute slices (to more closely match the speed at which dynamic circuits may be provisioned and torn down)
    * created multiple tagged VLAN interfaces on each of our 5 planetlab nodes
    * provisioned circuits (DRAGON slices) between the Los Angeles data source node and the planetlab5 node at MAX
    * Demonstration at GEC4 included:
      * dynamic circuit slice provisioning (via the DRAGON SOAP API)
      * virtual machine slice provisioning (via PLC’s XMLRPC API)
      * integration of non-!PlanetLab and !PlanetLab prototype aggregate managers
      * inter-domain circuit creation over the wide-area network with a SONET-switched network
      * transfer of 2GB files from disk to disk using nuttcp at 450Mbps
      * used md5sum to verify that the copied file was valid
      * provisioned additional dynamic circuits between planetlab nodes to distribute the data:
        * src planetlab5, dst planetlab2
        * src planetlab5, dst planetlab3
        * src planetlab5, dst planetlab4
      * transfer of 2GB files from disk to disk between these pairs of nodes
      * used md5sum on the destination nodes to verify that the copied files were valid
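The end-to-end integrity check in the demonstration above (md5sum on both ends of a disk-to-disk transfer) can be sketched locally. This is a minimal stand-in, not the demo code: a plain file copy replaces the nuttcp transfer over the provisioned circuit, and the file size is scaled down from 2GB; the paths and sizes are illustrative.

```python
import hashlib
import os
import shutil
import tempfile

def md5_of(path, chunk_size=1 << 20):
    """Stream a file through MD5 in chunks, as md5sum does, without
    loading the whole file into memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Local stand-in for the disk-to-disk transfer: write a source file,
# "transfer" it with a copy (nuttcp moved the bytes in the demo), then
# compare checksums on both ends as the demo did with md5sum.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "payload.bin")
dst = os.path.join(workdir, "payload.copy")

with open(src, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MB stand-in for the 2 GB file

shutil.copyfile(src, dst)

ok = md5_of(src) == md5_of(dst)
print("checksums match:", ok)
```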
We believe this to be a significant step in the right and needed direction toward our deliverable goals. Discussions with the vserver developer clarified that VLAN support in vservers relies on IP-level isolation. Although it is possible to add/remove tagged VLAN interfaces while a !PlanetLab vserver is running, they must be managed manually by administrators. Extending the !PlanetLab API to allow users to manage these logical subinterfaces is something to be explored further during the next quarter.
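Until the API is extended, an administrator manages these tagged subinterfaces by hand with vconfig/ifconfig. The sketch below only emits the command sequence rather than executing it; the interface name, VLAN tag, and address are illustrative placeholders, not values from our deployment.

```python
def vlan_setup(parent_dev, vlan_tag, ip_and_mask):
    """Return the shell commands an administrator currently runs to
    create and address a tagged VLAN subinterface on a node."""
    subif = "%s.%d" % (parent_dev, vlan_tag)
    return [
        "vconfig add %s %d" % (parent_dev, vlan_tag),  # creates eth1.<tag>
        "ifconfig %s %s up" % (subif, ip_and_mask),    # address it, bring it up
    ]

def vlan_teardown(parent_dev, vlan_tag):
    """Return the matching teardown commands."""
    subif = "%s.%d" % (parent_dev, vlan_tag)
    return [
        "ifconfig %s down" % subif,
        "vconfig rem %s" % subif,
    ]

# Dry run: print the commands instead of executing them.
for cmd in vlan_setup("eth1", 3001, "10.10.30.1 netmask 255.255.255.0"):
    print(cmd)
for cmd in vlan_teardown("eth1", 3001):
    print(cmd)
```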
== II. Description of work performed during last quarter ==
=== A. Activities and findings ===
Primary efforts this quarter focused on the following MAX deliverable:
  * Extend DRAGON’s open-source GMPLS-based control plane implementation to include edge compute resources and support network virtualization
To summarize the particulars listed in the above section, we have demonstrated initial interoperability between:
  * !PlanetLab “slivers” (vservers)
  * DRAGON dynamic circuits (end-to-end VLANs)
  * non-!PlanetLab servers (whole systems, no VMs)
  * tagged VLAN interfaces which are created inside a vserver
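The sliver-provisioning side of this interoperability goes through PLC's XMLRPC interface. A minimal sketch of that path is below: AddSlice and AddSliceToNodes are standard PLCAPI methods, but the endpoint URL, credentials, slice name, and hostnames are placeholders, and the network call itself is left commented out.

```python
import xmlrpc.client

# Hypothetical endpoint for a private MyPLC deployment (placeholder URL).
PLC_URL = "https://myplc.example.net/PLCAPI/"

def make_auth(username, password):
    """PLCAPI passes an auth struct as the first argument of every call."""
    return {"AuthMethod": "password",
            "Username": username,
            "AuthString": password}

def provision_slice(url, auth, slice_name, hostnames):
    """Create a slice and bind it to nodes. The node manager on each node
    then instantiates the vserver on its next polling pass, which is why
    we shortened the polling interval to match dynamic circuit setup."""
    plc = xmlrpc.client.ServerProxy(url, allow_none=True)
    plc.AddSlice(auth, {"name": slice_name,
                        "description": "Cluster B demo slice",
                        "url": "http://example.net/"})
    plc.AddSliceToNodes(auth, slice_name, hostnames)

auth = make_auth("user@example.net", "secret")  # placeholder credentials
# provision_slice(PLC_URL, auth, "max_demo", ["planetlab5.example.net"])
```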
Based on discussions at the February 13th meeting, we became interested in exploring use of NetFPGA cards and, with MAX funds, purchased two and obtained a license and CAD tools from Xilinx as a University Program Member.
'''Lessons Learned:'''[[BR]]
Reservations for network and compute resources are not tightly coupled yet.
  * Required manual configuration of VLAN tags on !PlanetLab nodes
  * PLCAPI does not currently include functions to manage VLANs or IP addresses dynamically
  * !PlanetLab nodes assume a static Layer 3 IP network connection
  * No knowledge of the dynamic network edge port “identifier” (ingress/egress switch ports for dynamic circuit provisioning)
  * !PlanetLab’s vserver virtualization technique does not completely isolate traffic
    * a vserver can ping IPs configured on other VLANs even when using the ip_addresses slice attribute to isolate which IP addresses are visible to a particular vserver
  * vservers do provide adequate network performance for disk-to-disk or memory-to-memory transfers
'''Focus Next Quarter:'''[[BR]]
  * Extend our existing SOAP-based Aggregate Manager component to interoperate with other GENI Cluster B participants
  * Provide tighter integration of network and compute resource provisioning
  * Extend the PLC API to support dynamic addition/removal of tagged VLAN interfaces and configuration of IP addresses, instead of running vconfig/ifconfig manually
  * Implement a !PlanetLab node attribute to map a physical NIC (e.g. eth1) to a globally unique edge port interface on the dynamic network (urn:ogf:network:[...])
  * Investigate alternatives to vservers for better traffic isolation, e.g., !PlanetLab Japan uses Kernel-based Virtual Machine (KVM) and Xen instead of vservers
  * Reach consensus within Cluster B as to the most effective way(s) to interoperate with !PlanetLab and create an effective aggregate manager
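The proposed node attribute mapping a physical NIC to an edge port could be as simple as a lookup keyed on (node, interface). This is only a sketch of the idea: the node names follow our planetlab naming, but the URN values are made-up placeholders in the urn:ogf:network form, not real identifiers.

```python
# Hypothetical node attribute: map each data-plane NIC on a PlanetLab
# node to the globally unique edge-port identifier used by the dynamic
# circuit network. All URNs below are illustrative placeholders.
EDGE_PORT_MAP = {
    ("planetlab5", "eth1"): "urn:ogf:network:domain=example.net:node=sw1:port=1/1",
    ("planetlab4", "eth1"): "urn:ogf:network:domain=example.net:node=sw1:port=1/2",
}

def edge_port_for(node, dev):
    """Resolve a (node, NIC) pair to its edge-port URN, or None when the
    NIC is not on the dynamic network (e.g. the management interface)."""
    return EDGE_PORT_MAP.get((node, dev))

print(edge_port_for("planetlab5", "eth1"))
print(edge_port_for("planetlab5", "eth0"))  # management NIC: not mapped
```

With such an attribute exposed through the node API, a provisioning tool could discover the ingress/egress switch port for a circuit request instead of relying on operator knowledge.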
=== B. Project participants ===
Chris Tracy, Jarda Flidr, and Peter O’Neil
=== C. Publications (individual and organizational) ===
=== D. Outreach activities ===
=== E. Collaborations ===
  * Attended GEC4 in Miami, presented at the Cluster B WG session and the Substrate WG, presented a new poster and demo of current capabilities, and met individually and collectively with other Cluster B and GENI participants
  * Attended and presented at the February 13th Cluster B Integration meeting at DIA
  * Engaged with USC ISI-East in Arlington, VA to bring up a !PlanetLab node and begin debugging early efforts to reserve Gigabit slices of bandwidth between the ISI and MAX !PlanetLab nodes
  * Regular communication and ongoing support with GpENI (Great Plains Environment for Network Innovation) on installing and running:
    * the DRAGON software suite, with configuration support for the Ciena !CoreDirector platform
    * a private !PlanetLab central deployment
  * Provided beta testing results and deployment experiences of the Princeton geniwrapper code to the planetlab-devel mailing list
  * Participated in coordination calls with Cluster B participants
  * Discussions with Jon Turner continue about locating one of his SPP nodes at the !McLean, VA Level 3 PoP where Internet2, MAX, and NLR all have suites and tie-fibers
  * In discussions with IU’s Meta-Operations Center on comparing operational data from MAX with the operational data available from !PlanetLab
  * Exploratory discussions with Enterprise GENI/Stanford on experimenting with the NetFPGA cards for possible VLAN tag manipulation, push/pop of MPLS shim headers, and/or general-purpose tunneling; and detailed discussions on the most effective way(s) to interoperate with !PlanetLab and create an effective aggregate manager for Cluster B
=== F. Other Contributions ===
  * Discuss and answer questions about GENI that arise within the Quilt community of Regional Optical Networks
  * Created a local GENI web page diagram to summarize our intentions and goals