wiki:ViSE-3Q09-status

Version 1 (modified by jtaylor@bbn.com, 15 years ago)

--

ViSE Project Status Report

Period: Q3 2009

I. Major accomplishments

The fourth quarter of the ViSE project includes the following major accomplishments:

  • A successful demonstration at GEC5 in July. The demonstration showed virtualization of actuators for our Raymarine radars using a single guest VM.
  • A re-submission of ViSE-related technology to NSDI on October 2nd, 2009. The submission, titled "MultiSense: Fine-grained Multiplexing for Steerable Sensor Networks," includes experimentation with both PTZ cameras and radars.
  • Significant progress toward early Spiral 2 milestones. These include: complete Xen sensor slivering for multiple VMs (due 11/16/09); complete integration of our testbed with a broker in the cluster clearinghouse, so that our testbed becomes federated with the other associated testbeds (due 11/16/09); work with the GPO and cluster projects to complete a plan for the setup of VLANs between testbeds, to be carried by the Internet2 (or NLR) backbone network (due 11/16/09); and delivery of a first release (v1.x) of our testbed and web-based experiment control software, with documentation, to the GPO (due 12/1/09).
  • Significant collaborations with, and contributions to, our Cluster D peers through numerous email exchanges, video conferences, and in-person meetings.
  • A continuing collaboration with the University of Puerto Rico, Mayagüez (UPRM), as part of our outreach plan.

The rest of this document describes in detail the major accomplishments above.

A. Milestones achieved

We achieved the following milestones in the 4th quarter as specified in our original Statement-of-Work.

  • July 31st, 2009. Contingent upon availability of reference implementation of Shirako/ORCA at 6 months, import and then integrate.

ViSE is running the latest reference implementation of the Shirako/Orca codebase. Note that Milestone 1c (completed February 1st, 2009) required ViSE to perform an initial integration of Shirako/Orca prior to an official reference implementation being released; see that milestone for specific details of the integration. Incorporating the latest reference implementation required only minor code porting.

Additionally, as a result of the Orca-fest conference call on May 28th, the GENI Project Office and Cluster D set mini-milestones that were not in the original Statement of Work. These milestones are related to Milestone 1e, since they involve the particular instantiation of Orca that Cluster D will use. In particular, by June 15th, 2009, we upgraded our Orca actors to support secure SOAP messages. As part of this mini-milestone, Brian Lynn of the DOME project and the ViSE project also set up a control plane server that will host the aggregate manager and portal servers for both the DOME and ViSE projects. This server has the DNS name geni.cs.umass.edu. The server includes 4 network interface cards: one connects to a gateway ViSE node on the CS department roof, one will connect to an Internet2 backbone site (via a VLAN), one connects to the public Internet, and one connects to an internal development ViSE node. During the Orca-fest and subsequent Cluster D meetings, we agreed to shift this milestone to between August 15th, 2009 and September 1st, 2009.
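The four-interface layout of geni.cs.umass.edu can be summarized with a configuration sketch. This is illustrative only: the interface names and addresses below are hypothetical placeholders, not the server's actual configuration.

```shell
# Hypothetical sketch of the geni.cs.umass.edu NIC roles, written as a
# Debian-style /etc/network/interfaces fragment. Names/addresses are
# placeholders, not the real configuration.

auto eth0                        # public Internet
iface eth0 inet static
    address 192.0.2.10           # placeholder public address
    netmask 255.255.255.0
    gateway 192.0.2.1

auto eth1                        # gateway ViSE node on the CS department roof
iface eth1 inet static
    address 10.0.1.1
    netmask 255.255.255.0

auto eth2                        # Internet2 backbone site (via a VLAN)
iface eth2 inet manual           # layer-2 only until IP addressing is settled

auto eth3                        # internal development ViSE node
iface eth3 inet static
    address 10.0.2.1
    netmask 255.255.255.0
```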

  • July 31st, 2009. Complete Xen sensor virtualization.

We completed an initial research-grade implementation of sensor virtualization in Xen and released a technical report that applies the approach to Pan-Tilt-Zoom video cameras. The technical report can be found on the web at http://www.cs.umass.edu/publication/details.php?id=1575 and is also attached to this milestone report.

As detailed in our previous quarterly status report, we have faced challenges in applying the same techniques to the Raymarine radars in the ViSE testbed because their drivers do not operate by default inside Xen's Domain-0 management domain. The problem affects other high-bandwidth I/O devices used with Xen, and is being actively worked on in the Xen community. While these problems are worked out, we have transitioned to vservers as ViSE's preferred virtualization technology and developed vserver handlers for Orca. We are also porting MultiSense to work with vservers as well as with Xen; its modular design makes this port straightforward. Our demonstration at GEC5 in Seattle showed non-slivered VM access to radar control and data using vservers; once we complete our port of MultiSense, we will be able to support slivered access. A more detailed description of Xen, vservers, and sensors in ViSE is available in our quarter 2 status report.

Since releasing our technical report, we have improved the work and re-submitted it to the NSDI conference. The improved work, which includes experiments with steerable radars, is also attached to this report.

  • October 1st, 2009. Contingent upon available budget, provide a VLAN connection from your testbed to the Internet2.

In cooperation with OIT at UMass-Amherst, we have provided a VLAN connection from our control plane server geni.cs.umass.edu to an Internet2 point-of-presence in Boston. In an email dated September 28th, 2009, Rick Tuthill of UMass-Amherst OIT updated us on the status of this connection as follows:

"I was down at the CS building finishing this link setup on Friday – I think there may have been some confusion in the network jack ordering as there are only two network ports currently activated. The two existing ports that are 'live' are room 218A jack 2-2-4D and room 226 jack 2-2-9D. These two ports and all intermediary equipment are now configured to provide layer-2 VLAN transport from these network jacks to the UMass/Northern Crossroads (NoX) handoff at 300 Bent St in Cambridge, MA. The NoX folks are not doing anything with this research VLAN at this time. They need further guidance from GENI on exactly what they're supposed to do with the VLAN. Also, once IP addressing is clarified for this VLAN, we'll need to configure some OIT network equipment to allow the selected address range(s) to pass through. I have signed the MOU and will return a countersigned 'original' to Brian Levine....Let me know if there's anything I can do to facilitate testing of this link in the next couple of days."

We intend this VLAN connection to serve both the ViSE and DOME testbeds. Thus, as required by this milestone, we have coordinated with OIT at UMass-Amherst to provide a VLAN connection from our testbed to the Internet2 backbone network. In the coming year, we have committed to planning with our peers in Cluster D and the GPO on how best to use this new capability. As part of this plan, and before we can send/receive traffic on this link, we will discuss the roles and capabilities of Internet2 in forwarding our traffic to its correct destination. Attached is the signed Memorandum-of-Understanding (MOU) between UMass-Amherst OIT and the DOME/ViSE projects with respect to our use of this VLAN connection. As discussed in the MOU, OIT is providing the link free of charge for the first year for prototyping purposes, but will charge a fee in subsequent years based on our usage.
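Once IP addressing for the research VLAN is clarified, attaching the server to the layer-2 transport should amount to tagging a sub-interface on the Internet2-facing NIC. The iproute2 commands below are only a sketch: the interface name, VLAN id, and address range are assumptions, since the actual values await guidance from GENI and NoX.

```shell
# Sketch: attach the control plane server to the research VLAN.
# Assumptions: eth2 is the NIC cabled to the activated OIT jack; VLAN id 100
# and the 10.100.0.0/24 range are placeholders pending GENI/NoX guidance.

ip link add link eth2 name eth2.100 type vlan id 100   # tagged sub-interface
ip addr add 10.100.0.2/24 dev eth2.100                 # placeholder address
ip link set dev eth2 up
ip link set dev eth2.100 up
```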

  • October 1st, 2009. Virtualization of actuators using a single guest VM and demo.

We demonstrated control of the radar's actuators at GEC5 in Seattle in July using vservers. Our technical reports provide more detailed information on the nature of this control.

  • October 1st, 2009. Testbed available for public use within our cluster.

Our testbed is available for limited use within our cluster. We are soliciting a select group of users to help us work out the bugs and kinks in the testbed and identify what needs to be improved. The portal for our testbed is available at http://geni.cs.umass.edu/vise. Note that as we develop our sensor virtualization technology, we are initially allowing users to safely access dedicated hardware: the sensors and the wireless NIC.

Right now, we are targeting two types of users for our testbed. The first type is users who wish to experiment with long-distance 802.11b wireless communication. Long-distance links are difficult to set up because they require access to towers and other infrastructure to provide line-of-sight; our two 10km links are thus useful to outside researchers working on these problems. A number of students at UMass-Amherst are using the testbed to solve problems in this area. The second type is radar researchers who can leverage our radar deployment. We are working with students from Puerto Rico and other researchers in CASA to interpret and improve the quality of our radar data and to test detection algorithms against it. We are soliciting feedback from these users about what they need to do on these nodes and how the testbed can satisfy their needs. Note that our testbed interacts with a remote Clearinghouse run by RENCI/Duke to facilitate resource allocation.

  • October 1st, 2009. Switch to remote Clearinghouse at RENCI/Duke.

While not in our official list of milestones, Harry Mussman asked our cluster to switch to using a remote Clearinghouse by October 1st. We have made this switch. We first made use of a jail that we controlled at RENCI/Duke to set up and test this Clearinghouse in mid-August, and on September 28th, 2009, we sent email to RENCI asking them to switch us over to their Clearinghouse.

Milestones in Progress
Below we list the milestones for quarter 1 of Spiral 2, as recently agreed upon in our Spiral 2 Statement-of-Work.

  • November 16th, 2009. Complete Xen sensor slivering (multiple VMs).

We have made significant progress toward this milestone as detailed in our two technical reports. We are beginning to port our existing Xen implementation to vservers.

  • November 16th, 2009. Complete integration of your testbed with a broker in the cluster clearinghouse, so that your testbed becomes federated with the other associated testbeds. Demo functionality of your testbed in this environment, including access from experiment control tools and service managers that are remote from your testbed.

As detailed above, we have integrated our testbed with the broker at the remote Clearinghouse. Our plan is to demo this portal and integration at the upcoming GEC.

  • November 16th, 2009. Work with GPO and cluster projects to complete a plan for the setup of VLANs between testbeds, to be carried by Internet 2 (or NLR) backbone network between the testbeds.

Our status relative to Internet2 is detailed above. We are well-prepared to complete a plan for using our existing link.

  • December 1st, 2009. Deliver a first release (v1.x) of your testbed and web-based experiment control software, with documentation, to the GPO.

We are currently readying our code for release, and are in good position to satisfy this milestone.

  • January 1st, 2010. Best-effort installation of Pelham tower x86 sensor node (note: funded from other sources), to include meteorological sensors, radar (if licensed), communications, computing. No camera. If Pelham node is problematic, optional rapidly-deployed node replaces Pelham node.

We have ordered the parts for the Pelham node and coordinated with the Massachusetts Department of Conservation and Recreation to set up the node. We are pushing to bring the node up this fall, before the winter snow.

B. Deliverables made

The fourth quarter of the project includes work toward a number of deliverables. These deliverables include the ViSE demonstration and poster at GEC5 on July 20th, 2009 in Seattle. Along with Brian Lynn and Ilia Baldine, we aided in drafting documentation and code that other Cluster D projects, such as KanseiGenie, can use as templates for integrating their own testbeds. We have also contributed to GPO and Cluster-wide GENI discussions.

II. Description of work performed during last quarter

A. Activities and findings

The primary work during the quarter, including our Activities and Findings, centered on achieving the milestones described above and making progress toward our initial Spiral 2 milestones. In addition to attending GEC5, we held monthly Cluster D group meetings via teleconference.

B. Project participants

The primary PI is Prashant Shenoy. Co-PIs are Michael Zink, Jim Kurose, and Deepak Ganesan. Research Staff is David Irwin. Navin Sharma, a graduate student, is also contributing to the project and is the primary author of the ViSE-related submission currently under review.

C. Publications (individual and organizational)

We submitted our research on sensor virtualization with Xen to SenSys. Although the paper was well-reviewed, it was not selected for publication. We have since improved the work and re-submitted it to the NSDI conference.

D. Outreach activities

We are continuing our discussions with UPRM about integrating with their student radar testbed project. We have bi-weekly meetings with the Student Testbed Project at UPRM. Jorge Trabal, the primary student working on the UPRM project, is visiting UMass-Amherst until he completes his Ph.D. The testbed project at UPRM has the same origins as the ViSE testbed, and thus many of the components are the same. However, while ViSE is focused on the virtualization aspect of the testbed, the UPRM team is focused on improving the data provided by its radar. Thus, the two projects are complementary. We are discussing with Jorge the best way to leverage their improved radar data, as well as the potential for integration with ViSE in the future.

E. Collaborations

We collaborated significantly with other Cluster D projects during the quarter. First, we set up and maintained geni.cs.umass.edu for both the ViSE and DOME projects. We also aided in integrating both ViSE and DOME with RENCI's Clearinghouse. Additionally, we had numerous email exchanges on the Orca user mailing list about the intricacies of integration and setup. We also configured geni.cs.umass.edu to connect to the Internet2 VLAN connection. ViSE also participated in the review of GPO documents on Experiment Services and Workflow.

F. Other Contributions