wiki:ViSE-4Q09-status

Version 1 (modified by jtaylor@bbn.com, 14 years ago)

--

ViSE Project Status Report

Period: 4Q09

I. Major accomplishments

The fourth quarter of the ViSE project includes the following major accomplishments:

  • A successful demonstration at GEC6 in November. The demonstration showed integration with a remote Clearinghouse operated by RENCI and Duke in Chapel Hill, NC.
  • Submission of four papers on ViSE-related technology. We submitted a paper entitled “Towards a Virtualized Sensing Environment” to TridentCom 2010, a conference devoted to testbed technologies; the paper describes ViSE’s integration with the Orca control framework and research problems related to operating shared sensing environments. We submitted an extended abstract entitled “ViSE: Broadening Access to Sensors using Shared Virtualized Testbeds” to IGARSS 2010, a conference devoted to geoscience and remote sensing. The IGARSS 2010 call for papers focuses on the new field of community remote sensing, which combines remote sensing with citizen science, social networks, and crowd-sourcing to enhance the data obtained from traditional sources, and is thus particularly synergistic with the GENI vision and prototype. We view both conferences as excellent opportunities to interact with testbed builders and potential users of ViSE. We submitted a paper entitled “Cloudy Computing: Leveraging Weather Forecasts in Energy Harvesting Sensor Systems” to SECON 2010, a conference devoted to wireless sensor networks; the paper describes how a GENI-like testbed powered by wind turbines and/or solar panels can make use of weather forecasts provided by the National Weather Service. Finally, we submitted a paper to NSDI entitled “MultiSense: Fine-grained Multiplexing for Steerable Sensor Networks” that describes our sensor virtualization technology.
  • The successful completion of multiple fifth quarter milestones and significant progress toward early sixth quarter milestones. The completed milestones include integration with a Clearinghouse operated by RENCI/Duke by 11/16/2009, aggregates ready for experiments by researchers by 11/16/2009, and completion of sensor slivering by 12/1/2009. We have also completed early sixth quarter milestones, including the construction of a rapidly deployable node (the Pelham node installation has been delayed until after the winter snow melts) and the installation of camera devices on the CSB and MA1 tower nodes by January 1st, 2010. Further, we have worked with the GPO to set up and test our VLAN connection from UMass-Amherst to BBN in Boston.
  • In keeping with GENI’s broader Spiral 2 goals, we have joined with the DOME team to support an undergraduate student working on a bus tracking experiment using DOME’s network of buses and ViSE’s pan-tilt-zoom cameras mounted on the roof of the computer science building. The experiment, which represents coordination between the two projects, allocates resources from both DOME and ViSE: it attempts to detect incoming buses using their wireless signal beacons and take a picture of each bus as it passes by the road in front of our building.
  • As part of our collaboration with the University of Puerto Rico, Mayagüez (UPRM), and as part of our outreach plan, we have prepared a GENI and cloud computing seminar to be taught from January 12th through January 15th at UPRM in Puerto Rico. The seminar will be attended primarily by undergraduate and graduate students familiar with radar technology but not with emerging computing technologies such as GENI, virtualization, and cloud computing. We will host a series of six lectures on GENI, virtualized sensor networks, cloud computing, wireless communication, virtualized networks, and operating system virtualization, as well as tutorials using ViSE, Amazon’s web services, and enabling virtualization technologies such as VServers, Xen, and VMware. Additionally, we will conduct meetings with the National Weather Service in Puerto Rico and consult with the students setting up their own ViSE-like testbed there.
  • Significant collaborations with, and contributions to, our Cluster D peers through numerous email exchanges, video conferences, and in-person meetings.

The rest of this document describes in detail the major accomplishments above.
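As an aside, the trigger logic of the DOME/ViSE bus tracking experiment described above can be sketched as follows. This is a minimal illustrative sketch, not the project's actual experiment code; the MAC addresses, RSSI threshold, and function names are all hypothetical.

```python
# Hypothetical sketch of the bus tracking trigger: watch for 802.11
# beacons from known DOME bus radios and, when one is heard strongly
# enough, trigger a capture on a ViSE rooftop camera. All identifiers
# and thresholds here are illustrative.

KNOWN_BUS_RADIOS = {"00:16:ea:12:34:56", "00:16:ea:65:43:21"}  # hypothetical bus MACs
RSSI_THRESHOLD_DBM = -70  # only react when the bus is close enough

def is_incoming_bus(beacon_src: str, rssi_dbm: int) -> bool:
    """Return True when a beacon appears to come from a nearby bus."""
    return beacon_src.lower() in KNOWN_BUS_RADIOS and rssi_dbm >= RSSI_THRESHOLD_DBM

def handle_beacon(beacon_src: str, rssi_dbm: int, capture) -> bool:
    """Invoke the camera capture callback for beacons from nearby buses."""
    if is_incoming_bus(beacon_src, rssi_dbm):
        capture()  # e.g. snap a picture of the road in front of the building
        return True
    return False
```

In the real experiment, the beacon source address and signal strength would come from the node's wireless NIC, and the capture callback would drive the pan-tilt-zoom camera.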

A. Milestones achieved

We achieved the following milestones in the 4th quarter as specified in our original Statement-of-Work. Note: While the October 1st milestones were due in the fifth quarter, we also reported on them in our fourth quarter report. We attach the descriptions from that report below for completeness.

  • October 1st, 2009. Contingent upon available budget, provide a VLAN connection from your testbed to Internet2.

In cooperation with OIT at UMass-Amherst we have provided a VLAN connection from our control plane server geni.cs.umass.edu to an Internet2 point-of-presence in Boston. In an email dated September 28th, 2009, Rick Tuthill of UMass-Amherst OIT updated us on the status of this connection, as follows: “I was down at the CS building finishing this link setup on Friday – I think there may have been some confusion in the network jack ordering as there are only two network ports currently activated. The two existing ports that are ‘live’ are room 218A jack 2-2-4D and room 226 jack 2-2-9D. These two ports and all intermediary equipment are now configured to provide layer-2 VLAN transport from these network jacks to the UMass/Northern Crossroads (NoX) handoff at 300 Bent St in Cambridge, MA. The NoX folks are not doing anything with this research VLAN at this time. They need further guidance from GENI on exactly what they’re supposed to do with the VLAN. Also, once IP addressing is clarified for this VLAN, we’ll need to configure some OIT network equipment to allow the selected address range(s) to pass through. I have signed the MOU and will return a countersigned ‘original’ to Brian Levine....Let me know if there’s anything I can do to facilitate testing of this link in the next couple of days.”

We intend this VLAN connection to service both the ViSE and DOME testbeds. Thus, as required by this milestone, we have coordinated with OIT at UMass-Amherst to provide a VLAN connection from our testbed to the Internet2 backbone network. In the coming year, we have committed to planning with our peers in Cluster D and the GPO on how best to use this new capability. As part of this plan, and before we can send/receive traffic on this link, we will discuss the roles and capabilities of Internet2 in forwarding our traffic to its correct destination. Attached is the signed Memorandum-of-Understanding (MOU) between UMass-Amherst OIT and the DOME/ViSE projects with respect to our use of this VLAN connection. As discussed in the MOU, OIT is providing the link free of charge for the first year for prototyping purposes, but will charge a fee in subsequent years based on our usage.

  • October 1st, 2009. Virtualization of actuators using a single guest VM and demo.

We demonstrated control of the radar’s actuators at GEC5 in Seattle in July using VServers. Our previously published technical reports describe this actuator virtualization in detail.

  • October 1st, 2009. Testbed available for public use within our cluster.

Our testbed is available for limited use within our cluster. We are soliciting a select group of users to help us work out the bugs and kinks in the testbed and determine what needs to be improved. The portal for our testbed is available at http://geni.cs.umass.edu/vise. Note that as we develop our sensor virtualization technology we are initially allowing users to safely access dedicated hardware: the sensors and the wireless NIC. Right now, we are targeting two types of users. The first type is users who wish to experiment with long-distance 802.11b wireless communication. Long-distance links are difficult to set up because they require access to towers and other infrastructure to provide line-of-sight, so our two 10km links are useful to outside researchers working on these problems. A number of students at UMass-Amherst are using the testbed to solve problems in this area. The second type is radar researchers who can leverage our radar deployment. We are working with students from Puerto Rico and other researchers in CASA to interpret and improve the quality of our radar’s data and to test it against detection algorithms. We are soliciting feedback from these users about what they need to do on these nodes and how the testbed can satisfy their needs. Note that our testbed interacts with a remote Clearinghouse run by RENCI/Duke to facilitate resource allocation.

  • October 1st, 2009. Switch to remote Clearinghouse at RENCI/Duke.

While not in our official list of milestones, Harry Mussman asked our Cluster to switch to using a remote Clearinghouse by October 1st. We have made this switch. We first made use of a jail that we controlled at RENCI/Duke to set up and test this Clearinghouse in mid-August, and on September 28th, 2009 we sent email to RENCI asking them to switch us over to their Clearinghouse.

  • November 16, 2009. Complete Xen sensor slivering (multiple VMs).

The technical report we published in our previous quarterly report details the completion of this milestone. Unfortunately, our submission to NSDI, while well-reviewed, was rejected. We are currently preparing the work for submission to a journal.

  • November 16th, 2009. Complete integration of your testbed with a broker in the cluster clearinghouse, so that your testbed becomes federated with the other associated testbeds. Demo functionality of your testbed in this environment, including access from experiment control tools and service managers that are remote from your testbed.

As detailed above, we have integrated our testbed with the broker at the remote Clearinghouse. At the GEC in Salt Lake City, Utah we demonstrated this functionality.

  • November 16th, 2009. Work with GPO and cluster projects to complete a plan for the setup of VLANs between testbeds, to be carried by Internet 2 (or NLR) backbone network between the testbeds.

At GEC6, we discovered in our meetings that our VLAN connection from the UMass-Amherst CS department to the Northern Crossroads in Boston had been extended to BBN’s offices in Cambridge. As a result, we determined that we would be able to use NLR, rather than Internet2, for VLANs. NLR is advantageous because our other cluster members are using it. Prior to our discovery, we had assumed UMass-Amherst could not use NLR because it is not an NLR member. However, after discussions with Kathy Benninger of NLR at GEC6, we found out that GENI’s agreement with NLR does permit our use. We are currently in discussions with NLR to add a port for UMass-Amherst at the Northern Crossroads in Boston. The cost of the addition is $2,195, and there may be additional costs for the installation at the Northern Crossroads; we have submitted an application to NLR to see if they will cover the port cost. If we are unable to meet the costs, as a backup we will use BBN’s port at the Northern Crossroads to shepherd UMass traffic. Currently, our MOU with the UMass-Amherst OIT department provides us only a single static VLAN connection to Boston, not multiple dynamically created VLANs. We are currently discussing with OIT and the GPO the cost implications and technical hurdles of using dynamic VLANs.

  • December 1st, 2009. Deliver a first release (v1.x) of your testbed and web-based experiment control software, with documentation, to the GPO.

We released our web-based code on schedule.

  • January 1st, 2010. Best-effort installation of Pelham tower x86 sensor node (note: funded from other sources), to include meteorological sensors, radar (if licensed), communications, computing. No camera. If Pelham node is problematic, optional rapidly-deployed node replaces Pelham node.

We have set up a rapidly deployable node in our offices, but have delayed its installation on the Pelham firetower until after the winter snow melts. The delay is primarily due to difficulties coordinating with the Massachusetts Department of Conservation and Recreation.

Milestones in Progress
Below we list the milestones for quarter 2 of Spiral 2, as agreed upon in our Spiral 2 Statement-of-Work.

  • March 16th, 2010. All aggregates use Orca to setup connections to Internet2 or NLR.

We are actively working to complete this milestone in some form, as described above. We currently have a VLAN connection from ViSE to NoX and BBN. Thus, BBN and/or NoX is able to “groom” our traffic onto NLR, although we can only support one VLAN, and hence one experiment, at a time. We are in discussions with our OIT department and the GPO to understand the cost and technical implications of supporting multiple dynamic VLANs.

  • April 1st, 2010. Virtualization of camera devices on CSB and MA1 tower nodes.

We have demonstrated the virtualization of our camera devices in our previously published technical report, and we have installed these camera devices on our nodes. Thus, we are well-positioned to satisfy this milestone.

  • April 1st, 2010. Integration of virtualization/slivering into testbed.

We are currently testing our virtualization/slivering code to ready it for integration and release, and are well-positioned to meet this milestone.

  • April 1st, 2010. Testbed allocation policy for sensors.

As we integrate our virtualization/slivering code into the testbed our allocation policy will have to accommodate slivers of different sizes. We will modify the current broker policies from RENCI/Duke to accomplish this.

  • April 1st, 2010. Update experiment control framework (ECF), based upon updated reference software, provided by RENCI/Duke group.

We have already planned to upgrade to the latest release of Orca during the quarter, before the next GEC in Chapel Hill.

B. Deliverables made

We provided our first release of the ViSE web portal code, as well as our demonstrations and posters at GEC6 in Utah. We have also contributed to GPO and Cluster-wide GENI discussions.

II. Description of work performed during last quarter

A. Activities and findings

The primary work during the quarter, including our Activities and Findings, centered on achieving the milestones described above and making progress toward our early 2010 milestones.

B. Project participants

The primary PI is Prashant Shenoy. Co-PIs are Michael Zink, Jim Kurose, and Deepak Ganesan. Research Staff is David Irwin. Navin Sharma, a graduate student, is also contributing to the project and is the primary author of the ViSE-related technical report.

C. Publications (individual and organizational)

We submitted ViSE-related papers to NSDI, TridentCom, IGARSS, and SECON. Our NSDI paper was rejected, and we are currently preparing it for a journal submission. TridentCom and IGARSS will provide opportunities to interact with other testbed builders and potential ViSE end users, respectively. Our SECON paper explores how to operate GENI testbeds using harvested energy and weather forecasts. The NSDI, TridentCom, and SECON papers are attached to this report.

D. Outreach activities

We have prepared a six-lecture seminar to be held in Puerto Rico from January 12th through 15th, 2010, as described above. Further, we are continuing our discussions with UPRM about integrating with their student radar testbed project, and we hold bi-weekly meetings with the Student Testbed Project at UPRM. Jorge Trabal, the primary student working on the UPRM project, is visiting UMass-Amherst until he completes his Ph.D. The testbed project at UPRM has the same origins as the ViSE testbed, and thus many of its components are the same. However, while ViSE is focused on the virtualization aspects of the testbed, the UPRM team is focused on improving the data provided by its radar; the two projects are thus complementary. We are discussing with Jorge the best way to leverage their improved radar data, as well as the potential for integration with ViSE in the future.

E. Collaborations

We collaborated significantly with other Cluster D projects during the quarter. First, we continue to maintain geni.cs.umass.edu for both the ViSE and DOME projects. We also aided with the integration of both ViSE and DOME with RENCI’s Clearinghouse. Additionally, we had numerous email exchanges on the Orca user mailing list about the intricacies of integration and setup. We also tested connectivity from geni.cs.umass.edu to the GPO’s offices in Cambridge, MA.

F. Other Contributions