
ViSE Project Status Report

Period: 1Q09

I. Major accomplishments

In the second quarter of the ViSE project we accomplished our initial set of Orca integration and operation milestones and continued work on the software and hardware infrastructure needed to complete future milestones. We first provide details on the work behind the two second-quarter milestones below.

A. Milestones achieved

The ViSE project completed milestones M3 and M4 during the second quarter and presented demonstration D1 at GEC4 on March 31st, 2009. Summaries of these milestones follow.

  • Milestone 3. Initial Orca integration. Xen and Orca software running on three sensor nodes, non-slivered, no radar control via Xen. Due February 1st, 2009.

The Orca control framework comprises a set of three distinct actor servers that correspond to GENI experiments, clearinghouses, and aggregate managers. GENI experiments correspond to Orca service managers, GENI clearinghouses correspond to Orca brokers, and GENI aggregate managers correspond to Orca site authorities. Each server runs in the context of a Java virtual machine and communicates with other servers using local or remote procedure calls. The ViSE project has set up one instance each of an Orca service manager, an Orca broker, and an Orca site authority within the same Java virtual machine; the three communicate using local procedure calls.
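As a concrete illustration of this arrangement, the sketch below shows three actors sharing one address space and interacting through plain local calls, with the service manager obtaining a ticket from the broker and redeeming it at the site authority. It is written in Python for brevity (Orca itself is Java), and every class and method name in it is a hypothetical stand-in, not the real Orca API.

    # Hypothetical sketch of the three-actor arrangement; not the Orca API.
    # A service manager (experiment) obtains a ticket from a broker
    # (clearinghouse) and redeems it with a site authority (aggregate
    # manager). Direct method calls stand in for the local procedure calls
    # used when all three actors run inside one virtual machine process.

    class SiteAuthority:
        """Owns the physical resources and instantiates slivers."""
        def redeem(self, ticket):
            print("site authority: creating sliver for", ticket)
            return {"sliver": "xen-vm-1", "ticket": ticket}

    class Broker:
        """Holds delegated resource rights and issues tickets against them."""
        def request_ticket(self, resource_type, units):
            return {"type": resource_type, "units": units}

    class ServiceManager:
        """Acts on behalf of an experiment to assemble a slice."""
        def __init__(self, broker, authority):
            self.broker, self.authority = broker, authority
        def create_slice(self):
            ticket = self.broker.request_ticket("xen-vm", 1)  # local call
            return self.authority.redeem(ticket)              # local call

    experiment = ServiceManager(Broker(), SiteAuthority())
    print(experiment.create_slice())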

The Orca actor servers run on a gateway node connected both to the public Internet (otg.cs.umass.edu) and to the sensor node on the roof of the UMass-Amherst CS department. The roof node, in turn, connects to the sensor node on Mount Toby via 802.11b using a long-distance directional antenna, and the Mount Toby node connects to the sensor node on the MA1 tower. Each sensor node runs an instance of the Xen virtual machine monitor and an instance of an Orca node agent. The Orca site authority communicates with the Orca node agent to instantiate virtual machines for experiments.

Each node is primed with the software necessary to create Xen virtual machines and sliver their resources. Local storage is a 32GB flash drive partitioned using the Logical Volume Manager (LVM). The Orca node agent snapshots a template virtual machine image pre-loaded on each node to create each experiment virtual machine. Additionally, tc is installed on each node to shape and limit each experiment’s network traffic.
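The sketch below illustrates the kind of provisioning steps involved, assuming LVM-backed storage and tc token-bucket shaping. The volume group, template, and interface names here are placeholders, not ViSE’s actual configuration.

    # Illustrative provisioning sketch; the volume group (vg_vise), template
    # volume, and Xen virtual interface name are assumed placeholders.
    import subprocess

    def provision_experiment_vm(name, vif, size="4G", rate="2mbit"):
        # Snapshot the pre-loaded template image to back the new VM's disk.
        subprocess.run(["lvcreate", "--snapshot", "--size", size,
                        "--name", name, "/dev/vg_vise/template"], check=True)
        # Cap the experiment's traffic on its virtual interface with a
        # token-bucket filter.
        subprocess.run(["tc", "qdisc", "add", "dev", vif, "root", "tbf",
                        "rate", rate, "burst", "32kb", "latency", "400ms"],
                       check=True)

    provision_experiment_vm("experiment1-disk", "vif1.0")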

Using the default Orca web portal, users are able to log in and request slices on the ViSE testbed. Currently, only the sensor node on the CS roof is accessible to end-users. We have decided to wait until the end of winter to install the Orca node agent software on the two other ViSE nodes, since they are difficult to access in winter. We expect to reach them in early-to-mid April, depending on the weather and the snow melt.

In addition to the software to support Orca, each node has the appropriate foundation software/drivers to operate the sensors, the wireless and wired NICs, an attached Gumstix Linux embedded control node, and a GPRS cellular modem. These software artifacts are accessible through Domain-0 in Xen. The wireless NIC is used for communication with other sensor nodes. The wired NIC attaches to the Gumstix Linux embedded control node, which, in turn, is connected to the public Internet using a GPRS cellular modem. The control node supports remote operations and management. We have documented the process to create compliant Domain-0 and Domain-U images at http://vise.cs.umass.edu.

This milestone is a precursor to our sensor virtualization work. While users are able to create slices composed of Xen virtual machines bound to slivers of CPU, memory, bandwidth, and local storage, they are not yet able to access any sensors from their virtual machines. We are actively working on this capability and are on track to complete it in late summer/early fall, as specified in our SOW.

  • Milestone 4. Operational web portal and testbed that permits users to: log in and request slices composed of leases for compute slivers (including dedicated sensors under control of dom0) bound to Xen VMs; upload/download files; and execute processes. Due April 1st, 2009.
  • Demonstration 1. Demonstration at GEC4. Due April 1st, 2009. The ViSE demonstration at GEC4 presented the result of completing Milestone 4: an operational web portal and testbed. The demonstration showed sensor control using the Orca control framework, sensor scheduling, and our initial progress toward sensor virtualization.

A Pan-Tilt-Zoom (PTZ) video camera and a Davis Pro weather station are two of the three sensors currently part of the ViSE testbed (note: our radars are too large to transport to Miami). The first part of the demonstration uses the PTZ video camera connected to a single laptop. The laptop represents a “GENI in a bottle” by executing a collection of Orca actor servers in a set of VMware virtual machines. The actors represent a GENI aggregate manager (an Orca site authority), a GENI clearinghouse (an Orca broker), and two GENI experiments (Orca slice controllers). Additionally, one VMware virtual machine runs an instance of the Xen VMM, is connected to the PTZ video camera, and serves as an example GENI component. The GENI aggregate manager is empowered to create slivers as Xen virtual machines on the GENI component, and the experiments communicate with the clearinghouse and aggregate manager to guide the creation of slices.

Importantly, the GENI aggregate manager controls access to the PTZ camera by interposing on the communication between the camera and the experiment VMs. Each experiment requests a slice composed of a single Xen VM sliver with a reserved proportion of CPU, memory, bandwidth, etc. The experiments then compete for control of, and access to, the PTZ camera by requesting a lease for it from the clearinghouse and directing the aggregate manager to attach it (in the form of a virtual network interface) to their sliver. Only a single experiment can control the camera at one time, so the clearinghouse must schedule access to it accordingly. We use the default Orca web portal to display the process, and the PTZ camera web portal within each experiment’s sliver to show the status of the camera.
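The scheduling problem for the camera is simple mutual exclusion. The toy sketch below illustrates the policy the clearinghouse must enforce, one lease holder at a time with competing experiments queued; the names are hypothetical, and Orca’s actual broker policies are more general, calendar-based leases with start and end times.

    # Toy mutual-exclusion scheduler for an unsharable sensor; hypothetical
    # names, illustrating policy only (Orca's real lease policies are richer).
    from collections import deque

    class ExclusiveResource:
        def __init__(self, name):
            self.name, self.holder, self.waiting = name, None, deque()

        def request_lease(self, experiment):
            if self.holder is None:
                self.holder = experiment
                print(experiment, "holds lease on", self.name)
            else:
                self.waiting.append(experiment)  # busy: queue the request
                print(experiment, "queued for", self.name)

        def release_lease(self):
            self.holder = self.waiting.popleft() if self.waiting else None
            if self.holder:
                print("lease on", self.name, "passes to", self.holder)

    camera = ExclusiveResource("ptz-camera")
    camera.request_lease("experiment-1")
    camera.request_lease("experiment-2")  # must wait its turn
    camera.release_lease()                # camera passes to experiment-2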

We also show our progress on true sensor virtualization in the Xen virtual machine monitor. In the case of the camera, the “virtualization” takes the form of permitting full access to the camera by one, and only one, VM through its virtual network interface. We are currently integrating virtualized sensing devices into Xen’s device driver framework. We show our progress toward “virtualizing” a Davis Pro weather station that physically connects to the node via a USB serial port. Our initial goal along this thread is to have the Davis Pro software run inside of a Xen VM on top of a virtual serial driver that “passes through” requests to the physical device. This is the first step toward our milestones near the end of the year for sensor slivering. This demonstration takes the form of a web portal for the weather station, running inside the Xen VM, updating sensor readings in real time.
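As a rough illustration of the pass-through idea, the user-space sketch below shuttles bytes between a physical serial device and a pseudo-terminal that a guest-side program could attach to. The mechanism actually under development lives inside Xen’s split device-driver framework; this code and the device path are illustrative assumptions, not our driver.

    # User-space illustration of serial pass-through; the real work targets
    # Xen's split-driver framework. The device path is an assumption.
    import os, pty, select

    def serial_passthrough(device_path="/dev/ttyUSB0"):
        phys = os.open(device_path, os.O_RDWR | os.O_NOCTTY)
        master, slave = pty.openpty()          # guest-facing endpoint
        print("attach the guest side to", os.ttyname(slave))
        while True:
            ready, _, _ = select.select([phys, master], [], [])
            for fd in ready:
                data = os.read(fd, 1024)
                # Forward each chunk to the opposite endpoint unchanged.
                os.write(master if fd == phys else phys, data)

    if __name__ == "__main__":
        serial_passthrough()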

Milestones in Progress
The following milestones are in progress, with the first two to be completed during the 3rd quarter of the ViSE project.

  • Complete Shirako/ORCA integration. Install Xen and Shirako software on two (total of three; see ticket 40) sensor nodes, non-slivered, no radar control via Xen. Due April 30th, 2009.
  • Contingent upon availability of a reference implementation of Shirako/ORCA at 6 months, import and then integrate it into our project over the following 4 months, with a minimum of support from the RENCI/Duke group. Note: our group will not take explicit responsibility for the other projects using ORCA/Shirako, although we will provide best-effort support to the extent possible given our own constraints and milestones. Due August 1st, 2009.
  • Complete Xen sensor virtualization. Non-slivered control of radar sensor data. Due August 1st, 2009.

Given our current status and the milestones we have already achieved, we are well-positioned to complete each of these three milestones by its due date. The first milestone carries over from quarter 2. The partial delay was caused by the GENI contract delay, which pushed the start of ViSE’s quarter 2 into winter. Two of our nodes are difficult to access during the winter because of snow and ice in Western Massachusetts. The milestone will be completed once the snow melts on the Western Massachusetts mountains, which should happen by April 30th, 2009, at the latest.

The second milestone requires us to import an updated version of the Orca control framework from the RENCI/BEN team. Given that we are already running a version of Orca on ViSE, this milestone should not be cumbersome. The operational modifications to the main branch of Orca’s code are not major, and any ViSE updates should be easy to incorporate. Further, we will participate in the proposed “Orca-fest” for Cluster D developers and provide any expertise required; we have also been consulting with the DOME project on integrating Orca into their testbed.

As shown by our demonstration at GEC4, we have made significant progress toward completing Xen sensor virtualization and non-slivered control of radar sensor data. GEC4 included a preview of this virtualization capability for our Davis Pro weather station, and we have a working version of the capability for our PTZ camera in the lab. We are continuing these efforts in the third quarter. Code is being committed to the ViSE svn repository at http://vise.cs.umass.edu/svn.

B. Deliverables made

The second quarter of the project resulted in two primary deliverables: the ViSE demonstration and poster at GEC4 and the integration of Orca into ViSE. In addition, we have been active in the control framework working group and in Cluster D, participating in GPO-arranged calls. ViSE contributed to the Cluster D group session at GEC4.

II. Description of work performed during last quarter

A. Activities and findings

The primary work during the quarter, including our Activities and Findings, centered on achieving the milestones described above. In addition to attending GEC4, we held monthly Cluster D group meetings via teleconference.

B. Project participants

The primary PI is Prashant Shenoy. The co-PIs are Michael Zink, Jim Kurose, and Deepak Ganesan. David Irwin is research staff.

C. Publications (individual and organizational)

No publications resulted from this quarter of work; we are submitting some of the initial work on ViSE soon. Once complete, that work will be publicly posted on the ViSE Trac website as a UMass technical report.

D. Outreach activities

We had no significant outreach activities this quarter. However, during the summer we plan to hold a seminar for REU undergraduate students on the construction of ViSE nodes and sensors, and we are still on track to hold a January class on ViSE at UPRM in Puerto Rico.

E. Collaborations

We have been working closely with the DOME project in our cluster at UMass and with the Orca-BEN project at Duke. We integrated the Orca control framework before its official release date 6 months into Spiral 1. Additionally, we are aiding DOME in integrating the Orca control framework. Finally, at GEC4 we discussed integration with the Kansei project at Ohio State, and we hope to discuss this further with them at “Orca-fest” in mid-May.

F. Other Contributions


