Opened 14 years ago

Closed 14 years ago

#148 closed (fixed)

Milestone S1.h completion

Reported by:
Owned by: David Irwin
Priority: major
Milestone: ViSE: S1.h Virtualization of actuators; single guest VM; demo
Component: VISE
Version: SPIRAL1
Keywords:
Cc:


From 2Q09 QSR on 6/30/09:

We have made significant progress on Xen sensor virtualization. Our recently submitted work demonstrates the virtualization and slivering of a pan-tilt-zoom video camera within the Xen device driver framework. In this case, slivering involves interleaving camera actuations at a fine grain between multiple VMs. This quarter we began to look at virtualizing the ViSE radars in the same way. One difficulty that arose is that, unlike the pan-tilt-zoom camera, the device driver for the analog-to-digital card that connects to the radar does not work by default in domain-0 of Xen (i.e., the host VM). The reason is that the analog-to-digital card makes heavy use of DMA (since it transfers substantial amounts of data), which domain-0 does not directly support. We are currently porting the driver to work with Xen, but this requires significant changes to the driver and, potentially, to Xen itself. Others in the Xen community are also looking at similar problems related to DMA and PCI devices.
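As an illustration of the slivering idea described above, the sketch below interleaves pending actuation requests from multiple VMs in round-robin order so that no single VM monopolizes the shared device. This is only a minimal sketch of the scheduling concept; the VM names, request strings, and function name are hypothetical and not part of the actual Xen implementation.

```python
from collections import deque

def sliver_schedule(vm_queues):
    """Interleave per-VM actuation requests at a fine grain.

    vm_queues maps a VM id to its ordered list of pending actuations;
    returns the round-robin order in which the shared device would
    execute them.
    """
    queues = {vm: deque(reqs) for vm, reqs in vm_queues.items()}
    schedule = []
    while any(queues.values()):
        # One pass per round: take at most one request from each VM.
        for vm, q in queues.items():
            if q:
                schedule.append((vm, q.popleft()))
    return schedule
```

For example, with two VMs sharing a pan-tilt-zoom camera, requests alternate between the VMs until one queue drains, after which the remaining VM's requests run back to back.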

As a result, to meet this Spiral 1 milestone we are transitioning to using VServers while we work through these problems. VServers are a less powerful, but more robust, virtualization technology in wide use; PlanetLab, for instance, uses them exclusively. ViSE users, who primarily use the sensors on each node, should not be affected by the change. We have chosen VServers while we work through the ADC issues with Xen because VServers allow direct access to device files from VMs, while also allowing dynamic allocation and revocation of (sensing) devices. These are prerequisites for integrating a device with GENI. Since VServers virtualize at the OS level, the standard device drivers for the radar's analog-to-digital card work out of the box. One side benefit of using VServers in the interim is that we will develop VServer resource handlers for Orca that other groups will be able to use (if they desire). With VServers we are on track to complete this milestone by August 1st, 2009, as scheduled. Once Xen support becomes available, we should be able to switch back to Xen easily.
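The device-file mechanism mentioned above can be sketched as follows. Because VServer guests share the host kernel, granting a guest direct access to a sensing device amounts to recreating the matching device node inside the guest's root filesystem, and revocation amounts to removing it. The helper names and guest-root path below are illustrative assumptions, not util-vserver commands or the actual Orca handler code.

```python
import os
import stat

def grant_device(guest_root, host_dev):
    """Recreate a host character-device node inside a guest's root
    filesystem, giving the guest direct access to the device."""
    st = os.stat(host_dev)
    node = os.path.join(guest_root, "dev", os.path.basename(host_dev))
    os.makedirs(os.path.dirname(node), exist_ok=True)
    # Same major/minor numbers as the host's node; creating a device
    # node requires CAP_MKNOD on the host.
    os.mknod(node, mode=stat.S_IFCHR | 0o660, device=st.st_rdev)
    return node

def revoke_device(guest_root, host_dev):
    """Withdraw the guest's access by deleting the device node."""
    os.remove(os.path.join(guest_root, "dev",
                           os.path.basename(host_dev)))
```

A resource handler could call `grant_device` when a sliver is allocated and `revoke_device` when its lease expires, which matches the dynamic allocation and revocation described above.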

Change History (1)

comment:1 Changed 14 years ago

Component: GPO → VISE
Resolution: fixed
Status: new → closed

Per 3Q09 status report:

July 31st, 2009. Complete Xen sensor virtualization. We completed an initial research-grade implementation of sensor virtualization in Xen and released a technical report that applies the approach to pan-tilt-zoom video cameras. The technical report is available on the web and is also attached to this milestone report.

As detailed in our previous quarterly status report, we have faced challenges in applying the same techniques to the Raymarine radars in the ViSE testbed because their drivers do not operate by default inside Xen's Domain-0 management domain. The problem affects other high-bandwidth I/O devices under Xen and is being actively worked on in the Xen community. While these problems are worked out, we have transitioned to using vservers as ViSE's preferred virtualization technology and have developed vserver handlers for Orca. We are also porting MultiSense to work with vservers as well as with Xen; its modular design makes this port straightforward.

Our demonstration at GEC5 in Seattle showed non-slivered VM access to radar control and data using vservers; once we complete our port of MultiSense we will be able to support slivered access. A more detailed description of Xen, vservers, and sensors in ViSE is available in the quarter 2 quarterly report for ViSE. Since releasing our technical report, we have improved the work and resubmitted it to the NSDI conference. The improved work, which includes experiments with steerable radars, is also attached to this report.
