Completed Xen sensor virtualization[[BR]]

We completed an initial research-grade implementation of sensor virtualization in Xen and released a technical report that applies the approach to Pan-Tilt-Zoom video cameras. [[BR]]
As detailed in our previous quarterly status report, we have faced challenges in applying the same techniques to the Raymarine radars in the ViSE testbed because their drivers do not operate by default inside of Xen's Domain-0 management domain. The problem affects other high-bandwidth I/O devices used with Xen, and is being actively worked on in the Xen community. As these problems are worked out, we have transitioned to using vservers as ViSE's preferred virtualization technology, and developed vserver handlers for Orca. We are also porting MultiSense to work with vservers as well as with Xen; its modular design makes the port straightforward.[[BR]]
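As a rough illustration of what such a handler must do, the sketch below shows the basic guest lifecycle operations wrapped around the util-vserver command-line tools. This is a minimal sketch for illustration only; the function names and parameters are ours, not the actual Orca handler interface.

{{{
#!python
# Minimal sketch (illustrative, not the actual Orca handler code) of the
# vserver guest lifecycle operations a resource handler wraps, driven
# through the util-vserver command-line tools.
import subprocess

def create_guest(name, context_id):
    """Build a new vserver guest from a skeleton filesystem template."""
    subprocess.run(["vserver", name, "build", "-m", "skeleton",
                    "--context", str(context_id)], check=True)

def start_guest(name):
    """Boot the guest."""
    subprocess.run(["vserver", name, "start"], check=True)

def stop_guest(name):
    """Shut the guest down and release its resources."""
    subprocess.run(["vserver", name, "stop"], check=True)
}}}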

Our demonstration at GEC5 in Seattle showed non-slivered VM access to radar control and data using vservers; once we complete our port of MultiSense we will be able to support slivered access. [[BR]]

A more detailed description of Xen, vservers, and the sensors in ViSE is available in ViSE's quarter 2 quarterly report. Since releasing our technical report, we have improved the work and re-submitted it to the NSDI conference; the improved work includes experiments with steerable radars.[[BR]]

Completed integration with ORCA in Cluster D[[BR]]

ViSE is running the latest reference implementation of the Shirako/Orca codebase. Note that Milestone 1c (completed February 1st, 2009) required ViSE to perform an initial integration of Shirako/Orca before an official reference implementation had been released; see that milestone for specific details of the integration. Incorporating the latest reference implementation required only minor code porting. Additionally, as a result of the Orca-fest conference call on May 28th, the GENI Project Office and Cluster D set mini-milestones that were not in the original Statement of Work. These milestones are related to Milestone 1e, since they involve the particular instantiation of Orca that Cluster D will use. In particular, by June 15th, 2009, we upgraded our Orca actors to support secure SOAP messages. [[BR]]

As part of this mini-milestone, Brian Lynn of the DOME project and the ViSE project also set up a control plane server that will host the aggregate manager and portal servers for both the DOME and ViSE projects. This server has the DNS name geni.cs.umass.edu. The server includes four network interface cards: one connects to a gateway ViSE node on the CS department roof, one will connect to an Internet2 backbone site (via a VLAN), one connects to the public Internet, and one connects to an internal development ViSE node. During the Orca-fest and subsequent Cluster D meetings we set the milestone for making this shift between August 15th, 2009 and September 1st, 2009.[[BR]]

While not in our official list of milestones, Harry Mussman asked our Cluster to switch to using a remote Clearinghouse by October 1st, and we have made this switch. We first made use of a jail that we controlled at RENCI/Duke to set up and test this Clearinghouse in mid-August, and on September 28th, 2009, we sent email to RENCI asking them to switch us over to their Clearinghouse.[[BR]]

Made testbed available for public use within our cluster[[BR]]

Our testbed is available for limited use within our cluster. We are soliciting a select group of users to help us work out the bugs and kinks in the testbed and determine what needs to be improved. The portal for our testbed is available at http://geni.cs.umass.edu/vise. [[BR]]

Note that, as we develop our sensor virtualization technology, we are initially allowing users safe access to dedicated hardware: the sensors and the wireless NIC. [[BR]]

Right now, we are targeting two types of users for our testbed. The first type is users who wish to experiment with long-distance 802.11b wireless communication. Long-distance links are difficult to set up because they require access to towers and other infrastructure to provide line-of-sight. Our two 10 km links are thus useful to outside researchers working on these problems, and a number of students at UMass-Amherst are using the testbed to solve problems in this area. [[BR]]
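To give a sense of why such infrastructure is needed, the back-of-the-envelope calculation below (ours, for illustration; the figures are not from the testbed) estimates the first Fresnel zone radius for a 10 km link at 2.4 GHz, i.e., the mid-path clearance an unobstructed link requires.

{{{
#!python
# Back-of-the-envelope estimate (illustrative) of the first Fresnel zone
# radius at the midpoint of a 10 km 802.11b link on channel 1 (2.412 GHz).
import math

C = 3.0e8         # speed of light (m/s)
FREQ = 2.412e9    # 802.11b channel 1 (Hz)
D1 = D2 = 5000.0  # distance from each endpoint to mid-path (m)

wavelength = C / FREQ
# First Fresnel zone radius: r = sqrt(lambda * d1 * d2 / (d1 + d2))
radius = math.sqrt(wavelength * D1 * D2 / (D1 + D2))
print("Mid-path clearance needed: %.1f m" % radius)  # roughly 17.6 m
}}}

Clearance on the order of tens of meters over the full path is why rooftop and tower placement is essential for links of this length.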

The second type of user is radar researchers who can leverage our radar deployment. We are working with students from Puerto Rico and other researchers in CASA to interpret and improve the quality of our radars' data and to test the data with detection algorithms. We are soliciting feedback from these users about what they need to do on these nodes, and how the testbed can satisfy their needs. Note that our testbed interacts with a remote Clearinghouse run by RENCI/Duke to facilitate resource allocation.[[BR]]