Changes between Version 65 and Version 66 of ViSE

10/14/09 10:48:02



  • ViSE

ViSE Node Deployment on Mt. Toby Fire Tower[[BR]]
Completed Xen sensor virtualization[[BR]]
We completed an initial research-grade implementation of sensor virtualization in Xen and released a technical report that applies the approach to Pan-Tilt-Zoom video cameras. [[BR]]
As detailed in our previous quarterly status report, we have faced challenges in applying the same techniques to the Raymarine radars in the ViSE testbed because their drivers do not operate by default inside of Xen's Domain-0 management domain. The problem also affects other high-bandwidth I/O devices under Xen, and is being actively worked on in the Xen community. As these problems are worked out, we have transitioned to using vservers as ViSE's preferred virtualization technology, and developed vserver handlers for Orca. We are also porting MultiSense to work with vservers as well as Xen; its modular design makes this port straightforward.[[BR]]
Our demonstration at GEC5 in Seattle showed non-slivered VM access to radar control and data using vservers; once we complete our port of MultiSense we will be able to support slivered access. [[BR]]
A more detailed description of Xen, vservers, and sensor virtualization in ViSE is available in the quarter 2 quarterly report for ViSE. Since releasing our technical report, we have improved the work and re-submitted it to the NSDI conference. The improved submission includes experiments with steerable radars.[[BR]]
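The claim that MultiSense's modular design makes the vserver port straightforward can be illustrated with a pluggable backend interface, where per-technology handlers hide the difference between Xen and vserver slivers from the rest of the system. This is only a hypothetical sketch; the class and method names below are invented for illustration and are not Orca's or MultiSense's actual API:

```python
from abc import ABC, abstractmethod

class SliverHandler(ABC):
    """Backend-neutral interface for creating and tearing down slivers.
    (Illustrative only; the real Orca handler plug-in interface differs.)"""

    @abstractmethod
    def create(self, sliver_id: str) -> str:
        """Create a sliver and return its backend-specific identifier."""

    @abstractmethod
    def destroy(self, sliver_id: str) -> None:
        """Tear down a previously created sliver."""

class XenHandler(SliverHandler):
    def create(self, sliver_id: str) -> str:
        # In a real handler this would invoke Xen domain-creation tooling.
        return f"xen:{sliver_id}"

    def destroy(self, sliver_id: str) -> None:
        pass  # would shut down and remove the Xen domain

class VserverHandler(SliverHandler):
    def create(self, sliver_id: str) -> str:
        # In a real handler this would invoke vserver build/start tooling.
        return f"vserver:{sliver_id}"

    def destroy(self, sliver_id: str) -> None:
        pass  # would stop and remove the vserver

def provision(handler: SliverHandler, sliver_id: str) -> str:
    # Caller code is identical regardless of the virtualization backend,
    # which is what makes swapping Xen for vservers a localized change.
    return handler.create(sliver_id)
```

With this structure, porting to a new virtualization technology means writing one new handler class rather than touching the provisioning logic.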
Completed integration with ORCA in Cluster D[[BR]]
ViSE is running the latest reference implementation of the Shirako/Orca codebase. Note that Milestone 1c (completed February 1st, 2009) required ViSE to perform an initial integration of Shirako/Orca prior to an official reference implementation being released. See that milestone for specific details related to the integration. Incorporating the latest reference implementation required only minor code porting. Additionally, as a result of the Orca-fest conference call on May 28th, the GENI Project Office and Cluster D set mini-milestones that were not in the original Statement of Work. These milestones are related to Milestone 1e, since they involve the particular instantiation of Orca that Cluster D will use. In particular, by June 15th, 2009, we upgraded our ORCA actors to support secure SOAP messages. [[BR]]
As part of this mini-milestone, Brian Lynn of the DOME project and the ViSE project also set up a control plane server that will host the aggregate manager and portal servers for both the DOME and ViSE projects. The server has a dedicated DNS name and includes four network interface cards: one connects to a gateway ViSE node on the CS department roof, one will connect to an Internet2 backbone site (via a VLAN), one connects to the public Internet, and one connects to an internal development ViSE node. During the Orca-fest and subsequent Cluster D meetings we set the milestone for shifting to this server within the range of August 15th, 2009 to September 1st, 2009.[[BR]]
While not in our official list of milestones, Harry Mussman asked our Cluster to switch to using a remote Clearinghouse by October 1st. We have made this switch. We first made use of a jail that we controlled at RENCI/Duke to set up and test this Clearinghouse in mid-August, and on September 28th, 2009 we sent email to RENCI asking them to switch us over to their Clearinghouse.[[BR]]
Made testbed available for public use within our cluster.[[BR]]
Our testbed is available for limited use within our cluster. We are soliciting a select group of users to allow us to work out the bugs and kinks in the testbed and determine what needs to be improved. The portal for our testbed is available at [[BR]]
Note that as we develop our sensor virtualization technology, we are initially allowing users to safely access dedicated hardware: the sensors and the wireless NIC. [[BR]]
For now, we are targeting two types of users for our testbed. The first type is users who wish to experiment with long-distance 802.11b wireless communication. Long-distance links are difficult to set up because they require access to towers and other infrastructure to provide line-of-sight. Our two 10km links are thus useful to outside researchers working on these problems. A number of students at UMass-Amherst are using the testbed to solve problems in this area. [[BR]]
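Part of what makes 10km 802.11b links challenging, beyond line-of-sight, is the sheer path loss involved, which the standard free-space path-loss formula quantifies. A minimal sketch (the 2412 MHz figure is an illustrative choice of 802.11b channel 1, not a frequency taken from our deployment):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB using the standard formula:
    FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44"""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A 10 km link on 802.11b channel 1 (2412 MHz) loses roughly 120 dB
# in free space alone, before antenna gains, cable losses, and fading.
loss = fspl_db(10, 2412)
```

A budget this tight is why such links typically need high-gain directional antennas at both ends, and why tower access for clear line-of-sight matters so much.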
The second type of user is radar researchers who can leverage our radar deployment. We are working with students from Puerto Rico and other researchers in CASA to interpret and improve the quality of our radar data and to test detection algorithms against it. We are soliciting feedback from these users about what they need to do on these nodes, and how the testbed can satisfy their needs. Note that our testbed interacts with a remote Clearinghouse run by RENCI/Duke to facilitate resource allocation.
== Milestones ==