id,summary,reporter,owner,description,status,priority,milestone,component,version,resolution,keywords,cc,dependencies 104,milestone 1d completion,hmussman@bbn.com,hmussman@bbn.com,"Per 1Q09 QSR: Milestone 4. [M4] Operational web portal and testbed; permits users to: log in and request slices composed of leases for compute slivers (including dedicated sensors under control of dom0) bound to Xen VMs; upload/download files; and execute processes. April 1st, 2009. [[BR]] Demonstration 1. Demonstration at GEC4, April 1st, 2009. [[BR]] The ViSE demonstration at GEC4 presented the result of completing Milestone 4, an operational web portal and testbed. The description of the GEC4 demo is as follows: the ViSE project demonstrated sensor control using the Orca control framework, sensor scheduling, and our initial progress toward sensor virtualization. [[BR]] A Pan-Tilt-Zoom (PTZ) video camera and a Davis Pro weather station are two of the three sensors currently part of the ViSE testbed (note: our radars are too large to transport to Miami). The first part of the demonstration uses the PTZ video camera connected to a single laptop. The laptop represents a ""GENI in a bottle"" by executing a collection of Orca actor servers in a set of VMware virtual machines. The actors represent a GENI aggregate manager (an Orca site authority), a GENI clearinghouse (an Orca broker), and two GENI experiments (Orca slice controllers). Additionally, one VMware virtual machine runs an instance of the Xen VMM, is connected to the PTZ video camera, and serves as an example component. The GENI aggregate manager is empowered to create slivers as Xen virtual machines on the GENI component, and the experiments communicate with the clearinghouse and aggregate manager to guide the creation of slices. [[BR]] Importantly, the GENI aggregate manager controls access to the PTZ camera by interposing on the communication between the camera and the experiment VMs. Each experiment requests a slice composed of a single Xen VM sliver with a reserved proportion of CPU, memory, bandwidth, etc. The experiments then compete for control of, and access to, the PTZ camera by requesting a lease for it from the clearinghouse and directing the aggregate manager to attach it (in the form of a virtual network interface) to their sliver; only a single experiment can control the camera at one time, so the clearinghouse must schedule access to it accordingly. We use the default Orca web portal to display the process, and the PTZ camera web portal in each experiment to show the status of the camera. [[BR]] We also show our progress on true sensor virtualization in the Xen virtual machine monitor. In the case of the camera, the ""virtualization"" takes the form of permitting full access to the camera by one, and only one, VM through its virtual network interface. We are currently integrating virtualized sensing devices into Xen's device driver framework. We show our progress toward ""virtualizing"" a Davis Pro weather station that physically connects over a USB port. Our initial goal along this thread is to have the Davis Pro software run inside a Xen VM on top of a virtual serial driver that ""passes through"" requests to the physical device. This is the first step toward our milestones near the end of the year for sensor slivering. This demonstration takes the form of a web portal for the weather station running inside the Xen VM, updating sensor readings in real time.",closed,major,,VISE,SPIRAL1,invalid,,hmussman@bbn.com,
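
To make the sliver notion above concrete, here is a minimal sketch of a Xen (xm) guest configuration for one experiment sliver. Every name, path, and resource value is hypothetical, not taken from the ViSE testbed; the vif entry stands for the virtual network interface through which the aggregate manager interposes on camera traffic.

{{{
# Hypothetical Xen (xm) guest configuration for one experiment sliver.
# All names, paths, and values are illustrative, not from ViSE.
name    = "vise-sliver-exp1"
memory  = 256                            # MB of reserved memory
vcpus   = 1                              # reserved share of CPU
kernel  = "/boot/vmlinuz-2.6.18-xen"
disk    = ['phy:/dev/vg0/exp1,xvda,w']
vif     = ['bridge=xenbr0']              # virtual NIC; the aggregate manager
                                         # attaches/detaches the camera here
extra   = "root=/dev/xvda ro"
}}}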
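The camera arbitration the demo describes (one experiment controls the camera at a time, with the clearinghouse scheduling access) can be sketched as a small exclusive-lease queue. The sketch below models only the concept; it is not Orca's actual broker API, and all names are invented.

{{{#!python
# Illustrative sketch of exclusive lease scheduling for a single resource
# (e.g., the PTZ camera). Concept only; not Orca's API.
from collections import deque

class ExclusiveLeaseBroker:
    def __init__(self, resource):
        self.resource = resource
        self.holder = None          # experiment currently holding the lease
        self.waiting = deque()      # experiments queued for the resource

    def request_lease(self, experiment):
        """Grant the lease if the resource is free; otherwise queue."""
        if self.holder is None:
            self.holder = experiment
            return True             # lease granted immediately
        self.waiting.append(experiment)
        return False                # queued; granted on a later release

    def release_lease(self, experiment):
        """Release the lease and hand it to the next waiting experiment."""
        assert experiment == self.holder
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder          # new holder, or None if idle

broker = ExclusiveLeaseBroker("ptz-camera")
broker.request_lease("experiment-1")   # granted
broker.request_lease("experiment-2")   # queued behind experiment-1
broker.release_lease("experiment-1")   # experiment-2 now holds the camera
}}}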