Opened 15 years ago

Closed 15 years ago

#49 closed (fixed)

ViSE GEC4 Demonstration

Reported by: David Irwin Owned by: David Irwin
Priority: major Milestone:
Component: VISE Version: SPIRAL1
Keywords: Cc: David Irwin
Dependencies:

Description

The ViSE project will demonstrate sensor control using the Orca control framework, sensor scheduling, and our progress toward sensor virtualization. The specific demonstration description below is subject to change and simplification, depending on our progress near the end of March. The primary demonstration is #1 below; #2 and #3 will depend on our progress at that point.

Requirements: We will require 2 A/C outlets for plugging in two sensors and two laptops (we will bring a power strip ourselves). We will need table space for our equipment and tack space if a poster is required for GEC4. We will also need wireless access for #3 below. We would like to be placed next to the Orca/BEN project from RENCI/Duke. We would prefer a large monitor to plug our laptop(s) into, as we will not be transporting a monitor ourselves, but this is not required.

  1. We will bring a Pan-Tilt-Zoom (PTZ) video camera and a Davis Pro weather station to GEC4 to act as example sensors (note: our radars are too large to transport to Miami). We will conduct the demonstration on a single laptop with the PTZ video camera connected via Ethernet. The laptop will be a "GENI in a bottle": we will run 4 VMware virtual machines on the laptop, hosting a GENI Aggregate Manager (an Orca site authority) and GENI Clearinghouse (an Orca broker), 2 GENI Experiments, and 1 GENI component that the aggregate manager controls. The GENI component will be a VMware virtual machine that runs an instance of the Xen virtual machine monitor inside of it. The GENI Aggregate Manager will be empowered to create slivers as Xen virtual machines on the GENI component (since we have only a single component in the demonstration, these slivers correspond to a slice); a sketch of such a sliver configuration follows this list. The Experiments will communicate with the Clearinghouse and Aggregate Manager using SOAP network communication.

Importantly, the GENI component VM will be able to control access to the PTZ camera by attaching and detaching virtual network interfaces to/from experiment VMs. Each experiment will request a slice composed of a single Xen VMM sliver with a reserved proportion of CPU, memory, bandwidth, etc. The experiments will then compete for control of, and access to, the PTZ camera by requesting a lease for it from the Clearinghouse and directing the Aggregate Manager to attach it (in the form of a virtual network interface) to their sliver. Only a single Experiment can control the camera at one time, so the Clearinghouse must schedule access to it accordingly (a sketch of this scheduling policy follows this list). We will use the default Orca web portal to display the process, and the PTZ camera web portal in both experiments to show the status of the camera.

  2. We will also show our progress on true sensor virtualization in the Xen virtual machine monitor. In the case of the camera, the "virtualization" takes the form of permitting full access to the camera by one, and only one, VM through its virtual network interface. We are currently integrating virtualized sensing devices into Xen's device driver framework. We will show our progress towards "virtualizing" a Davis Pro weather station that physically connects to a USB port on the host. Our initial goal along this thread is to have the Davis Pro software run inside of a Xen VM on top of a virtual serial driver that "passes through" requests to the physical device (a user-space sketch of this pass-through idea follows this list). This is the first step towards our milestones near the end of the year for sensor slivering.
  3. While the above demonstrations will be local, we will also have remote access to our testbed in Massachusetts, which runs the Orca software, per our milestone in February. We also hope to demonstrate the basic capabilities of our weather radar remotely, using a standard reflectivity map.
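
To make the sliver mechanics in #1 concrete, here is a minimal sketch of what one experiment's Xen domain configuration might look like. Xen 3.x "xm" config files use Python syntax; every name and value below is illustrative, not our actual configuration.

{{{
# Hypothetical Xen 3.x domain configuration for one experiment sliver.
# All names, paths, and values here are illustrative assumptions.
name   = "experiment1-sliver"
kernel = "/boot/vmlinuz-2.6-xen"
memory = 256                        # reserved MB of RAM for this sliver
vcpus  = 1                          # reserved CPU allocation
disk   = ['phy:/dev/vg0/exp1,xvda,w']
# The sliver starts with no path to the camera. While an experiment
# holds the camera lease, the aggregate manager can attach a second
# virtual interface on the camera's bridge, e.g.:
#   xm network-attach experiment1-sliver bridge=cambr
# and detach it again when the lease ends.
vif    = ['bridge=xenbr0']
}}}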
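
The scheduling constraint in #1 (only one experiment controls the camera at a time) boils down to an exclusive lease with a queue of waiting requests. The Python sketch below illustrates the idea only; it is not Orca's broker code, and the names (ExclusiveLeaseBroker, request_lease) are ours.

{{{
# Minimal sketch of exclusive-access scheduling for a single sensor:
# one lease holder at a time, later requests queue until the current
# lease is released or expires. Illustrative only, not Orca code.
import time
from collections import deque

class ExclusiveLeaseBroker:
    def __init__(self, resource):
        self.resource = resource      # e.g., "ptz-camera"
        self.holder = None            # (experiment, expiry) or None
        self.waiting = deque()

    def request_lease(self, experiment, duration):
        now = time.time()
        if self.holder and self.holder[1] <= now:
            self.holder = None        # previous lease expired
        if self.holder is None:
            self.holder = (experiment, now + duration)
            return True               # granted; AM may attach the vif
        self.waiting.append((experiment, duration))
        return False                  # queued until the holder releases

    def release(self, experiment):
        if self.holder and self.holder[0] == experiment:
            self.holder = None
            if self.waiting:
                nxt, dur = self.waiting.popleft()
                self.holder = (nxt, time.time() + dur)

broker = ExclusiveLeaseBroker("ptz-camera")
assert broker.request_lease("experiment1", 300)      # granted
assert not broker.request_lease("experiment2", 300)  # queued
broker.release("experiment1")                        # experiment2 now holds it
}}}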
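
Finally, a user-space analogue of the "pass through" behavior described in #2: bytes written to a pseudo-terminal, which the guest's Davis Pro software would treat as its serial port, are relayed to the physical device, and responses are relayed back. The real work targets Xen's split device-driver framework; this sketch only illustrates the data path, assumes the third-party pyserial package, and guesses at the device path and baud rate.

{{{
# User-space sketch of a pass-through serial relay. Not the actual
# Xen virtual serial driver; device path and baud rate are assumptions.
import os, select
import serial  # third-party pyserial package

phys = serial.Serial("/dev/ttyUSB0", 19200, timeout=0)  # weather station
master, slave = os.openpty()
print("Point the Davis Pro software at:", os.ttyname(slave))

while True:
    readable, _, _ = select.select([master, phys.fileno()], [], [])
    if master in readable:            # guest -> device
        phys.write(os.read(master, 1024))
    if phys.fileno() in readable:     # device -> guest
        data = phys.read(1024)
        if data:
            os.write(master, data)
}}}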

Change History (7)

comment:1 Changed 15 years ago by hdempsey@bbn.com

Sounds very interesting! I just want to confirm that you're bringing the two laptops. Are all your ethernet connections local, or do you need any ethernet to the ORCA clearinghouse project in the same area? Would a projector to connect to the laptop be suitable if there are no large monitors? Posters are expected, so I will request that.

comment:2 Changed 15 years ago by hdempsey@bbn.com

Owner: changed from hdempsey@geni.net to David Irwin

comment:3 Changed 15 years ago by hdempsey@bbn.com

Yes, we will be bringing two laptops, and the wired ethernet connections will be local. We don't need any ethernet to Orca or to the external Internet. A projector is suitable if there are no large monitors.

comment:4 Changed 15 years ago by hdempsey@bbn.com

Largest available monitor is 19". Would you prefer that or a projector?

Projectors will be rented and fairly expensive, so if you have a portable projector you can bring, and you'd prefer that, please let me know.

comment:5 Changed 15 years ago by hdempsey@bbn.com

ViSE is using a monitor.

comment:6 Changed 15 years ago by hdempsey@bbn.com

Revised ViSE demonstration description from David Irwin:

The ViSE project is demonstrating sensor control using the Orca control framework, sensor scheduling, and our initial progress toward sensor virtualization.

A Pan-Tilt-Zoom (PTZ) video camera and a Davis Pro weather station are two of the three sensors currently a part of the ViSE testbed (note: our radars are too large to transport to Miami). The first part of the demonstration uses the PTZ video camera connected to a single laptop. The laptop represents a "GENI in a bottle" by executing a collection of Orca actor servers in a set of VMware virtual machines. The actors represent a GENI aggregate manager (an Orca site authority), a GENI clearinghouse (an Orca broker), and 2 GENI experiments (Orca slice controllers). Additionally, one VMware virtual machine runs an instance of the Xen VMM, is connected to the PTZ video camera, and serves as an example component. The GENI aggregate manager is empowered to create slivers as Xen virtual machines on the GENI component, and the experiments communicate with the clearinghouse and aggregate manager to guide the creation of slices.

Importantly, the GENI aggregate manager controls access to the PTZ camera by interposing on the communication between the camera and the experiment VMs. Each experiment requests a slice composed of a single Xen VMM sliver with a reserved proportion of CPU, memory, bandwidth, etc. The experiments then compete for control of, and access to, the PTZ camera by requesting a lease for it from the Clearinghouse and directing the Aggregate Manager to attach it (in the form of a virtual network interface) to their sliver. Only a single Experiment can control the camera at one time, so the Clearinghouse must schedule access to it accordingly. We use the default Orca web portal to display the process, and the PTZ camera web portal in both experiments to show the status of the camera.

We also show our progress on true sensor virtualization in the Xen virtual machine monitor. In the case of the camera, the "virtualization" takes the form of permitting full access to the camera by one, and only one, VM through its virtual network interface. We are currently integrating virtualized sensing devices into Xen's device driver framework. We show our progress towards "virtualizing" a Davis Pro weather station that physically connects to a USB port on the host. Our initial goal along this thread is to have the Davis Pro software run inside of a Xen VM on top of a virtual serial driver that "passes through" requests to the physical device. This is the first step towards our milestones near the end of the year for sensor slivering. This demonstration takes the form of a web portal for the weather station, running inside the Xen VM, updating sensor readings in real time.

comment:7 Changed 15 years ago by hdempsey@bbn.com

Resolution: fixed
Status: new → closed

Demo held 3/31/09.
