Custom Query (1408 matches)

Results (70 - 72 of 1408)

Ticket Resolution Summary Owner Reporter
#83 invalid Status of ticket hdempsey@bbn.com pgunn@cs.cmu.edu
Description

Regarding: "Data plane integration with ProtoGENI. Be able to run experiments that share packets between ProtoGENI central (for our purposes, the main Emulab cluster) and the wireless testbeds. Use Internet2 VLAN support to get packets from Emulab to our ProtoGENI control node (if feasible). Use IP-in-IP tunneling to get packets from the CMU control node to HomeNet nodes (these nodes, being residential, will not have access to Internet2 VLANs). Note this is ONLY the data-plane tunneling mechanisms, NOT the control mechanisms needed to establish it."
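For context, IP-in-IP tunneling (RFC 2003) works by prepending an outer IPv4 header, carrying protocol number 4, to the inner packet. A minimal sketch of the encapsulation step, with hypothetical tunnel endpoint addresses (illustrative only, not the project's code):

```python
import socket
import struct

def ipip_encapsulate(inner_packet: bytes, src: str, dst: str) -> bytes:
    """Wrap an inner IP packet in an outer IPv4 header (RFC 2003).

    src/dst are the tunnel endpoints (e.g. the CMU control node and a
    HomeNet node); the header checksum is left zero in this sketch.
    """
    total_len = 20 + len(inner_packet)
    outer = struct.pack(
        "!BBHHHBBH4s4s",
        0x45,                    # version 4, header length 5 words
        0,                       # DSCP/ECN
        total_len,               # total length of the outer packet
        0,                       # identification
        0,                       # flags / fragment offset
        64,                      # TTL
        4,                       # protocol 4 = IP-in-IP
        0,                       # header checksum (omitted here)
        socket.inet_aton(src),   # outer source address
        socket.inet_aton(dst),   # outer destination address
    )
    return outer + inner_packet
```

Decapsulation at the far end is the inverse: strip the 20-byte outer header and forward the inner packet.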

Right now this is possible using manually created OpenVPN tunnels between Utah's Emulab cluster systems and the wireless testbeds. I believe this meets the requirement of this ticket. Work is underway on software that will automatically create these tunnels ("the control mechanisms needed to establish it"); when that is complete, we will need to integrate it into the existing ProtoGENI federation code.

#84 invalid Duplicate task? hdempsey@bbn.com pgunn@cs.cmu.edu
Description

This task appears to duplicate other existing tickets assigned to our group.

#104 invalid milestone 1d completion hmussman@bbn.com hmussman@bbn.com
Description

Per 1Q09 QSR:

Milestone 4. [M4] Operational web portal and testbed, permits users to: login and request slices composed of leases for compute slivers (including dedicated sensors under control of dom0) bound to Xen VMs; upload/download files; execute processes. April 1st, 2009.

Demonstration 1.

Demonstration at GEC4, April 1st, 2009.

The ViSE demonstration at GEC4 presented the result of completing Milestone 4, an operational web portal and testbed. The description of the GEC4 demo is as follows: the ViSE project demonstrated sensor control using the Orca control framework, sensor scheduling, and our initial progress toward sensor virtualization.

A Pan-Tilt-Zoom (PTZ) video camera and a DavisPro weather station are two of the three sensors currently part of the ViSE testbed (note: our radars are too large to transport to Miami). The first part of the demonstration uses the PTZ video camera connected to a single laptop. The laptop represents a “GENI in a bottle” by executing a collection of Orca actor servers in a set of VMware virtual machines. The actors represent a GENI aggregate manager (an Orca site authority), a GENI clearinghouse (an Orca broker), and two GENI experiments (Orca slice controllers). Additionally, one VMware virtual machine runs an instance of the Xen VMM, is connected to the PTZ video camera, and serves as an example component. The GENI aggregate manager is empowered to create slivers as Xen virtual machines on the GENI component, and the experiments communicate with the clearinghouse and aggregate manager to guide the creation of slices.

Importantly, the GENI aggregate manager controls access to the PTZ camera by interposing on the communication between the camera and the experiment VMs. Each experiment requests a slice composed of a single Xen VM sliver with a reserved proportion of CPU, memory, bandwidth, etc. The experiments then compete for control of, and access to, the PTZ camera by requesting a lease for it from the clearinghouse and directing the aggregate manager to attach it (in the form of a virtual network interface) to their sliver. Only a single experiment can control the camera at one time, so the clearinghouse must schedule access to it accordingly. We use the default Orca web portal to display the process, and the PTZ camera web portal in both experiments to show the status of the camera.
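The exclusive-access scheduling described above can be modeled as a simple lease queue: the clearinghouse grants the camera lease to one experiment at a time and queues later requests until the current holder releases it. A toy sketch (class and method names are ours, not Orca's API):

```python
from collections import deque

class Clearinghouse:
    """Toy model of exclusive lease scheduling: only one experiment
    may hold the lease on a resource (e.g. the PTZ camera) at a time;
    other requesters wait in FIFO order."""

    def __init__(self, resource):
        self.resource = resource
        self.holder = None        # experiment currently holding the lease
        self.queue = deque()      # experiments waiting for the lease

    def request_lease(self, experiment):
        if self.holder is None:
            self.holder = experiment
            return True           # lease granted immediately
        self.queue.append(experiment)
        return False              # queued until the holder releases

    def release_lease(self, experiment):
        assert experiment == self.holder, "only the holder may release"
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder        # next holder, if any
```

The FIFO policy here is an assumption for illustration; Orca brokers support richer allocation policies.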

We also show our progress on true sensor virtualization in the Xen virtual machine monitor. In the case of the camera, the “virtualization” takes the form of permitting full access to the camera by one, and only one, VM through its virtual network interface. We are currently integrating virtualized sensing devices into Xen's device driver framework. We show our progress toward “virtualizing” a Davis Pro weather station that physically connects over a USB serial port. Our initial goal along this thread is to have the Davis Pro software run inside a Xen VM on top of a virtual serial driver that “passes through” requests to the physical device. This is the first step toward our milestones near the end of the year for sensor slivering. The demonstration takes the form of a web portal for the weather station, running inside the Xen VM, updating sensor readings in real time.
