Custom Query (1408 matches)

Results (133 - 135 of 1408)

Ticket #102 | Owner: blynn@cs.umass.edu | Reporter: hmussman@bbn.com | Resolution: fixed | Summary: milestone 1c completion
Description

Per 1Q09 QSR, work started on milestone 1c:

The emphasis of our work has been to create the environment for executing third-party experiments, and to define an interface for GENI researchers to upload and define experiments. We have made significant progress and will be demonstrating our work at GEC4.

The first phase of our effort was to implement the OS on our bricks as a Xen host domain (dom0), and to support the execution of guest domains (domU). Once this was complete, we needed to ensure that the guest domain had access to the system's networking and peripherals. The solutions that we implemented are listed below.

  • Ethernet link. This was solved with the support of Xen guest domains, since Xen automatically provides support for virtualization (shared access between dom0 and domU) of Ethernet links. Rather than use DHCP to allocate IP addresses, we have implemented a mechanism to assign static IP addresses so that guest domains have well-known addresses accessible through the WiFi access points (APs) attached to each brick's Ethernet port. (A configuration sketch follows this list.)

  • 3G link. The 3G link is used as the DOME control plane. We have chosen to make the link sharable between dom0 and domU by implementing NAT routing of domU Ethernet traffic. By making the control plane link available to domU, the guest domain has access to a relatively reliable link (about 90% connectivity). Furthermore, this link can enable guest domains to offer opt-in experiments involving transit passengers.

  • Atheros WiFi PCI device. We have chosen to hide the PCI address from dom0, making it visible only to the guest domain. This means the guest domain has full, native access to the WiFi device. All features of the Atheros WiFi card and madwifi driver are available to the guest domain. The guest domain may even install a customized device driver.

  • GPS device. We run a gpsd daemon in dom0 that can be accessed from the guest domain (directly, or via the libgps library).
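
For illustration only, the bridged Ethernet interface and the hiding of the WiFi card from dom0 could be expressed in a Xen guest configuration along the following lines. This is a minimal sketch, not the actual DOME configuration; the domain name, kernel path, MAC address, bridge name, and PCI address are placeholders.

    # Hypothetical Xen domU configuration sketch (xm-style, Python syntax).
    # All names, paths, and addresses below are placeholders, not DOME values.
    name   = "guest-experiment"
    memory = 256
    kernel = "/boot/vmlinuz-2.6-xenU"                 # assumed guest kernel
    disk   = ["phy:/dev/vg0/guest-root,xvda1,w"]
    root   = "/dev/xvda1 ro"

    # Bridged virtual Ethernet with a fixed MAC, so the guest can be given a
    # well-known static IP address instead of relying on DHCP.
    vif    = ["mac=00:16:3e:00:00:01,bridge=xenbr0"]

    # Pass the Atheros WiFi card through to the guest (and hide it from dom0),
    # giving domU native access to the device and the madwifi driver.
    pci    = ["0000:02:00.0"]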

The second phase of our work was to implement the ability for guest VMs to be installed, scheduled and launched on bricks. To achieve this, we implemented the following.

  • A database schema was created that defined the following objects and the relations between those objects: users (GENI researchers, UMass members of the DOME community), files (VM partitions), experiments (one or more partitions and the associated resources, such as the WiFi device), and instances of experiments (the scheduling of experiments).

  • A set of server-side scripts was developed to enable bricks to access the DB and retrieve files from the servers.

  • Programs were developed to manage the critical tasks on the bricks.

    o dome_pullexperiments: a daemon that downloads experiments (i.e., all required files) from a server to a brick. The daemon is designed to cope with DOME's disruptive environment of network disconnections and powering-down of equipment. Files are downloaded in chunks and progress is checkpointed so that work can be resumed from a known state. The daemon prioritizes downloads based on schedules and performs garbage collection when disk space becomes a concern. (A sketch of this chunked, checkpointed download pattern follows this list.)
    o dome_getexpschedules: the daemon responsible for making the experiment schedules available to the bricks.
    o dome_cleanexperiments: the daemon responsible for safely removing deprecated experiments from the bricks.
    o dome_runexp: the program responsible for launching a guest VM on the brick. It uses input from dome_pullexperiments and dome_getexpschedules to determine which VM to launch. dome_runexp creates the partitions required by the VM, configures the networking, and makes critical information available to the VM. See the Milestone 1b documentation for more information.
    o Additionally, various utilities (dome_getexpired, dome_killdomu, dome_vmrunning, dome_getrunning) were implemented to monitor the status of guest VMs and to shut down VMs.
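
To make the disruption-tolerant behavior of dome_pullexperiments concrete, a chunked, checkpointed download could look roughly like the Python sketch below. This is illustrative only, not the DOME code; the chunk size is an assumption, the checkpoint is simply the size of the partial file on disk, and the server is assumed to honor HTTP Range requests.

    import os
    import urllib.request

    CHUNK = 256 * 1024  # assumed chunk size

    def pull_file(url, dest):
        """Download url to dest in chunks, resuming from a checkpoint.

        The checkpoint is the size of the partial file already on disk; an
        HTTP Range request resumes from that offset after a disconnection or
        power-down, so completed chunks are not fetched again.
        """
        done = os.path.getsize(dest) if os.path.exists(dest) else 0
        req = urllib.request.Request(url)
        if done:
            req.add_header("Range", "bytes=%d-" % done)
        with urllib.request.urlopen(req) as resp:
            # If the server ignores the Range header (status 200), start over.
            mode = "ab" if done and resp.status == 206 else "wb"
            with open(dest, mode) as out:
                while True:
                    chunk = resp.read(CHUNK)
                    if not chunk:
                        break
                    out.write(chunk)
                    out.flush()
                    os.fsync(out.fileno())  # persist progress across power loss
        return dest

Schedule-based prioritization and garbage collection would sit above a routine like this, deciding which files to pull next and when to reclaim disk space.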

The above is progress toward our next two milestones, 1c and 1d.

Additionally, we have implemented a mechanism for third-party experiments (guest VMs) to generate log files, and for the content of the log files to be asynchronously uploaded to arbitrary servers (i.e., sent to user-defined destinations by dom0 when the guest domain is not executing). See the Milestone 1b documentation for more information. Finally, we have implemented the web interface for: uploading files to a server so that they can be staged for installation on buses; defining experiments; and scheduling experiments. This will be shown at GEC4. This effort and the work defined above are intended to be the foundation for integration with ORCA.
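
As a rough sketch of the asynchronous log-upload idea, a periodic dom0 task could ship a guest's log files to a user-defined destination only while the guest is not running. The directory, destination URL, toolstack command, and domain name below are assumptions, not the DOME implementation.

    import glob
    import os
    import subprocess
    import urllib.request

    LOG_DIR    = "/var/log/dome/guest"        # assumed location of guest log output
    UPLOAD_URL = "https://example.org/logs"   # placeholder user-defined destination

    def guest_running(name="guest-experiment"):
        """Crude check via the Xen toolstack (xl here; xm on older systems)."""
        result = subprocess.run(["xl", "list", name], capture_output=True)
        return result.returncode == 0

    def upload_pending_logs():
        """POST each pending log file and remove it, only while the guest is down."""
        if guest_running():
            return  # upload asynchronously, when the guest domain is not executing
        for path in sorted(glob.glob(os.path.join(LOG_DIR, "*.log"))):
            with open(path, "rb") as f:
                req = urllib.request.Request(UPLOAD_URL, data=f.read(), method="POST")
                urllib.request.urlopen(req)
            os.remove(path)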

Ticket #103 | Owner: blynn@cs.umass.edu | Reporter: hmussman@bbn.com | Resolution: fixed | Summary: milestone 1d completion
Description

Per 1Q09 QSR, work started on milestone 1d:

The emphasis of our work has been to create the environment for executing third-party experiments, and to define an interface for GENI researchers to upload and define experiments. We have made significant progress and will be demonstrating our work at GEC4.

The first phase of our effort was to implement the OS on our bricks as a Xen host domain (dom0), and to support the execution of guest domains (domU). Once this was complete, we needed to ensure that the guest domain had access to the system's networking and peripherals. The solutions that we implemented are listed below.

  • Ethernet link. This was solved with the support of Xen guest domains, since Xen automatically provides support for virtualization (shared access between dom0 and domU) of Ethernet links. Rather than use DHCP to allocate IP addresses, we have implemented a mechanism to assign static IP addresses so that guest domains have well-known addresses accessible through the WiFi access points (APs) attached to each brick's Ethernet port.

  • 3G link. The 3G link is used as the DOME control plane. We have chosen to make the link sharable between dom0 and domU by implementing NAT routing of domU Ethernet traffic. By making the control plane link available to domU, the guest domain has access to a relatively reliable link (about 90% connectivity). Furthermore, this link can enable guest domains to offer opt-in experiments involving transit passengers.

  • Atheros WiFi PCI device. We have chosen to hide the PCI address from dom0, making it visible only to the guest domain. This means the guest domain has full, native access to the WiFi device. All features of the Atheros WiFi card and madwifi driver are available to the guest domain. The guest domain may even install a customized device driver.

  • GPS device. We run a gpsd daemon in dom0 that can be accessed from the guest domain (directly, or via the libgps library), as sketched below.
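
For illustration, a process in the guest domain could read position fixes from the dom0 gpsd either through the libgps bindings or by talking to gpsd's TCP socket directly, as in the sketch below. The dom0 address is a placeholder, and the sketch assumes gpsd's JSON WATCH protocol.

    import json
    import socket

    GPSD_HOST = "192.168.1.1"  # placeholder: dom0 address reachable from the guest
    GPSD_PORT = 2947           # gpsd's standard TCP port

    def read_fixes(count=5):
        """Collect a few time/position reports from gpsd's JSON stream."""
        sock = socket.create_connection((GPSD_HOST, GPSD_PORT))
        stream = sock.makefile("rw")
        stream.write('?WATCH={"enable":true,"json":true}\n')
        stream.flush()
        fixes = []
        while len(fixes) < count:
            line = stream.readline()
            if not line:
                break
            msg = json.loads(line)
            if msg.get("class") == "TPV":  # time-position-velocity report
                fixes.append((msg.get("lat"), msg.get("lon"), msg.get("time")))
        sock.close()
        return fixes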

The second phase of our work was to implement the ability for guest VMs to be installed, scheduled and launched on bricks. To achieve this, we implemented the following.

  • A database schema was created that defined the following objects and the relations between those objects: users (GENI researchers, UMass members of the DOME community), files (VM partitions), experiments (one or more partitions and the associated resources, such as the WiFi device), and instances of experiments (the scheduling of experiments). (A hypothetical table sketch follows this list.)

  • A set of server-side scripts was developed to enable bricks to access the DB and retrieve files from the servers.

  • Programs were developed to manage the critical tasks on the bricks.

    o dome_pullexperiments: a daemon that downloads experiments (i.e., all required files) from a server to a brick. The daemon is designed to cope with DOME's disruptive environment of network disconnections and powering-down of equipment. Files are downloaded in chunks and progress is checkpointed so that work can be resumed from a known state. The daemon prioritizes downloads based on schedules and performs garbage collection when disk space becomes a concern.
    o dome_getexpschedules: the daemon responsible for making the experiment schedules available to the bricks.
    o dome_cleanexperiments: the daemon responsible for safely removing deprecated experiments from the bricks.
    o dome_runexp: the program responsible for launching a guest VM on the brick. It uses input from dome_pullexperiments and dome_getexpschedules to determine which VM to launch. dome_runexp creates the partitions required by the VM, configures the networking, and makes critical information available to the VM. See the Milestone 1b documentation for more information.
    o Additionally, various utilities (dome_getexpired, dome_killdomu, dome_vmrunning, dome_getrunning) were implemented to monitor the status of guest VMs and to shut down VMs.
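
The object model described above (users, files, experiments, and experiment instances) might map onto relational tables roughly as follows. This is a hypothetical sketch of the kind of schema involved, not the actual DOME database; all table and column names are invented.

    import sqlite3

    # Hypothetical sketch of the DOME object model; names and types are invented.
    SCHEMA = """
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE            -- GENI researcher or UMass DOME member
    );
    CREATE TABLE files (
        id       INTEGER PRIMARY KEY,
        owner_id INTEGER REFERENCES users(id),
        path     TEXT NOT NULL,               -- VM partition image stored on the server
        bytes    INTEGER
    );
    CREATE TABLE experiments (
        id        INTEGER PRIMARY KEY,
        owner_id  INTEGER REFERENCES users(id),
        uses_wifi INTEGER DEFAULT 0           -- associated resources, e.g. the WiFi device
    );
    CREATE TABLE experiment_files (           -- one or more partitions per experiment
        experiment_id INTEGER REFERENCES experiments(id),
        file_id       INTEGER REFERENCES files(id)
    );
    CREATE TABLE instances (                  -- the scheduling of experiments
        id            INTEGER PRIMARY KEY,
        experiment_id INTEGER REFERENCES experiments(id),
        start_time    TEXT,
        end_time      TEXT
    );
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)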

The above is progress toward our next two milestones, 1c and 1d.

Additionally, we have implemented a mechanism for third-party experiments (guest VMs) to generate log files, and for the content of the log files to be asynchronously uploaded to arbitrary servers (i.e., sent to user-defined destinations by dom0 when the guest domain is not executing). See the Milestone 1b documentation for more information. Finally, we have implemented the web interface for: uploading files to a server so that they can be staged for installation on buses; defining experiments; and scheduling experiments. This will be shown at GEC4. This effort and the work defined above are intended to be the foundation for integration with ORCA.

Ticket #104 | Owner: hmussman@bbn.com | Reporter: hmussman@bbn.com | Resolution: invalid | Summary: milestone 1d completion
Description

Per 1Q09 QSR:

Milestone 4. [M4] Operational web portal and testbed; permits users to: log in and request slices composed of leases for compute slivers (including dedicated sensors under control of dom0) bound to Xen VMs; upload/download files; and execute processes. April 1st, 2009.

Demonstration 1.

Demonstration at GEC4, April 1st, 2009.

The ViSE demonstration at GEC4 presented the result of completing Milestone 4, an operational web portal and testbed. The description of the GEC4 demo is as follows: the ViSE project demonstrated sensor control using the Orca control framework, sensor scheduling, and our initial progress toward sensor virtualization.

A Pan-Tilt-Zoom (PTZ) video camera and a DavisPro weather station are two of the three sensors currently part of the ViSE testbed (note: our radars are too large to transport to Miami). The first part of the demonstration uses the PTZ video camera connected to a single laptop. The laptop represents a "GENI in a bottle" by executing a collection of Orca actor servers in a set of VMware virtual machines. The actors represent a GENI aggregate manager (an Orca site authority), a GENI clearinghouse (an Orca broker), and two GENI experiments (Orca slice controllers). Additionally, one VMware virtual machine runs an instance of the Xen VMM, is connected to the PTZ video camera, and serves as an example component. The GENI aggregate manager is empowered to create slivers as Xen virtual machines on the GENI component, and the experiments communicate with the clearinghouse and aggregate manager to guide the creation of slices.

Importantly, the GENI aggregate manager controls access to the PTZ camera by interposing on the communication between the camera and the experiment VMs. Each experiment requests a slice composed of a single Xen VM sliver with a reserved proportion of CPU, memory, bandwidth, etc. The experiments then compete for control of, and access to, the PTZ camera by requesting a lease for it from the clearinghouse and directing the aggregate manager to attach it (in the form of a virtual network interface) to their sliver; only a single experiment can control the camera at one time, so the clearinghouse must schedule access to it accordingly. We use the default Orca web portal to display the process, and the PTZ camera web portal in both experiments to show the status of the camera.
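
The lease flow described above can be pictured with a deliberately simplified sketch. The classes and method names below are invented purely for exposition; they are not the Orca or GENI APIs.

    # Simplified illustration of the flow described above; these classes and
    # methods are invented for exposition and are NOT the Orca/GENI APIs.

    class Clearinghouse:
        """Schedules exclusive access to the single PTZ camera."""
        def __init__(self):
            self.camera_holder = None

        def request_camera_lease(self, experiment):
            if self.camera_holder is None:
                self.camera_holder = experiment
                return True    # lease granted
            return False       # camera already leased; the experiment must wait

    class AggregateManager:
        """Creates Xen VM slivers and attaches the camera as a virtual interface."""
        def create_sliver(self, experiment, cpu_share, memory_mb, bandwidth_mbps):
            print("sliver for %s: %d%% CPU, %d MB, %d Mb/s"
                  % (experiment, cpu_share, memory_mb, bandwidth_mbps))

        def attach_camera(self, experiment):
            print("camera virtual interface attached to %s" % experiment)

    # Two experiments each get a sliver, then compete for the camera lease.
    clearinghouse, aggregate = Clearinghouse(), AggregateManager()
    for experiment in ("experiment-1", "experiment-2"):
        aggregate.create_sliver(experiment, cpu_share=25, memory_mb=512, bandwidth_mbps=10)
        if clearinghouse.request_camera_lease(experiment):
            aggregate.attach_camera(experiment)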

We also show our progress on true sensor virtualization in the Xen virtual machine monitor. In the case of the camera, the "virtualization" takes the form of permitting full access to the camera by one, and only one, VM through its virtual network interface. We are currently integrating virtualized sensing devices into Xen's device driver framework. We show our progress towards "virtualizing" a Davis Pro weather station that physically connects via a USB port. Our initial goal along this thread is to have the Davis Pro software run inside a Xen VM on top of a virtual serial driver that "passes through" requests to the physical device. This is the first step towards our milestones near the end of the year for sensor slivering. This demonstration takes the form of a web portal for the weather station, running inside the Xen VM, that updates sensor readings in real time.
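
As a user-space analogue of the "pass-through" idea (the real work targets Xen's device driver framework), a relay process can shuttle bytes between the physical serial device and a pseudo-terminal that the weather-station software opens as its "virtual" port. The device path is a placeholder, and termios configuration of the physical port is omitted for brevity.

    import os
    import pty
    import select

    PHYSICAL_DEV = "/dev/ttyUSB0"  # placeholder: the weather station's serial port

    def serial_passthrough():
        """Relay bytes both ways between the physical port and a new pty.

        The slave side of the pty (printed below) acts as the 'virtual'
        serial port that the Davis Pro software would open.
        """
        master, slave = pty.openpty()
        print("virtual port:", os.ttyname(slave))
        phys = os.open(PHYSICAL_DEV, os.O_RDWR | os.O_NOCTTY)
        try:
            while True:
                ready, _, _ = select.select([master, phys], [], [])
                for fd in ready:
                    data = os.read(fd, 4096)
                    if not data:
                        return
                    os.write(phys if fd == master else master, data)
        finally:
            os.close(phys)
            os.close(master)
            os.close(slave)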
