Custom Query (1408 matches)

Results (88 - 90 of 1408)

Ticket #102 (fixed): milestone 1c completion. Owner: blynn@cs.umass.edu; Reporter: hmussman@bbn.com
Description

Per 1Q09 QSR, work started on milestone 1c:

The emphasis of our work has been to create the environment for executing third-party experiments and to define an interface through which GENI researchers can upload and define experiments. We have made significant progress and will be demonstrating our work at GEC4.

The first phase of our effort was to deploy the OS on our bricks as a Xen host domain (dom0) and to support the execution of guest domains (domU). Once that was complete, we needed to ensure that guest domains had access to the system's networking and peripherals. The solutions we implemented are listed below.

  • Ethernet link. This was solved by Xen's support for guest domains, since Xen automatically provides virtualization of Ethernet links (shared access between dom0 and domU). Rather than use DHCP to allocate IP addresses, we implemented a mechanism to assign static IP addresses so that guest domains have well-known addresses reachable through the WiFi access points (APs) attached to each brick's Ethernet port. (A guest-configuration sketch follows this list.)

  • 3G link. The 3G link is used as the DOME control plane. We chose to make the link sharable between dom0 and domU by implementing NAT routing of domU Ethernet traffic. By making the control-plane link available to domU, the guest domain has access to a relatively reliable link (about 90% connectivity). Furthermore, this link can enable guest domains to offer opt-in experiments involving transit passengers. (A NAT setup sketch follows this list.)

  • Atheros WiFi PCI device. We chose to hide the PCI device from dom0, making it visible only to the guest domain. This gives the guest domain full, native access to the WiFi device: all features of the Atheros WiFi card and the madwifi driver are available to it, and the guest domain may even install a customized device driver. (See the configuration sketch after this list.)

  • GPS device. We run a gpsd daemon in dom0 that can be accessed from the guest domain, either directly or via the libgps library. (A client sketch follows this list.)
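
As a concrete illustration of the Ethernet and WiFi items above, the following is a minimal sketch of what an xm-style Xen guest configuration could look like. It is not the DOME configuration: the names, addresses, paths, and PCI slot are all placeholders chosen for the example.

{{{#!python
# Hypothetical xm-style Xen guest configuration for a DOME brick (sketch only).
# xm configuration files use Python syntax; every value below is a placeholder.

name   = "guest-experiment"                 # guest domain name (assumed)
memory = 256                                # MB of RAM for the guest (assumed)
kernel = "/boot/vmlinuz-2.6-xenU"           # paravirtualized guest kernel (assumed path)
disk   = ["phy:/dev/vg0/exp-root,xvda1,w"]  # partition prepared for the experiment

# Ethernet: give the guest a well-known static address instead of DHCP.  The
# 'ip=' vif option is shown for illustration; the address could equally be
# configured inside the guest image itself.
vif = ["ip=192.168.10.2, bridge=xenbr0"]

# Atheros WiFi: pass the PCI device through so the guest gets native access.
# The device would first be hidden from dom0 (e.g., via a pciback hide option),
# which is why dom0 never binds a driver to it.  The slot below is made up.
pci = ["0000:03:00.0"]
}}}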
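
The 3G sharing described above amounts to ordinary NAT performed in dom0. The sketch below shows one way this could be scripted in Python by driving iptables and the kernel forwarding switch; the interface names are assumptions and this is not the actual DOME code.

{{{#!python
#!/usr/bin/env python
"""Sketch: share the 3G control-plane link in dom0 with a guest's virtual
Ethernet interface via NAT.  Interface names are hypothetical."""
import subprocess

WAN_IF = "ppp0"     # 3G modem link in dom0 (assumed name)
LAN_IF = "vif1.0"   # backend of the guest's virtual Ethernet interface (assumed name)

def run(cmd):
    """Run a command and raise if it fails (requires root)."""
    subprocess.check_call(cmd)

# Enable IPv4 forwarding in dom0.
with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
    f.write("1\n")

# Masquerade guest traffic leaving via the 3G link and allow the forwarded
# flows in both directions.
run(["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", WAN_IF, "-j", "MASQUERADE"])
run(["iptables", "-A", "FORWARD", "-i", LAN_IF, "-o", WAN_IF, "-j", "ACCEPT"])
run(["iptables", "-A", "FORWARD", "-i", WAN_IF, "-o", LAN_IF,
     "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"])
}}}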
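
For the GPS item, a guest can reach the dom0 gpsd over TCP. The sketch below uses gpsd's JSON watch protocol, which postdates the 2009-era deployment, so treat it as illustrative only; the dom0 address is a placeholder.

{{{#!python
#!/usr/bin/env python3
"""Sketch: read position fixes inside the guest from the gpsd running in dom0.
Assumes gpsd is reachable at DOM0_ADDR:2947 and speaks the JSON watch protocol."""
import json
import socket

DOM0_ADDR = "192.168.10.1"   # hypothetical dom0 address on the virtual Ethernet link

with socket.create_connection((DOM0_ADDR, 2947)) as sock:
    # Ask gpsd to stream reports as JSON.
    sock.sendall(b'?WATCH={"enable":true,"json":true};\n')
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break                              # gpsd closed the connection
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if not line.strip():
                continue
            report = json.loads(line.decode("ascii", "replace"))
            if report.get("class") == "TPV":   # time-position-velocity report
                print(report.get("lat"), report.get("lon"), report.get("time"))
}}}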

The second phase of our work was to implement the ability for guest VMs to be installed, scheduled and launched on bricks. To achieve this, we implemented the following.

  • A database schema was created that defines the following objects and the relations between them: users (GENI researchers and UMass members of the DOME community), files (VM partitions), experiments (one or more partitions plus associated resources, such as the WiFi device), and instances of experiments (the scheduling of experiments). (A schema sketch follows this list.)

  • A set of server-side scripts was developed to enable bricks to access the DB and retrieve files from the servers. (A server-side sketch follows this list.)

  • Programs were developed to manage the critical tasks on the bricks:

    o dome_pullexperiments: a daemon that downloads experiments (i.e., all required files) from a server to a brick. The daemon is designed to cope with DOME's disruptive environment of network disconnections and equipment power-downs: files are downloaded in chunks and progress is checkpointed so that downloads can resume from a known state. The daemon prioritizes downloads based on schedules and performs garbage collection when disk space becomes a concern. (A download sketch follows this list.)
    o dome_getexpschedules: the daemon responsible for making the experiment schedules available to the bricks.
    o dome_cleanexperiments: the daemon responsible for safely removing deprecated experiments from the bricks.
    o dome_runexp: the program responsible for launching a guest VM on the brick. It uses input from dome_pullexperiments and dome_getexpschedules to determine which VM to launch. dome_runexp creates the partitions required by the VM, configures the networking, and makes critical information available to the VM. See the Milestone 1b documentation for more information.
    o Additionally, various utilities (dome_getexpired, dome_killdomu, dome_vmrunning, dome_getrunning) were implemented to monitor the status of guest VMs and to shut them down.
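
To make the schema bullet concrete, the sketch below lays out one plausible arrangement of the four object types and their relations. The table and column names are our own assumptions (SQLite is used only to keep the example self-contained); they are not the actual DOME schema.

{{{#!python
import sqlite3

# Hypothetical schema sketch: users, files (VM partitions), experiments,
# and scheduled instances of experiments, with the relations between them.
conn = sqlite3.connect("dome_sketch.db")
conn.executescript("""
CREATE TABLE users (
    user_id    INTEGER PRIMARY KEY,
    email      TEXT NOT NULL,           -- GENI researcher or UMass DOME member
    role       TEXT NOT NULL
);
CREATE TABLE files (
    file_id    INTEGER PRIMARY KEY,
    owner_id   INTEGER NOT NULL REFERENCES users(user_id),
    path       TEXT NOT NULL,           -- VM partition image on the server
    size       INTEGER NOT NULL
);
CREATE TABLE experiments (
    exp_id     INTEGER PRIMARY KEY,
    owner_id   INTEGER NOT NULL REFERENCES users(user_id),
    uses_wifi  INTEGER NOT NULL DEFAULT 0  -- associated resources, e.g. the WiFi device
);
CREATE TABLE experiment_files (            -- one or more partitions per experiment
    exp_id     INTEGER NOT NULL REFERENCES experiments(exp_id),
    file_id    INTEGER NOT NULL REFERENCES files(file_id)
);
CREATE TABLE instances (                   -- the scheduling of experiments
    inst_id    INTEGER PRIMARY KEY,
    exp_id     INTEGER NOT NULL REFERENCES experiments(exp_id),
    start_time TEXT NOT NULL,
    end_time   TEXT NOT NULL
);
""")
conn.commit()
}}}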
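
A server-side script of the kind mentioned above could be as simple as a CGI-style program that reads the database and returns upcoming experiment instances to a brick as JSON. This is a hypothetical sketch reusing the schema from the previous example, not a DOME script.

{{{#!python
#!/usr/bin/env python3
"""Sketch: CGI-style server script returning upcoming experiment instances as
JSON for a brick to consume.  Database name and schema follow the previous
hypothetical example; none of this is the actual DOME code."""
import json
import sqlite3

conn = sqlite3.connect("dome_sketch.db")
rows = conn.execute(
    "SELECT inst_id, exp_id, start_time, end_time "
    "FROM instances ORDER BY start_time").fetchall()

# CGI response: header block, blank line, then the body.
print("Content-Type: application/json")
print()
print(json.dumps([
    {"instance": r[0], "experiment": r[1], "start": r[2], "end": r[3]}
    for r in rows
]))
}}}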
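
The core idea behind dome_pullexperiments (chunked downloads with checkpointed progress, so that work survives disconnection or power loss) can be sketched as follows. The URL, paths, chunk size, and use of HTTP Range requests are assumptions for illustration, not the DOME implementation.

{{{#!python
#!/usr/bin/env python3
"""Sketch of resumable, chunked file download with checkpointed progress,
in the spirit of dome_pullexperiments.  Server URL and paths are hypothetical."""
import os
import urllib.request

CHUNK = 1 << 20   # 1 MiB per request (assumed)

def pull(url, dest):
    # Resume from however many bytes we already have on disk.
    done = os.path.getsize(dest) if os.path.exists(dest) else 0
    while True:
        req = urllib.request.Request(url)
        req.add_header("Range", "bytes=%d-%d" % (done, done + CHUNK - 1))
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                data = resp.read()
        except OSError:
            return done          # network error or range past EOF; a later run resumes/confirms
        if not data:
            return done          # nothing left to fetch
        with open(dest, "ab") as f:
            f.write(data)        # the partial file doubles as the checkpoint
        done += len(data)
        if len(data) < CHUNK:
            return done          # short read: reached end of file

if __name__ == "__main__":
    pull("http://example.org/experiments/exp42.img", "/var/dome/exp42.img")
}}}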

The above is progress toward our next two milestones, 1c and 1d.

Additionally, we have implemented a mechanism for third-party experiments (guest VMs) to generate log files and for the contents of those log files to be asynchronously uploaded to arbitrary servers (i.e., sent to user-defined destinations by dom0 when the guest domain is not executing); see the Milestone 1b documentation for more information. Finally, we have implemented the web interface for uploading files to a server (so that they can be staged for installation on buses), for defining experiments, and for scheduling experiments; this will be shown at GEC4. This effort and the work described above are intended to be the foundation for integration with ORCA.
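
The log-upload mechanism described above (dom0 forwarding a guest's log files to user-defined destinations once the guest is no longer executing) could look roughly like the sketch below. The directory layout, destination URL, custom header, and use of HTTP POST are assumptions for illustration.

{{{#!python
#!/usr/bin/env python3
"""Sketch: dom0-side uploader that pushes a finished guest's log files to a
user-defined destination.  Paths and the destination URL are hypothetical."""
import glob
import os
import urllib.request

LOG_DIR = "/var/dome/guest-logs"        # where guest logs land in dom0 (assumed)
DEST    = "http://example.org/upload"   # user-defined destination (assumed)

def upload_pending():
    for path in glob.glob(os.path.join(LOG_DIR, "*.log")):
        with open(path, "rb") as f:
            body = f.read()
        req = urllib.request.Request(
            DEST, data=body,
            headers={"Content-Type": "application/octet-stream",
                     "X-Log-Name": os.path.basename(path)})  # hypothetical header
        try:
            urllib.request.urlopen(req, timeout=30)
        except OSError:
            continue                     # disconnected: retry on a later pass
        os.rename(path, path + ".sent")  # mark as uploaded

if __name__ == "__main__":
    upload_pending()
}}}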

Ticket #103 (fixed): milestone 1d completion. Owner: blynn@cs.umass.edu; Reporter: hmussman@bbn.com
Description

Per 1Q09 QSR, work started on milestone 1d:

(The remainder of this ticket's description is identical, verbatim, to the description of ticket #102 above.)

Ticket #105 (fixed): milestone 1e completion. Owner: David Irwin; Reporter: hmussman@bbn.com
Description

Per 1Q09 QSR:

The second milestone requires us to import an updated version of the Orca control framework from the RENCI/BEN team. Given that we are already running a version of Orca on ViSE, this milestone should not be cumbersome. The operational modifications to the main branch of Orca’s code are not major, and any ViSE updates should be easy to incorporate. Further, we will participate in the proposed “Orca-fest” for Cluster D developers and provide any expertise required; we have also been consulting with the DOME project on integrating Orca into their testbed.

Per report on 7/31/09:

ViSE is running the latest reference implementation of the Shirako/Orca codebase. Note that Milestone 1c (completed February 1st, 2009) required ViSE to perform an initial integration of Shirako/Orca prior to an official reference implementation being released; see that milestone for specific details of the integration. Incorporating the latest reference implementation required only minor (although tedious) code porting.

Additionally, as a result of the Orca-fest conference call on May 28th, the GENI Project Office and Cluster D set mini-milestones that were not in the original Statement of Work. These milestones are related to Milestone 1e, since they involve the particular instantiation of Orca that Cluster D will use. In particular, by June 15th, 2009, we upgraded our ORCA actors to support secure SOAP messages. As part of this mini-milestone, Brian Lynn of the DOME project and the ViSE project also set up a control-plane server that will host the aggregate manager and portal servers for both the DOME and ViSE projects. This server has the DNS name geni.cs.umass.edu and includes four network interface cards: one connects to a gateway ViSE node on the CS department roof, one will connect to an Internet2 backbone site (via a VLAN), one connects to the public Internet, and one connects to an internal development ViSE node.

The installation of geni.cs.umass.edu with the latest version of Orca means that we are well prepared to transition to a remote Clearinghouse provided by RENCI. During the Orca-fest and subsequent Cluster D meetings we set the milestone for this shift within the range of August 15th, 2009 to September 1st, 2009. We are currently discussing with the Duke/RENCI Orca/BEN group how to complete this milestone in a few intermediate steps: first moving the DOME and ViSE brokers to RENCI, and then incorporating them into the same Clearinghouse as the other Cluster D projects. Since we have already set up and tested the latest implementation of Orca on geni.cs.umass.edu, we are well positioned to transition to a remote Clearinghouse if provided by RENCI.
