[[PageOutline]]

= ViSE Project Status Report =

Period: 1Q09
== I. Major accomplishments ==
In the second quarter of the ViSE project we accomplished our initial set of Orca integration and operation milestones and continued work on the software and hardware infrastructure needed to complete future milestones. We first provide details of the work behind achieving the two second-quarter milestones below.

=== A. Milestones achieved ===
The ViSE project completed milestones M3 and M4 during the second quarter and presented demonstration D1 at GEC4 on March 31st, 2009. Summaries of these milestones follow.
  * '''Milestone 3.''' Initial Orca integration. Xen and Orca software running on three sensor nodes, non-slivered, no radar control via Xen. Due February 1st, 2009.

The Orca control framework comprises three distinct actor servers that correspond to GENI Experiments, Clearinghouses, and Aggregate Managers. GENI experiments correspond to Orca service managers, GENI Clearinghouses correspond to Orca brokers, and GENI Aggregate Managers correspond to Orca site authorities. Each server runs in the context of a Java virtual machine and communicates with other servers using local or remote procedure calls. The ViSE project has set up one instance of an Orca service manager, an Orca broker, and an Orca site authority within the same Java virtual machine; these communicate using local procedure calls.

The Orca actor servers run on a gateway node connected to both the public Internet (otg.cs.umass.edu) and the sensor node on the roof of the UMass-Amherst CS department. The sensor node on the UMass-Amherst roof, in turn, has a connection to the sensor node on Mount Toby via 802.11b using a long-distance directional antenna, and the Mount Toby node has a connection to the sensor node on the MA1 tower. Each sensor node runs an instance of the Xen virtual machine monitor and an instance of an Orca node agent. The Orca site authority communicates with the Orca node agent to instantiate virtual machines for experiments.

Each node is primed with the software necessary to create Xen virtual machines and sliver their resources. The local storage is a 32 GB flash drive partitioned using the Logical Volume Manager (LVM). The Orca node agent snapshots a template virtual machine image pre-loaded on each node to create each experiment virtual machine. Additionally, tc is installed on each node to shape and limit each experiment's network traffic.

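As a rough sketch of this per-experiment provisioning step, the snapshot and traffic-shaping commands a node agent might issue can be generated as below. All names (the `vise` volume group, the `domU-template` image, the `eth0` device, the rates) are hypothetical illustrations; the actual Orca node agent scripts are not reproduced in this report.

```python
# Sketch of the provisioning described above: snapshot an LVM-backed template
# image for a new experiment VM and shape its traffic with tc. All names are
# hypothetical; the real Orca node agent is not shown here.

def provision_commands(exp_id, rate_mbit, vg="vise",
                       template="domU-template", dev="eth0"):
    """Return the shell commands a node agent might run for one experiment."""
    return [
        # Copy-on-write snapshot of the pre-loaded template image.
        f"lvcreate --snapshot --name {exp_id} --size 2G /dev/{vg}/{template}",
        # Cap the experiment's bandwidth with a tc hierarchical token bucket.
        f"tc qdisc add dev {dev} root handle 1: htb default 10",
        f"tc class add dev {dev} parent 1: classid 1:10 htb rate {rate_mbit}mbit",
    ]

if __name__ == "__main__":
    for cmd in provision_commands("exp42", rate_mbit=8):
        print(cmd)
```

The snapshot keeps per-experiment storage cheap (only blocks the experiment changes are copied), which is why a single template image per node suffices.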
Using the default Orca web portal, users are able to log in and request slices on the ViSE testbed. Currently, only the sensor node on the CS roof is accessible to end-users. We have decided to wait until the end of winter to install the Orca node agent software on the two other ViSE nodes, since they are difficult to access in the winter. We expect to reach them in early-to-mid April depending on the weather and the snow melt.

In addition to the software to support Orca, each node has the appropriate foundation software/drivers to operate the sensors, the wireless and wired NICs, an attached Gumstix Linux embedded control node, and a GPRS cellular modem. These software artifacts are accessible through Domain-0 in Xen. The wireless NIC is used for communication with other sensor nodes. The wired NIC attaches to the Gumstix Linux embedded control node, which, in turn, is connected to the public Internet using a GPRS cellular modem. The control node is used for remote operations and management. We have documented the process to create compliant Domain-0 and Domain-U images at http://vise.cs.umass.edu.

This milestone is a precursor to our sensor virtualization work. While users are able to create slices composed of Xen virtual machines bound to slivers of CPU, memory, bandwidth, and local storage, they are not yet able to access any sensors from their virtual machines. We are actively working on this capability and are on track to complete it in late summer/early fall as specified in our SOW.

  * '''Milestone 4.''' [M4] Operational web portal and testbed, permitting users to: log in and request slices composed of leases for compute slivers (including dedicated sensors under control of dom0) bound to Xen VMs; upload/download files; and execute processes. Due April 1st, 2009.
  * '''Demonstration 1.''' Demonstration at GEC4, April 1st, 2009. The ViSE demonstration at GEC4 presented the result of completing Milestone 4, an operational web portal and testbed. The description of the GEC4 demo is as follows: the ViSE project demonstrated sensor control using the Orca control framework, sensor scheduling, and our initial progress toward sensor virtualization.

A Pan-Tilt-Zoom (PTZ) video camera and a !DavisPro weather station are two of the three sensors currently part of the ViSE testbed (note: our radars are too large to transport to Miami). The first part of the demonstration uses the PTZ video camera connected to a single laptop. The laptop represents a "GENI in a bottle" by executing a collection of Orca actor servers in a set of VMware virtual machines. The actors represent a GENI aggregate manager (an Orca site authority), a GENI clearinghouse (an Orca broker), and two GENI experiments (Orca slice controllers). Additionally, one VMware virtual machine runs an instance of the Xen VMM, is connected to the PTZ video camera, and serves as an example component. The GENI aggregate manager is empowered to create slivers as Xen virtual machines on the GENI component, and the experiments communicate with the clearinghouse and aggregate manager to guide the creation of slices.

Importantly, the GENI aggregate manager controls access to the PTZ camera by interposing on the communication between the camera and the experiment VMs. Each experiment requests a slice composed of a single Xen VM sliver with a reserved proportion of CPU, memory, bandwidth, etc. The experiments then compete for control of, and access to, the PTZ camera by requesting a lease for it from the Clearinghouse and directing the Aggregate Manager to attach it (in the form of a virtual network interface) to their sliver. Only a single Experiment can control the camera at one time, so the Clearinghouse must schedule access to it accordingly. We use the default Orca web portal to display the process, and the PTZ camera web portal in both experiments to show the status of the camera.

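The exclusive scheduling the Clearinghouse performs above can be sketched in a few lines. The class and method names below are purely illustrative, not the actual Orca broker API; the point is only that leases on the single camera must not overlap in time.

```python
# Minimal sketch of a clearinghouse granting exclusive, non-overlapping leases
# on a single camera, as in the GEC4 demo. Names are illustrative only and do
# not correspond to the real Orca broker interfaces.

class CameraBroker:
    def __init__(self):
        self.leases = []  # list of (start, end, experiment) tuples

    def request_lease(self, experiment, start, end):
        """Grant the lease iff [start, end) overlaps no existing lease."""
        for (s, e, _) in self.leases:
            if start < e and s < end:  # half-open intervals overlap
                return False  # camera already leased for this window
        self.leases.append((start, end, experiment))
        return True

broker = CameraBroker()
print(broker.request_lease("exp-A", 0, 10))   # True: camera is free
print(broker.request_lease("exp-B", 5, 15))   # False: overlaps exp-A's lease
print(broker.request_lease("exp-B", 10, 20))  # True: back-to-back leases are fine
```

Rejected requests in the demo simply wait and retry; a real broker could instead queue them for the next free window.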
We also show our progress on true sensor virtualization in the Xen virtual machine monitor. In the case of the camera, the "virtualization" takes the form of permitting full access to the camera by one, and only one, VM through its virtual network interface. We are currently integrating virtualized sensing devices into Xen's device driver framework. We show our progress towards "virtualizing" a !DavisPro weather station that physically connects via a USB port. Our initial goal along this thread is to have the !DavisPro software run inside a Xen VM on top of a virtual serial driver that "passes through" requests to the physical device. This is the first step towards our milestones near the end of the year for sensor slivering. This demonstration takes the form of a web portal for the weather station, running inside the Xen VM, updating sensor readings in real time.

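The pass-through idea behind the virtual serial driver can be illustrated as follows: guest requests are relayed verbatim to the device, and the device's replies are relayed back unchanged. The real driver sits in Xen's split-driver framework; here a fake in-memory "weather station" with a hypothetical `POLL TEMP` command stands in for the physical serial device purely for illustration.

```python
# Sketch of a "pass-through" serial driver: the guest-facing endpoint forwards
# requests to the device and returns its replies, adding no logic of its own.
# The device, its command set, and the reading are all hypothetical.

import queue

class PassThroughSerial:
    def __init__(self, device):
        self.device = device  # any object with write()/read(), like a real port

    def handle_request(self, request: bytes) -> bytes:
        """Forward a guest request unchanged and return the device's reply."""
        self.device.write(request)
        return self.device.read()

class FakeWeatherStation:
    """Stands in for the physical weather station on the serial port."""
    def __init__(self):
        self._inbox = queue.Queue()

    def write(self, data: bytes):
        self._inbox.put(data)

    def read(self) -> bytes:
        cmd = self._inbox.get()
        # Answer a (made-up) temperature poll; echo anything else back.
        return b"TEMP 21.5C" if cmd == b"POLL TEMP" else cmd

driver = PassThroughSerial(FakeWeatherStation())
print(driver.handle_request(b"POLL TEMP"))  # b'TEMP 21.5C'
```

Because the driver never interprets the byte stream, the unmodified vendor software can run in the guest VM, which is exactly the goal stated above.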
'''Milestones in Progress'''[[BR]]
The following milestones are in progress, with the first two to be completed during the 3rd quarter of the ViSE project.
  * Complete Shirako/ORCA integration. Install Xen and Shirako software on two (total of three; see ticket 40) sensor nodes, non-slivered, no radar control via Xen. Due April 30th, 2009.
  * Contingent upon availability of a reference implementation of Shirako/ORCA at 6 months, import and then integrate it into our project over the following 4 months, with a minimum of support from the RENCI/Duke group. Note: our group will not be taking any explicit responsibility for the other projects using ORCA/Shirako, although we will provide best-effort help to the extent possible given our own constraints and milestones. Due August 1st, 2009.
  * Complete Xen sensor virtualization. Non-slivered control of radar sensor data. Due August 1st, 2009.

Given our current status and the milestones we have already achieved, we are well-positioned to complete each of these three milestones by their due dates. The first milestone is the completion of a quarter 2 milestone. The partial delay was caused by the GENI contract delay, which pushed the start of ViSE's quarter 2 into winter. Two of our nodes are difficult to access during the winter because of snow and ice in Western Massachusetts. The milestone will be completed once the snow melts on the Western Massachusetts mountains, which should happen by April 30th, 2009 at the latest.

The second milestone requires us to import an updated version of the Orca control framework from the RENCI/BEN team. Given that we are already running a version of Orca on ViSE, this milestone should not be cumbersome. The operational modifications to the main branch of Orca's code are not major, and any ViSE updates should be easy to incorporate. Further, we will participate in the proposed "Orca-fest" for Cluster D developers and provide any expertise required; we have also been consulting with the DOME project on integrating Orca into their testbed.

As shown by our demonstration at GEC4, we have made significant progress toward completing Xen sensor virtualization and non-slivered control of radar sensor data. GEC4 showed a preview of this virtualization capability for our !DavisPro weather station, and we have a working version of the capability for our PTZ camera in the lab. We are continuing these efforts in the third quarter. Code is being uploaded to the ViSE svn repository at http://vise.cs.umass.edu/svn.
=== B. Deliverables made ===
The second quarter of the project resulted in two primary deliverables: the ViSE demonstration and poster at GEC4 and the integration of Orca into ViSE. In addition, we have been active in the control framework working group, participating in GPO-arranged calls, and in Cluster D. ViSE contributed to the Cluster D group session at GEC4.

== II. Description of work performed during last quarter ==

=== A. Activities and findings ===
The primary work during the quarter, including our activities and findings, centered on achieving the milestones described above. In addition to attending GEC4, we held monthly Cluster D group meetings via teleconference.
=== B. Project participants ===
The primary PI is Prashant Shenoy. Co-PIs are Michael Zink, Jim Kurose, and Deepak Ganesan. Research staff is David Irwin.
=== C. Publications (individual and organizational) ===
No publications resulted from this quarter of work; we are submitting some of the initial work on ViSE soon. Once completed, this document will be publicly posted on the ViSE Trac website as a UMass technical report.
=== D. Outreach activities ===
We had no significant outreach activities this quarter. However, during the summer we held a seminar for REU undergraduate students on the construction of ViSE nodes and sensors. We are still on track to hold a January class at UPRM in Puerto Rico on ViSE.
=== E. Collaborations ===
We have been working closely with the DOME project in our cluster at UMass and with the Orca-BEN project at Duke. We integrated the Orca control framework before its official release date 6 months into Spiral 1. Additionally, we are aiding DOME in integrating the Orca control framework. Finally, at GEC4 we discussed integration with the Kansei project at Ohio State, and we hope to discuss it with them further at "Orca-fest" in mid-May.
=== F. Other Contributions ===