Changes between Initial Version and Version 3 of Ticket #39


Timestamp: 05/05/09 17:11:25
Author: hmussman@bbn.com
Comment:
  • Ticket #39

    • Property Status changed from new to closed
    • Property Resolution set to fixed
    • Property Summary changed from "Submit document to show completion of milestone." to "milestone 1c completion"
  • Ticket #39 – Description

Milestone 3. Initial Orca integration. Xen and Orca software running on three sensor nodes, non-slivered, no radar control via Xen. Due February 1st, 2009.
The Orca control framework comprises a set of three distinct actor servers that correspond to GENI Experiments, Clearinghouses, and Aggregate Managers [1]. GENI Experiments correspond to Orca service managers, GENI Clearinghouses correspond to Orca brokers, and GENI Aggregate Managers correspond to Orca site authorities. Each server runs in the context of a Java virtual machine and communicates with other servers using local or remote procedure calls. The ViSE project has set up one instance of an Orca service manager, an Orca broker, and an Orca site authority within the same Java virtual machine; these actors communicate using local procedure calls.

[1] Note that in Orca an Aggregate Manager assumes the role of Management Authority.
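The local-versus-remote distinction can be shown with a toy sketch. This is not Orca's API; every class, method, and port below is hypothetical. Actors sharing a process invoke each other by direct method call, while a split deployment would expose the same interface over an RPC transport such as XML-RPC:

{{{#!python
# Hypothetical sketch: co-located actors call each other directly;
# a remote deployment would serve the same broker over XML-RPC.
from xmlrpc.server import SimpleXMLRPCServer

class Broker:                       # stand-in for an Orca broker
    def request_ticket(self, slice_name, units):
        return {"slice": slice_name, "units": units, "granted": True}

class ServiceManager:               # stand-in for an Orca service manager
    def __init__(self, broker):
        self.broker = broker        # local reference: a plain method call

    def create_slice(self, name):
        return self.broker.request_ticket(name, units=1)

# Co-located (the ViSE setup): all actors in one process.
print(ServiceManager(Broker()).create_slice("vise-demo"))

# Remote variant: the same broker served over XML-RPC instead.
server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_instance(Broker())
# server.serve_forever()  # a remote ServiceManager would use xmlrpc.client
}}}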
The Orca actor servers run on a gateway node connected to both the public Internet (otg.cs.umass.edu) and the sensor node on the roof of the UMass-Amherst CS department. The sensor node on the UMass-Amherst roof, in turn, has a connection to the sensor node on Mount Toby via 802.11b using a long-distance directional antenna, and the Mount Toby node has a connection to the sensor node on the MA1 tower. Each sensor node runs an instance of the Xen virtual machine monitor and an instance of an Orca node agent. The Orca site authority communicates with the Orca node agent to instantiate virtual machines for experiments.
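For reference, a minimal Xen 3.x Domain-U configuration of the kind a node agent might instantiate with `xm create` could look like the sketch below; the names, paths, and sizes are illustrative, not the actual ViSE images:

{{{#!python
# Hypothetical /etc/xen/experiment1.cfg -- xm config files use Python syntax.
name   = "experiment1"                         # domain name, illustrative
memory = 256                                   # MiB of RAM for the sliver
kernel = "/boot/vmlinuz-2.6-xenU"              # Domain-U kernel, path illustrative
disk   = ["phy:/dev/vise/experiment1,sda1,w"]  # LVM-backed root volume
vif    = ["bridge=xenbr0"]                     # virtual NIC on the Xen bridge
root   = "/dev/sda1 ro"
}}}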
Each node is primed with the software necessary to create Xen virtual machines and sliver their resources. The local storage is a 32 GB flash drive partitioned using the logical volume manager (LVM). The Orca node agent snapshots a template virtual machine image pre-loaded on each node to create each experiment virtual machine. Additionally, tc is installed on each node to shape and limit each experiment's network traffic.
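A minimal sketch of those two steps follows; the volume group, device, and interface names, and the rate values, are assumptions for illustration, not the actual ViSE configuration:

{{{#!python
# Hypothetical sketch: snapshot the template image, then rate-limit traffic.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# Copy-on-write snapshot of the pre-loaded template logical volume;
# "vise" and the size are assumed names, not the real setup.
run(["lvcreate", "--snapshot", "--name", "experiment1",
     "--size", "2G", "/dev/vise/template"])

# Token-bucket filter capping egress bandwidth (values from the standard
# tc-tbf example); the real setup would target the experiment's interface.
run(["tc", "qdisc", "add", "dev", "eth0", "root",
     "tbf", "rate", "1mbit", "burst", "32kbit", "latency", "400ms"])
}}}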
Using the default Orca web portal, users are able to log in and request slices on the ViSE testbed. Currently, only the sensor node on the CS roof is accessible to end users. We have decided to wait until the end of winter to install the Orca node agent software on the two other ViSE nodes, since they are difficult to access in the winter. We expect to access them in early-to-mid April, depending on the weather and the snow melt.
In addition to the software to support Orca, each node has the appropriate foundation software/drivers to operate the sensors, the wireless and wired NICs, an attached Gumstix Linux embedded control node, and a GPRS cellular modem. These software artifacts are accessible through Domain-0 in Xen. The wireless NIC is used for communication with other sensor nodes. The wired NIC attaches to the Gumstix Linux embedded control node, which, in turn, is connected to the public Internet using a GPRS cellular modem. The control node is used for remote Operations and Management. We have documented the process to create compliant Domain-0 and Domain-U images at http://vise.cs.umass.edu.
This milestone is a precursor to our sensor virtualization work. While users are able to create slices composed of Xen virtual machines bound to slivers of CPU, memory, bandwidth, and local storage, they are not able to access any sensors from their virtual machines yet. We are actively working on this capability and are due to complete it on time in late summer/early fall, as specified in our SOW.
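As one concrete example of how CPU and memory slivers can be bound with stock Xen tooling, the credit scheduler caps CPU share and `xm mem-set` adjusts memory; the domain name and values below are illustrative, not the ViSE defaults:

{{{#!python
# Hypothetical sketch of binding CPU and memory slivers with xm.
import subprocess

domain = "experiment1"   # illustrative domain name

# Cap the domain at 25% of one physical CPU under the credit scheduler.
subprocess.check_call(["xm", "sched-credit", "-d", domain, "-c", "25"])

# Set the domain's memory allocation to 256 MiB.
subprocess.check_call(["xm", "mem-set", domain, "256"])
}}}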