Milestone 3. Initial Orca integration. Xen and Orca software running on three sensor nodes, non-slivered, no radar control via Xen. Due February 1st, 2009.
The Orca control framework comprises three distinct actor servers that correspond to GENI Experiments, Clearinghouses, and Aggregate Managers [1]: GENI experiments correspond to Orca service managers, GENI Clearinghouses correspond to Orca brokers, and GENI Aggregate Managers correspond to Orca site authorities. Each server runs in the context of a Java virtual machine and communicates with other servers using local or remote procedure calls. The ViSE project has set up one instance each of an Orca service manager, an Orca broker, and an Orca site authority within the same Java virtual machine; these actors communicate using local procedure calls.

[1] Note that in Orca, an Aggregate Manager assumes the role of a Management Authority.
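The sketch below is illustrative only and does not use Orca's actual APIs or class names; it simply shows the design choice of co-locating the three actors in a single process, so that inter-actor calls are plain local method invocations rather than remote procedure calls over the network.

# Illustrative sketch only; class and method names are hypothetical, not Orca's API.
# Co-locating the three actors in one process makes every inter-actor call a
# plain local method invocation instead of a remote procedure call.
class SiteAuthority:                        # GENI Aggregate Manager
    def redeem(self, ticket):
        return "lease for " + ticket

class Broker:                               # GENI Clearinghouse
    def issue_ticket(self, request):
        return "ticket(" + request + ")"

class ServiceManager:                       # GENI Experiment
    def __init__(self, broker, authority):
        self.broker = broker
        self.authority = authority
    def request_slice(self, spec):
        ticket = self.broker.issue_ticket(spec)   # local call, no RPC
        return self.authority.redeem(ticket)      # local call, no RPC

authority = SiteAuthority()
broker = Broker()
manager = ServiceManager(broker, authority)
print(manager.request_slice("one VM on the CS-roof node"))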
The Orca actor servers run on a gateway node (otg.cs.umass.edu) connected to both the public Internet and the sensor node on the roof of the UMass-Amherst CS department. The sensor node on the CS department roof, in turn, connects to the sensor node on Mount Toby over 802.11b using a long-distance directional antenna, and the Mount Toby node connects to the sensor node on the MA1 tower. Each sensor node runs an instance of the Xen virtual machine monitor and an instance of an Orca node agent. The Orca site authority communicates with the Orca node agent to instantiate virtual machines for experiments.
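As a rough illustration (not the node agent's actual implementation), instantiating a virtual machine on a sensor node amounts to asking Xen, from Domain-0, to boot a guest from a per-experiment configuration file; the config path and domain name below are hypothetical.

# Rough sketch of booting and tearing down a guest from Domain-0.
# The config path and domain name are hypothetical examples.
import subprocess

def start_guest(exp_id):
    cfg = "/etc/xen/exp%s.cfg" % exp_id           # per-experiment Xen config
    subprocess.check_call(["xm", "create", cfg])  # boot the Domain-U guest

def stop_guest(exp_id):
    subprocess.check_call(["xm", "shutdown", "exp%s" % exp_id])

start_guest("42")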
Each node is primed with the software necessary to create Xen virtual machines and sliver their resources. Local storage is a 32 GB flash drive partitioned using the Linux Logical Volume Manager (LVM). The Orca node agent snapshots a template virtual machine image pre-loaded on each node to create each experiment virtual machine. Additionally, the Linux traffic-control utility tc is installed on each node to shape and limit each experiment's network traffic.
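The sketch below outlines these two provisioning steps. It assumes a hypothetical volume group named vise, a template logical volume named template-root, and a per-experiment virtual interface name; the actual names, sizes, and rate limits on the ViSE nodes may differ.

# Hypothetical provisioning sketch: snapshot the template image with LVM and
# shape the experiment's virtual interface with tc. Volume group, volume,
# interface names, and limits are illustrative assumptions.
import subprocess

def provision(exp_id, rate_mbit=10):
    # Copy-on-write snapshot of the pre-loaded template image.
    subprocess.check_call([
        "lvcreate", "--snapshot",
        "--name", "exp%s-root" % exp_id,
        "--size", "2G",
        "/dev/vise/template-root"])
    # Token-bucket filter to limit the experiment's outbound bandwidth.
    subprocess.check_call([
        "tc", "qdisc", "add", "dev", "vif-exp%s" % exp_id, "root",
        "tbf", "rate", "%dmbit" % rate_mbit,
        "burst", "32kbit", "latency", "400ms"])

provision("42")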
Using the default Orca web portal, users are able to log in and request slices on the ViSE testbed. Currently, only the sensor node on the CS department roof is accessible to end users. Since the two other ViSE nodes are difficult to reach in winter, we have decided to wait until the end of the season to install the Orca node agent software on them; we expect to access them in early-to-mid April, depending on the weather and the snow melt.
In addition to the software supporting Orca, each node has the foundation software and drivers necessary to operate the sensors, the wireless and wired NICs, an attached Gumstix Linux embedded control node, and a GPRS cellular modem. These software artifacts are accessible through Domain-0 in Xen. The wireless NIC is used for communication with other sensor nodes. The wired NIC attaches to the Gumstix Linux embedded control node, which, in turn, is connected to the public Internet using a GPRS cellular modem. The control node is used for remote Operations and Management. We have documented the process for creating compliant Domain-0 and Domain-U images at http://vise.cs.umass.edu.
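For illustration, a minimal Domain-U configuration in Xen's Python-syntax config format might resemble the sketch below; the kernel, device, and bridge names are assumptions rather than the values documented at the URL above.

# Hypothetical Domain-U config (Xen's config files use Python syntax).
# Kernel, ramdisk, device, and bridge names are illustrative assumptions.
kernel  = "/boot/vmlinuz-2.6.18-xen"
ramdisk = "/boot/initrd-2.6.18-xen.img"
memory  = 256                                   # MB of RAM for this sliver
vcpus   = 1
name    = "exp42"
# Root filesystem is the LVM snapshot of the template image.
disk    = ["phy:/dev/vise/exp42-root,xvda1,w"]
root    = "/dev/xvda1 ro"
# One virtual interface, attached to the Domain-0 bridge.
vif     = ["bridge=xenbr0"]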
This milestone is a precursor to our sensor virtualization work. While users are able to create slices composed of Xen virtual machines bound to slivers of CPU, memory, bandwidth, and local storage, they are not yet able to access any sensors from their virtual machines. We are actively working on this capability and expect to complete it on time, in late summer/early fall, as specified in our SOW.
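As a rough sketch of how the CPU and memory portions of a sliver can be enforced once a guest is running, the commands below use the standard Xen management interface from Domain-0; the domain name and limits are illustrative, not the values we assign on the testbed.

# Illustrative sketch: bind a running Domain-U to its sliver using Xen's
# management commands from Domain-0. Domain name and limits are made up.
import subprocess

def bind_sliver(domain, mem_mb, cpu_cap_pct, vcpus):
    # Memory sliver: balloon the guest to its allotted RAM.
    subprocess.check_call(["xm", "mem-set", domain, str(mem_mb)])
    # CPU sliver: cap the domain under Xen's credit scheduler
    # (cap is a percentage of one physical CPU; 0 means uncapped).
    subprocess.check_call(["xm", "sched-credit", "-d", domain, "-c", str(cpu_cap_pct)])
    # Limit the number of virtual CPUs the guest may use.
    subprocess.check_call(["xm", "vcpu-set", domain, str(vcpus)])

bind_sliver("exp42", 256, 50, 1)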