= PrimoGENI Project Status Report =

Period: '''2010 Q1''' (January 2010 - March 2010)

== I. Major accomplishments ==

=== A. Milestones achieved ===

We completed two milestones during this period:

 * '''PrimoGENI: S2.b. Design document review''': We completed a review of the initial design document at the ProtoGENI cluster teleconference meeting.
 * '''PrimoGENI: S2.c. Demonstration''': We demonstrated initial PrimoGENI functionality at GEC 7, using a prototype system implemented at our lab. We demonstrated the capabilities of the PrimoGENI aggregate with a medium-sized campus network model containing physical, simulated, and emulated network entities.

=== B. Deliverables made ===

We are currently in the middle of developing a fully functional PrimoGENI aggregate. We expect to deliver our first full-fledged implementation in July.

== II. Description of work performed during last quarter ==

=== A. Activities and findings ===

In the following, we briefly describe the current design and implementation of the PrimoGENI aggregate. The goal of our project is to incorporate real-time network simulation capabilities (in particular, our real-time network simulator, PRIME) into the GENI "ecosystem". In order to interact with other GENI facilities, PRIME functions as a GENI aggregate (component), so that experimenters can use a well-defined interface to remotely control and realize network experiments consisting of physical, simulated, and emulated network entities exchanging real network traffic. See [wiki:PrimoGENIDesignDocument PrimoGENI Aggregate 1.0 Design Document] for details.

PrimoGENI uses the ProtoGENI control framework to manage, control, and access the underlying resources. Our design distinguishes between two types of resources:

 * ''Meta resources'' include compute nodes and the network connectivity between them. We call these resources ''meta resources'' to distinguish them from the physical resources (also known as the substrate), since they may be virtual machines and/or virtual network tunnels. Meta resources are managed by and accessible within the ProtoGENI/EmuLab suite.
 * ''Virtual resources'' are elements of the virtual network instantiated by PRIME, which include simulated hosts, routers, links, protocols, and emulated hosts. We call these resources ''virtual resources'' because they represent the target (virtual) computing and network environment for the GENI experiments; they encompass both simulated network entities and emulated hosts (which run on virtual machines).

PrimoGENI exports an aggregate interface as defined by the ProtoGENI control framework, and provides mechanisms for instantiating the virtual network on the ProtoGENI/EmuLab facilities configured and allocated on behalf of the experimenter. In essence, the PrimoGENI aggregate uses the control framework (in our case, the ProtoGENI control framework) to allocate the meta resources that run the simulation and emulation, which are presented to end users as virtual resources.

As such, the PrimoGENI aggregate can be viewed as a layered system. At the lowest layer is the '''physical resources (substrate) layer''', which is composed of the cluster nodes, switches, and other resources that constitute the ProtoGENI/EmuLab suite. A '''meta resources layer''' is created upon resource assignment in a sliver. PrimoGENI uses the ProtoGENI/EmuLab suite to allocate the meta resources, including a subset of the cluster nodes, possibly VLAN connectivity among those nodes, and possibly GRE channels created for communicating with off-site resources. Each physical cluster node is viewed by PrimoGENI as an independent scaling unit loaded with an operating system image that supports virtual machines (e.g., OpenVZ). Multiple virtual machines may be created on the same physical machine to run the PRIME instance and the emulated hosts, respectively. A '''simulation and emulation execution layer''' is created according to the virtual network specification of a sliver. The PRIME instances and the emulated hosts are mapped to the meta resources at the layer below. Communication is established between the emulated hosts and the corresponding real-time simulator instance, so that traffic generated by the emulated hosts is captured by the real-time simulator and conducted on the simulated network with appropriate delays and losses according to the simulated network conditions. Once the slivers are created and the slice is operational, experimenters can conduct experiments at the '''experiment (logical) layer''': they will be able to log into individual emulated hosts, upload software, and launch it. Traffic between the emulated hosts will be conducted on the virtual network.
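To illustrate how the virtual resources of an experiment relate to the allocated meta resources, the sketch below shows one possible way a virtual network specification could be partitioned across the scaling units. This is a simplified illustration only: the names used here (`VirtualNetwork`, `ScalingUnit`, `partition_virtual_network`, and the cluster node names `pc101`/`pc102`) are hypothetical and are not part of the actual PrimoGENI implementation, and the round-robin assignment merely stands in for the real partitioning strategy.

{{{
#!python
# Illustrative sketch only: the names below (VirtualNetwork, ScalingUnit,
# partition_virtual_network, ...) are hypothetical and do not correspond to
# the actual PrimoGENI code base.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualNode:
    """A node in the virtual network: simulated by PRIME or emulated in a VM."""
    name: str
    emulated: bool = False   # True => runs as an emulated host on a virtual machine

@dataclass
class VirtualNetwork:
    """Virtual resources: the target network environment for an experiment."""
    nodes: List[VirtualNode] = field(default_factory=list)

@dataclass
class ScalingUnit:
    """Meta resources: one physical cluster node allocated through ProtoGENI/EmuLab.

    Each scaling unit hosts one PRIME simulator instance plus a set of
    virtual machines for the emulated hosts assigned to it.
    """
    cluster_node: str
    emulated_hosts: List[VirtualNode] = field(default_factory=list)
    simulated_nodes: List[VirtualNode] = field(default_factory=list)

def partition_virtual_network(vnet: VirtualNetwork,
                              cluster_nodes: List[str]) -> List[ScalingUnit]:
    """Assign every virtual node to one of the allocated cluster nodes.

    A simple round-robin assignment stands in for the real partitioning
    strategy, which must also balance the simulation and emulation
    workload among the scaling units.
    """
    units = [ScalingUnit(cluster_node=cn) for cn in cluster_nodes]
    for i, node in enumerate(vnet.nodes):
        unit = units[i % len(units)]
        if node.emulated:
            unit.emulated_hosts.append(node)   # will run inside its own VM
        else:
            unit.simulated_nodes.append(node)  # handled by the PRIME instance
    return units

# Example: a small experiment mapped onto two allocated cluster nodes.
vnet = VirtualNetwork(nodes=[
    VirtualNode("router0"), VirtualNode("router1"),
    VirtualNode("client", emulated=True), VirtualNode("server", emulated=True),
])
for unit in partition_virtual_network(vnet, ["pc101", "pc102"]):
    print(unit.cluster_node,
          [n.name for n in unit.emulated_hosts],
          [n.name for n in unit.simulated_nodes])
}}}

In the actual system, the partitioning would also need to take into account the VLAN and GRE connectivity allocated among the cluster nodes, as described above.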
=== B. Project participants ===

 * '''Jason Liu''', PI.
 * '''Julio Ibarra''', Co-PI.
 * '''Heidi Alvarez''', Co-PI.
 * '''Ernie Rubi''', network engineer.
 * '''Miguel Erazo''', Ph.D. student.
 * '''Nathanael Van Vorst''', Ph.D. student.
 * '''Eduardo Pena''', undergraduate student.
 * '''Eduardo Tibau''', undergraduate student.

=== C. Publications (individual and organizational) ===

We created two posters describing our initial implementation, one presented at GEC 7 and the other at SIMUTools 2010, both in March 2010.

=== D. Outreach activities ===

None yet.

=== E. Collaborations ===

We are currently collaborating with Yan Luo from the University of Massachusetts Lowell and Raju Rangaswami from Florida International University to develop a high-performance conduit for interconnecting simulation instances and emulated hosts running on lightweight virtual machines. We are also pursuing collaborations with researchers, both domestically and internationally, to incorporate OpenFlow switches into our experiment testbed.

=== F. Other Contributions ===

N/A.