This effort provides a reconfigurable optical network emulator aggregate (CRON) connected to the GENI backbone over the Louisiana Optical Network Initiative (LONI). The role of optical network emulation in GENI is to provide a predictable environment for repeatable experiments and to perform early tests of network research experiments before acquiring real network resources. The tools and services developed by this project will integrate with the ProtoGENI suite of tools. The aggregate manager and the network connections between LONI and GENI established for this project will also allow other LONI sites to participate in GENI.

The basic workflow for integrating CRON with ProtoGENI is similar to that of the current federations between ProtoGENI and the other clusters shown in Figure 1, because the CRON testbed adopts the architectures and protocols of Emulab and ProtoGENI and modifies them to suit the characteristics of CRON's particular resources, such as hardware emulators, a high-capacity 10Gbps switch, and opt-in user control tailored to scientific application users.

All of CRON's resources, such as 10Gbps paths, built-in virtual optical networks (NLR, LONI, Internet2, etc.), and computing resources, will be defined based on resource specifications (RSpecs). These defined resources will be reported to the ProtoGENI Clearinghouse so that any user at another federated site can share them within an assigned slice. The list of defined resources to be given to the ProtoGENI Clearinghouse is as follows:

(1) four 10Gbps virtual optical networking and computing environments[[BR]]

(2) four 5Gbps virtual optical networking and computing environments
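The resource list above is reported to the Clearinghouse as RSpec documents. As a rough illustration only, the fragment below sketches what an advertisement entry for one 10Gbps path might look like; the element names, URNs, and the capacity attribute are assumptions in the general GENI RSpec style, not CRON's actual specification.

```xml
<!-- Hypothetical advertisement RSpec sketch; all identifiers and
     attributes here are illustrative, not CRON's actual RSpec. -->
<rspec type="advertisement">
  <node component_id="urn:publicid:IDN+cron.example+node+emulator1"
        component_manager_id="urn:publicid:IDN+cron.example+authority+cm">
    <node_type type_name="pc" type_slots="1"/>
  </node>
  <link component_id="urn:publicid:IDN+cron.example+link+path10g">
    <!-- capacity expressed in Kbps: a 10 Gbps emulated path -->
    <property capacity="10000000"/>
  </link>
</rspec>
```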

To manage and exchange information about the defined resources, including management and control data, CRON will host two servers: an aggregate manager and a component manager, which administer resources and slices with common global identifiers (GIDs).[[BR]][[BR]]

The figure below shows the implementation of the federation of CRON into ProtoGENI Cluster C in Spiral 2.[[BR]]

[[Image(CRON:CRON_implementation.png,40%)]][[BR]]

Uncertainty: There are discrepancies between the expected and actual performance experienced while developing new application software over high-speed networks. The causes of these discrepancies can lie in the computing environment, the networking environment, or the application software itself. Networks, especially in the early phase of deployment, can show varying levels of quality, such as bandwidth/throughput, latency, jitter, and loss rate. The uncertain cause of these problems prevents application developers from identifying their origin; they cannot tell whether the problem lies in the network or in the software. It is critical for application developers to be able to identify the origin of problems so that they have a chance to solve them. If the problem is in the network, its origin is usually hard to find, as the developers need to check all potential problems in all the resources used for development, which dramatically increases overall development time. Application developers should be able to concentrate on debugging only their software or end systems when an unexpected level of performance is observed. To make this possible, they need to be able to decouple the networking environment from the computing environment. CRON is a decoupled networking environment that is guaranteed to provide a stable level of network quality because it is localized and fully controlled.
The CRON cyberinfrastructure provides integrated and automated access to a wide range of high-speed networking configurations. Figure 2 shows how CRON can be reconfigured to emulate optical networks such as NLR (National Lambda Rail), Internet2, and LONI (Louisiana Optical Network Initiative) configurations, or purely user-defined networks with different networking characteristics such as bandwidth, latency, and data loss rate. Moreover, users can dynamically reconfigure all computing resources, such as operating systems, middleware, and applications, based on their specific demands. Owing to these automated and reconfigurable characteristics, all types of experiments over CRON will be repeatable and controllable. This reconfigurable feature of CRON coincides with one of GENI's key features: programmability.[[BR]]
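The link characteristics that CRON emulates (bandwidth, latency, and loss rate) interact strongly for TCP-based experiments. One common rule of thumb is the Mathis model, which bounds steady-state TCP throughput by roughly (MSS/RTT) * (1.22/sqrt(loss)). The short Python sketch below illustrates that bound; it is an illustration only, not part of CRON's software.

```python
import math

def tcp_throughput_bound(mss_bytes, rtt_s, loss_rate):
    """Mathis model: steady-state TCP throughput upper bound in bits/s.

    throughput <= (MSS / RTT) * (C / sqrt(loss)), with C ~= 1.22.
    """
    C = 1.22
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss_rate))

# Example emulated setting: 1500-byte MSS, 50 ms RTT, loss rate 1e-8.
bps = tcp_throughput_bound(1500, 0.050, 1e-8)
print(f"TCP bound: {bps / 1e9:.2f} Gbps")  # prints "TCP bound: 2.93 Gbps"
```

At a 50 ms RTT, even a packet loss rate of 1e-8 caps a single standard TCP flow near 3 Gbps, which is why precise, repeatable control of loss and latency matters for 10Gbps-scale experiments.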
The figure above shows the architecture of CRON, which consists of two main components: (i) hardware (H/W) components, including a 10Gbps switch, a 1Gbps switch, optical fibers, network emulators, and workstations; and (ii) software (S/W) components, forming an automatic configuration server that integrates all the H/W components to create virtual networking and computing environments based on users' requirements. All components are connected on two different networking planes: a control plane connected with 100/1000 Mbps Ethernet links and a data plane connected with 10Gbps optical links. To allow access from outside networks, such as Internet2, NLR, and LONI, and to connect to external compute resources, the data-plane switch has four external 10Gbps optical connections that extend the capacity of CRON and integrate it with existing networks and projects for the cooperative scenarios shown in the figure.
9. Mar 2010 - Finished upgrading the CRON system from the Utah Emulab stable source branch.[[BR]]
[[BR]]

'''Project participants'''


Professor Seung-Jong Park: PI[[BR]]

Cheng Cui: CRON system lead developer[[BR]]
Mohammed Abul Monzur Azad: CRON system developer[[BR]]
Praveenkumar Shivappa Kondikoppa: CRON system developer[[BR]]
Lin Xue: CRON system developer[[BR]]
[Cheng, Mohammed, Praveenkumar, and Lin are Ph.D. students at LSU.]