[[PageOutline]]

GPO coordinator: Josh Karlin jkarlin@bbn.com [[BR]]
Cluster D System Engineer: Harry Mussman hmussman@bbn.com [[BR]]
The current version represents the plan established at the GEC-6 meeting on 11/17/09.

= 5.3 VLAN Connectivity Plan =

== 5.3.1 Goals ==

The following goals have been identified for Cluster D connectivity in Spiral 2:

a) Cluster D entities (aggregates and the clearinghouse) need reliable, high-bandwidth L3 (IP) and L2 (VLAN) connectivity with each other to meet their experimentation goals for Spiral 2.

b) L3 will be used to reach publicly-available services via public IP addresses and/or DNS names.

c) L2 (VLAN) connectivity will be used to join aggregate (and possibly other) resources via pt-to-pt or multi-point VLANs.

d) Pt-to-pt or multi-point VLANs will be used between entities involved in an experiment, where each entity is assigned a unique private IP address for the duration of the experiment. This has proven to be a very useful and convenient way to group entities for an experiment, e.g., the OMF network arrangement.

e) Pt-to-pt or multi-point VLANs may be used between entities involved in an experiment, where each entity utilizes a L3 protocol that is not IP.

f) Two backbone networks have donated resources to GENI for Spiral 2, and these will be utilized to provide reliable, high-bandwidth L3 and L2 connectivity: [[http://groups.geni.net/geni/wiki/GeniInternet2 Internet2]] and [[http://groups.geni.net/geni/wiki/GeniNationalLambdaRail NLR]].

g) Each Cluster D entity (aggregate or clearinghouse) must reach Internet2 and/or NLR to get L3 and L2 connectivity. It is expected that this will include connections through a campus network and a regional network to Internet2 or NLR (but not typically both).

h) It is important that all Cluster D entities have L3 and L2 connectivity with each other in Spiral 2; if not, the combinations of joint experiments would be limited. This implies connectivity is needed between Internet2 and NLR, which is not currently supported.

i) It is important that the complexities of providing the connectivity be hidden from the experimenter, who should be able to request a L3 (IP) or L2 (VLAN) connection by dealing primarily with their own entity.

j) It should be possible to extend the Cluster D connectivity plan to include resources in the other GENI clusters, with the eventual goal of reliable, high-bandwidth L3 (IP) and L2 (VLAN) connectivity between all GENI entities.

k) The resulting connectivity plan should support this roadmap for Cluster D VLAN capabilities: [[http://groups.geni.net/geni/attachment/wiki/ClusterD/vlan.jpg roadmap]] [[BR]]

== 5.3.2 Backbone Options ==

The NLR and Internet2 backbone networks are the current options for carrying VLANs between GENI Cluster D aggregates.

=== 5.3.2.1 Internet2 ===

a) See [[http://www.internet2.edu/ Internet2]] [[BR]]
Contact: ?

b) L3 service [[BR]]
Uses public IP addresses, but only connects to other endpoints that are connected to Internet2. [[BR]]
Details? Reach?

c) Pre-configured VLANs on the GENI “wave”, between certain Internet2 Wave System nodes. [[BR]]
See [[http://www.internet2.edu/waveco/ Internet2 WAVE]] [[BR]]
This is the approach being used by the ProtoGENI backbone. [[BR]]
See [[http://www.protogeni.net/trac/protogeni/wiki/Backbone ProtoGENI Backbone]] [[BR]]
See [[http://www.protogeni.net/trac/protogeni/wiki/BackboneNode ProtoGENI Backbone Node]] [[BR]]
Multi-point.

d) Direct connections to the Internet2 Wave System nodes. [[BR]]
This is the approach being used by ProtoGENI backbone nodes. [[BR]]
See [[http://www.protogeni.net/trac/protogeni/wiki/Backbone ProtoGENI Backbone]] [[BR]]
See [[http://www.protogeni.net/trac/protogeni/wiki/BackboneNode ProtoGENI Backbone Node]] [[BR]]
More details on this can be found at ? It states that the ProtoGENI switches (shown below in the IP table) are each attached to the GENI L2 wave but also to I2's Layer 3 network at 1Gb/s.

e) Tunnel to Internet2 Wave System nodes.
From ?: Those aggregates that cannot get on the wave but are attached to I2 might be able to configure static IP tunnels (such as Ethernet over GRE) to one of the ProtoGENI switches attached to I2's Layer 3 services.

f) Switched VLANs using the Internet2 ION (DCN) service. [[BR]]
The Internet2 ION (DCN) service provides switched VLANs, but two issues suggest that it will not be utilized for GENI traffic. First, its use has not been included in the package of services donated to GENI, and thus there may be a cost to the GENI aggregate now, or in the future. Second, it is available in only a limited number of Internet2 PoPs, and these may be difficult to reach. [[BR]]
Currently, ORCA plans to provide an interface into the Internet2 IDC by 2/1/10.

=== 5.3.2.2 NLR ===

a) See [[http://www.nlr.net/ NLR]] [[BR]]
Contact: Kathy Benninger, staff engineer for NLR, benninger@psc.edu

b) L3 service [[BR]]
See [[http://www.nlr.net/packetnet.php PacketNet]] [[BR]]
PacketNet is a bare-bones service providing high-speed access to other endpoints that happen to be on PacketNet. [[BR]]
Details? Reach?

c) Switched (or pre-configured) VLANs using the FrameNet service. [[BR]]
See [[http://www.nlr.net/framenet.php NLR FrameNet]] [[BR]]
Pt-to-pt. Multi-point? If not now, when? [[BR]]
Setup uses the Sherpa service. FrameNet/Sherpa is used to create dedicated static VLANs between sites with NLR access. Exactly what is the interface to Sherpa? [[BR]]
BEN has used Sherpa to pre-configure VLANs. See [[http://groups.geni.net/geni/attachment/wiki/ORCABEN/071509c%20%20ORCA_BEN%20demo.pdf Figure 6-4]] [[BR]]
ORCA plans to provide an interface into the Sherpa GCI by 2/1/10. [[BR]]
NLR does *not* provide any sort of VLAN translation or tunnelling support, so each site is entirely responsible for arriving at its NLR endpoint with its VLANs intact, as it were. This seems likely to cause problems for some sites, given NLR's relatively small number of endpoints. It has certainly caused some problems for us.
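Because NLR delivers tags unchanged, any remapping between a site's internal VLAN ID and the ID carried on the FrameNet circuit has to happen on the site's own equipment. As a rough sketch of the idea on a Linux host acting as the handoff (real sites would do this on a managed switch; all interface names and tag values below are hypothetical):

```shell
# Hypothetical handoff host: eth0 faces NLR, eth1 faces the site.
# FrameNet delivers the circuit as VLAN 3001; the site uses VLAN 120 internally.

# Create a tagged subinterface on each side.
ip link add link eth0 name eth0.3001 type vlan id 3001
ip link add link eth1 name eth1.120  type vlan id 120

# Bridging the two subinterfaces strips each tag on ingress and re-adds
# the opposite tag on egress, i.e. a static 3001 <-> 120 translation.
ip link add name br-geni type bridge
ip link set eth0.3001 master br-geni
ip link set eth1.120  master br-geni
ip link set dev eth0.3001 up
ip link set dev eth1.120  up
ip link set dev br-geni up
```

This is only an illustration of the translation step; a site with direct control of its NLR switch port (as RENCI has) can do the equivalent remapping in hardware.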
[[BR]]
RENCI controls their access all the way to their NLR switch port, which gives them a lot more flexibility. (And note that, since they can be flexible and do arbitrary VLAN translation on their own equipment, people who want to talk to them, like us, don't have to be as flexible.)

d) Switched (or pre-configured) VLANs via the C-Wave service. [[BR]]
An NLR press release includes this as an offering to GENI. [[BR]]
Details? How?

=== 5.3.2.3 Combined Options ===

The following backbone network options are available for carrying VLANs between GENI endpoint aggregates:

NLR FrameNet [[BR]]
Internet2 GENI WAVE [[BR]]
NLR C-Wave [[BR]]
Internet2 ION (DCN) (not recommended, and not shown in figure)

Figure 1, Backbone Network Options [[BR]]
[[Image(Visio-120409_ClusterD_Connectivity_Page_01.jpg, 100%)]] [[BR]]

== 5.3.3 Connecting Cluster D Aggregates to a Backbone Network ==

=== 5.3.3.1 Template ===

a) PI or staff contact: [[BR]]

b) Configuration drawing [[BR]]

c) Testbed project: [[BR]]
PI or staff contact: [[BR]]
Equipment for termination: [[BR]]
Range of VLAN tags? [[BR]]
How to select VLAN tag? [[BR]]

d) Campus network [[BR]]
Campus: [[BR]]
IT contact: [[BR]]
How are VLANs carried? (direct, tunnel) [[BR]]
Support OpenFlow? [[BR]]
Range of VLAN tags? [[BR]]
How to select VLAN tag? [[BR]]

e) Regional access network [[BR]]
Network: [[BR]]
Staff contact: [[BR]]
How are VLANs carried? (direct, tunnel) [[BR]]
Range of VLAN tags? [[BR]]
How to select VLAN tag? [[BR]]

f) Backbone network [[BR]]
Network: (I2 GENI wave, I2 DCN, NLR FrameNet, NLR C-Wave) [[BR]]
Staff contact: [[BR]]
How are VLANs accepted? (direct, tunnel) [[BR]]
How are VLANs carried? (pt-to-pt, multi-pt) [[BR]]
Range of VLAN tags? [[BR]]
How to select VLAN tag?
[[BR]]

g) References

=== 5.3.3.2 ORCA/BEN and IMF Aggregates ===

a) Contact: Chris Heermann ckh@renci.org (RENCI) [[BR]]

b) RENCI, UNC-CH and Duke to NLR Connections [[BR]]
VLANs from RENCI BEN PoP, via fiber, to NLR FrameNet [[BR]]
VLANs from UNC-CH BEN PoP, via BEN, via RENCI BEN PoP, to NLR FrameNet [[BR]]
VLANs from Duke BEN MPoP, via Duke Campus OIT, to NLR FrameNet [[BR]]
Figure 2-1, RENCI, UNC-CH and Duke Connections to NLR FrameNet

g) References [[BR]]
See [http://groups.geni.net/geni/wiki/ORCABEN#CurrentCapabilities ORCA/BEN Capabilities] [[BR]]
See [http://groups.geni.net/geni/wiki/ORCABEN#Spiral2Connectivity ORCA/BEN Spiral 2 Connectivity] [[BR]]
See [http://groups.geni.net/geni/attachment/wiki/ORCABEN/071509c%20%20ORCA_BEN%20demo.pdf Fig 6-1 and 6-4] [[BR]]
Figure 6-4, ORCA/BEN Connectivity for Demo [[BR]]
[[Image(orca_ben_demo_connectivity.jpg, 70%)]] [[BR]]

=== 5.3.3.3 BBN Aggregate ===

a) Contacts: Josh Karlin jkarlin@bbn.com and Chaos Golubitsky

b) BBN to NLR Connection [[BR]]
VLANs from BBN, via NoX, to NLR FrameNet [[BR]]
An aggregate with a cluster of VMs is included at this site. [[BR]]
Figure 2-2, BBN to NLR Connection

d) Campus network [[BR]]
Current: [[BR]]
L3 service from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to NLR. [[BR]]
VLANs from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to NLR. [[BR]]
See http://groups.geni.net/syseng/wiki/OpsLabConnectivity [[BR]]
Pending: [[BR]]
L3 service from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to Internet2. When? [[BR]]
VLANs from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to Internet2. When?
[[BR]]

g) References [[BR]]

=== 5.3.3.4 DOME, ViSE and Cloud-Control Aggregates ===

a) Contacts: [[BR]]
David Irwin irwin@cs.umass.edu [[BR]]
Brian Lynn blynn@cs.umass.edu [[BR]]
Rick Tuthill tuthill@oit.umass.edu

b) UMass Amherst to NLR Connection (Fig 2-3) [[BR]]
VLANs from control plane server geni.cs.umass.edu in the UMass Amherst CS building, via UMass Amherst campus OIT, via Northern Crossroads (NoX), via a handoff located at 300 Bent St in Cambridge, MA, to NLR. [[BR]]
The current connection runs from UMass to BBN, temporarily. [[BR]]
Per Kathy Benninger, staff engineer for NLR, benninger@psc.edu, on 11/17/09: UMass should be able to connect to NLR for GENI per the GENI agreement, even though they are not members of NLR. [[BR]]
Note: The Cloud-Control aggregate will require special connectivity arrangements to cloud resources. [[BR]]
Figure 2-3, UMass Amherst to NLR Connection

c) Campus network: [[BR]]
Layer 3 connectivity: IP access will be through UMass Amherst's campus network, using their public IP addresses. [[BR]]
An MOU was agreed upon with the UMass Office of Information Technology (OIT) regarding connecting Internet2 to the DOME and ViSE servers, along with VLAN access. The OIT contact is Rick Tuthill, tuthill email at oit.umass.edu. The agreements include:

1) CS shall order OIT-provisioned network jacks in appropriate locations in the Computer Science building using normal OIT processes. (completed) [[BR]]
2) OIT shall configure these jacks into a single VLAN that shall be extended over existing OIT-managed network infrastructure between the Computer Science building and the Northern Crossroads (NoX) Internet2 Gigapop located at 300 Bent St in Cambridge, MA. [[BR]]
3) OIT agrees to provide a single VLAN for “proof-of-concept” testing and initial GENI research activities. [[BR]]
4) The interconnection of the provided VLAN between the NoX termination point and other Internet2 locations remains strictly the province of the CS researchers and the GENI organization.
[[BR]]
5) This service shall be provided by OIT at no charge to CS for the term of one year, in the interest of OIT learning more about effectively supporting network-related research efforts on campus. [[BR]]

In an email dated September 28th, 2009, Rick Tuthill of UMass-Amherst OIT updated us on the status of this connection, as follows:

1) The two existing ports at the CS building in room 218A and room 226 and all intermediary equipment are now configured to provide layer-2 VLAN transport from these network jacks to the UMass/Northern Crossroads (NoX) handoff at 300 Bent St in Cambridge, MA. [[BR]]
2) The NoX folks are not doing anything with this research VLAN at this time. They need further guidance from GENI on exactly what they’re supposed to do with the VLAN. [[BR]]
3) Also, once IP addressing is clarified for this VLAN, we’ll need to configure some OIT network equipment to allow the selected address range(s) to pass through. [[BR]]

=== 5.3.3.5 Kansei Aggregates ===

a) Contacts: Wenjie Zeng, Ohio State University, zengw@cse.ohio-state.edu, http://www.cse.ohio-state.edu/~zengw/ [[BR]]
Hongwei Zhang, Wayne State University, hzhang@cs.wayne.edu, http://www.cs.wayne.edu/~hzhang/

b) Ohio State to NLR Connection [[BR]]
VLANs from Ohio State, via ?, to NLR [[BR]]
Note: VLANs from Wayne State, via ?, to NLR are not yet defined. [[BR]]
Figure 2-4, Ohio State to NLR Connection

=== 5.3.3.6 OKGems Aggregate ===

a) Contact: Xiaolin (Andy) Li, Computer Science Department, Oklahoma State University (OSU) xiaolin@cs.okstate.edu [[BR]]
VLANs from Oklahoma State, via ?, to NLR: not yet defined

=== 5.3.3.7 LEARN Regional Network and Connected Aggregates ===

a) Contact: Deniz Gurkan, University of Houston College of Technology, dgurkan@uh.edu

b) LEARN to NLR Connection (Fig 2-5) [[BR]]
VLANs from UH, Rice, TAMU and UT-Austin, via the LEARN network, to the NLR PoP in Houston [[BR]]
See [http://groups.geni.net/geni/attachment/ticket/267/GENI_MS2ab_LEARN_Nov16.pdf ] for details [[BR]]
Figure 2-5, LEARN to NLR Connection

f) Note: Other
backbone network options are pending: [[BR]]
Internet2 ION - IDC installation is in progress at UH, Rice, TAMU and UT-Austin [[BR]]
NLR C-Wave has a PoP in Houston

=== 5.3.3.8 iGENI (Starlight) Crossconnect ===

a) Contacts: Joe Mambretti j-mambretti@northwestern.edu [[BR]]
Jim Chen [[BR]]
Fei Yeh [[BR]]

b) iGENI to Backbone Networks and to International Testbeds (Fig 3) [[BR]]
VLANs from the Starlight L2 crossconnect service in Chicago, to multiple backbone networks, including NLR FrameNet, Internet2 GENI WAVE and NLR C-Wave. [[BR]]
VLANs from the Starlight L2 crossconnect service in Chicago, to multiple international testbeds, including Japan, Korea, South America and Europe. [[BR]]
See [[http://www.startap.net/starlight/NETWORKS/ Starlight facility]] for more detail [[BR]]
See [[http://www.startap.net/starlight/ENGINEERING/SL-TransLight_5.06.pdf Starlight GLIF GOLE, May 2006]] for more detail [[BR]]
Figure 3-1, iGENI Connections to Backbone Networks and to International Testbeds [[BR]]
Figure 3-2, StarLight GLIF GOLE Configuration [[BR]]
[[Image(SL-TransLight_5.06.jpg, 80%)]]

=== 5.3.3.9 Combined Configuration ===

The combined configuration of currently planned connections between Cluster D aggregates, including the iGENI (Starlight) crossconnect, to the backbone networks and to international testbeds, is shown in Figure 4. [[BR]]
Figure 4, Connections from all Cluster D Aggregates to Backbone Networks and International Testbeds

== 5.3.4 First Phase: Connect Cluster D Aggregates via NLR FrameNet Service ==

The first phase will connect Cluster D aggregates via the NLR FrameNet service, as shown in Figure 5. [[BR]]
Figure 5, Connections Between Cluster D Aggregates via NLR FrameNet Service

Use NLR FrameNet, since a majority of Cluster D aggregates can connect with it. [[BR]]
Paths will be pre-configured via NLR FrameNet, regionals and campus networks. [[BR]]
A "Cluster D Backbone Aggregate" will be created to set up and assign these paths to experimenters.
[[BR]]
The "Cluster D Backbone Aggregate" would have one or more pre-configured point-to-point paths between all (or perhaps a subset) of the Cluster D endpoint aggregates. These would be assigned to an experimenter upon request. Each path would have a specific identifier (e.g., port/VLAN tag) at the interface to the endpoint aggregate. [[BR]]
Possible extension: Use Sherpa, and possibly tools for the regional networks, to dynamically add paths when more are required. [[BR]]
At each endpoint aggregate, connections are made between the endpoint of the backbone path and the designated endpoint aggregate component. Typically, this would be done by mapping combinations of VLAN ports/tags in an Ethernet switch dedicated to the endpoint aggregate. [[BR]]
Thus, ordering of VLAN connections is likely to proceed from the backbone out to the endpoints, as in the demo on 7/7/09 by the ORCA/BEN project, which: [[BR]]
Used Sherpa to pre-configure VLANs through NLR, [[BR]]
And then mapped VLAN tags near the endpoint nodes. [[BR]]
See [[http://groups.geni.net/geni/attachment/wiki/ORCABEN/071509c%20%20ORCA_BEN%20demo.pdf Figures 6-2, 6-3 and 6-4]]. [[BR]]
Similarly, the suggested approach to connectivity specification at [[http://groups.geni.net/syseng/wiki/RspecConnectivity RSpec Connectivity]] was: “we give the backbone priority in selecting a network's VLAN ID and aggregates must map the ID within their local network by whatever means is most convenient for them.
This way, the experimenter need only know what backbone the aggregate is connected to, and can design the network from there.” [[BR]]

== 5.3.5 Second Phase: Use iGENI Crossconnect to Interconnect Backbones and to Connect to International Testbeds ==

The second phase will use the iGENI crossconnect to interconnect backbones, and to connect endpoint aggregates to international testbeds, as shown in Figure 6. [[BR]]
Figure 6, Use of iGENI Crossconnect to Interconnect Backbones and to Connect to International Testbeds

The iGENI (Starlight) crossconnect service includes the Starlight L2 crossconnect service in Chicago, which connects to multiple backbone networks, including NLR FrameNet, Internet2 GENI WAVE and NLR C-Wave, and to multiple international testbeds, including Japan, Korea, South America and Europe. [[BR]]
See [[http://www.startap.net/starlight/NETWORKS/ Starlight facility]] for more detail [[BR]]
See [[http://www.startap.net/starlight/ENGINEERING/SL-TransLight_5.06.pdf Starlight GLIF GOLE, May 2006]] for more detail [[BR]]

The Starlight crossconnect service should be able to bridge connections between multiple backbone networks, to enable connection of: [[BR]]
a Cluster D endpoint aggregate via Internet2 GENI WAVE, [[BR]]
a Cluster D endpoint aggregate via NLR C-Wave, [[BR]]
an aggregate from another cluster, i.e., a Cluster C (ProtoGENI) aggregate, via Internet2 GENI WAVE. [[BR]]

The Starlight crossconnect service should also be able to bridge connections between a backbone network and an international testbed, to enable connection of: [[BR]]
a Cluster D endpoint aggregate to an international testbed, [[BR]]
an aggregate from another cluster, i.e., a Cluster C (ProtoGENI) aggregate, to an international testbed.
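Conceptually, what the crossconnect does for a multi-point connection is place one tagged interface per network into a single L2 forwarding domain, so a frame entering from any backbone is switched to the others, with each side keeping its own local VLAN ID. A rough sketch of the idea on a Linux host (the actual Starlight service is a managed switch fabric, not a Linux box; all interface names and VLAN IDs below are hypothetical):

```shell
# Hypothetical ports: eth0 faces NLR FrameNet (circuit on VLAN 3001),
# eth1 faces the Internet2 GENI wave (VLAN 920), and eth2 faces an
# international link (VLAN 2504).

ip link add link eth0 name eth0.3001 type vlan id 3001
ip link add link eth1 name eth1.920  type vlan id 920
ip link add link eth2 name eth2.2504 type vlan id 2504

# One bridge joins all three circuits into a single multi-point L2 domain;
# tags are stripped on ingress and re-added per port on egress.
ip link add name br-xconn type bridge
for port in eth0.3001 eth1.920 eth2.2504; do
    ip link set "$port" master br-xconn
    ip link set dev "$port" up
done
ip link set dev br-xconn up
```

The same mechanism, restricted to two ports, is what the endpoint-side tag mapping in the first phase amounts to.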