
GPO coordinator: Josh Karlin

Cluster D System Engineer: Harry Mussman hmussman@bbn.com

Current version represents the plan established at the GEC-6 meeting on 11/17/09

5.3 VLAN Connectivity Plan

5.3.1 Goals

The following goals have been identified for the Cluster D connectivity in Spiral 2:

a) Cluster D entities (aggregates and the clearinghouse) need reliable, high-bandwidth L3 (IP) and L2 (VLAN) connectivity between each other to meet their experimentation goals for Spiral 2.

b) L3 will be used to reach publicly available services via public IP addresses and/or DNS names.

c) L2 (VLAN) connectivity will be used to join aggregate (and possibly other) resources via pt-to-pt or multi-point VLANs.

d) Pt-to-pt or multi-point VLANs will be used between entities involved in an experiment, where each entity is assigned a unique private IP address for the duration of the experiment. This has proven to be a very useful and convenient way to group entities for an experiment, e.g., the OMF network arrangement.
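To make d) concrete, here is a minimal sketch (assuming a Linux host with iproute2 and root privileges; the uplink name, VLAN tag, and private address are hypothetical, not values from this plan) of how an entity might bring up an 802.1Q subinterface and its per-experiment private IP address:

{{{#!python
# Minimal sketch: join a pt-to-pt or multi-point experiment VLAN by creating
# an 802.1Q subinterface on the uplink and assigning the private IP address
# allocated for the experiment. Assumes a Linux host with iproute2 and root
# privileges; "eth1", tag 3101, and 10.42.0.5/24 are hypothetical examples.
import subprocess

def join_experiment_vlan(uplink="eth1", tag=3101, addr="10.42.0.5/24"):
    vif = "%s.%d" % (uplink, tag)
    for cmd in (
        ["ip", "link", "add", "link", uplink, "name", vif,
         "type", "vlan", "id", str(tag)],         # tagged subinterface
        ["ip", "addr", "add", addr, "dev", vif],  # per-experiment private IP
        ["ip", "link", "set", vif, "up"],
    ):
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    join_experiment_vlan()
}}}

Tearing the subinterface down at the end of the experiment frees the private address for reuse by the next experiment.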

e) Pt-to-pt or multi-point VLANs may be used between entities involved in an experiment, where each entity utilizes an L3 protocol that is not IP.

f) Two backbone networks have donated resources to GENI for Spiral 2, and these will be utilized to provide reliable, high-bandwidth L3 and L2 connectivity: http://groups.geni.net/geni/wiki/GeniInternet2 Internet2 and http://groups.geni.net/geni/wiki/GeniNationalLambdaRail NLR

g) Each Cluster D entity (aggregate or clearinghouse) must reach Internet2 and/or NLR to get L3 and L2 connectivity. It is expected that this will include connections through a campus network and a regional network to Internet2 or NLR (but not typically both).

h) It is important that all Cluster D entities have L3 and L2 connectivity with each other in Spiral 2; if not, then the combinations of joint experiments would be limited. This implies that connectivity is needed between Internet2 and NLR, which is not currently supported.

i) It is important that the complexities of providing the connectivity be hidden from the experimenter, who should be able to request an L3 (IP) or L2 (VLAN) connection by dealing primarily with their own entity.

j) It should be possible to extend the Cluster D connectivity plan to include resources in the other GENI clusters, with the eventual goal of reliable, high-bandwidth L3 (IP) and L2 (VLAN) connectivity between all GENI entities.

k) The resulting connectivity plan should support this roadmap for Cluster D VLAN capabilities: http://groups.geni.net/geni/attachment/wiki/ClusterD/vlan.jpg roadmap

5.3.2 Backbone Options

The NLR and Internet2 backbone networks are the current options for carrying VLANs between GENI Cluster D aggregates.

5.3.2.1 Internet2

a) See http://www.internet2.edu/ Internet2

Contact: ?

b) L3 service

Uses public IP addresses, but only connects to other endpoints that are connected to Internet2. Details? Reach?

c) Pre-configured VLANs on the GENI “wave”, between certain Internet2 Wave System nodes. See http://www.internet2.edu/waveco/ Internet2 WAVE

This is the approach being used by the ProtoGENI backbone (multi-point).
See http://www.protogeni.net/trac/protogeni/wiki/Backbone ProtoGENI Backbone
See http://www.protogeni.net/trac/protogeni/wiki/BackboneNode ProtoGENI Backbone Node

d) Direct connections to the Internet2 Wave System nodes.

This is the approach being used by ProtoGENI backbone nodes.
See http://www.protogeni.net/trac/protogeni/wiki/Backbone ProtoGENI Backbone
See http://www.protogeni.net/trac/protogeni/wiki/BackboneNode ProtoGENI Backbone Node
More details on this can be found at ? It states that the ProtoGENI switches (shown below in the IP table) are each attached to the GENI L2 wave but also to I2's Layer 3 network at 1Gb/s.

e) Tunnel to Internet2 Wave System nodes.

From ?: Those aggregates that cannot get on the wave but are attached to I2 might be able to configure static IP tunnels (such as Ethernet over GRE) to one of the ProtoGENI switches attached to I2's Layer 3 services.
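A minimal sketch of such a static tunnel, assuming a Linux endpoint (Linux's gretap link type carries Ethernet frames, including 802.1Q-tagged ones, inside GRE over a routed IP path); the addresses and device name below are hypothetical:

{{{#!python
# Minimal sketch: static Ethernet-over-GRE tunnel from an aggregate that is
# attached to I2's Layer 3 service toward a ProtoGENI switch. Assumes Linux
# with iproute2; 198.51.100.10 / 203.0.113.20 are hypothetical endpoints.
import subprocess

def build_eogre_tunnel(local="198.51.100.10", remote="203.0.113.20",
                       dev="geni-gretap"):
    # "gretap" encapsulates full Ethernet frames in GRE, so VLANs can ride
    # the tunnel across the routed network.
    subprocess.run(["ip", "link", "add", dev, "type", "gretap",
                    "local", local, "remote", remote], check=True)
    subprocess.run(["ip", "link", "set", dev, "up"], check=True)

if __name__ == "__main__":
    build_eogre_tunnel()
}}}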

f) Switched VLANs using Internet2 ION (DCN) service.

The Internet2 ION (DCN) service provides switched VLANs, but there are two issues that suggest it will not be utilized for GENI traffic. First, its use has not been included in the package of services donated to GENI, and thus there may be a cost to the GENI aggregate now or in the future. Second, it is available in only a limited number of Internet2 PoPs, and these may be difficult to reach. Currently, ORCA plans to provide an interface into the Internet2 IDC by 2/1/10.

5.3.2.2 NLR

a) See http://www.nlr.net/ NLR

Contact: Kathy Benninger, staff engineer for NLR, benninger@psc.edu

b) L3 service. See http://www.nlr.net/packetnet.php PacketNet

PacketNet is a bare-bones service providing high-speed access to other endpoints that are also on PacketNet. Details? Reach?

c) Switched (or pre-configured) VLANs using FrameNet service. See http://www.nlr.net/framenet.php NLR FrameNet

Pt-to-pt. Multi-point? If not now, when?
Setup is done using the Sherpa service. FrameNet/Sherpa is used to create dedicated static VLANs between sites with NLR access. Exactly what is the interface to Sherpa?
BEN has used Sherpa to pre-configure VLANs. See http://groups.geni.net/geni/attachment/wiki/ORCABEN/071509c%20%20ORCA_BEN%20demo.pdf Figure 6-4
ORCA plans to provide an interface into the Sherpa GUI by 2/1/10.
NLR does *not* provide any sort of VLAN translation or tunneling support, so you are entirely responsible for arriving at your NLR endpoint with your VLANs intact, as it were. This seems likely to cause problems for some sites, given NLR's relatively small number of endpoints. It has certainly caused some problems for us.
RENCI controls their access all the way to their NLR switch port, which gives them a lot more flexibility. (And note that, since they can be flexible and do arbitrary VLAN translation on their own equipment, people who want to talk to them, like us, don't have to be as flexible.)
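For illustration, a minimal sketch of the kind of tag translation a site with control of its own equipment can do (assuming a Linux box in the path; all interface names and tags are hypothetical): frames carrying the backbone-facing tag are untagged by one 802.1Q subinterface, bridged, and re-tagged on egress by another.

{{{#!python
# Minimal sketch: VLAN-tag translation by bridging two 802.1Q subinterfaces.
# Frames tagged 3500 arriving on eth0 leave eth1 tagged 101, and vice versa.
# Assumes Linux with iproute2 and root privileges; names/tags hypothetical.
import subprocess

def run(*cmd):
    subprocess.run(list(cmd), check=True)

def translate_vlan(backbone_if="eth0", backbone_tag=3500,
                   local_if="eth1", local_tag=101):
    b_vif = "%s.%d" % (backbone_if, backbone_tag)
    l_vif = "%s.%d" % (local_if, local_tag)
    run("ip", "link", "add", "link", backbone_if, "name", b_vif,
        "type", "vlan", "id", str(backbone_tag))
    run("ip", "link", "add", "link", local_if, "name", l_vif,
        "type", "vlan", "id", str(local_tag))
    run("ip", "link", "add", "xlate-br", "type", "bridge")
    for dev in (b_vif, l_vif, "xlate-br"):
        run("ip", "link", "set", dev, "up")
    for dev in (b_vif, l_vif):
        run("ip", "link", "set", dev, "master", "xlate-br")

if __name__ == "__main__":
    translate_vlan()
}}}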

d) Switched (or pre-configured) VLANs via C-Wave service.

An NLR press release includes this in its offering to GENI. Details? How?

5.3.2.3 Combined Options

The following backbone network options are available for carrying VLANs between GENI endpoint aggregates:

NLR FrameNet
Internet2 GENI WAVE
NLR C-Wave

Figure 1 Backbone Network Options

5.3.3 Connecting Cluster D Aggregates to a Backbone Network

5.3.3.1 Template

a) PI or staff contact:

b) Configuration drawing

c) Testbed

Project:
PI or staff contact:
Equipment for termination:
Range of VLAN tags?
How to select VLAN tag?

d) Campus network

Campus:
IT contact:
How are VLANs carried? (direct, tunnel)
Support OpenFlow?
Range of VLAN tags?
How to select VLAN tag?

e) Regional access network

Network:
Staff contact:
How are VLANs carried? (direct, tunnel)
Range of VLAN tags?
How to select VLAN tag?

f) Backbone network

Network: (I2 GENI wave, I2 DCN, NLR FrameNet)
Staff contact:
How are VLANs accepted? (direct, tunnel)
How are VLANs carried? (pt-to-pt, multi-point)
Range of VLAN tags?
How to select VLAN tag?

g) References

5.3.3.2 ORCA/BEN and IMF Aggregates

a) Contact: Chris Heermann ckh@renci.org (RENCI)

b) RENCI, UNC-CH and Duke to NLR Configuration (Fig 2-1)

VLANs from RENCI BEN PoP, via fiber, to NLR FrameNet
VLANs from UNC-CH BEN PoP, via BEN, via RENCI BEN PoP, to NLR FrameNet
VLANs from Duke BEN MPoP, via Duke Campus OIT, to NLR FrameNet

g) References

See ORCA/BEN Capabilities
See ORCA/BEN Spiral 2 Connectivity
See Fig 6-1 and 6-4

Figure 6-4, ORCA/BEN Connectivity for Demo

5.3.3.3 BBN Aggregate

a) Contact: Josh Karlin

b) BBN to NLR Configuration (Fig 2-2)

VLANs from BBN, via NoX, to NLR FrameNet
Note: An aggregate with a cluster of VMs is included at this site.

d) Current

L3 service from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to NLR.
VLANs from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to NLR.
See http://groups.geni.net/syseng/wiki/OpsLabConnectivity

Pending:

L3 service from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to Internet2
VLANs from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to Internet2.

g) References

5.3.3.4 DOME , ViSE and Cloud-Control Aggregates

a) Contacts:

David Irwin
Brian Lynn blynn@cs.umass.edu
Rick Tuthill tuthill@oit.umass.edu

b) UMass to NLR Configuration (Fig 2-3)

VLANs from control plane server geni.cs.umass.edu in UMass Amherst CS building, via UMass Amherst campus OIT, via Northern Crossroads (NoX), via handoff located at 300 Bent St in Cambridge, MA, to NLR.
Note: The current connection runs from UMass to BBN, temporarily.
Note: Per Kathy Benninger, staff engineer for NLR, benninger@psc.edu, on 11/17/09: UMass should be able to connect to NLR for GENI per the GENI agreement, even though they are not members of NLR.
Note: Cloud Control aggregate will require special connectivity arrangements to cloud resources.

c) Campus network:

Layer 3 Connectivity: IP access will be through UMass Amherst's campus network, using their public IP addresses.

An MOU was agreed upon with the UMass Office of Information Technology (OIT) regarding connecting Internet2 to the DOME and ViSE servers, along with VLAN access. The OIT contact is Rick Tuthill, tuthill@oit.umass.edu. The agreements include:
1) CS shall order OIT-provisioned network jacks in appropriate locations in the Computer Science building using normal OIT processes. (completed)
2) OIT shall configure these jacks into a single VLAN that shall be extended over existing OIT-managed network infrastructure between the Computer Science building and the Northern Crossroads (NoX) Internet2 Gigapop located at 300 Bent St in Cambridge, MA.
3) OIT agrees to provide a single VLAN for “proof-of-concept” testing and initial GENI research activities.
4) The interconnection of the provided VLAN between the NoX termination point and other Internet2 locations remains strictly the province of the CS researchers and the GENI organization.
5) This service shall be provided by OIT at no charge to CS for the term of one year in the interest of OIT learning more about effectively supporting network-related research efforts on campus.

In an email dated September 28th, 2009, Rick Tuthill of UMass-Amherst OIT updated us on the status of this connection, as follows:
1) The two existing ports at the CS building in room 218A and room 226 and all intermediary equipment are now configured to provide layer-2 VLAN transport from these network jacks to the UMass/Northern Crossroads (NoX) handoff at 300 Bent St in Cambridge, MA.
2) The NoX folks are not doing anything with this research VLAN at this time. They need further guidance from GENI on exactly what they’re supposed to do with the VLAN.
3) Also, once IP addressing is clarified for this VLAN, we’ll need to configure some OIT network equipment to allow the selected address range(s) to pass through.

g) References:

See DOME Spiral 1 Connectivity

5.3.3.5 Kansei Aggregates

a) Contacts:

Wenjie Zeng, Ohio State University, zengw@cse.ohio-state.edu, http://www.cse.ohio-state.edu/~zengw/
Hongwei Zhang, Wayne State University, hzhang@cs.wayne.edu, http://www.cs.wayne.edu/~hzhang/

b) Ohio State to NLR Configuration (Fig 2-4)

VLANs from Ohio State, via ?, to NLR
Note: VLANs from Wayne State, via ?, to NLR not yet defined

5.3.3.6 OKGems Aggregate

a) Contact: Xiaolin (Andy) Li, Computer Science Department, Oklahoma State University (OSU) andyli

Note: VLANs from Oklahoma State, via ?, to NLR not yet defined

5.3.3.7 LEARN Regional Network and Connected Aggregates

a) Contact: Deniz Gurkan University of Houston College of Technology dgurkan@uh.edu

b) LEARN to NLR configuration drawing (Fig 2-5)

VLANs from UH, Rice, TAMU and UT-Austin, via LEARN network, to NLR PoP in Houston
See http://groups.geni.net/geni/attachment/ticket/267/GENI_MS2ab_LEARN_Nov16.pdf for details

f) Note: Other backbone network options are pending

Internet2 ION: IDC installation is in progress at UH, Rice, TAMU and UT-Austin
NLR C-Wave has a PoP in Houston

5.3.3.8 iGENI (Starlight) Crossconnect

a) Contacts:

Joe Mambretti, Jim Chen, Fei Yeh

b) iGENI to Backbone Networks and to International Testbeds (Fig 3)

VLANs from the Starlight L2 crossconnect service in Chicago, to multiple backbone networks, including NLR FrameNet, Internet2 GENI WAVE and NLR C-Wave.
See http://www.startap.net/starlight/NETWORKS/ Starlight
See http://www.startap.net/starlight/ENGINEERING/SL-TransLight_5.06.pdf Starlight GLIF GOLE May, 2006

5.3.3.9 Combined Configuration

See Fig 4

5.3.4 First Phase: Connect Cluster D Aggregates via NLR FrameNet Service

See Fig 5

Use NLR FrameNet, since a majority of Cluster D aggregates can connect with it.

Implement pre-configured (or dynamic) connectivity via NLR and regionals

Implement dynamic connections at each aggregate.

Ordering of VLAN connections is likely to proceed from the backbone out to the endpoints:

In a demo on 7/7/09, the ORCA/BEN project used Sherpa to pre-configure VLANs through NLR, and then mapped VLAN tags near the endpoint nodes.
See http://groups.geni.net/geni/attachment/wiki/ORCABEN/071509c%20%20ORCA_BEN%20demo.pdf Figures 6-2, 6-3 and 6-4.

A suggested approach to connectivity specification at http://groups.geni.net/syseng/wiki/RspecConnectivity RSpec Connectivity proposes that “we give the backbone priority in selecting a network's VLAN ID and aggregates must map the ID within their local network by whatever means is most convenient for them. This way, the experimenter need only know what backbone the aggregate is connected to, and can design the network from there.”
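A minimal sketch of the bookkeeping this scheme implies at an aggregate (the local tag range is an illustrative assumption, not part of the RSpec proposal): the backbone picks the network's VLAN ID, and the aggregate maps it onto whatever local tag is free.

{{{#!python
# Minimal sketch: map backbone-assigned VLAN IDs onto locally usable tags.
# The pool 1000-1099 is a hypothetical range of tags that the aggregate's
# own switches are free to use.
class LocalVlanMapper:
    def __init__(self, local_tags=range(1000, 1100)):
        self.free = list(local_tags)   # tags still available locally
        self.mapping = {}              # backbone VLAN ID -> local tag

    def map_backbone_tag(self, backbone_tag):
        """Return (allocating if needed) the local tag for a backbone ID."""
        if backbone_tag not in self.mapping:
            if not self.free:
                raise RuntimeError("no local VLAN tags left")
            self.mapping[backbone_tag] = self.free.pop(0)
        return self.mapping[backbone_tag]

    def release(self, backbone_tag):
        """On teardown, return the local tag to the pool."""
        self.free.append(self.mapping.pop(backbone_tag))

mapper = LocalVlanMapper()
local = mapper.map_backbone_tag(3500)  # e.g. 1000; translated as in 5.3.2.2
}}}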

Dynamic setup would allow additional flexibility, with some complications.

5.3.5 Second Phase: Interconnect Backbones via iGENI Crossconnect

See Fig 6

The iGENI (Starlight) crossconnect includes the Starlight L2 crossconnect service in Chicago, which connects to multiple backbone networks, both domestic and international, including NLR and Internet2.
See http://www.startap.net/starlight/NETWORKS/ Starlight
See http://www.startap.net/starlight/ENGINEERING/SL-TransLight_5.06.pdf Starlight GLIF GOLE May, 2006
The Starlight service should be able to bridge connections between multiple backbone networks, allowing access to distant testbeds. It may be able to bridge VLANs between NLR and Internet2, if allowed.
