
Version 83 (modified by hmussman@bbn.com, 10 years ago)


GPO coordinator: Josh Karlin

Cluster D System Engineer: Harry Mussman hmussman@bbn.com

Current version represents the plan established at the GEC-6 meeting on 11/17/09

5.3 VLAN Connectivity Plan

5.3.1 Goals

The following goals have been identified for the Cluster D connectivity in Spiral 2:

a) Cluster D entities (aggregates and the clearinghouse) need reliable, high-bandwidth L3 (IP) and L2 (VLAN) connectivity between each other to meet their experimentation goals for Spiral 2.

b) L3 will be used to reach publicly-available services via public IP addresses and/or DNS names.

c) L2 (VLAN) connectivity will be used to join aggregate (and possibly other) resources via pt-to-pt or multi-point VLANs.

d) Pt-to-pt or multi-point VLANs will be used between entities involved in an experiment, where each entity is assigned a unique private IP address for the duration of the experiment. This has proven to be a very useful and convenient way to group entities for an experiment, e.g., the OMF network arrangement.

e) Pt-to-pt or multi-point VLANs may be used between entities involved in an experiment, where each entity utilizes a L3 protocol that is not IP.

f) Two backbone networks have donated resources to GENI for Spiral 2, and these will be utilized to provide reliable, high-bandwidth L3 and L2 connectivity: http://groups.geni.net/geni/wiki/GeniInternet2 Internet2 and http://groups.geni.net/geni/wiki/GeniNationalLambdaRail NLR

g) Each Cluster D entity (aggregate or clearinghouse) must reach Internet2 and/or NLR to get L3 and L2 connectivity. It is expected that this will include connections through a campus network and a regional network to Internet2 or NLR (but not typically both).

h) It is important that all Cluster D entities have L3 and L2 connectivity with each other in Spiral 2; if not, then the combinations of joint experiments would be limited. This implies connectivity is needed between Internet2 and NLR, which is not currently supported.

i) It is important that the complexities of providing the connectivity be hidden from the experimenter, who should be able to request a L3 (IP) or L2 (VLAN) connection by dealing primarily with their own entity.

j) It should be possible to extend the Cluster D connectivity plan to include resources in the other GENI clusters, with the eventual goal of reliable, high-bandwidth L3 (IP) and L2 (VLAN) connectivity between all GENI entities.

k) The resulting connectivity plan should support this roadmap for Cluster D VLAN capabilities: http://groups.geni.net/geni/attachment/wiki/ClusterD/vlan.jpg roadmap

5.3.2 Backbone Options

The NLR and Internet2 backbone networks are the current options for carrying VLANs between GENI Cluster D aggregates.

5.3.2.1 Internet2

a) See http://www.internet2.edu/ Internet2

Contact: ?

b) L3 service

Uses public IP addresses, but only connects to other endpoints that are connected to Internet2. Details? Reach?

c) Pre-configured VLANs on GENI “wave”, between certain Internet2 Wave System nodes. See http://www.internet2.edu/waveco/ Internet2 WAVE

This is the approach being used by the ProtoGENI backbone. See http://www.protogeni.net/trac/protogeni/wiki/Backbone ProtoGENI Backbone and http://www.protogeni.net/trac/protogeni/wiki/BackboneNode ProtoGENI Backbone Node. Multi-point.

d) Direct connections to the Internet2 Wave System nodes.

This is the approach being used by the ProtoGENI backbone nodes. See http://www.protogeni.net/trac/protogeni/wiki/Backbone ProtoGENI Backbone and http://www.protogeni.net/trac/protogeni/wiki/BackboneNode ProtoGENI Backbone Node. More details on this can be found at ? It states that the ProtoGENI switches (shown below in the IP table) are each attached to the GENI L2 wave, but also to I2's Layer 3 network at 1Gb/s.

e) Tunnel to Internet2 Wave System nodes.

From ?: Those aggregates that cannot get on the wave but are attached to I2 might be able to configure static IP tunnels (such as Ethernet over GRE) to one of the ProtoGENI switches attached to I2's Layer 3 services.

f) Switched VLANs using Internet2 ION (DCN) service.

The Internet2 ION (DCN) service provides switched VLANs, but there are two issues that suggest it will not be utilized for GENI traffic. First, its use has not been included in the package of services donated to GENI, and thus there may be a cost to the GENI aggregate now, or in the future. Second, it is available in only a limited number of Internet2 PoPs, and these may be difficult to reach. Currently, ORCA plans to provide an interface into the Internet2 IDC by 2/1/10.

5.3.2.2 NLR

a) See http://www.nlr.net/ NLR

Contact: Kathy Benninger, staff engineer for NLR, benninger@psc.edu

b) L3 service See http://www.nlr.net/packetnet.php PacketNet

PacketNet is a barebones service providing high-speed access to other endpoints which happen to be on PacketNet. Details? Reach?

c) Switched (or pre-configured) VLANs using FrameNet service. See http://www.nlr.net/framenet.php NLR FrameNet

Location of PoPs:

NLR-SEAT Seattle, WA
NLR-SUNN Sunnyvale, CA
NLR-LOSA Los Angeles, CA
NLR-ALBU Albuquerque, NM
NLR-ELPA El Paso, TX
NLR-HOUS Houston, TX Serves LEARN
NLR-TULS Tulsa, OK Serves OKGems?
NLR-BATO Baton Rouge, LA
NLR-JACK Jacksonville, FL
NLR-KANS Kansas City, KS
NLR-CHIC Chicago, IL Serves Ohio State? iGENI?
NLR-CLEV Cleveland, OH Serves Ohio State? Wayne State?
NLR-BOST Boston, MA Serves BBN, UMassAmherst
NLR-NEWY New York, NY
NLR-PITT Pittsburgh, PA
NLR-PHIL Philadelphia, PA
NLR-WASH Washington, DC
NLR-RALE Raleigh, NC Serves RENCI, Duke
NLR-ATLA Atlanta, GA

Current:

Pt-to-pt.
Ethernet interface has a tag, but FrameNet does *not* provide any sort of VLAN translation or tunnelling support; thus, the same tag number has to be used at both ends of the connection. There are approximately 4000 tags in total. RENCI can assign up to 100 tags.
Setup using Sherpa service.
BEN has used Sherpa to pre-configure VLANs. See http://groups.geni.net/geni/attachment/wiki/ORCABEN/071509c%20%20ORCA_BEN%20demo.pdf Figure 6-4
ORCA plans to provide an interface into the Sherpa GUI by 2/1/10
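Because FrameNet carries a tag end-to-end unchanged, provisioning a path reduces to finding a tag that is free at both edge ports. A minimal sketch of that selection step (function name and tag ranges are hypothetical illustrations, not part of Sherpa or ORCA):

```python
def pick_common_tag(free_at_a, free_at_b):
    """Return the lowest VLAN tag free at both endpoints, or None.

    FrameNet does no VLAN translation, so the same tag must be
    unused at both edge ports before the path is provisioned.
    """
    common = sorted(set(free_at_a) & set(free_at_b))
    return common[0] if common else None

# Example: RENCI may assign tags from a 100-tag block; the far end
# has its own free list (ranges here are illustrative only).
renci_free = range(1700, 1800)
far_end_free = [1712, 1750, 2001]
print(pick_common_tag(renci_free, far_end_free))  # -> 1712
```

If the intersection is empty, no pt-to-pt FrameNet path can be set up until one side frees a tag, which is exactly why tag-translation (and later q-in-q) support matters.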

Soon:

NLR Sherpa developers will be rolling out multi-point VLANs in the next month (January, 2010) or so, followed by q-in-q support in the next several months (March, 2010).

d) Switched (or pre-configured) VLANs via C-Wave service.

NLR press release includes this as an offering to GENI
Location of PoPs?
Pt-to-pt?
Q? QinQ?
Setup manually by Kevin McGratin at Cisco.
Uses Cisco 650 equipment.

5.3.2.3 Combined Options

The following backbone network options are available for carrying VLANs between GENI endpoint aggregates:

NLR FrameNet
Internet2 GENI WAVE
NLR C-Wave
Internet2 ION (DCN) (not recommended, and not shown in figure)

Figure 1, Backbone Network Options

5.3.3 Connecting Cluster D Aggregates to a Backbone Network

5.3.3.1 Template

a) PI or staff contact:

b) Configuration drawing

c) Testbed

Project:
PI or staff contact:
Equipment for termination:
Range of VLAN tags?
How to select VLAN tag?

d) Campus network

Campus:
IT contact:
How are VLANs carried? (direct, tunnel)
Support OpenFlow?
Range of VLAN tags?
How to select VLAN tag?

e) Regional access network

Network:
Staff contact:
How are VLANs carried? (direct, tunnel)
Range of VLAN tags?
How to select VLAN tag?

f) Backbone network

Network: (I2 GENI wave, I2 DCN, NLR Framenet, NLR C-Wave)
Staff contact:
How are VLANs accepted? (direct, tunnel)
How are VLANs carried? (pt-to-pt, multi pt)
Range of VLAN tags?
How to select VLAN tag?

g) References

5.3.3.2 ORCA/BEN and IMF Aggregates

a) Contact: Chris Heermann ckh@renci.org (RENCI)

b) RENCI, UNC-CH and Duke to NLR Connections

VLANs from RENCI BEN PoP, via fiber, to NLR FrameNet
VLANs from UNC-CH BEN PoP, via BEN, via RENCI BEN PoP, to NLR FrameNet
VLANs from Duke BEN MPoP, via Duke Campus OIT, to NLR FrameNet

Figure 2-1, RENCI, UNC-CH and Duke Connections to NLR FrameNet

g) References

See ORCA/BEN Capabilities
See ORCA/BEN Spiral 2 Connectivity
See Fig 6-1 and 6-4

Figure 6-4, ORCA/BEN Connectivity for Demo

5.3.3.3 BBN Aggregate

a) Contacts: Josh Karlin and Chaos Golubitsky

b) BBN to NLR Connection

VLANs from BBN, via NOX, to NLR FrameNet
NLR FrameNet physical port is bost.layer2.nlr.net, which NoX has connected to a port reserved for our traffic. BBN has a SHERPA account associated with the port.
An aggregate with a cluster of VMs is included at this site.

Figure 2-2, BBN to NLR Connection

d) Campus

Current

L3 service from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to NLR.
VLANs from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to NLR.
See http://groups.geni.net/syseng/wiki/OpsLabConnectivity

Pending:

L3 service from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to Internet2 When?
VLANs from BBN GENI ops lab in Cambridge (CamGENI), via Northern Crossroads (NoX), to Internet2. When?

g) References

5.3.3.4 DOME , ViSE and Cloud-Control Aggregates

a) Contacts:

David Irwin
Brian Lynn blynn@cs.umass.edu
Rich Tuthill tuthill@oit.umass.edu
Emmanuel Cecchet cecchet@cs.umass.edu

b) UMass Amherst to NLR Connection (Fig 2-3)

VLANs from control plane server geni.cs.umass.edu in UMass Amherst CS building, via UMass Amherst campus OIT, via Northern Crossroads (NoX), via handoff located at 300 Bent St in Cambridge, MA, to NLR.

Current connection runs from UMass to BBN, temporarily.
VLAN trunk at UMass with tag=533
VLAN trunk at BBN with tag=533

Per Kathy Benninger, staff engineer for NLR, benninger@psc.edu, on 11/17/09: UMass should be able to connect to NLR for GENI per GENI agreement, even though they are not members of NLR.
Note: Cloud Control aggregate will require special connectivity arrangements to cloud resources.

Figure 2-3, UMass Amherst to NLR Connection

c) Campus network:

Layer 3 Connectivity: IP access will be through UMass Amherst's campus network, using their public IP addresses.

An MOU was agreed upon with the UMass Office of Information Technology (OIT) regarding connecting Internet2 to the DOME and ViSE servers, along with VLAN access. The OIT contact is Rick Tuthill, tuthill@oit.umass.edu. The agreements include:

1) CS shall order OIT-provisioned network jacks in appropriate locations in the Computer Science building using normal OIT processes. (completed)
2) OIT shall configure these jacks into a single VLAN that shall be extended over existing OIT-managed network infrastructure between the Computer Science building and the Northern Crossroads (NoX) Internet2 Gigapop located at 300 Bent St in Cambridge, MA.
3) OIT agrees to provide a single VLAN for “proof-of-concept” testing and initial GENI research activities.
4) The interconnection of the provided VLAN between the NoX termination point and other Internet2 locations remains strictly the province of the CS researchers and the GENI organization.
5) This service shall be provided by OIT at no charge to CS for the term of one year in the interest of OIT learning more about effectively supporting network-related research efforts on campus.

In an email dated September 28th, 2009, Rick Tuthill of UMass-Amherst OIT updated us on the status of this connection, as follows:

1) The two existing ports at the CS building in room 218A and room 226 and all intermediary equipment are now configured to provide layer-2 VLAN transport from these network jacks to the UMass/Northern Crossroads (NoX) handoff at 300 Bent St in Cambridge, MA.
2) The NoX folks are not doing anything with this research VLAN at this time. They need further guidance from GENI on exactly what they’re supposed to do with the VLAN.
3) Also, once IP addressing is clarified for this VLAN, we’ll need to configure some OIT network equipment to allow the selected address range(s) to pass through.

d) Connection to Amazon Cloud:

See Options and Cost Implications for GENI Network Connectivity to understand the current options and recommendations, as provided by Emmanuel Cecchet.

Overview:

1) Resources allocated on the Amazon EC2 cloud have to be connected with other GENI resources to participate in an experiment. Disk resources (S3 or EBS) can only be accessed from EC2 servers called instances.

2) EC2 instances (servers) are dynamically assigned IP addresses when they are created. A public IP address is available for remote connections and a private IP address is created for internal communications (inside EC2). All network traffic between EC2 instances (inside the same availability zone of the same region) is free. Traffic between resources in different regions is considered Internet traffic. All network exchanges between GENI resources outside of EC2 and EC2 resources are charged.

3) Amazon offers static IPv4 addresses that can be assigned to instances at an additional cost. A customer is limited to 5 such addresses by default, but more can be obtained on demand. An instance first starts with generic public and private IP addresses and can then be remapped to a static IP address (called an Elastic IP).
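The charging rule in point 2 above can be summarized in a toy predicate (function name and endpoint representation are hypothetical; consult Amazon's current pricing for the authoritative rules):

```python
def ec2_traffic_is_charged(src, dst):
    """Toy model of the EC2 charging rule described above.

    Each endpoint is a dict: {"in_ec2": bool, "region": str, "zone": str}.
    Traffic between EC2 instances in the same availability zone of the
    same region is free; cross-region traffic is Internet traffic, and
    any GENI <-> EC2 exchange is charged.
    """
    if src["in_ec2"] and dst["in_ec2"]:
        return not (src["region"] == dst["region"] and src["zone"] == dst["zone"])
    return True  # any exchange with a non-EC2 GENI resource is charged

a = {"in_ec2": True, "region": "us-east-1", "zone": "a"}
b = {"in_ec2": True, "region": "us-east-1", "zone": "a"}
geni = {"in_ec2": False, "region": "", "zone": ""}
print(ec2_traffic_is_charged(a, b))     # -> False (same zone: free)
print(ec2_traffic_is_charged(a, geni))  # -> True (charged)
```

This is why an experiment design that keeps chatty EC2 instances in one availability zone, and minimizes traffic crossing the EC2 boundary, keeps costs down.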

Options:

a) In order for EC2 instances to be part of a VLAN, the simplest solution is to run VPN software such as OpenVPN in the EC2 instances. It is the responsibility of the user to set up that VPN so that it can communicate with the rest of their GENI resources. There is no additional cost for such a setup besides the network traffic charges described in point 2 of the overview above.

b) The Amazon Virtual Private Cloud (VPC) service allows setting up a bridge to extend a VLAN with EC2 resources. Note that this provides only L3 connectivity, not a true L2 VLAN. Amazon VPC provides end-to-end network isolation by utilizing an IP address range specified by the user, and routing all network traffic between the VPC and the user network through an encrypted IPsec VPN. The customer gateway can be either software or hardware. The current documentation only lists Cisco Integrated Services routers running Cisco IOS 12.4 (or later) software and Juniper J-Series routers running JunOS 9.5 (or later) software as compatible devices.

Note: Providing the VPC functionality can only work for a single user (one VPC per AWS account only). This would not allow a broker to manage its resources globally and have multiple concurrent users using a pool of EC2 resources.

Another option, illustrated in Figure 3, would be to have the broker run the customer gateway and act as a bridge to the end-user resources. This option would still have the limitation that all users going through the same broker would be sharing the same VPN on the EC2 side. Having as many AWS accounts as GENI users does not seem practical and would make accounting and billing much more complex.

5.3.3.5 Kansei Aggregates

a) Contacts:

Wenjie Zeng, Ohio State University, zengw@cse.ohio-state.edu, http://www.cse.ohio-state.edu/~zengw/
Mukundan Sridharan, Ohio State University, sridhara@cse.ohio-state.edu, http://www.cse.ohio-state.edu/~sridhara/

Paul Schopis, pschopis@oar.net , http://www.oar.net/network/ , http://www.oar.net/press/media/bios/schopis.shtml

Hongwei Zhang, Wayne State University, hzhang@cs.wayne.edu, http://www.cs.wayne.edu/~hzhang/

b) Ohio State to NLR Connection

VLANs from Ohio State, via OARnet, to NLR

Note: VLANs from Wayne State, via ?, to NLR not yet defined

Figure 2-4, Ohio State to NLR Connection

5.3.3.6 OKGems Aggregate

a) Contact: Xiaolin (Andy) Li, Computer Science Department, Oklahoma State University (OSU) andyli

VLANs from Oklahoma State, via ?, to NLR not yet defined

5.3.3.7 LEARN Regional Network and Connected Aggregates

a) Contact: Deniz Gurkan University of Houston College of Technology dgurkan@uh.edu

b) LEARN to NLR Connection (Fig 2-5)

VLANs from UH, Rice, TAMU and UT-Austin, via LEARN network, to NLR PoP in Houston See http://groups.geni.net/geni/attachment/ticket/267/GENI_MS2ab_LEARN_Nov16.pdf for details

Figure 2-5, LEARN to NLR Connection

f) Note: Other backbone network options are pending

Internet2 ION - IDC installation is in progress at UH, Rice, TAMU and UT-Austin
NLR C-WAVE has a PoP in Houston

5.3.3.8 iGENI (Starlight) Crossconnect

a) Contacts:

Joe Mambretti j-mambretti@northwestern.edu
Jim Chen
Fei Yeh

b) iGENI to Backbone Networks and to International Testbeds (Fig 3)

VLANs from Starlight L2 crossconnect service in Chicago, to multiple backbone networks, including NLR FrameNet, Internet2 GENI WAVE and NLR C-Wave.
VLANs from Starlight L2 crossconnect service in Chicago, to multiple international testbeds, including Japan, Korea, South America and Europe.
See http://www.startap.net/starlight/NETWORKS/ for more detail on Starlight facility
See http://www.startap.net/starlight/ENGINEERING/SL-TransLight_5.06.pdf for more detail on Starlight GLIF GOLE May, 2006

Figure 3-1, iGENI Connections to Backbone Networks and to International Testbeds

Figure 3-2, StarLight GLIF GOLE Configuration

5.3.3.9 Combined Configuration

The combined configuration of currently planned connections between Cluster D aggregates, including the iGENI (Starlight) crossconnect, to the backbone networks and to international testbeds, is shown in Figure 4.

Figure 4, Connections from all Cluster D Aggregates to Backbone Networks and International Testbeds.

5.3.4 First Step: Connect Cluster D Aggregates via NLR FrameNet Service

The first phase will connect Cluster D aggregates via the NLR FrameNet service, as shown in Figure 5.

Figure 5, Connections Between Cluster D Aggregates via NLR FrameNet Service

Use NLR Framenet, since a majority of Cluster D aggregates can connect with it.
Pre-configured paths via NLR FrameNet, regionals and campus networks.
A "Cluster D Backbone Aggregate" created to setup and assign these paths to experimenters.
"Cluster D Backbone Aggregate" would have one or more pre-configured point-to-point paths between all (or perhaps a subset) of the Cluster D endpoint aggregates. These would be assigned to an experimenter upon request. Each path would have a specific identifier (e.g., port/VLAN tag) at the interface to the endpoint aggregate.
Possible extension: Use SHERPA, and possibly tools for regional networks, to dynamically add paths when more are required.
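The proposed "Cluster D Backbone Aggregate" amounts to a small pool manager over pre-configured paths. A minimal sketch, assuming each path is identified at each endpoint aggregate by a (port, VLAN tag) pair; the class, site names, and port identifiers below are hypothetical illustrations, not an actual ORCA or GENI interface:

```python
class BackboneAggregate:
    """Pool of pre-configured pt-to-pt FrameNet paths, handed out on request."""

    def __init__(self):
        self.free = {}      # (site_a, site_b) -> list of available path descriptors
        self.assigned = []  # (path descriptor, experimenter)

    def add_path(self, site_a, site_b, path):
        # path: {site: (switch port, VLAN tag)} at each endpoint aggregate
        key = tuple(sorted((site_a, site_b)))
        self.free.setdefault(key, []).append(path)

    def request(self, site_a, site_b, experimenter):
        key = tuple(sorted((site_a, site_b)))
        if not self.free.get(key):
            return None  # pool exhausted; the extension above would invoke Sherpa here
        path = self.free[key].pop()
        self.assigned.append((path, experimenter))
        return path

ba = BackboneAggregate()
ba.add_path("BBN", "RENCI", {"BBN": ("ge-0/1", 533), "RENCI": ("te-2/3", 533)})
p = ba.request("RENCI", "BBN", "experimenter-1")
print(p["BBN"])                                      # -> ('ge-0/1', 533)
print(ba.request("BBN", "RENCI", "experimenter-2"))  # -> None (pool empty)
```

The experimenter only ever sees the (port, tag) identifier at their own aggregate's interface, which is what keeps the backbone complexity hidden per goal (i).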

Make connections at each endpoint aggregate between the endpoint of the backbone path and the designated endpoint aggregate component. Typically, this would be done by mapping combinations of VLAN ports/tags in an Ethernet switch dedicated to the endpoint aggregate.

Thus, ordering of VLAN connections is likely to proceed from the backbone out to the endpoints, as in the 7/7/09 demo by the ORCA/BEN project.

Used Sherpa to pre-configure VLANs through NLR
And then mapped VLAN tags near the endpoint nodes.
See http://groups.geni.net/geni/attachment/wiki/ORCABEN/071509c%20%20ORCA_BEN%20demo.pdf Figures 6-2, 6-3 and 6-4.

Similarly, the suggested approach to connectivity specification at http://groups.geni.net/syseng/wiki/RspecConnectivity RSpec Connectivity, was: “we give the backbone priority in selecting a network's VLAN ID and aggregates must map the ID within their local network by whatever means is most convenient for them. This way, the experimenter need only know what backbone the aggregate is connected to, and can design the network from there.”
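The "backbone picks the ID, aggregates map it locally" rule above reduces to programming one tag-translation entry at each aggregate's edge port. A sketch under that assumption (function name is hypothetical; real switches would take the resulting pair as a translation rule):

```python
def build_local_mapping(backbone_tag, local_free_tags):
    """Return (backbone_tag, local_tag) for an edge-port translation rule.

    The backbone has priority in selecting the VLAN ID; the aggregate
    maps it onto whatever tag is free on its own switch, translating
    only when the backbone's choice collides locally.
    """
    if backbone_tag in local_free_tags:
        return (backbone_tag, backbone_tag)  # no translation needed
    if not local_free_tags:
        raise RuntimeError("no free local tags at this aggregate")
    return (backbone_tag, min(local_free_tags))

print(build_local_mapping(533, {100, 533, 900}))  # -> (533, 533)
print(build_local_mapping(533, {100, 900}))       # -> (533, 100)
```

Under this scheme the experimenter only needs to know which backbone the aggregate connects to, as the RSpec Connectivity page suggests.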

5.3.5 Second Step: Use iGENI Crossconnect to Interconnect Backbones and to Connect to International Testbeds

The second phase will use the iGENI crossconnect to interconnect backbones, and to connect endpoint aggregates to international testbeds, as shown in Figure 6.

Figure 6, Use of iGENI Crossconnect to Interconnect Backbones and to Connect to International Testbeds

The iGENI (Starlight) crossconnect service includes the Starlight L2 crossconnect service in Chicago, which connects to multiple backbone networks, including NLR FrameNet, Internet2 GENI WAVE and NLR C-Wave, and to multiple international testbeds, including testbeds in Japan, Korea, South America and Europe.
See http://www.startap.net/starlight/NETWORKS/ for more detail on Starlight facility
See http://www.startap.net/starlight/ENGINEERING/SL-TransLight_5.06.pdf for more detail on Starlight GLIF GOLE May, 2006
The Starlight crossconnect service should be able to bridge connections between multiple backbone networks, to enable connection of:

Cluster D endpoint aggregate via NLR FrameNet
Aggregate from another cluster, e.g., a Cluster C (ProtoGENI) aggregate, via Internet2 GENI WAVE

The Starlight crossconnect service should be able to bridge connections between a backbone network and an international testbed, to enable connection of:

Cluster D endpoint aggregate via NLR FrameNet
Aggregate from another cluster, e.g., a Cluster C (ProtoGENI) aggregate, via Internet2 GENI WAVE
An international testbed
