Changes between Version 2 and Version 3 of OpenFlow/CampusTopology


Timestamp: 04/05/11 16:26:40 (14 years ago)
Author: Josh Smift
Comment:


  • OpenFlow/CampusTopology

    v2 v3  
 1   1  [[PageOutline]]
 2   2  
 3      This is an example topology of a campus OpenFlow network, designed to allow experimenters to access the GENI nework core in a variety of ways depending on their needs. In particular, it offers three main options for connecting resources at campuses to inter-campus VLANs:
     3  This is an example topology of a campus OpenFlow network, designed to allow experimenters to access the GENI network core in a variety of ways depending on their needs. In particular, it offers three main options for connecting resources at campuses to inter-campus VLANs:
 4   4  
 5   5   1. Connect directly to one or more pre-provisioned core VLANs, without using any campus OpenFlow resources. This is a very simple option for experiments that don't need to use OpenFlow campus resources at all, and merely want to access the GENI network core.
 6   6  
 7       2. Use OpenFlow to connect to one or more pre-provisioned core VLANs, via a cross-connect cable that translates from a campus OpenFlow VLAN onto the core VLANs. This is a fairly simple option for experiments that want to use OpenFlow campus resources, and can use existing core VLANS.
     7   2. Use OpenFlow to connect to one or more pre-provisioned core VLANs, via a cross-connect cable that translates from a campus OpenFlow VLAN onto the core VLANs. This is a fairly simple option for experiments that want to use OpenFlow campus resources, and can use existing core VLANs.
 8   8  
 9   9   3. Use OpenFlow to connect to any core VLAN, by having OpenFlow configure the switch to do VLAN translation. This is a more complicated option for experiments that want to use OpenFlow campus resources, and need to use VLANs that aren't provisioned with a physical cross-connect for whatever reason (e.g. large numbers of VLANs, dynamically provisioned VLANs, etc).
     
61  61   * The transvl controller can insert a flow rule to handle the translation, so that not every packet has to go to the controller; but this sort of rewriting operation is typically done in the slow path on the switch, rather than at line speed. This can have a significant performance impact, so this approach is more suitable for experiments that don't have high performance requirements. (A sketch of such a rule appears below.)
62  62  
63       * Some switch firmware will reject the tagged packets coming in on port 8, before the transvl controller sees them. In particular, the HP OpenFlow firmare and NEC Product firmware don't seem to permit this configuration; the NEC Prototype firmware does.
    63   * Some switch firmware will reject the tagged packets coming in on port 8, before the transvl controller sees them. In particular, the HP OpenFlow firmware and NEC Product firmware don't seem to permit this configuration; the NEC Prototype firmware does.
64  64  
65  65  Its main advantage is that experimenters can translate to and from any VLAN carried on port 7, without requiring any physical provisioning from campus network admins (including in a prospective future scenario in which GENI tools become able to provision new inter-campus VLANs, all the way to port 7).
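
For concreteness, here is a minimal sketch of the kind of VLAN-translation flow rule such a controller could install. It is written against the POX OpenFlow controller's Python API purely as an illustration; it is not the transvl implementation, and the port numbers, VLAN IDs, and traffic direction below are assumptions rather than facts from this page. A real setup would also need a rule for the reverse direction, and remains subject to the slow-path and firmware caveats noted above.

{{{#!python
# Hypothetical sketch: install one VLAN-translation rule when a switch connects.
# Assumed layout (not from this page): tagged campus traffic arrives on port 8
# and should leave on the core trunk port 7 with its VLAN ID rewritten.

from pox.core import core
import pox.openflow.libopenflow_01 as of

CAMPUS_PORT = 8     # assumed ingress port for the campus OpenFlow VLAN
CORE_PORT   = 7     # assumed trunk port carrying the core VLANs
CAMPUS_VLAN = 1750  # example campus VLAN ID
CORE_VLAN   = 3715  # example core VLAN ID

def _handle_ConnectionUp(event):
  # Match tagged packets from the campus side, rewrite the VLAN tag to the
  # core VLAN, and forward them out the core trunk port.
  msg = of.ofp_flow_mod()
  msg.match.in_port = CAMPUS_PORT
  msg.match.dl_vlan = CAMPUS_VLAN
  msg.actions.append(of.ofp_action_vlan_vid(vlan_vid=CORE_VLAN))
  msg.actions.append(of.ofp_action_output(port=CORE_PORT))
  event.connection.send(msg)

def launch():
  # POX component entry point: load this module when starting pox.py.
  core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
}}}

Installing the rule keeps per-packet traffic off the controller, but as noted above the tag rewrite itself may still be handled in the switch's slow path, which is why the performance caveat applies even with a flow rule in place.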
     
67  67  = Hosts =
68  68  
69      Ports 9 and 10 (and so on) are the ports that are connected to the dataplane interfaces on hosts (e.g. MyPLC, ProtoGENI, etc). Their key unusual feature is that they're trunk ports, i.e. they carry multiple tagged VLANs; this requires the hosts that you connect to them to speak 802.1q, aka "VLAN-based subinterfacing". Modern Linux distributions, like Ubuntu and Fedora / Red Hat, do this just fine, with interface names like eth1.1700, eth1.3715, etc. Configuring the hosts' dataplane interfaces to trunk ports is the key ingredient that allows experimenters to control which VLANs their compute slivers actually connect to. We're working on detailed guidelines for how campus resource operators can enable this on their hosts, and how experimters can take advantage of it. ''(FIXME: Replace the previous sentence with a link to a page with more information.)''
    69  Ports 9 and 10 (and so on) are the ports that are connected to the dataplane interfaces on hosts (e.g. MyPLC, ProtoGENI, etc). Their key unusual feature is that they're trunk ports, i.e. they carry multiple tagged VLANs; this requires the hosts that you connect to them to speak 802.1q, aka "VLAN-based subinterfacing". Modern Linux distributions, like Ubuntu and Fedora / Red Hat, do this just fine, with interface names like eth1.1700, eth1.3715, etc. Configuring the hosts' dataplane interfaces to trunk ports is the key ingredient that allows experimenters to control which VLANs their compute slivers actually connect to. We're working on detailed guidelines for how campus resource operators can enable this on their hosts, and how experimenters can take advantage of it. ''(FIXME: Replace the previous sentence with a link to a page with more information.)''
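
The eth1.1700 / eth1.3715 names above are standard Linux 802.1q subinterfaces. As an illustrative sketch only (none of this comes from the wiki page itself), the following shows one way to bring such a subinterface up with iproute2, driven from Python; the parent interface name and VLAN ID are example values, and the commands need root privileges on a host whose switch port carries that VLAN tagged.

{{{#!python
# Hypothetical sketch: create and bring up an 802.1q subinterface (e.g.
# eth1.3715) on a host dataplane interface, assuming iproute2 is installed
# and the script runs with root privileges.

import subprocess

PARENT  = "eth1"  # assumed dataplane interface name
VLAN_ID = 3715    # example VLAN ID carried tagged on the switch trunk port

subif = "%s.%d" % (PARENT, VLAN_ID)

# Equivalent to: ip link add link eth1 name eth1.3715 type vlan id 3715
subprocess.check_call(["ip", "link", "add", "link", PARENT,
                       "name", subif, "type", "vlan", "id", str(VLAN_ID)])

# Equivalent to: ip link set eth1.3715 up
subprocess.check_call(["ip", "link", "set", subif, "up"])
}}}

On most distributions the same configuration can also be made persistent in the usual network configuration files; older systems used the vconfig utility for the same purpose.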