Changes between Version 3 and Version 4 of OpenFlow/DeploymentSummary


Timestamp: 02/05/13 14:00:00
Author: Josh Smift

Legend: lines prefixed with "-" were removed in v4; lines prefixed with "+" were added in v4; unprefixed lines are unchanged.
  • OpenFlow/DeploymentSummary

= GENI !OpenFlow deployment summary =

- This page provides a summary/overview of how !OpenFlow is deployed in GENI (in the mesoscale network).
+ This page provides links to information about how !OpenFlow is deployed in GENI (in the mesoscale network).

- = Switches =
- 
- Sites in GENI have used !OpenFlow switches from HP, NEC, and Pronto. Here's a table of what's currently connected to the mesoscale:
- 
- || '''Site''' || '''Switches'''                 ||
- || BBN        || HP !ProCurve 6600              ||
- || BBN        || NEC IP8800                     ||
- ||            ||                                ||
- || Clemson    || HP !ProCurve J8693A 3500yl-48G ||
- ||            ||                                ||
- || GA Tech    || HP !ProCurve 5400              ||
- ||            ||                                ||
- || Indiana    || HP !ProCurve 3500              ||
- ||            ||                                ||
- || Rutgers    || NEC IP8800                     ||
- ||            ||                                ||
- || Stanford   || NEC IP8800 (3)                 ||
- || Stanford   || HP !ProCurve 6600              ||
- ||            ||                                ||
- || Washington || HP !ProCurve 6600              ||
- ||            ||                                ||
- || Wisconsin  || HP !ProCurve 6600              ||
- || Wisconsin  || HP !ProCurve 5400              ||
- ||            ||                                ||
- || Internet2  || NEC IP8800 (5)                 ||
- ||            ||                                ||
- || NLR        || HP !ProCurve 6600 (5)          ||
- 
- = !FlowVisor =
- 
- Sites typically use !FlowVisor to slice their !OpenFlow switches between multiple experiments. Different sites are running their !FlowVisor on a variety of hardware (including in VMs); Stanford currently recommends that a !FlowVisor server should have at least 3 GB of RAM and at least two CPUs. Fast disks also help, as !FlowVisor (as of 0.8.1) can be I/O intensive. These requirements may increase for larger-scale deployments.
- 
- It's also advantageous to have a robust control network interface to !FlowVisor. Stanford currently recommends two gigabit interfaces, one to communicate with switches ("downstream") and one to communicate with controllers ("upstream"), if your network design permits. If your upstream and downstream traffic need to share an interface, it might be prudent to have a 10 Gb/s NIC.
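
As a concrete illustration of the slicing described above, here is a minimal sketch using !FlowVisor's fvctl command-line tool (0.8.x-era syntax); the slice name, controller address, contact email, DPID, and VLAN below are all hypothetical, and a real deployment may also need authentication options such as a password file.

{{{
# Create a slice whose traffic is sent to an experimenter's controller
# (slice name, controller host/port, and contact email are hypothetical).
fvctl createSlice myslice tcp:controller.example.net:6633 experimenter@example.net

# Delegate flowspace to that slice: all VLAN 1000 traffic on one datapath,
# at priority 100, with read/write (4) permissions
# (the DPID and VLAN are hypothetical).
fvctl addFlowSpace 06:a4:00:12:e2:b8:a5:d0 100 dl_vlan=1000 Slice:myslice=4
}}}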
- 
- = FOAM =
- 
- Sites typically use [wiki:OpenFlow/FOAM FOAM] to provide a GENI Aggregate Manager API interface that allows experimenters to allocate !OpenFlow resources. FOAM is fairly lightweight, and can run on the same server as the !FlowVisor, without additional hardware requirements.
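
As an illustration of that interface, an experimenter could query a site's FOAM aggregate with the GENI omni tool; the aggregate URL here is hypothetical (FOAM conventionally listens on port 3626), and the slice name and RSpec file are placeholders.

{{{
# List the OpenFlow resources that a FOAM aggregate advertises
# (the aggregate URL is hypothetical).
omni.py -a https://foam.example.net:3626/foam/gapi/1 listresources

# Request an OpenFlow sliver by submitting an RSpec for an existing slice
# (slice name and RSpec filename are hypothetical).
omni.py -a https://foam.example.net:3626/foam/gapi/1 createsliver myslice myflowspace.rspec
}}}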
+  * '''Switches''': The various [wiki:GeniAggregate aggregate info pages] have details about which switches are in use at which !OpenFlow aggregates.
+  * '''!FlowVisor''': Sites typically use !FlowVisor to slice their !OpenFlow switches between multiple experiments.
+  * '''FOAM''': Sites typically use [wiki:OpenFlow/FOAM FOAM] to provide a GENI Aggregate Manager API interface that allows experimenters to allocate !OpenFlow resources.