Version 3 (modified by Josh Smift, 7 years ago)


GENI OpenFlow deployment summary

This page provides a summary/overview of how OpenFlow is deployed in GENI (in the mesoscale network).


Sites in GENI have used OpenFlow switches from HP, NEC, and Pronto. Here's a table of what's currently connected to the mesoscale:

Site        Switches
BBN         HP ProCurve 6600
Clemson     HP ProCurve J8693A 3500yl-48G
GA Tech     HP ProCurve 5400
Indiana     HP ProCurve 3500
Rutgers     NEC IP8800
Stanford    NEC IP8800 (3)
Stanford    HP ProCurve 6600
Washington  HP ProCurve 6600
Wisconsin   HP ProCurve 6600
Wisconsin   HP ProCurve 5400
Internet2   NEC IP8800 (5)
NLR         HP ProCurve 6600 (5)


Sites typically use FlowVisor to slice their OpenFlow switches between multiple experiments. Different sites are running their FlowVisor on a variety of hardware (including in VMs); Stanford currently recommends that a FlowVisor server should have at least 3 GB of RAM and at least two CPUs. Fast disks also help, as FlowVisor (as of 0.8.1) can be I/O intensive. These requirements may increase for larger scale deployments.
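As a rough sketch of what slicing looks like from the operator's side, the commands below create a slice and delegate some flowspace to it with the fvctl tool. Command names and argument order have changed between FlowVisor releases, so treat this as illustrative only and check `fvctl --help` for your version; the slice name, controller address, and flowspace match are made-up examples.

```shell
# Create a slice bound to an experimenter's OpenFlow controller.
# Slice "alice" and controller host alice-ctrl.example.net are hypothetical.
fvctl createSlice alice tcp:alice-ctrl.example.net:6633 alice@example.net

# Delegate traffic arriving on switch port 1 to that slice at priority 100.
# "all" applies the entry to every datapath FlowVisor manages; "=4" grants
# the slice write permission on matching flows.
fvctl addFlowSpace all 100 in_port=1 Slice:alice=4
```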

It's also advantageous to have a robust control network interface to FlowVisor. Stanford currently recommends two gigabit interfaces, one to communicate with switches ("downstream") and one to communicate with controllers ("upstream"), if your network design permits. If your upstream and downstream traffic must share a single interface, a 10 Gb NIC may be prudent.


Sites typically use FOAM to provide a GENI Aggregate Manager API interface that allows experimenters to allocate OpenFlow resources. FOAM is fairly lightweight, and can run on the same server as FlowVisor without additional hardware requirements.
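From the experimenter's side, a FOAM aggregate is reached with standard GENI AM API tools such as the omni client. A hedged example (the hostname is a made-up placeholder, though 3626 was FOAM's customary listening port, and the slice and RSpec names are hypothetical):

```shell
# Fetch the aggregate's advertisement RSpec, which lists the OpenFlow
# resources available for slicing. Substitute your site's FOAM server.
omni.py listresources -a https://foam.example.net:3626/foam/gapi/1

# Request an OpenFlow sliver for an existing GENI slice, described by a
# request RSpec specifying the desired flowspace.
omni.py createsliver -a https://foam.example.net:3626/foam/gapi/1 myslice myflowspace.rspec
```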