

OpenFlow Load Balancer Tutorial


2. Implement a Load Balancing OpenFlow Controller

2.1 Run an example Load Balancing OpenFlow Controller

An example OpenFlow Controller that assigns incoming TCP connections to alternating paths based on the total number of flows is already downloaded for you. You can find it (load-balancer.rb) in the home directory on node "Switch".

  • 2.1.1 Log on to node "Switch" and wait until all interfaces are up: run "ifconfig" and make sure eth1, eth2, and eth3 are up and assigned valid IP addresses.
    Start the example Load Balancer by executing the following:
    /opt/trema-trema-f995284/trema run /root/load-balancer.rb
  • 2.1.2 After you started your Load Balancer, you should be able to see the following (Switch id may vary):
    OpenFlow Load Balancer Controller Started!
    Switch is Ready! Switch id: 196242264273477
    This means the OpenFlow Switch is connected to your controller and you can start testing your OpenFlow Load Balancer now.

2.2 Use GIMI Portal to run the experiment and monitor the load balancer

  • 2.2.1 Log on to your LabWiki account on . In the Prepare column, type "OpenFlow"; a list of .rb choices will pop up. Choose any one and replace its whole content with the ruby script HERE.
  • 2.2.2 Log on to node "Switch" and run "ifconfig" to see the IP addresses on each interface.
    • Note: You may not see all interfaces up immediately after node "Switch" is ready; wait a bit longer (about 1 min) and then try "ifconfig" again.
    • Identify the two interfaces that you want to monitor: the interfaces with IP addresses and , respectively. On the LabWiki page, in your ruby script, find the following lines:
      ###### Change the following to the correct interfaces ######
      left = 'eth1'
      right = 'eth3'
      ###### Change the above to the correct interfaces ######
  • 2.2.3 Change eth1 and eth3 to the two corresponding interfaces you found: the one with IP address (connecting to the left path) and the one with IP address (connecting to the right path). Then press the "save" icon on your LabWiki page.
  • 2.2.4 Drag the file icon at the top-left corner of your LabWiki page from the Prepare column and drop it into the Execute column. Fill in the name of your LabWiki experiment (this can be anything) and the name of your slice (this has to be your slice name), and type "true" in the graph box to enable graphs. Then press the "Start Experiment" button.
  • Note: Do not start another experiment (i.e., drag and drop the file icon in LabWiki and press "Start Experiment") before your current experiment is finished.

(Optional) 2.3 Fetch experimental results from your iRods account

  • 2.3.1 Log in to your iRods account on , using "" as Host/IP and "1247" as Port.
  • 2.3.2 Download your experimental results from your user directory under /geniRenci/home/

2.4 Change link parameters of left path using "ovs-vsctl" and repeat the experiment

  • 2.4.1 Log on to node "left" and change the link capacity for the interface with IP address "" (use "ifconfig" to find the correct interface; here we assume eth1 is the interface connecting to node "Switch"):
    ovs-vsctl set Interface eth1 ingress_policing_rate=100000
    The above rate-limits the connection from node "Switch" to node "left" to a bandwidth of 100 Mbps (ingress_policing_rate is specified in kbps).
  • Other ways to change link parameters, e.g., link delay and loss rate using "tc qdisc netem", can be found in Section 4.

2.5 Repeat Experiment with limited bandwidth on Left path

  • 2.5.1 On node "Switch", press Ctrl+C to kill your Load Balancer process.
  • 2.5.2 On node "Switch", use the following command to disconnect the OpenFlow Switch from the controller:
    ovs-vsctl del-controller br0
  • 2.5.3 On node "Switch", start your Load Balancer using the following command:
    /opt/trema-trema-f995284/trema run /root/load-balancer.rb
  • 2.5.4 Start a new command-line window and log on to node "Switch". Use the following command to connect the OpenFlow Switch to the controller (the console window that runs your controller should display "Switch is Ready!" when the switch is connected):
    ovs-vsctl set-controller br0 tcp: ptcp:6634:
  • 2.5.5 Go back to your LabWiki web page, drag and drop the file icon and repeat the experiment, as described in section 2.2.4, using a different experiment name (the slice name should stay the same).


  • Did you see any difference in the graphs plotted on LabWiki, compared with the graphs from the first experiment? Why?
  • Check the output of the Load Balancer on node "Switch": how many flows are directed to the left path and how many to the right path? Why?
  • To answer the above question, you need to understand the Load Balancing controller. Check out the "load-balancer.rb" file in your home directory on node "Switch". Check Section 3.1 for hints/explanations about this OpenFlow Controller.

2.6 Modify the OpenFlow Controller to balance throughput among all the TCP flows

  • You need to calculate the average per-flow throughput observed on both the left and right paths in function "stats_reply" in your load-balancer.rb
  • In function "decide_path", change the path decision based on the calculated average per-flow throughput: forward the flow onto the path with the higher average per-flow throughput. (Why? TCP tries its best to consume the whole available bandwidth, so higher throughput means the network is not congested)
  • If you do not know where to start, check the hints in Section 3.1.
    • If you really do not know where to start after reading the hints, download the answer directly from load-balancer-solution.rb.
    • Save the above solution in your home directory then re-do the experiment on LabWiki. Note: you need to change your LabWiki script at line 185 to use the correct Load Balancing controller (e.g., if your controller is "load-balancer-solution.rb", you should run "/opt/trema-trema-f995284/trema run /root/load-balancer-solution.rb > /tmp/lb.tmp")
  • Redo the experiment using your new OpenFlow Controller following steps in Section 2.5, check the graphs plotted on LabWiki as well as the controller's log on node "Switch" at /tmp/lb.tmp and see the difference.
  • When your experiment is done, you need to stop the Load Balancer:
    • On node "Switch", use the following command to disconnect the OpenFlow Switch from the controller:
      ovs-vsctl del-controller br0
    • On node "Switch", press "Ctrl" and "c" key to kill your Load Balancer process on node "Switch"

2.7 Automate your experiment using LabWiki

  • 2.7.1 Add code in your LabWiki script to automate starting and stopping your OpenFlow Controller:
    • Go back to your LabWiki page and uncomment the script from line 184 to line 189 to start your OpenFlow Controller automatically from LabWiki
    • Uncomment the script from line 205 to line 209 to stop your OpenFlow Controller automatically from LabWiki
  • 2.7.2 On your LabWiki web page, drag and drop the file icon and repeat the experiment, as described in section 2.2.4, using a different experiment name (the slice name should stay the same).
  • If you have more time or are interested in trying out things, go ahead and try section 1.9. The tutorial is over now and feel free to ask questions :-)

2.8 Try more experiments using different kinds of OpenFlow Load Balancers

  • You can find more load balancers from
  • To try out any one of them, follow the steps:
    • At the home directory on node "Switch", download the load balancer you want to try out, e.g.,
      wget /root/
    • Change your LabWiki code at line 185 to use the correct OpenFlow controller.
    • On LabWiki, drag and drop the "File" icon and re-do the experiment as described in section 2.2.4
  • Some explanations about the different load balancers:
    • "load-balancer-random.rb" is the load balancer that picks path randomly: each path has 50% of the chance to get picked
    • "load-balancer-roundrobin.rb" is the load balancer that picks path in a round robin fashion: right path is picked first, then left path, etc.
    • Load balancers that begin with "load-balancer-bytes" picks path based on the total number of bytes sent out to each path: the one with fewer bytes sent out is picked
      • "load-balancer-bytes-thread.rb" sends out flow stats request in function "packet_in" upon the arrival of a new TCP flow and waits until flow stats reply is received in function "stats_reply" before a decision is made. As a result, this balancer gets the most up-to-date flow stats to make a decision. However, it needs to wait for at least the round-trip time from the controller to the switch (for the flow stats reply) before a decision can be made.
      • "load-balancer-bytes-auto-thread.rb" sends out flow stats request once every 5 seconds in a separate thread, and makes path decisions based on the most recently received flow stats reply. As a result, this balancer makes path decisions based on some old statistics (up to 5 seconds) but reacts fast upon the arrival of a new TCP flow (i.e., no need to wait for flow stats reply)
    • Load balancers that begin with "load-balancer-flows" picks path based on the total number of flows sent out to each path: the one with fewer flows sent out is picked
    • Load balancers that begin with "load-balancer-throughput" picks path based on the total throughput sent out to each path: the one with more throughput is picked

3. Hints / Explanation

3.1 About the OpenFlow controller load-balancer.rb

  • Trema web site:
  • Trema ruby API document:
  • Functions used in our tutorial:
    • start: is the function that will be called when the OpenFlow Controller is started. In our case, we read the file /tmp/portmap and figure out which OpenFlow port points to which path
    • switch_ready: is the function that will be called each time a switch connects to the OpenFlow Controller. In our case, we allow all non-TCP flows to pass (including ARP and ICMP packets) and send new inbound TCP flows to the controller.
    • packet_in: is the function that will be called each time a packet arrives at the controller. In our case, we send out a flow_stats_request to get the current statistics about each flow, then wait for the latest TCP flow stats; we create another thread to wait for the stats reply so that the controller is not blocked. It calls "decide_path()" to get a path decision.
    • stats_reply: is the function that will be called when the OpenFlow Controller receives a flow_stats_reply message from the OpenFlow Switch. In our case, we update the flow statistics and then signal the thread created in "packet_in" to continue.
    • send_flow_mod_add(): is the function that you should use to add a flow entry into an OpenFlow Switch.
    • decide_path(): is the function that makes path decisions. It returns the path choices based on flow statistics.
  • The Whole Process: Upon the arrival of a new TCP flow, the OpenFlow controller sends out a "FlowStatsRequest" message to the OpenFlow switch. The OpenFlow switch will reply with statistics information about all flows in its flow table. This flow statistics message will be fetched by the "stats_reply" function in the OpenFlow controller implemented by the user on node "Switch". Based on the statistics, experimenters can apply their own policy on which path to choose in different situations. The FlowStatsReply message is in the following format:
      :length => 96,
      :table_id => 0,
      :match =>
      :duration_sec => 10,
      :duration_nsec => 106000000,
      :priority => 0,
      :idle_timeout => 0,
      :hard_timeout => 0,
      :cookie => 0xabcd,
      :packet_count => 1,
      :byte_count => 1,
      :actions => [ ]
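To illustrate how a stats_reply handler can consume records in this format, here is a self-contained sketch that aggregates byte counts per path. Plain Ruby hashes stand in for Trema's FlowStatsReply objects, and the port-to-path mapping plus the :out_port field are assumptions for this example (the real controller derives its mapping from /tmp/portmap).

```ruby
# Aggregate FlowStatsReply-style records into per-path byte totals.
# PORT_TO_PATH and the :out_port field are illustrative assumptions;
# the real controller builds its mapping from /tmp/portmap.
PORT_TO_PATH = { 1 => :left, 2 => :right }

def aggregate_byte_counts(replies)
  totals = { left: 0, right: 0 }
  replies.each do |reply|
    path = PORT_TO_PATH[reply[:out_port]]
    next unless path                     # ignore ports not on either path
    totals[path] += reply[:byte_count]
  end
  totals
end
```

A decide_path implementation can then compare totals[:left] against totals[:right] to choose the less-loaded path.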

3.2 About The Rspec file OpenFlowLBExo.rspec

  • The Rspec file describes the topology we showed earlier--each node is assigned a certain number of interfaces with pre-defined IP addresses
  • Some of the nodes are loaded with software and post-boot scripts. We will take node "Switch" as an example, since it is the most complicated one.
    • The following section in the Rspec file for node "Switch":
        <install url="" install_path="/"/>
      means it is going to download that tar ball from the specified URL and extract to directory "/"
    • The following section in the Rspec file for node "Switch":
        <execute shell="bash" command="/tmp/ $sliceName $self.Name() ; /tmp/of-topo-setup/lb-setup"/>
      names the post-boot script that ExoGENI is going to run for you after the nodes are booted.
  • More information about "/tmp/": It is a "hook" to the LabWiki interface. Experimenter run this so that LabWiki knows the name of the slice and the hostname of the particular node that OML/OMF toolkits are running on.
  • More information about "/tmp/of-topo-setup/lb-setup": "lb-setup" is to setup the load balancing switch. The source code as well as explanation is as follows:
    /tmp/of-topo-setup/prep-trema       # install all libraries for trema
    /tmp/of-topo-setup/ovs-start           # create ovs bridge
    cp /usr/bin/trace-oml2 /usr/bin/trace        # a hack to the current LabWiki --> needs to be fixed
    cp /usr/bin/nmetrics-oml2 /usr/bin/nmetrics       # a hack to the current LabWiki --> needs to be fixed
    # download the load balancing openflow controller source code to user directory
    wget -O /root/load-balancer.rb
    # wait until all interfaces are up, then fetch the mapping from interface name to its ip/MAC address and save this info in a file /tmp/ifmap
    # add port to the ovs bridge
    /tmp/of-topo-setup/find-interfaces $INTERFACES | while read iface; do
        ovs-vsctl add-port br0 $iface < /dev/null
    done
    # create the port map and save it to /tmp/portmap
    ovs-ofctl show tcp: \
        | /tmp/of-topo-setup/ovs-id-ports \
        > /tmp/portmap

3.3 About the GIMI script you run on LabWiki

  • Line 1 to Line 128: the definition of oml trace and oml nmetrics library. It basically defines the command line options for oml2-trace and oml2-nmetrics, as well as the output (the monitoring data that is going to be stored into the oml server)
    • users are not supposed to modify them
    • the definition here is not the same as what is provided by the latest OML2 2.10.0 library, because there is a version mismatch between the OMF that LabWiki is using and the OML2 toolkit that we are using. It is a temporary hack for now --> to be fixed
    • we added the definition of option "--oml-config" for trace app (Line 27-28) so that oml2-trace accepts configuration files:
      app.defProperty('config', 'config file to follow', '--oml-config',
                      :type => :string, :default => '"/tmp/monitor/conf.xml"')
  • Line 134 to Line 137: the user defines the monitoring interfaces here. In our case, we want to monitor the two interfaces on node "Switch": the one that connects to the left path (with IP ) and the one that connects to the right path (with IP ).
  • Line 139 to Line 169: defines on which node the user wants to run which monitoring app; and the "display graph" option.
    • group "Monitor" monitors the left path statistics using nmetrics and trace.
    • group "Monitor1" monitors the right path statistics using nmetrics and trace.
    • To monitor throughput, we used oml2-trace with the "--oml-config" option, which points to the configuration file we created at /tmp/monitor/conf.xml. That file simply sums up tcp_packet_size (in bytes) for each second and saves the result to the OML Server (in a PostgreSQL database):
      <omlc id="switch" encoding="binary">
        <collect url="" name="traffic">
          <stream mp="tcp" interval="1">
            <filter field="tcp_packet_size" operation="sum" rename="tcp_throughput" />
          </stream>
        </collect>
      </omlc>
    • More information about nmetrics and trace can be found here:
  • Line 173 to Line 218: defines the experiment:
    • Line 175-177: starts the monitoring app
    • Line 179-181: starts the TCP receiver (using iperf)
    • Line 183-189: starts the load balancer and connects ovs switch to the load balancer (controller)
    • Line 191-200: starts 20 TCP flows, with a 5-second interval between the start of each flow
    • Line 205-209: stops the load balancer controller, disconnects the ovs switch from the controller, and finishes the experiment
  • Line 217 to Line 234: defines the two graphs we want to plot:
    • The first uses the monitoring data from oml2-nmetrics to display the cumulative number of bytes observed on each of the interfaces;
    • The second graph uses the monitoring results from oml2-trace to display the throughput observed from each of the interfaces.

4. Tips: Debugging an OpenFlow Controller

You will find it helpful to know what is going on inside your OpenFlow controller and its associated switch when implementing these exercises.
This section contains a few tips that may help you out if you are using the Open vSwitch implementation provided with this tutorial. If you are using a hardware OpenFlow switch, your instructor can help you find equivalent commands.
The Open vSwitch installation provided by the RSpec included in this tutorial is located in /opt/openvswitch-1.6.1-F15. You will find Open vSwitch commands in /opt/openvswitch-1.6.1-F15/bin and /opt/openvswitch-1.6.1-F15/sbin. Some of these commands may be helpful to you. If you add these paths to your shell’s $PATH, you will be able to access their manual pages with man. Note that $PATH will not affect sudo, so you will still have to provide the absolute path to sudo; the absolute path is omitted from the following examples for clarity and formatting.

  • ovs-vsctl
    Open vSwitch switches are primarily configured using the ovs-vsctl command. For exploring, you may find the ovs-vsctl show command useful, as it dumps the status of all virtual switches on the local Open vSwitch instance. Once you have some information on the local switch configurations, ovs-vsctl provides a broad range of capabilities that you will likely find useful for expanding your network setup to more complex configurations for testing and verification. In particular, the subcommands add-br, add-port, and set-controller may be of interest.
  • ovs-ofctl
    The switch host configured by the given rspec listens for incoming OpenFlow connections on localhost port 6634. You can use this to query the switch state using the ovs-ofctl command. In particular, you may find the dump-tables and dump-flows subcommands useful. For example, sudo ovs-ofctl dump-flows tcp: will output lines that look like this:
    cookie=0x4, duration=6112.717s, table=0, n_packets=1, n_bytes=74, idle_age=78, priority=5,tcp,nw_src= actions=CONTROLLER:65535
    This indicates that any TCP segment with source IP in the subnet should be sent to the OpenFlow controller for processing, that it has been 78 seconds since such a segment was last seen, that one such segment has been seen so far, and the total number of bytes in packets matching this rule is 74. The other fields are perhaps interesting, but you will probably not need them for debugging. (Unless, of course, you choose to use multiple tables — an exercise in OpenFlow 1.1 functionality left to the reader.)
  • Unix utilities
    You will want to use a variety of Unix utilities, in addition to the tools listed in ExerciseLayout, to test your controllers. The standard ping and /usr/sbin/arping tools are useful for debugging connectivity (but make sure your controller passes ICMP ECHO REQUEST and REPLY packets and ARP traffic, respectively!), and the command netstat -an will show all active network connections on a Unix host; the TCP connections of interest in this exercise will be at the top of the listing. The format of netstat output is out of the scope of this tutorial, but information is available online and in the manual pages.
  • Linux netem
    Use the tc command to enable and configure delay and loss rate constraints on the outgoing interfaces for traffic traveling from the OpenFlow switch to the Aggregator node. To configure a path with a 20 ms delay and 10% loss rate on eth2, you would issue the command:
    sudo tc qdisc add dev eth2 root handle 1:0 netem delay 20ms loss 10%
    Use the "tc qdisc change" command to reconfigure existing links, instead of "tc qdisc add".


Next: Teardown Experiment
