
OpenFlow Load Balancer Tutorial


STEPS FOR EXECUTING EXERCISE

Debugging an OpenFlow Controller

You will find it helpful to know what is going on inside your OpenFlow controller and its associated switch when implementing these exercises. This section contains a few tips that may help you out if you are using the Open vSwitch implementation provided with this tutorial. If you are using a hardware OpenFlow switch, your instructor can help you find equivalent commands.
The Open vSwitch installation provided by the RSpec included in this tutorial is located in /opt/openvswitch-1.6.1-F15. You will find Open vSwitch commands in /opt/openvswitch-1.6.1-F15/bin and /opt/openvswitch-1.6.1-F15/sbin. Some of these commands may be helpful to you. If you add these paths to your shell's $PATH, you will be able to access their manual pages with man. Note that $PATH does not affect sudo, so you will still have to give sudo the absolute path to each command; the absolute paths are omitted from the following examples for clarity and formatting.

  • ovs-vsctl
    Open vSwitch switches are primarily configured using the ovs-vsctl command. For exploring, you may find the ovs-vsctl show command useful, as it dumps the status of all virtual switches on the local Open vSwitch instance. Once you have some information on the local switch configurations, ovs-vsctl provides a broad range of capabilities that you will likely find useful for expanding your network setup to more complex configurations for testing and verification. In particular, the subcommands add-br, add-port, and set-controller may be of interest.
  • ovs-ofctl
    The switch host configured by the given rspec listens for incoming OpenFlow connections on localhost port 6634. You can use this to query the switch state using the ovs-ofctl command. In particular, you may find the dump-tables and dump-flows subcommands useful. For example, sudo ovs-ofctl dump-flows tcp:127.0.0.1:6634 will output lines that look like this:
    cookie=0x4, duration=6112.717s, table=0, n_packets=1, n_bytes=74, idle_age=78, priority=5,tcp,nw_src=10.10.10.0/24 actions=CONTROLLER:65535
    
    This indicates that any TCP segment with source IP in the 10.10.10.0/24 subnet should be sent to the OpenFlow controller for processing, that it has been 78 seconds since such a segment was last seen, that one such segment has been seen so far, and the total number of bytes in packets matching this rule is 74. The other fields are perhaps interesting, but you will probably not need them for debugging. (Unless, of course, you choose to use multiple tables — an exercise in OpenFlow 1.1 functionality left to the reader.)
  • Unix utilities
    You will want to use a variety of Unix utilities, in addition to the tools listed in ExerciseLayout, to test your controllers. The standard ping and /usr/sbin/arping tools are useful for debugging connectivity (but make sure your controller passes ICMP ECHO REQUEST and REPLY packets and ARP traffic, respectively!), and the command netstat -an will show all active network connections on a Unix host; the TCP connections of interest in this exercise will be at the top of the listing. The format of netstat output is out of the scope of this tutorial, but information is available online and in the manual pages.

Exercises

  • Load Balancing -- Files to download: load-balancer.rb
    We will implement a load-balancing OpenFlow controller on the Switch node using Trema. Load balancing in computer networking is the division of network traffic between two or more network devices or paths, typically to achieve higher total throughput than any single path can provide, or to guarantee a maximum latency or a minimum bandwidth to some or all flows. For this exercise, you will design a load-balancing OpenFlow controller capable of collecting flow status data from OpenFlow switches and using it to divide traffic between dissimilar network paths so as to achieve full bandwidth utilization.
    An interesting property of removing the controller from an OpenFlow device and placing it in an external system of arbitrary computing power and storage capability is that decision-making for network flows based on external state becomes reasonable. Traditional routing and switching devices make flow decisions based largely on local data (or perhaps data from adjacent network devices), but an OpenFlow controller can collect data from servers, network devices, or any other convenient source, and use this data to direct incoming flows.
    For the purpose of this exercise, data collection will be limited to the flow statistics reported by the Open vSwitch instance on the Switch node. (A minimal controller skeleton is sketched just after this list.)
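
For orientation, the sketch below shows the general shape of a Trema controller: a Ruby class derived from Controller whose methods are called as OpenFlow events arrive, typically launched with trema run load-balancer.rb. The class name, comments, and log message are illustrative, not taken from the provided load-balancer.rb.

class LoadBalancer < Controller
  # Called once when the controller starts up.
  def start
    info "Load balancer started"
  end

  # Called whenever a packet misses the flow table and is sent to us;
  # this is where a load balancer decides on a path and installs flow
  # entries for the new TCP connection.
  def packet_in(datapath_id, message)
  end

  # Called when the switch answers a statistics request (see below).
  def stats_reply(datapath_id, message)
  end
end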

Experimental Setup

Follow the instructions in the DesignSetup step to build a load balancing experiment topology. Your GENI resources will be configured in a manner similar to the figure above. The various parts of the diagram are as follows:

    • Inside and Outside Nodes: These nodes can be any ExoGENI Virtual Nodes.
    • Switch: The role of the Open vSwitch node may be played either by a software Open vSwitch installation on an ExoGENI Virtual Node, or by the OpenFlow switches available in GENI — consult your instructor.
    • Traffic Shaping Nodes (Left and Right): These are Linux hosts with two network interfaces. You can configure netem on the two traffic shaping nodes to have differing characteristics; the specific values don’t matter, as long as they are reasonable. Use several different delay/loss combinations as you test your load balancer.
    • Aggregator: This node is a Linux host running Open vSwitch with a switch controller that will cause TCP connections to “follow” the decisions made by your OpenFlow controller on the Switch node. Leave this node alone; you only need to implement the OpenFlow controller on Switch.

Linux netem
Use the tc command to enable and configure delay and loss rate constraints on the outgoing interfaces for traffic traveling from the OpenFlow switch to the Aggregator node. To configure a path with a 20 ms delay and 10% loss rate on eth2, you would issue the command:

sudo tc qdisc add dev eth2 root handle 1:0 netem delay 20ms loss 10%

Use the tc qdisc change command, with otherwise the same syntax as tc qdisc add above, to reconfigure existing links.
The outgoing interfaces in the provided RSpec are addressed 192.168.4.1 and 192.168.5.1 for left and right, respectively.

Balancing the Load
An example OpenFlow controller that arbitrarily assigns incoming TCP connections to alternating paths can be found in load-balancer.rb (if you have already downloaded it, ignore this).
The goal of your OpenFlow controller will be to achieve full bandwidth utilization of the two links between the OpenFlow switch and the Aggregator host. To accomplish this, your controller will intelligently divide TCP flows between the two paths. The intelligence for this decision will come from the flow statistics reported by the Open vSwitch on the Switch node.
The binding of OpenFlow port numbers to logical topology links can be found in the file /tmp/portmap on the switch node when the provided RSpec boots. It consists of three lines, each containing one logical link name (left, right, and outside) and an integer indicating the port on which the corresponding link is connected. You may use this information in your controller configuration if it is helpful.
This simple controller can be used as a starting point for your own if you desire. Examining its behavior may also prove instructive; you should see that its effectiveness at achieving the assignment goals falls off as the imbalance between the two links' capacities or delays grows.
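
If you do use /tmp/portmap, a small helper along the following lines can load it. This is a hypothetical sketch; it assumes each line holds a whitespace-separated link name and port number, as described above.

# Parse /tmp/portmap into a hash such as { "left" => 1, "right" => 2, "outside" => 3 }.
def read_portmap(path = "/tmp/portmap")
  portmap = {}
  File.readlines(path).each do |line|
    name, port = line.split
    portmap[name] = port.to_i
  end
  portmap
end

PORTS = read_portmap   # PORTS["left"], PORTS["right"], PORTS["outside"]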

Gathering Information
The information you will use to inform your OpenFlow controller about the state of the two load-balanced paths is gathered by sending OpenFlow FlowStatsRequest messages from the controller to the switch and collecting the FlowStatsReply messages the switch sends back. For more information about FlowStatsRequest and FlowStatsReply, please refer to http://rubydoc.info/github/trema/trema/master/Trema/FlowStatsRequest and http://rubydoc.info/github/trema/trema/master/Trema/FlowStatsReply.
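
As a sketch of that round trip (using the Trema APIs linked above; the wildcard Match and the log format are illustrative), the controller might send a request from its packet_in handler and receive the statistics in stats_reply:

class LoadBalancer < Controller
  def packet_in(datapath_id, message)
    # Ask the switch for statistics on all flows; an empty Match matches everything.
    send_message datapath_id, FlowStatsRequest.new(:match => Match.new)
  end

  def stats_reply(datapath_id, message)
    # message.stats is an array of FlowStatsReply objects, one per flow.
    message.stats.each do |flow|
      info "bytes=#{flow.byte_count} packets=#{flow.packet_count} duration=#{flow.duration_sec}s"
    end
  end
end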

Question 1: Implement your load-balancer.rb and run it on the Switch node. When a new TCP flow arrives, display the total number of bytes sent along the left and right paths, and forward the new flow onto the path that has transferred fewer bytes.
A sample output is as follows:

[stats_reply]-------------------------------------------
left path: packets 3890, bytes 5879268
right path: packets 7831, bytes 11852922
since there are more bytes going to the right path, let's go *LEFT* for this flow

You can use iperf to generate TCP flows from the outside node to the inside node:

    • On inside, run:
      iperf -s
      
    • On outside, run the following command multiple times, with an interval of several seconds between runs:
      iperf -c 10.10.10.2 -t 600 &
      
      The above command starts a new TCP flow from outside to inside that runs for 600 seconds.

Hints:

    • Remember that the default OpenFlow policy for your switch or Open vSwitch instance will likely be to send any packets that do not match a flow spec to the controller, so you will have to handle or discard these packets (one simple approach is sketched after this list).
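
One simple way to honor this hint is sketched below: flood every non-TCP packet_in so that ARP and ICMP keep working, and let only TCP reach the balancing logic. This is an illustrative approach, not necessarily the one taken by the provided solution.

class LoadBalancer < Controller
  def packet_in(datapath_id, message)
    unless message.tcp?
      # Flood ARP, ICMP, and anything else we are not balancing.
      send_packet_out(
        datapath_id,
        :packet_in => message,
        :actions => ActionOutput.new(:port => OFPP_FLOOD)
      )
      return
    end
    # ... TCP load-balancing logic goes here ...
  end
end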

Process: Upon the arrival of a new TCP flow, the OpenFlow controller should send a FlowStatsRequest message to the OpenFlow switch. The switch will reply with statistics about all flows in its flow table. This statistics message is delivered to the stats_reply handler of the OpenFlow controller you implement on the Switch node. Based on these statistics, you can apply your own policy for choosing a path in different situations. The FlowStatsReply message has the following format:

FlowStatsReply.new(
  :length => 96,
  :table_id => 0,
  :match => Match.new,
  :duration_sec => 10,
  :duration_nsec => 106000000,
  :priority => 0,
  :idle_timeout => 0,
  :hard_timeout => 0,
  :cookie => 0xabcd,
  :packet_count => 1,
  :byte_count => 1,
  :actions => [ ActionOutput.new ]
)

Note: there is some delay in fetching the flow statistics. The OpenFlow controller may not be able to receive any FlowStatsReply message before two flows are established.
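
Putting the pieces together, the comparison step might look like the sketch below. It assumes the PORTS hash from the portmap helper shown earlier, attributes each flow's byte count to a path by looking at its output action, and remembers the lighter path for the next flow; the variable names are illustrative.

def stats_reply(datapath_id, message)
  left_bytes = right_bytes = 0
  message.stats.each do |flow|
    flow.actions.each do |action|
      next unless action.is_a?(ActionOutput)
      left_bytes  += flow.byte_count if action.port == PORTS["left"]
      right_bytes += flow.byte_count if action.port == PORTS["right"]
    end
  end
  info "left path: bytes #{left_bytes}"
  info "right path: bytes #{right_bytes}"
  # Send the next flow down the path that has carried fewer bytes so far.
  @next_path = (left_bytes <= right_bytes) ? "left" : "right"
end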

If you really do not know where to start, you can find the answer HERE

Question 2: Modify your load balancer so that it chooses a path based on the average per-flow throughput observed on each path.

Note: since Trema does not yet support a multi-threaded mode, this simple implementation runs in one thread. As a result, you will experience some delay in fetching the flow statistics (i.e., stats_reply will not be called immediately after a FlowStatsRequest message is sent in the packet_in handler).
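
For Question 2, the per-path figure changes from a total to an average: each flow's throughput is approximately byte_count / duration_sec, averaged over the flows on a path. A sketch, with the helper name and PORTS hash carried over as assumptions from the earlier examples:

# Average per-flow throughput (bytes per second) over the flows leaving port.
def average_throughput(flows, port)
  samples = []
  flows.each do |flow|
    on_path = flow.actions.any? { |a| a.is_a?(ActionOutput) && a.port == port }
    next unless on_path && flow.duration_sec > 0
    samples << flow.byte_count.to_f / flow.duration_sec
  end
  samples.empty? ? 0.0 : samples.reduce(:+) / samples.size
end

In stats_reply you would then compare average_throughput(message.stats, PORTS["left"]) against the same figure for the right path and pick a path according to your policy.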

Next: Teardown Experiment
