Changes between Version 6 and Version 7 of GENIEducation/SampleAssignments/OpenFlowAssignment/ExerciseLayout/Execute


Timestamp: 05/20/13 11:42:59
Author: shuang@bbn.com
  • GENIEducation/SampleAssignments/OpenFlowAssignment/ExerciseLayout/Execute

'''3.3 Load Balancing''' [[BR]]
Load balancing in computer networking is the division of network traffic between two or more network devices or paths, typically to achieve higher total throughput than any single path can provide, or to guarantee a specific maximum latency or minimum bandwidth to some or all flows. For this exercise, you will design a load-balancing OpenFlow controller capable of collecting external data and using it to divide traffic between dissimilar network paths so as to achieve full bandwidth utilization with minimal queuing delays. [[BR]]
An interesting property of removing the controller from an OpenFlow device and placing it in an external system of arbitrary computing power and storage capability is that decision-making for network flows based on external state becomes reasonable. Traditional routing and switching devices make flow decisions based largely on local data (or perhaps data from adjacent network devices), but an OpenFlow controller can collect data from servers, network devices, or any other convenient source, and use this data to direct incoming flows. [[BR]]
For the purpose of this exercise, data collection will be limited to the bandwidth and queue occupancy of two emulated network links.

'''Experimental Setup''' [[BR]]
[[Image(OpenFlowAssignment2.png, 50%, nolink)]] [[BR]]
Follow the instructions in the [http://groups.geni.net/geni/wiki/GENIEducation/SampleAssignments/OpenFlowAssignment/ExerciseLayout/DesignSetup DesignSetup] step to build a load balancing experiment topology. Your GENI resources will be configured in a manner similar to the above figure. The various parts of the diagram are as follows:
 - '''Inside and Outside Nodes''': These nodes can be any exclusive ProtoGENI PCs.
 - '''Switch''': The role of the Open vSwitch node may be played either by a software Open vSwitch installation on a ProtoGENI node, or by one of the OpenFlow switches available in GENI; consult your instructor.
 - '''Traffic Shaping Nodes (Left and Right)''': These are Linux hosts with two network interfaces. You can configure netem on the two traffic shaping nodes to have differing characteristics; the specific values don't matter, as long as they are reasonable. (No slower than a few hundred kbps, no faster than a few tens of Mbps, with 0-100 ms of delay, would be a good guideline.) Use several different bandwidth combinations as you test your load balancer.
 - '''Aggregator''': This node is a Linux host running Open vSwitch with a switch controller that will cause TCP connections to “follow” the decisions made by your OpenFlow controller on the Switch node.

'''Linux netem''' [[BR]]
Use the ''tc'' command to enable and configure delay and bandwidth constraints on the outgoing interfaces for traffic traveling from the OpenFlow switch to the Aggregator node. To configure a path with 20 Mbps bandwidth and a 20 ms delay on eth2, you would issue the following commands:
{{{
# netem root qdisc: add 20 ms of delay to traffic leaving eth2
sudo tc qdisc add dev eth2 root handle 1:0 netem delay 20ms
# token bucket filter under netem: cap the rate at 20 Mbps
sudo tc qdisc add dev eth2 parent 1:0 tbf rate 20mbit buffer 20000 limit 16000
}}}
See the ''tc'' and ''tc-tbf'' manual pages for more information on configuring tc token bucket filters as in the second command line. Use the ''tc qdisc change'' command, instead of ''tc qdisc add'', to reconfigure existing links. [[BR]]
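For example, to reconfigure the link above to 50 ms of delay and a 10 Mbps rate (illustrative values only), something like the following should work:
{{{
sudo tc qdisc change dev eth2 root handle 1:0 netem delay 50ms
sudo tc qdisc change dev eth2 parent 1:0 tbf rate 10mbit buffer 20000 limit 16000
}}}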
The outgoing links in the provided lb.rspec are addressed as 192.168.4.1 and 192.168.5.1 for left and right, respectively.

'''Balancing the Load''' [[BR]]
The goal of your OpenFlow controller will be to achieve full bandwidth utilization, with minimal queuing delays, of the two links between the OpenFlow switch and the Aggregator host. To accomplish this, your OpenFlow switch will intelligently divide TCP flows between the two paths. The intelligence for this decision will come from bandwidth and queuing status reports from the two traffic shaping nodes representing the alternate paths. [[BR]]
When the network is lightly loaded, flows may be directed toward either path, as neither path exhibits queuing delays and both paths are largely unloaded. As network load increases, however, your controller should direct flows toward the least loaded fork in the path, as measured by occupied bandwidth for links that are not yet near capacity, and by queue depth for links that are near capacity. [[BR]]
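One way to express that decision rule is sketched below in Ruby. The field names (:occupied, :capacity, :queue) and the 90% threshold are illustrative assumptions, not part of the assignment code:
{{{
# Sketch of a path scorer: rank under-capacity links by fractional
# bandwidth occupancy, and near-capacity links by queue depth (scaled
# so they always score worse than links with headroom).
NEAR_CAPACITY = 0.9  # illustrative threshold

def pick_path(paths)
  paths.min_by do |p|
    utilization = p[:occupied].to_f / p[:capacity]
    utilization > NEAR_CAPACITY ? 1.0 + p[:queue] : utilization
  end
end

# Example: a loaded 20 Mbps link with 40 queued packets loses to a
# lightly used 10 Mbps link:
#   pick_path([{ occupied: 19e6, capacity: 20e6, queue: 40 },
#              { occupied:  3e6, capacity: 10e6, queue:  0 }])
}}}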
Because TCP traffic is bursty and unpredictable, your controller will not be able to perfectly balance the flows between these links. However, as more TCP flows are combined on the links, their combined congestion control behaviors will allow you to utilize the links to near capacity, with queuing delays that are roughly balanced. Your controller need not re-balance flows that have previously been assigned, but you may do so if you like. [[BR]]
The binding of OpenFlow port numbers to logical topology links can be found in the file /tmp/portmap on the switch node when the provided RSpec boots. It consists of three lines, each containing one logical link name (left, right, and outside) and an integer indicating the port on which the corresponding link is connected. You may use this information in your controller configuration if it is helpful. [[BR]]
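For illustration, /tmp/portmap might contain something like the following; the actual port numbers depend on how your sliver is wired:
{{{
left 1
right 2
outside 3
}}}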
You will find an example OpenFlow controller that arbitrarily assigns incoming TCP connections to alternating paths in the file [http://www.gpolab.bbn.com/experiment-support/OpenFlowExampleExperiment/load-balancer.rb load-balancer.rb]. This simple controller can be used as a starting point for your controller if you desire. Examining its behavior may also prove instructive; you should see that its effectiveness at achieving the assignment goals falls off as the imbalance between the two link capacities or delays grows.

'''Gathering Information''' [[BR]]
The information you will use to inform your OpenFlow controller about the state of the two load-balanced paths will be gathered from the traffic shaping hosts. This information can be parsed out of the file /proc/net/dev, which contains a line of counters for each interface on the machine, and out of the output of the ''tc -p qdisc show'' command, which displays the number of packets in the token bucket queue. As TCP connections take some time to converge on a stable bandwidth utilization, you may want to collect these statistics once every few seconds, and smooth the values you receive over the intervening time periods. [[BR]]
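A minimal sketch of both measurements in Ruby follows; the /proc/net/dev field position is standard, but the ''tc'' output format varies by version, so adjust the pattern to what your nodes actually print:
{{{
# Transmitted-byte counter for an interface, read from /proc/net/dev.
# After the "iface:" prefix come eight receive counters; the ninth
# field is transmitted bytes. Sample twice and divide the difference
# by the interval to estimate occupied bandwidth.
def tx_bytes(iface)
  File.readlines('/proc/net/dev').each do |line|
    name, counters = line.split(':', 2)
    next unless counters && name.strip == iface
    return counters.split[8].to_i
  end
  nil
end

# Queued packet count, assuming tc reports a line like "backlog 0b 5p".
def queued_packets(iface)
  `tc -p qdisc show dev #{iface}`[/backlog \S+ (\d+)p/, 1].to_i
end
}}}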
You may find the file /tmp/ifmap on the traffic shaping nodes useful. It is created at system startup, and identifies the inside- and outside-facing interfaces with lines such as:
{{{
inside eth2
outside eth1
}}}
The first word on each line is the “direction” of the interface: toward the inside or the outside of the network diagram. The second is the interface name as found in ''/proc/net/dev''. [[BR]]
You are free to communicate these network statistics from the traffic shaping nodes to your OpenFlow controller in any fashion you like. You may want to use a web service, or transfer the data via an external daemon and query a statistics file from the controller. Keep in mind that flow creation decisions need to be made rather quickly, to prevent retransmissions on the connecting host. [[BR]]

'''Hints''' [[BR]]
 - Remember that the TCP control loop is rather slow: on the order of several round trip times for the TCP connection. This means your load balancing control loop should be slow.
 - You may wish to review control theory, as well as TCP congestion control and avoidance principles.
 - Without rebalancing, “correcting” a severe imbalance may be difficult or impossible. For testing purposes, add flows to the path slowly and wait for things to stabilize.
 - Some thoughts on reducing the flow count when load balancing via OpenFlow can be found in Wang et al. [12] You are not required to implement these techniques, but you may find them helpful.
 - Remember that the default OpenFlow policy for your switch or Open vSwitch instance will likely be to send any packets that do not match a flow spec to the controller, so you will have to handle or discard these packets.
 - You will want your load balancer to communicate with the traffic shaping nodes via their administrative IP addresses, available in the slice manifest.
 - If packet processing on the OpenFlow controller blocks for communication with the traffic shaping nodes, TCP performance may suffer. Use Ruby threads (require 'thread', ''Thread'', and ''Mutex'') to fetch load information in a separate thread, as in the sketch after this list.
 - The OpenFlow debugging hints from Section 3.1 remain relevant for this exercise.
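A minimal sketch of that threading pattern follows; fetch_stats is a hypothetical helper standing in for however you choose to query the shaping nodes:
{{{
require 'thread'  # Ruby's standard library is 'thread'; it provides Mutex

STATS_LOCK = Mutex.new
@stats = {}  # most recent report from the traffic shaping nodes

# Poll in the background so packet handling never blocks on the network.
Thread.new do
  loop do
    latest = fetch_stats                      # hypothetical helper
    STATS_LOCK.synchronize { @stats = latest }
    sleep 2                                   # every few seconds (see above)
  end
end

# Flow-decision code reads a snapshot instead of touching the network:
def current_stats
  STATS_LOCK.synchronize { @stats.dup }
end
}}}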

= [wiki:GENIEducation/SampleAssignments/OpenFlowAssignment/ExerciseLayout/Finish Next: Teardown Experiment] =