
OpenFlow Load Balancer Tutorial


2. Configure and Initialize Services

2.1 Login to Nodes Switch, Aggregator, Inside and Outside

2.1.1 Get your reservation details

Click on your slice name in the GENI Portal, scroll to the aggregate where you reserved your resources, and click on "Details" next to it. You will see a page listing your reservation details, including login information for each node.

Use the Login information obtained here to login to your nodes using any SSH client.

2.2. Start a naive OpenFlow controller

An example OpenFlow Controller that assigns incoming TCP connections to alternating paths based on total number of flows (round robin) is already downloaded for you. You can find it (load-balancer.rb) in the home directory on node "Switch".
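If you want a feel for how the preloaded controller picks a path before you open it, the core of a round-robin scheme is a simple alternating decision, as in the sketch below. This is only an illustration, not a copy of load-balancer.rb; the names decide_path, @total_flow_count, LEFT_PORT and RIGHT_PORT are assumptions made for this example.

    # Illustrative sketch of round-robin path selection (not the actual
    # load-balancer.rb); LEFT_PORT, RIGHT_PORT and @total_flow_count are
    # assumed names used only in this example.
    LEFT_PORT  = 1
    RIGHT_PORT = 2

    def decide_path
      @total_flow_count ||= 0
      @total_flow_count += 1
      # Odd-numbered TCP flows go to the left path, even-numbered to the right.
      @total_flow_count.odd? ? LEFT_PORT : RIGHT_PORT
    end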

  • 2.2.1 Check that all interfaces are configured: issue /sbin/ifconfig and make sure that eth1, eth2, and eth3 are up and have been assigned valid IP addresses.
  • 2.2.2 Start the example Load Balancer by executing the following:

On the Aggregator Node run

    source  /etc/profile.d/rvm.sh
    trema run /tmp/aggregator/aggregator.rb >& /tmp/trema.run &

On the Switch Node run

    source  /etc/profile.d/rvm.sh
    trema run /root/load-balancer.rb
  • 2.2.3 After you start your Load Balancer, you should see output like the following (the Switch id may vary):
    OpenFlow Load Balancer Controller Started!
    Switch is Ready! Switch id: 196242264273477
    
    This means the OpenFlow Switch is connected to your controller and you can start testing your OpenFlow Load Balancer now.

2.3. Configure LabWiki to orchestrate and monitor your experiment

For this tutorial, a script (loadbalancer.rb) has been preloaded for you. Type its name in the Prepare column; your script should appear. Click on it.

Note: Outside this tutorial, to create a new script, click on the "*" at the top-left of the Prepare column, type a name for the script (e.g., loadbalancer), and save it as an OEDL file. Then enter the name of the script you just created in the Prepare column; it is now ready for editing.

To run iperf from node "Outside" to node "Inside" and observe information about the packets that leave the two interfaces of "Switch", add the script below to a new file, loadbalancer.rb, and click on "save" at the top of the Prepare column.
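The full script is the one preloaded in LabWiki; as background, it is an OEDL experiment description (a Ruby DSL). The outline below is only a rough, hypothetical sketch of what such a script looks like, not the actual preloaded file; the group names, the node names derived from the slice property, and the 600-second run length are assumptions.

    # Rough, hypothetical outline of an OEDL script (NOT the preloaded
    # loadbalancer.rb); names and durations are illustrative only.
    defProperty('slice', 'your_slice_name', 'Name of your slice')

    ###### Change the following to the correct interfaces ######
    left  = 'eth1'
    right = 'eth3'
    ###### Change the above to the correct interfaces ######

    defGroup('Inside',  "inside-#{property.slice}")    # iperf server
    defGroup('Outside', "outside-#{property.slice}")   # iperf client
    defGroup('Switch',  "switch-#{property.slice}")    # monitor 'left'/'right'

    onEvent(:ALL_UP_AND_INSTALLED) do |event|
      info 'Starting iperf traffic and interface monitoring...'
      allGroups.startApplications
      after 600 do
        Experiment.done
      end
    end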

Note: In the Execute window, you should change the slice name in the property bar to match your slice name before you click on "Start Experiment".

After your experiment is done, stop the iperf servers manually. To do this, log in to nodes "Outside" and "Inside" and execute the following steps on both nodes:

ps aux | grep "iperf"

root      4728  0.0  1.1 273368  6044 ?        Ssl  19:13   0:00 /usr/bin/iperf_oml2 -s -p 6001 --oml-config /tmp/51449a37-ab3e-43a3-a872-37931c7785ee-1389294789.xml

Then kill the iperf process using the process id from the output above:

kill -9 <process_id>     e.g., kill -9 4728

  • 2.3.2 On the terminal where you are logged in on node "Switch", rerun "ifconfig" to see the IP addresses on each interface.
    All interfaces may not be up immediately after node "Switch" is ready; wait a little longer (about 1 minute) and try "ifconfig" again.
  • 2.3.3 Identify the two interfaces that you want to monitor: the interfaces with IP addresses 192.168.2.1 (left) and 192.168.3.1 (right), respectively. On the LabWiki page, in your ruby script, find the following lines:
    ###### Change the following to the correct interfaces ######
    left = 'eth1'
    right = 'eth3'
    ###### Change the above to the correct interfaces ######
    
  • 2.3.4 Change eth1 and eth3 to the corresponding two interfaces you found with IP addresses 192.168.2.1 (the interface that connects to the left path) and 192.168.3.1 (the interface that connects to the right path) and press the "save" icon on your LabWiki page.

3. Run your experiment

3.1 Start your experiment with existing configuration

  • 3.1.1 Drag the file icon at the top-left corner of the Prepare column on your LabWiki page and drop it into the Execute column. Fill in a name for your LabWiki experiment (anything without spaces; it is only there to help you keep track of the experiments you run), select your project from the drop-down list, select your slice from the list, and type "true" in the graph box to enable graphs. You can also create an experiment context if you wish to save each run of the experiment separately (click on "Add Context" in the top right-hand corner of the page). Then press the "Start Experiment" button.
  • 3.1.2 When your experiment is finished, turn off your controller and disconnect the switch from your controller:
    • On node "Switch", press "Ctrl" and "c" key to kill your Load Balancer process on node "Switch"
    • On node "Switch", use the following command to disconnect the OpenFlow Switch from the controller:
      ovs-vsctl del-controller br0
      


Do not start another experiment (i.e., drag and drop the file icon in LabWiki and press "Start Experiment") before your current experiment is finished.

3.2 Run the experiment in paths with different bandwidth

  • 3.2.1 Log on to node "left" (use the readyToLogin.py script) and change the link capacity for the interface with IP address "192.168.2.2" (use "ifconfig" to find the correct interface, here we assume eth1 is the interface connecting to node "Switch"):
    ovs-vsctl set Interface eth1 ingress_policing_rate=10000
    
    The ingress_policing_rate value is given in kbps, so the above rate-limits the connection from node "Switch" to node "left" to a bandwidth of about 10Mbps.
  • Other ways to change link characteristics, e.g., link delay and loss rate using "tc qdisc netem", can be found in Appendix E.
  • 3.2.2 On node "Switch", start your Load Balancer using the following command:
    /opt/trema-trema-f995284/trema run /root/load-balancer.rb
    
  • 3.2.3 Start a new terminal, log onto node "Switch", use the following command to connect the OpenFlow Switch to the controller (the console window that runs your controller should display "Switch is Ready!" when the switch is connected):
    ovs-vsctl set-controller br0 tcp:127.0.0.1 ptcp:6634:127.0.0.1
    
  • 3.2.4 Go back to your LabWiki web page, drag and drop the file icon and repeat the experiment, as described in section 3.1, using a different experiment name (the slice name should stay the same).
  • 3.2.5 When your experiment is finished, turn off your controller and disconnect switch from your controller:
    • On node "Switch", press "Ctrl" and "c" key to kill your Load Balancer process on node "Switch"
    • On node "Switch", use the following command to disconnect the OpenFlow Switch from the controller:
      ovs-vsctl del-controller br0
      

Questions

  • Did you see any difference in the graphs plotted on LabWiki compared with the graphs from the first experiment? Why?
  • Check the output of the Load Balancer on node "Switch": how many flows are directed to the left path and how many to the right path? Why?
  • To answer the above question, you need to understand the Load Balancing controller. Check out the "load-balancer.rb" file in your home directory on node "Switch". Check Appendix B for hints/explanations about this OpenFlow Controller.

3.3 Modify the OpenFlow Controller to balance throughput among all the TCP flows

  • You need to calculate the average per-flow throughput observed on both the left and the right path. The modifications go in the function "stats_reply" in load-balancer.rb.
  • In the function "decide_path", change the path decision to use the calculated average per-flow throughput: forward the new flow onto the path with the higher average per-flow throughput. (Why? TCP tries to consume all available bandwidth, so higher per-flow throughput suggests that the path is less congested.) A rough sketch of this idea appears after this list.
  • If you do not know where to start, check the hints in Section 3.1.
    • If you really do not know where to start after reading the hints, the answer can be found on node "Switch", at /tmp/load-balancer/load-balancer-solution.rb
    • Copy the above solution into your home directory then re-do the experiment on LabWiki.
      You need to change your script to use the correct Load Balancing controller (e.g., if your controller is "load-balancer-solution.rb", you should run "/opt/trema-trema-f995284/trema run /root/load-balancer-solution.rb")
  • Rerun the experiment using your new OpenFlow Controller following the steps in Section 2.5, check the graphs plotted on LabWiki as well as the controller's log on node "Switch", and note the difference.
  • When your experiment is done, you need to stop the Load Balancer:
    • On node "Switch", use the following command to disconnect the OpenFlow Switch from the controller:
      ovs-vsctl del-controller br0
      
    • On node "Switch", press "Ctrl" and "c" key to kill your Load Balancer process on node "Switch"

3.4 Automate your experiment using LabWiki

  • 3.4.1 Add code in your LabWiki script to automate starting and stopping your OpenFlow Controller (a rough sketch of what such code looks like appears after this list):
    • 3.4.1.1 Go back to your LabWiki page, un-comment the script from line 184 to line 189 to start your OpenFlow Controller automatically on LabWiki
      You might need to change line 185 to use the correct load balancer controller
    • 3.4.1.2 In your script, uncomment lines 205 to line 209 to stop your OpenFlow Controller automatically on LabWiki
  • 3.4.2 On your LabWiki web page, drag and drop the file icon and repeat the experiment, as described in section 3.1, using a different experiment name (the slice name should stay the same).
  • If you have more time or are interested in trying out more things, go ahead and try Section 3.5. The tutorial is now over; feel free to ask questions :-)
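For reference, automation like the lines mentioned in 3.4.1 typically wraps the same shell commands you ran by hand in OEDL exec calls on the "Switch" group, roughly as in the hypothetical sketch below; the exact group name, commands, timing, and line numbers in the preloaded script may differ.

    # Hypothetical sketch of starting/stopping the controller from OEDL
    # (the real lines 184-189 and 205-209 of the preloaded script may differ).
    onEvent(:ALL_UP_AND_INSTALLED) do |event|
      # Start the Load Balancer on node "Switch" and attach the OVS bridge.
      group('Switch').exec('/opt/trema-trema-f995284/trema run /root/load-balancer.rb')
      group('Switch').exec('ovs-vsctl set-controller br0 tcp:127.0.0.1 ptcp:6634:127.0.0.1')

      after 600 do
        # Detach the switch and stop the controller when the run is over.
        group('Switch').exec('ovs-vsctl del-controller br0')
        group('Switch').exec('pkill -f "trema run"')
        Experiment.done
      end
    end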

3.5 (Optional) Try different kinds of OpenFlow Load Balancers

  • You can find more load balancers under /tmp/load-balancer/ on node "Switch"
  • To try out any one of them, follow the steps:
    • At the home directory on node "Switch", copy the load balancer you want to try out, e.g.,
      cp /tmp/load-balancer/load-balancer-random.rb /root/
      
    • Change your LabWiki code at line 185 to use the correct OpenFlow controller.
    • On LabWiki, drag and drop the "File" icon and re-do the experiment as described in section 3.1
  • Some explanations about the different load balancers:
    • "load-balancer-random.rb" is the load balancer that picks path randomly: each path has 50% of the chance to get picked
    • "load-balancer-roundrobin.rb" is the load balancer that picks path in a round robin fashion: right path is picked first, then left path, etc.
    • Load balancers that begin with "load-balancer-bytes" picks path based on the total number of bytes sent out to each path: the one with fewer bytes sent out is picked
      • "load-balancer-bytes-thread.rb" sends out flow stats request in function "packet_in" upon the arrival of a new TCP flow and waits until flow stats reply is received in function "stats_reply" before a decision is made. As a result, this balancer gets the most up-to-date flow stats to make a decision. However, it needs to wait for at least the round-trip time from the controller to the switch (for the flow stats reply) before a decision can be made.
      • "load-balancer-bytes-auto-thread.rb" sends out flow stats request once every 5 seconds in a separate thread, and makes path decisions based on the most recently received flow stats reply. As a result, this balancer makes path decisions based on some old statistics (up to 5 seconds) but reacts fast upon the arrival of a new TCP flow (i.e., no need to wait for flow stats reply)
    • Load balancers that begin with "load-balancer-flows" picks path based on the total number of flows sent out to each path: the one with fewer flows sent out is picked
    • Load balancers that begin with "load-balancer-throughput" picks path based on the total throughput sent out to each path: the one with more throughput is picked

Introduction | Next: Teardown Experiment
