
Materials and Guidance for Leading This Exercise: OpenFlow Assignment

Exercise materials

This page collects what an instructor might need to lead this exercise: tips, useful commands, and worked solutions.

Guidance for leading the exercise

  • Useful commands: You will find it helpful to know what is going on inside your OpenFlow controller and its associated switch when implementing these exercises.

This section contains a few tips that may help you out if you are using the Open vSwitch implementation provided with this tutorial. If you are using a hardware OpenFlow switch, your instructor can help you find equivalent commands.
The Open vSwitch installation provided by the RSpec included in this tutorial is located in /opt/openvswitch-1.6.1-F15. You will find Open vSwitch commands in /opt/openvswitch-1.6.1-F15/bin and /opt/openvswitch-1.6.1-F15/sbin. Some of these commands may be helpful to you. If you add these paths to your shell’s $PATH, you will be able to access their manual pages with man. Note that $PATH will not affect sudo, so you will still have to provide the absolute path when running these commands via sudo; the absolute paths are omitted from the following examples for clarity and formatting.

  • 2.1 ovs-vsctl
    Open vSwitch switches are primarily configured using the ovs-vsctl command. For exploring, you may find the ovs-vsctl show command useful, as it dumps the status of all virtual switches on the local Open vSwitch instance. Once you have some information on the local switch configurations, ovs-vsctl provides a broad range of capabilities that you will likely find useful for expanding your network setup to more complex configurations for testing and verification. In particular, the subcommands add-br, add-port, and set-controller may be of interest.
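    For example, to inspect the current configuration and attach a new bridge to a controller (the bridge name, interface name, and controller address here are illustrative; adapt them to your setup):
    
    sudo ovs-vsctl show
    sudo ovs-vsctl add-br br0
    sudo ovs-vsctl add-port br0 eth3
    sudo ovs-vsctl set-controller br0 tcp:127.0.0.1:6633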
  • 2.2 ovs-ofctl
    The switch host configured by the given rspec listens for incoming OpenFlow connections on localhost port 6634. You can use this to query the switch state using the ovs-ofctl command. In particular, you may find the dump-tables and dump-flows subcommands useful. For example, sudo ovs-ofctl dump-flows tcp:127.0.0.1:6634 will output lines that look like this:
    cookie=0x4, duration=6112.717s, table=0, n_packets=1, n_bytes=74, idle_age=78, priority=5,tcp,nw_src=10.10.10.0/24 actions=CONTROLLER:65535
    
    This indicates that any TCP segment with source IP in the 10.10.10.0/24 subnet should be sent to the OpenFlow controller for processing, that it has been 78 seconds since such a segment was last seen, that one such segment has been seen so far, and the total number of bytes in packets matching this rule is 74. The other fields are perhaps interesting, but you will probably not need them for debugging. (Unless, of course, you choose to use multiple tables — an exercise in OpenFlow 1.1 functionality left to the reader.)
  • 2.3 Unix utilities
    You will want to use a variety of Unix utilities, in addition to the tools listed in ExerciseLayout, to test your controllers. The standard ping and /usr/sbin/arping tools are useful for debugging connectivity (but make sure your controller passes ICMP ECHO REQUEST and REPLY packets and ARP traffic, respectively!), and the command netstat -an will show all active network connections on a Unix host; the TCP connections of interest in this exercise will be at the top of the listing. The format of netstat output is out of the scope of this tutorial, but information is available online and in the manual pages.
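    For example (the target addresses and interface name here are illustrative):
    
    ping -c 3 10.10.11.1
    sudo /usr/sbin/arping -I eth1 10.10.11.1
    netstat -an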

Solutions

  • 3.1 Building a Firewall with OpenFlow
    A firewall observes the packets that pass through it, and uses a set of rules to determine whether any given packet should be allowed to pass. A stateless firewall does this using only the rules and the current packet. A stateful firewall keeps track of the packets it has seen in the past, and uses information about them, along with the rules, to make its determinations.
    In this exercise, you will build a stateful firewall controller for TCP connections in OpenFlow. The first packet of each connection will be handled by the controller, but all other connection packets will be handled by the OpenFlow-enabled router or switch without contacting your controller. This design will allow you to write powerful firewall rule sets without unduly impacting packet forwarding speeds. Your controller will parse a simple configuration file to load its rules. Complete stateful firewalls often handle multiple TCP/IP protocols (generally at least both TCP and UDP), track transport protocol operational states, and often understand some application protocols, particularly those utilizing multiple transport streams (such as FTP, SIP, and DHCP). The firewall you will implement for this exercise, however, needs to handle only TCP, and will not directly process packet headers or data.

Question 1. Fill in the blanks in the function switch_ready to insert rules into the OpenFlow switch that allow ICMP and ARP packets to go through.
Question 2. Fill in the blanks in the function packet_in to insert a flow match into the OpenFlow device that allows packets matching the rules in fw.conf (as well as those on the reverse path) to pass.
Question 3. Fill in the blanks in the function packet_in to insert rules that drop all other packets that do not match the rules specified in fw.conf.
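
For reference, here is a minimal sketch of the kind of rule installation Question 1 asks for. It assumes the Trema Ruby controller API used in this tutorial (switch_ready, send_flow_mod_add, Match); action class names differ between Trema versions (ActionOutput in older releases, SendOutPort in newer ones), so adapt it to the skeleton you were given rather than treating it as the solution's actual code.

# Sketch only: install permanent rules in switch_ready that let ARP and
# ICMP traffic through using the switch's NORMAL forwarding behavior.
class Firewall < Controller
  OFPP_NORMAL = 0xfffa  # OpenFlow NORMAL pseudo-port; use Trema's constant if your version defines one

  def switch_ready(datapath_id)
    # ARP: Ethertype 0x0806
    send_flow_mod_add(
      datapath_id,
      :match    => Match.new(:dl_type => 0x0806),
      :priority => 65535,
      :actions  => SendOutPort.new(OFPP_NORMAL)
    )
    # ICMP: Ethertype 0x0800 (IPv4) with IP protocol 1
    send_flow_mod_add(
      datapath_id,
      :match    => Match.new(:dl_type => 0x0800, :nw_proto => 1),
      :priority => 65535,
      :actions  => SendOutPort.new(OFPP_NORMAL)
    )
  end
end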
Solution: the source code for the OpenFlow controller, along with a sample configuration file, can be downloaded from firewall-solution1.rb and fw-solution1.conf.
Rename these two files to firewall.rb and fw.conf, respectively, after downloading them.
To verify your implementation, run the following on the switch:

/opt/trema-trema-8e97343/trema run 'firewall.rb fw.conf'

Then try to ping from left to right. The ping should go through, since you allowed ICMP and ARP packets to pass.
If you are using the fw.conf we provided, try running TCP sessions from left to right with iperf on ports 5001, 5002, and 5003. Since the provided fw.conf explicitly allows TCP through ports 5001 and 5002 but not port 5003, you should see iperf report throughput results for ports 5001 and 5002 but not for 5003.

Try playing with the code and the fw.conf file to set up more rules, then verify your settings via iperf or telnet.
You can check the flow table on the OpenFlow Switch via:

sudo /opt/openvswitch-1.6.1-F15/bin/ovs-ofctl dump-flows tcp:127.0.0.1:6634

Sample output should look something like the following:

NXST_FLOW reply (xid=0x4):
 cookie=0x1, duration=165.561s, table=0, n_packets=6, n_bytes=360, idle_age=17,priority=65535,arp actions=NORMAL
 cookie=0xa, duration=43.24s, table=0, n_packets=3, n_bytes=222, idle_timeout=300,idle_age=22,priority=65535,tcp,in_port=1,vlan_tci=0x0000,dl_src=00:02:b3:65:d1:2b,dl_dst=00:03:47:94:c7:fd,nw_src=10.10.10.1,nw_dst=10.10.11.1,nw_tos=0,tp_src=46361,tp_dst=5003 actions=drop
 cookie=0x5, duration=147.156s, table=0, n_packets=18289, n_bytes=27682198, idle_timeout=300,idle_age=137,priority=65535,tcp,in_port=1,vlan_tci=0x0000,dl_src=00:02:b3:65:d1:2b,dl_dst=00:03:47:94:c7:fd,nw_src=10.10.10.1,nw_dst=10.10.11.1,nw_tos=0,tp_src=33385,tp_dst=5001 actions=NORMAL
 cookie=0x9, duration=105.294s, table=0, n_packets=4, n_bytes=296, idle_timeout=300,idle_age=60,priority=65535,tcp,in_port=1,vlan_tci=0x0000,dl_src=00:02:b3:65:d1:2b,dl_dst=00:03:47:94:c7:fd,nw_src=10.10.10.1,nw_dst=10.10.11.1,nw_tos=0,tp_src=46360,tp_dst=5003 actions=drop
 cookie=0x7, duration=124.764s, table=0, n_packets=17902, n_bytes=27095256, idle_timeout=300,idle_age=114,priority=65535,tcp,in_port=1,vlan_tci=0x0000,dl_src=00:02:b3:65:d1:2b,dl_dst=00:03:47:94:c7:fd,nw_src=10.10.10.1,nw_dst=10.10.11.1,nw_tos=0,tp_src=57908,tp_dst=5002 actions=NORMAL
 cookie=0x3, duration=165.561s, table=0, n_packets=1, n_bytes=74, idle_timeout=300,idle_age=124,priority=65535,tcp,nw_src=10.10.10.0/24,nw_dst=10.10.11.0/24,tp_dst=5002 actions=CONTROLLER:65535
 cookie=0x4, duration=165.561s, table=0, n_packets=1, n_bytes=74, idle_timeout=300,idle_age=147,priority=65535,tcp,nw_src=10.10.10.0/24,nw_dst=10.10.11.0/24,tp_dst=5001 actions=CONTROLLER:65535
 cookie=0x2, duration=165.561s, table=0, n_packets=0, n_bytes=0, idle_age=165,priority=65535,icmp actions=NORMAL
 cookie=0x6, duration=147.156s, table=0, n_packets=9387, n_bytes=624254, idle_timeout=300,idle_age=137,priority=65535,tcp,nw_src=10.10.11.1,nw_dst=10.10.10.1,tp_src=5001,tp_dst=33385 actions=NORMAL
 cookie=0x8, duration=124.764s, table=0, n_packets=9257, n_bytes=617666, idle_timeout=300,idle_age=114,priority=65535,tcp,nw_src=10.10.11.1,nw_dst=10.10.10.1,tp_src=5002,tp_dst=57908 actions=NORMAL

Note that for tp_dst=5003 the action is drop, while for tp_dst=5001 and 5002 (as well as the reverse path) the action is NORMAL.

Extra Credit (no solution has been written up yet)
For extra credit (if permitted by your instructor), generate TCP reset segments at the firewall to reset rejected connections.

  • 3.2 Extending the Firewall
    OpenFlow controllers can also make complex flow decisions based on arbitrary state. This is one benefit to removing the controller from the network device — the controller is free to perform any computation required over whatever data is available when making decisions, rather than being constrained to the limited computing power and storage of the network device. For this exercise, you will extend the firewall described in Section 3.1 to include rudimentary denial of service prevention using this capability.

Solution: the source code for the OpenFlow controller, along with a sample configuration file, can be downloaded from firewall-solution2.rb and fw-solution2.conf.
Rename these two files to firewall.rb and fw.conf, respectively, after downloading them.
To verify your implementation, run the following on the switch:

/opt/trema-trema-8e97343/trema run 'firewall.rb fw.conf'

In this OpenFlow controller, I set the idle_timeout to 30 seconds so that you will not need to wait too long before a connection times out and its flow entries get removed.
I set the maximum number of allowed flows for port 5002 to 10 in fw.conf.
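
The heart of the solution is a per-rule count of active flows, decremented whenever the switch reports an expired entry. The sketch below reduces rule matching to the destination TCP port; the class, method, and table names are illustrative, not the actual code in firewall.rb.

# Sketch: count active flows per limited port, refuse new connections at the
# limit, and decrement the count when the switch reports an expired entry.
class FlowCounter
  LIMITS = { 5002 => 10 }   # tp_dst => max concurrent flows (as in fw.conf)

  def initialize
    @counts = Hash.new(0)
  end

  # Called from packet_in for the first packet of a new connection;
  # if this returns true, install NORMAL entries for both directions with
  # :idle_timeout => 30 and ask the switch to send flow-removed messages.
  def admit?(tp_dst)
    limit = LIMITS[tp_dst]
    return true if limit.nil?             # no limit configured for this port
    return false if @counts[tp_dst] >= limit
    @counts[tp_dst] += 1
    true
  end

  # Called from the controller's flow_removed handler when an entry expires.
  def release(tp_dst)
    @counts[tp_dst] -= 1 if @counts[tp_dst] > 0
  end
end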
When the controller is up, run the following on the right node:

/usr/local/etc/emulab/emulab-iperf -s -p 5002

Run the following on the left node to create 10 flows to port 5002:

/usr/local/etc/emulab/emulab-iperf -c 10.10.11.1 -p 5002 -P 10

iperf should go through, since the firewall allows up to 10 flows to pass. At the same time, the OpenFlow controller (firewall.rb) should output the following:

action=allow, datapath_id=0x2b3861f8b, count=1, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35288, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=2, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35289, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=3, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35290, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=4, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35291, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=5, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35292, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=6, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35293, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=7, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35294, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=8, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35295, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=9, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35296, tp_dst = 5002}
action=allow, datapath_id=0x2b3861f8b, count=10, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35297, tp_dst = 5002}

Pay attention to the count and limit values in the output.
Now, if you quickly run the following on the left node to add another flow before the current 10 flows expire:

/usr/local/etc/emulab/emulab-iperf -c 10.10.11.1 -p 5002

You may find that iperf will not go through, and the controller outputs the following:

action=block, datapath_id=0x2b3861f8b, count=10, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 36399, tp_dst = 5002}

This indicates that the additional flow is blocked. Next, if you wait long enough (30 seconds in our case), you will see the controller output the following:

Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=9, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35290, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=8, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35293, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=7, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35295, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=6, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35292, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=5, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35291, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=4, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35296, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=3, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35288, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=2, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35289, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=1, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35297, tp_dst = 5002}
Flow Entry Expired or Removed!!!!!!!!!!!action=allow, datapath_id=0x2b3861f8b, count=0, limit=10, message={wildcards = 0(none), in_port = 1, dl_src = 00:02:b3:65:d1:2b, dl_dst = 00:03:47:94:c7:fd, dl_vlan = 65535, dl_vlan_pcp = 0, dl_type = 0x800, nw_tos = 0, nw_proto = 6, nw_src = 10.10.10.1/32, nw_dst = 10.10.11.1/32, tp_src = 35294, tp_dst = 5002}

This means that the controller caught the flow-entry-removed event from the OpenFlow switch and updated the count value for the corresponding rule.
If you now try to send a TCP flow using iperf again, it will go through.

  • 3.3 Load Balancing
    Load balancing in computer networking is the division of network traffic between two or more network devices or paths, typically to achieve higher total throughput than any one path alone, to ensure a specific maximum latency or minimum bandwidth for some or all flows, or for similar purposes. For this exercise, you will design a load-balancing OpenFlow controller capable of collecting external data and using it to divide traffic between dissimilar network paths so as to achieve full bandwidth utilization with minimal queuing delays.
    An interesting property of removing the controller from an OpenFlow device and placing it in an external system of arbitrary computing power and storage capability is that decision-making for network flows based on external state becomes reasonable. Traditional routing and switching devices make flow decisions based largely on local data (or perhaps data from adjacent network devices), but an OpenFlow controller can collect data from servers, network devices, or any other convenient source, and use this data to direct incoming flows.
    For the purpose of this exercise, data collection will be limited to the bandwidth and queue occupancy of two emulated network links.

Linux netem
Use the tc command to enable and configure delay and bandwidth constraints on the outgoing interfaces for traffic traveling from the OpenFlow switch to the Aggregator node. To configure a path with 20 Mbps bandwidth and a 20 ms delay on eth2, you would issue the following commands:

sudo tc qdisc add dev eth2 root handle 1:0 netem delay 20ms
sudo tc qdisc add dev eth2 parent 1:0 tbf rate 20mbit buffer 20000 limit 16000

See the tc and tc-tbf manual pages for more information on configuring tc token bucket filters, as in the second command above. Use the tc qdisc change command to reconfigure existing links, instead of tc qdisc add.
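For example, to later lower the rate on the same interface to 10 Mbps (the new rate here is just an illustration):

sudo tc qdisc change dev eth2 parent 1:0 tbf rate 10mbit buffer 20000 limit 16000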
The outgoing links in the provided lb.rspec are addressed 192.168.4.1 and 192.168.5.1 for left and right, respectively.

Balancing the Load
The goal of your OpenFlow controller will be to achieve full bandwidth utilization, with minimal queuing delays, of the two links between the OpenFlow switch and the Aggregator host. To accomplish this, your OpenFlow controller will intelligently divide TCP flows between the two paths. The intelligence for this decision will come from bandwidth and queuing status reports from the two traffic shaping nodes representing the alternate paths.
When the network is lightly loaded, flows may be directed toward either path, as neither path exhibits queuing delays and both paths are largely unloaded. As network load increases, however, your controller should direct flows toward the least loaded fork in the path, as defined by occupied bandwidth for links that are not yet near capacity and queue depth for links that are near capacity.
Because TCP traffic is bursty and unpredictable, your controller will not be able to perfectly balance the flows between these links. However, as more TCP flows are combined on the links, their combined congestion control behaviors will allow you to utilize the links to near capacity, with queuing delays that are roughly balanced. Your controller need not re-balance flows that have previously been assigned, but you may do so if you like.
The binding of OpenFlow port numbers to logical topology links can be found in the file /tmp/portmap on the switch node when the provided RSpec boots. It consists of three lines, each containing one logical link name (left, right, and outside) and an integer indicating the port on which the corresponding link is connected. You may use this information in your controller configuration if it is helpful.
You will find an example OpenFlow controller that arbitrarily assigns incoming TCP connections to alternating paths in the file load-balancer.rb. This simple controller can be used as a starting point for your controller if you desire. Examining its behavior may also prove instructive; you should see that its effectiveness at achieving the assignment goals falls off as the imbalance between the link capacities or delays grows.

Gathering Information
The information you will use to inform your OpenFlow controller about the state of the two load-balanced paths will be gathered from the traffic shaping hosts. This information can be parsed out of the file /proc/net/dev, which contains a line for each interface on the machine, as well as the tc -p qdisc show command, which displays the number of packets in the token bucket queue. As TCP connections take some time to converge on a stable bandwidth utilization, you may want to collect these statistics once every few seconds, and smooth the values you receive over the intervening time periods.
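
For example, here is a minimal Ruby sketch of reading the transmitted-byte counter for one interface out of /proc/net/dev (the provided netinfo.py script, introduced below, already does this for you; the function and variable names here are illustrative):

# Sketch: return the TX byte counter for an interface from /proc/net/dev.
# Sampling this every few seconds and differencing the samples gives the
# number of bytes sent per interval.
def tx_bytes(ifname)
  File.readlines("/proc/net/dev").each do |line|
    next unless line.include?("#{ifname}:")
    counters = line.split(":").last.split
    return counters[8].to_i   # 8 receive fields come first; field 9 is TX bytes
  end
  nil
end

Queue depth can be gathered similarly by running tc -p qdisc show dev eth1 and parsing the packet count out of its output.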
You may find the file /tmp/ifmap on the traffic shaping nodes useful. It is created at system startup, and identifies the inside- and outside-facing interfaces with lines such as:

inside eth2
outside eth1

The first word on the line is the “direction” of the interface — toward the inside or outside of the network diagram. The second is the interface name as found in /proc/net/dev.
You are free to communicate these network statistics from the traffic shaping nodes to your OpenFlow controller in any fashion you like. You may want to use a web service, or transfer the data via an external daemon and query a statistics file from the controller. Keep in mind that flow creation decisions need to be made rather quickly, to prevent retransmissions on the connecting host.

Questions
To help you fetch information about the amount of traffic, as well as the queue depth (measured in packets), on both the left and right nodes, we provide a script that you can download and run on both nodes.
You can download the script from netinfo.py (if you have already downloaded it, skip this step). Then do the following to start monitoring network usage:

    • 1. install the Twisted Web package for Python on both the left and right nodes:
      sudo yum install python-twisted-web
      
    • 2. upload netinfo.py onto the left and right nodes, then change the listening IP address in netinfo.py to the public IP address of the left and right node, respectively; i.e., replace the 0.0.0.0 in the following line of your netinfo.py file with the node's public IP address:
      reactor.listenTCP(8000, factory, interface = '0.0.0.0')
      
    • 3. apply a qdisc on interface eth1 of both the left and right nodes by executing the following (you may further change the parameters using tc qdisc change):
      sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 20ms
      sudo /sbin/tc qdisc add dev eth1 parent 1:0 tbf rate 20mbit buffer 20000 limit 16000
      
    • 4. run the script:
      python netinfo.py
      
    • 5. verify that it is working by opening a web browser and entering the following URL (replace 155.98.36.69 with your left or right node's public IP address):
      http://155.98.36.69:8000/qinfo/0
      
      For more information about netinfo.py, please look at the comments in the file.
      Question: Implement your load-balancer.rb, run it on the switch node, and display the number of bytes and the queue length on both the left and right nodes each time a decision is made.
      Solution: The source code for load-balancer.rb can be found at lb-solution.rb.
      Steps to verify that it works:
    • 1. Download the code lb-solution.rb, upload it to the switch node, open the source code, and change $leftip and $rightip to the corresponding public IP addresses of the left and right nodes.
    • 2. Configure both the left and right nodes with the following (you can change the parameters, but they must be applied on dev eth1):
      sudo /sbin/tc qdisc add dev eth1 root handle 1:0 netem delay 20ms
      sudo /sbin/tc qdisc add dev eth1 parent 1:0 tbf rate 20mbit buffer 20000 limit 16000
      
    • 3. On both the left and right nodes, run netinfo.py. On the switch node, run sudo /opt/trema-trema-8e97343/trema run lb-solution.rb to start the OpenFlow controller.
    • 4. Run multiple TCP flows from the outside node to the inside node. On the inside node, run:
      /usr/local/etc/emulab/emulab-iperf -s
      
      On the outside node, run the following multiple times, with about a 6-second interval between runs:
      /usr/local/etc/emulab/emulab-iperf -c 10.10.10.2 -t 100 &
      
      This gives netinfo.py enough time to collect network usage statistics from both the left and right nodes, so that the load balancer can make the right decision.
    • 5. Pay attention to the output from the load balancer every time a new TCP flow is generated from the outside node. Sample output should be similar to the following:
      left:  5302.5338056252 bytes, Q Depth: 20.5240502532964 packets
      right: 14193.5212452065 bytes, Q Depth: 27.3912943665466 packets
      so this new flow goes left. Total number of flows on left: 1
      left:  30864.5415104438 bytes, Q Depth: 5.35704134958587 packets
      right: 499.390132742089 bytes, Q Depth: 0.963745492987305 packets
      so this new flow goes right. Total number of flows on right: 1
      left:  34797.5504056022 bytes, Q Depth: 4.51942007138619 packets
      right: 119634.69213493 bytes, Q Depth: 9.33169180628916 packets
      so this new flow goes left. Total number of flows on left: 2
      
      Generally speaking, the decision is made as follows: when the available queue length seen on one node is drastically larger than on the other, the new flow goes to that node/path; otherwise, the new flow goes to whichever node/path has seen the smaller number of bytes. A minimal sketch of this rule appears after this list.
      Experimenters can further change the tc token bucket filter parameters to test their load balancer.
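
The following restates that decision rule as a minimal Ruby sketch; QDEPTH_GAP, choose_path, and the stats hashes are illustrative names and values, not code taken from lb-solution.rb.

QDEPTH_GAP = 10.0  # packets; what counts as a drastic queue difference (illustrative)

# left and right are hashes such as { :bytes => 30864.5, :qdepth => 5.36 },
# built from the periodic netinfo.py reports for each path.
def choose_path(left, right)
  if (left[:qdepth] - right[:qdepth]).abs > QDEPTH_GAP
    # one queue is much shorter, i.e. has much more available room: use that path
    left[:qdepth] < right[:qdepth] ? :left : :right
  else
    # otherwise prefer the path that has seen fewer bytes recently
    left[:bytes] <= right[:bytes] ? :left : :right
  end
end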