== D. About the GIMI script you run on !LabWiki ==
 - Line 1 to Line 128: the definitions of the oml2-trace and oml2-nmetrics application wrappers. They define the command-line options for oml2-trace and oml2-nmetrics, as well as their output (the monitoring data that will be stored in the OML server).
   - Users are not supposed to modify them.
   - The definitions used here differ from those shipped with the latest OML2 2.10.0 library, because of a version mismatch between the OMF that !LabWiki uses and the OML2 toolkit that we are using. It is a temporary hack for now --> to be fixed.
   - We added the "--oml-config" option to the trace app (Line 27-28) so that oml2-trace accepts configuration files:
{{{
app.defProperty('config', 'config file to follow', '--oml-config',
                :type => :string, :default => '"/tmp/monitor/conf.xml"')
}}}
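   - In the experiment section this property can then be pointed at the configuration file when the application is added to a group. A sketch of the usual OEDL pattern (it needs the OMF/!LabWiki runtime to run; the group variable name is illustrative):
{{{
group.addApplication("trace") do |app|
  # Point oml2-trace at the configuration file created on the node
  app.setProperty('config', '/tmp/monitor/conf.xml')
end
}}}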
 - Line 134 to Line 137: the user defines the monitored interfaces here. In our case, we want to monitor the two interfaces on node "Switch": the one connecting to the left path (IP 192.168.2.2) and the one connecting to the right path (IP 192.168.3.1).
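   - A typical way to make these interfaces configurable is with top-level defProperty declarations. A sketch (the property names are illustrative; the IP addresses are the ones used in this tutorial):
{{{
defProperty('ip_left',  '192.168.2.2', 'Switch interface on the left path')
defProperty('ip_right', '192.168.3.1', 'Switch interface on the right path')
}}}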
 - Line 139 to Line 169: defines which monitoring applications run on which node, and the "display graph" option.
   - Group "Monitor" monitors the left-path statistics using nmetrics and trace.
   - Group "Monitor1" monitors the right-path statistics using nmetrics and trace.
   - To monitor throughput, we run oml2-trace with the "--oml-config" option pointing at the configuration file we created at /tmp/monitor/conf.xml. It simply sums the tcp_packet_size field (in bytes) over each one-second interval and saves the result to the OML server (in a PostgreSQL database):
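   - Both groups follow the same OEDL pattern. A simplified sketch of one of them (it needs the OMF runtime; the node reference, property names, and measurement point are assumptions, not copied from the script):
{{{
defGroup('Monitor', property.theSwitch) do |g|
  g.addApplication("nmetrics") do |app|
    # Illustrative: sample the network-interface counters once per interval
    app.setProperty('interface', property.ip_left)
    app.measure('net_if', :samples => 1)
  end
  g.addApplication("trace") do |app|
    app.setProperty('config', '/tmp/monitor/conf.xml')
  end
end
}}}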
{{{
<omlc id="switch" encoding="binary">
  <collect url="tcp:emmy9.casa.umass.edu:3004" name="traffic">
    <stream mp="tcp" interval="1">
      <filter field="tcp_packet_size" operation="sum" rename="tcp_throughput" />
    </stream>
  </collect>
</omlc>
}}}
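   - The effect of the "sum" filter can be illustrated in plain Ruby (this only illustrates the arithmetic, it is not part of the script; the sample packet sizes are made up):
{{{
# Hypothetical [timestamp, tcp_packet_size] samples seen by oml2-trace
samples = [[0.2, 1448], [0.7, 1448], [1.1, 724]]

# With interval="1", the "sum" filter adds up tcp_packet_size per second
per_second = Hash.new(0)
samples.each { |t, size| per_second[t.floor] += size }

per_second  # {0 => 2896, 1 => 724}
}}}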
   - More information about nmetrics and trace can be found here: http://oml.mytestbed.net/projects/omlapp/wiki/OML-instrumented_Applications#Packet-tracer-trace-oml2
 - Line 173 to Line 218: defines the experiment:
   - Line 175-177: starts the monitoring apps.
   - Line 179-181: starts the TCP receiver (using iperf).
   - Line 183-189: starts the load balancer and connects the OVS switch to the load balancer (the controller).
   - Line 191-200: starts 20 TCP flows, with a 5-second interval between the start of consecutive flows.
   - Line 205-209: stops the load-balancer controller, disconnects the OVS switch from the controller, and finishes the experiment.
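   - The steps above follow the usual OEDL event pattern. A condensed sketch (it needs the OMF runtime; the group names and timings are illustrative, not the script's actual values):
{{{
onEvent(:ALL_UP_AND_INSTALLED) do |event|
  group('Monitor').startApplications      # start the monitoring apps
  group('Receiver').startApplications     # hypothetical iperf-receiver group
  20.times do |i|
    # one new TCP flow every 5 seconds
    after(i * 5) { group("Flow#{i}").startApplications }
  end
  after(150) do
    allGroups.stopApplications
    Experiment.done
  end
end
}}}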
 - Line 217 to Line 234: defines the two graphs we want to plot:
   - The first uses the monitoring data from oml2-nmetrics to display the cumulative number of bytes observed on each interface.
   - The second uses the monitoring results from oml2-trace to display the throughput observed on each interface.
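 - In !LabWiki such graphs are declared with defGraph. A sketch of the second (throughput) graph, assuming the measurement-stream and column names produced by the conf.xml above (they may differ in the actual script):
{{{
defGraph 'Throughput' do |g|
  g.ms('traffic').select(:oml_ts_client, :tcp_throughput, :oml_sender_id)
  g.caption "Throughput observed at each interface"
  g.type 'line_chart3'
  g.mapping :x_axis => :oml_ts_client, :y_axis => :tcp_throughput,
            :group_by => :oml_sender_id
end
}}}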

= E. Tips: Debugging an OpenFlow Controller =