Changes between Initial Version and Version 1 of GENIEducation/SampleAssignments/WinterCamp14/GIMITutorial/Procedure/Execute

Timestamp: 01/08/14 16:38:29
Author: divyashri.bhat@gmail.com

     1
     2= [wiki:GEC17Agenda/AdvancedOpenFlow/Procedure OpenFlow Load Balancer Tutorial] =
     3{{{
     4#!html
     5
     6<div style="text-align:center; width:495px; margin-left:auto; margin-right:auto;">
     7<img id="Image-Maps_5201305222028436" src="http://groups.geni.net/geni/attachment/wiki/GENIExperimenter/Tutorials/Graphics/Execute.jpg?format=raw" usemap="#Image-Maps_5201305222028436" border="0" width="495" height="138" alt="" />
     8<map id="_Image-Maps_5201305222028436" name="Image-Maps_5201305222028436">
     9<area shape="rect" coords="18,18,135,110" href="http://groups.geni.net/geni/wiki/GEC17Agenda/AdvancedOpenFlow/Procedure/DesignSetup" alt="" title=""    />
     10<area shape="rect" coords="180,18,297,111" href="http://groups.geni.net/geni/wiki/GEC17Agenda/AdvancedOpenFlow/Procedure/Execute" alt="" title=""    />
     11<area shape="rect" coords="344,17,460,110" href="http://groups.geni.net/geni/wiki/GEC17Agenda/AdvancedOpenFlow/Procedure/Finish" alt="" title=""    />
     13</map>
     15
     16</div>
     17}}}
     18
     19== 2. Configure and Initialize Services ==
     20=== 2.1. Start a naive OpenFlow controller ===
      21 An example OpenFlow Controller that assigns incoming TCP connections to alternating paths '''based on the total number of flows''' (round robin) has already been downloaded for you. You can find it (load-balancer.rb) in the home directory on node "Switch". [[BR]]
      22  - '''2.1.1''' Log on to node "Switch". For this tutorial we use the `readyToLogin.py` omni script.
     23    - '''2.1.1.1''' Open a terminal window
     24    - '''2.1.1.2''' Run:
     25    {{{
     26       readyToLogin.py -a EG-AM SLICENAME
     27    }}}
      28    - '''2.1.1.3''' Find the line that corresponds to the switch node, then copy and paste it into your terminal to log in
     29  - '''2.1.2''' Check that all interfaces are configured: Issue `ifconfig` and make sure eth1, eth2, eth3 are up and assigned with valid IP addresses. [[BR]]
      30  - '''2.1.3''' Start the example Load Balancer by executing the following:
     31  {{{
     32  /opt/trema-trema-f995284/trema run /root/load-balancer.rb
     33  }}}
      34  - '''2.1.4''' After you start your Load Balancer, you should see the following (the switch ID may vary):
     35  {{{
      36  OpenFlow Load Balancer Controller Started!
     37  Switch is Ready! Switch id: 196242264273477
     38  }}}
     39  This means the OpenFlow Switch is connected to your controller and you can start testing your OpenFlow Load Balancer now.
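The round-robin behavior can be illustrated with a small pure-Ruby sketch. The class and method names below are hypothetical and for illustration only; the actual controller logic lives in load-balancer.rb on node "Switch":

```ruby
# Illustrative sketch of round-robin path assignment: each new TCP flow
# goes to the path that currently carries fewer flows, so under equal
# load the two paths are chosen alternately.
# (Hypothetical class; the real controller is load-balancer.rb.)
class RoundRobinBalancer
  attr_reader :flow_counts

  def initialize
    @flow_counts = { left: 0, right: 0 }
  end

  # Pick the path with fewer active flows; ties go to the left path.
  def decide_path
    path = @flow_counts[:left] <= @flow_counts[:right] ? :left : :right
    @flow_counts[path] += 1
    path
  end
end

lb = RoundRobinBalancer.new
paths = 4.times.map { lb.decide_path }
# paths == [:left, :right, :left, :right]
```

Under equal load the two paths alternate, which is the '''based on the total number of flows''' behavior described above.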
     40 
     41=== 2.2. Configure !LabWiki to orchestrate and monitor your experiment ===
      42   - '''2.2.1''' Log on to !LabWiki at http://emmy9.casa.umass.edu:4000. In the `Prepare` column, create a new Ruby script by clicking the '+' at the top of the column. [[BR]]
     43
     44[[Image(LabWiki_newscript.png)]]
     45
      46Type a name for the script, e.g., advOF-script, and save it as a Ruby file. Enter the name of the script you just created in the `Prepare` column; it is now ready for editing.
     47
     48[[Image(LabWiki_newscriptsel.png)]]
     49
      50To run Iperf from node "Outside" to node "Inside", add the script below to a new file, udp-iperf.rb, and click "save" at the top of the `Prepare` column.
     51
      52'''Note''': You should change the slice name in the first two lines of the script to match your slice name.
     53
     54{{{
     55defProperty('theSender', "outside-debloadbal", "ID of sender node")
     56defProperty('theReceiver', "inside-debloadbal", "ID of receiver node")
     57defProperty('setinterval','1.0',"Time between iperf")
     58defProperty('serverip', "10.10.10.2","Server interface IP")
     59#defProperty('clientip', "192.168.4.1","Client interface IP")
     60defProperty('setbandwidth', "100m", "Throughput of Sender")
     61defApplication('iperf') do |app|
     62  app.description = "Iperf is a traffic generator and bandwidth measurement
     63tool. It provides generators producing various forms of packet streams and port
     64for sending these packets via various transports, such as TCP and UDP."
     65  app.binary_path = "/usr/bin/iperf_oml2"
     66
     67  #app.defProperty('interval', 'pause n seconds between periodic bandwidth reports', '-i',
     68   # :type => :double, :unit => "seconds", :default => '1.')
     69  app.defProperty('len', 'set length read/write buffer to n (default 8 KB)', '-l',
     70                  :type => :integer, :unit => "KiBytes")
     71  app.defProperty('print_mss', 'print TCP maximum segment size (MTU - TCP/IP header)', '-m',
     72                  :type => :boolean)
     73  app.defProperty('output', 'output the report or error message to this specified file', '-o',
     74                  :type => :string)
     75  app.defProperty('port', 'set server port to listen on/connect to to n (default 5001)', '-p',
     76                  :type => :integer)
     77  app.defProperty('udp', 'use UDP rather than TCP', '-u',
     78                  :type => :boolean,
     79                  :order => 2)
     80  app.defProperty('window', 'TCP window size (socket buffer size)', '-w',
     81                  :type => :integer, :unit => "Bytes")
     82  app.defProperty('bind', 'bind to <host>, an interface or multicast address', '-B',
     83                  :type => :string)
     84  app.defProperty('compatibility', 'for use with older versions does not sent extra msgs', '-C',
     85                  :type => :boolean)
     86  app.defProperty('mss', 'set TCP maximum segment size (MTU - 40 bytes)', '-M',
     87                  :type => :integer, :unit => "Bytes")
     88  app.defProperty('nodelay', 'set TCP no delay, disabling Nagle\'s Algorithm', '-N',
     89                  :type => :boolean)
     90  app.defProperty('IPv6Version', 'set the domain to IPv6', '-V',
     91                  :type => :boolean)
     92  app.defProperty('reportexclude', 'exclude C(connection) D(data) M(multicast) S(settings) V(server) reports', '-x',
     93                  :type => :string, :unit => "[CDMSV]")
     94  app.defProperty('reportstyle', 'C or c for CSV report, O or o for OML', '-y',
     95                  :type => :string, :unit => "[CcOo]", :default => "o") # Use OML reporting by default
     96
     97  app.defProperty('server', 'run in server mode', '-s',
     98                  :type => :boolean)
     99
     100  app.defProperty('bandwidth', 'set target bandwidth to n bits/sec (default 1 Mbit/sec)', '-b',
     101                  :type => :string, :unit => "Mbps")
     102  app.defProperty('client', 'run in client mode, connecting to <host>', '-c',
     103                  :type => :string,
     104                  :order => 1)
     105  app.defProperty('dualtest', 'do a bidirectional test simultaneously', '-d',
     106                  :type => :boolean)
     107  app.defProperty('num', 'number of bytes to transmit (instead of -t)', '-n',
     108                  :type => :integer, :unit => "Bytes")
     109  app.defProperty('tradeoff', 'do a bidirectional test individually', '-r',
     110                  :type => :boolean)
     111  app.defProperty('time', 'time in seconds to transmit for (default 10 secs)', '-t',
     112                  :type => :integer, :unit => "seconds")
     113  app.defProperty('fileinput', 'input the data to be transmitted from a file', '-F',
     114                  :type => :string)
     115  app.defProperty('stdin', 'input the data to be transmitted from stdin', '-I',
     116                  :type => :boolean)
      117  app.defProperty('listenport', 'port to receive bidirectional tests back on', '-L',
     118                  :type => :integer)
     119  app.defProperty('parallel', 'number of parallel client threads to run', '-P',
     120                  :type => :integer)
     121  app.defProperty('ttl', 'time-to-live, for multicast (default 1)', '-T',
     122                  :type => :integer,
     123                  :default => 1)
     124  app.defProperty('linux_congestion', 'set TCP congestion control algorithm (Linux only)', '-Z',
     125                  :type => :boolean)
     126
     127  app.defMeasurement("application"){ |m|
     128    m.defMetric('pid', :integer)
     129    m.defMetric('version', :string)
     130    m.defMetric('cmdline', :string)
     131    m.defMetric('starttime_s', :integer)
     132    m.defMetric('starttime_us', :integer)
     133  }
     134
     135  app.defMeasurement("settings"){ |m|
     136    m.defMetric('pid', :integer)
     137    m.defMetric('server_mode', :integer)
     138    m.defMetric('bind_address', :string)
     139    m.defMetric('multicast', :integer)
     140    m.defMetric('multicast_ttl', :integer)
     141    m.defMetric('transport_protocol', :integer)
     142    m.defMetric('window_size', :integer)
     143    m.defMetric('buffer_size', :integer)
     144  }
     145
     146  app.defMeasurement("connection"){ |m|
     147    m.defMetric('pid', :integer)
     148    m.defMetric('connection_id', :integer)
     149    m.defMetric('local_address', :string)
     150    m.defMetric('local_port', :integer)
     151    m.defMetric('remote_address', :string)
     152    m.defMetric('remote_port', :integer)
     153  }
     154
     155  app.defMeasurement("transfer"){ |m|
     156    m.defMetric('pid', :integer)
     157    m.defMetric('connection_id', :integer)
     158    m.defMetric('begin_interval', :double)
     159    m.defMetric('end_interval', :double)
     160    m.defMetric('size', :uint64)
     161  }
     162
     163  app.defMeasurement("losses"){ |m|
     164    m.defMetric('pid', :integer)
     165    m.defMetric('connection_id', :integer)
     166    m.defMetric('begin_interval', :double)
     167    m.defMetric('end_interval', :double)
     168    m.defMetric('total_datagrams', :integer)
     169    m.defMetric('lost_datagrams', :integer)
     170  }
     171
     172  app.defMeasurement("jitter"){ |m|
     173    m.defMetric('pid', :integer)
     174    m.defMetric('connection_id', :integer)
     175    m.defMetric('begin_interval', :double)
     176    m.defMetric('end_interval', :double)
     177    m.defMetric('jitter', :double)
     178  }
     179
     180  app.defMeasurement("packets"){ |m|
     181    m.defMetric('pid', :integer)
     182    m.defMetric('connection_id', :integer)
     183    m.defMetric('packet_id', :integer)
     184    m.defMetric('packet_size', :integer)
     185    m.defMetric('packet_time_s', :integer)
     186    m.defMetric('packet_time_us', :integer)
     187    m.defMetric('packet_sent_time_s', :integer)
     188    m.defMetric('packet_sent_time_us', :integer)
     189  }
     190
     191end
     192defApplication('nmetrics') do |app|
     193   app.description = 'Measure nmetrics parameters'
     194   app.binary_path = '/usr/bin/nmetrics-oml2'
     195   app.defProperty('interface', 'interface at which to measure', '-i', {:type => :string})
     196   app.defProperty('cpu', 'cpu usage', '-c', {:type => :boolean})
     197   app.defProperty('memory', 'memory usage', '-m', {:type => :boolean})
     198   app.defMeasurement('memory') do |m|
     199    m.defMetric('ram', :uint64)
     200    m.defMetric('total', :uint64)
     201    m.defMetric('used', :uint64)
     202    m.defMetric('free', :uint64)
     203    m.defMetric('actual_used', :uint64)
     204    m.defMetric('actual_free', :uint64)
     205   end 
     206   app.defMeasurement('cpu') do |m|
     207    m.defMetric('user', :uint64)
     208    m.defMetric('sys', :uint64)
     209    m.defMetric('nice', :uint64)
     210    m.defMetric('idle', :uint64)
     211    m.defMetric('wait', :uint64)
     212    m.defMetric('irq', :uint64)
     213    m.defMetric('soft_irq', :uint64)
     214    m.defMetric('stolen', :uint64)
     215    m.defMetric('total', :uint64)       
     216   end
     217   app.defMeasurement('network') do |m|
     218    m.defMetric('name', :string)
     219    m.defMetric('rx_packets', :uint64)
     220    m.defMetric('rx_bytes', :uint64)
     221    m.defMetric('rx_errors', :uint64)
     222    m.defMetric('rx_dropped', :uint64)
     223    m.defMetric('rx_overruns', :uint64)
     224    m.defMetric('rx_frame', :uint64)
     225    m.defMetric('tx_packets', :uint64)
     226    m.defMetric('tx_bytes', :uint64)
     227    m.defMetric('tx_errors', :uint64)
     228    m.defMetric('tx_dropped', :uint64)
     229    m.defMetric('tx_overruns', :uint64)
     230    m.defMetric('tx_collisions', :uint64)
     231    m.defMetric('tx_carrier', :uint64)
     232    m.defMetric('speed', :uint64)
     233  end
     234end
     235defGroup('Receiver',property.theReceiver) do |node|
     236    node.addApplication("iperf") do |app|
     237        #app.setProperty('interval', property.setinterval)
     238        app.setProperty('server',true)
     239        app.setProperty('port',6001)
     240        app.measure('transfer', :samples => 1)
     241        app.measure('jitter', :samples => 1)
     242        app.measure('losses', :samples => 1)
     243    end
     244  #node.addApplication("nmetrics") do |app|
     245   #     app.setProperty('interface','eth3')
     246    #    app.setProperty('cpu',true)
     247     #   app.setProperty('memory',true)
     248      #  app.measure('cpu', :samples => 1)
     249      #  app.measure('memory', :samples => 1)
     250      #  app.measure('network', :samples => 1)
     251    #end 
     252end
     253
     254defGroup('Sender',property.theSender) do |node|
     255    node.addApplication("iperf") do |app|
     256        #app.setProperty('interval',property.setinterval)
     257        app.setProperty('client',property.serverip)
     258        app.setProperty('tradeoff',true)
     259        app.setProperty('parallel', 20)
     260        app.setProperty('time',30)
     261        app.setProperty('port',6001)
     262        #app.setProperty('bandwidth',property.setbandwidth)
     263        app.measure('transfer', :samples => 1)
     264        app.measure('jitter', :samples => 1)
     265        app.measure('losses', :samples => 1)
     266    end
     267  #node.addApplication("nmetrics") do |app|
     268   #     app.setProperty('interface','eth2')
     269    #    app.setProperty('cpu',true)
     270     #   app.setProperty('memory',true)
     271      #  app.measure('cpu', :samples => 1)
     272       # app.measure('memory', :samples => 1)
     273        #app.measure('network', :samples => 1)
     274    #end
     275end
     276
      277onEvent(:ALL_UP_AND_INSTALLED) do |event|
      278  info "starting"
      279  group('Receiver').startApplications
      280  info "Server application started..."
      281  wait 1
      282  group('Sender').startApplications
      283  info "Client application started..."
      284  wait 100
      285  group('Sender').stopApplications
      286  wait 2
      287  group('Receiver').stopApplications
      288  info "All applications stopped."
      289  Experiment.done
      290end
     291defGraph 'Received bytes' do |g|
      292  g.ms('transfer').select {[ oml_ts_client.as(:ts), :size, :oml_sender_id ]}
     293  g.caption "Packet length measurement."
     294  g.type 'line_chart3'
     295  g.mapping :x_axis => :ts, :y_axis => :size, :group_by => :oml_sender_id
     296  g.xaxis :legend => 'time [s]'
     297  g.yaxis :legend => 'packet size', :ticks => {:format => 's'}
     298end
     299
     300}}}
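The 'Received bytes' graph above plots the raw `size` field of the `transfer` measurement point. Per-interval throughput can be derived by dividing the reported size by the interval length. A minimal sketch, using hypothetical in-memory samples standing in for rows of the OML measurement database:

```ruby
# Each 'transfer' sample reports the bytes moved between begin_interval
# and end_interval, so throughput in bytes/s is size / (end - begin).
# The sample hashes below are illustrative placeholders only.
samples = [
  { begin_interval: 0.0, end_interval: 1.0, size: 125_000 },
  { begin_interval: 1.0, end_interval: 2.0, size: 250_000 },
]

throughputs = samples.map do |s|
  s[:size] / (s[:end_interval] - s[:begin_interval])
end
# throughputs == [125000.0, 250000.0]  (bytes per second)
```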
     301
     302Add the script below to the new file, advOF-script.rb and click on the save icon at the top of the column.
     303{{{
     304
     305
     306defApplication('oml:app:trace', 'trace') do |app|
     307
     308  app.version(2, 10, 0)
     309  app.shortDescription = 'Packet capture'
     310  app.description = %{'trace' uses libtrace to capture packets matching the
      311BPF filter, and reports various header (IP, TCP, UDP, Radiotap, ...) fields through
     312OML.
     313
      314Note: the pktid field in all MPs can be used to link information about the
     315multiple protocols encapsulated in the same packet, even in cases where multiple
     316packets have been received at the same time, which renders the timestamp field
     317useless as an identifier.
     318  }
     319  app.path = "/usr/bin/trace"
     320
     321  app.defProperty('filter', 'Filter expression BPFEXP', '-f',
     322                  :type => :string, :mnemonic => 'f')
      323  app.defProperty('snaplen', 'Capture this many bytes of data from each packet', '-s',
      324                  :type => :int, :unit => 'Bytes', :mnemonic => 's')
      325  app.defProperty('promisc', 'Put the interface into promiscuous mode', '-p',
      326                  :type => 'boolean', :mnemonic => 'p')
     327  app.defProperty('interface', 'Interface to trace', '-i',
     328                  :type => :string, :mnemonic => 'i', :default => '"eth0"')
     329  app.defProperty('radiotap', 'Enable radiotap', '-r',
     330                  :type => 'boolean', :mnemonic => 'r')
     331  app.defProperty('config', 'config file to follow', '--oml-config',
     332                  :type => :string, :default => '"/tmp/monitor/conf.xml"')
     333
     334
     335  app.defMeasurement("ip") do |m|
     336    m.defMetric('pktid',    :int, ' internal packet ID to link MPs')
     337    m.defMetric('ip_tos',   :int, ' Type of Service')
     338    m.defMetric('ip_len',   :int, ' Total Length')
     339    m.defMetric('ip_id',    :int,  ' Identification')
     340    m.defMetric('ip_off',   :int, ' IP Fragment offset (and flags)')
     341    m.defMetric('ip_ttl',   :int, ' Time to Live')
     342    m.defMetric('ip_proto', :int, ' Protocol')
     343    m.defMetric('ip_sum',   :int, ' Checksum')
     344    m.defMetric('ip_src',   :string, ' Source Address')
     345    m.defMetric('ip_dst',   :string, ' Destination Address')
     346    m.defMetric('ip_sizeofpacket', :int, ' Size of the Packet')
     347    m.defMetric('ip_ts',    :float, ' timestamp of the measurement')
     348  end
     349 
     350  app.defMeasurement("tcp") do |m|
     351    m.defMetric('pktid',        :int, ' internal packet ID to link MPs')
     352    m.defMetric('tcp_source',   :int, ' Source Port')
     353    m.defMetric('tcp_dest',     :int, ' Destination Port')
     354    m.defMetric('tcp_seq',      :int, ' TCP sequence Number')
     355    m.defMetric('tcp_ack_seq',  :int, ' Acknowledgment Number')
     356    m.defMetric('tcp_window',   :int, ' Window Size')
     357    m.defMetric('tcp_checksum', :int, ' Checksum')
     358    m.defMetric('tcp_urgptr',   :int, ' Urgent Pointer')
     359    m.defMetric('tcp_packet_size', :int, ' Size of the Packet')
     360    m.defMetric('tcp_ts',       :float, ' timestamp of the measurement')
     361  end
     362 
     363end
     364
     365
     366
     367defApplication('nmetrics_app', 'nmetrics') do |app|
     368
     369  app.version(2, 10, 0)
      370  app.shortDescription = 'Monitoring node statistics'
      371  app.description = %{'nmetrics' monitors various node-specific statistics,
      372such as CPU, memory and network usage, and reports them through OML.
     373  }
     374  app.path = "/usr/bin/nmetrics"
     375
     376  app.defProperty('interface', 'Report usage for the specified network interface (can be used multiple times)', '-i',
     377          :type => :string, :mnemonic => 'i',
     378          :default => '"eth0"', :var_name => 'if_name')
     379  app.defProperty('sample-interval', 'Time between consecutive measurements', '-s',
     380          :type => :int, :unit => 'seconds', :mnemonic => 's',
     381          :var_name => 'sample_interval')
     382
     383  app.defMeasurement("network") do |m|
     384    m.defMetric('name', :string)
     385    m.defMetric('rx_packets', :int)
     386    m.defMetric('rx_bytes', :int)
     387    m.defMetric('rx_errors', :int)
     388    m.defMetric('rx_dropped', :int)
     389    m.defMetric('rx_overruns', :int)
     390    m.defMetric('rx_frame', :int)
     391    m.defMetric('tx_packets', :int)
     392    m.defMetric('tx_bytes', :int)
     393    m.defMetric('tx_errors', :int)
     394    m.defMetric('tx_dropped', :int)
     395    m.defMetric('tx_overruns', :int)
     396    m.defMetric('tx_collisions', :int)
     397    m.defMetric('tx_carrier', :int)
     398    m.defMetric('speed', :int)
     399  end
     400
     401  app.defMeasurement("procs") do |m|
     402    m.defMetric('cpu_id', :int)
     403    m.defMetric('total', :int)
     404    m.defMetric('sleeping', :int)
     405    m.defMetric('running', :int)
     406    m.defMetric('zombie', :int)
     407    m.defMetric('stopped', :int)
     408    m.defMetric('idle', :int)
     409    m.defMetric('threads', :int)
     410  end
     411
     412  app.defMeasurement("proc") do |m|
     413    m.defMetric('pid', :int)
     414    m.defMetric('start_time', :int)
     415    m.defMetric('user', :int)
     416    m.defMetric('sys', :int)
     417    m.defMetric('total', :int)
     418  end
     419end
     420#COMMENT
     421defPrototype("system_monitor") do |p|
     422  p.name = "System Monitor"
     423  p.description = "A monitor that reports stats on the system's resource usage"
     424  p.defProperty('monitor_interface', 'Monitor the interface usage', 'eth1')
     425  p.defProperty('sample-interval', 'sample-interval', '1')
     426
     427  p.addApplication("nmetrics_app") do |a|
     428    a.bindProperty('interface', 'monitor_interface')
     429    a.bindProperty('sample-interval', 'sample-interval')
     430    a.measure('network', :samples => 1)
     431  end
     432end
     433
     434
     435
     436
     437
     438###### Change the following to the correct interfaces ######
     439left = 'eth1'
     440right = 'eth2'
     441###### Change the above to the correct interfaces ######
     442
     443##definition of Sender, Receiver and Monitors
     444defProperty('source','outside','ID of sender node')
     445defProperty('sink', 'inside', "ID of receiver node")
      446defProperty('balancer', 'switch', "ID of the load balancer")
     447defProperty('graph', true, "Display graph or not")
     448
     449defGroup('Sender', property.source) do |node|
     450end
     451
     452defGroup('Receiver', property.sink) do |node|
     453end
     454#measure the total bytes sent out and the total throughput on left path
     455defGroup('Monitor', property.balancer) do |node|
     456  node.addApplication("oml:app:trace") do |app|
     457    app.setProperty("interface", left)
     458    app.setProperty("config", "/tmp/monitor/conf.xml")
     459    app.measure("tcp", :samples => 1)
     460  end
     461  options = { 'sample-interval' => 1, 'monitor_interface' => left }
     462  node.addPrototype("system_monitor", options)
     463end
     464#measure the total bytes sent out and the total throughput on right path
     465defGroup('Monitor1', property.balancer) do |node|
     466  node.addApplication("oml:app:trace") do |app|
     467    app.setProperty("interface", right)
     468    app.setProperty("config", "/tmp/monitor/conf.xml")
     469    app.measure("tcp", :samples => 1)
     470  end
     471  options = { 'sample-interval' => 1, 'monitor_interface' => right }
     472  node.addPrototype("system_monitor", options)
     473end
     474
     475
     476
     477##experiment starts here##
     478onEvent(:ALL_UP_AND_INSTALLED) do |event|
     479  info "starting the monitor"
     480  group('Monitor').startApplications
     481  group('Monitor1').startApplications
     482 
     483  info "Starting the Receiver"
     484  group('Receiver').exec("iperf -s")
     485  wait 2
     486
     487  #--------------------------------------------------------------------
     488  #info "***********starting Load Balancer************"
     489  #group('Monitor').exec("/opt/trema-trema-f995284/trema run /root/load-balancer.rb > /tmp/lb.tmp")
     490  #wait 2
     491  #info "connecting switch to load balancer controller"
     492  #group('Monitor').exec("ovs-vsctl set-controller br0 tcp:127.0.0.1 ptcp:6634:127.0.0.1")
     493  #wait 5
     494 
     495  $i = 1
     496  $exp_time = 100
     497  $interval = 5
     498  $total = $exp_time / $interval
     499  while $i <= $total do
     500    info "Starting Sender " + $i.to_s
     501    group('Sender').exec("iperf -c 10.10.10.2 -t "+ ($exp_time-$i*$interval).to_s)
     502    wait $interval
     503    $i += 1
     504  end
     505   
     506  info "All applications started..."
     507    wait $interval
     508
      509  #info "************stopping Load Balancer***************"
     510  #group('Monitor').exec("killall ruby")
     511  #group('Monitor').exec("rm /opt/trema-trema-f995284/tmp/pid/LoadBalancer.pid")
     512  #info "remove switch-to-controller connection"
     513  #group('Monitor').exec("ovs-vsctl del-controller br0")
     514   
     515  info "All applications stopped."   
     516  allGroups.stopApplications
     517  Experiment.done
     518end
     519
     520 
     521##define the graphs that we want to display##
     522defGraph 'Cumulated number of Bytes' do |g|
     523  g.ms('network').select(:oml_ts_server, :tx_bytes, :oml_sender_id)
     524  g.caption "Total Bytes"
     525  g.type 'line_chart3'
     526  g.mapping :x_axis => :oml_ts_server, :y_axis => :tx_bytes, :group_by => :oml_sender_id
     527  g.xaxis :legend => 'timestamp', :ticks => {:format => 's'}
     528  g.yaxis :legend => 'sent Bytes', :ticks => {:format => 'Byte'}
     529end
     530 
     531defGraph 'TCP Throughput Bytes-per-Second' do |g|
     532  g.ms('tcp').select(:oml_ts_server, :tcp_throughput_sum, :oml_sender_id)
     533  g.caption "TCP throughput"
     534  g.type 'line_chart3'
     535  g.mapping :x_axis => :oml_ts_server, :y_axis => :tcp_throughput_sum, :group_by => :oml_sender_id
     536  g.xaxis :legend => 'timestamp', :ticks => {:format => 's'}
     537  g.yaxis :legend => 'throughput', :ticks => {:format => 'Bytes/s'}
     538end 
     539
     540}}}
     541   - '''2.2.2''' On the terminal where you are logged in on node "Switch", rerun "ifconfig" to see the IP addresses on each interface.
      542    [[BR]][[Image(GENIExperimenter/Tutorials/Graphics:warning-icon-hi.png,5%)]] Not all interfaces may be up immediately after node "Switch" is ready; wait a little longer (about a minute), then try "ifconfig" again.
      543   - '''2.2.3''' Identify the two interfaces that you want to monitor: the interfaces with IP addresses 192.168.2.1 (left) and 192.168.3.1 (right), respectively. On the !LabWiki page, in your Ruby script, find the following lines:
     544{{{
     545###### Change the following to the correct interfaces ######
     546left = 'eth1'
     547right = 'eth3'
     548###### Change the above to the correct interfaces ######
     549}}}
     550   - '''2.2.4''' Change eth1 and eth3 to the corresponding two interfaces you found with IP addresses 192.168.2.1 (the interface that connects to the left path) and 192.168.3.1 (the interface that connects to the right path) and press the "save" icon on your !LabWiki page.
     551
     552== 3. Run your experiment ==
     553=== 3.1 Start your experiment with existing configuration ===
      554   - '''3.1.1''' Drag the `file Icon` at the top-left corner of your !LabWiki page from the `Prepare` column and drop it into the `Execute` column. Fill in a name for your !LabWiki experiment (anything without spaces; it only helps you track the experiments you run), select your project from the drop-down list, select your slice from the list, and type "true" in the graph box to enable graphs. You can also create an experiment context if you wish to save each run of the experiment separately (click "Add Context" in the top-right corner of the page). Then press the "Start Experiment" button.
      555   - '''3.1.2''' When your experiment is finished, turn off your controller and disconnect the switch from it:
      556      - On node "Switch", press Ctrl+C to kill your Load Balancer process
     557      - On node "Switch", use the following command to disconnect the OpenFlow Switch from the controller:
     558     {{{
     559     ovs-vsctl del-controller br0
     560     }}}
     561   [[BR]][[Image(GENIExperimenter/Tutorials/Graphics:warning-icon-hi.png,5%)]]  Do not start another experiment (i.e., drag and drop the file icon in !LabWiki and press "Start Experiment") before your current experiment is finished.
     562   
     563=== 3.2 Run the experiment in paths with different bandwidth ===
     564 - '''3.2.1''' Log on to node "left" (use the `readyToLogin.py` script) and change the link capacity for the interface with IP address "192.168.2.2" (use "ifconfig" to find the correct interface, here we assume eth1 is the interface connecting to node "Switch"):
     565{{{
     566ovs-vsctl set Interface eth1 ingress_policing_rate=10000
     567}}}
      568 The above rate-limits the connection from node "Switch" to node "left" to 10 Mbps (`ingress_policing_rate` is specified in kbps).
      569 - Other ways to shape the link, e.g., changing link delay and loss rate with "tc qdisc netem", can be found in Appendix D.
     570
     571 - '''3.2.2''' On node "Switch", start your Load Balancer using the following command:
     572 {{{
     573 /opt/trema-trema-f995284/trema run /root/load-balancer.rb
     574 }}}
      575 - '''3.2.3''' Open a new terminal, log on to node "Switch", and use the following command to connect the OpenFlow Switch to the controller (the console window running your controller should display "Switch is Ready!" when the switch connects):
     576 {{{
     577 ovs-vsctl set-controller br0 tcp:127.0.0.1 ptcp:6634:127.0.0.1
     578 }}}
     579 - '''3.2.4''' Go back to your !LabWiki web page, drag and drop the `file icon` and repeat the experiment, as described in section 3.1, using a different experiment name (the slice name should stay the same).
      580 - '''3.2.5''' When your experiment is finished, turn off your controller and disconnect the switch from it:
      581      - On node "Switch", press Ctrl+C to kill your Load Balancer process
     582      - On node "Switch", use the following command to disconnect the OpenFlow Switch from the controller:
     583     {{{
     584     ovs-vsctl del-controller br0
     585     }}}
     586
     587==== Questions ====
      588 - Did you see any difference in the graphs plotted on !LabWiki compared with the graphs from the first experiment? Why?
      589 - Check the output of the Load Balancer on node "Switch": how many flows are directed to the left path and how many to the right? Why?
     590 - To answer the above question, you need to understand the Load Balancing controller. Check out the "load-balancer.rb" file in your home directory on node "Switch". Check [#A.AbouttheOpenFlowcontrollerload-balancer.rb Appendix A] for hints/explanations about this OpenFlow Controller.
     591
     592=== 3.3 Modify the OpenFlow Controller to balance throughput among all the TCP flows ===
      593 - You need to calculate the average per-flow throughput observed on both the left and right paths. The modifications need to happen in the function "stats_reply" in load-balancer.rb.
      594 - In the function "decide_path", change the path decision to be based on the calculated average per-flow throughput: forward new flows onto the path with the higher average per-flow throughput. (Why? TCP tries to consume all available bandwidth, so higher throughput means the path is less congested.)
     595 - If you do not know where to start, check the hints in Section 3.1.
     596  - If you really do not know where to start after reading the hints, the answer can be found on node "Switch", at /tmp/load-balancer/load-balancer-solution.rb
     597  - Copy the above solution into your home directory then re-do the experiment on !LabWiki.
     598  [[BR]][[Image(GENIExperimenter/Tutorials/Graphics:4NotesIcon_512x512.png,5%)]] You need to change your script to use the correct Load Balancing controller (e.g., if your controller is "load-balancer-solution.rb", you should run "/opt/trema-trema-f995284/trema run /root/load-balancer-solution.rb")
     599 - Rerun the experiment with your new OpenFlow Controller following the steps in Section 2.5, then compare the graphs plotted on !LabWiki, as well as the controller's log on node "Switch", with the earlier results.
     600 - When your experiment is done, you need to stop the Load Balancer:
     601  - On node "Switch", use the following command to disconnect the OpenFlow Switch from the controller:
     602  {{{
     603  ovs-vsctl del-controller br0
     604  }}}
     605  - On node "Switch", press Ctrl+C to kill your Load Balancer process
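 The decision logic that Section 3.3 asks for can be sketched in plain Ruby (a hypothetical sketch only: the function name matches the tutorial's "decide_path", but the `stats` hash layout is an assumption for illustration; the real controller works on Trema flow-stats objects):

```ruby
# Hypothetical sketch of a throughput-balancing decide_path.
# `stats` is assumed to map each path to its observed totals:
#   { :left  => { :throughput => <bytes/sec>, :flows => <count> },
#     :right => { :throughput => <bytes/sec>, :flows => <count> } }
def decide_path(stats)
  avg = {}
  [:left, :right].each do |path|
    s = stats[path]
    # A path with no flows yet is maximally attractive; this also
    # avoids dividing by zero.
    avg[path] = s[:flows].zero? ? Float::INFINITY : s[:throughput].to_f / s[:flows]
  end
  # Higher average per-flow throughput suggests less congestion,
  # because TCP expands to fill the available bandwidth.
  avg[:left] >= avg[:right] ? :left : :right
end
```

 In the real controller, "stats_reply" would populate such per-path totals before "decide_path" is consulted for a new flow.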
     606
     607=== 3.4 Automate your experiment using !LabWiki ===
     608 - '''3.4.1''' Add code in your !LabWiki script to automate starting and stopping your OpenFlow Controller:
     609  - '''3.4.1.1''' Go back to your !LabWiki page and uncomment lines 184 to 189 of the script to start your OpenFlow Controller automatically from !LabWiki
     610   [[BR]][[Image(GENIExperimenter/Tutorials/Graphics:4NotesIcon_512x512.png,5%)]] You might need to change line 185 to use the correct load-balancing controller
     611  - '''3.4.1.2''' In your script, uncomment lines 205 to 209 to stop your OpenFlow Controller automatically from !LabWiki
     612 - '''3.4.2''' On your !LabWiki web page, drag and drop the `file icon` and repeat the experiment, as described in section 3.1, using a different experiment name (the slice name should stay the same).
     613 - If you have more time or are interested in exploring further, go ahead and try Section 3.5. The tutorial proper ends here; feel free to ask questions :-)
     614
     615=== 3.5 (Optional) Try different kinds of OpenFlow Load Balancers ===
     616 - You can find more load balancers under /tmp/load-balancer/ on node "Switch"
     617 - To try out any one of them, follow the steps:
     618  - In the home directory on node "Switch", copy in the load balancer you want to try out, e.g.,
     619  {{{
     620  cp /tmp/load-balancer/load-balancer-random.rb /root/
     621  }}}
     622  - Change your !LabWiki code at line 185 to use the correct OpenFlow controller.
     623  - On !LabWiki, drag and drop the "File" icon and re-do the experiment as described in section 3.1
     624 - Some explanations about the different load balancers:
     625  - "load-balancer-random.rb" picks a path '''randomly''': each path has a 50% chance of being picked
     626  - "load-balancer-roundrobin.rb" picks paths in a '''round robin''' fashion: the right path is picked first, then the left path, and so on
     627  - Load balancers whose names begin with "load-balancer-bytes" pick a path based on the total number of bytes sent out on each path: the one with '''fewer bytes''' sent out is picked
     628   - "load-balancer-bytes-thread.rb" sends out a flow stats request in function "packet_in" upon the arrival of a new TCP flow and waits until the flow stats reply is received in function "stats_reply" before a decision is made. As a result, this balancer uses '''the most up-to-date flow stats''' to make a decision. However, it must wait at least the round-trip time from the controller to the switch (for the flow stats reply) before a decision can be made.
     629   - "load-balancer-bytes-auto-thread.rb" sends out a flow stats request once every 5 seconds in a separate thread, and makes path decisions based on the most recently received flow stats reply. As a result, this balancer makes path decisions based on somewhat '''old statistics (up to 5 seconds)''' but reacts quickly upon the arrival of a new TCP flow (i.e., it does not need to wait for a flow stats reply)
     630  - Load balancers whose names begin with "load-balancer-flows" pick a path based on the total number of flows sent out on each path: the one with '''fewer flows''' is picked
     631  - Load balancers whose names begin with "load-balancer-throughput" pick a path based on the total throughput sent out on each path: the one with '''more throughput''' is picked
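  The two simplest policies above can be sketched in plain Ruby (illustrative only; the actual files' code and names may differ):

```ruby
# Hypothetical sketch of load-balancer-random.rb's policy:
# each path has a 50% chance of being picked.
def pick_random
  rand(2).zero? ? :left : :right
end

# Hypothetical sketch of load-balancer-roundrobin.rb's policy:
# alternate between the two paths, right path first.
class RoundRobin
  def initialize
    @next = :right
  end

  def pick
    choice = @next
    @next = (choice == :right ? :left : :right)
    choice
  end
end
```

  The byte-, flow-, and throughput-based variants would replace these with a comparison of the per-path counters gathered from flow stats replies.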
     632
     633= Appendix: Hints and Explanations =
     634== A. About the OpenFlow controller [http://www.gpolab.bbn.com/experiment-support/OpenFlowExampleExperiment/ExoGENI/load-balancer.rb load-balancer.rb] ==
     635  - Trema web site: http://trema.github.io/trema/
     636  - Trema Ruby API documentation: http://rubydoc.info/github/trema/trema/master/frames
     637  - '''Functions used in our tutorial:'''
     638    - '''start()''': the function called when the OpenFlow Controller is started. In our case, it reads the file /tmp/portmap and figures out which OpenFlow port points to which path
     639    - '''switch_ready()''': the function called each time a switch connects to the OpenFlow Controller. In our case, we allow all non-TCP flows to pass (including ARP and ICMP packets) and send new inbound TCP flows to the controller. We also start a "timer" function that calls "query_stats()" once every 2 seconds.
     640    - '''query_stats()''': the function that sends out a flow_stats_request to get the current statistics about each flow.
     641    - '''packet_in()''': the function called each time a packet arrives at the controller. In our case, we call "decide_path()" to get a path decision, then send a flow entry back to the OpenFlow Switch instructing it which path to take for this new TCP flow.
     642    - '''stats_reply()''': the function called when the OpenFlow Controller receives a flow_stats_reply message from the OpenFlow Switch. In our case, we update the flow statistics so that "decide_path()" can make the right decision.
     643    - '''send_flow_mod_add()''': the function you should use to add a flow entry to an OpenFlow Switch.
     644    - '''decide_path()''': the function that makes path decisions. It returns a path choice based on the flow statistics.
     645  - '''The Whole Process: '''
     646    - When the OpenFlow switch is ready, our controller starts a function that asks for flow stats once every 2 seconds.
     647    - The OpenFlow switch will reply with statistics information about all flows in its flow table.
     648    - This flow statistics message will be fetched by the "stats_reply" function in the OpenFlow controller implemented by the user on node "Switch".
     649    - As a result, our controller updates its knowledge of both the left and right paths once every 2 seconds.
     650    - Upon the arrival of a new TCP flow, the OpenFlow controller decides which path to send the new flow to, based on the updated flow statistics.
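  That bookkeeping loop can be mimicked in plain Ruby, without Trema, to show what the controller accumulates between stats replies (the class and field names here are hypothetical; the real "stats_reply" receives Trema !FlowStatsReply objects):

```ruby
# Hypothetical mock of the per-path bookkeeping done in stats_reply.
class PathStats
  attr_reader :bytes, :flows

  def initialize
    @bytes = Hash.new(0)   # path => total byte_count reported for that path
    @flows = Hash.new(0)   # path => number of flow entries seen on that path
  end

  # Called once per flow entry in each flow_stats_reply (every ~2 seconds),
  # after the controller maps the entry's output port to a path.
  def record(path, byte_count)
    @bytes[path] += byte_count
    @flows[path] += 1
  end
end
```

  A "decide_path"-style function would then compare the two paths' accumulated counters when a new TCP flow arrives.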
     651
     652  The !FlowStatsReply message is in the following format:
     653{{{
     654FlowStatsReply.new(
     655  :length => 96,
     656  :table_id => 0,
     657  :match => Match.new,
     658  :duration_sec => 10,
     659  :duration_nsec => 106000000,
     660  :priority => 0,
     661  :idle_timeout => 0,
     662  :hard_timeout => 0,
     663  :cookie => 0xabcd,
     664  :packet_count => 1,
     665  :byte_count => 1,
     666  :actions => [ ActionOutput.new ]
     667)
     668}}}
     669
     670== B. About The Rspec file [http://www.gpolab.bbn.com/experiment-support/OpenFlowExampleExperiment/openflow-loadbalancer-kvm.rspec OpenFlowLBExo.rspec] ==
     671  - The Rspec file describes the topology we showed earlier--each node is assigned a certain number of interfaces with pre-defined IP addresses
     672  - Some of the nodes are loaded with software and post-boot scripts. We will take node "Switch" as an example, since it is the most complicated one.
     673   - The following section in the Rspec file for node "Switch":
     674   {{{
     675     <install url="http://www.gpolab.bbn.com/experiment-support/OpenFlowExampleExperiment/software/of-switch-exo.tar.gz"
     676                         install_path="/"/>
     677   }}}
     678   means it is going to download the tarball from the specified URL and extract it to the directory "/"
     679   - The following section in the Rspec file for node "Switch":
     680   {{{
     681     <execute shell="bash" command="/tmp/postboot_script_exo.sh $sliceName $self.Name() ;
     682                           /tmp/of-topo-setup/lb-setup"/>
     683   }}}
     684   names the post-boot script that ExoGENI is going to run for you after the nodes are booted. 
     685  - More information about "/tmp/postboot_script_exo.sh":
     686   It is a "hook" into the !LabWiki interface. Experimenters run it so that !LabWiki knows the name of the slice and the hostname of the particular node that the OML/OMF toolkits are running on.
     687  - More information about "/tmp/of-topo-setup/lb-setup":
     688   "lb-setup" sets up the load-balancing switch. The source code, with explanations, is as follows:
     689   {{{
     690   #!/bin/sh
     691
     692   /tmp/of-topo-setup/prep-trema       # install all libraries for trema
     693   /tmp/of-topo-setup/ovs-start           # create ovs bridge
     694
     695   cp /usr/bin/trace-oml2 /usr/bin/trace        # a hack to the current LabWiki --> needs to be fixed
     696   cp /usr/bin/nmetrics-oml2 /usr/bin/nmetrics       # a hack to the current LabWiki --> needs to be fixed
     697   # download the load balancing openflow controller source code to user directory
     698   wget http://www.gpolab.bbn.com/experiment-support/OpenFlowExampleExperiment/ExoGENI/load-balancer.rb -O /root/load-balancer.rb
     699
     700   INTERFACES="192.168.1.1 192.168.2.1 192.168.3.1"
     701
     702   # wait until all interfaces are up, then fetch the mapping from interface name to its ip/MAC address and save this info in a file /tmp/ifmap
     703   /tmp/of-topo-setup/writeifmap3
     704
     705   # add port to the ovs bridge
     706   /tmp/of-topo-setup/find-interfaces $INTERFACES | while read iface; do
     707       ovs-vsctl add-port br0 $iface < /dev/null
     708   done
     709
     710   # create the port map and save it to /tmp/portmap
     711   ovs-ofctl show tcp:127.0.0.1:6634 \
     712       | /tmp/of-topo-setup/ovs-id-ports 192.168.1.1=outside 192.168.2.1=left 192.168.3.1=right \
     713       > /tmp/portmap
     714   }}}
     715
     716== C. About the GIMI script you run on !LabWiki ==
     717 - Line 1 to Line 128: the definitions of the OML trace and OML nmetrics libraries. They define the command-line options for oml2-trace and oml2-nmetrics, as well as the output (the monitoring data that is going to be stored in the OML server)
     718  - users are not supposed to modify them
     719  - the definitions used here are not the same as those provided by the latest OML2 2.10.0 library, because there is a version mismatch between the OMF that !LabWiki is using and the OML2 toolkit that we are using. It is a temporary hack for now --> to be fixed
     720  - we added the definition of the option "--oml-config" for the trace app (Lines 27-28) so that oml2-trace accepts configuration files:
     721  {{{
     722  app.defProperty('config', 'config file to follow', '--oml-config',
     723                  :type => :string, :default => '"/tmp/monitor/conf.xml"')
     724  }}}
     725 - Line 134 to Line 137: the user defines the monitoring interfaces here. In our case, we want to monitor the interfaces on node "Switch" that connect to the left path (with IP 192.168.2.1) and to the right path (with IP 192.168.3.1)
     726 - Line 139 to Line 169: defines which monitoring apps the user wants to run on which node, as well as the "display graph" options.
     727  - group "Monitor" monitors the left path statistics using nmetrics and trace.
     728  - group "Monitor1" monitors the right path statistics using nmetrics and trace.
     729  - To monitor throughput, we used oml2-trace with the "--oml-config" option pointing at the configuration file we created at /tmp/monitor/conf.xml, which simply sums the tcp_packet_size values (in bytes) over each second and saves the result to the OML Server (in a PostgreSQL database):
     730  {{{
     731<omlc id="switch" encoding="binary">
     732  <collect url="tcp:emmy9.casa.umass.edu:3004" name="traffic">
     733    <stream mp="tcp" interval="1">
     734      <filter field="tcp_packet_size" operation="sum" rename="tcp_throughput" />
     735    </stream>
     736  </collect>
     737</omlc>
     738  }}}
     739  - More information about nmetrics and trace can be found here: http://oml.mytestbed.net/projects/omlapp/wiki/OML-instrumented_Applications#Packet-tracer-trace-oml2
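  For intuition, the per-second summation that the filter above performs can be sketched in plain Ruby (illustrative only; OML does this inside the measurement library, not in user code, and the function name here is invented):

```ruby
# Hypothetical sketch of a sum filter over 1-second intervals.
# `samples` is a list of [timestamp_seconds, tcp_packet_size] pairs;
# the result maps each whole second to the total bytes seen in it,
# which is what gets plotted as per-second throughput.
def per_second_throughput(samples)
  samples.each_with_object(Hash.new(0)) do |(ts, size), sums|
    sums[ts.floor] += size
  end
end
```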
     740 - Line 173 to Line 218: defines the experiment:
     741  - Line 175-177: starts the monitoring app
     742  - Line 179-181: starts the TCP receiver (using iperf)
     743  - Line 183-189: starts the load balancer and connects ovs switch to the load balancer (controller)
     744  - Line 191-200: starts 20 TCP flows, with a 5-second interval between the start of each flow
     745  - Line 205-209: stops the load balancer controller, disconnects the ovs switch from the controller, and finishes the experiment
     746 - Line 217 to Line 234: defines the two graphs we want to plot:
     747  - The first graph uses the monitoring data from oml2-nmetrics to display the cumulative number of bytes observed on each of the interfaces;
     748  - The second graph uses the monitoring results from oml2-trace to display the throughput observed from each of the interfaces.
     749
     750= D. Tips: Debugging an OpenFlow Controller =
     751You will find it helpful to know what is going on inside your OpenFlow controller and its associated switch when implementing these exercises. [[BR]]
     752This section contains a few tips that may help you out if you are using the Open vSwitch implementation provided with this tutorial.
     753If you are using a hardware OpenFlow switch, your instructor can help you find equivalent commands. [[BR]]
     754The Open vSwitch installation provided by the RSpec included in this tutorial is located in ''/opt/openvswitch-1.6.1-F15''. You will find Open vSwitch commands in ''/opt/openvswitch-1.6.1-F15/bin'' and ''/opt/openvswitch-1.6.1-F15/sbin''. Some of these commands may be helpful to you. If you add these paths to your shell’s ''$PATH'', you will be able to access their manual pages with man. Note that ''$PATH'' will not affect sudo, so you will still have to provide the absolute path to sudo; the absolute path is omitted from the following examples for clarity and formatting.
     755
     756 - '''ovs-vsctl'''[[BR]]
     757 Open vSwitch switches are primarily configured using the ''ovs-vsctl'' command. For exploring, you may find the ''ovs-vsctl show'' command useful, as it dumps the status of all virtual switches on the local Open vSwitch instance. Once you have some information on the local switch configurations, ''ovs-vsctl'' provides a broad range of capabilities that you will likely find useful for expanding your network setup to more complex configurations for testing and verification. In particular, the subcommands ''add-br'', ''add-port'', and ''set-controller'' may be of interest.
     758 - '''ovs-ofctl''' [[BR]]
     759 The switch host configured by the given rspec listens for incoming OpenFlow connections on localhost port 6634.
     760 You can use this to query the switch state using the ''ovs-ofctl'' command. In particular, you may find the ''dump-tables'' and ''dump-flows'' subcommands useful. For example, ''sudo ovs-ofctl dump-flows tcp:127.0.0.1:6634'' will output lines that look like this:
     761 {{{
     762cookie=0x4, duration=6112.717s, table=0, n_packets=1, n_bytes=74, idle_age=78, priority=5,tcp,
     763nw_src=10.10.10.0/24 actions=CONTROLLER:65535
     764 }}}
     765 This indicates that any TCP segment with a source IP in the 10.10.10.0/24 subnet should be sent to the OpenFlow controller for processing, that it has been 78 seconds since such a segment was last seen, that one such segment has been seen so far, and that the total number of bytes in packets matching this rule is 74. The other fields are perhaps interesting, but you will probably not need them for debugging. (Unless, of course, you choose to use multiple tables, an exercise in OpenFlow 1.1 functionality left to the reader.)
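 If you want to inspect such dump-flows output programmatically, a small helper can split a line into its key=value fields (a hypothetical convenience, not part of the tutorial's tools; fields without an "=" such as the bare protocol name are simply skipped):

```ruby
# Hypothetical helper: turn one `ovs-ofctl dump-flows` line into a hash
# of key=value fields so counters like n_packets are easy to read.
def parse_flow_line(line)
  line.split(/,\s*/).each_with_object({}) do |field, h|
    key, value = field.split('=', 2)
    h[key] = value if value   # skip bare tokens like "tcp"
  end
end
```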
     766 - '''Unix utilities'''[[BR]]
     767 You will want to use a variety of Unix utilities, in addition to the tools listed in [http://groups.geni.net/geni/wiki/GENIEducation/SampleAssignments/OpenFlowAssignment/ExerciseLayout ExerciseLayout], to test your controllers. The standard ping and ''/usr/sbin/arping'' tools are useful for debugging connectivity (but make sure your controller passes ''ICMP ECHO REQUEST'' and ''REPLY'' packets and ''ARP'' traffic, respectively!), and the command ''netstat -an'' will show all active network connections on a Unix host; the TCP connections of interest in this exercise will be at the top of the listing. The format of netstat output is out of the scope of this tutorial, but information is available online and in the manual pages.
     768 - '''Linux netem''' [[BR]]
     769 Use the ''tc'' command to enable and configure delay and loss-rate constraints on the outgoing interfaces for traffic traveling from the OpenFlow switch to the Aggregator node. To configure a path with a 20 ms delay and 10% loss rate on eth2, you would issue the command:
     770{{{
     771sudo tc qdisc add dev eth2 root handle 1:0 netem delay 20ms loss 10%
     772}}}
     773 Use the "tc qdisc change" command to reconfigure existing links, instead of "tc qdisc add". [[BR]]
     774
     775
     776= [wiki:GENIEducation/SampleAssignments/WinterCamp14/GIMITutorial/Procedure Introduction] =
     777= [wiki:GENIEducation/SampleAssignments/WinterCamp14/GIMITutorial/Procedure/Finish Next: Teardown Experiment] =