[[PageOutline]]

= Hints about using GENI Infrastructure as well as other tools in the GENI system =

== GPO Wiki ==
 - To find people: http://groups.geni.net/syseng/wiki/ContactInfo#CurrentGPOpeople
 - New Experiment Tutorial Template: http://groups.geni.net/geni/wiki/GENIEducation/SampleAssignments/Template/ExerciseLayout

== All about !LabWiki ==
Server-side log files: [[BR]]
OMF log: /tmp/[experiment-name].log [[BR]]
OML log: /var/log/oml2-server.log [[BR]]
Client-side log file: [[BR]]
/var/log/omf-resctl.log

== Traffic Control ==
{{{
tc qdisc add dev eth2 root handle 1:0 netem delay 100ms loss 5%
tc qdisc add dev eth2 parent 1:0 tbf rate 20mbit buffer 20000 limit 16000
}}}
tc qdisc with tbf does not seem to work well (in terms of controlling the throughput of TCP flows) on Open vSwitch when the switch is connected to a controller. [[BR]]
Instead, we use ovs-vsctl ingress policing:
{{{
ovs-vsctl set Interface eth2 ingress_policing_rate=1000
ovs-vsctl set Interface eth2 ingress_policing_burst=100
}}}
The above limits the ingress rate to roughly 900-1100 kbps.
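OVS takes ingress_policing_rate in kbit/s and ingress_policing_burst in kbit. As a convenience, the two commands for a given target rate can be generated programmatically; the following is a minimal sketch (the function name is ours, and the burst of roughly 10% of the rate is a common starting point, not an OVS requirement):

{{{
def ovs_policing_commands(iface, rate_mbps):
    """Build the two ovs-vsctl commands that rate-limit ingress on iface.

    rate_mbps is the target rate in Mbit/s; OVS wants kbit/s for the rate
    and kbit for the burst.  Burst = ~10% of the rate is an assumption
    here; tune it for your traffic.
    """
    rate_kbps = int(rate_mbps * 1000)
    burst_kb = max(1, rate_kbps // 10)
    return [
        f"ovs-vsctl set Interface {iface} ingress_policing_rate={rate_kbps}",
        f"ovs-vsctl set Interface {iface} ingress_policing_burst={burst_kb}",
    ]

# The 1000/100 pair used above corresponds to a 1 Mbit/s target:
print(ovs_policing_commands("eth2", 1.0))
}}}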
[[BR]]
To set it back to no rate control, use:
{{{
ovs-vsctl set Interface eth2 ingress_policing_rate=0
}}}
It is a little tricky to configure delay/loss on an OpenVZ virtual machine. [[BR]]
Step 1: find your qdisc family number by executing "sudo /sbin/tc qdisc"; a sample output looks like the following:
{{{
[shufeng@center ~]$ sudo /sbin/tc qdisc
qdisc htb 270: dev mv6.47 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 260: dev mv6.47 parent 270:1 limit 1000
qdisc htb 150: dev mv6.41 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 140: dev mv6.41 parent 150:1 limit 1000
qdisc htb 190: dev mv6.43 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 180: dev mv6.43 parent 190:1 limit 1000
qdisc htb 230: dev mv6.45 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 220: dev mv6.45 parent 230:1 limit 1000
}}}
If the Ethernet interface you want to change is mv6.43, you can find its qdiscs in the following lines:
{{{
qdisc htb 190: dev mv6.43 root refcnt 2 r2q 10 default 1 direct_packets_stat 0
qdisc netem 180: dev mv6.43 parent 190:1 limit 1000
}}}
Step 2: change the delay/loss by executing the following (the first command sets 100ms delay and 5% loss; the second, with no delay/loss arguments, removes them again):
{{{
sudo /sbin/tc -s qdisc change dev mv6.43 parent 190:1 handle 180: netem limit 1000 delay 100ms loss 5%
sudo /sbin/tc -s qdisc change dev mv6.43 parent 190:1 handle 180: netem limit 1000
}}}

== OpenVSwitch Commands ==
{{{
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth2
ovs-vsctl set-fail-mode br0 standalone   (when it loses the connection to the controller, the switch acts as a normal switch)
ovs-vsctl set-fail-mode br0 secure       (when it loses the connection to the controller, the switch won't forward any packets)
ovs-vsctl set bridge br0 datapath_type=netdev   (run the vswitch in userspace, without kernel support)
}}}

== git Commands ==
{{{
git clone [url]
git add -A .
(add all files and folders in the current directory)
git commit -m "commit message"
git status   (check the status of your local copy)
git fetch origin
git merge origin/master
git push origin master   (upload your local commits to the master (remote) repository)
}}}

== OpenFlow Trema ==
To restart the Trema controller as well as the attached switch:
{{{
1. kill the trema process: kill $(pidof ruby)
2. delete the lock file: rm /opt/trema...../tmp/pid/controller.pid
   (controller is the name of your controller; in the Load Balancing case it is Load_Balancer)
3. unlink the switches from the controller: ovs-vsctl del-controller br0
4. start the controller: /opt/trema....../trema run controller.rb
   (controller.rb is your controller file; in the Load Balancing case it is load_balancer.rb)
5. link the switches to the controller: ovs-vsctl set-controller br0 tcp:127.0.0.1
   (you might need to wait a couple of seconds for the switch to connect to the controller;
   to verify, print something in the "switch_ready" handler so that you see output when the switch connects)
}}}
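The five restart steps above can be generated as a command list, which is handy when scripting the restart for several switches. This is a sketch under stated assumptions: the function name is ours, trema_dir stands in for the elided /opt/trema... install path, and the pid-file name may differ in case from the .rb file (e.g. Load_Balancer.pid vs load_balancer.rb).

{{{
def trema_restart_commands(trema_dir, controller, pid_name=None, bridge="br0"):
    """Return the restart sequence above as a list of shell command strings.

    pid_name defaults to the controller name; override it when the pid
    file uses different capitalization than the .rb file.
    """
    pid = pid_name or controller
    return [
        "kill $(pidof ruby)",                           # 1. kill the trema process
        f"rm {trema_dir}/tmp/pid/{pid}.pid",            # 2. delete the lock file
        f"ovs-vsctl del-controller {bridge}",           # 3. unlink switch from controller
        f"{trema_dir}/trema run {controller}.rb",       # 4. start the controller
        f"ovs-vsctl set-controller {bridge} tcp:127.0.0.1",  # 5. relink the switch
    ]

for cmd in trema_restart_commands("/opt/trema", "load_balancer", pid_name="Load_Balancer"):
    print(cmd)
}}}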