We now want to configure some latency between the nodes in our experiment and generate some baseline performance numbers.

On gateway0 and gateway1, run:

{{{
$ sh /tmp/setup_netem.sh <iface> 20 0
}}}

Substitute the interface names associated with 192.168.1.1 and 192.168.1.2. After running this script, 20ms of latency will be added to outgoing packets from each interface on the gateway nodes. If you ping between client0 and client1, you should now see a 40ms RTT.
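
The contents of setup_netem.sh are not listed in this tutorial; as a rough sketch (assuming the script takes an interface name, a delay in milliseconds, and a loss percentage), it likely wraps the Linux tc netem queueing discipline, roughly as follows. The actual script installed on the nodes may differ:

{{{
#!/bin/sh
# Hypothetical sketch of setup_netem.sh -- not the actual script.
# Usage: setup_netem.sh <iface> <delay_ms> <loss_pct>
IFACE=$1
DELAY=$2
LOSS=$3

# Replace any existing root qdisc with a netem qdisc that delays
# (and optionally drops) packets leaving the interface.
tc qdisc del dev "$IFACE" root 2>/dev/null
if [ "$LOSS" = "0" ]; then
    tc qdisc add dev "$IFACE" root netem delay "${DELAY}ms"
else
    tc qdisc add dev "$IFACE" root netem delay "${DELAY}ms" loss "${LOSS}%"
fi
}}}

Because tc modifies kernel queueing state, a script like this must be run as root on each node.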

Now let's see how a network benchmark performs. We will use iperf, a common benchmarking tool, for this purpose. client1 will be our iperf server and client0 will be our traffic source, sending data from client0 to client1.

On client1, run:

{{{
$ iperf -s
}}}

On client0, run:

{{{
$ iperf -c 10.10.2.1 -t 60 -i 2
}}}

You should see a transfer rate close to 100Mb/s, which is the default link speed of our experiment nodes.

Now, let's make the network situation a little more complicated. We will add some edge latency and a small amount of loss on the edge 0 side.

On client0, run:

{{{
$ sh /tmp/setup_netem.sh <iface> 5 .01
}}}

This will add 5ms of edge latency and 0.01% loss to packets leaving client0. If you run the same iperf test from above once more, you should see a significant impact on the achievable transfer performance.

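Why does such a tiny loss rate hurt so much? TCP's congestion control backs off on every detected loss, and the well-known Mathis et al. estimate bounds steady-state TCP throughput at roughly MSS / (RTT * sqrt(p)). A quick back-of-the-envelope check with awk (the 1460-byte MSS, the 45ms RTT, and p = 0.0001 used here are assumed values for this topology, not measurements):

{{{
# Mathis et al. estimate: throughput <= MSS / (RTT * sqrt(p)).
# Assumed values: 1460-byte MSS; 45ms RTT (40ms of gateway delay plus
# the 5ms edge delay in client0's outgoing direction); p = 0.0001.
awk 'BEGIN {
    mss = 1460 * 8     # MSS in bits
    rtt = 0.045        # round-trip time in seconds
    p   = 0.0001       # loss probability (0.01%)
    printf "~%.1f Mbit/s\n", mss / (rtt * sqrt(p)) / 1e6
}'
}}}

This works out to roughly 26 Mbit/s, well under the 100Mb/s link rate, which is consistent with the degraded iperf numbers you should observe.
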
It's time to see if Phoebus can help in this situation. Let us start the Phoebus service on the two gateway nodes.

On each node, run (as root):

{{{
$ /etc/init.d/phoebus start
$ tail -f /var/log/phoebus
}}}

This will start the service and allow you to view the Phoebus log on each gateway node.

Now we need to make the client application (in this case iperf) use Phoebus during transfer tests. To do so, we will make use of a transparent wrapper mechanism known as a "shim" library. This technique uses library interposition via the Linux LD_PRELOAD mechanism to intercept socket() calls, allowing them to establish connections through a series of Phoebus Gateways using a protocol known as XSP. Let us give this a try and re-run our transfer tests, this time over Phoebus.

On client0, run:

{{{
export LD_PRELOAD=/usr/lib64/libxsp_phoebuswrapper.so
export XSP_PATH=10.10.1.2/5006,10.10.2.2/5006
}}}

After running the above commands, any network application started from that shell will make connections through the Phoebus Gateways (PGs) specified in the XSP_PATH environment variable. Try this with the same iperf test from above. You should now see much improved transfer performance, even approaching that of the loss-free case from earlier.
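
Note that the interposition stays in effect for every program started from that shell. To return to direct (non-Phoebus) transfers, for example to run a side-by-side comparison, clear the two variables:

{{{
unset LD_PRELOAD
unset XSP_PATH
}}}

Subsequent commands in that shell will then use ordinary sockets again.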

You may experiment with the scripts and try different latency and loss cases. For repeated tests, or to do parameter sweeps, these scripts can be further integrated and looped to run a number of long-lived tests.