Changes between Version 17 and Version 18 of PhoebusExperimentGEMINI


Timestamp: 03/20/13 18:23:55
Author: kissel@cis.udel.edu

For our experiment, we'd also like to measure the latency across all of our nodes.  We can use the active measurement configuration page to do so.

Click on the button labeled "Open PS Config".

[[Image(desktop_psconfig.png)]]

You may add new ping tests under "Schedule BLiPP Test".

[[Image(psconfig_ping.png)]]

== 8. Run experiments ==

We now want to configure some latency between the nodes in our experiment and generate some baseline performance numbers.

On gateway0 and gateway1, run:

{{{
$ sh /tmp/setup_netem.sh <iface> 20 0
}}}

Substitute the interface names associated with 192.168.1.1 and 192.168.1.2.  After running this script, 20ms of latency will be added to outgoing packets from each interface on the gateway nodes.  If you ping between client0 and client1, you should now see a 40ms RTT.
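
As a quick check, ping client1 from client0.  This assumes client1 answers at 10.10.2.1, the address used in the iperf tests below; each reply should report a time of roughly 40ms.

{{{
$ ping -c 5 10.10.2.1
}}}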

Now let's see how a network benchmark performs.  We will use IPerf, a common benchmarking tool, for this purpose.  client1 will be our iperf server and client0 will be our traffic source, sending data from client0 to client1.

On client1, run:

{{{
$ iperf -s
}}}

On client0, run:

{{{
$ iperf -c 10.10.2.1 -t 60 -i 2
}}}

You should see a transfer rate of close to 100Mb/s, which is the default link speed of our experiment nodes.

Now, let's make the network situation a little more complicated.  We will add some edge latency and a small amount of loss on the edge 0 side.

On client0, run:

{{{
$ sh /tmp/setup_netem.sh <iface> 5 .01
}}}

This will add 5ms of edge latency and .01% loss to packets leaving client0.  If you run the same iperf test from above once more, you should see a significant impact on the achievable transfer performance.
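
The contents of setup_netem.sh are not shown here, but it is presumably a wrapper around the tc netem queuing discipline.  As a rough, illustrative sketch only (assuming the interface is eth1 and the script simply replaces the root qdisc), the 5ms/.01% case corresponds to something like:

{{{
# illustrative sketch; the actual script may manage qdiscs differently
$ sudo tc qdisc del dev eth1 root
$ sudo tc qdisc add dev eth1 root netem delay 5ms loss 0.01%
}}}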

It's time to see if Phoebus can help in this situation.  Let us start the Phoebus service on the two gateway nodes.

On each node, run (as root):

{{{
$ /etc/init.d/phoebus start
$ tail -f /var/log/phoebus
}}}

This will start the service and allow you to view the Phoebus log on each gateway node.
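
To confirm that each gateway is accepting connections, you can also check for a listening socket on the XSP port.  This assumes the gateways listen on port 5006, the port used in the XSP_PATH setting below, and that netstat is available on the nodes:

{{{
$ netstat -tln | grep 5006
}}}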

Now we need to make the client application (in this case iperf) use Phoebus during transfer tests.  To do so, we will make use of a transparent wrapper mechanism known as a "shim" library.  This technique uses library interposition via the Linux LD_PRELOAD mechanism to intercept socket() calls, allowing applications to establish connections to a series of Phoebus Gateways using a protocol known as XSP.  Let us give this a try and re-run our transfer tests, this time over Phoebus.

On client0, run:

{{{
export LD_PRELOAD=/usr/lib64/libxsp_phoebuswrapper.so
export XSP_PATH=10.10.1.2/5006,10.10.2.2/5006
}}}

After running the above commands, any network application started from that shell will make connections to the PGs specified in the XSP_PATH environment variable.  Try this with the same iperf test from above.  You should now see much improved transfer performance, even approaching that of the non-loss case from earlier.
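
Equivalently, the two variables can be set for a single command so the rest of your shell environment is left untouched:

{{{
$ LD_PRELOAD=/usr/lib64/libxsp_phoebuswrapper.so \
  XSP_PATH=10.10.1.2/5006,10.10.2.2/5006 \
  iperf -c 10.10.2.1 -t 60 -i 2
}}}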

You may experiment with the scripts and try different latency and loss cases.  For repeated tests, or to do parameter sweeps, these commands can be wrapped in a loop to run a series of long-lived tests.
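
As an illustrative sketch (the loss values and output file names here are just examples), a simple sweep over edge loss rates run from client0 might look like:

{{{
# substitute <iface> as before; each run lasts 60 seconds
for loss in 0 .01 .05 .1; do
    sh /tmp/setup_netem.sh <iface> 5 $loss
    iperf -c 10.10.2.1 -t 60 -i 2 > iperf_loss_${loss}.log
done
}}}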