
Version 7 (modified by sedwards@bbn.com, 10 years ago) (diff)


Understanding the AM API


4 Wait for experiment setup

  1. Please use the command:
    readyToLogin.py --no-keys -a AM_NICKNAME SLICENAME
    

where (as before) AM_NICKNAME and SLICENAME are your aggregate manager nickname and your slice name (both found on your worksheet).

  1. If it reports that the sliver is not yet ready (for example, it might say that the status is "changing"), then please wait a minute or two and try again. Once everything is complete, readyToLogin.py will give output that should look something like this:

...
server's geni_status is: ready (am_status:ready) 
User example logs in to server using:
	ssh -p 32768 example@pc1.utah.geniracks.net
User example logs in to client using:
	ssh -p 32769 example@pc1.utah.geniracks.net
...
If you didn't previously complete the Flack tutorial (or are not running an ssh agent), then your ssh client might not be set up to log in with the above commands. Try re-running readyToLogin.py without the --no-keys option, and it will give you one or more ssh commands to choose from (these should work, although they might require your key passphrase).

5 Log in to client node

  1. Copy and paste the ssh command lines directly into your terminal to log in to either of your hosts. While you're welcome to inspect either one, for the purpose of this experiment, the client host is the one running the iperf tests and collecting all the logs, so please use the client ssh command now.

You may get a warning from ssh complaining that the authenticity of the host cannot be established. This is just because your ssh client has never accessed this VM before, and so does not yet recognise its key. Say "yes", you do want to continue connecting, and you should see a shell prompt from the remote end:

[example@client ~]$

The install and execute services requested in our rspec have already started, and measurements are now being collected.

You can verify that things are working by entering the hostname of the server node in your browser. It should bring up a webpage of statistics from your experiment.

(In addition, you can inspect the /local directory on each host and look for the appropriate processes with a command like ps ax. If you do not see the proper files and processes, please double-check the rspec you used in the previous step.)
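The checks in the note above can be sketched as shell commands. This is only a sketch: the /local directory and the iperf process name are taken from this tutorial's setup, and the fallback messages are added here so the commands report cleanly when nothing is found. Run them on the client or server node, not on your own machine.

```shell
# Run these on the client or server node. The /local directory and the
# iperf process come from this tutorial's rspec; on any other machine
# the fallbacks below simply report that nothing was found.
ls /local 2>/dev/null || echo "no /local directory here"
ps ax | grep '[i]perf' || echo "no iperf process found"
# The [i] in the grep pattern keeps the grep command itself out of the list.
```

The `[i]perf` trick is handy whenever you grep the output of ps: the pattern matches the literal string "iperf", but the grep process's own command line (which contains "[i]perf") does not match it.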

Figure 5-1 Enter the hostname of the server node in your browser to see statistics
  1. The client machine is saving all the test results in the /tmp/iperf-logs directory. Files with timestamps in their names will gradually appear there (there are 100 tests overall, and it may take 20 minutes for all of them to complete if you want to wait for them).
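One way to watch the logs accumulate is to count them (a sketch; /tmp/iperf-logs is the directory named above, which exists only on the client node, so the command falls back to an empty listing elsewhere):

```shell
# Count how many iperf log files have appeared so far (out of ~100).
# On any machine other than the client node the directory will not
# exist; redirecting stderr turns that case into a count of 0.
ls /tmp/iperf-logs 2>/dev/null | wc -l
```

Re-run the command every minute or so (or wrap it in `watch`) to see the count grow as tests finish.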

Each log file corresponds to one test with some number of simultaneous TCP connections over the VLAN link you requested between the two hosts. Later tests gradually include more concurrent connections, so the throughput of each individual connection will decrease, but the aggregate throughput (the [SUM] line at the end of each file) should remain approximately constant.
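To pull the aggregate number out of a log, you can grep for the [SUM] line. The snippet below is a sketch that fabricates a small sample log so it can run anywhere; the report format shown is assumed from standard iperf output, and on the client node you would point the grep at the real files in /tmp/iperf-logs instead.

```shell
# Create a small fake iperf log to demonstrate the extraction
# (the format is assumed from standard iperf reports; real logs
# live in /tmp/iperf-logs on the client node).
cat > /tmp/sample-iperf.log <<'EOF'
[  3]  0.0-60.0 sec   359 MBytes  50.2 Mbits/sec
[  4]  0.0-60.0 sec   351 MBytes  49.1 Mbits/sec
[SUM]  0.0-60.0 sec   710 MBytes  99.3 Mbits/sec
EOF

# Print just the aggregate throughput (the last two fields of the SUM line):
grep '\[SUM\]' /tmp/sample-iperf.log | awk '{print $(NF-1), $NF}'
# prints: 99.3 Mbits/sec
```

On the client node, `grep '\[SUM\]' /tmp/iperf-logs/*` would show the aggregate line from every completed test, making it easy to eyeball whether the totals stay roughly constant as the connection count grows.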

6 Analyze Experiment

For a real experiment, this step would of course be the most important: collecting, analyzing, and archiving the results would be critical. For now, explore as much as you like to satisfy your curiosity, then continue.


Introduction

Next: Finish