
Execute Experiment: Log in to nodes and monitor the experiment execution

Introduction: Getting Started with GENI and the GENI Portal

Design/Setup
Execute
Finish

6.1 Wait for experiment setup

Although the Omni createsliver client request has finished, the aggregate manager may still be busy in the background booting the requested nodes and bringing up network connections. It is possible to use Omni manually to follow the AM's progress (with the "omni sliverstatus" command, which in turn uses the SliverStatus AM call), but instead we will demonstrate a higher-level tool which illustrates how custom scripts can make use of Omni to simplify common sequences of tasks into a single convenient command.
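
For reference, the manual check looks something like this (a sketch: the omni.py location shown is inferred from the readyToLogin.py path used below, and may differ on your machine):

    /usr/local/bin/gcf/src/omni.py -a AM_NICKNAME sliverstatus SLICENAME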

The readyToLogin.py script will contact an AM, determine whether all requested resources at that AM are ready, and if so, display a summary of the hosts including their addresses and the proper SSH keys to use to access them.

  1. Please use the command:

    /usr/local/bin/gcf/examples/readyToLogin.py -a AM_NICKNAME SLICENAME

    where (as before) AM_NICKNAME and SLICENAME are your aggregate manager nickname and your slice name (both found on your worksheet).

  2. If it reports that the sliver is not yet ready (for example, it might say that the status is "changing"), then please wait a minute or two and try again. Once everything is complete, readyToLogin.py will give output that should look something like this:

...
server's geni_status is: ready (am_status:ready) 
User example logs in to server using:
	ssh -p 32768 -i /home/geni/.ssh/geni_key_portal example@pc1.utah.geniracks.net
User example logs in to client using:
	ssh -p 32769 -i /home/geni/.ssh/geni_key_portal example@pc1.utah.geniracks.net
...
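
If you would rather not re-run the command by hand, a small shell loop can poll until the sliver is ready. This is only a sketch; it assumes the "geni_status is: ready" line appears in the output exactly as above:

    while ! /usr/local/bin/gcf/examples/readyToLogin.py -a AM_NICKNAME SLICENAME 2>&1 \
            | grep -q "geni_status is: ready"; do
        echo "sliver not ready yet; waiting 60 seconds..."
        sleep 60
    done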

6.2 Log in to nodes

  1. Copy and paste the ssh command lines directly into your terminal to log in to either of your hosts. While you're welcome to inspect either one, for the purpose of this experiment, the client host is the one running the iperf tests and collecting all the logs, so please use the client ssh command now.

You may get a warning from ssh complaining that the authenticity of the host cannot be established. This is just because your ssh client has never accessed this VM before, and so does not yet recognise its key. Say "yes", you do want to continue connecting, and you should see a shell prompt from the remote end:

[example@client ~]$
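
If you plan to script these logins later, you can pre-accept a host key instead of answering the prompt interactively. For example, using the sample host and port shown above (substitute your own):

    ssh-keyscan -p 32768 pc1.utah.geniracks.net >> ~/.ssh/known_hosts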

The install and execute services requested in our rspec have already started, and measurements are now being collected. (You can verify that things are working by inspecting the /local directory on each host, and looking for the appropriate processes with a command like ps ax. If you do not see the proper files and processes, please double-check the rspec you used in the previous step.)
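
For example, on the client you might run the following (the exact contents of /local depend on the install service in your rspec, so treat this as illustrative):

    [example@client ~]$ ls /local
    [example@client ~]$ ps ax | grep iperf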

  2. The client machine is saving all the test results in the /tmp/iperf-logs directory. Files with timestamps in the names will gradually appear there (there are 100 tests overall, and it may take 20 minutes for all of them to complete if you want to wait for them).

Each log file corresponds to one test with some number of simultaneous TCP connections over the VLAN link you requested between the two hosts. Later tests gradually include more concurrent connections, so the throughput of each individual connection will decrease, but the aggregate throughput (the [SUM] line at the end of each file) should remain approximately consistent.
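
To watch the results accumulate, commands along these lines are handy on the client (illustrative; use whatever file names actually appear in /tmp/iperf-logs):

    [example@client ~]$ ls -lt /tmp/iperf-logs | head
    [example@client ~]$ grep "\[SUM\]" /tmp/iperf-logs/*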

For a real experiment, of course, this step would be the most important, and the collection, analysis, and archival of the results would be critical; but for now, play around as necessary to satisfy your curiosity and then continue.

Next: Finish